Python Requests for 2025: Tools, Tips & Proxy Secrets

Nathan Reynolds

Last edited on May 4, 2025

Coding Tutorials

Diving into Python's Requests Library: A 2025 Perspective

If you're working with Python and need to interact with the web, chances are you'll bump into the requests library. It's pretty much the go-to tool for sending HTTP requests programmatically. Think of it like using your web browser, but supercharged for automation and efficiency, letting you fetch web pages, talk to APIs, and more, all from your Python script.

This guide will walk you through the essentials of requests. We'll cover fetching web pages and JSON data, explain HTTP status codes and methods, explore query parameters, and, crucially, show how to route requests through proxies to keep your interactions smooth and anonymous when needed.

So, What Exactly is the Python Requests Library?

At its core, Requests is an elegant and simple HTTP library for Python, built for human beings. Its main job is to let your Python code send HTTP requests to servers and handle the responses that come back.

You can think of it as performing the same actions a web browser does when you visit a website. Browsers are fantastic HTTP clients, designed for visually navigating the web. However, they aren't built for automated data fetching or interacting with web services programmatically.

That's where libraries like requests step in. They handle the underlying HTTP communication without the overhead of rendering graphical interfaces, making them much faster and more resource-efficient for tasks like web scraping or API integration. Plus, they offer features specifically designed for automation.

Getting Requests Installed

Before you can use requests, you need to add it to your Python environment. Open your terminal or command prompt and run this command:

pip install requests

Once installed, you need to import it into your Python script. Just add this line at the beginning of your .py file:

import requests

With that done, you're ready to start making web requests!

Fetching a Web Page's Content

Let's start with a basic example: downloading the HTML content of a web page. Create a Python file (e.g., fetch_example.py) and add the following code:

import requests
# Let's try fetching content from httpbin, a useful testing site
url = "https://httpbin.org/html"
response = requests.get(url)
# Check if the request was successful (status code 200)
if response.status_code == 200:
    print(response.text)
else:
    print(f"Request failed with status code: {response.status_code}")

Run this script using python fetch_example.py. You should see the HTML source code of the httpbin HTML page printed to your console:

<!DOCTYPE html>
<html>
  <head>
  </head>
  <body>
    <h1>Herman Melville - Moby-Dick</h1>
    ...
  </body>
</html>

This demonstrates the fundamental use case: sending a GET request to a URL and receiving the server's response. The response.text attribute holds the content, which in this case is HTML.

However, requests itself doesn't parse HTML. If you need to extract specific data from the HTML (like finding all the links or extracting text from certain tags), you'll need an additional library. A popular choice for this task is Beautiful Soup. While we won't detail HTML parsing here, combining requests with a parser is key for web scraping.
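
To give a flavor of that combination, here's a minimal sketch that pulls the <h1> heading out of the httpbin page above (it assumes you've installed Beautiful Soup with pip install beautifulsoup4):

import requests
from bs4 import BeautifulSoup  # pip install beautifulsoup4

response = requests.get("https://httpbin.org/html")

if response.status_code == 200:
    # Parse the raw HTML into a searchable tree
    soup = BeautifulSoup(response.text, "html.parser")
    # Grab the first <h1> tag, if the page has one
    heading = soup.find("h1")
    print(heading.get_text() if heading else "No <h1> found")
else:
    print(f"Request failed with status code: {response.status_code}")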

Working with JSON APIs using Requests

Web pages aren't the only thing you can fetch. requests is also excellent for interacting with web APIs, which often communicate using JSON (JavaScript Object Notation).

JSON is a lightweight data format that's easy for humans to read and machines to parse. Here's a snippet of what JSON data might look like, perhaps representing user information:

{
  "id": 42,
  "username": "dev_guru",
  "email": "guru@example.com",
  "isActive": true,
  "roles": [
    "admin",
    "editor"
  ]
}

While HTML is designed for presentation in browsers, JSON is structured for data exchange between systems, making it ideal for APIs (Application Programming Interfaces).

Let's use a test API, like the one provided by DummyJSON, to fetch some sample data. This API offers fake data sets (like users, posts, products) for testing.

import requests

# Fetching sample user data
response = requests.get("https://dummyjson.com/users")
# Requests has a built-in JSON decoder
if response.status_code == 200:
    data = response.json()
    # 'data' is now a Python dictionary/list
    # print(data) # Uncomment to see the full structure
else:
    print(f"API request failed: {response.status_code}")

Notice the response.json() method. It automatically parses the JSON response into native Python data structures (usually dictionaries and lists), making it incredibly easy to work with.

Let's say we want to print just the username and email of each user from the response:

import requests

response = requests.get("https://dummyjson.com/users")

if response.status_code == 200:
    data = response.json()
    users = data.get("users", [])  # Use .get for safer access
    print("User List:")
    for user in users:
        username = user.get("username", "N/A")
        email = user.get("email", "N/A")
        print(f"- Username: {username}, Email: {email}")
else:
    print(f"API request failed: {response.status_code}")

Running this would output a neat list like:

User List:
- Username: atuny0, Email: atuny0@sohu.com
- Username: hbingley1, Email: hbingley1@plala.or.jp
- Username: rshawe2, Email: rshawe2@51.la
... and so on

Leveraging Query Parameters

Often, when interacting with APIs, you'll want to refine your request. Maybe you only need a subset of data, or you want to search for specific items. This is often done using query parameters, which are appended to the URL after a question mark (?), like https://example.com/search?query=python&limit=10.

Manually building these URLs by concatenating strings can be messy and error-prone. Thankfully, requests provides a clean way to handle this using the params argument.

Let's modify our DummyJSON example to fetch only 5 users and skip the first 10:

import requests

api_url = "https://dummyjson.com/users"
query_params = {
    'limit': 5,
    'skip': 10,
    'select': 'firstName,lastName,email'  # Select specific fields
}

response = requests.get(api_url, params=query_params)

# The actual URL requested would be something like:
# https://dummyjson.com/users?limit=5&skip=10&select=firstName%2ClastName%2Cemail
# Requests handles the encoding (like %2C for the comma)

if response.status_code == 200:
    data = response.json()
    print(data)
else:
    print(f"Request failed: {response.status_code}")
    print(f"URL attempted: {response.url}")  # See the final URL

Using the params dictionary keeps your code cleaner and lets requests handle the URL encoding correctly.

Understanding HTTP Status Codes

Every HTTP response includes a status code, a three-digit number indicating the outcome of the request. You've already seen 200 OK, which means success.

It's crucial to check the status code before attempting to process the response data. The response.status_code attribute holds this value.

import requests

# Successful request
response_ok = requests.get("https://dummyjson.com/products/1")
print(f"Product 1 Status: {response_ok.status_code}") # Expected: 200

# Request for a non-existent resource
response_not_found = requests.get("https://dummyjson.com/products/9999")
print(f"Product 9999 Status: {response_not_found.status_code}") # Expected: 404

Common status codes include:

  • 2xx (e.g., 200 OK, 201 Created): Success

  • 3xx (e.g., 301 Moved Permanently, 302 Found): Redirection

  • 4xx (e.g., 404 Not Found, 403 Forbidden, 401 Unauthorized): Client Errors (you did something wrong)

  • 5xx (e.g., 500 Internal Server Error, 503 Service Unavailable): Server Errors (the server had a problem)

Always check the status code to ensure your request succeeded as expected. You can find a comprehensive list on the MDN Web Docs.
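
As a convenient alternative to comparing status_code by hand everywhere, requests can raise an exception for any 4xx or 5xx response. Here's a short sketch using response.raise_for_status():

import requests

try:
    response = requests.get("https://dummyjson.com/products/9999")
    # Raises requests.exceptions.HTTPError for 4xx and 5xx responses
    response.raise_for_status()
    print(response.json())
except requests.exceptions.HTTPError as e:
    print(f"HTTP error: {e}")  # e.g. "404 Client Error: Not Found for url: ..."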

Exploring Different HTTP Methods

So far, we've only used the GET method, which is used to retrieve data. However, the HTTP protocol defines several other methods for different actions.

requests provides convenient functions for these common methods:

  • requests.get(url, ...): Retrieve data.

  • requests.post(url, data=..., json=...): Submit data to be processed (e.g., create a new resource).

  • requests.put(url, data=..., json=...): Update an existing resource completely.

  • requests.patch(url, data=..., json=...): Partially update an existing resource.

  • requests.delete(url, ...): Delete a resource.

Let's try adding a new (fake) product to the DummyJSON API using POST. We'll send the product data as JSON.

import requests

new_gadget = {
    'title': 'Awesome New Gadget',
    'description': 'The latest must-have tech item!',
    'price': 199.99,
    'category': 'electronics'
    # DummyJSON might auto-assign ID, brand etc. or ignore extras
}

response = requests.post(
    "https://dummyjson.com/products/add",
    json=new_gadget  # Use 'json' param to send data as JSON
)

print(f"POST Status: {response.status_code}")  # Should be 200 or 201

if response.status_code in [200, 201]:
    print("Response Data:")
    print(response.json())  # See what the API returns (often the created/updated item)
else:
    print("POST request failed.")

Similarly, you could delete a resource:

import requests

# Let's pretend to delete product with ID 5
product_id_to_delete = 5

response = requests.delete(f"https://dummyjson.com/products/{product_id_to_delete}")

print(f"DELETE Status: {response.status_code}") # Should be 200 if successful

if response.status_code == 200:
    print("Response Data:")
    print(response.json()) # API might return info about the deleted item
else:
    print("DELETE request failed.")

Using the correct HTTP method is fundamental to interacting correctly with web servers and APIs according to REST principles.

Using Proxies with Requests

Sometimes, you need your requests to originate from a different IP address than your own. This is common in web scraping to avoid rate limits or blocks, or for accessing geo-restricted content. This is where proxy servers come in.

A proxy server acts as an intermediary: your request goes to the proxy, and the proxy forwards it to the target server. The target server sees the proxy's IP address, not yours.

Evomi offers various types of ethically sourced proxies, including high-performance Datacenter proxies (starting from just $0.30/GB), versatile Residential proxies ($0.49/GB) using real home IPs, Mobile proxies ($2.20/GB) using cellular networks, and stable Static ISP proxies (from $1/IP). Being based in Switzerland, we emphasize quality and reliability.

Configuring proxies in requests is straightforward. You need to create a dictionary specifying the proxy URL for HTTP and HTTPS traffic.

Here's a template showing how you might structure the proxy information using Evomi's endpoints (replace placeholders with your actual credentials and chosen endpoint):

import requests

# Replace with your actual Evomi credentials and endpoint
# Example using Residential Proxies (HTTP port)
proxy_user = "YOUR_USERNAME"
proxy_pass = "YOUR_PASSWORD"
proxy_host = "rp.evomi.com"  # Evomi residential proxy endpoint
proxy_port = "1000"          # HTTP port for residential

proxy_url_http = f"http://{proxy_user}:{proxy_pass}@{proxy_host}:{proxy_port}"
# Often the same for HTTP-based proxies
proxy_url_https = f"http://{proxy_user}:{proxy_pass}@{proxy_host}:{proxy_port}"

proxies = {
   "http": proxy_url_http,
   "https": proxy_url_https,
}

target_url = "https://geo.evomi.com/"  # Let's check the IP seen by the server

try:
    # Make the request through the proxy
    # Add timeout
    response = requests.get(target_url, proxies=proxies, timeout=10)
    # Raises exception for bad status codes (4xx or 5xx)
    response.raise_for_status()
    print("Request successful via proxy!")
    print("Response Content (IP Info):")
    # Should show the proxy's IP details
    print(response.text)
except requests.exceptions.RequestException as e:
    print(f"An error occurred: {e}")

This setup routes both HTTP and HTTPS requests through your specified Evomi proxy. Remember to consult the Evomi dashboard for your specific credentials and endpoint details (e.g., dc.evomi.com for Datacenter, mp.evomi.com for Mobile, or specific IPs for Static ISP proxies) and the correct ports (e.g., 1001 for HTTPS residential, 2002 for SOCKS5 datacenter, etc.).
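
If you'd rather use a SOCKS5 port, requests supports socks5:// proxy URLs once the optional SOCKS dependency is installed (pip install requests[socks]). A sketch reusing the placeholder credentials from above with the datacenter endpoint and SOCKS5 port mentioned earlier:

import requests

# Assumes: pip install requests[socks]
# Placeholder credentials; dc.evomi.com:2002 is the SOCKS5 datacenter port noted above
socks_url = "socks5://YOUR_USERNAME:YOUR_PASSWORD@dc.evomi.com:2002"
# Tip: socks5h:// resolves DNS through the proxy instead of locally

proxies = {
    "http": socks_url,
    "https": socks_url,
}

try:
    response = requests.get("https://geo.evomi.com/", proxies=proxies, timeout=10)
    response.raise_for_status()
    print(response.text)  # Should show the proxy's IP details
except requests.exceptions.RequestException as e:
    print(f"An error occurred: {e}")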

Want to test it out? Evomi offers a completely free trial for our Residential, Mobile, and Datacenter proxies, letting you experience the performance and ease of use firsthand.

Handling Authentication

Some resources require authentication before you can access them. requests supports various authentication schemes.

The most basic is HTTP Basic Authentication. You can provide your username and password directly using the auth parameter, which takes a tuple.

import requests

# Example using httpbin's basic-auth endpoint
auth_url = 'https://httpbin.org/basic-auth/myuser/mypass'
username = 'myuser'
password = 'mypass'

try:
    response = requests.get(auth_url, auth=(username, password))
    response.raise_for_status()  # Check for errors
    print("Authentication successful!")
    print("Response JSON:")
    print(response.json())
except requests.exceptions.HTTPError as e:
    if e.response.status_code == 401:
        print("Authentication failed: Incorrect credentials.")
    else:
        print(f"An HTTP error occurred: {e}")
except requests.exceptions.RequestException as e:
    print(f"An error occurred: {e}")

This should return a JSON response confirming successful authentication:

{
  "authenticated": true,
  "user": "myuser"
}

For more complex schemes like OAuth, requests often integrates with helper libraries (like requests-oauthlib) or you might handle it by manually adding authorization headers.
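
For token-based APIs, the common pattern is to set the Authorization header yourself. A minimal sketch with a placeholder token:

import requests

# Placeholder token; in practice you'd obtain this from your OAuth flow
access_token = "YOUR_ACCESS_TOKEN"
headers = {"Authorization": f"Bearer {access_token}"}

# httpbin echoes back the headers it received, handy for verification
response = requests.get("https://httpbin.org/headers", headers=headers)
print(response.json())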

Wrapping Up

We've journeyed through the Python requests library, covering how to fetch HTML and JSON, understand HTTP methods and status codes, use query parameters, and integrate proxies for enhanced capabilities. requests truly simplifies the often complex world of HTTP interactions, making it an indispensable tool for Python developers.

Ready to put this knowledge into practice? Consider combining requests with a parsing library like Beautiful Soup for a web scraping project. You can find inspiration and guidance in our comprehensive Python web scraping guide.

Alternatively, explore the vast world of APIs! There are countless free public APIs available online covering diverse fields. Pick one that interests you and use requests to fetch and play with its data.


Author

Nathan Reynolds

Web Scraping & Automation Specialist

About Author

Nathan specializes in web scraping techniques, automation tools, and data-driven decision-making. He helps businesses extract valuable insights from the web using ethical and efficient scraping methods powered by advanced proxies. His expertise covers overcoming anti-bot mechanisms, optimizing proxy rotation, and ensuring compliance with data privacy regulations.
