Build a Best Buy Price Tracker with Python & Proxies

David Foster

Last edited on May 15, 2025

Building Your Own Best Buy Price Tracker with Python

Best Buy. It's a go-to spot for many of us seeking the latest electronics, components, or gadgets. Whether you're upgrading your workstation or diving deep into a tech hobby, chances are you've browsed their virtual or physical aisles. But let's be honest, tech can get pricey, leading many savvy shoppers to hunt relentlessly for deals and price drops.

Constantly hitting refresh on a product page? That gets old fast. This need has spurred the creation of various Best Buy price tracking tools. Some folks even take matters into their own hands and build custom trackers – and it's perhaps less daunting than you might think. Ready to see how you can build one yourself?

So, What Exactly is a Best Buy Price Tracker?

Think of a Best Buy price tracker as an automated scout that monitors the online store for product prices. It diligently records this information and compares it against historical data it has gathered over time. The magic happens when it detects a discount, a significant price reduction, or a sale – often triggering an alert to the user.

At its core, most price tracking relies on web scraping: using a script to automatically download and extract specific data from websites. The raw downloaded information (the website's HTML code) is then processed to isolate the crucial bits – in this case, the price.

These tools usually grab more than just the price. They'll often collect product URLs, names, and maybe even descriptions to provide context. Some might calculate the discount percentage or display a neat price history graph.

Alerts are a key feature. Many users want to set a price threshold. Imagine telling your tracker: "Email me if this graphics card drops below $500." When the condition is met, you get notified, potentially saving you a good chunk of change.

All these components work together with one primary objective: to keep you informed about price fluctuations and empower you to snag deals the moment they appear.

Why Bother Tracking Prices at Best Buy?

The most obvious motivation for tracking Best Buy prices is to catch the best possible deals and save money. For professionals buying equipment regularly, monitoring prices is vital for managing budgets and maximizing profit margins.

Beyond immediate savings, price tracking helps you gauge timing. Is it worth waiting for a potential discount? By looking at the historical price data, you can see if an item frequently goes on sale or if its price has remained stubbornly static. This helps manage expectations.

Furthermore, if you track the same product across multiple retailers (using similar tools or techniques), you can make more informed purchasing decisions, ensuring you get the absolute best value available at any given time.

Does Best Buy Offer Its Own Price Alerts?

Best Buy sometimes provides alerts, but typically only during specific promotional events or for certain types of deals. For everyday tracking of individual items, there isn't a built-in feature to notify you of price changes. This limitation is precisely why many turn to third-party tools or build their own solutions.

Crafting a Price Tracker with Python: A Step-by-Step Guide

Python is a fantastic choice for this project. It's known for being relatively easy to pick up, and its rich ecosystem of libraries simplifies tasks like web scraping. We'll leverage Python to create our price monitoring tool and start collecting that valuable price history.

First things first, open your favorite code editor or IDE, create a new project, and install the necessary Python libraries. Open your terminal or command prompt and run:

pip install requests beautifulsoup4 pandas schedule

These libraries handle web requests, HTML parsing, data manipulation, and task scheduling, respectively. Let's begin by writing a function to visit a Best Buy product page and grab its price:

import requests
from bs4 import BeautifulSoup
import time

# We might need time.sleep later to be polite


# Target product URL (Example: ASUS ROG Laptop)
TARGET_URL = (
    "https://www.bestbuy.com/site/asus-rog-zephyrus-g14-14-165hz-gaming-laptop-qhd-"
    "amd-ryzen-9-with-16gb-memory-nvidia-geforce-rtx-4060-512gb-ssd-moonlight-white/"
    "6535495.p?skuId=6535495"
)


def fetch_product_price(url):
    # Mimic a real browser visit with a User-Agent header
    headers = {
        "User-Agent": (
            "Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
            "AppleWebKit/537.36 (KHTML, like Gecko) "
            "Chrome/114.0.0.0 Safari/537.36"
        )
    }

    try:
        page_content = requests.get(url, headers=headers, timeout=15)
        page_content.raise_for_status() # Raise an exception for bad status codes (4xx or 5xx)
        parsed_html = BeautifulSoup(page_content.content, 'html.parser')

        # --- IMPORTANT ---
        # Finding the price requires inspecting the Best Buy page HTML.
        # The selectors below worked at one time but WILL change.
        # Use your browser's developer tools (right-click -> Inspect) to find the current tag and class for the price.
        price_container = parsed_html.find(
            "div", {"class": "priceView-hero-price priceView-customer-price"}
        )

        if price_container:
            # Often, the visually hidden span has the clean price
            price_span = price_container.find("span", {"aria-hidden": "true"})
            if price_span:
                price_text = price_span.get_text()
                # Clean up the price string (remove $, commas)
                price_numeric = float(price_text.replace("$", "").replace(",", ""))
                return price_numeric
            else:
                print("Could not find the price span within the container.")
                return None
        else:
             print("Could not find the price container element. Best Buy's layout may have changed.")
             return None
    except requests.exceptions.RequestException as e:
        print(f"Error fetching URL {url}: {e}")
        return None
    except Exception as e:
        print(f"An error occurred during parsing: {e}")
        return None


# Let's test it
current_price = fetch_product_price(TARGET_URL)

if current_price is not None:
    print(f"Current price found: ${current_price}")
else:
    print("Failed to retrieve the price.")

We've selected an ASUS ROG laptop as our example, but you can adapt this for any product by changing the TARGET_URL.

Our script starts by defining a User-Agent header. This tells the website we're using a standard web browser, which is crucial because many sites block simple scripts using default library user agents.

The code then fetches the page's HTML content using the requests library. We include error handling for network issues or if Best Buy blocks the request. BeautifulSoup then parses this HTML. The critical part involves finding the specific HTML element (like a <div> or <span>) containing the price. Warning: Website structures change frequently! You'll absolutely need to use your browser's developer tools ("Inspect Element") to find the correct HTML tags and classes for the price on the current Best Buy page layout. The example selectors might be outdated. Finally, we extract the text, clean it (stripping the '$' and commas), and convert it to a float.

Running a test print like shown helps confirm if your scraper is correctly identifying and extracting the price.

Now, a key consideration: Best Buy, like many retailers, often shows different content or blocks access based on geographic location. If you're running this script from outside the US, you might get blocked or see irrelevant information. This is where proxies become essential. Using a proxy server routes your request through a different IP address, making it appear as if you're browsing from the proxy's location (e.g., the US).

For reliable access, especially for frequent checks, consider using high-quality proxies. Evomi offers ethically sourced residential proxies that are ideal for mimicking real user traffic and bypassing geo-restrictions. They look like genuine ISP-provided IPs, reducing the chance of blocks. We even offer a free trial if you'd like to test them out.

Integrating a proxy with `requests` is straightforward. You'd typically define your proxy details and pass them to the `requests.get` function:

# Example using Evomi residential proxy authentication
proxy_user = 'your_evomi_username'
proxy_pass = 'your_evomi_password'
proxy_host = 'rp.evomi.com'
proxy_port_http = 1000
proxy_port_https = 1001

proxies = {
    'http': f'http://{proxy_user}:{proxy_pass}@{proxy_host}:{proxy_port_http}',
    'https': f'http://{proxy_user}:{proxy_pass}@{proxy_host}:{proxy_port_https}'  # The proxy URL scheme stays http:// even for https traffic
}

# Modify the get request in the function like this:
# page_content = requests.get(url, headers=headers, proxies=proxies, timeout=20)  # Increased timeout often needed with proxies

Logging the Price Data

Right now, the price is just fetched and maybe printed. We need to store this data persistently to track changes over time. The pandas library is excellent for this. Let's create a function to append the fetched price, timestamp, and URL to a CSV file:

import pandas as pd
from datetime import datetime
import os # (Make sure 'import os' is at the top of your script with other imports)

LOG_FILE = 'bestbuy_price_log.csv'

def log_price_data(price, url):
    if price is None: # Don't log if fetching failed
        return
    timestamp = datetime.now()
    new_data = {
        'Timestamp': [timestamp],
        'Price_USD': [price],
        'Product_URL': [url]
    }
    df = pd.DataFrame(new_data)
    try:
        # Append to CSV without writing header if file exists, include header otherwise
        header = not os.path.exists(LOG_FILE)
        df.to_csv(LOG_FILE, mode='a', header=header, index=False)
        # print(f"Data logged: {timestamp}, ${price}, {url}") # Optional: uncomment for verbose logging
    except Exception as e:
        print(f"Error writing to CSV {LOG_FILE}: {e}")

A quick structural note: while we're showing code snippets step by step, remember to place all your import statements at the very top of your Python file, followed by function definitions, and then the main script logic. The snippets intersperse imports and definitions only for the sake of explanation.

This log_price_data function takes the price (retrieved by fetch_product_price()) and the URL. It records the current date and time, bundles everything into a pandas DataFrame, and appends it as a new row to our `bestbuy_price_log.csv` file. The `mode='a'` ensures we add to the file rather than overwrite it, and `header=False` (after the first write) prevents writing the column names repeatedly.
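Once the log has accumulated a few entries, you can start pulling out the historical insight discussed earlier. Here's a minimal sketch of analyzing the log with pandas; it uses a few hypothetical in-memory sample rows in the same column format log_price_data writes, but in practice you would read the real `bestbuy_price_log.csv` file:

```python
import pandas as pd
from io import StringIO

# Hypothetical sample rows matching log_price_data's CSV columns.
# In practice: history = pd.read_csv('bestbuy_price_log.csv', parse_dates=['Timestamp'])
sample_csv = StringIO(
    "Timestamp,Price_USD,Product_URL\n"
    "2025-05-01 10:30:00,1599.99,https://www.bestbuy.com/site/example\n"
    "2025-05-02 10:30:00,1549.99,https://www.bestbuy.com/site/example\n"
    "2025-05-03 10:30:00,1449.99,https://www.bestbuy.com/site/example\n"
)
history = pd.read_csv(sample_csv, parse_dates=['Timestamp'])

# Summarize how the price has moved over time
lowest = history['Price_USD'].min()
highest = history['Price_USD'].max()
latest = history.sort_values('Timestamp')['Price_USD'].iloc[-1]
print(f"Lowest: ${lowest:.2f}, highest: ${highest:.2f}, latest: ${latest:.2f}")
```

A summary like this helps answer the "is it worth waiting?" question: if the latest price is already near the historical low, a further drop may be less likely.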

Your script structure, combining the fetching and logging, might look roughly like this now:

import requests
from bs4 import BeautifulSoup
import pandas as pd
from datetime import datetime
import time
import os # Added for file checking

TARGET_URL = "https://www.bestbuy.com/site/asus-rog-zephyrus-g14-14-165hz-gaming-laptop-qhd-amd-ryzen-9-with-16gb-memory-nvidia-geforce-rtx-4060-512gb-ssd-moonlight-white/6535495.p?skuId=6535495"
LOG_FILE = 'bestbuy_price_log.csv'

# --- Function: fetch_product_price(url) ---
# (Insert the fetch_product_price function code from above here)
# --- End Function: fetch_product_price ---

# --- Function: log_price_data(price, url) ---
# (Insert the log_price_data function code from above here)
# --- End Function: log_price_data ---

# --- Main execution part ---
product_price = fetch_product_price(TARGET_URL)

if product_price is not None:
    log_price_data(product_price, TARGET_URL)
    print(f"Successfully fetched and logged price: ${product_price}")
else:
    print("Could not fetch or log price.")

Adding Email Notifications

The real power comes from alerts. Let's set up email notifications for when the price drops below a certain point. For this, we'll use Gmail's SMTP server. You'll need a Gmail account (creating a dedicated one for alerts is recommended). Enable 2-Factor Authentication (2FA) on this account, then generate an "App Password" specifically for this script via your Google Account security settings (https://myaccount.google.com/apppasswords). Do not use your main Gmail password in the script!

import smtplib
import ssl  # For secure connection

# --- IMPORTANT SECURITY NOTE ---
# NEVER hardcode credentials directly in your script like below.
# Use environment variables (e.g., via python-dotenv library) or a secure config file.
SENDER_EMAIL = "your.alert.sender@gmail.com"  # Your dedicated Gmail address
SENDER_PASSWORD = "YOUR_GENERATED_APP_PASSWORD"  # The App Password you created
RECIPIENT_EMAIL = "your.personal.email@example.com"  # Where alerts should go

def send_price_alert(current_price, url, alert_threshold):
    if current_price is None or current_price > alert_threshold:
        return  # No alert needed

    subject = "Price Alert: Item Below Threshold!"
    body = f"Good news!\n\nThe price for the item at {url}\nhas dropped to ${current_price:.2f} (Your threshold was ${alert_threshold:.2f}).\n\nCheck it out!"
    message = f"Subject: {subject}\n\n{body}"

    context = ssl.create_default_context()  # Create a secure SSL context
    try:
        print(f"Attempting to send email alert for price ${current_price:.2f}...")
        with smtplib.SMTP_SSL("smtp.gmail.com", 465, context=context) as server:  # Use SMTP_SSL for port 465
            server.login(SENDER_EMAIL, SENDER_PASSWORD)
            server.sendmail(SENDER_EMAIL, RECIPIENT_EMAIL, message)
            print(f"Email alert sent successfully to {RECIPIENT_EMAIL}!")
    except smtplib.SMTPAuthenticationError:
        print("SMTP Authentication Error: Check sender email/app password.")
    except Exception as e:
        print(f"Failed to send email: {e}")

# Remember to add 'import ssl' at the top of your file.

This function checks if the fetched price is below your desired `alert_threshold`. If it is, it connects to Gmail's secure SMTP server (using `SMTP_SSL` and port 465), logs in with your app password, and sends the alert email. Again, please heed the security warning: use a library like python-dotenv to load your email credentials from a separate .env file instead of writing them directly in the code, especially if you share or version control your script. We show them here only for clarity in the example.
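If you go the python-dotenv route, the setup is small. A sketch, assuming a `.env` file next to your script and hypothetical variable names (install the package with `pip install python-dotenv`):

```python
import os

try:
    # pip install python-dotenv
    from dotenv import load_dotenv
    load_dotenv()  # reads KEY=value pairs from a .env file into the environment
except ImportError:
    pass  # python-dotenv not installed; rely on real environment variables

# Hypothetical variable names - match whatever you put in your .env file:
#   GMAIL_SENDER=your.alert.sender@gmail.com
#   GMAIL_APP_PASSWORD=abcdefghijklmnop
#   ALERT_RECIPIENT=your.personal.email@example.com
SENDER_EMAIL = os.getenv("GMAIL_SENDER")
SENDER_PASSWORD = os.getenv("GMAIL_APP_PASSWORD")
RECIPIENT_EMAIL = os.getenv("ALERT_RECIPIENT")

if not all([SENDER_EMAIL, SENDER_PASSWORD, RECIPIENT_EMAIL]):
    print("Warning: email credentials not fully configured; alerts disabled.")
```

Remember to add `.env` to your `.gitignore` so the credentials never end up in version control.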

Now we need to run this check automatically.

Automating the Checks with Scheduling

Manually running the script isn't practical for continuous monitoring. We need it to execute automatically at regular intervals. The schedule library makes this easy:

import schedule
import time
from datetime import datetime # Assuming datetime is used based on the print statement

# --- Define constants and previous functions here ---
# TARGET_URL = "..."
# LOG_FILE = "..."
# SENDER_EMAIL = "..." (loaded securely!)
# SENDER_PASSWORD = "..." (loaded securely!)
# RECIPIENT_EMAIL = "..."
# ALERT_THRESHOLD = 1499.99 # Set your desired price alert level for the example product

# fetch_product_price(url) definition...
# log_price_data(price, url) definition...
# send_price_alert(price, url, threshold) definition...

# --- End definitions ---

def perform_price_check():
    print(f"Running scheduled price check at {datetime.now()}...")
    # Assuming fetch_product_price is defined elsewhere and returns a float or None
    current_price = fetch_product_price(TARGET_URL)

    if current_price is not None:
        # Assuming log_price_data and send_price_alert are defined elsewhere
        log_price_data(current_price, TARGET_URL)
        send_price_alert(current_price, TARGET_URL, ALERT_THRESHOLD)
        print(f"Check complete. Current price: ${current_price:.2f}")
    else:
        print("Price check failed (fetch returned None).")

# Schedule the job: Run once daily at 10:30 AM
schedule.every().day.at("10:30").do(perform_price_check)
# Other options: schedule.every().hour.at(":15"), schedule.every().monday.at("10:30"), etc.
# schedule.every(6).hours.do(perform_price_check) # Example: run every 6 hours

print("Scheduler started. Waiting for the scheduled job...")

# Initial check immediately upon starting (optional)
# perform_price_check()

# Keep the script running to allow the scheduler to work
while True:
    schedule.run_pending()
    time.sleep(60) # Check every 60 seconds if a scheduled job is due

We bundle the fetching, logging, and alerting logic into a single function, perform_price_check. Then, using schedule, we tell the script to run this function every day at a specific time (e.g., "10:30"). The `while True` loop at the end is essential; it keeps the script alive, checking every minute if it's time to run any scheduled tasks.

Your complete script should now look something like this (remembering to organize imports and functions properly and handle credentials securely):

import os
import smtplib
import ssl
import time
from datetime import datetime

import pandas as pd
import requests
import schedule
from bs4 import BeautifulSoup

# Consider importing dotenv for secure credential loading
# from dotenv import load_dotenv
# load_dotenv() # Load variables from .env file

# --- Configuration ---
TARGET_URL = "https://www.bestbuy.com/site/asus-rog-zephyrus-g14-14-165hz-gaming-laptop-qhd-amd-ryzen-9-with-16gb-memory-nvidia-geforce-rtx-4060-512gb-ssd-moonlight-white/6535495.p?skuId=6535495"
LOG_FILE = 'bestbuy_price_log.csv'
ALERT_THRESHOLD = 1499.99  # Desired price for alert

# Securely load credentials (Example using environment variables)
# SENDER_EMAIL = os.getenv("GMAIL_SENDER")
# SENDER_PASSWORD = os.getenv("GMAIL_APP_PASSWORD")
# RECIPIENT_EMAIL = os.getenv("ALERT_RECIPIENT")

# Replace with your actual secure loading method or hardcoded values (NOT recommended for production)
SENDER_EMAIL = "your.alert.sender@gmail.com"
SENDER_PASSWORD = "YOUR_GENERATED_APP_PASSWORD"
RECIPIENT_EMAIL = "your.personal.email@example.com"


# --- Function Definitions ---

def fetch_product_price(url):
    headers = {
        "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.0.0 Safari/537.36"
    }
    # Add proxy settings here if needed:
    # proxies = { ... }
    try:
        # Add 'proxies=proxies' if using them
        page_content = requests.get(url, headers=headers, timeout=15)
        page_content.raise_for_status()
        parsed_html = BeautifulSoup(page_content.content, 'html.parser')

        # --- Find price element (MUST BE CHECKED/UPDATED) ---
        price_container = parsed_html.find("div", {"class": "priceView-hero-price priceView-customer-price"})
        if price_container:
            price_span = price_container.find("span", {"aria-hidden": "true"})
            if price_span:
                price_text = price_span.get_text()
                price_numeric = float(price_text.replace("$", "").replace(",", ""))
                return price_numeric
        # If the container or span wasn't found, execution falls through to return None
    except Exception as e:
        print(f"Error fetching or parsing price: {e}")
    return None  # Return None if any step fails


def log_price_data(price, url):
    if price is None:
        return
    timestamp = datetime.now()
    new_data = {'Timestamp': [timestamp], 'Price_USD': [price], 'Product_URL': [url]}
    df = pd.DataFrame(new_data)
    try:
        header = not os.path.exists(LOG_FILE)
        df.to_csv(LOG_FILE, mode='a', header=header, index=False)
    except Exception as e:
        print(f"Error writing to CSV: {e}")


def send_price_alert(current_price, url, alert_threshold):
    if current_price is None or current_price > alert_threshold or not SENDER_EMAIL or not SENDER_PASSWORD or not RECIPIENT_EMAIL:
        return  # No alert needed or config missing

    subject = "Price Alert: Item Below Threshold!"
    body = f"Price Drop!\n\nThe item at {url}\nis now ${current_price:.2f} (Threshold: ${alert_threshold:.2f}).\n\nCheck it out!"
    message = f"Subject: {subject}\n\n{body}"
    context = ssl.create_default_context()

    try:
        with smtplib.SMTP_SSL("smtp.gmail.com", 465, context=context) as server:
            server.login(SENDER_EMAIL, SENDER_PASSWORD)
            server.sendmail(SENDER_EMAIL, RECIPIENT_EMAIL, message)
            print(f"Email alert sent for price ${current_price:.2f}!")
    except Exception as e:
        print(f"Failed to send email alert: {e}")


def perform_price_check():
    print(f"[{datetime.now()}] Running scheduled price check...")
    current_price = fetch_product_price(TARGET_URL)
    if current_price is not None:
        log_price_data(current_price, TARGET_URL)
        send_price_alert(current_price, TARGET_URL, ALERT_THRESHOLD)
        print(f"Check complete. Price: ${current_price:.2f}")
    else:
        print("Price check failed.")


# --- Scheduling ---
schedule.every().day.at("10:30").do(perform_price_check)
# schedule.every(1).hour.do(perform_price_check) # Alternative: hourly check

print("Price tracker starting...")
print(f"Tracking: {TARGET_URL}")
print(f"Alert Threshold: ${ALERT_THRESHOLD:.2f}")
print(f"Logging to: {LOG_FILE}")
print(f"Scheduled daily check at: 10:30")

# Optional: Run check once immediately on start
# perform_price_check()

# --- Main Loop ---
while True:
    schedule.run_pending()
    time.sleep(60)

To fully test this, you'll need to set up the Gmail account and app password, fill in your details (securely!), and either wait for the scheduled time or temporarily call perform_price_check() directly after the scheduling setup to trigger an immediate run.

Wrapping Up

And there you have it – the foundation for your own Best Buy price tracker built with Python! To monitor multiple items, you could modify the script to loop through a list of URLs, perhaps storing different alert thresholds for each. Remember that websites actively try to prevent scraping, so you might need to adjust selectors, implement delays (using `time.sleep()`), and employ robust proxy solutions like those from Evomi to ensure your tracker runs reliably over the long term, especially if you increase the checking frequency.
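As a sketch of that multi-product idea, you could replace the single TARGET_URL/ALERT_THRESHOLD pair with a dict and loop over it in the scheduled job. The URLs and thresholds below are placeholders, and the fetch/log/alert parameters stand in for the fetch_product_price, log_price_data, and send_price_alert functions built earlier:

```python
import time

# Hypothetical watchlist: product URL -> alert threshold in USD
WATCHLIST = {
    "https://www.bestbuy.com/site/example-laptop/1111111.p": 1499.99,
    "https://www.bestbuy.com/site/example-gpu/2222222.p": 499.99,
}

def perform_price_check_all(fetch, log, alert):
    """Check every watched product once.

    fetch, log, and alert stand in for the fetch_product_price,
    log_price_data, and send_price_alert functions defined earlier.
    """
    for url, threshold in WATCHLIST.items():
        price = fetch(url)
        if price is not None:
            log(price, url)
            alert(price, url, threshold)
        time.sleep(2)  # brief pause between requests to be polite
```

You would then schedule perform_price_check_all in place of the single-product check, keeping the rest of the script unchanged.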

Building Your Own Best Buy Price Tracker with Python

Best Buy. It's a go-to spot for many of us seeking the latest electronics, components, or gadgets. Whether you're upgrading your workstation or diving deep into a tech hobby, chances are you've browsed their virtual or physical aisles. But let's be honest, tech can get pricey, leading many savvy shoppers to hunt relentlessly for deals and price drops.

Constantly hitting refresh on a product page? That gets old fast. This need has spurred the creation of various Best Buy price tracking tools. Some folks even take matters into their own hands and build custom trackers – and it's perhaps less daunting than you might think. Ready to see how you can build one yourself?

So, What Exactly is a Best Buy Price Tracker?

Think of a Best Buy price tracker as an automated scout that monitors the online store for product prices. It diligently records this information and compares it against historical data it has gathered over time. The magic happens when it detects a discount, a significant price reduction, or a sale – often triggering an alert to the user.

At its core, most price tracking relies on web scraping: using a script to automatically download and extract specific data from websites. The raw downloaded information (the website's HTML code) is then processed to isolate the crucial bits – in this case, the price.

These tools usually grab more than just the price. They'll often collect product URLs, names, and maybe even descriptions to provide context. Some might calculate the discount percentage or display a neat price history graph.

Alerts are a key feature. Many users want to set a price threshold. Imagine telling your tracker: "Email me if this graphics card drops below $500." When the condition is met, you get notified, potentially saving you a good chunk of change.

All these components work together with one primary objective: to keep you informed about price fluctuations and empower you to snag deals the moment they appear.

Why Bother Tracking Prices at Best Buy?

The most obvious motivation for tracking Best Buy prices is to catch the best possible deals and save money. For professionals buying equipment regularly, monitoring prices is vital for managing budgets and maximizing profit margins.

Beyond immediate savings, price tracking helps you gauge timing. Is it worth waiting for a potential discount? By looking at the historical price data, you can see if an item frequently goes on sale or if its price has remained stubbornly static. This helps manage expectations.

Furthermore, if you track the same product across multiple retailers (using similar tools or techniques), you can make more informed purchasing decisions, ensuring you get the absolute best value available at any given time.

Does Best Buy Offer Its Own Price Alerts?

Best Buy sometimes provides alerts, but typically only during specific promotional events or for certain types of deals. For everyday tracking of individual items, there isn't a built-in feature to notify you of price changes. This limitation is precisely why many turn to third-party tools or build their own solutions.

Crafting a Price Tracker with Python: A Step-by-Step Guide

Python is a fantastic choice for this project. It's known for being relatively easy to pick up, and its rich ecosystem of libraries simplifies tasks like web scraping. We'll leverage Python to create our price monitoring tool and start collecting that valuable price history.

First things first, open your favorite code editor or IDE, create a new project, and install the necessary Python libraries. Open your terminal or command prompt and run:

These libraries handle web requests, HTML parsing, data manipulation, and task scheduling, respectively. Let's begin by writing a function to visit a Best Buy product page and grab its price:

import requests
from bs4 import BeautifulSoup
import time

# We might need time.sleep later to be polite


# Target product URL (Example: ASUS ROG Laptop)
TARGET_URL = (
    "https://www.bestbuy.com/site/asus-rog-zephyrus-g14-14-165hz-gaming-laptop-qhd-"
    "amd-ryzen-9-with-16gb-memory-nvidia-geforce-rtx-4060-512gb-ssd-moonlight-white/"
    "6535495.p?skuId=6535495"
)


def fetch_product_price(url):
    # Mimic a real browser visit with a User-Agent header
    headers = {
        "User-Agent": (
            "Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
            "AppleWebKit/537.36 (KHTML, like Gecko) "
            "Chrome/114.0.0.0 Safari/537.36"
        )
    }

    try:
        page_content = requests.get(url, headers=headers, timeout=15)
        page_content.raise_for_status() # Raise an exception for bad status codes (4xx or 5xx)
        parsed_html = BeautifulSoup(page_content.content, 'html.parser')

        # --- IMPORTANT ---
        # Finding the price requires inspecting the Best Buy page HTML.
        # The selectors below worked at one time but WILL change.
        # Use your browser's developer tools (right-click -> Inspect) to find the current tag and class for the price.
        price_container = parsed_html.find(
            "div", {"class": "priceView-hero-price priceView-customer-price"}
        )

        if price_container:
            # Often, the visually hidden span has the clean price
            price_span = price_container.find("span", {"aria-hidden": "true"})
            if price_span:
                price_text = price_span.get_text()
                # Clean up the price string (remove $, commas)
                price_numeric = float(price_text.replace("$", "").replace(",", ""))
                return price_numeric
            else:
                print("Could not find the price span within the container.")
                return None
        else:
             print("Could not find the price container element. Best Buy's layout may have changed.")
             return None
    except requests.exceptions.RequestException as e:
        print(f"Error fetching URL {url}: {e}")
        return None
    except Exception as e:
        print(f"An error occurred during parsing: {e}")
        return None


# Let's test it
current_price = fetch_product_price(TARGET_URL)

if current_price:
    print(f"Current price found: ${current_price}")
else:
    print("Failed to retrieve the price.")

We've selected an ASUS ROG laptop as our example, but you can adapt this for any product by changing the TARGET_URL.

Our script starts by defining a User-Agent header. This tells the website we're using a standard web browser, which is crucial because many sites block simple scripts using default library user agents.

The code then fetches the page's HTML content using the requests library. We include error handling for network issues or if Best Buy blocks the request. BeautifulSoup then parses this HTML. The critical part involves finding the specific HTML element (like a <div> or <span>) containing the price. Warning: Website structures change frequently! You'll absolutely need to use your browser's developer tools ("Inspect Element") to find the correct HTML tags and classes for the price on the current Best Buy page layout. The example selectors might be outdated. Finally, we extract the text, clean it (remove '$' and ',') and convert it to a numerical float value.

Running a test print like shown helps confirm if your scraper is correctly identifying and extracting the price.

Now, a key consideration: Best Buy, like many retailers, often shows different content or blocks access based on geographic location. If you're running this script from outside the US, you might get blocked or see irrelevant information. This is where proxies become essential. Using a proxy server routes your request through a different IP address, making it appear as if you're browsing from the proxy's location (e.g., the US).

For reliable access, especially for frequent checks, consider using high-quality proxies. Evomi offers ethically sourced residential proxies that are ideal for mimicking real user traffic and bypassing geo-restrictions. They look like genuine ISP-provided IPs, reducing the chance of blocks. We even offer a free trial if you'd like to test them out.

Integrating a proxy with `requests` is straightforward. You'd typically define your proxy details and pass them to the `requests.get` function:

# Example using Evomi residential proxy authentication
proxy_user = 'your_evomi_username'
proxy_pass = 'your_evomi_password'
proxy_host = 'rp.evomi.com'
proxy_port_http = 1000
proxy_port_https = 1001

proxies = {
    'http': f'http://{proxy_user}:{proxy_pass}@{proxy_host}:{proxy_port_http}',
    'https': f'http://{proxy_user}:{proxy_pass}@{proxy_host}:{proxy_port_https}'  # Note: requests uses http auth even for https proxies here
}

# Modify the get request in the function like this:
# page_content = requests.get(url, headers=headers, proxies=proxies, timeout=20)  # Increased timeout often needed with proxies

Logging the Price Data

Right now, the price is just fetched and maybe printed. We need to store this data persistently to track changes over time. The pandas library is excellent for this. Let's create a function to append the fetched price, timestamp, and URL to a CSV file:

import pandas as pd
from datetime import datetime
import os # (Make sure 'import os' is at the top of your script with other imports)

LOG_FILE = 'bestbuy_price_log.csv'

def log_price_data(price, url):
    if price is None: # Don't log if fetching failed
        return
    timestamp = datetime.now()
    new_data = {
        'Timestamp': [timestamp],
        'Price_USD': [price],
        'Product_URL': [url]
    }
    df = pd.DataFrame(new_data)
    try:
        # Write the header only when the file doesn't exist yet
        header = not os.path.exists(LOG_FILE)
        df.to_csv(LOG_FILE, mode='a', header=header, index=False)
        # print(f"Data logged: {timestamp}, ${price}, {url}") # Optional: uncomment for verbose logging
    except Exception as e:
        print(f"Error writing to CSV {LOG_FILE}: {e}")

A quick structural note: While we're showing code snippets step-by-step, remember to place all your import statements at the very top of your Python file, followed by function definitions, and then the main script logic. Don't intersperse imports and function definitions throughout the file as shown here for explanation.

This `log_price_data` function takes the price (as returned by `fetch_product_price()`) and the URL. It records the current date and time, bundles everything into a pandas DataFrame, and appends it as a new row to `bestbuy_price_log.csv`. The `mode='a'` argument appends to the file rather than overwriting it, and `header=False` (after the first write) prevents the column names from being written repeatedly.
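Once a few rows have accumulated, the same CSV doubles as your price history. As a sketch (the function names here are our own, not part of the tutorial's script), you can read the log back with pandas and pull out some quick statistics:

```python
import pandas as pd

LOG_FILE = 'bestbuy_price_log.csv'

def load_price_history(log_file=LOG_FILE):
    # parse_dates turns the Timestamp column back into datetimes
    df = pd.read_csv(log_file, parse_dates=['Timestamp'])
    return df.sort_values('Timestamp')

def price_summary(df):
    return {
        'checks': len(df),
        'lowest': df['Price_USD'].min(),
        'highest': df['Price_USD'].max(),
        'latest': df['Price_USD'].iloc[-1],
    }
```

From here, something like `df.plot(x='Timestamp', y='Price_USD')` gives you the price history graph mentioned earlier.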

Your script structure, combining the fetching and logging, might look roughly like this now:

import requests
from bs4 import BeautifulSoup
import pandas as pd
from datetime import datetime
import time
import os # Added for file checking

TARGET_URL = "https://www.bestbuy.com/site/asus-rog-zephyrus-g14-14-165hz-gaming-laptop-qhd-amd-ryzen-9-with-16gb-memory-nvidia-geforce-rtx-4060-512gb-ssd-moonlight-white/6535495.p?skuId=6535495"
LOG_FILE = 'bestbuy_price_log.csv'

# --- Function: fetch_product_price(url) ---
# (Insert the fetch_product_price function code from above here)
# --- End Function: fetch_product_price ---

# --- Function: log_price_data(price, url) ---
# (Insert the log_price_data function code from above here)
# --- End Function: log_price_data ---

# --- Main execution part ---
product_price = fetch_product_price(TARGET_URL)

if product_price is not None:
    log_price_data(product_price, TARGET_URL)
    print(f"Successfully fetched and logged price: ${product_price}")
else:
    print("Could not fetch or log price.")

Adding Email Notifications

The real power comes from alerts. Let's set up email notifications for when the price drops below a certain point. For this, we'll use Gmail's SMTP server. You'll need a Gmail account (creating a dedicated one for alerts is recommended). Enable 2-Factor Authentication (2FA) on this account, then generate an "App Password" specifically for this script via your Google Account security settings (https://myaccount.google.com/apppasswords). Do not use your main Gmail password in the script!

import smtplib
import ssl  # For secure connection

# --- IMPORTANT SECURITY NOTE ---
# NEVER hardcode credentials directly in your script like below.
# Use environment variables (e.g., via python-dotenv library) or a secure config file.
SENDER_EMAIL = "your.alert.sender@gmail.com"  # Your dedicated Gmail address
SENDER_PASSWORD = "YOUR_GENERATED_APP_PASSWORD"  # The App Password you created
RECIPIENT_EMAIL = "your.personal.email@example.com"  # Where alerts should go

def send_price_alert(current_price, url, alert_threshold):
    if current_price is None or current_price > alert_threshold:
        return  # No alert needed

    subject = "Price Alert: Item Below Threshold!"
    body = f"Good news!\n\nThe price for the item at {url}\nhas dropped to ${current_price:.2f} (Your threshold was ${alert_threshold:.2f}).\n\nCheck it out!"
    message = f"Subject: {subject}\n\n{body}"

    context = ssl.create_default_context()  # Create a secure SSL context
    try:
        print(f"Attempting to send email alert for price ${current_price:.2f}...")
        with smtplib.SMTP_SSL("smtp.gmail.com", 465, context=context) as server:  # Use SMTP_SSL for port 465
            server.login(SENDER_EMAIL, SENDER_PASSWORD)
            server.sendmail(SENDER_EMAIL, RECIPIENT_EMAIL, message)
            print(f"Email alert sent successfully to {RECIPIENT_EMAIL}!")
    except smtplib.SMTPAuthenticationError:
        print("SMTP Authentication Error: Check sender email/app password.")
    except Exception as e:
        print(f"Failed to send email: {e}")

# Remember to add 'import ssl' at the top of your file.

This function checks if the fetched price is below your desired `alert_threshold`. If it is, it connects to Gmail's secure SMTP server (using `SMTP_SSL` and port 465), logs in with your app password, and sends the alert email. Again, please heed the security warning: use a library like python-dotenv to load your email credentials from a separate .env file instead of writing them directly in the code, especially if you share or version control your script. We show them here only for clarity in the example.
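Here is one way that secure credential loading can look in practice. The environment variable names are our own convention; with python-dotenv installed, uncommenting the two `dotenv` lines would populate the environment from a `.env` file first:

```python
import os

# With python-dotenv (pip install python-dotenv), uncomment these two lines
# and keep GMAIL_SENDER etc. in a .env file that is NOT committed to git:
# from dotenv import load_dotenv
# load_dotenv()

SENDER_EMAIL = os.getenv('GMAIL_SENDER', '')
SENDER_PASSWORD = os.getenv('GMAIL_APP_PASSWORD', '')
RECIPIENT_EMAIL = os.getenv('ALERT_RECIPIENT', '')

def email_config_ok():
    """True only when all three settings are present."""
    return all([SENDER_EMAIL, SENDER_PASSWORD, RECIPIENT_EMAIL])
```

Checking the configuration up front (and skipping the alert when it fails) beats a cryptic SMTP error at send time.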

Now we need to run this check automatically.

Automating the Checks with Scheduling

Manually running the script isn't practical for continuous monitoring. We need it to execute automatically at regular intervals. The schedule library makes this easy:

import schedule
import time
from datetime import datetime  # used in the status print below

# --- Define constants and previous functions here ---
# TARGET_URL = "..."
# LOG_FILE = "..."
# SENDER_EMAIL = "..." (loaded securely!)
# SENDER_PASSWORD = "..." (loaded securely!)
# RECIPIENT_EMAIL = "..."
# ALERT_THRESHOLD = 1499.99 # Set your desired price alert level for the example product

# fetch_product_price(url) definition...
# log_price_data(price, url) definition...
# send_price_alert(price, url, threshold) definition...

# --- End definitions ---

def perform_price_check():
    print(f"Running scheduled price check at {datetime.now()}...")
    current_price = fetch_product_price(TARGET_URL)

    if current_price is not None:
        log_price_data(current_price, TARGET_URL)
        send_price_alert(current_price, TARGET_URL, ALERT_THRESHOLD)
        print(f"Check complete. Current price: ${current_price:.2f}")
    else:
        print("Price check failed (fetch returned None).")

# Schedule the job: run once daily at 10:30 AM
schedule.every().day.at("10:30").do(perform_price_check)
# Other options: schedule.every().hour.do(...), schedule.every().monday.at("10:30").do(...)
# schedule.every(6).hours.do(perform_price_check)  # Example: run every 6 hours

print("Scheduler started. Waiting for the scheduled job...")

# Initial check immediately upon starting (optional)
# perform_price_check()

# Keep the script running to allow the scheduler to work
while True:
    schedule.run_pending()
    time.sleep(60) # Check every 60 seconds if a scheduled job is due

We bundle the fetching, logging, and alerting logic into a single function, perform_price_check. Then, using schedule, we tell the script to run this function every day at a specific time (e.g., "10:30"). The `while True` loop at the end is essential; it keeps the script alive, checking every minute if it's time to run any scheduled tasks.

Your complete script should now look something like this (remembering to organize imports and functions properly and handle credentials securely):

import os
import smtplib
import ssl
import time
from datetime import datetime

import pandas as pd
import requests
import schedule
from bs4 import BeautifulSoup

# Consider using python-dotenv for secure credential loading:
# from dotenv import load_dotenv
# load_dotenv()  # Load variables from a .env file

# --- Configuration ---
TARGET_URL = "https://www.bestbuy.com/site/asus-rog-zephyrus-g14-14-165hz-gaming-laptop-qhd-amd-ryzen-9-with-16gb-memory-nvidia-geforce-rtx-4060-512gb-ssd-moonlight-white/6535495.p?skuId=6535495"
LOG_FILE = 'bestbuy_price_log.csv'
ALERT_THRESHOLD = 1499.99  # Desired price for alert

# Securely load credentials (Example using environment variables)
# SENDER_EMAIL = os.getenv("GMAIL_SENDER")
# SENDER_PASSWORD = os.getenv("GMAIL_APP_PASSWORD")
# RECIPIENT_EMAIL = os.getenv("ALERT_RECIPIENT")

# Replace with your actual secure loading method or hardcoded values (NOT recommended for production)
SENDER_EMAIL = "your.alert.sender@gmail.com"
SENDER_PASSWORD = "YOUR_GENERATED_APP_PASSWORD"
RECIPIENT_EMAIL = "your.personal.email@example.com"


# --- Function Definitions ---

def fetch_product_price(url):
    headers = {
        "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.0.0 Safari/537.36"
    }
    # Add proxy settings here if needed:
    # proxies = { ... }
    try:
        # Add 'proxies=proxies' if using them
        page_content = requests.get(url, headers=headers, timeout=15)
        page_content.raise_for_status()
        parsed_html = BeautifulSoup(page_content.content, 'html.parser')

        # --- Find price element (MUST BE CHECKED/UPDATED) ---
        price_container = parsed_html.find("div", {"class": "priceView-hero-price priceView-customer-price"})
        if price_container:
            price_span = price_container.find("span", {"aria-hidden": "true"})
            if price_span:
                price_text = price_span.get_text()
                price_numeric = float(price_text.replace("$", "").replace(",", ""))
                return price_numeric
        print("Could not find the price element. Best Buy's layout may have changed.")
    except Exception as e:
        print(f"Error fetching or parsing price: {e}")
    return None  # Return None if any step fails


def log_price_data(price, url):
    if price is None:
        return
    timestamp = datetime.now()
    new_data = {'Timestamp': [timestamp], 'Price_USD': [price], 'Product_URL': [url]}
    df = pd.DataFrame(new_data)
    try:
        header = not os.path.exists(LOG_FILE)
        df.to_csv(LOG_FILE, mode='a', header=header, index=False)
    except Exception as e:
        print(f"Error writing to CSV: {e}")


def send_price_alert(current_price, url, alert_threshold):
    if current_price is None or current_price > alert_threshold or not SENDER_EMAIL or not SENDER_PASSWORD or not RECIPIENT_EMAIL:
        return  # No alert needed or config missing

    subject = "Price Alert: Item Below Threshold!"
    body = f"Price Drop!\n\nThe item at {url}\nis now ${current_price:.2f} (Threshold: ${alert_threshold:.2f}).\n\nCheck it out!"
    message = f"Subject: {subject}\n\n{body}"
    context = ssl.create_default_context()

    try:
        with smtplib.SMTP_SSL("smtp.gmail.com", 465, context=context) as server:
            server.login(SENDER_EMAIL, SENDER_PASSWORD)
            server.sendmail(SENDER_EMAIL, RECIPIENT_EMAIL, message)
            print(f"Email alert sent for price ${current_price:.2f}!")
    except Exception as e:
        print(f"Failed to send email alert: {e}")


def perform_price_check():
    print(f"[{datetime.now()}] Running scheduled price check...")
    current_price = fetch_product_price(TARGET_URL)
    if current_price is not None:
        log_price_data(current_price, TARGET_URL)
        send_price_alert(current_price, TARGET_URL, ALERT_THRESHOLD)
        print(f"Check complete. Price: ${current_price:.2f}")
    else:
        print("Price check failed.")


# --- Scheduling ---
schedule.every().day.at("10:30").do(perform_price_check)
# schedule.every(1).hour.do(perform_price_check) # Alternative: hourly check

print("Price tracker starting...")
print(f"Tracking: {TARGET_URL}")
print(f"Alert Threshold: ${ALERT_THRESHOLD:.2f}")
print(f"Logging to: {LOG_FILE}")
print("Scheduled daily check at: 10:30")

# Optional: Run check once immediately on start
# perform_price_check()

# --- Main Loop ---
while True:
    schedule.run_pending()
    time.sleep(60)

To fully test this, you'll need to set up the Gmail account and app password, fill in your details (securely!), and either wait for the scheduled time or temporarily call perform_price_check() directly after the scheduling setup to trigger an immediate run.

Wrapping Up

And there you have it – the foundation for your own Best Buy price tracker built with Python! To monitor multiple items, you could modify the script to loop through a list of URLs, perhaps storing different alert thresholds for each. Remember that websites actively try to prevent scraping, so you might need to adjust selectors, implement delays (using `time.sleep()`), and employ robust proxy solutions like those from Evomi to ensure your tracker runs reliably over the long term, especially if you increase the checking frequency.
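The multi-product idea above can be sketched like this. The URLs are placeholders, and the helper signatures follow the functions defined earlier in the article:

```python
import time

# Placeholder URLs -- substitute real Best Buy product pages and your thresholds
PRODUCTS = {
    'https://www.bestbuy.com/site/example-laptop/1111111.p': 1499.99,
    'https://www.bestbuy.com/site/example-gpu/2222222.p': 499.99,
}

def check_all_products(fetch, log, alert, delay=5):
    """Run one fetch/log/alert cycle per product, pausing between requests."""
    for url, threshold in PRODUCTS.items():
        price = fetch(url)
        if price is not None:
            log(price, url)
            alert(price, url, threshold)
        time.sleep(delay)  # be polite; avoid hammering the site

# In the scheduler, replace perform_price_check with something like:
# schedule.every().day.at("10:30").do(
#     check_all_products, fetch_product_price, log_price_data, send_price_alert
# )
```

Passing the fetch/log/alert functions as arguments keeps the loop easy to test with stubs before pointing it at live pages.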



Running a test print like shown helps confirm if your scraper is correctly identifying and extracting the price.

Now, a key consideration: Best Buy, like many retailers, often shows different content or blocks access based on geographic location. If you're running this script from outside the US, you might get blocked or see irrelevant information. This is where proxies become essential. Using a proxy server routes your request through a different IP address, making it appear as if you're browsing from the proxy's location (e.g., the US).

For reliable access, especially for frequent checks, consider using high-quality proxies. Evomi offers ethically sourced residential proxies that are ideal for mimicking real user traffic and bypassing geo-restrictions. They look like genuine ISP-provided IPs, reducing the chance of blocks. We even offer a free trial if you'd like to test them out.

Integrating a proxy with `requests` is straightforward. You'd typically define your proxy details and pass them to the `requests.get` function:

# Example using Evomi residential proxy authentication
proxy_user = 'your_evomi_username'
proxy_pass = 'your_evomi_password'
proxy_host = 'rp.evomi.com'
proxy_port_http = 1000
proxy_port_https = 1001

proxies = {
    'http': f'http://{proxy_user}:{proxy_pass}@{proxy_host}:{proxy_port_http}',
    'https': f'http://{proxy_user}:{proxy_pass}@{proxy_host}:{proxy_port_https}'  # Note: requests uses http auth even for https proxies here
}

# Modify the get request in the function like this:
# page_content = requests.get(url, headers=headers, proxies=proxies, timeout=20)  # Increased timeout often needed with proxies

Logging the Price Data

Right now, the price is just fetched and maybe printed. We need to store this data persistently to track changes over time. The pandas library is excellent for this. Let's create a function to append the fetched price, timestamp, and URL to a CSV file:

import pandas as pd
from datetime import datetime
import os # (Make sure 'import os' is at the top of your script with other imports)

LOG_FILE = 'bestbuy_price_log.csv'

def log_price_data(price, url):
    if price is None: # Don't log if fetching failed
        return
    timestamp = datetime.now()
    new_data = {
        'Timestamp': [timestamp],
        'Price_USD': [price],
        'Product_URL': [url]
    }
    df = pd.DataFrame(new_data)
    try:
        # Append to CSV without writing header if file exists, include header otherwise
        header = False if os.path.exists(LOG_FILE) else True
        df.to_csv(LOG_FILE, mode='a', header=header, index=False)
        # print(f"Data logged: {timestamp}, ${price}, {url}") # Optional: uncomment for verbose logging
    except Exception as e:
        print(f"Error writing to CSV {LOG_FILE}: {e}")

A quick structural note: While we're showing code snippets step-by-step, remember to place all your import statements at the very top of your Python file, followed by function definitions, and then the main script logic. Don't intersperse imports and function definitions throughout the file as shown here for explanation.

This log_price_data function takes the price (retrieved by fetch_product_price()) and the URL. It records the current date and time, bundles everything into a pandas DataFrame, and appends it as a new row to our `bestbuy_price_log.csv` file. The `mode='a'` ensures we add to the file rather than overwrite it, and `header=False` (after the first write) prevents writing the column names repeatedly.

Your script structure, combining the fetching and logging, might look roughly like this now:

import requests
from bs4 import BeautifulSoup
import pandas as pd
from datetime import datetime
import time
import os # Added for file checking

TARGET_URL = "https://www.bestbuy.com/site/asus-rog-zephyrus-g14-14-165hz-gaming-laptop-qhd-amd-ryzen-9-with-16gb-memory-nvidia-geforce-rtx-4060-512gb-ssd-moonlight-white/6535495.p?skuId=6535495"
LOG_FILE = 'bestbuy_price_log.csv'

# --- Function: fetch_product_price(url) ---
# (Insert the fetch_product_price function code from above here)
# --- End Function: fetch_product_price ---

# --- Function: log_price_data(price, url) ---
# (Insert the log_price_data function code from above here)
# --- End Function: log_price_data ---

# --- Main execution part ---
product_price = fetch_product_price(TARGET_URL)

if product_price is not None:
    log_price_data(product_price, TARGET_URL)
    print(f"Successfully fetched and logged price: ${product_price}")
else:
    print("Could not fetch or log price.")

Adding Email Notifications

The real power comes from alerts. Let's set up email notifications for when the price drops below a certain point. For this, we'll use Gmail's SMTP server. You'll need a Gmail account (creating a dedicated one for alerts is recommended). Enable 2-Factor Authentication (2FA) on this account, then generate an "App Password" specifically for this script via your Google Account security settings (https://myaccount.google.com/apppasswords). Do not use your main Gmail password in the script!

import smtplib
import ssl  # For secure connection

# --- IMPORTANT SECURITY NOTE ---
# NEVER hardcode credentials directly in your script like below.
# Use environment variables (e.g., via python-dotenv library) or a secure config file.
SENDER_EMAIL = "your.alert.sender@gmail.com"  # Your dedicated Gmail address
SENDER_PASSWORD = "YOUR_GENERATED_APP_PASSWORD"  # The App Password you created
RECIPIENT_EMAIL = "your.personal.email@example.com"  # Where alerts should go

def send_price_alert(current_price, url, alert_threshold):
    if current_price is None or current_price > alert_threshold:
        return  # No alert needed

    subject = "Price Alert: Item Below Threshold!"
    body = f"Good news!\n\nThe price for the item at {url}\nhas dropped to ${current_price:.2f} (Your threshold was ${alert_threshold:.2f}).\n\nCheck it out!"
    message = f"Subject: {subject}\n\n{body}"

    context = ssl.create_default_context()  # Create a secure SSL context
    try:
        print(f"Attempting to send email alert for price ${current_price:.2f}...")
        with smtplib.SMTP_SSL("smtp.gmail.com", 465, context=context) as server:  # Use SMTP_SSL for port 465
            server.login(SENDER_EMAIL, SENDER_PASSWORD)
            server.sendmail(SENDER_EMAIL, RECIPIENT_EMAIL, message)
            print(f"Email alert sent successfully to {RECIPIENT_EMAIL}!")
    except smtplib.SMTPAuthenticationError:
        print("SMTP Authentication Error: Check sender email/app password.")
    except Exception as e:
        print(f"Failed to send email: {e}")

# Remember to add 'import ssl' at the top of your file.

This function checks if the fetched price is below your desired `alert_threshold`. If it is, it connects to Gmail's secure SMTP server (using `SMTP_SSL` and port 465), logs in with your app password, and sends the alert email. Again, please heed the security warning: use a library like python-dotenv to load your email credentials from a separate .env file instead of writing them directly in the code, especially if you share or version control your script. We show them here only for clarity in the example.

Now we need to run this check automatically.

Automating the Checks with Scheduling

Manually running the script isn't practical for continuous monitoring. We need it to execute automatically at regular intervals. The schedule library makes this easy:

import schedule
import time
from datetime import datetime # Assuming datetime is used based on the print statement

# --- Define constants and previous functions here ---
# TARGET_URL = "..."
# LOG_FILE = "..."
# SENDER_EMAIL = "..." (loaded securely!)
# SENDER_PASSWORD = "..." (loaded securely!)
# RECIPIENT_EMAIL = "..."
# ALERT_THRESHOLD = 1499.99 # Set your desired price alert level for the example product

# fetch_product_price(url) definition...
# log_price_data(price, url) definition...
# send_price_alert(price, url, threshold) definition...

# --- End definitions ---

def perform_price_check():
    print(f"Running scheduled price check at {datetime.now()}...")
    # Assuming fetch_product_price is defined elsewhere and returns a float or None
    current_price = fetch_product_price(TARGET_URL)

    if current_price is not None:
        # Assuming log_price_data and send_price_alert are defined elsewhere
        log_price_data(current_price, TARGET_URL)
        send_price_alert(current_price, TARGET_URL, ALERT_THRESHOLD)
        print(f"Check complete. Current price: ${current_price:.2f}")
    else:
        print("Price check failed (fetch returned None).")

# Schedule the job: Run once daily at 10:30 AM
schedule.every().day.at("10:30").do(perform_price_check)
# Other options: .hour, .minute.at(":15"), .monday .at("..."), etc.
# schedule.every(6).hours.do(perform_price_check) # Example: run every 6 hours

print("Scheduler started. Waiting for the scheduled job...")

# Initial check immediately upon starting (optional)
# perform_price_check()

# Keep the script running to allow the scheduler to work
while True:
    schedule.run_pending()
    time.sleep(60) # Check every 60 seconds if a scheduled job is due

We bundle the fetching, logging, and alerting logic into a single function, perform_price_check. Then, using schedule, we tell the script to run this function every day at a specific time (e.g., "10:30"). The `while True` loop at the end is essential; it keeps the script alive, checking every minute if it's time to run any scheduled tasks.

Your complete script should now look something like this (remembering to organize imports and functions properly and handle credentials securely):

import os
import smtplib
import ssl
import time
from datetime import datetime

import pandas as pd
import requests
import schedule
from bs4 import BeautifulSoup

# Consider importing dotenv for secure credential loading
# from dotenv import load_dotenv #
# load_dotenv() # Load variables from .env file

# --- Configuration ---
TARGET_URL = "https://www.bestbuy.com/site/asus-rog-zephyrus-g14-14-165hz-gaming-laptop-qhd-amd-ryzen-9-with-16gb-memory-nvidia-geforce-rtx-4060-512gb-ssd-moonlight-white/6535495.p?skuId=6535495"
LOG_FILE = 'bestbuy_price_log.csv'
ALERT_THRESHOLD = 1499.99  # Desired price for alert

# Securely load credentials (Example using environment variables)
# SENDER_EMAIL = os.getenv("GMAIL_SENDER")
# SENDER_PASSWORD = os.getenv("GMAIL_APP_PASSWORD")
# RECIPIENT_EMAIL = os.getenv("ALERT_RECIPIENT")

# Replace with your actual secure loading method or hardcoded values (NOT recommended for production)
SENDER_EMAIL = "your.alert.sender@gmail.com"
SENDER_PASSWORD = "YOUR_GENERATED_APP_PASSWORD"
RECIPIENT_EMAIL = "your.personal.email@example.com"


# --- Function Definitions ---

def fetch_product_price(url):
    headers = {
        "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.0.0 Safari/537.36"
    }
    # Add proxy settings here if needed:
    # proxies = { ... }
    try:
        # Add 'proxies=proxies' if using them
        page_content = requests.get(url, headers=headers, timeout=15)
        page_content.raise_for_status()
        parsed_html = BeautifulSoup(page_content.content, 'html.parser')

        # --- Find price element (MUST BE CHECKED/UPDATED) ---
        price_container = parsed_html.find("div", {"class": "priceView-hero-price priceView-customer-price"})
        if price_container:
            price_span = price_container.find("span", {"aria-hidden": "true"})
            if price_span:
                price_text = price_span.get_text()
                price_numeric = float(price_text.replace("$", "").replace(",", ""))
                return price_numeric
    # If the expected elements were not found above, execution falls through
    # to the final return and the function yields None
    except Exception as e:
        print(f"Error fetching or parsing price: {e}")
    return None  # Return None if any step fails


def log_price_data(price, url):
    if price is None:
        return
    timestamp = datetime.now()
    new_data = {'Timestamp': [timestamp], 'Price_USD': [price], 'Product_URL': [url]}
    df = pd.DataFrame(new_data)
    try:
        header = not os.path.exists(LOG_FILE)
        df.to_csv(LOG_FILE, mode='a', header=header, index=False)
    except Exception as e:
        print(f"Error writing to CSV: {e}")


def send_price_alert(current_price, url, alert_threshold):
    if current_price is None or current_price > alert_threshold or not SENDER_EMAIL or not SENDER_PASSWORD or not RECIPIENT_EMAIL:
        return  # No alert needed or config missing

    subject = "Price Alert: Item Below Threshold!"
    body = f"Price Drop!\n\nThe item at {url}\nis now ${current_price:.2f} (Threshold: ${alert_threshold:.2f}).\n\nCheck it out!"
    message = f"Subject: {subject}\n\n{body}"
    context = ssl.create_default_context()

    try:
        with smtplib.SMTP_SSL("smtp.gmail.com", 465, context=context) as server:
            server.login(SENDER_EMAIL, SENDER_PASSWORD)
            server.sendmail(SENDER_EMAIL, RECIPIENT_EMAIL, message)
            print(f"Email alert sent for price ${current_price:.2f}!")
    except Exception as e:
        print(f"Failed to send email alert: {e}")


def perform_price_check():
    print(f"[{datetime.now()}] Running scheduled price check...")
    current_price = fetch_product_price(TARGET_URL)
    if current_price is not None:
        log_price_data(current_price, TARGET_URL)
        send_price_alert(current_price, TARGET_URL, ALERT_THRESHOLD)
        print(f"Check complete. Price: ${current_price:.2f}")
    else:
        print("Price check failed.")


# --- Scheduling ---
schedule.every().day.at("10:30").do(perform_price_check)
# schedule.every(1).hour.do(perform_price_check) # Alternative: hourly check

print("Price tracker starting...")
print(f"Tracking: {TARGET_URL}")
print(f"Alert Threshold: ${ALERT_THRESHOLD:.2f}")
print(f"Logging to: {LOG_FILE}")
print(f"Scheduled daily check at: 10:30")

# Optional: Run check once immediately on start
# perform_price_check()

# --- Main Loop ---
while True:
    schedule.run_pending()
    time.sleep(60)

To fully test this, you'll need to set up the Gmail account and app password, fill in your details (securely!), and either wait for the scheduled time or temporarily call perform_price_check() directly after the scheduling setup to trigger an immediate run.
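For the "securely" part, one minimal sketch is to keep credentials out of the script entirely and pull them from environment variables. This assumes the optional python-dotenv package and a `.env` file beside the script; neither is required by the listing above, and the variable names here simply match the commented-out examples in it:

```python
import os

# Example .env file contents (never commit this file to version control):
# GMAIL_SENDER=your.alert.sender@gmail.com
# GMAIL_APP_PASSWORD=your-generated-app-password
# ALERT_RECIPIENT=your.personal.email@example.com

try:
    from dotenv import load_dotenv
    load_dotenv()  # populates os.environ from a .env file if one exists
except ImportError:
    pass  # python-dotenv not installed; rely on the existing environment

SENDER_EMAIL = os.getenv("GMAIL_SENDER")
SENDER_PASSWORD = os.getenv("GMAIL_APP_PASSWORD")
RECIPIENT_EMAIL = os.getenv("ALERT_RECIPIENT")

if not all([SENDER_EMAIL, SENDER_PASSWORD, RECIPIENT_EMAIL]):
    print("Warning: email credentials missing; alerts will be skipped.")
```

With this in place, the three hardcoded assignments in the main script can simply be deleted, and `send_price_alert()` already skips sending when any credential is missing.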

Wrapping Up

And there you have it – the foundation for your own Best Buy price tracker built with Python! To monitor multiple items, you could modify the script to loop through a list of URLs, perhaps storing different alert thresholds for each. Remember that websites actively try to prevent scraping, so you might need to adjust selectors, implement delays (using `time.sleep()`), and employ robust proxy solutions like those from Evomi to ensure your tracker runs reliably over the long term, especially if you increase the checking frequency.
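The multi-product idea can be sketched like this. Note that `PRODUCTS`, `check_all_products`, and the injected `fetch_price` callable are all hypothetical names, not part of the script above; in a real run you would pass in the `fetch_product_price()` function already defined, and the demonstration below uses a stand-in dictionary so it needs no network access:

```python
# Hypothetical extension: track several products, each with its own threshold.
PRODUCTS = [
    {"url": "https://www.bestbuy.com/site/example-laptop/1111111.p", "threshold": 1499.99},
    {"url": "https://www.bestbuy.com/site/example-gpu/2222222.p", "threshold": 499.99},
]

def check_all_products(fetch_price):
    """Check every tracked product; return (url, price) pairs at or below threshold.

    fetch_price is passed in so this loop can reuse the script's
    fetch_product_price() unchanged, or a stub during testing.
    """
    triggered = []
    for product in PRODUCTS:
        price = fetch_price(product["url"])
        if price is None:
            continue  # scrape failed for this item; move on to the next one
        if price <= product["threshold"]:
            triggered.append((product["url"], price))
    return triggered

# Demonstration with a stand-in price source instead of live scraping:
fake_prices = {
    PRODUCTS[0]["url"]: 1450.00,  # below its 1499.99 threshold -> alert
    PRODUCTS[1]["url"]: 529.99,   # above its 499.99 threshold -> no alert
}
hits = check_all_products(lambda url: fake_prices.get(url))
print(hits)  # [('https://www.bestbuy.com/site/example-laptop/1111111.p', 1450.0)]
```

To wire this in, `perform_price_check()` would iterate the returned pairs and call `log_price_data()` and `send_price_alert()` for each, with a short `time.sleep()` between requests to stay polite.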

Author

David Foster

Proxy & Network Security Analyst

About Author

David is an expert in network security, web scraping, and proxy technologies, helping businesses optimize data extraction while maintaining privacy and efficiency. With a deep understanding of residential, datacenter, and rotating proxies, he explores how proxies enhance cybersecurity, bypass geo-restrictions, and power large-scale web scraping. David’s insights help businesses and developers choose the right proxy solutions for SEO monitoring, competitive intelligence, and anonymous browsing.
