Master R Web Scraping: The 2025 Proxy-Friendly Guide





David Foster
Scraping Techniques
Diving Into Web Scraping with R: A 2025 Guide with Proxies in Mind
Web scraping is the art of writing automated scripts designed to visit numerous web pages and meticulously extract specific data. It’s a task tackled using various programming languages, each bringing its own set of advantages and quirks to the table.
R, traditionally the domain of statisticians and data analysts, offers a unique edge here. Since a huge part of web scraping involves interpreting and utilizing the harvested data, R's powerful data manipulation capabilities make it a compelling choice. While its web scraping community might not be as vast as some others, it's certainly robust enough for most data extraction endeavors.
Where R truly shines is its ecosystem of libraries built specifically for data analysis and visualization. These tools can be incredibly valuable once you've gathered your data and need to make sense of it.
R vs. Python: Choosing Your Web Scraping Toolkit
Python often gets the spotlight in the web scraping world, and for good reason:
Approachability: Python's syntax is generally considered clean and readable, making it less intimidating for newcomers compared to languages like C++. Its object-oriented nature also aids in structuring larger projects.
Community & Libraries: Python boasts a massive community. This translates into a wealth of web scraping libraries (like Scrapy, BeautifulSoup, Requests) specifically designed for efficient data extraction, plus countless examples and tutorials available online.
These factors often position Python as a go-to for many web scraping tasks.
However, R is far from a runner-up; in certain scenarios, it might even be the more logical pick. R offers capable web scraping packages, and while they might differ in approach from Python's giants, R's native strength lies in what comes after the scraping – data handling, statistical analysis, and visualization.
Ultimately, the choice often boils down to your project's primary goal. If the focus is purely on extracting data from complex sites, Python might offer a smoother path. But if the project heavily involves subsequent data analysis and manipulation, R's integrated environment could streamline your workflow significantly.
Let's look at two popular libraries: BeautifulSoup (Python) and Rvest (R).
BeautifulSoup (Python): Highly flexible for parsing messy HTML and XML. It enjoys extensive documentation and active development. However, BeautifulSoup itself doesn't fetch web pages; it needs companions like Requests or browser automation tools (Selenium, Playwright) to retrieve the content first.
Rvest (R): Designed to work seamlessly within the R ecosystem (specifically the Tidyverse). It handles basic HTML fetching and parsing well, often allowing you to accomplish simple scraping tasks with less code compared to the Python combo. It might be less forgiving with extremely complex or poorly structured HTML than BeautifulSoup.
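To make the comparison concrete, here is a minimal Rvest sketch of a complete fetch-and-extract pipeline. The selector assumes the book-title markup on books.toscrape.com, the practice site used later in this guide:
library(rvest)
# Download a page and pull every book title in one short pipeline
book_titles <- read_html("https://books.toscrape.com/") %>%
  html_nodes("h3 a") %>%
  html_attr("title")
print(book_titles)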
In summary, Python often excels in large-scale, complex scraping projects involving dynamic websites and intricate extraction logic. R presents a strong case for projects where data analysis is paramount or for developers already comfortable within the R environment, especially when dealing with simpler, static sites.
Getting Started: Web Scraping with the Rvest Package
Before diving into Rvest, you'll need R itself and an IDE. RStudio is a popular and highly recommended choice, available for free from the Posit website.
Once R and RStudio are set up, launch RStudio. In the console pane (usually bottom-left), type the following command to install the Rvest package:
install.packages("rvest")
After the installation finishes, create a new R Script (File > New File > R Script).
First, load the Rvest library into your script's environment:
library(rvest)
Now, let's fetch the HTML content of a page. We'll use books.toscrape.com, a website designed for scraping practice:
# Define the target URL
target_url <- "https://books.toscrape.com/"
# Read the HTML content from the URL
web_content <- read_html(target_url)
Running this code fetches the page, but doesn't extract anything yet. We need to specify what we're looking for using CSS selectors. Let's grab all the links (a tags) and their corresponding URLs (href attributes):
# Extract links using CSS selectors and the pipe operator
all_links <- web_content %>%
html_nodes("a") %>%
html_attr("href")
# Print the extracted links
print(all_links)
This code snippet takes the downloaded web_content, finds all nodes matching the CSS selector "a" (anchor tags), and then extracts the value of the "href" attribute from those nodes. Notice the %>% operator? That's the "pipe" from the magrittr package (loaded automatically with rvest), which passes the result of one function forward as the first argument to the next function, making the code cleaner to read.
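If the pipe is new to you, here's what the same extraction looks like without it, using only the objects already defined above:
# Equivalent to the piped version: the calls are nested inside out
all_links_nested <- html_attr(html_nodes(web_content, "a"), "href")
identical(all_links, all_links_nested)  # TRUE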
To run the entire script in RStudio, you can select all lines (Ctrl+A or Cmd+A) and click "Run", or use the shortcut Ctrl+Shift+S to "Source" the script.
We can adapt this pattern easily. For example, to extract the text content of all top-level headings (h1 tags), you can follow the sketch below.
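A minimal sketch along those lines (the variable name is illustrative):
# Extract the text of every top-level heading on the page
headings <- web_content %>%
  html_nodes("h1") %>%
  html_text()
print(headings)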
Similarly, extracting image source URLs (the src attribute of img tags) follows the same pattern, as shown next.
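Again a minimal sketch, with an illustrative variable name:
# Extract the src attribute from every image tag
image_sources <- web_content %>%
  html_nodes("img") %>%
  html_attr("src")
print(image_sources)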
This gives you a basic toolkit for pulling structured data from static HTML pages using Rvest.
Scaling Up: Crawling Websites with Rcrawler
For scraping more than just a single page, Rcrawler is a powerful option. It's designed to crawl entire websites, following links automatically and extracting data based on your specifications. This means you can target a whole site rather than individual pages.
First, install the package:
install.packages("Rcrawler")
Then, load the library:
library(Rcrawler)
Now, a single command can initiate a crawl. Let's crawl our example site and extract all H1 headings and paragraph text. A sketch of the call follows.
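Something along these lines should work; the parameter names are Rcrawler's own, while the core and connection counts are illustrative values you can tune:
Rcrawler(
  Website = "https://books.toscrape.com/",
  ExtractXpathPat = c("//h1", "//p"),
  MaxDepth = 1,
  no_cores = 2,
  no_conn = 2
)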
Here, Rcrawler starts at the specified Website. The ExtractXpathPat argument takes a vector of XPath expressions – in this case, selecting all h1 and p elements anywhere on the pages it visits. We've added MaxDepth = 1 to prevent it from crawling too deeply for this example. The no_cores and no_conn arguments allow for parallel processing, speeding up the crawl. Using multiple connections effectively often requires robust proxy infrastructure to avoid overwhelming the target server or getting blocked, especially for larger crawls. Residential or mobile proxies from services like Evomi are ideal for this, rotating IPs automatically for each connection.
By default, Rcrawler saves the raw HTML of visited pages into a folder and compiles the extracted data (matching your XPath patterns) into a CSV file named INDEX.csv in your working directory. A great feature is its resilience; if you stop the process, the data collected up to that point is already saved in the CSV.
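Once the crawl finishes (or after you interrupt it), you can pull those results straight back into R for analysis. A minimal sketch, assuming INDEX.csv sits in your working directory as described above:
# Load the crawl output and take a quick look at its structure
crawl_results <- read.csv("INDEX.csv", stringsAsFactors = FALSE)
str(crawl_results)
head(crawl_results)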
Handling Modern Websites: Scraping JavaScript-Rendered Content
Many websites today rely heavily on JavaScript to load content after the initial HTML page is received. Simple fetchers like Rvest's read_html might miss this dynamic content because they don't execute JavaScript. To scrape these sites accurately, you need to simulate a real browser.
RSelenium is R's answer to Python's Selenium, allowing you to control a web browser programmatically. Install it first:
install.packages("RSelenium")
Using RSelenium involves starting a Selenium server and connecting a browser driver. Here's a basic setup using Chrome:
library(RSelenium)
# Start the Selenium server and Chrome driver
# This might prompt you to download the correct driver if needed
driver_instance <- rsDriver(
browser = "chrome",
port = 4445L,
chromever = "latest"
)
remote_driver <- driver_instance[["client"]]
# Note: Ensure you have Chrome installed
Now, let's navigate to our target site using the controlled browser:
# Navigate to the URL
target_site <- "https://quotes.toscrape.com/js/" # A site that uses JS to load content
remote_driver$navigate(target_site)
# Give the page a second or two to load JavaScript content
Sys.sleep(2)
# Get the page source *after* JavaScript execution
page_html <- remote_driver$getPageSource()[[1]]
# Close the browser and server when done
remote_driver$close()
driver_instance[["server"]]$stop()
The page_html variable now contains the full HTML source, including any content loaded by JavaScript. You can then parse this page_html using Rvest's read_html() function and proceed with extraction as shown earlier:
# Load rvest again if needed
library(rvest)
# Parse the JS-rendered HTML
rendered_content <- read_html(page_html)
# Now extract elements as before, e.g., quotes
quotes <- rendered_content %>%
html_nodes(".quote .text") %>%
html_text()
print(quotes)
Scraping dynamic sites often increases the chances of detection. Using proxies, especially high-quality residential proxies like those offered by Evomi, becomes crucial. They make your requests appear as if they're coming from genuine users in different locations, significantly reducing block rates. Evomi's network sources IPs ethically and provides robust options starting from $0.49/GB for residential proxies. You can even explore a free trial to test how proxies enhance your scraping success with tools like RSelenium.
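To make that concrete, here is one way to route a plain Rvest-style request through a proxy using the httr package. The hostname, port, and credentials below are placeholders rather than real Evomi endpoints; substitute the details from your own proxy dashboard:
library(httr)
library(rvest)
# Placeholder proxy details -- replace with your provider's endpoint and credentials
proxy_host <- "proxy.example.com"
proxy_port <- 8080
proxy_user <- "your_username"
proxy_pass <- "your_password"
# Fetch the page through the proxy, then hand the HTML to rvest as usual
response <- GET(
  "https://books.toscrape.com/",
  use_proxy(proxy_host, proxy_port, username = proxy_user, password = proxy_pass)
)
proxied_content <- read_html(content(response, as = "text"))
For RSelenium itself, the proxy is usually configured through the browser rather than in R code, for example by passing Chrome's --proxy-server argument via the driver's extra capabilities.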
Final Thoughts on R for Web Scraping
R offers a powerful and integrated environment for web scraping, particularly when data analysis and visualization are key components of your project. With packages like Rvest for basic scraping, Rcrawler for site-wide crawling, and RSelenium for handling JavaScript-heavy sites, you have a capable toolkit at your disposal.
Remember that successful and responsible web scraping, especially at scale, often involves navigating challenges like IP blocks and dynamic content. Leveraging reliable proxy services, such as Evomi's diverse and ethically sourced residential, mobile, datacenter, and ISP proxies, can be essential for maintaining access and ensuring your scraping projects run smoothly. Combining R's analytical power with robust infrastructure sets you up for effective data acquisition.

Author
David Foster
Proxy & Network Security Analyst
About Author
David is an expert in network security, web scraping, and proxy technologies, helping businesses optimize data extraction while maintaining privacy and efficiency. With a deep understanding of residential, datacenter, and rotating proxies, he explores how proxies enhance cybersecurity, bypass geo-restrictions, and power large-scale web scraping. David’s insights help businesses and developers choose the right proxy solutions for SEO monitoring, competitive intelligence, and anonymous browsing.