Really short intro to scraping with Beautiful Soup and Requests

Web Scraping Workshop

Using Requests and Beautiful Soup, with the most recent Beautiful Soup 4 docs.

Getting Started

Install our tools (preferably in a new virtualenv):

pip install beautifulsoup4
pip install requests
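The virtualenv setup mentioned above can be sketched with the stdlib venv module (the environment name `scraping-env` is just an example):

```shell
# Create an isolated environment (the name "scraping-env" is arbitrary),
# activate it, then install the two libraries into it.
python3 -m venv scraping-env
. scraping-env/bin/activate
pip install beautifulsoup4 requests
```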

Start Scraping!

Let's grab the Free Book Samplers from O'Reilly:

>>> import requests
>>> result = requests.get("")

Make sure we got a result.

>>> result.status_code
>>> result.headers
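To see what those checks mean without hitting a live URL, you can build `Response` objects by hand (a minimal sketch; hand-constructing a `Response` is purely for illustration — real ones come back from `requests.get`):

```python
import requests

# Build Response objects by hand to illustrate the status checks
# (normally these come back from requests.get).
ok_response = requests.Response()
ok_response.status_code = 200

bad_response = requests.Response()
bad_response.status_code = 404

print(ok_response.ok)   # True for any status code below 400
print(bad_response.ok)  # False

# raise_for_status() raises HTTPError for 4xx/5xx responses.
try:
    bad_response.raise_for_status()
except requests.HTTPError as e:
    print("request failed:", e)
```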

Store your content in an easy-to-type variable!

>>> c = result.content

Start parsing with Beautiful Soup. NOTE: If you installed with pip, you'll need to import from bs4. If you downloaded the source, you'll need to import from BeautifulSoup (which is what they do in the online docs).

>>> from bs4 import BeautifulSoup
>>> soup = BeautifulSoup(c, "html.parser")
>>> samples = soup.find_all("a", "item-title")
>>> samples[0]
<a class="item-title" href="">Programming Perl</a>

Now, pick apart individual links.

>>> data = {}
>>> for a in samples:
...     title = a.string.strip()
...     data[title] = a.attrs['href']

Check out the keys/values in the data dict. Rejoice!
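The same `find_all` / `attrs` pattern can be exercised offline against a small hand-written snippet (the markup below is invented to mirror the sampler page's structure):

```python
from bs4 import BeautifulSoup

# Invented markup mimicking the sampler page's "item-title" links.
html = """
<ul>
  <li><a class="item-title" href="/catalog/perl">Programming Perl</a></li>
  <li><a class="item-title" href="/catalog/python">Learning Python</a></li>
</ul>
"""

soup = BeautifulSoup(html, "html.parser")

data = {}
for a in soup.find_all("a", "item-title"):
    # a.string is the link text; attrs['href'] is the link target
    data[a.string.strip()] = a.attrs['href']

print(data)
```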

Now go scrape some stuff!

fedale commented Apr 9, 2013

Very nice introduction, thanks!

qhuang872 commented Dec 14, 2016

nice short intro

whitecat commented Dec 14, 2016

One question: what if the website loads some of its content dynamically after the initial request? How can I get the loaded content?

For example, request ""
and use lines = soup.findAll("span", { "class" : "num text-emphasized" })
The problem is that the contributor count shows "fetching contributors".

jasminecjc commented Apr 6, 2017

Good job! It helps me

redfast00 commented May 14, 2017

For some reason, parsing result.content with BeautifulSoup is way slower than parsing result.text. Any idea why?

trey commented Jun 24, 2017

Thank you, that helped me!

ebartan commented Jul 7, 2017

Thanks for sharing, a very useful start for Soup.

danhamill commented Jul 27, 2017


Mutungi commented Aug 10, 2017

Great introduction thanks

michaelfangyao commented Sep 18, 2017

Good job bro

Renzo1 commented Oct 30, 2017

Please, what is the function of attrs[] in the last line of the code above?

kevinprakasa commented Nov 9, 2017

thanks dude,

hMutzner commented Dec 15, 2017

Very good introduction. Thank you.
The sample site does not exist any more.

saif017 commented Jun 8, 2018

Thanks, big bro.

LeeJobs4Med commented Oct 24, 2018

The link is down. You can see a previous version at .

davidxbuck commented Nov 24, 2018

Thanks. It works as intended if you change to the current sampler page:

hsheikha1429 commented Feb 10, 2020

Well explained.
Thank you.

Jogwums commented Mar 6, 2020

Thanks, nice!
I found a way to export the resulting dict to a CSV file.

```python
import requests
import csv
from bs4 import BeautifulSoup

results = requests.get("")

# check that the link is functional
print(results.status_code)

# view headers
print(results.headers)

c = results.content

# apply the web scraping library
soup = BeautifulSoup(c, "html.parser")

# check the html element and use it to point at the exact location
samples = soup.find_all("a", "item-title")

# loop over each match and store it in a dict
data = {}
for a in samples:
    title = a.string.strip()
    data[title] = a.attrs['href']

# export to csv
with open('books.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    for title, href in data.items():
        writer.writerow([title, href])
```
kannankumar commented May 6, 2020

great short intro with just the required pieces. 👍
