@MichelleDalalJian
Last active November 18, 2023 00:44
Scraping Numbers from HTML using BeautifulSoup. The program uses urllib to read the HTML from the data file below, parses the data, extracts the numbers, and computes their sum.
#Actual data: http://py4e-data.dr-chuck.net/comments_24964.html (Sum ends with 73)
from urllib import request
from bs4 import BeautifulSoup

html = request.urlopen('http://python-data.dr-chuck.net/comments_24964.html').read()
soup = BeautifulSoup(html, 'html.parser')  # name a parser explicitly to avoid the warning
tags = soup('span')
total = 0  # avoid shadowing the built-in sum()
for tag in tags:
    total = total + int(tag.contents[0])
print(total)
@elsa-huang0415

Scraping Numbers from HTML using BeautifulSoup. In this assignment you will write a Python program similar to http://www.py4e.com/code3/urllink2.py. The program will use urllib to read the HTML from the data files below, parse the data, extract the numbers, and compute their sum.

We provide two files for this assignment. One is a sample file where we give you the sum for your testing and the other is the actual data you need to process for the assignment.

Sample data: http://py4e-data.dr-chuck.net/comments_42.html (Sum=2553)
Actual data: http://py4e-data.dr-chuck.net/comments_1688329.html (Sum ends with 69)
You do not need to save these files to your folder, since your program will read the data directly from the URL. Note: each student has a distinct data URL for the assignment, so only use your own data URL for analysis.
Data Format
The file is a table of names and comment counts. You can ignore most of the data in the file except for lines like the following:

<tr><td>Modu</td><td><span class="comments">90</span></td></tr>
<tr><td>Kenzie</td><td><span class="comments">88</span></td></tr>
<tr><td>Hubert</td><td><span class="comments">87</span></td></tr>

You are to find all the <span> tags in the file, pull out the numbers from each tag, and sum the numbers. Look at the [sample code](https://www.py4e.com/code3/urllink2.py?PHPSESSID=2c24afed08670e9b1eb532ed49fcbf90) provided. It shows how to find all of a certain kind of tag, loop through the tags, and extract the various aspects of the tags.

...

Retrieve all of the anchor tags:

tags = soup('a')
for tag in tags:
    # Look at the parts of a tag
    print('TAG:', tag)
    print('URL:', tag.get('href', None))
    print('Contents:', tag.contents[0])
    print('Attrs:', tag.attrs)
You need to adjust this code to look for span tags, pull out the text content of each span tag, convert it to an integer, and add the integers up to complete the assignment.
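As a minimal offline sketch of that adjustment (the HTML string below is a made-up stand-in for the real assignment page, so the values are hypothetical), finding the span tags and summing their contents looks like:

```python
from bs4 import BeautifulSoup

# Hypothetical sample rows standing in for the assignment page
html = ("<tr><td>Modu</td><td><span class='comments'>90</span></td></tr>"
        "<tr><td>Kenzie</td><td><span class='comments'>88</span></td></tr>"
        "<tr><td>Hubert</td><td><span class='comments'>87</span></td></tr>")

soup = BeautifulSoup(html, 'html.parser')
tags = soup('span')  # every <span> tag in the document
total = sum(int(tag.contents[0]) for tag in tags)
print(total)  # 90 + 88 + 87 = 265
```

The only change from the anchor-tag sample is the tag name passed to the soup and converting each tag's text to an integer.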
Sample Execution

$ python3 solution.py
Enter - http://py4e-data.dr-chuck.net/comments_42.html
Count 50
Sum 2...
Turning in the Assignment

Enter the sum from the actual data and your Python code below:
Sum:
(ends with 69)

@elsa-huang0415

Can anyone solve this problem for me? I am so confused.

@Andrei-Ist

I work with PyCharm using Python 3.11 and encountered a similar issue after installing bs4.
I implemented @Nikowos's solution and it works. Thanks!

@mzaheerulislam

mzaheerulislam commented Jan 30, 2023

Here is how you guys can solve this; working code below 👍
README: copy the actual data URL, run the file from the cmd/terminal, and then paste the URL into the terminal/CMD, like so:

#! /bin/python3
from urllib.request import urlopen
from bs4 import BeautifulSoup
import ssl

# Ignore SSL certificate errors
ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE

# Leave the prompt empty for now; paste the URL after running the file in cmd or terminal.
url = input("")
html = urlopen(url, context=ctx).read()
soup = BeautifulSoup(html, "html.parser")

spans = soup('span')
numbers = []

for span in spans:
    numbers.append(int(span.string))

print(sum(numbers))
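A quick aside on `span.string` (used here) versus `tag.contents[0]` (used in the other solutions in this thread): for a span whose only child is text, the two agree, which is easy to confirm in isolation:

```python
from bs4 import BeautifulSoup

# A single text-only span, like the comment counts on the assignment page
span = BeautifulSoup("<span class='comments'>97</span>", "html.parser").find("span")

# .string is the tag's sole text child; .contents[0] is its first child node.
# For a text-only span both give the string "97".
print(span.string, span.contents[0])
```

The difference matters only for tags with nested children, where `.string` can be None; the assignment's spans contain plain text, so either works.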


@TheMicroTecHub

[screenshot of the error]

@TheMicroTecHub

This is the error I am getting. Can anybody help?

@mzaheerulislam

mzaheerulislam commented Feb 6, 2023 via email

@FelipeVidalV

Notes Regarding the Use of BeautifulSoup
The sample code for this course and textbook examples use BeautifulSoup to parse HTML.

Using BeautifulSoup 4 with Python 3.10 or Python 3.11

Instructions for Windows 10:

  1. pip install beautifulsoup4 (run this command)

  2. if the bs4.zip file was downloaded, delete it

Instructions for MacOS:

  1. pip3 install beautifulsoup4 (run this command)

  2. if the bs4.zip file was downloaded or you have a bs4 folder, delete it

Using BeautifulSoup 3 (only for Python 3.8 or Python 3.9)

If you want to use our samples "as is", download our Python 3 version of BeautifulSoup 3 from

http://www.py4e.com/code3/bs4.zip

You must unzip this into a "bs4" folder and have that folder as a sub-folder of the folder where you put our sample code like:

http://www.py4e.com/code3/urllinks.py
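One way to sanity-check the BeautifulSoup 4 install (assuming `pip` and `python3` are on your PATH; use `pip3` on macOS as noted above) is to install and then print where Python imports bs4 from, which should be your site-packages, not a leftover bs4/ folder next to your script:

```shell
# Install BeautifulSoup 4, then confirm it imports cleanly
pip install beautifulsoup4
python3 -c "import bs4; print(bs4.__file__)"
```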

@Vicwambua

Hello, I tried this for the same question:

# Scraping Numbers from HTML using BeautifulSoup

from urllib.request import urlopen
from bs4 import BeautifulSoup
import ssl
import re

# Ignore SSL certificate errors
ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE

url = input('Enter - ')
html = urlopen(url, context=ctx).read()
soup = BeautifulSoup(html, "html.parser")

# Retrieve all of the span tags
counts = dict()
my_list = list()
tags = soup('span')
for tag in tags:
    # Look at the parts of a tag
    num = str(tag)
    number = re.findall('[0-9]+', num)
    if len(number) != 1:
        continue
    for digits in number:
        integer = int(digits)
        my_list.append(integer)
        counts[integer] = counts.get(integer, 0) + 1
print('Count', len(my_list))
print('Sum', sum(my_list))

@Mzainabdin

For Windows users: follow the instructions given by the instructor in the discussion forum; then even the top code above will work for you.
https://www.coursera.org/learn/python-network-data/discussions/forums/G0TMJ6G0EeqqMhL7huUnrQ/threads/Fi07MzG0EeymZRIVts3h3w

@AreebaYousuf

import urllib.request
from bs4 import BeautifulSoup

# Prompt user for URL
url = input('Enter URL: ')

# Read HTML from URL
html = urllib.request.urlopen(url).read()

# Parse the HTML using BeautifulSoup
soup = BeautifulSoup(html, 'html.parser')

# Find all span tags
tags = soup('span')

# Sum up the numbers (use a name other than the built-in sum)
total = 0
for tag in tags:
    total += int(tag.contents[0])

# Print the sum
print(total)

@alghamdiim

Being new to Python, I have figured out a way to retrieve the right answer. Honestly, I'm not sure if there is a better way, but it was easier than I thought. Hope it helps you guys out with parsing.

Thank you so much!!!

@ShuckZ77

ShuckZ77 commented May 7, 2023

import urllib.request, urllib.parse, urllib.error
from bs4 import BeautifulSoup
import ssl
import re

# Ignore SSL certificate errors
ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE

url = input('ENTER URL:')  # http://py4e-data.dr-chuck.net/comments_1692181.html

fhand = urllib.request.urlopen(url, context=ctx).read()

soup = BeautifulSoup(fhand, 'html.parser')

#print(soup)

# Retrieve all of the span tags
tags = soup('span')

lst = list()

for tag in tags:
    tag = str(tag)
    #print(tag)
    tag2 = re.findall('[0-9]+', tag)
    tag3 = int(tag2[0])
    lst.append(tag3)

#print(lst)

total = sum(lst)

print(total)

@Jackyandsky

For Windows users: follow the instructions given by the instructor in the discussion forum; then even the top code above will work for you. https://www.coursera.org/learn/python-network-data/discussions/forums/G0TMJ6G0EeqqMhL7huUnrQ/threads/Fi07MzG0EeymZRIVts3h3w
Thank you for your help. It works.

@Mk-Hamzaoui

[screenshot of the error]
What is the problem?

@Mk-Hamzaoui

I tried it in VS Code and there was a traceback error; now in Jupyter the result is still 0.

@Albedo100

Uninstall the zip file and the extracted bs4 folder, then install it from your command prompt by typing:
pip install beautifulsoup4

@katsuspec

[screenshots of the error]
Help me, please.
Sum: ???
Code: ???
