Create Scraper class
Initialize with base_url, experience_name, max_pages
Define method extract_data
Takes an idea HTML element
Uses CSS selectors to extract required information from the element
Returns a dictionary of the extracted data
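A minimal sketch of extract_data under these steps. The outline gives no concrete markup, so the `.idea-title` and `.idea-votes` class names below are placeholders to be replaced with the real page's selectors:

```python
from bs4 import BeautifulSoup  # third-party: pip install beautifulsoup4


def extract_data(idea):
    # Pull fields out of one idea element; the selectors are placeholders,
    # not taken from a real site.
    title = idea.select_one(".idea-title")
    votes = idea.select_one(".idea-votes")
    return {
        "title": title.get_text(strip=True) if title else None,
        "votes": votes.get_text(strip=True) if votes else None,
    }
```

Guarding each `select_one` result against `None` keeps one malformed element from aborting the whole page.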
Define method get_page_data
Takes a session and page number
Sends a GET request to the specified page using the session
Uses BeautifulSoup to parse the response HTML
Finds all idea elements in the parsed HTML
Extracts data from each idea element using the extract_data method
Returns a list of the extracted data
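Those steps can be sketched as a standalone function. The `?page=` query parameter, the default `base_url`, and the `.idea` selector are assumptions standing in for the real site's pagination scheme and markup; in real use the session would be a `requests.Session`:

```python
from bs4 import BeautifulSoup  # third-party: pip install beautifulsoup4


def get_page_data(session, page, base_url="https://example.com/ideas"):
    # Fetch one page via the shared session; the URL pattern is an assumption.
    response = session.get(f"{base_url}?page={page}")
    response.raise_for_status()
    soup = BeautifulSoup(response.text, "html.parser")
    # One dict per idea element (a fuller extract_data would slot in here).
    return [{"title": el.get_text(strip=True)} for el in soup.select(".idea")]
```

Calling `raise_for_status()` turns HTTP errors into exceptions early, instead of silently parsing an error page.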
Define method scrape_data
Creates a session
Determines the range of pages to scrape (1 through max_pages)
Uses ThreadPoolExecutor to create multiple threads
Each thread executes get_page_data method for a page
Stores returned data into a DataFrame and adds it to a list
Combines all DataFrames into a single DataFrame
Returns the final DataFrame
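The fan-out and combine steps can be sketched with `ThreadPoolExecutor.map`, which yields results in page order; `get_page_data` is assumed to return a list of dicts, as above:

```python
from concurrent.futures import ThreadPoolExecutor

import pandas as pd  # third-party: pip install pandas


def scrape_data(get_page_data, session, max_pages, workers=4):
    # Fetch pages 1..max_pages concurrently; map() preserves page order.
    pages = range(1, max_pages + 1)
    with ThreadPoolExecutor(max_workers=workers) as executor:
        results = executor.map(lambda p: get_page_data(session, p), pages)
        frames = [pd.DataFrame(rows) for rows in results]
    # Combine the per-page DataFrames into one with a fresh index.
    return pd.concat(frames, ignore_index=True)
```

One caveat: `requests.Session` is not documented as thread-safe, so while sharing one session across threads usually works for simple GETs, a session per worker is the safer design.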
Outside the class
Define the URL, experience_name and max_pages
Create an instance of Scraper with the URL, experience_name and max_pages
Call the scrape_data method and save the result into df_ideas
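Putting the outline together, a runnable sketch of the whole class might look like the following. The URL pattern built from `experience_name`, and the `.idea` / `.idea-title` selectors, are assumptions to be adapted to the actual site:

```python
from concurrent.futures import ThreadPoolExecutor

import pandas as pd  # third-party: pip install pandas
import requests  # third-party: pip install requests
from bs4 import BeautifulSoup  # third-party: pip install beautifulsoup4


class Scraper:
    """Sketch of the outlined scraper; selectors and URL scheme are placeholders."""

    def __init__(self, base_url, experience_name, max_pages):
        self.base_url = base_url
        self.experience_name = experience_name
        self.max_pages = max_pages

    def extract_data(self, idea):
        # Placeholder selector; extend with whatever fields the page offers.
        title = idea.select_one(".idea-title")
        return {"title": title.get_text(strip=True) if title else None}

    def get_page_data(self, session, page):
        # Assumed URL scheme: <base_url>/<experience_name>?page=<n>
        url = f"{self.base_url}/{self.experience_name}?page={page}"
        response = session.get(url)
        response.raise_for_status()
        soup = BeautifulSoup(response.text, "html.parser")
        return [self.extract_data(el) for el in soup.select(".idea")]

    def scrape_data(self, workers=4):
        with requests.Session() as session:
            pages = range(1, self.max_pages + 1)
            with ThreadPoolExecutor(max_workers=workers) as executor:
                results = executor.map(
                    lambda p: self.get_page_data(session, p), pages
                )
                frames = [pd.DataFrame(rows) for rows in results]
        return pd.concat(frames, ignore_index=True)


# Usage (a live network call, so shown but not executed here):
# scraper = Scraper("https://example.com", "my-experience", max_pages=5)
# df_ideas = scraper.scrape_data()
```

Reusing one `requests.Session` across pages keeps the underlying TCP connection alive, which is noticeably faster than issuing independent `requests.get` calls per page.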