To view the downloaded courses, use the Educative-Viewer repository.
I welcome anyone to contribute to this project. A star would help me a lot.
Update 01/08/2022 : v5.6 Fixed a bug related to the script timeout in Single File mode.
Update 20/07/2022 : v5.5 Added support for the Back and Next buttons in educative-viewer.
Update 20/07/2022 : v5.4 Added support for copying content from code containers.
Update 20/07/2022 : v5.3 Fixed a bug and added a feature for Code Widget type containers.
Update 19/07/2022 : v5.2 Added SingleFile HTML page capture instead of screenshots.
Update 15/07/2022 : v5.0 Added support for the Linux arm64 architecture.
Update 07/07/2022 : v4.9 Fixed multiple bugs caused by changes in educative.io's DOM.
1. Create a urls text file and paste the link to the first topic of each course into it, one link per line, as shown in the example below.
2. Download the chromedriver and educative_scraper executables from the latest release and run both.
3. Press 2 to select a config if you do not wish to use the default config "0".
(Make sure to generate the config if it is being selected for the first time.)
4. Press 1 to generate the config (if not already present) and provide the urls text file path, the save location, and the headless mode.
5. Press 3 to log in to your educative.io account.
6. Press 4 to start scraping.
7. Press Ctrl+C / CMD+C to return to the main menu or exit the scraper.
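For illustration, the urls text file would look something like the lines below, with one first-topic link per course (the course and lesson names here are placeholders, not real links):
https://www.educative.io/courses/example-course-one/first-lesson-of-the-course
https://www.educative.io/courses/example-course-two/first-lesson-of-the-course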
Note 1: Uncomment line 482 (when running from source) to download courses that have a download_button container whose download button does not work. [This feature is not included in the releases.]
Note 2: If the scraper fails or you exit midway for any reason, a log.txt file is created in the save directory containing the last known url at the time of scraping, along with its index. Copy the {index url} line into the original urls text file (deleting the urls that have already been scraped), then restart the scraper to resume the course from where it stopped. An example is shown after Note 3.
Note 3: If your system shuts down due to a power failure or the scraper crashes outright, it cannot write log.txt; in that case you have to find the last scraped url and its index manually and add the {index url} line to the urls text file yourself.
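For illustration, a resume entry copied from log.txt into the urls text file might look like the line below; the index and url here are placeholders, so copy the exact {index url} line that your log.txt contains:
35 https://www.educative.io/courses/example-course-one/lesson-where-scraping-stopped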
pip3 install virtualenv
virtualenv env
env\Scripts\activate (on Windows)
source env/bin/activate (on Linux/macOS)
pip3 install -r requirements.txt
python3 chromedriver.py
python3 educative_scraper.py
Activate the virtual environment and install the required modules for the project (refer to Steps 1, 2, and 3 above).
pip3 install pyinstaller
For Windows (--add-data uses ";" as the separator):
pyinstaller --clean --add-data "Chrome-bin;Chrome-bin" --onefile -i "icon.ico" educative_scraper.py
pyinstaller --clean --add-data "Chrome-driver;Chrome-driver" --onefile -i "icon.ico" chromedriver.py
For Linux/macOS (--add-data uses ":" as the separator):
pyinstaller --clean --add-data "Chrome-bin:Chrome-bin" --onefile -i "icon.ico" educative_scraper.py
pyinstaller --clean --add-data "Chrome-driver:Chrome-driver" --onefile -i "icon.ico" chromedriver.py
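After a successful build, PyInstaller places the one-file binaries in its default dist directory (assuming the default --distpath), from where they can be run directly, for example:
./dist/chromedriver (dist\chromedriver.exe on Windows)
./dist/educative_scraper (dist\educative_scraper.exe on Windows)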
The PyInstaller command for Linux may or may not work due to a PyInstaller bug; a fix is currently being investigated.
A whitepaper will be released explaining each function and the cases handled by the scraper.