author     Your Name 2024-04-18 19:27:34 +0330
committer  Your Name 2024-04-18 19:27:34 +0330
commit     ac4a91fb9467ae1fca20253c390da6ca65a59a6a (patch)
tree       bb3e45140b91f25ea30a3691c733d20f0987a8dc /README.md
parent     93193a3f808a7e170a0e1ef6b999bddf7c8a0371 (diff)
web2pdf v3
Diffstat (limited to 'README.md')
-rw-r--r--  README.md  48
1 file changed, 46 insertions(+), 2 deletions(-)
diff --git a/README.md b/README.md
index 4851ac7c2919..898937fbc856 100644
--- a/README.md
+++ b/README.md
@@ -4,6 +4,8 @@ A CLI tool to extract a part of a website, create a PDF
(the new version supports both class and id, even some styling)
+(the new version supports --comic-mode; read the usage section below for more info)
+
------
> let me tell you a really fun story
@@ -43,7 +45,7 @@ available in aur
if you want to compile it yourself
-1. clone repository
+1. clone repository
2. go to the cloned directory
@@ -60,6 +62,49 @@ if u wanna compile it yourself
6. you can even build its binary yourself, it is easy
+
+
+# how to use?
+
+```
+usage: web2pdf.py [-h] [--id ID] [--class CLASS_NAME]
+                  [--exclude EXCLUDE [EXCLUDE ...]] [--comic-mode]
+                  url pdf_name
+
+Save webpage content as PDF or images
+
+positional arguments:
+  url                 URL of the webpage to scrape
+  pdf_name            Name of the PDF file to save
+
+options:
+  -h, --help          show this help message and exit
+  --id ID             ID of the content to extract
+  --class CLASS_NAME  Class name of the content to extract
+  --exclude EXCLUDE [EXCLUDE ...]
+                      Class names of elements to exclude
+  --comic-mode        Save images and pdf them (like a real comic or manga)
+```
+
+- `--comic-mode` : sometimes you want to download a manga or comic from the internet
+
+  these sites often serve a chapter as a set of very long images put together one after another
+
+  downloading them one by one and making a pdf out of them by hand is hard, sometimes practically impossible
+
+  in these cases you can run web2pdf with `--comic-mode`:
+
+  1. it will make a dir with the same name as the pdf and save all of the page images there
+  2. and then make a pdf out of them (see the example below)
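+
+  for example (the URL here is made up):
+
+  ```
+  web2pdf.py https://example.com/manga/chapter-1 chapter1 --comic-mode
+  ```
+
+  this should leave you with a `chapter1` dir full of page images plus the stitched-together pdf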
+
+- `--id ID` / `--class CLASS_NAME` / `--exclude EXCLUDE [EXCLUDE ...]`
+
+  these args are optional; by default it will make a pdf out of the whole page
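+
+  for example, to extract one container and skip the ad blocks (all of these names are made up, just for illustration):
+
+  ```
+  web2pdf.py https://example.com/article article --id main-content --exclude ad-banner sidebar
+  ```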
+
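+for the curious, step 2 of `--comic-mode` (turning a dir of saved images into one pdf) can be sketched in a few lines of Pillow. this is NOT the actual web2pdf code, just an illustration of the idea; the function, dir, and file names are hypothetical:
+
+```python
+# sketch of --comic-mode's second step: stitch saved page images into a PDF.
+# NOT the real web2pdf implementation; dir/file names are hypothetical.
+from pathlib import Path
+
+from PIL import Image
+
+def images_to_pdf(image_dir: str, pdf_name: str) -> None:
+    # collect the saved page images in filename order
+    paths = sorted(p for p in Path(image_dir).iterdir()
+                   if p.suffix.lower() in (".jpg", ".jpeg", ".png"))
+    # PDF pages need to be RGB, so convert while loading
+    pages = [Image.open(p).convert("RGB") for p in paths]
+    if not pages:
+        raise SystemExit(f"no images found in {image_dir}")
+    # Pillow writes a multi-page PDF when given save_all + append_images
+    pages[0].save(pdf_name, save_all=True, append_images=pages[1:])
+
+images_to_pdf("chapter1", "chapter1.pdf")
+```
+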
# what to do next
@@ -73,4 +118,3 @@ if u wanna compile it yourself
in the end I will be happy if you share your ideas about this script with me
TY so much ❤️
-