# AProxyRelay: An Async Request Library with Proxy Rotation
AProxyRelay is an asynchronous request library designed for easy data retrieval using various proxy servers. It seamlessly handles proxy rotation, preserves data that fails to be requested, and simplifies API scraping. The library is written in `Python 3.12.2`.
In addition, tested proxies will be shared with other people using this library. The more this library is utilized, the bigger the pool of available proxies.
AProxyRelay streamlines the process of making asynchronous requests with proxy servers. It accepts the following parameters:

| Parameter | Type | Description | Details |
|-----------|------|-------------|---------|
| targets | list[str] | Target endpoints provided as an array | Each endpoint is requested through an available proxy. If the proxy is unavailable and the request fails, the target is stored in a queue and retried with another proxy until data is obtained. |
| timeout | int | Allowed proxy timeout. **Defaults to 5** | A proxy has to respond within the provided timeout to be considered valid; otherwise, it is discarded. |
| scrape | bool | Indicator to utilize the proxy scraper. **Defaults to True** | When set to True (default), the proxy scraper is used, which is slower but provides a broader range of proxies. When set to False, proxies are fetched from a single source, which is faster but offers a more limited selection. |
| filter | bool | Indicator for filtering bad proxies. **Defaults to True** | If set to True (default), the tool tests proxy connections before using them. This takes a bit longer, but it ensures that the proxies are valid before utilization. |
| zones | list[str] | An array of proxy zones. **Defaults to ['US']** | Sometimes it matters where the proxy is located. Each item in this list ensures the proxy is located in that specific zone and that requests made through it originate from that location. It acts as a whitelist of allowed proxy locations. |
| unpack | lambda | Anonymous function for unpacking data. **Defaults to `lambda data, target: data`** | Once a request to a target has been made through a proxy and data has been fetched, this lambda formats the result before putting it into the result queue. **data** is the output from the target, **target** is the target URL. |
| debug | bool | Indicator that enables debug mode. **Defaults to False** | When True, additional logging is printed to the terminal. |
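The parameters above can be sketched in a short usage example. This is a hedged illustration only: the parameter names come from the table, but the import path, constructor signature, endpoint URL, and any entry-point method are assumptions, so check the library's actual API before copying it.

```python
# Hedged usage sketch of AProxyRelay. Parameter names follow the table above;
# the import path and constructor signature are assumptions, not confirmed API.
try:
    from aproxyrelay import AProxyRelay  # assumed import path
except ImportError:  # library not installed; the unpack hook below still runs
    AProxyRelay = None

def unpack(data, target):
    """Format a fetched payload before it enters the result queue."""
    # data: raw output from the target; target: the URL that was requested
    return {"source": target, "payload": data}

if AProxyRelay is not None:
    relay = AProxyRelay(
        targets=["https://api.example.com/items?page=1"],  # hypothetical endpoint
        timeout=5,      # proxies must respond within 5 seconds
        scrape=True,    # slower, but scrapes a broader pool of proxies
        filter=True,    # test each proxy connection before use
        zones=["US"],   # whitelist of allowed proxy locations
        unpack=unpack,  # shape each result before it is queued
        debug=False,
    )
```

The `unpack` hook is the main customization point: it receives the raw response and the target URL, and whatever it returns is what lands in the result queue.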
## A Proxy Relay: Local Development
To install the library's dependencies for local development (the core code itself is already available locally), run the following command inside a virtual environment:
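The exact command is not included in this excerpt. Under typical Python packaging conventions, the setup might look like the following; the requirements file name is an assumption, so consult the repository for the actual command.

```shell
python -m venv .venv             # create an isolated environment
source .venv/bin/activate        # activate it (Windows: .venv\Scripts\activate)
pip install -r requirements.txt  # assumed dependency file name
```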