fix(crawl): throttle concurrent CrawlJobs and relax fpw/proxyscrape HTTP timeouts/retries
- CrawlJob waits on crawl_slot before the JobExecutor semaphore, so crawl-all does not fill executor slots while queued (see the sketch below)
- BaseHTTPPlugin: longer connect budget for slow international links
- proxyscrape: jsDelivr mirror + longer GitHub/API phases
- fpw_*: higher timeouts/retries; lower internal concurrency on heavy multi-URL plugins

Made-with: Cursor
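For context, a minimal sketch of the acquisition order described in the first bullet, assuming asyncio semaphores stand in for the crawl_slot and JobExecutor limits (the names, slot counts, and helper below are hypothetical, not this repo's actual classes):

import asyncio
from typing import Iterable

# Assumed illustration: a CrawlJob first waits for a dedicated crawl slot,
# and only then takes a general executor slot, so queued crawl-all jobs do
# not occupy executor slots while they are merely waiting for a crawl turn.
CRAWL_SLOTS = asyncio.Semaphore(2)     # max concurrent crawls (assumed value)
EXECUTOR_SLOTS = asyncio.Semaphore(8)  # general executor concurrency (assumed value)

async def run_crawl_job(name: str) -> None:
    # Wait for a crawl slot *before* touching the executor semaphore.
    async with CRAWL_SLOTS:
        # Only now does the job consume a general executor slot.
        async with EXECUTOR_SLOTS:
            await asyncio.sleep(0.1)  # stand-in for the actual crawl work
            print(f"{name} finished")

async def crawl_all(names: Iterable[str]) -> None:
    await asyncio.gather(*(run_crawl_job(n) for n in names))

if __name__ == "__main__":
    asyncio.run(crawl_all([f"plugin-{i}" for i in range(10)]))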
@@ -21,7 +21,7 @@ class FpwHidemyPlugin(BaseHTTPPlugin):
 
     async def crawl(self) -> List[ProxyRaw]:
         results: List[ProxyRaw] = []
-        htmls = await self.fetch_all(self.urls, timeout=12, retries=1)
+        htmls = await self.fetch_all(self.urls, timeout=25, retries=2)
         for url, html in zip(self.urls, htmls):
             if not html:
                 continue
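The hunk above only raises the timeout/retries arguments; BaseHTTPPlugin.fetch_all itself is not shown in this diff. A minimal sketch of what such a helper could look like, assuming aiohttp and a per-plugin internal concurrency cap (every name, signature, and default here is an assumption, not the plugin base's actual API):

import asyncio
from typing import List, Optional

import aiohttp

async def fetch_all(
    urls: List[str],
    timeout: float = 25,
    retries: int = 2,
    max_concurrency: int = 3,
) -> List[Optional[str]]:
    """Fetch every URL, returning the body text or None on failure.

    timeout/retries mirror the arguments used in the hunk above; the
    concurrency cap reflects the "lower internal concurrency" bullet.
    """
    sem = asyncio.Semaphore(max_concurrency)
    client_timeout = aiohttp.ClientTimeout(total=timeout)

    async def fetch_one(session: aiohttp.ClientSession, url: str) -> Optional[str]:
        for attempt in range(retries + 1):
            try:
                async with sem:
                    async with session.get(url) as resp:
                        resp.raise_for_status()
                        return await resp.text()
            except (aiohttp.ClientError, asyncio.TimeoutError):
                if attempt == retries:
                    return None
                await asyncio.sleep(1.0 * (attempt + 1))  # simple linear backoff
        return None

    async with aiohttp.ClientSession(timeout=client_timeout) as session:
        return await asyncio.gather(*(fetch_one(session, u) for u in urls))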