fix(crawl): throttle concurrent CrawlJobs and relax fpw/proxyscrape HTTP
- CrawlJob waits on crawl_slot before the JobExecutor semaphore, so crawl-all does not fill executor slots while it is merely queued (see the sketch below)
- BaseHTTPPlugin: longer connect budget for slow international links
- proxyscrape: jsDelivr mirror + longer GitHub/API phases
- fpw_*: higher timeouts/retries; lower internal concurrency on heavy multi-URL plugins

Made-with: Cursor
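A minimal sketch of the slot ordering described above, assuming an asyncio-based executor; crawl_slot, executor_slots, run_crawl_job, and the pool sizes are illustrative assumptions, not the repository's actual identifiers or values.

import asyncio
from typing import Awaitable, Callable

# Hypothetical limits: a small crawl pool inside the executor's general pool.
crawl_slot = asyncio.Semaphore(2)        # caps concurrent CrawlJobs (assumed value)
executor_slots = asyncio.Semaphore(10)   # JobExecutor's general job slots (assumed value)

async def run_crawl_job(job: Callable[[], Awaitable[None]]) -> None:
    # Acquire the crawl slot first, so a queued crawl-all burst waits here
    # instead of occupying executor slots while it is still only queued.
    async with crawl_slot:
        async with executor_slots:
            await job()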
@@ -47,7 +47,7 @@ class FpwCheckerproxyPlugin(BaseHTTPPlugin):
     async def crawl(self) -> List[ProxyRaw]:
         merged: List[ProxyRaw] = []
         seen: Set[Tuple[str, int, str]] = set()
-        htmls = await self.fetch_all(self.urls, timeout=12, retries=1)
+        htmls = await self.fetch_all(self.urls, timeout=25, retries=2)
         for html in htmls:
             if not html or len(html) < 200:
                 continue
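For context on the fpw_* change, a hedged sketch of a fetch_all with per-request timeout/retries and bounded internal concurrency, roughly matching the behavior the message describes; it assumes aiohttp, and the helper names and default values are invented for illustration rather than taken from the plugin.

import asyncio
from typing import List, Optional

import aiohttp

async def fetch_all(urls: List[str], *, timeout: float = 25,
                    retries: int = 2, max_concurrency: int = 4) -> List[Optional[str]]:
    # Lower internal concurrency so heavy multi-URL plugins do not open
    # every connection at once against slow mirrors.
    limit = asyncio.Semaphore(max_concurrency)
    client_timeout = aiohttp.ClientTimeout(total=timeout)

    async def fetch_one(session: aiohttp.ClientSession, url: str) -> Optional[str]:
        for attempt in range(retries + 1):
            try:
                async with limit:
                    async with session.get(url, timeout=client_timeout) as resp:
                        if resp.status == 200:
                            return await resp.text()
            except (aiohttp.ClientError, asyncio.TimeoutError):
                pass
            if attempt < retries:
                await asyncio.sleep(1 + attempt)  # simple backoff between retries
        return None

    async with aiohttp.ClientSession() as session:
        return list(await asyncio.gather(*(fetch_one(session, u) for u in urls)))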