fix(crawl): throttle concurrent CrawlJobs and relax fpw/proxyscrape HTTP
- CrawlJob waits on crawl_slot before the JobExecutor semaphore, so crawl-all does not fill executor slots while queued
- BaseHTTPPlugin: longer connect budget for slow international links
- proxyscrape: jsDelivr mirror + longer GitHub/API phases
- fpw_*: higher timeouts/retries; lower internal concurrency on heavy multi-URL plugins

Made-with: Cursor
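The slot ordering in the first point can be sketched roughly as follows, assuming plain asyncio semaphores; CRAWL_SLOTS, EXECUTOR_SEM, and run_crawl_job are illustrative names, not the repository's actual identifiers:

    import asyncio

    # Illustrative names and limits; the real CrawlJob/JobExecutor code may differ.
    CRAWL_SLOTS = asyncio.Semaphore(2)    # caps concurrent CrawlJobs (assumed limit)
    EXECUTOR_SEM = asyncio.Semaphore(8)   # JobExecutor's general worker pool (assumed limit)

    async def run_crawl_job(job):
        # Acquire the crawl-specific slot FIRST, so a queued crawl-all burst
        # waits here instead of occupying a JobExecutor slot while idle.
        async with CRAWL_SLOTS:
            async with EXECUTOR_SEM:
                return await job()

With this ordering, at most CRAWL_SLOTS crawl jobs ever contend for the executor semaphore, leaving the remaining executor slots free for other job types.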
@@ -65,7 +65,7 @@ class FpwProxynovaPlugin(BaseHTTPPlugin):
         return out

     async def crawl(self) -> List[ProxyRaw]:
-        html = await self.fetch(self.urls[0], timeout=14, retries=1)
+        html = await self.fetch(self.urls[0], timeout=25, retries=2)
         if not html:
             return []
         results = self._parse_rows(html)
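The "longer connect budget" mentioned for BaseHTTPPlugin could be realized by splitting the connect and total timeouts. A minimal sketch assuming aiohttp; make_timeout and fetch_text are hypothetical helpers, not the plugin's real methods:

    import asyncio
    import aiohttp

    def make_timeout(total: float = 25.0, connect: float = 10.0) -> aiohttp.ClientTimeout:
        # A generous sock_connect budget keeps slow international links from
        # being dropped mid-handshake, while `total` still bounds the request.
        return aiohttp.ClientTimeout(total=total, sock_connect=connect)

    async def fetch_text(url: str) -> str | None:
        async with aiohttp.ClientSession(timeout=make_timeout()) as session:
            try:
                async with session.get(url) as resp:
                    resp.raise_for_status()
                    return await resp.text()
            except (aiohttp.ClientError, asyncio.TimeoutError):
                return None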