Releases: apify/crawlee
v3.1.1
3.1.1 (2022-11-07)
Bug Fixes
- `utils.playwright.blockRequests` warning message (#1632) (76549eb)
- concurrency option override order (#1649) (7bbad03)
- handle non-error objects thrown gracefully (#1652) (c3a4e1a)
- mark session as bad on failed requests (#1647) (445ae43)
- support reloading of sessions with lots of retries (ebc89d2)
- fix type errors when `playwright` is not installed (#1637) (de9db0c)
- upgrade to `puppeteer` (#1623) (ce36d6b)
Features
v3.1.0
3.1.0 (2022-10-13)
Bug Fixes
- add overload for `KeyValueStore.getValue` with defaultValue (#1541) (e3cb509)
- add retry attempts to methods in CLI (#1588) (9142e59)
- allow `label` in `enqueueLinksByClickingElements` options (#1525) (18b7c25)
- basic-crawler: handle `request.noRetry` after `errorHandler` (#1542) (2a2040e)
- build storage classes by using `this` instead of the class (#1596) (2b14eb7)
- correct some typing exports (#1527) (4a136e5)
- do not hide stack trace of (retried) Type/Syntax/ReferenceErrors (469b4b5)
- enqueueLinks: ensure the enqueue strategy is respected alongside user patterns (#1509) (2b0eeed)
- enqueueLinks: prevent useless request creations when filtering by user patterns (#1510) (cb8fe36)
- export `Cookie` from the `crawlee` metapackage (7b02ceb)
- handle redirect cookies (#1521) (2f7fc7c)
- http-crawler: do not hang on POST without payload (#1546) (8c87390)
- remove undeclared dependency on core package from puppeteer utils (827ae60)
- support TypeScript 4.8 (#1507) (4c3a504)
- wait for persist state listeners to run when event manager closes (#1481) (aa550ed)
Features
- add `Dataset.exportToCSV` and `Dataset.exportToJSON`
- add `Dataset.getData()` shortcut (522ed6e)
- add `utils.downloadListOfUrls` to the `crawlee` metapackage (7b33b0a)
- add `utils.parseOpenGraph()` (#1555) (059f85e)
- add `utils.playwright.compileScript` (#1559) (2e14162)
- add `utils.playwright.infiniteScroll` (#1543) (60c8289), closes #1528
- add `utils.playwright.saveSnapshot` (#1544) (a4ceef0)
- add global `useState` helper (#1551) (2b03177)
- allow disabling storage persistence (#1539) (f65e3c6)
- bump puppeteer support to 17.x (#1519) (b97a852)
- core: add `forefront` option to `enqueueLinks` helper (f8755b6), closes #1595
- don't close page before calling errorHandler (#1548) (1c8cd82)
- enqueue links by clicking for Playwright (#1545) (3d25ade)
- error tracker (#1467) (6bfe1ce)
- make the CLI download directly from GitHub (#1540) (3ff398a)
- router: add userdata generic to addHandler (#1547) (19cdf13)
- use JSON5 for `INPUT.json` to support comments (#1538) (09133ff)
v3.0.4
v3.0.3
What's Changed
- fix: add missing configuration to CheerioCrawler constructor by @AndreyBykov in #1432
- fix: sendRequest types by @szmarczak in #1445
- fix: respect `headless` option in browser crawlers by @B4nan in #1455
- fix: make `CheerioCrawlerOptions` type more loose by @B4nan in d871d8c
- fix: improve dockerfiles and project templates by @B4nan in 7c21a64
- feat: add `utils.playwright.blockRequests()` by @barjin in #1447
- feat: http-crawler by @szmarczak in #1440
- feat: prefer `/INPUT.json` files for `KeyValueStore.getInput()` by @vladfrangu in #1453
- feat: jsdom-crawler by @szmarczak in #1451
- feat: add `RetryRequestError` + add error to the context for BC by @vladfrangu in #1443
- feat: add `keepAlive` to crawler options by @B4nan in #1452
Full Changelog: v3.0.2...v3.0.3
v3.0.2
What's Changed
- fix: regression in resolving the base url for enqueue link filtering by @vladfrangu in #1422
- fix: improve file saving on memory storage by @vladfrangu in #1421
- fix: add `UserData` type argument to `CheerioCrawlingContext` and related interfaces by @B4nan in #1424
- fix: always limit `desiredConcurrency` to the value of `maxConcurrency` by @B4nan in bcb689d
- fix: wait for storage to finish before resolving `crawler.run()` by @B4nan in 9d62d56
- fix: using explicitly typed router with `CheerioCrawler` by @B4nan in 07b7e69
- fix: declare dependency on `ow` in `@crawlee/cheerio` package by @B4nan in be59f99
- fix: use `crawlee@^3.0.0` in the CLI templates by @B4nan in 6426f22
- fix: fix building projects with TS when puppeteer and playwright are not installed by @B4nan in #1404
- fix: enqueueLinks should respect full URL of the current request for relative link resolution by @B4nan in #1427
- fix: use `desiredConcurrency: 10` as the default for `CheerioCrawler` by @B4nan in #1428
- feat: allow configuring what status codes will cause session retirement by @B4nan in #1423
- feat: add support for middlewares to the `Router` via `use` method by @B4nan in #1431
Full Changelog: v3.0.1...v3.0.2
v3.0.1
What's Changed
- fix: remove `JSONData` generic type arg from `CheerioCrawler` by @B4nan in #1402
- fix: rename default storage folder to just `storage` by @B4nan in #1403
- fix: remove trailing slash for proxyUrl by @AndreyBykov in #1405
- fix: run browser crawlers in headless mode by default by @B4nan in #1409
- fix: rename interface `FailedRequestHandler` to `ErrorHandler` by @B4nan in #1410
- fix: ensure default route is not ignored in `CheerioCrawler` by @B4nan in #1411
- fix: add `headless` option to `BrowserCrawlerOptions` by @B4nan in #1412
- fix: processing custom cookies by @vladfrangu in #1414
- fix: enqueue link not finding relative links if the checked page is redirected by @vladfrangu in #1416
- fix: calling `enqueueLinks` in browser crawler on page without any links by @B4nan in 385ca27
- fix: improve error message when no default route provided by @B4nan in 04c3b6a
- feat: add parseWithCheerio for puppeteer & playwright by @AndreyBykov in #1418
Full Changelog: v3.0.0...v3.0.1
v3.0.0
Crawlee is the spiritual successor to Apify SDK, so we decided to keep the versioning and release Crawlee as v3.
Crawlee vs Apify SDK
Up until version 3 of `apify`, the package contained both scraping-related tools and Apify platform-related helper methods. With v3, we are splitting the whole project into two main parts:
- Crawlee, the new web-scraping library, available as the `crawlee` package on NPM
- Actor SDK, helpers for the Apify platform, available as the `apify` package on NPM
Moreover, the Crawlee library is published as several packages under the `@crawlee` namespace:
- `@crawlee/core`: the base for all the crawler implementations, also contains things like the `Request`, `RequestQueue`, `RequestList` or `Dataset` classes
- `@crawlee/basic`: exports `BasicCrawler`
- `@crawlee/cheerio`: exports `CheerioCrawler`
- `@crawlee/browser`: exports `BrowserCrawler` (which is used for creating `@crawlee/playwright` and `@crawlee/puppeteer`)
- `@crawlee/playwright`: exports `PlaywrightCrawler`
- `@crawlee/puppeteer`: exports `PuppeteerCrawler`
- `@crawlee/memory-storage`: `@apify/storage-local` alternative
- `@crawlee/browser-pool`: previously the `browser-pool` package
- `@crawlee/utils`: utility methods
- `@crawlee/types`: holds TS interfaces, mainly about the `StorageClient`
Installing Crawlee
As Crawlee is not yet released as `latest`, we need to install from the `next` distribution tag!
Most of the Crawlee packages extend and re-export each other, so it's enough to install just the one you plan on using, e.g. `@crawlee/playwright` if you plan on using `playwright` - it already contains everything from the `@crawlee/browser` package, which includes everything from `@crawlee/basic`, which includes everything from `@crawlee/core`.
```bash
npm install crawlee@next
```
Or if all we need is Cheerio support, we can install only `@crawlee/cheerio`:
```bash
npm install @crawlee/cheerio@next
```
When using `playwright` or `puppeteer`, we still need to install those dependencies explicitly - this allows the users to be in control of which version will be used.
```bash
npm install crawlee@next playwright
# or npm install @crawlee/playwright@next playwright
```
Alternatively, we can also use the `crawlee` meta-package, which contains (re-exports) most of the `@crawlee/*` packages, and therefore contains all the crawler classes.
Sometimes you might want to use some utility methods from `@crawlee/utils`, so you might want to install that as well. This package contains some utilities that were previously available under `Apify.utils`. Browser-related utilities can also be found in the crawler packages (e.g. `@crawlee/playwright`).
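For instance, a small sketch of using one of these utilities, `downloadListOfUrls` (previously `Apify.utils.downloadListOfUrls`), straight from `@crawlee/utils` - the URL here is only a placeholder:
```js
import { downloadListOfUrls } from '@crawlee/utils';

// fetch and parse a plain-text list of URLs (placeholder URL)
const urls = await downloadListOfUrls({ url: 'https://example.com/urls.txt' });
console.log(`Downloaded ${urls.length} URLs`);
```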
Full TypeScript support
Both Crawlee and the Actor SDK are full TypeScript rewrites, so they include up-to-date types in the package. For your TypeScript crawlers we recommend using our predefined TypeScript configuration from the `@apify/tsconfig` package. Don't forget to set the `module` and `target` to `ES2022` or above to be able to use top-level await.
The `@apify/tsconfig` config has `noImplicitAny` enabled; you might want to disable it during the initial development, as it will cause build failures if you leave some unused local variables in your code.
```json
{
    "extends": "@apify/tsconfig",
    "compilerOptions": {
        "module": "ES2022",
        "target": "ES2022",
        "outDir": "dist",
        "lib": ["DOM"]
    },
    "include": [
        "./src/**/*"
    ]
}
```
Docker build
For the `Dockerfile` we recommend using a multi-stage build, so you don't install dev dependencies like TypeScript in your final image:
```dockerfile
# using multistage build, as we need dev deps to build the TS source code
FROM apify/actor-node:16 AS builder

# copy all files, install all dependencies (including dev deps) and build the project
COPY . ./
RUN npm install --include=dev \
    && npm run build

# create final image
FROM apify/actor-node:16

# copy only necessary files
COPY --from=builder /usr/src/app/package*.json ./
COPY --from=builder /usr/src/app/README.md ./
COPY --from=builder /usr/src/app/dist ./dist
COPY --from=builder /usr/src/app/apify.json ./apify.json
COPY --from=builder /usr/src/app/INPUT_SCHEMA.json ./INPUT_SCHEMA.json

# install only prod deps
RUN npm --quiet set progress=false \
    && npm install --only=prod --no-optional \
    && echo "Installed NPM packages:" \
    && (npm list --only=prod --no-optional --all || true) \
    && echo "Node.js version:" \
    && node --version \
    && echo "NPM version:" \
    && npm --version

# run compiled code
CMD npm run start:prod
```
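The `npm run build` and `npm run start:prod` steps assume matching scripts in your `package.json`; a minimal sketch of what they could look like (the `dist/main.js` entry point is just an example, use whatever your project compiles to):
```json
{
    "scripts": {
        "build": "tsc",
        "start:prod": "node dist/main.js"
    }
}
```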
Browser fingerprints
Previously we had a magical `stealth` option in the puppeteer crawler that enabled several tricks aiming to mimic real users as much as possible. While this worked to a certain degree, we decided to replace it with generated browser fingerprints.
In case we don't want to have dynamic fingerprints, we can disable this behaviour via `useFingerprints` in `browserPoolOptions`:
```js
const crawler = new PlaywrightCrawler({
    browserPoolOptions: {
        useFingerprints: false,
    },
});
```
Session cookie method renames
Previously, if we wanted to get or add cookies for the session that would be used for the request, we had to call `session.getPuppeteerCookies()` or `session.setPuppeteerCookies()`. Since these methods can be used with any of our crawlers, not just `PuppeteerCrawler`, they have been renamed to `session.getCookies()` and `session.setCookies()` respectively. Otherwise, their usage is exactly the same!
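For illustration, a minimal sketch of the renamed methods inside a request handler (assuming the session pool is enabled, which it is by default; the cookie values are placeholders):
```js
import { CheerioCrawler } from 'crawlee';

const crawler = new CheerioCrawler({
    async requestHandler({ session, request }) {
        // read the cookies the session currently holds for this URL
        const cookies = session.getCookies(request.url);
        console.log(`Session has ${cookies.length} cookies for ${request.url}`);

        // store an additional cookie on the session
        session.setCookies([{ name: 'example', value: 'cookie' }], request.url);
    },
});
```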
Memory storage
When we store some data or intermediate state (like the one `RequestQueue` holds), we now use `@crawlee/memory-storage` by default. It is an alternative to `@apify/storage-local` that stores the state in memory (as opposed to the SQLite database used by `@apify/storage-local`). While the state is stored in memory, it is also dumped to the file system, so we can observe it, and the existing data stored in the KeyValueStore (e.g. the `INPUT.json` file) is respected.
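For example, reading the input record works the same regardless of which storage client is in use; locally the record would typically live at `./storage/key_value_stores/default/INPUT.json`:
```js
import { KeyValueStore } from 'crawlee';

// resolves the INPUT record from the default key-value store
const input = await KeyValueStore.getInput();
console.log('Crawler input:', input);
```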
When we want to run the crawler on the Apify platform, we need to use `Actor.init` or `Actor.main`, which will automatically switch the storage client to `ApifyClient` when running on the Apify platform.
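A minimal sketch of wrapping a crawler run this way (the target URL is just a placeholder):
```js
import { Actor } from 'apify';
import { PlaywrightCrawler } from 'crawlee';

await Actor.init();

const crawler = new PlaywrightCrawler({
    async requestHandler({ request, enqueueLinks }) {
        console.log(`Processing ${request.url}`);
        await enqueueLinks();
    },
});
await crawler.run(['https://crawlee.dev']);

await Actor.exit();
```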
We can still use `@apify/storage-local`; to do so, first install it and then pass it to the `Actor.init` or `Actor.main` options:
Note: `@apify/storage-local` v2.1.0+ is required for `crawlee`.
```js
import { Actor } from 'apify';
import { ApifyStorageLocal } from '@apify/storage-local';

const storage = new ApifyStorageLocal(/* options like `enableWalMode` belong here */);
await Actor.init({ storage });
```
Purging of the default storage
Previously the state was preserved between local runs, and we had to use the `--purge` argument of the `apify-cli`. With Crawlee, this is now the default behaviour; we purge the storage automatically on the `Actor.init/main` call. We can opt out of it via `purge: false` in the `Actor.init` options.
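For example:
```js
import { Actor } from 'apify';

// keep the data from previous local runs instead of purging it
await Actor.init({ purge: false });
```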
Renamed crawler options and interfaces
Some options were renamed to better reflect what they do. We still support all the old parameter names too, but not at the TS level.
- `handleRequestFunction` -> `requestHandler`
- `handlePageFunction` -> `requestHandler`
- `handleRequestTimeoutSecs` -> `requestHandlerTimeoutSecs`
- `handlePageTimeoutSecs` -> `requestHandlerTimeoutSecs`
- `requestTimeoutSecs` -> `navigationTimeoutSecs`
- `handleFailedRequestFunction` -> `failedRequestHandler`
We also renamed the crawling context interfaces, so they follow the same convention and are more meaningful:
- `CheerioHandlePageInputs` -> `CheerioCrawlingContext`
- `PlaywrightHandlePageFunction` -> `PlaywrightCrawlingContext`
- `PuppeteerHandlePageFunction` -> `PuppeteerCrawlingContext`
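To illustrate, a short sketch of a crawler using the new names (the handler bodies and timeout values are just placeholders):
```js
import { CheerioCrawler } from 'crawlee';

const crawler = new CheerioCrawler({
    // previously `handlePageFunction`
    async requestHandler({ request, $ }) {
        console.log(`${request.url}: ${$('title').text()}`);
    },
    // previously `handleFailedRequestFunction`
    async failedRequestHandler({ request }) {
        console.log(`Request ${request.url} failed too many times.`);
    },
    // previously `handlePageTimeoutSecs`
    requestHandlerTimeoutSecs: 60,
    // previously `requestTimeoutSecs`
    navigationTimeoutSecs: 30,
});
```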
Context aware helpers
Some utilities previously available under the `Apify.utils` namespace have been moved to the crawling context and are context aware. This means they have some parameters automatically filled in from the context, like the current `Request` instance, the current `Page` object, or the `RequestQueue` bound to the crawler.
Enqueuing links
One common helper that received more attention is `enqueueLinks`. As mentioned above, it is context aware - we no longer need to pass in the `requestQueue` or `page` arguments (or the cheerio handle `$`). In addition to that, it now offers 3 enqueuing strategies:
- `EnqueueStrategy.All` (`'all'`): Matches any URLs found
- `EnqueueStrategy.SameHostname` (`'same-hostname'`): Matches any URLs that have the same subdomain as the base URL (default)
- `EnqueueStrategy.SameDomain` (`'same-domain'`): Matches any URLs that have the same domain name. For example, `https://wow.an.example.com` and `https://example.com` will both be matched for a base url of `https://example.com`.
This means we can even call `enqueueLinks()` without any parameters. By default, it will go through all the links found on the current page and filter only those targeting the same subdomain.
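A strategy can also be selected explicitly via the `strategy` option; a brief sketch, here matching other subdomains of the same domain as well:
```js
import { EnqueueStrategy, PlaywrightCrawler } from 'crawlee';

const crawler = new PlaywrightCrawler({
    async requestHandler({ enqueueLinks }) {
        await enqueueLinks({
            strategy: EnqueueStrategy.SameDomain, // or the string 'same-domain'
        });
    },
});
```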
Moreover, we can specify patterns the URL should match via globs:
```js
const crawler = new PlaywrightCrawler({
    async requestHandler({ enqueueLinks }) {
        await enqueueLinks({
            globs: ['https://apify.com/*/*'],
            // we can also use `regexps` and `pseudoUrls` keys here
        });
    },
});
```
Implicit `RequestQueue` instance
All crawlers now have the `RequestQueue` instance automatically available via the `crawler.getRequestQueue()` method. It will create the instance for you if it does not exist yet. This means we no longer need to create the `RequestQueue` instance manually, and we can just use the `crawler.addRequests()` method described undern...
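A brief sketch of that pattern (the URL is just a placeholder):
```js
import { PlaywrightCrawler } from 'crawlee';

const crawler = new PlaywrightCrawler({
    async requestHandler({ request }) {
        console.log(`Processing ${request.url}`);
    },
});

// the requests go to the implicit queue created by the crawler
await crawler.addRequests(['https://crawlee.dev']);
await crawler.run();
```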
v2.3.2
v2.3.1
What's Changed
- fix: `utils.apifyClient` early instantiation by @barjin in #1330
- fix: ensure failed req count is correct when using `RequestList` by @mnmkng in #1347
- fix: random puppeteer crawler (running in headful mode) failure by @AndreyBykov in #1348
  This should help with the `We either navigate top level or have old version of the navigated frame` bug in puppeteer.
- fix(ts): allow returning falsy values in `RequestTransform`'s return type
- feat: add `utils.playwright.injectJQuery` by @barjin in #1337
- feat: add `keyValueStore` option to `Statistics` class by @B4nan in #1345
- perf(browser-pool): do not use `page.authenticate` as it disables cache
Full Changelog: v2.3.0...v2.3.1
v2.3.0
What's Changed
- feat: accept more social media patterns by @lhotanok in #1286
- feat: add multiple click support to `enqueueLinksByClickingElements` by @audiBookning in #1295
- feat: instance-scoped "global" configuration by @barjin in #1315
- feat: stealth deprecation by @petrpatek in #1314
- feat: `RequestList` accepts `ProxyConfiguration` for `requestsFromUrls` by @barjin in #1317
- feat: allow passing a stream to `KeyValueStore.setRecord` by @gahabeen in #1325
- feat: update `playwright` to v1.20.2
- feat: update `puppeteer` to v13.5.2
  We noticed that with this version of puppeteer an actor run could crash with the `We either navigate top level or have old version of the navigated frame` error (puppeteer issue here). It should not happen while running the browser in headless mode. In case you need to run the browser in headful mode (`headless: false`), we recommend pinning the puppeteer version to `10.4.0` in the actor's `package.json` file.
- fix: improve guessing of chrome executable path on windows by @audiBookning in #1294
- fix: use correct apify-client instance for snapshotting by @B4nan in #1308
- fix: prune CPU snapshots locally by @B4nan in #1313
- fix: improve browser launcher types by @barjin in #1318
- fix: reset `RequestQueue` state after 5 minutes of inactivity by @B4nan in #1324
0 concurrency mitigation
This release should resolve the 0 concurrency bug by automatically resetting the internal `RequestQueue` state after 5 minutes of inactivity.
We now track the last activity done on a `RequestQueue` instance:
- added new request
- started processing a request (added to the `inProgress` cache)
- marked request as handled
- reclaimed request
If we don't detect one of those actions in the last 5 minutes, and we have some requests in the `inProgress` cache, we try to reset the state. We can override this limit via the `APIFY_INTERNAL_TIMEOUT` env var.
This should finally resolve the 0 concurrency bug, as it was always about stuck requests in the `inProgress` cache.
New Contributors
- @audiBookning made their first contribution in #1294
- @lhotanok made their first contribution in #1286
- @barjin made their first contribution in #1315
- @gahabeen made their first contribution in #1325
Full Changelog: v2.2.2...v2.3.0