SeleniumRequest use keyword parameters (#49)
sazima authored and clemfromspace committed Sep 14, 2019
1 parent 509ee53 · commit a697afb
Showing 1 changed file with 7 additions and 7 deletions.
README.md: 14 changes (7 additions & 7 deletions)
@@ -36,7 +36,7 @@ Use the `scrapy_selenium.SeleniumRequest` instead of the scrapy built-in `Request`
```python
from scrapy_selenium import SeleniumRequest

-yield SeleniumRequest(url, self.parse_result)
+yield SeleniumRequest(url=url, callback=self.parse_result)
```
The request will be handled by selenium, and the request will have an additional `meta` key named `driver`, containing the selenium driver that processed the request.
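For context (the corresponding README snippet is collapsed in this diff), the driver exposed through that `meta` key can be used inside the callback along these lines. This is a minimal sketch; the `parse_result` body is illustrative, not part of the commit:
```python
def parse_result(self, response):
    # The selenium webdriver that processed the request travels on the
    # request's meta dict, so the callback can inspect live browser state.
    driver = response.request.meta['driver']
    print(driver.current_url)
    print(driver.title)
```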
@@ -62,8 +62,8 @@ from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC

yield SeleniumRequest(
-    url,
-    self.parse_result,
+    url=url,
+    callback=self.parse_result,
    wait_time=10,
    wait_until=EC.element_to_be_clickable((By.ID, 'someid'))
)
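For readers unfamiliar with `wait_until`: the `wait_time`/`wait_until` pair mirrors selenium's explicit-wait pattern. A rough sketch of that pattern in plain selenium, independent of this middleware (the Firefox driver and the URL are assumptions for illustration; the locator is taken from the hunk above):
```python
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Firefox()       # any selenium driver would do
driver.get('http://example.com')   # placeholder URL

# Poll for up to 10 seconds until the condition returns a truthy value;
# raises TimeoutException if the element never becomes clickable.
element = WebDriverWait(driver, 10).until(
    EC.element_to_be_clickable((By.ID, 'someid'))
)
```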
@@ -73,8 +73,8 @@ yield SeleniumRequest(
When used, selenium will take a screenshot of the page and the binary data of the .png captured will be added to the response `meta`:
```python
yield SeleniumRequest(
-    url,
-    self.parse_result,
+    url=url,
+    callback=self.parse_result,
    screenshot=True
)
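The screenshot bytes mentioned above can then be read back in the callback. A minimal sketch, assuming the `meta` key is named `screenshot` like the request flag (the key name and the `image.png` filename are assumptions, not shown in this diff):
```python
def parse_result(self, response):
    # Raw PNG bytes captured by selenium, attached to the response meta.
    with open('image.png', 'wb') as image_file:
        image_file.write(response.meta['screenshot'])
```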

@@ -87,8 +87,8 @@ def parse_result(self, response):
When used, selenium will execute custom JavaScript code.
```python
yield SeleniumRequest(
-    url,
-    self.parse_result,
+    url=url,
+    callback=self.parse_result,
    script='window.scrollTo(0, document.body.scrollHeight);',
)
```
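A typical use of the `script` argument shown above is forcing lazy-loaded content to render before parsing. A hedged sketch of a spider putting the two pieces together (the URL and CSS selector are illustrative assumptions):
```python
def start_requests(self):
    yield SeleniumRequest(
        url='http://example.com',  # placeholder URL
        callback=self.parse_result,
        # Scroll to the bottom so content loaded on scroll is present
        # in the HTML handed back to the callback.
        script='window.scrollTo(0, document.body.scrollHeight);',
    )

def parse_result(self, response):
    # Parse whatever the rendered page exposes; the selector is an assumption.
    for title in response.css('.item-title::text').getall():
        yield {'title': title}
```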
