04.01.2018       Issue 211 (01.01.2018 - 07.01.2018)       Releases

Scrapy 1.5.0




Experimental feature:

Below is the text of the article from the link. It lets you quickly judge whether the link is worth reading in full.

Please note that the text shown here may differ from the text at the link.

Scrapy 1.5.0 (2017-12-29)¶

This release brings small new features and improvements across the codebase. Some highlights:

  • Google Cloud Storage is supported in FilesPipeline and ImagesPipeline.
  • Crawling with proxy servers becomes more efficient, as connections to proxies can be reused now.
  • Warnings, exceptions and logging messages are improved to make debugging easier.
  • The scrapy parse command now allows setting custom request meta via the --meta argument.
  • Compatibility with Python 3.6, PyPy and PyPy3 is improved; PyPy and PyPy3 are now supported officially, by running tests on CI.
  • Better default handling of HTTP 308, 522 and 524 status codes.
  • Documentation is improved, as usual.

Backwards Incompatible Changes¶

  • Scrapy 1.5 drops support for Python 3.3.
  • Default Scrapy User-Agent now uses an https link to scrapy.org (issue 2983). This is technically backwards-incompatible; override USER_AGENT if you relied on the old value.
  • Logging of settings overridden by custom_settings is fixed; this is technically backwards-incompatible because the logger changes from [scrapy.utils.log] to [scrapy.crawler]. If you’re parsing Scrapy logs, please update your log parsers (issue 1343).
  • LinkExtractor now ignores the m4v extension by default; this is a change in behavior.
  • 522 and 524 status codes are added to RETRY_HTTP_CODES (issue 2851)

New features¶

  • Support <link> tags in Response.follow (issue 2785)
  • Support for ptpython REPL (issue 2654)
  • Google Cloud Storage support for FilesPipeline and ImagesPipeline (issue 2923).
  • New --meta option of the “scrapy parse” command allows passing additional request.meta (issue 2883)
  • Populate spider variable when using shell.inspect_response (issue 2812)
  • Handle HTTP 308 Permanent Redirect (issue 2844)
  • Add 522 and 524 to RETRY_HTTP_CODES (issue 2851)
  • Log versions information at startup (issue 2857)
  • scrapy.mail.MailSender now works in Python 3 (it requires Twisted 17.9.0)
  • Connections to proxy servers are reused (issue 2743)
  • Add template for a downloader middleware (issue 2755)
  • Explicit message for NotImplementedError when the parse callback is not defined (issue 2831)
  • CrawlerProcess got an option to disable installation of root log handler (issue 2921)
  • LinkExtractor now ignores m4v extension by default
  • Better log messages for responses over DOWNLOAD_WARNSIZE and DOWNLOAD_MAXSIZE limits (issue 2927)
  • Show warning when a URL is put to Spider.allowed_domains instead of a domain (issue 2250).
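The allowed_domains warning (issue 2250) fires when an entry looks like a URL rather than a bare domain. A minimal standard-library sketch of that kind of check, assuming nothing about Scrapy's actual implementation (the helper name here is hypothetical):

```python
from urllib.parse import urlparse

def looks_like_url(value):
    # allowed_domains entries should be bare domains ('example.com');
    # an entry carrying a scheme is a URL and would trigger the warning
    return urlparse(value).scheme in ('http', 'https')

print(looks_like_url('http://example.com'))  # a URL -> would be warned about
print(looks_like_url('example.com'))         # a proper domain entry
```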

Bug fixes¶

  • Fix logging of settings overridden by custom_settings; this is technically backwards-incompatible because the logger changes from [scrapy.utils.log] to [scrapy.crawler], so please update your log parsers if needed (issue 1343)
  • Default Scrapy User-Agent now uses an https link to scrapy.org (issue 2983). This is technically backwards-incompatible; override USER_AGENT if you relied on the old value.
  • Fix PyPy and PyPy3 test failures, support them officially (issue 2793, issue 2935, issue 2990, issue 3050, issue 2213, issue 3048)
  • Fix DNS resolver when DNSCACHE_ENABLED=False (issue 2811)
  • Add cryptography for Debian Jessie tox test env (issue 2848)
  • Add verification to check if Request callback is callable (issue 2766)
  • Port extras/qpsclient.py to Python 3 (issue 2849)
  • Use getfullargspec under the hood for Python 3 to stop DeprecationWarning (issue 2862)
  • Update deprecated test aliases (issue 2876)
  • Fix SitemapSpider support for alternate links (issue 2853)
  • Added missing bullet point for the AUTOTHROTTLE_TARGET_CONCURRENCY setting. (issue 2756)
  • Update Contributing docs, document new support channels (issue 2762, issue 3038)
  • Include references to Scrapy subreddit in the docs
  • Fix broken links; use https:// for external links (issue 2978, issue 2982, issue 2958)
  • Document CloseSpider extension better (issue 2759)
  • Use pymongo.collection.Collection.insert_one() in MongoDB example (issue 2781)
  • Fix spelling mistakes and typos (issue 2828, issue 2837, issue 2884, issue 2924)
  • Clarify CSVFeedSpider.headers documentation (issue 2826)
  • Document DontCloseSpider exception and clarify spider_idle (issue 2791)
  • Update “Releases” section in README (issue 2764)
  • Fix rst syntax in DOWNLOAD_FAIL_ON_DATALOSS docs (issue 2763)
  • Small fix in description of startproject arguments (issue 2866)
  • Clarify data types in Response.body docs (issue 2922)
  • Add a note about request.meta['depth'] to DepthMiddleware docs (issue 2374)
  • Add a note about request.meta['dont_merge_cookies'] to CookiesMiddleware docs (issue 2999)
  • Up-to-date example of project structure (issue 2964, issue 2976)
  • A better example of ItemExporters usage (issue 2989)
  • Document from_crawler methods for spider and downloader middlewares (issue 3019)

Scrapy 1.4.0 (2017-05-18)¶

Scrapy 1.4 does not bring that many breathtaking new features but quite a few handy improvements nonetheless.

Scrapy now supports anonymous FTP sessions with customizable user and password via the new FTP_USER and FTP_PASSWORD settings. And if you’re using Twisted version 17.1.0 or above, FTP is now available with Python 3.
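Assuming a standard settings.py, non-anonymous FTP credentials could be configured with the new settings like this (the values are placeholders, not defaults):

```python
# settings.py -- credentials used for FTP downloads (placeholder values)
FTP_USER = 'ftpuser'
FTP_PASSWORD = 'secret'
```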

There’s a new response.follow method for creating requests; it is now the recommended way to create requests in Scrapy spiders. This method makes it easier to write correct spiders; response.follow has several advantages over creating scrapy.Request objects directly:

  • it handles relative URLs;
  • it works properly with non-ASCII URLs on non-UTF-8 pages;
  • in addition to absolute and relative URLs it supports Selectors; for <a> elements it can also extract their href values.

For example, instead of this:

for href in response.css('li.page a::attr(href)').extract():
    url = response.urljoin(href)
    yield scrapy.Request(url, self.parse, encoding=response.encoding)

One can now write this:

for a in response.css('li.page a'):
    yield response.follow(a, self.parse)

Link extractors are also improved. They work similarly to what a regular modern browser would do: leading and trailing whitespace are removed from attributes (think href="   http://example.com") when building Link objects. This whitespace-stripping also happens for action attributes with FormRequest.
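The stripping behaves like Python's str.strip() applied to the attribute value before the Link is built; a plain-Python illustration of the effect (not Scrapy code):

```python
# an href attribute as it may appear in sloppy markup
href = '   http://example.com/page\n'
# link extractors now effectively do this before building Link objects
clean = href.strip()
print(clean)  # http://example.com/page
```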

Please also note that link extractors no longer canonicalize URLs by default. This was puzzling users every now and then, and it is not what browsers do, so we removed that extra transformation on extracted links.

For those of you wanting more control on the Referer: header that Scrapy sends when following links, you can set your own Referrer Policy. Prior to Scrapy 1.4, the default RefererMiddleware would simply and blindly set it to the URL of the response that generated the HTTP request (which could leak information on your URL seeds). By default, Scrapy now behaves much like your regular browser does. And this policy is fully customizable with W3C standard values (or with something really custom of your own if you wish). See REFERRER_POLICY for details.
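In settings.py this might look like the following ('same-origin' is just one of the W3C policy names, used here as an example value):

```python
# settings.py -- choose a W3C referrer policy name (example value)
REFERRER_POLICY = 'same-origin'
```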

To make Scrapy spiders easier to debug, Scrapy logs more stats by default in 1.4: memory usage stats, detailed retry stats, detailed HTTP error code stats. A similar change is that HTTP cache path is also visible in logs now.

Last but not least, Scrapy now has the option to make JSON and XML items more human-readable, with newlines between items and even custom indenting offset, using the new FEED_EXPORT_INDENT setting.
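The effect on JSON output is analogous to the indent argument of the standard json module; a rough illustration of what an indented export looks like (this uses json directly, not Scrapy's exporters):

```python
import json

items = [{'name': 'foo'}, {'name': 'bar'}]
# compact single-line output (the old behavior) vs. indented, human-readable output
compact = json.dumps(items)
pretty = json.dumps(items, indent=2)
print(pretty)
```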

Enjoy! (Or read on for the rest of changes in this release.)

Deprecations and Backwards Incompatible Changes¶

  • Default to canonicalize=False in scrapy.linkextractors.LinkExtractor (issue 2537, fixes issue 1941 and issue 1982): warning, this is technically backwards-incompatible
  • Enable memusage extension by default (issue 2539, fixes issue 2187); this is technically backwards-incompatible so please check if you have any non-default MEMUSAGE_*** options set.
  • EDITOR environment variable now takes precedence over EDITOR option defined in settings.py (issue 1829); Scrapy default settings no longer depend on environment variables. This is technically a backwards incompatible change.
  • Spider.make_requests_from_url is deprecated (issue 1728, fixes issue 1495).

Cleanups & Refactoring¶

  • Tests: remove temp files and folders (issue 2570), fixed ProjectUtilsTest on OS X (issue 2569), use portable pypy for Linux on Travis CI (issue 2710)
  • Separate building request from _requests_to_follow in CrawlSpider (issue 2562)
  • Remove “Python 3 progress” badge (issue 2567)
  • Add a couple more lines to .gitignore (issue 2557)
  • Remove bumpversion prerelease configuration (issue 2159)
  • Add codecov.yml file (issue 2750)
  • Set context factory implementation based on Twisted version (issue 2577, fixes issue 2560)
  • Add omitted self arguments in default project middleware template (issue 2595)
  • Remove redundant slot.add_request() call in ExecutionEngine (issue 2617)
  • Catch more specific os.error exception in FSFilesStore (issue 2644)
  • Change “localhost” test server certificate (issue 2720)
  • Remove unused MEMUSAGE_REPORT setting (issue 2576)

Scrapy 1.3.0 (2016-12-21)¶

This release comes rather soon after 1.2.2 for one main reason: it was discovered that releases from 0.18 up to 1.2.2 (inclusive) use some backported code from Twisted (scrapy.xlib.tx.*), even if newer Twisted modules are available. Scrapy now uses twisted.web.client and twisted.internet.endpoints directly. (See also cleanups below.)

As it is a major change, we wanted to get the bug fix out quickly while not breaking any projects using the 1.2 series.

New Features¶

  • MailSender now accepts single strings as values for to and cc arguments (issue 2272)
  • scrapy fetch url, scrapy shell url and fetch(url) inside scrapy shell now follow HTTP redirections by default (issue 2290); see fetch and shell for details.
  • HttpErrorMiddleware now logs errors with INFO level instead of DEBUG; this is technically backwards incompatible so please check your log parsers.
  • By default, logger names now use a long-form path, e.g. [scrapy.extensions.logstats], instead of the shorter “top-level” variant of prior releases (e.g. [scrapy]); this is backwards incompatible if you have log parsers expecting the short logger name part. You can switch back to short logger names by setting LOG_SHORT_NAMES to True.
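If your log parsers expect the old short names, the opt-out would look like this in settings.py:

```python
# settings.py -- restore pre-1.3 short logger names like [scrapy]
LOG_SHORT_NAMES = True
```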

Dependencies & Cleanups¶

  • Scrapy now requires Twisted >= 13.1 which is the case for many Linux distributions already.
  • As a consequence, we got rid of scrapy.xlib.tx.* modules, which copied some of Twisted code for users stuck with an “old” Twisted version
  • ChunkedTransferMiddleware is deprecated and removed from the default downloader middlewares.

Scrapy 1.2.0 (2016-10-03)¶

New Features¶

  • New FEED_EXPORT_ENCODING setting to customize the encoding used when writing items to a file. This can be used to turn off \uXXXX escapes in JSON output. This is also useful for those wanting something other than UTF-8 for XML or CSV output (issue 2034).
  • startproject command now supports an optional destination directory to override the default one based on the project name (issue 2005).
  • New SCHEDULER_DEBUG setting to log requests serialization failures (issue 1610).
  • JSON encoder now supports serialization of set instances (issue 2058).
  • Interpret application/json-amazonui-streaming as TextResponse (issue 1503).
  • scrapy is imported by default when using shell tools (shell, inspect_response) (issue 2248).
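The \uXXXX escaping that FEED_EXPORT_ENCODING turns off is the same behavior as json's ensure_ascii flag; roughly (plain json here, not Scrapy's exporters):

```python
import json

item = {'title': 'café'}
escaped = json.dumps(item)                  # default: non-ASCII becomes \uXXXX escapes
raw = json.dumps(item, ensure_ascii=False)  # keeps the literal character, as UTF-8 export does
print(escaped)
print(raw)
```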

Bug fixes¶

  • DefaultRequestHeaders middleware now runs before UserAgent middleware (issue 2088). Warning: this is technically backwards incompatible, though we consider this a bug fix.
  • HTTP cache extension and plugins that use the .scrapy data directory now work outside projects (issue 1581). Warning: this is technically backwards incompatible, though we consider this a bug fix.
  • Selector does not allow passing both response and text anymore (issue 2153).
  • Fixed logging of wrong callback name with scrapy parse (issue 2169).
  • Fix for an odd gzip decompression bug (issue 1606).
  • Fix for selected callbacks when using CrawlSpider with scrapy parse (issue 2225).
  • Fix for invalid JSON and XML files when spider yields no items (issue 872).
  • Implement flush() for StreamLogger, avoiding a warning in logs (issue 2125).

Tests & Requirements¶

Scrapy’s new requirements baseline is Debian 8 “Jessie”. It was previously Ubuntu 12.04 Precise. In practice this means we run continuous integration tests with these minimum versions of the main packages: Twisted 14.0, pyOpenSSL 0.14, lxml 3.4.

Scrapy may very well work with older versions of these packages (the code base still has switches for older Twisted versions for example) but it is not guaranteed (because it’s not tested anymore).

Scrapy 1.1.0 (2016-05-11)¶

This 1.1 release brings a lot of interesting features and bug fixes:

  • Scrapy 1.1 has beta Python 3 support (requires Twisted >= 15.5). See Beta Python 3 Support for more details and some limitations.
  • Hot new features:
  • These bug fixes may require your attention:
    • Don’t retry bad requests (HTTP 400) by default (issue 1289). If you need the old behavior, add 400 to RETRY_HTTP_CODES.
    • Fix shell files argument handling (issue 1710, issue 1550). If you try scrapy shell index.html it will try to load the URL http://index.html; use scrapy shell ./index.html to load a local file.
    • Robots.txt compliance is now enabled by default for newly-created projects (issue 1724). Scrapy will also wait for robots.txt to be downloaded before proceeding with the crawl (issue 1735). If you want to disable this behavior, update ROBOTSTXT_OBEY in settings.py file after creating a new project.
    • Exporters now work on unicode, instead of bytes by default (issue 1080). If you use PythonItemExporter, you may want to update your code to disable binary mode which is now deprecated.
    • Accept XML node names containing dots as valid (issue 1533).
    • When uploading files or images to S3 (with FilesPipeline or ImagesPipeline), the default ACL policy is now “private” instead of “public”. Warning: backwards incompatible! You can use FILES_STORE_S3_ACL to change it.
    • We’ve reimplemented canonicalize_url() for more correct output, especially for URLs with non-ASCII characters (issue 1947). This could change link extractors’ output compared to previous Scrapy versions. It may also invalidate cache entries left over from pre-1.1 runs. Warning: backwards incompatible!

Keep reading for more details on other improvements and bug fixes.

Beta Python 3 Support¶

We have been hard at work to make Scrapy run on Python 3. As a result, now you can run spiders on Python 3.3, 3.4 and 3.5 (Twisted >= 15.5 required). Some features are still missing (and some may never be ported).

Almost all builtin extensions/middlewares are expected to work. However, we are aware of some limitations in Python 3:

  • Scrapy does not work on Windows with Python 3
  • Sending emails is not supported
  • FTP download handler is not supported
  • Telnet console is not supported

Additional New Features and Enhancements¶

Deprecations and Removals¶

  • Added to_bytes and to_unicode, deprecated str_to_unicode and unicode_to_str functions (issue 778).
  • binary_is_text is introduced, to replace use of isbinarytext (but with inverse return value) (issue 1851)
  • The optional_features set has been removed (issue 1359).
  • The --lsprof command line option has been removed (issue 1689). Warning: backward incompatible, but doesn’t break user code.
  • The following datatypes were deprecated (issue 1720):
    • scrapy.utils.datatypes.MultiValueDictKeyError
    • scrapy.utils.datatypes.MultiValueDict
    • scrapy.utils.datatypes.SiteNode
  • The previously bundled scrapy.xlib.pydispatch library was deprecated and replaced by pydispatcher.

Relocations¶

  • telnetconsole was relocated to extensions/ (issue 1524).

Scrapy 1.0.0 (2015-06-19)¶

You will find a lot of new features and bugfixes in this major release. Make sure to check our updated overview to get a glimpse of some of the changes, along with our refreshed tutorial.

Support for returning dictionaries in spiders¶

Declaring and returning Scrapy Items is no longer necessary to collect the scraped data from your spider; you can now return explicit dictionaries instead.

Classic version

class MyItem(scrapy.Item):
    url = scrapy.Field()

class MySpider(scrapy.Spider):
    def parse(self, response):
        return MyItem(url=response.url)

New version

class MySpider(scrapy.Spider):
    def parse(self, response):
        return {'url': response.url}

Per-spider settings (GSoC 2014)¶

The last Google Summer of Code project accomplished an important redesign of the mechanism used for populating settings, introducing explicit priorities to override any given setting. As an extension of that goal, we included a new priority level for settings that apply exclusively to a single spider, allowing them to redefine project settings.

Start using it by defining a custom_settings class variable in your spider:

class MySpider(scrapy.Spider):
    custom_settings = {
        "DOWNLOAD_DELAY": 5.0,
        "RETRY_ENABLED": False,
    }

Read more about settings population: Settings

Python Logging¶

Scrapy 1.0 has moved away from Twisted logging to Python’s built-in logging as the default logging system. We’re maintaining backward compatibility for most of the old custom interface for calling logging functions, but you’ll get warnings asking you to switch to the Python logging API entirely.

Old version

from scrapy import log
log.msg('MESSAGE', log.INFO)

New version

Logging with spiders remains the same, but on top of the log() method you’ll have access to a custom logger created for the spider to issue log events:

class MySpider(scrapy.Spider):
    def parse(self, response):
        self.logger.info('Response received')

Read more in the logging documentation: Logging

Crawler API refactoring (GSoC 2014)¶

Another milestone of the last Google Summer of Code was a refactoring of the internal API, seeking simpler and easier usage. Check the new core interface in: Core API

A common situation where you will face these changes is while running Scrapy from scripts. Here’s a quick example of how to run a Spider manually with the new API:

from scrapy.crawler import CrawlerProcess

process = CrawlerProcess({
    'USER_AGENT': 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)'
})
process.crawl(MySpider)
process.start()

Bear in mind this feature is still under development and its API may change until it reaches a stable status.

See more examples for scripts running Scrapy: Common Practices

Module Relocations¶

There’s been a large rearrangement of modules aimed at improving the general structure of Scrapy. The main changes were separating various subpackages into new projects and dissolving both scrapy.contrib and scrapy.contrib_exp into top-level packages. Backward compatibility was kept for the internal relocations; importing deprecated modules triggers warnings indicating their new location.

Full list of relocations¶

Outsourced packages

Note

These extensions went through some minor changes, e.g. some setting names were changed. Please check the documentation in each new repository to get familiar with the new usage.

scrapy.contrib_exp and scrapy.contrib dissolutions

Old location → New location
scrapy.contrib_exp.downloadermiddleware.decompression → scrapy.downloadermiddlewares.decompression
scrapy.contrib_exp.iterators → scrapy.utils.iterators
scrapy.contrib.downloadermiddleware → scrapy.downloadermiddlewares
scrapy.contrib.exporter → scrapy.exporters
scrapy.contrib.linkextractors → scrapy.linkextractors
scrapy.contrib.loader → scrapy.loader
scrapy.contrib.loader.processor → scrapy.loader.processors
scrapy.contrib.pipeline → scrapy.pipelines
scrapy.contrib.spidermiddleware → scrapy.spidermiddlewares
scrapy.contrib.spiders → scrapy.spiders

The following modules were all moved under scrapy.extensions.*:

  • scrapy.contrib.closespider
  • scrapy.contrib.corestats
  • scrapy.contrib.debug
  • scrapy.contrib.feedexport
  • scrapy.contrib.httpcache
  • scrapy.contrib.logstats
  • scrapy.contrib.memdebug
  • scrapy.contrib.memusage
  • scrapy.contrib.spiderstate
  • scrapy.contrib.statsmailer
  • scrapy.contrib.throttle

Plural renames and Modules unification

Old location → New location
scrapy.command → scrapy.commands
scrapy.dupefilter → scrapy.dupefilters
scrapy.linkextractor → scrapy.linkextractors
scrapy.spider → scrapy.spiders
scrapy.squeue → scrapy.squeues
scrapy.statscol → scrapy.statscollectors
scrapy.utils.decorator → scrapy.utils.decorators

Class renames

Old location → New location
scrapy.spidermanager.SpiderManager → scrapy.spiderloader.SpiderLoader

Settings renames

Old location → New location
SPIDER_MANAGER_CLASS → SPIDER_LOADER_CLASS

Changelog¶

New Features and Enhancements

Deprecations and Removals

Relocations

Documentation

Bugfixes

  • Item multi inheritance fix (issue 353, issue 1228)
  • ItemLoader.load_item: iterate over copy of fields (issue 722)
  • Fix Unhandled error in Deferred (RobotsTxtMiddleware) (issue 1131, issue 1197)
  • Force to read DOWNLOAD_TIMEOUT as int (issue 954)
  • scrapy.utils.misc.load_object should print full traceback (issue 902)
  • Fix bug for “.local” host name (issue 878)
  • Fix for Enabled extensions, middlewares, pipelines info not printed anymore (issue 879)
  • fix dont_merge_cookies bad behaviour when set to false on meta (issue 846)

Python 3 In Progress Support

  • disable scrapy.telnet if twisted.conch is not available (issue 1161)
  • fix Python 3 syntax errors in ajaxcrawl.py (issue 1162)
  • more python3 compatibility changes for urllib (issue 1121)
  • assertItemsEqual was renamed to assertCountEqual in Python 3. (issue 1070)
  • Import unittest.mock if available. (issue 1066)
  • updated deprecated cgi.parse_qsl to use six’s parse_qsl (issue 909)
  • Prevent Python 3 port regressions (issue 830)
  • PY3: use MutableMapping for python 3 (issue 810)
  • PY3: use six.BytesIO and six.moves.cStringIO (issue 803)
  • PY3: fix xmlrpclib and email imports (issue 801)
  • PY3: use six for robotparser and urlparse (issue 800)
  • PY3: use six.iterkeys, six.iteritems, and tempfile (issue 799)
  • PY3: fix has_key and use six.moves.configparser (issue 798)
  • PY3: use six.moves.cPickle (issue 797)
  • PY3 make it possible to run some tests in Python3 (issue 776)

Tests

  • remove unnecessary lines from py3-ignores (issue 1243)
  • Fix remaining warnings from pytest while collecting tests (issue 1206)
  • Add docs build to travis (issue 1234)
  • TST don’t collect tests from deprecated modules. (issue 1165)
  • install service_identity package in tests to prevent warnings (issue 1168)
  • Fix deprecated settings API in tests (issue 1152)
  • Add test for webclient with POST method and no body given (issue 1089)
  • py3-ignores.txt supports comments (issue 1044)
  • modernize some of the asserts (issue 835)
  • selector.__repr__ test (issue 779)

Code refactoring

  • CSVFeedSpider cleanup: use iterate_spider_output (issue 1079)
  • remove unnecessary check from scrapy.utils.spider.iter_spider_output (issue 1078)
  • Pydispatch pep8 (issue 992)
  • Removed unused ‘load=False’ parameter from walk_modules() (issue 871)
  • For consistency, use job_dir helper in SpiderState extension. (issue 805)
  • rename “sflo” local variables to less cryptic “log_observer” (issue 775)

Scrapy 0.20.0 (released 2013-11-08)¶

Enhancements¶

  • New Selector’s API including CSS selectors (issue 395 and issue 426),
  • Request/Response url/body attributes are now immutable (modifying them had been deprecated for a long time)
  • ITEM_PIPELINES is now defined as a dict (instead of a list)
  • Sitemap spider can fetch alternate URLs (issue 360)
  • Selector.remove_namespaces() now removes namespaces from elements’ attributes (issue 416).
  • Paved the road for Python 3.3+ (issue 435, issue 436, issue 431, issue 452)
  • New item exporter using native python types with nesting support (issue 366)
  • Tune HTTP1.1 pool size so it matches concurrency defined by settings (commit b43b5f575)
  • scrapy.mail.MailSender now can connect over TLS or upgrade using STARTTLS (issue 327)
  • New FilesPipeline with functionality factored out from ImagesPipeline (issue 370, issue 409)
  • Recommend Pillow instead of PIL for image handling (issue 317)
  • Added debian packages for Ubuntu quantal and raring (commit 86230c0)
  • Mock server (used for tests) can listen for HTTPS requests (issue 410)
  • Remove multi spider support from multiple core components (issue 422, issue 421, issue 420, issue 419, issue 423, issue 418)
  • Travis-CI now tests Scrapy changes against development versions of w3lib and queuelib python packages.
  • Add pypy 2.1 to continuous integration tests (commit ecfa7431)
  • Pylinted, pep8 and removed old-style exceptions from source (issue 430, issue 432)
  • Use importlib for parametric imports (issue 445)
  • Handle a regression introduced in Python 2.7.5 that affects XmlItemExporter (issue 372)
  • Bugfix crawling shutdown on SIGINT (issue 450)
  • Do not submit reset type inputs in FormRequest.from_response (commit b326b87)
  • Do not silence download errors when request errback raises an exception (commit 684cfc0)

Other¶

  • Dropped Python 2.6 support (issue 448)
  • Add cssselect python package as install dependency
  • Drop libxml2 and multi selector’s backend support, lxml is required from now on.
  • Minimum Twisted version increased to 10.0.0, dropped Twisted 8.0 support.
  • Running test suite now requires mock python library (issue 390)

Thanks¶

Thanks to everyone who contributed to this release!

List of contributors sorted by number of commits:

69 Daniel Graña <dangra@...>
37 Pablo Hoffman <pablo@...>
13 Mikhail Korobov <kmike84@...>
 9 Alex Cepoi <alex.cepoi@...>
 9 alexanderlukanin13 <alexander.lukanin.13@...>
 8 Rolando Espinoza La fuente <darkrho@...>
 8 Lukasz Biedrycki <lukasz.biedrycki@...>
 6 Nicolas Ramirez <nramirez.uy@...>
 3 Paul Tremberth <paul.tremberth@...>
 2 Martin Olveyra <molveyra@...>
 2 Stefan <misc@...>
 2 Rolando Espinoza <darkrho@...>
 2 Loren Davie <loren@...>
 2 irgmedeiros <irgmedeiros@...>
 1 Stefan Koch <taikano@...>
 1 Stefan <cct@...>
 1 scraperdragon <dragon@...>
 1 Kumara Tharmalingam <ktharmal@...>
 1 Francesco Piccinno <stack.box@...>
 1 Marcos Campal <duendex@...>
 1 Dragon Dave <dragon@...>
 1 Capi Etheriel <barraponto@...>
 1 cacovsky <amarquesferraz@...>
 1 Berend Iwema <berend@...>

Scrapy 0.18.0 (released 2013-08-09)¶

  • Lots of improvements to the test suite run using Tox, including a way to test on pypi
  • Handle GET parameters for AJAX crawlable urls (commit 3fe2a32)
  • Use lxml recover option to parse sitemaps (issue 347)
  • Bugfix cookie merging by hostname and not by netloc (issue 352)
  • Support disabling HttpCompressionMiddleware using a flag setting (issue 359)
  • Support xml namespaces using iternodes parser in XMLFeedSpider (issue 12)
  • Support dont_cache request meta flag (issue 19)
  • Bugfix scrapy.utils.gz.gunzip broken by changes in python 2.7.4 (commit 4dc76e)
  • Bugfix url encoding on SgmlLinkExtractor (issue 24)
  • Bugfix TakeFirst processor shouldn’t discard zero (0) value (issue 59)
  • Support nested items in xml exporter (issue 66)
  • Improve cookies handling performance (issue 77)
  • Log dupe filtered requests once (issue 105)
  • Split redirection middleware into status and meta based middlewares (issue 78)
  • Use HTTP1.1 as default downloader handler (issue 109 and issue 318)
  • Support xpath form selection on FormRequest.from_response (issue 185)
  • Bugfix unicode decoding error on SgmlLinkExtractor (issue 199)
  • Bugfix signal dispatching on the PyPy interpreter (issue 205)
  • Improve request delay and concurrency handling (issue 206)
  • Add RFC2616 cache policy to HttpCacheMiddleware (issue 212)
  • Allow customization of messages logged by engine (issue 214)
  • Multiple improvements to DjangoItem (issue 217, issue 218, issue 221)
  • Extend Scrapy commands using setuptools entry points (issue 260)
  • Allow spider allowed_domains value to be a set/tuple (issue 261)
  • Support settings.getdict (issue 269)
  • Simplify internal scrapy.core.scraper slot handling (issue 271)
  • Added Item.copy (issue 290)
  • Collect idle downloader slots (issue 297)
  • Add ftp:// scheme downloader handler (issue 329)
  • Added downloader benchmark webserver and spider tools Benchmarking
  • Moved persistent (on disk) queues to a separate project (queuelib) which scrapy now depends on
  • Add scrapy commands using external libraries (issue 260)
  • Added --pdb option to scrapy command line tool
  • Added XPathSelector.remove_namespaces(), which allows removing all namespaces from XML documents for convenience (to work with namespace-less XPaths). Documented in Selectors.
  • Several improvements to spider contracts
  • New default middleware named MetaRefreshMiddleware that handles meta-refresh html tag redirections
  • MetaRefreshMiddleware and RedirectMiddleware have different priorities to address #62
  • added from_crawler method to spiders
  • added system tests with mock server
  • more improvements to Mac OS compatibility (thanks Alex Cepoi)
  • several more cleanups to singletons and multi-spider support (thanks Nicolas Ramirez)
  • support custom download slots
  • added --spider option to “shell” command.
  • log overridden settings when scrapy starts

Thanks to everyone who contributed to this release. Here is a list of contributors sorted by number of commits:

130 Pablo Hoffman <pablo@...>
 97 Daniel Graña <dangra@...>
 20 Nicolás Ramírez <nramirez.uy@...>
 13 Mikhail Korobov <kmike84@...>
 12 Pedro Faustino <pedrobandim@...>
 11 Steven Almeroth <sroth77@...>
  5 Rolando Espinoza La fuente <darkrho@...>
  4 Michal Danilak <mimino.coder@...>
  4 Alex Cepoi <alex.cepoi@...>
  4 Alexandr N Zamaraev (aka tonal) <tonal@...>
  3 paul <paul.tremberth@...>
  3 Martin Olveyra <molveyra@...>
  3 Jordi Llonch <llonchj@...>
  3 arijitchakraborty <myself.arijit@...>
  2 Shane Evans <shane.evans@...>
  2 joehillen <joehillen@...>
  2 Hart <HartSimha@...>
  2 Dan <ellisd23@...>
  1 Zuhao Wan <wanzuhao@...>
  1 whodatninja <blake@...>
  1 vkrest <v.krestiannykov@...>
  1 tpeng <pengtaoo@...>
  1 Tom Mortimer-Jones <tom@...>
  1 Rocio Aramberri <roschegel@...>
  1 Pedro <pedro@...>
  1 notsobad <wangxiaohugg@...>
  1 Natan L <kuyanatan.nlao@...>
  1 Mark Grey <mark.grey@...>
  1 Luan <luanpab@...>
  1 Libor Nenadál <libor.nenadal@...>
  1 Juan M Uys <opyate@...>
  1 Jonas Brunsgaard <jonas.brunsgaard@...>
  1 Ilya Baryshev <baryshev@...>
  1 Hasnain Lakhani <m.hasnain.lakhani@...>
  1 Emanuel Schorsch <emschorsch@...>
  1 Chris Tilden <chris.tilden@...>
  1 Capi Etheriel <barraponto@...>
  1 cacovsky <amarquesferraz@...>
  1 Berend Iwema <berend@...>

Scrapy 0.12¶

The numbers like #NNN reference tickets in the old issue tracker (Trac) which is no longer available.

New features and improvements¶

  • Passed item is now sent in the item argument of the item_passed (#273)
  • Added verbose option to scrapy version command, useful for bug reports (#298)
  • HTTP cache now stored by default in the project data dir (#279)
  • Added project data storage directory (#276, #277)
  • Documented file structure of Scrapy projects (see command-line tool doc)
  • New lxml backend for XPath selectors (#147)
  • Per-spider settings (#245)
  • Support exit codes to signal errors in Scrapy commands (#248)
  • Added -c argument to scrapy shell command
  • Made libxml2 optional (#260)
  • New deploy command (#261)
  • Added CLOSESPIDER_PAGECOUNT setting (#253)
  • Added CLOSESPIDER_ERRORCOUNT setting (#254)

Scrapyd changes¶

  • Scrapyd now uses one process per spider
  • It stores one log file per spider run and rotates them, keeping the latest 5 logs per spider (by default)
  • A minimal web ui was added, available at http://localhost:6800 by default
  • There is now a scrapy server command to start a Scrapyd server of the current project

Changes to settings¶

  • added HTTPCACHE_ENABLED setting (False by default) to enable HTTP cache middleware
  • changed HTTPCACHE_EXPIRATION_SECS semantics: now zero means “never expire”.
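
The two settings above would live in a project's settings module; a minimal sketch (the values are illustrative, not defaults you must use):

```python
# Hypothetical settings.py fragment: turn on the HTTP cache middleware
# added in 0.12 and make cached responses permanent.
HTTPCACHE_ENABLED = True         # the cache middleware is off by default
HTTPCACHE_EXPIRATION_SECS = 0    # with the new semantics, 0 means "never expire"
```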

Deprecated/obsoleted functionality¶

  • Deprecated runserver command in favor of server command which starts a Scrapyd server. See also: Scrapyd changes
  • Deprecated queue command in favor of using Scrapyd schedule.json API. See also: Scrapyd changes
  • Removed the LxmlItemLoader (experimental contrib which never graduated to main contrib)

Scrapy 0.10¶

The numbers like #NNN reference tickets in the old issue tracker (Trac) which is no longer available.

New features and improvements¶

  • New Scrapy service called scrapyd for deploying Scrapy crawlers in production (#218) (documentation available)
  • Simplified Images pipeline usage, which no longer requires subclassing your own images pipeline (#217)
  • Scrapy shell now shows the Scrapy log by default (#206)
  • Refactored execution queue in a common base code and pluggable backends called “spider queues” (#220)
  • New persistent spider queue (based on SQLite) (#198), available by default, which allows starting Scrapy in server mode and then scheduling spiders to run.
  • Added documentation for Scrapy command-line tool and all its available sub-commands. (documentation available)
  • Feed exporters with pluggable backends (#197) (documentation available)
  • Deferred signals (#193)
  • Added two new methods to item pipeline open_spider(), close_spider() with deferred support (#195)
  • Support for overriding default request headers per spider (#181)
  • Replaced default Spider Manager with one with similar functionality but not depending on Twisted Plugins (#186)
  • Split the Debian package into two packages: the library and the service (#187)
  • Scrapy log refactoring (#188)
  • New extension for keeping persistent spider contexts among different runs (#203)
  • Added dont_redirect request.meta key for avoiding redirects (#233)
  • Added dont_retry request.meta key for avoiding retries (#234)
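
The two request.meta keys above are plain per-request flags. A minimal sketch of how they might be set (the URL and callback in the comment are illustrative; in a real spider the dict is passed to the Request constructor):

```python
# Sketch: per-request flags from the changelog, read by the corresponding
# downloader middlewares when the request is processed.
meta = {
    "dont_redirect": True,  # skip redirect handling for this request
    "dont_retry": True,     # do not retry this request on failure
}
# In a spider callback (scrapy assumed installed) you would write e.g.:
#   yield Request("http://example.com/page", meta=meta, callback=self.parse_page)
```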

Command-line tool changes¶

  • New scrapy command which replaces the old scrapy-ctl.py (#199): there is now only one global scrapy command, instead of one scrapy-ctl.py per project
  • Added scrapy.bat script for running Scrapy more conveniently from Windows
  • Added bash completion to command-line tool (#210)
  • Renamed command start to runserver (#209)

API changes¶

  • url and body attributes of Request objects are now read-only (#230)
  • Request.copy() and Request.replace() now also copy their callback and errback attributes (#231)
  • Removed UrlFilterMiddleware from scrapy.contrib (already disabled by default)
  • Offsite middleware doesn’t filter out any request coming from a spider that doesn’t have an allowed_domains attribute (#225)
  • Removed Spider Manager load() method. Now spiders are loaded in the constructor itself.
  • Changes to Scrapy Manager (now called “Crawler”):
    • scrapy.core.manager.ScrapyManager class renamed to scrapy.crawler.Crawler
    • scrapy.core.manager.scrapymanager singleton moved to scrapy.project.crawler
  • Moved module: scrapy.contrib.spidermanager to scrapy.spidermanager
  • Spider Manager singleton moved from scrapy.spider.spiders to the spiders attribute of the scrapy.project.crawler singleton.
  • moved Stats Collector classes: (#204)
    • scrapy.stats.collector.StatsCollector to scrapy.statscol.StatsCollector
    • scrapy.stats.collector.SimpledbStatsCollector to scrapy.contrib.statscol.SimpledbStatsCollector
  • default per-command settings are now specified in the default_settings attribute of command object class (#201)
  • changed arguments of Item pipeline process_item() method from (spider, item) to (item, spider)
    • backwards compatibility kept (with deprecation warning)
  • moved scrapy.core.signals module to scrapy.signals
    • backwards compatibility kept (with deprecation warning)
  • moved scrapy.core.exceptions module to scrapy.exceptions
    • backwards compatibility kept (with deprecation warning)
  • added handles_request() class method to BaseSpider
  • dropped scrapy.log.exc() function (use scrapy.log.err() instead)
  • dropped component argument of scrapy.log.msg() function
  • dropped scrapy.log.log_level attribute
  • Added from_settings() class methods to Spider Manager, and Item Pipeline Manager
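
The process_item() argument swap noted above can be illustrated with a plain pipeline class (pipelines need no base class; the pipeline name and price logic here are made up):

```python
class PriceRoundingPipeline:
    """Hypothetical pipeline using the new 0.10 argument order."""

    def process_item(self, item, spider):  # previously (self, spider, item)
        # Normalize a field, then return the item so later pipelines receive it.
        if "price" in item:
            item["price"] = round(float(item["price"]), 2)
        return item
```

Per the note above, the old (spider, item) order keeps working in 0.10, but with a deprecation warning.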

Changes to settings¶

  • Added HTTPCACHE_IGNORE_SCHEMES setting to ignore certain schemes on HttpCacheMiddleware (#225)
  • Added SPIDER_QUEUE_CLASS setting which defines the spider queue to use (#220)
  • Added KEEP_ALIVE setting (#220)
  • Removed SERVICE_QUEUE setting (#220)
  • Removed COMMANDS_SETTINGS_MODULE setting (#201)
  • Renamed REQUEST_HANDLERS to DOWNLOAD_HANDLERS and made download handlers classes (instead of functions)

Scrapy 0.9¶

The numbers like #NNN reference tickets in the old issue tracker (Trac) which is no longer available.

New features and improvements¶

  • Added SMTP-AUTH support to scrapy.mail
  • New settings added: MAIL_USER, MAIL_PASS (r2065 | #149)
  • Added new scrapy-ctl view command - To view URL in the browser, as seen by Scrapy (r2039)
  • Added web service for controlling the Scrapy process (this also deprecates the web console) (r2053 | #167)
  • Support for running Scrapy as a service, for production systems (r1988, r2054, r2055, r2056, r2057 | #168)
  • Added wrapper induction library (documentation only available in source code for now). (r2011)
  • Simplified and improved response encoding support (r1961, r1969)
  • Added LOG_ENCODING setting (r1956, documentation available)
  • Added RANDOMIZE_DOWNLOAD_DELAY setting (enabled by default) (r1923, doc available)
  • MailSender is no longer IO-blocking (r1955 | #146)
  • Link extractors and the new CrawlSpider now handle relative base tag URLs (r1960 | #148)
  • Several improvements to Item Loaders and processors (r2022, r2023, r2024, r2025, r2026, r2027, r2028, r2029, r2030)
  • Added support for adding variables to telnet console (r2047 | #165)
  • Support for requests without callbacks (r2050 | #166)

API changes¶

  • Change Spider.domain_name to Spider.name (SEP-012, r1975)
  • Response.encoding is now the detected encoding (r1961)
  • HttpErrorMiddleware now returns None or raises an exception (r2006 | #157)
  • scrapy.command modules relocation (r2035, r2036, r2037)
  • Added ExecutionQueue for feeding spiders to scrape (r2034)
  • Removed ExecutionEngine singleton (r2039)
  • Ported S3ImagesStore (images pipeline) to use boto and threads (r2033)
  • Moved module: scrapy.management.telnet to scrapy.telnet (r2047)
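
The SEP-012 rename above changes how a spider identifies itself; a minimal sketch, where a plain class stands in for scrapy's BaseSpider:

```python
# Before 0.9 a spider was identified by its domain:
#   class ExampleSpider(BaseSpider):
#       domain_name = "example.com"
# From 0.9 on, the identifier is `name`:
class ExampleSpider:  # stands in for scrapy's BaseSpider
    name = "example"  # illustrative; previously domain_name = "example.com"
```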

Scrapy 0.8¶

The numbers like #NNN reference tickets in the old issue tracker (Trac) which is no longer available.

New features¶

  • Added DEFAULT_RESPONSE_ENCODING setting (r1809)
  • Added dont_click argument to FormRequest.from_response() method (r1813, r1816)
  • Added clickdata argument to FormRequest.from_response() method (r1802, r1803)
  • Added support for HTTP proxies (HttpProxyMiddleware) (r1781, r1785)
  • Offsite spider middleware now logs messages when filtering out requests (r1841)

Backwards-incompatible changes¶

  • Changed scrapy.utils.response.get_meta_refresh() signature (r1804)
  • Removed deprecated scrapy.item.ScrapedItem class - use scrapy.item.Item instead (r1838)
  • Removed deprecated scrapy.xpath module - use scrapy.selector instead. (r1836)
  • Removed deprecated core.signals.domain_open signal - use core.signals.domain_opened instead (r1822)
  • log.msg() now receives a spider argument (r1822)
    • Old domain argument has been deprecated and will be removed in 0.9. For spiders, you should always use the spider argument and pass spider references. If you really want to pass a string, use the component argument instead.
  • Changed core signals domain_opened, domain_closed, domain_idle
  • Changed Item pipeline to use spiders instead of domains
    • The domain argument of process_item() item pipeline method was changed to spider, the new signature is: process_item(spider, item) (r1827 | #105)
    • To quickly port your code (to work with Scrapy 0.8) just use spider.domain_name where you previously used domain.
  • Changed Stats API to use spiders instead of domains (r1849 | #113)
    • StatsCollector was changed to receive spider references (instead of domains) in its methods (set_value, inc_value, etc).
    • added StatsCollector.iter_spider_stats() method
    • removed StatsCollector.list_domains() method
    • Also, Stats signals were renamed and now pass around spider references (instead of domains).
    • To quickly port your code (to work with Scrapy 0.8) just use spider.domain_name where you previously used domain. spider_stats contains exactly the same data as domain_stats.
  • CloseDomain extension moved to scrapy.contrib.closespider.CloseSpider (r1833)
    • Its settings were also renamed:
      • CLOSEDOMAIN_TIMEOUT to CLOSESPIDER_TIMEOUT
      • CLOSEDOMAIN_ITEMCOUNT to CLOSESPIDER_ITEMCOUNT
  • Removed deprecated SCRAPYSETTINGS_MODULE environment variable - use SCRAPY_SETTINGS_MODULE instead (r1840)
  • Renamed setting: REQUESTS_PER_DOMAIN to CONCURRENT_REQUESTS_PER_SPIDER (r1830, r1844)
  • Renamed setting: CONCURRENT_DOMAINS to CONCURRENT_SPIDERS (r1830)
  • Refactored HTTP Cache middleware
    • HTTP Cache middleware has been heavily refactored, retaining the same functionality except for the domain sectorization, which was removed. (r1843)
  • Renamed exception: DontCloseDomain to DontCloseSpider (r1859 | #120)
  • Renamed extension: DelayedCloseDomain to SpiderCloseDelay (r1861 | #121)
  • Removed obsolete scrapy.utils.markup.remove_escape_chars function - use scrapy.utils.markup.replace_escape_chars instead (r1865)
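
The quick-port advice above (use spider.domain_name where the domain string was used before) can be sketched for an item pipeline; the pipeline and spider classes here are hypothetical stand-ins, and note the (spider, item) argument order of this release:

```python
class TagSourcePipeline:
    """Hypothetical pipeline ported to the 0.8 API."""

    def process_item(self, spider, item):
        # Before 0.8: process_item(self, domain, item); item["source"] = domain
        item["source"] = spider.domain_name
        return item


class DummySpider:
    """Minimal stand-in for a spider object, for demonstration only."""
    domain_name = "example.com"
```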
