eventlet.green.urllib package¶
Submodules¶
eventlet.green.urllib.error module¶
Exception classes raised by urllib.
The base exception class is URLError, which inherits from OSError. It doesn't define any behavior of its own, but is the base class for all exceptions defined in this package.
HTTPError is an exception class that is also a valid HTTP response instance. It behaves this way because HTTP protocol errors are valid responses, with a status code, headers, and a body. In some contexts, an application may want to handle an exception like a regular response.
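This dual nature can be shown without any network access by constructing an HTTPError by hand; the sketch below uses the stdlib urllib.error, whose classes the green module re-exports, and the URL, status, and body are made-up values:

```python
import io
import urllib.error
from email.message import Message

# Build an HTTPError directly (hypothetical URL and payload).
hdrs = Message()
hdrs["Content-Type"] = "text/plain"
err = urllib.error.HTTPError(
    "http://example.com/missing", 404, "Not Found", hdrs, io.BytesIO(b"gone")
)

# The exception doubles as a response object:
print(err.code)    # 404
print(err.reason)  # Not Found
body = err.read()
print(body)        # b'gone'
```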
- exception eventlet.green.urllib.error.ContentTooShortError(message, content)¶
Bases: URLError
Exception raised when downloaded size does not match content-length.
- exception eventlet.green.urllib.error.HTTPError(url, code, msg, hdrs, fp)¶
Bases: URLError, addinfourl
Raised when an HTTP error occurs, but also acts like a non-error response.
- property headers¶
- property reason¶
eventlet.green.urllib.parse module¶
Parse (absolute and relative) URLs.
The urlparse module is based upon the following RFC specifications:
RFC 3986 (STD66): "Uniform Resource Identifiers" by T. Berners-Lee, R. Fielding and L. Masinter, January 2005.
RFC 2732: "Format for Literal IPv6 Addresses in URL's" by R. Hinden, B. Carpenter and L. Masinter, December 1999.
RFC 2396: "Uniform Resource Identifiers (URI)": Generic Syntax by T. Berners-Lee, R. Fielding, and L. Masinter, August 1998.
RFC 2368: "The mailto URL scheme", by P. Hoffman, L. Masinter, J. Zawinski, July 1998.
RFC 1808: "Relative Uniform Resource Locators", by R. Fielding, UC Irvine, June 1995.
RFC 1738: "Uniform Resource Locators (URL)" by T. Berners-Lee, L. Masinter, M. McCahill, December 1994.
RFC 3986 is considered the current standard, and any future changes to the urlparse module should conform to it. The urlparse module is currently not entirely compliant with this RFC due to de facto scenarios for parsing, and for backward-compatibility purposes some parsing quirks from older RFCs are retained. The test cases in test_urlparse.py provide a good indicator of parsing behavior.
The WHATWG URL Parser spec should also be considered. We are not compliant with it either due to existing user code API behavior expectations (Hyrum's Law). It serves as a useful guide when making changes.
- class eventlet.green.urllib.parse.DefragResult(url, fragment)¶
Bases: DefragResult, _ResultMixinStr
- geturl()¶
- class eventlet.green.urllib.parse.DefragResultBytes(url, fragment)¶
Bases: DefragResult, _ResultMixinBytes
- geturl()¶
- class eventlet.green.urllib.parse.ParseResult(scheme, netloc, path, params, query, fragment)¶
Bases: ParseResult, _NetlocResultMixinStr
- geturl()¶
- class eventlet.green.urllib.parse.ParseResultBytes(scheme, netloc, path, params, query, fragment)¶
Bases: ParseResult, _NetlocResultMixinBytes
- geturl()¶
- class eventlet.green.urllib.parse.SplitResult(scheme, netloc, path, query, fragment)¶
Bases: SplitResult, _NetlocResultMixinStr
- geturl()¶
- class eventlet.green.urllib.parse.SplitResultBytes(scheme, netloc, path, query, fragment)¶
Bases: SplitResult, _NetlocResultMixinBytes
- geturl()¶
- eventlet.green.urllib.parse.parse_qs(qs, keep_blank_values=False, strict_parsing=False, encoding='utf-8', errors='replace', max_num_fields=None, separator='&')¶
Parse a query given as a string argument.
Arguments:
qs: percent-encoded query string to be parsed
- keep_blank_values: flag indicating whether blank values in
percent-encoded queries should be treated as blank strings. A true value indicates that blanks should be retained as blank strings. The default false value indicates that blank values are to be ignored and treated as if they were not included.
- strict_parsing: flag indicating what to do with parsing errors.
If false (the default), errors are silently ignored. If true, errors raise a ValueError exception.
- encoding and errors: specify how to decode percent-encoded sequences
into Unicode characters, as accepted by the bytes.decode() method.
- max_num_fields: int. If set, then throws a ValueError if there
are more than n fields read by parse_qsl().
- separator: str. The symbol to use for separating the query arguments.
Defaults to &.
Returns a dictionary.
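A short illustration using the stdlib urllib.parse, whose API the green module mirrors:

```python
from urllib.parse import parse_qs

# Repeated keys are collected into lists; blank values are
# dropped unless keep_blank_values is true.
d1 = parse_qs("a=1&a=2&b=&c=3")
d2 = parse_qs("a=1&b=", keep_blank_values=True)
print(d1)  # {'a': ['1', '2'], 'c': ['3']}
print(d2)  # {'a': ['1'], 'b': ['']}
```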
- eventlet.green.urllib.parse.parse_qsl(qs, keep_blank_values=False, strict_parsing=False, encoding='utf-8', errors='replace', max_num_fields=None, separator='&')¶
Parse a query given as a string argument.
Arguments:
qs: percent-encoded query string to be parsed
- keep_blank_values: flag indicating whether blank values in
percent-encoded queries should be treated as blank strings. A true value indicates that blanks should be retained as blank strings. The default false value indicates that blank values are to be ignored and treated as if they were not included.
- strict_parsing: flag indicating what to do with parsing errors. If
false (the default), errors are silently ignored. If true, errors raise a ValueError exception.
- encoding and errors: specify how to decode percent-encoded sequences
into Unicode characters, as accepted by the bytes.decode() method.
- max_num_fields: int. If set, then throws a ValueError
if there are more than n fields read by parse_qsl().
- separator: str. The symbol to use for separating the query arguments.
Defaults to &.
Returns a list, as G-d intended.
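In contrast to parse_qs, the list form preserves order and duplicates; a quick sketch via the stdlib urllib.parse:

```python
from urllib.parse import parse_qsl

# Each name=value pair becomes a tuple, in input order.
pairs = parse_qsl("a=1&a=2&c=3")
print(pairs)  # [('a', '1'), ('a', '2'), ('c', '3')]
```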
- eventlet.green.urllib.parse.quote(string, safe='/', encoding=None, errors=None)¶
quote('abc def') -> 'abc%20def'
Each part of a URL, e.g. the path info, the query, etc., has a different set of reserved characters that must be quoted. The quote function offers a cautious (not minimal) way to quote a string for most of these parts.
RFC 3986 Uniform Resource Identifier (URI): Generic Syntax lists the following (un)reserved characters.
unreserved = ALPHA / DIGIT / "-" / "." / "_" / "~"
reserved = gen-delims / sub-delims
gen-delims = ":" / "/" / "?" / "#" / "[" / "]" / "@"
sub-delims = "!" / "$" / "&" / "'" / "(" / ")" / "*" / "+" / "," / ";" / "="
Each of the reserved characters is reserved in some component of a URL, but not necessarily in all of them.
The quote function %-escapes all characters that are neither in the unreserved chars ("always safe") nor the additional chars set via the safe arg.
The default for the safe arg is '/'. The character is reserved, but in typical usage the quote function is being called on a path where the existing slash characters are to be preserved.
Python 3.7 updated from RFC 2396 to RFC 3986 for quoting URL strings; "~" is now included in the set of unreserved characters.
string and safe may be either str or bytes objects. encoding and errors must not be specified if string is a bytes object.
The optional encoding and errors parameters specify how to deal with non-ASCII characters, as accepted by the str.encode method. By default, encoding='utf-8' (characters are encoded with UTF-8), and errors='strict' (unsupported characters raise a UnicodeEncodeError).
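The behavior of the safe argument and the quote_plus variant can be sketched with the stdlib urllib.parse:

```python
from urllib.parse import quote, quote_plus

print(quote("abc def"))       # abc%20def
print(quote("/a path/x"))     # /a%20path/x  ('/' is safe by default)
print(quote("a/b", safe=""))  # a%2Fb        (no characters exempted)
print(quote_plus("a b+c"))    # a+b%2Bc      (space -> '+', '+' escaped)
```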
- eventlet.green.urllib.parse.quote_from_bytes(bs, safe='/')¶
Like quote(), but accepts a bytes object rather than a str, and does not perform string-to-bytes encoding. It always returns an ASCII string. quote_from_bytes(b'abc def?') -> 'abc%20def%3f'
- eventlet.green.urllib.parse.quote_plus(string, safe='', encoding=None, errors=None)¶
Like quote(), but also replace ' ' with '+', as required for quoting HTML form values. Plus signs in the original string are escaped unless they are included in safe. It also does not have safe default to '/'.
- eventlet.green.urllib.parse.unquote(string, encoding='utf-8', errors='replace')¶
Replace %xx escapes by their single-character equivalent. The optional encoding and errors parameters specify how to decode percent-encoded sequences into Unicode characters, as accepted by the bytes.decode() method. By default, percent-encoded sequences are decoded with UTF-8, and invalid sequences are replaced by a placeholder character.
unquote('abc%20def') -> 'abc def'.
- eventlet.green.urllib.parse.unquote_plus(string, encoding='utf-8', errors='replace')¶
Like unquote(), but also replace plus signs by spaces, as required for unquoting HTML form values.
unquote_plus('%7e/abc+def') -> '~/abc def'
- eventlet.green.urllib.parse.unquote_to_bytes(string)¶
Replace %xx escapes by their single-character equivalent.
unquote_to_bytes('abc%20def') -> b'abc def'
- eventlet.green.urllib.parse.urldefrag(url)¶
Removes any existing fragment from a URL.
Returns a tuple of the defragmented URL and the fragment. If the URL contains no fragment, the second element is the empty string.
- eventlet.green.urllib.parse.urlencode(query, doseq=False, safe='', encoding=None, errors=None, quote_via=<function quote_plus>)¶
Encode a dict or sequence of two-element tuples into a URL query string.
If any values in the query arg are sequences and doseq is true, each sequence element is converted to a separate parameter.
If the query arg is a sequence of two-element tuples, the order of the parameters in the output will match the order of parameters in the input.
The components of a query arg may each be either a string or a bytes type.
The safe, encoding, and errors parameters are passed down to the function specified by quote_via (encoding and errors only if a component is a str).
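A minimal sketch of the doseq behavior, using the stdlib urllib.parse:

```python
from urllib.parse import urlencode

# Plain dict: each value is quoted as a single parameter.
print(urlencode({"q": "green threads", "page": 2}))
# q=green+threads&page=2

# With doseq=True, sequence values expand to repeated parameters.
print(urlencode({"tag": ["a", "b"]}, doseq=True))
# tag=a&tag=b
```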
- eventlet.green.urllib.parse.urljoin(base, url, allow_fragments=True)¶
Join a base URL and a possibly relative URL to form an absolute interpretation of the latter.
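A few representative joins (stdlib urllib.parse; the hostnames are placeholders):

```python
from urllib.parse import urljoin

base = "https://example.com/a/b.html"
print(urljoin(base, "c.html"))   # https://example.com/a/c.html
print(urljoin(base, "/c.html"))  # https://example.com/c.html
print(urljoin("https://example.com/a/", "../d"))  # https://example.com/d
```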
- eventlet.green.urllib.parse.urlparse(url, scheme='', allow_fragments=True)¶
Parse a URL into 6 components: <scheme>://<netloc>/<path>;<params>?<query>#<fragment>
The result is a named 6-tuple with fields corresponding to the above. It is either a ParseResult or ParseResultBytes object, depending on the type of the url parameter.
The username, password, hostname, and port sub-components of netloc can also be accessed as attributes of the returned object.
The scheme argument provides the default value of the scheme component when no scheme is found in url.
If allow_fragments is False, no attempt is made to separate the fragment component from the previous component, which can be either path or query.
Note that % escapes are not expanded.
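The six components and the netloc sub-attributes can be seen on a contrived URL (stdlib urllib.parse, same API as the green module):

```python
from urllib.parse import urlparse

p = urlparse("https://user:pw@example.com:8080/path;v=1?q=x#frag")
print(p.scheme, p.path, p.params, p.query, p.fragment)
# https /path v=1 q=x frag
print(p.netloc)    # user:pw@example.com:8080
print(p.hostname)  # example.com
print(p.port)      # 8080
```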
- eventlet.green.urllib.parse.urlsplit(url, scheme='', allow_fragments=True)¶
Parse a URL into 5 components: <scheme>://<netloc>/<path>?<query>#<fragment>
The result is a named 5-tuple with fields corresponding to the above. It is either a SplitResult or SplitResultBytes object, depending on the type of the url parameter.
The username, password, hostname, and port sub-components of netloc can also be accessed as attributes of the returned object.
The scheme argument provides the default value of the scheme component when no scheme is found in url.
If allow_fragments is False, no attempt is made to separate the fragment component from the previous component, which can be either path or query.
Note that % escapes are not expanded.
- eventlet.green.urllib.parse.urlunparse(components)¶
Put a parsed URL back together again. This may result in a slightly different, but equivalent URL, if the URL that was parsed originally had redundant delimiters, e.g. a ? with an empty query (the draft states that these are equivalent).
- eventlet.green.urllib.parse.urlunsplit(components)¶
Combine the elements of a tuple as returned by urlsplit() into a complete URL as a string. The data argument can be any five-item iterable. This may result in a slightly different, but equivalent URL, if the URL that was parsed originally had unnecessary delimiters (for example, a ? with an empty query; the RFC states that these are equivalent).
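The split/unsplit round trip, sketched with the stdlib urllib.parse:

```python
from urllib.parse import urlsplit, urlunsplit

parts = urlsplit("https://example.com/a?x=1#top")
# Round-tripping an already-normalized URL reproduces it exactly.
print(urlunsplit(parts))  # https://example.com/a?x=1#top

# Empty query and fragment produce no delimiters at all.
print(urlunsplit(("https", "example.com", "/a", "", "")))  # https://example.com/a
```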
eventlet.green.urllib.request module¶
An extensible library for opening URLs using a variety of protocols
The simplest way to use this module is to call the urlopen function, which accepts a string containing a URL or a Request object (described below). It opens the URL and returns the results as file-like object; the returned object has some extra methods described below.
The OpenerDirector manages a collection of Handler objects that do all the actual work. Each Handler implements a particular protocol or option. The OpenerDirector is a composite object that invokes the Handlers needed to open the requested URL. For example, the HTTPHandler performs HTTP GET and POST requests and deals with non-error returns. The HTTPRedirectHandler automatically deals with HTTP 301, 302, 303, 307, and 308 redirect errors, and the HTTPDigestAuthHandler deals with digest authentication.
urlopen(url, data=None) -- Basic usage is the same as the original urllib. Pass the URL and optionally the data to POST to an HTTP URL, and get a file-like object back. One difference is that you can also pass a Request instance instead of a URL. Raises a URLError (subclass of OSError); for HTTP errors, raises an HTTPError, which can also be treated as a valid response.
build_opener -- Function that creates a new OpenerDirector instance. Will install the default handlers. Accepts one or more Handlers as arguments, either instances or Handler classes that it will instantiate. If one of the arguments is a subclass of a default handler, the argument will be installed instead of the default.
install_opener -- Installs a new opener as the default opener.
objects of interest:
OpenerDirector -- Sets up the User Agent as the Python-urllib client and manages the Handler classes, while dealing with requests and responses.
Request -- An object that encapsulates the state of a request. The state can be as simple as the URL. It can also include extra HTTP headers, e.g. a User-Agent.
BaseHandler --
internals: BaseHandler and parent _call_chain conventions
Example usage:
import urllib.request

# set up authentication info
authinfo = urllib.request.HTTPBasicAuthHandler()
authinfo.add_password(realm='PDQ Application',
                      uri='https://mahler:8092/site-updates.py',
                      user='klem',
                      passwd='geheim$parole')

proxy_support = urllib.request.ProxyHandler({"http": "http://ahad-haam:3128"})

# build a new opener that adds authentication and caching FTP handlers
opener = urllib.request.build_opener(proxy_support, authinfo,
                                     urllib.request.CacheFTPHandler)

# install it
urllib.request.install_opener(opener)

f = urllib.request.urlopen('https://www.python.org/')
- class eventlet.green.urllib.request.AbstractBasicAuthHandler(password_mgr=None)¶
Bases: object
- http_error_auth_reqed(authreq, host, req, headers)¶
- http_request(req)¶
- http_response(req, response)¶
- https_request(req)¶
- https_response(req, response)¶
- retry_http_basic_auth(host, req, realm)¶
- rx = re.compile('(?:^|,)[ \t]*([^ \t,]+)[ \t]+realm=(["\']?)([^"\']*)\\2', re.IGNORECASE)¶
- class eventlet.green.urllib.request.AbstractDigestAuthHandler(passwd=None)¶
Bases: object
- get_algorithm_impls(algorithm)¶
- get_authorization(req, chal)¶
- get_cnonce(nonce)¶
- get_entity_digest(data, chal)¶
- http_error_auth_reqed(auth_header, host, req, headers)¶
- reset_retry_count()¶
- retry_http_digest_auth(req, auth)¶
- class eventlet.green.urllib.request.BaseHandler¶
Bases: object
- add_parent(parent)¶
- close()¶
- handler_order = 500¶
- class eventlet.green.urllib.request.CacheFTPHandler¶
Bases: FTPHandler
- check_cache()¶
- clear_cache()¶
- connect_ftp(user, passwd, host, port, dirs, timeout)¶
- setMaxConns(m)¶
- setTimeout(t)¶
- class eventlet.green.urllib.request.DataHandler¶
Bases: BaseHandler
- data_open(req)¶
- class eventlet.green.urllib.request.FTPHandler¶
Bases: BaseHandler
- connect_ftp(user, passwd, host, port, dirs, timeout)¶
- ftp_open(**kw)¶
- class eventlet.green.urllib.request.FancyURLopener(*args, **kwargs)¶
Bases: URLopener
Derived class with handlers for errors we can handle (perhaps).
- get_user_passwd(host, realm, clear_cache=0)¶
- http_error_301(url, fp, errcode, errmsg, headers, data=None)¶
Error 301 -- also relocated (permanently).
- http_error_302(url, fp, errcode, errmsg, headers, data=None)¶
Error 302 -- relocated (temporarily).
- http_error_303(url, fp, errcode, errmsg, headers, data=None)¶
Error 303 -- also relocated (essentially identical to 302).
- http_error_307(url, fp, errcode, errmsg, headers, data=None)¶
Error 307 -- relocated, but turn POST into error.
- http_error_308(url, fp, errcode, errmsg, headers, data=None)¶
Error 308 -- relocated, but turn POST into error.
- http_error_401(url, fp, errcode, errmsg, headers, data=None, retry=False)¶
Error 401 -- authentication required. This function supports Basic authentication only.
- http_error_407(url, fp, errcode, errmsg, headers, data=None, retry=False)¶
Error 407 -- proxy authentication required. This function supports Basic authentication only.
- http_error_default(url, fp, errcode, errmsg, headers)¶
Default error handling -- don't raise an exception.
- prompt_user_passwd(host, realm)¶
Override this in a GUI environment!
- redirect_internal(url, fp, errcode, errmsg, headers, data)¶
- retry_http_basic_auth(url, realm, data=None)¶
- retry_https_basic_auth(url, realm, data=None)¶
- retry_proxy_http_basic_auth(url, realm, data=None)¶
- retry_proxy_https_basic_auth(url, realm, data=None)¶
- class eventlet.green.urllib.request.FileHandler¶
Bases: BaseHandler
- file_open(req)¶
- get_names()¶
- names = None¶
- open_local_file(req)¶
- class eventlet.green.urllib.request.HTTPBasicAuthHandler(password_mgr=None)¶
Bases: AbstractBasicAuthHandler, BaseHandler
- auth_header = 'Authorization'¶
- http_error_401(req, fp, code, msg, headers)¶
- class eventlet.green.urllib.request.HTTPCookieProcessor(cookiejar=None)¶
Bases: BaseHandler
- http_request(request)¶
- http_response(request, response)¶
- https_request(request)¶
- https_response(request, response)¶
- class eventlet.green.urllib.request.HTTPDefaultErrorHandler¶
Bases: BaseHandler
- http_error_default(req, fp, code, msg, hdrs)¶
- class eventlet.green.urllib.request.HTTPDigestAuthHandler(passwd=None)¶
Bases: BaseHandler, AbstractDigestAuthHandler
An authentication protocol defined by RFC 2069
Digest authentication improves on basic authentication because it does not transmit passwords in the clear.
- auth_header = 'Authorization'¶
- handler_order = 490¶
- http_error_401(req, fp, code, msg, headers)¶
- class eventlet.green.urllib.request.HTTPErrorProcessor¶
Bases: BaseHandler
Process HTTP error responses.
- handler_order = 1000¶
- http_response(request, response)¶
- https_response(request, response)¶
- class eventlet.green.urllib.request.HTTPHandler(debuglevel=None)¶
Bases: AbstractHTTPHandler
- http_open(req)¶
- http_request(request)¶
- class eventlet.green.urllib.request.HTTPPasswordMgr¶
Bases: object
- add_password(realm, uri, user, passwd)¶
- find_user_password(realm, authuri)¶
- is_suburi(base, test)¶
Check if test is below base in a URI tree
Both args must be URIs in reduced form.
- reduce_uri(uri, default_port=True)¶
Accept authority or URI and extract only the authority and path.
- class eventlet.green.urllib.request.HTTPPasswordMgrWithDefaultRealm¶
Bases: HTTPPasswordMgr
- find_user_password(realm, authuri)¶
- class eventlet.green.urllib.request.HTTPPasswordMgrWithPriorAuth(*args, **kwargs)¶
Bases: HTTPPasswordMgrWithDefaultRealm
- add_password(realm, uri, user, passwd, is_authenticated=False)¶
- is_authenticated(authuri)¶
- update_authenticated(uri, is_authenticated=False)¶
- class eventlet.green.urllib.request.HTTPRedirectHandler¶
Bases: BaseHandler
- http_error_301(req, fp, code, msg, headers)¶
- http_error_302(req, fp, code, msg, headers)¶
- http_error_303(req, fp, code, msg, headers)¶
- http_error_307(req, fp, code, msg, headers)¶
- http_error_308(req, fp, code, msg, headers)¶
- inf_msg = 'The HTTP server returned a redirect error that would lead to an infinite loop.\nThe last 30x error message was:\n'¶
- max_redirections = 10¶
- max_repeats = 4¶
- redirect_request(req, fp, code, msg, headers, newurl)¶
Return a Request or None in response to a redirect.
This is called by the http_error_30x methods when a redirection response is received. If a redirection should take place, return a new Request to allow http_error_30x to perform the redirect. Otherwise, raise HTTPError if no-one else should try to handle this url. Return None if you can't but another Handler might.
- class eventlet.green.urllib.request.HTTPSHandler(debuglevel=None, context=None, check_hostname=None)¶
Bases: AbstractHTTPHandler
- https_open(req)¶
- https_request(request)¶
- class eventlet.green.urllib.request.OpenerDirector¶
Bases: object
- add_handler(handler)¶
- close()¶
- error(proto, *args)¶
- open(fullurl, data=None, timeout=<object object>)¶
- class eventlet.green.urllib.request.ProxyBasicAuthHandler(password_mgr=None)¶
Bases: AbstractBasicAuthHandler, BaseHandler
- auth_header = 'Proxy-authorization'¶
- http_error_407(req, fp, code, msg, headers)¶
- class eventlet.green.urllib.request.ProxyDigestAuthHandler(passwd=None)¶
Bases: BaseHandler, AbstractDigestAuthHandler
- auth_header = 'Proxy-Authorization'¶
- handler_order = 490¶
- http_error_407(req, fp, code, msg, headers)¶
- class eventlet.green.urllib.request.ProxyHandler(proxies=None)¶
Bases: BaseHandler
- handler_order = 100¶
- proxy_open(req, proxy, type)¶
- class eventlet.green.urllib.request.Request(url, data=None, headers={}, origin_req_host=None, unverifiable=False, method=None)¶
Bases: object
- add_header(key, val)¶
- add_unredirected_header(key, val)¶
- property data¶
- property full_url¶
- get_full_url()¶
- get_header(header_name, default=None)¶
- get_method()¶
Return a string indicating the HTTP request method.
- has_header(header_name)¶
- has_proxy()¶
- header_items()¶
- remove_header(header_name)¶
- set_proxy(host, type)¶
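Constructing a Request is purely local, so the accessors above can be exercised without any network; the endpoint and payload below are hypothetical:

```python
import urllib.request

# Hypothetical POST request, for illustration only.
req = urllib.request.Request(
    "https://example.com/api",
    data=b'{"k": 1}',
    headers={"Content-Type": "application/json"},
    method="POST",
)
print(req.get_method())    # POST
print(req.get_full_url())  # https://example.com/api

# Header names are stored in capitalized form internally.
req.add_header("X-trace", "abc")
print(req.has_header("X-trace"))  # True
```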
- class eventlet.green.urllib.request.URLopener(proxies=None, **x509)¶
Bases: object
Class to open URLs. This is a class rather than just a subroutine because we may need more than one set of global protocol-specific options. Note -- this is a base class for those who don't want the automatic handling of errors type 302 (relocated) and 401 (authorization needed).
- addheader(*args)¶
Add a header to be used by the HTTP interface only, e.g. u.addheader('Accept', 'sound/basic').
- cleanup()¶
- close()¶
- http_error(url, fp, errcode, errmsg, headers, data=None)¶
Handle http errors.
Derived class can override this, or provide specific handlers named http_error_DDD where DDD is the 3-digit error code.
- http_error_default(url, fp, errcode, errmsg, headers)¶
Default error handler: close the connection and raise OSError.
- open(fullurl, data=None)¶
Use URLopener().open(file) instead of open(file, 'r').
- open_data(url, data=None)¶
Use "data" URL.
- open_file(url)¶
Use local file or FTP depending on form of URL.
- open_ftp(**kw)¶
- open_http(url, data=None)¶
Use HTTP protocol.
- open_https(url, data=None)¶
Use HTTPS protocol.
- open_local_file(url)¶
Use local file.
- open_unknown(fullurl, data=None)¶
Overridable interface to open unknown URL type.
- open_unknown_proxy(proxy, fullurl, data=None)¶
Overridable interface to open unknown URL type.
- retrieve(url, filename=None, reporthook=None, data=None)¶
retrieve(url) returns (filename, headers) for a local object or (tempfilename, headers) for a remote object.
- version = 'Python-urllib/3.12'¶
- class eventlet.green.urllib.request.UnknownHandler¶
Bases: BaseHandler
- unknown_open(req)¶
- eventlet.green.urllib.request.build_opener(*handlers)¶
Create an opener object from a list of handlers.
The opener will use several default handlers, including support for HTTP, FTP and when applicable HTTPS.
If any of the handlers passed as arguments are subclasses of the default handlers, the default handlers will not be used.
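The subclass-displacement rule can be demonstrated locally; LoggingHTTPHandler below is a made-up subclass, and no request is actually opened:

```python
import urllib.request

class LoggingHTTPHandler(urllib.request.HTTPHandler):
    """Hypothetical subclass that logs each request before opening it."""
    def http_open(self, req):
        print("opening", req.full_url)
        return super().http_open(req)

opener = urllib.request.build_opener(LoggingHTTPHandler)
names = {type(h).__name__ for h in opener.handlers}
print("LoggingHTTPHandler" in names)  # True
print("HTTPHandler" in names)         # False -- displaced by the subclass
```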
- eventlet.green.urllib.request.getproxies()¶
Return a dictionary of scheme -> proxy server URL mappings.
Scan the environment for variables named <scheme>_proxy; this seems to be the standard convention. If you need a different way, you can pass a proxies dictionary to the [Fancy]URLopener constructor.
- eventlet.green.urllib.request.install_opener(opener)¶
- eventlet.green.urllib.request.pathname2url(pathname)¶
OS-specific conversion from a file system path to a relative URL of the 'file' scheme; not recommended for general use.
- eventlet.green.urllib.request.url2pathname(pathname)¶
OS-specific conversion from a relative URL of the 'file' scheme to a file system path; not recommended for general use.
- eventlet.green.urllib.request.urlcleanup()¶
Clean up temporary files from urlretrieve calls.
- eventlet.green.urllib.request.urlopen(url, data=None, timeout=<object object>, *, cafile=None, capath=None, cadefault=False, context=None)¶
Open the URL url, which can be either a string or a Request object.
data must be an object specifying additional data to be sent to the server, or None if no such data is needed. See Request for details.
urllib.request module uses HTTP/1.1 and includes a "Connection:close" header in its HTTP requests.
The optional timeout parameter specifies a timeout in seconds for blocking operations like the connection attempt (if not specified, the global default timeout setting will be used). This only works for HTTP, HTTPS and FTP connections.
If context is specified, it must be a ssl.SSLContext instance describing the various SSL options. See HTTPSConnection for more details.
The optional cafile and capath parameters specify a set of trusted CA certificates for HTTPS requests. cafile should point to a single file containing a bundle of CA certificates, whereas capath should point to a directory of hashed certificate files. More information can be found in ssl.SSLContext.load_verify_locations().
The cadefault parameter is ignored.
This function always returns an object which can work as a context manager and has the properties url, headers, and status. See urllib.response.addinfourl for more detail on these properties.
For HTTP and HTTPS URLs, this function returns a http.client.HTTPResponse object slightly modified. In addition to the three new methods above, the msg attribute contains the same information as the reason attribute --- the reason phrase returned by the server --- instead of the response headers as it is specified in the documentation for HTTPResponse.
For FTP, file, and data URLs and requests explicitly handled by legacy URLopener and FancyURLopener classes, this function returns a urllib.response.addinfourl object.
Note that None may be returned if no handler handles the request (though the default installed global OpenerDirector uses UnknownHandler to ensure this never happens).
In addition, if proxy settings are detected (for example, when a *_proxy environment variable like http_proxy is set), ProxyHandler is default installed and makes sure the requests are handled through the proxy.
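Because data: URLs are resolved by the built-in DataHandler, the file-like return value and its context-manager behavior can be sketched without any network access:

```python
import base64
import urllib.request

# A data: URL is decoded locally by DataHandler -- no network needed.
payload = base64.b64encode(b"hello green world").decode("ascii")
with urllib.request.urlopen("data:text/plain;base64," + payload) as resp:
    body = resp.read()
print(body)  # b'hello green world'
```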
- eventlet.green.urllib.request.urlretrieve(url, filename=None, reporthook=None, data=None)¶
Retrieve a URL into a temporary location on disk.
Requires a URL argument. If a filename is passed, it is used as the temporary file location. The reporthook argument should be a callable that accepts a block number, a read size, and the total file size of the URL target. The data argument should be valid URL encoded data.
If a filename is passed and the URL points to a local resource, the result is a copy from local file to new file.
Returns a tuple containing the path to the newly created data file as well as the resulting HTTPMessage object.
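A self-contained sketch that retrieves a local file through a file:// URL, so nothing leaves the machine:

```python
import os
import tempfile
import urllib.request

# Create a throwaway local file to act as the retrieval target.
src = tempfile.NamedTemporaryFile(delete=False, suffix=".txt")
src.write(b"cached bytes")
src.close()

url = "file://" + urllib.request.pathname2url(src.name)
path, headers = urllib.request.urlretrieve(url)
with open(path, "rb") as f:
    data = f.read()
print(data)  # b'cached bytes'

urllib.request.urlcleanup()
os.unlink(src.name)
```

For local files no temporary copy is made unless a filename argument is given; the returned path points at the original file.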
eventlet.green.urllib.response module¶
Response classes used by urllib.
The base class, addbase, defines a minimal file-like interface, including read() and readline(). The typical response object is an addinfourl instance, which defines an info() method that returns headers and a geturl() method that returns the url.
- class eventlet.green.urllib.response.addbase(fp)¶
Bases: _TemporaryFileWrapper
Base class for addinfo and addclosehook. Is a good idea for garbage collection.
- class eventlet.green.urllib.response.addclosehook(fp, closehook, *hookargs)¶
Bases: addbase
Class to add a close hook to an open file.
- close()¶
Close the temporary file, possibly deleting it.