Anyone want to make a small tut please?
Like how to dim them, how the posting works, and the between methods, just like an HTTPWrapper in .NET programming :)
The methodology is exactly the same; only the syntax changes slightly. Look into urllib/urllib2. Or just look at some source code examples: http://forum.logicalgamers.com/sourc...er-source.html
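A minimal sketch of that methodology (shown in Python 3, where urllib2 was merged into urllib.request; the URL is just a placeholder):

```python
from urllib.parse import urlencode
from urllib.request import Request

# GET: parameters are encoded into the URL itself
get_req = Request("http://example.com/?" + urlencode({"q": "python"}))

# POST: the same encoded data is passed as the request body instead
post_req = Request("http://example.com/", data=urlencode({"q": "python"}).encode())

print(get_req.get_method())   # GET
print(post_req.get_method())  # POST
```

Passing `data` is what flips the request from GET to POST; everything else stays the same, which is why an HTTP wrapper in Python looks so much like one in .NET.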
Matt gave me that site where he puts all of the LG projects; it had HTTPWrapper.py and all, but the syntax is the only thing that kills me.
What HTTP Wrapper are you using? If you're using this one:
Code:
import urllib
import urllib2
import cookielib

class HTTPWrapper:
    def __init__(self, KeepCookies=True):
        if KeepCookies:
            # Keep a cookie jar so cookies persist across requests
            self.cj = cookielib.CookieJar()
            self.HTTP = urllib2.build_opener(urllib2.HTTPCookieProcessor(self.cj))
        else:
            self.HTTP = urllib2.build_opener()

    def Req(self, URL, data=None):
        # With data the request becomes a POST; without it, a plain GET
        if data is not None:
            return self.HTTP.open(URL, urllib.urlencode(data)).read()
        return self.HTTP.open(URL).read()
then example usage:
Code:
import HTTPWrapper
Wrapper = HTTPWrapper.HTTPWrapper()
HTML = Wrapper.Req("http://www.google.com/")
or, to POST data:
Code:
import HTTPWrapper
Wrapper = HTTPWrapper.HTTPWrapper()
Data = {"item1": "value", "item2": "value"}
HTML = Wrapper.Req("http://www.google.com/", Data)
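For reference, urlencode is what turns that Data dict into the POST body (urllib.urlencode in Python 2 is urllib.parse.urlencode in Python 3):

```python
from urllib.parse import urlencode  # urllib.urlencode in Python 2

Data = {"item1": "value", "item2": "value"}
body = urlencode(Data)
print(body)  # item1=value&item2=value (dicts keep insertion order in Python 3.7+)
```

That form-encoded string is exactly what a browser sends when you submit a form, which is why this wrapper works for logging in to sites.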
Most of what you need is readily available in Python's standard library. There's also BeautifulSoup, an HTML/markup parser; that's really the only third-party library I've ever had to use.
Requests (HTTP for Humans) is the best HTTP request library, a.k.a. wrapper. It handles cookies, sessions, data, and responses nicely. The code is very simple; just look at their examples. I would highly recommend it. And lxml is your best bet for parsing HTML pages for content. It's not too hard to get a specific HTML tag's content with it, and it seems much nicer than using a stringBetween method.
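For instance, grabbing a specific tag's content with lxml might look like this (the HTML snippet and the `price` id are made up for illustration):

```python
from lxml import html

page = '<html><body><div id="price">19.99</div></body></html>'
tree = html.fromstring(page)

# XPath selects the div by its id; text_content() returns its inner text
price = tree.xpath('//div[@id="price"]')[0].text_content()
print(price)  # 19.99
```

Compare that to hunting for the right start and end markers by hand; the XPath query still works even if the surrounding markup changes.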
A library and a wrapper aren't exactly synonymous (though both are "wrappers" in the sense that they wrap the HTTP protocol's functionality into a condensed, abstract form). The requests library is a lot more comprehensive than a typical wrapper built for HTTP tasks (which isn't necessarily a bad thing unless you're concerned with brevity of code), and a lot less user-defined and arbitrary.
For instance, if you made a library on top of the requests library, that would be a "wrapper" for it. And there are potential advantages to doing that (e.g. including what you need, omitting what you don't need, and adding other conveniences homogeneous libraries like requests typically don't feature).
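Such a wrapper over requests might be just a few lines (MiniHTTP and get_text are made-up names here; only Session.get(url, params=...) and response.text are actual requests API, and the session is injected so anything session-like works):

```python
class MiniHTTP:
    """Hypothetical thin wrapper: expose only what this project needs."""

    def __init__(self, session):
        # session is expected to behave like requests.Session
        self.session = session

    def get_text(self, url, params=None):
        # Delegate to the underlying library and return just the body text
        return self.session.get(url, params=params).text
```

With requests installed you'd construct it as `MiniHTTP(requests.Session())`, and cookies then persist across calls automatically, much like the CookieJar in the wrapper above.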
As for the parsing library, BeautifulSoup is capable of all that and possibly more. People usually use methods like stringBetween instead of extensive libraries like lxml and BeautifulSoup because many use cases only require so much.
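A stringBetween helper is, after all, only a few lines (a common hand-rolled pattern, not any particular library's API):

```python
def string_between(text, start, end):
    """Return the substring between the first `start` and the next `end`."""
    i = text.find(start)
    if i == -1:
        return ""
    i += len(start)
    j = text.find(end, i)
    if j == -1:
        return ""
    return text[i:j]

print(string_between('<title>Hello</title>', '<title>', '</title>'))  # Hello
```

It does the job for simple, stable pages; a real parser earns its keep once the markup gets messy or the markers aren't unique.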