In Python, urllib is a package for working with URLs. It supports the http, ftp, and file protocols and also provides functions for parsing and manipulating URLs. You can use it to download web pages, read a page's contents, add headers to your HTTP requests, send form data in a POST request, read cookies, handle redirects, and more.
Here’s a basic example:
```
import urllib.request
response = urllib.request.urlopen('http://www.google.com')
html = response.read()
print(html)
```
In the example above, `urlopen` opens the URL and returns a response object; `read` then returns the raw bytes of the page, which are printed.
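Since `read` returns bytes, you will usually want to decode them into a string. A small sketch of that, wrapped in a helper function (`fetch_text` is just an illustrative name); because urllib also speaks the file protocol, the example below exercises it offline against a temporary file rather than a live site:

```python
import pathlib
import tempfile
import urllib.request

def fetch_text(url, encoding="utf-8"):
    # urlopen returns a response object; read() yields bytes,
    # so decode them to get a str
    with urllib.request.urlopen(url) as response:
        return response.read().decode(encoding)

# urlopen also handles file:// URLs, so we can try it without a network:
with tempfile.NamedTemporaryFile("w", suffix=".html", delete=False) as f:
    f.write("<p>hello</p>")
page_url = pathlib.Path(f.name).as_uri()  # e.g. file:///tmp/xyz.html
print(fetch_text(page_url))  # -> <p>hello</p>
```

The same `fetch_text` call works unchanged with an `http://` URL.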
If you want to use urllib to send a POST request, you can do the following:
```
import urllib.parse
import urllib.request
data = urllib.parse.urlencode({'key1': 'value1', 'key2': 'value2'})
data = data.encode('ascii')
request = urllib.request.Request('http://requestb.in', data)
response = urllib.request.urlopen(request)
html = response.read()
print(html)
```
In this example, `urlencode` encodes the form data, `Request` builds an HTTP request with that data attached, and `urlopen` sends it. Because `data` is supplied, the request is sent as a POST rather than a GET.
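You can verify that attaching data is what switches the method to POST without sending anything over the network, since the `Request` object exposes its method and body before it is ever submitted (the URL is just a placeholder here):

```python
import urllib.parse
import urllib.request

# Encode the form fields into the application/x-www-form-urlencoded format
data = urllib.parse.urlencode({"key1": "value1", "key2": "value2"}).encode("ascii")

# Nothing is sent yet; Request only builds the request object
req = urllib.request.Request("http://requestb.in", data)

print(req.get_method())  # -> POST  (would be GET with no data)
print(req.data)          # -> b'key1=value1&key2=value2'
```

Only a later `urlopen(req)` call would actually transmit the request.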
Please note that some websites require a User-Agent header to be set before they will accept a request. You can set one in urllib like this:
```
import urllib.request
url = 'http://www.google.com'
headers = {'User-Agent': 'Mozilla/5.0'} # Define your headers
request = urllib.request.Request(url, headers=headers) # Create a request
response = urllib.request.urlopen(request) # Send the request
html = response.read() # Get the webpage HTML
print(html)
```
In this example, a dictionary of headers is defined, `Request` builds an HTTP request from the URL and those headers, and `urlopen` sends it; the response's HTML can then be read as before.
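Beyond fetching pages, urllib also includes the URL-handling functions mentioned at the start, in `urllib.parse`. A quick sketch of splitting a URL into its parts and resolving a relative link (the URLs below are illustrative):

```python
from urllib.parse import urlparse, urljoin, parse_qs

# Split a URL into scheme, host, path, query string, and fragment
parts = urlparse("http://www.example.com/search?q=python#top")
print(parts.scheme)           # -> http
print(parts.netloc)           # -> www.example.com
print(parts.path)             # -> /search
print(parse_qs(parts.query))  # -> {'q': ['python']}

# Resolve a relative link against a base URL, as a browser would
print(urljoin("http://www.example.com/docs/", "intro.html"))
# -> http://www.example.com/docs/intro.html
```

These functions are handy companions to `urlopen` when you need to build or inspect URLs before requesting them.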