Web scraping is a technique for extracting data from websites. Here is an example of basic web scraping with Node.js using the “axios” and “cheerio” libraries.
Axios is a promise-based HTTP client for the browser and Node.js. Cheerio parses markup and provides an API for traversing/manipulating the resulting data structure, while being fast and flexible.
Here’s how you can do web scraping with Node.js:
1. Install the necessary packages:
```
npm install axios cheerio
```
2. Use axios to download web pages and cheerio to parse them:
```javascript
const axios = require('axios');
const cheerio = require('cheerio');

async function scrapeData(url) {
  try {
    // Download the page HTML
    const response = await axios.get(url);
    const $ = cheerio.load(response.data);

    // Collect the text and href of every link on the page
    const scrapedData = [];
    $('a').each((index, element) => {
      const title = $(element).text();
      const link = $(element).attr('href');
      scrapedData.push({ title, link });
    });

    console.log(scrapedData);
  } catch (error) {
    console.error(`Could not scrape the data from ${url}`);
    console.error(error);
  }
}

scrapeData('https://example.com');
```
Note: You should have permission from the site owner before scraping their data. Some websites don’t allow web scraping at all, and many publish a robots.txt file that tells clients which paths they may and may not fetch.
Also, the code above selects elements by their tag names. To select elements by a class or id instead, you can do something like this:
```javascript
$('.className').each((index, element) => {
  // do something with each element that has class "className"
});
```
or
```javascript
$('#idName').each((index, element) => {
  // do something with the element whose id is "idName"
});
```
Be sure to replace the tag names and CSS selectors in these examples with ones that actually appear on the web page you are scraping.