Monday, January 31, 2022

LeafletJS Vs Python Folium Web map

In this post, I will show how the same web map components are built in both LeafletJS and Python Folium.


First map: initialize a map with a center, zoom level and OpenStreetMap background



LeafletJS
<!DOCTYPE html>
<html>
<head>
	<meta charset="utf-8">
	<title>Web map....</title>

	<script src="https://cdnjs.cloudflare.com/ajax/libs/leaflet/1.0.0-beta.2.rc.2/leaflet.js"></script>
	<link href="https://cdnjs.cloudflare.com/ajax/libs/leaflet/1.0.0-beta.2.rc.2/leaflet.css" rel="stylesheet" />

	<!-- leaflet.draw is only needed if you use the drawing tools -->
	<script src="https://cdnjs.cloudflare.com/ajax/libs/leaflet.draw/0.2.3/leaflet.draw.js"></script>
	<link href="https://cdnjs.cloudflare.com/ajax/libs/leaflet.draw/0.2.3/leaflet.draw.css" rel="stylesheet" />

	<style type="text/css">
		html, body, #map { margin: 0; height: 100%; width: 100%; }
	</style>
</head>


<body>



  <div id='map'></div>



  <script>
  	// center of the map
	var center = [8.242, 7.671];

	// Create the map
	var map = L.map('map').setView(center, 7);

	// Set up the OSM layer
	L.tileLayer(
	  'https://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png', {
	    attribution: 'Data © <a href="http://osm.org/copyright">OpenStreetMap</a>',
	    maxZoom: 18
	  }).addTo(map);





  </script>

</body>
</html>


Folium

import folium

# initialize a map with center, zoom and openstreetmap background...
mapObj = folium.Map(location=[8.242, 7.671],
                     zoom_start=7, tiles='openstreetmap')


mapObj






Draw point



LeafletJS
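A minimal sketch, assuming the map object from the first example. The coordinate reuses the map center and the popup text is just a placeholder.

// Add a marker at a [lat, lng] coordinate and attach a popup
var point = L.marker([8.242, 7.671]).addTo(map);
point.bindPopup('A sample point');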


Folium
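The Folium equivalent, assuming the mapObj created in the first example:

# Add a marker with a popup to the existing map object
folium.Marker(location=[8.242, 7.671], popup='A sample point').add_to(mapObj)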


Draw line



LeafletJS
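A minimal sketch using L.polyline; the coordinates are arbitrary sample points near the map center.

// Draw a line through a list of [lat, lng] coordinates
var line = L.polyline([
    [8.242, 7.671],
    [8.742, 8.171],
    [9.242, 7.971]
], {color: 'red', weight: 3}).addTo(map);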


Folium
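The same line in Folium, again assuming mapObj from the first example:

# Draw a line through the same sample coordinates
folium.PolyLine(
    locations=[[8.242, 7.671], [8.742, 8.171], [9.242, 7.971]],
    color='red', weight=3).add_to(mapObj)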


Draw polygon



LeafletJS
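A sketch with L.polygon; the ring of sample coordinates is arbitrary, and Leaflet closes it automatically.

// Draw a polygon from a ring of [lat, lng] coordinates
var polygon = L.polygon([
    [8.0, 7.3],
    [8.5, 8.0],
    [7.8, 8.2]
], {color: 'green'}).addTo(map);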


Folium
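The Folium equivalent, drawn on mapObj:

# Draw the same polygon, filled
folium.Polygon(
    locations=[[8.0, 7.3], [8.5, 8.0], [7.8, 8.2]],
    color='green', fill=True).add_to(mapObj)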


Plot geojson data



LeafletJS
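A minimal sketch; 'data.geojson' is a placeholder for a file served alongside the page.

// Fetch a GeoJSON file and add it as a layer
fetch('data.geojson')
    .then(function (response) { return response.json(); })
    .then(function (data) {
        L.geoJson(data).addTo(map);
    });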


Folium
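In Folium this is one call; the filename is again a placeholder.

# folium.GeoJson accepts a file path, a dict or a GeoJSON string
folium.GeoJson('data.geojson', name='My GeoJSON layer').add_to(mapObj)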


Add layer control



LeafletJS
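A sketch assuming the map object from the first example; in Leaflet you keep references to the layers and hand them to L.control.layers (base layers get radio buttons, overlays get checkboxes). The layer names here are arbitrary.

// Keep references to the layers you want listed in the control
var osm = L.tileLayer('https://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png', {
    maxZoom: 18
}).addTo(map);

var points = L.layerGroup([L.marker([8.242, 7.671])]).addTo(map);

// First argument: base layers, second argument: overlays
L.control.layers({'OpenStreetMap': osm}, {'Points': points}).addTo(map);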


Folium
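In Folium, layers are named via their name parameter and picked up automatically:

# LayerControl must be added last, after all other layers,
# so that they all show up in the control
folium.LayerControl().add_to(mapObj)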


Add HTML



LeafletJS
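Leaflet popups accept raw HTML strings; a sketch reusing the point marker from earlier:

// Popups accept raw HTML strings
point.bindPopup('<h3>Title</h3><p>Any <b>HTML</b> markup works here.</p>');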


Folium
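A sketch of the same idea in Folium; the HTML string is just an example.

# Wrap the HTML in a folium.Popup and attach it to a marker
html = '<h3>Title</h3><p>Any <b>HTML</b> markup works here.</p>'
popup = folium.Popup(html, max_width=250)
folium.Marker(location=[8.242, 7.671], popup=popup).add_to(mapObj)

To inject HTML into the page itself rather than into a popup, mapObj.get_root().html.add_child(folium.Element(html)) also works.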










Tuesday, January 18, 2022

Geopandas Vs Folium - Generate Web map from data

The code snippets below demonstrate how to create an interactive choropleth web map using the Geopandas and Folium libraries.


The latest version of Geopandas has the explore() method, which can create a LeafletJS map from a GeoDataFrame in a single call.


import geopandas as gpd

# Read shp...
gdf = gpd.read_file(r"NGA_adm1.shp")

# Create web map obj...
mymap = gdf.explore(column='geographic')

# Save to file...
mymap.save('map.html')







import folium
import geopandas as gpd

zones = {'NEZ':1, 'SEZ':2, 'SSZ':3, 'SWZ':4, 'NCZ':5, 'NWZ':6}

# Read shp...
gdf = gpd.read_file(r"NGA_adm1.shp")

gdf.reset_index(level=0, inplace=True)
gdf['Weight'] = gdf['geographic'].map(zones)
gdf['index'] = gdf['index'].apply( lambda x: str(x) )

# Create folium map obj...
mymap = folium.Map(location=[8.67, 7.22], zoom_start=6)

# Convert the GeoDataFrame to a GeoJSON string for the choropleth layer
geo_json_str = gdf.to_json()

folium.Choropleth(
    geo_data=geo_json_str,
    data=gdf,
    name='Choropleth Map',
    columns=['index', 'Weight'],  # key column, value column
    key_on='feature.id',
    fill_color='YlGnBu',  # or e.g. RdYlGn
    legend_name='Name of Legend...',
    smooth_factor=0
    ).add_to(mymap)

mymap










Sunday, January 9, 2022

Keeping track of some favorite developers' websites

There are many developer authors who publish useful content on their blogs on a regular basis.

As a fan of learning, it is a great idea to use the skills you learned from them to keep track of what is new on their blogs.

The two most common ways of achieving this are using an API and web scraping. So, you first check whether the author's blog offers an API service; where one doesn't exist, you fall back to web scraping.

The authors I want to look up in this post are Renan Moura, William Vincent and Flavio Copes.

As of the time of writing, none of these authors has an API on their website, so we will use web scraping to keep track of the latest posts on their blogs. Basically, we will write a scraper that stores the data in a file, then compare that file with future scraped data to find the newest entries on the blogs (a sketch of this comparison step appears at the end of this post).

There are several libraries for scraping websites; here I will use Python's requests/selenium, BeautifulSoup and pandas to get the job done.


Let's get started...


1- Renan Moura


From Renan Moura's blog, I would like to keep track of the following post variables: category, title, title URL, published date and updated date.

Using the requests library directly, I got a "406 Not Acceptable" client error response, which means a bot manager on the server hosting the website is blocking requests that don't look like they come from a browser. To overcome this, we can either send the request with a browser User-Agent header or use selenium to access the website.

import requests
import pandas as pd
from bs4 import BeautifulSoup

url = 'https://renanmf.com'
# Get user-agent from: http://www.useragentstring.com/
headers = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.95 Safari/537.36'}

response = requests.get(url, headers=headers)
html = response.text

soup = BeautifulSoup(html, 'html.parser')
article = soup.find_all("div", {'class':'card-content'})

print(len(article))


import requests
import pandas as pd
from bs4 import BeautifulSoup
from selenium import webdriver


url = 'https://renanmf.com'

# Path to the chromedriver executable (must match your Chrome version)
driver = webdriver.Chrome('chromedriver.exe')
driver.get(url)

html = driver.page_source
driver.quit()


soup = BeautifulSoup(html, 'html.parser')
article = soup.find_all("div", {'class':'card-content'})

print(len(article))


With the HTML obtained from either method above, we can now loop through the articles and extract the fields we want, as seen below:

data_list = []
for art in article:
    category = art.find("li", {'class':'meta-categories'}).text
    title_txt = art.find("h2", {'class':'entry-title'}).text
    title_link = art.find("h2", {'class':'entry-title'}).find('a')['href']
    pub_date = art.find("li", {'class':'meta-date'}).text
    updated_date = art.find("li", {'class':'meta-updated-date'}).text
    
    data = category, title_txt, title_link, pub_date, updated_date
    
    data_list.append(data)

# ------------------------
data_list_df = pd.DataFrame(data_list, columns=['Category', 'Title', 'Title URL', 'Published Date', 'Updated Date'])




2- William Vincent


Here, we will get the following post variables: title, title URL and published date.

import requests
import pandas as pd
from bs4 import BeautifulSoup


url = 'https://wsvincent.com/'

headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.77 Safari/537.36'}

response = requests.get(url, headers=headers)
html = response.text

soup = BeautifulSoup(html, 'html.parser')
article = soup.find_all("li")

# ------------------------


data_list = []

for art in article:
    # Skip <li> items that are not blog posts (e.g. navigation links)
    if art.find('h2') is None:
        continue

    title = art.find('h2').text
    title_link = art.find('h2').find('a')['href']
    pub_date = art.find('span', {'class':'post-meta'}).text

    data = title, title_link, pub_date

    data_list.append(data)
    
# ------------------------

    
data_list_df = pd.DataFrame(data_list, columns=['Title', 'Title URL', 'Published Date'])




3- Flavio Copes


Flavio's blog is similar to William Vincent's above; we will get the following post variables: title, title URL and published date. The imports are the same as in the previous snippets.


url = 'https://flaviocopes.com'

headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.77 Safari/537.36'}

response = requests.get(url, headers=headers)
html = response.text

soup = BeautifulSoup(html, 'html.parser')
article = soup.find_all("li", {'class':'post-stub'})
# ---------------

data_list = []

for art in article:
    title = art.find('h4').text
    title_link = art.find('a')['href']
    pub_date = art.find("time", {'class':'post-stub-date'}).text
    
    data = title, title_link, pub_date
    
    data_list.append(data)
    

    
data_list_df = pd.DataFrame(data_list, columns=['Title', 'Title URL', 'Published Date'])

data_list_df    
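Earlier I said we would store the scraped data in a file and compare it with future scrapes to find the newest entries. Here is a minimal sketch of that comparison step; the file name posts.csv and the helper get_new_posts are my own choices, and it assumes a dataframe with a 'Title URL' column like the ones built above.

import os
import pandas as pd

def get_new_posts(scraped_df, data_file='posts.csv'):
    # Compare a fresh scrape with the previously saved file and
    # return only the entries we have not seen before
    if os.path.exists(data_file):
        old_df = pd.read_csv(data_file)
        new_posts = scraped_df[~scraped_df['Title URL'].isin(old_df['Title URL'])]
    else:
        new_posts = scraped_df  # first run: every post is new

    # Save the latest scrape for the next comparison
    scraped_df.to_csv(data_file, index=False)
    return new_posts

print(get_new_posts(data_list_df))

Run this on a schedule (for example with cron or Windows Task Scheduler) and it will print only the posts that appeared since the last run.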




Happy scraping!

Saturday, January 1, 2022

Make a WordCloud in Python

Here is how to make a word cloud in Python with less than ten lines of code. A word cloud is a visual representation of words that gives greater prominence to the words that appear more frequently.


You need to install the WordCloud and Matplotlib libraries to run the code below.

Make a string of the words you want to use for the word cloud and generate the image as seen below.

# Libraries
%matplotlib notebook
from wordcloud import WordCloud
import matplotlib.pyplot as plt
 
# Create a list of word
text=("Umar Umar Umar Matplotlib Matplotlib Seaborn Network Plot Violin Chart Pandas Datascience Wordcloud Spider Radar Parrallel Alpha Color Brewer Density Scatter Barplot Barplot Boxplot Violinplot Treemap Stacked Area Chart Chart Visualization Dataviz Donut Pie Time-Series Wordcloud Wordcloud Sankey Bubble")
 
# Create the wordcloud object
wordcloud = WordCloud(width=480, height=480, margin=0).generate(text)
 
# Display the generated image:
plt.imshow(wordcloud, interpolation='bilinear')
plt.axis("off")
plt.margins(x=0, y=0)
plt.show()

That is it!