Hacker News is a site started by the startup incubator Y Combinator, where user-submitted stories (known as "posts") are voted on and commented upon, similar to Reddit.
Hacker News is extremely popular in technology and startup circles, and posts that make it to the top of Hacker News' listings can get hundreds of thousands of visitors as a result.
You can find the data set here, but note that it has been reduced from almost 300,000 rows to approximately 20,000 rows by removing all submissions that did not receive any comments, and then randomly sampling from the remaining submissions. Below are descriptions of the columns:
id
: The unique identifier from Hacker News for the post

title
: The title of the post

url
: The URL that the post links to, if the post has a URL

num_points
: The number of points the post acquired, calculated as the total number of upvotes minus the total number of downvotes

num_comments
: The number of comments that were made on the post

author
: The username of the person who submitted the post

created_at
: The date and time at which the post was submitted

Let's start by importing the libraries we need and reading the data set into a list of lists.
# Read the CSV file into a list of lists
import csv

opened_file = open("hacker_news.csv")
hn = list(csv.reader(opened_file))
opened_file.close()

# Separate the header row from the data
headers = hn[0]
hn = hn[1:]

print(headers)
print("\n")
print(hn[:5])
['id', 'title', 'url', 'num_points', 'num_comments', 'author', 'created_at'] [['12224879', 'Interactive Dynamic Video', 'http://www.interactivedynamicvideo.com/', '386', '52', 'ne0phyte', '8/4/2016 11:52'], ['10975351', 'How to Use Open Source and Shut the Fuck Up at the Same Time', 'http://hueniverse.com/2016/01/26/how-to-use-open-source-and-shut-the-fuck-up-at-the-same-time/', '39', '10', 'josep2', '1/26/2016 19:30'], ['11964716', "Florida DJs May Face Felony for April Fools' Water Joke", 'http://www.thewire.com/entertainment/2013/04/florida-djs-april-fools-water-joke/63798/', '2', '1', 'vezycash', '6/23/2016 22:20'], ['11919867', 'Technology ventures: From Idea to Enterprise', 'https://www.amazon.com/Technology-Ventures-Enterprise-Thomas-Byers/dp/0073523429', '3', '1', 'hswarna', '6/17/2016 0:01'], ['10301696', 'Note by Note: The Making of Steinway L1037 (2007)', 'http://www.nytimes.com/2007/11/07/movies/07stein.html?_r=0', '8', '2', 'walterbell', '9/30/2015 4:12']]
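As an aside, the same read-then-split pattern works on any file-like object; here is a minimal, self-contained sketch that uses an in-memory io.StringIO (with hypothetical two-column data) in place of hacker_news.csv:

```python
import csv
import io

# An in-memory stand-in for hacker_news.csv (hypothetical two-column data)
data = io.StringIO("id,title\n1,Ask HN: Hi\n2,Show HN: Demo\n")

rows = list(csv.reader(data))
headers = rows[0]   # the first row holds the column names
rows = rows[1:]     # the rest is the data

print(headers)  # ['id', 'title']
print(rows)     # [['1', 'Ask HN: Hi'], ['2', 'Show HN: Demo']]
```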
Now that we've removed the header row from hn, we're ready to filter our data. Since we're only concerned with post titles beginning with Ask HN or Show HN, we'll create new lists of lists containing just the data for those titles.
# Extract Ask HN and Show HN posts into separate lists
ask_posts = []
show_posts = []
other_posts = []

for row in hn:
    title = row[1].lower()
    if title.startswith('ask hn'):
        ask_posts.append(row)
    elif title.startswith('show hn'):
        show_posts.append(row)
    else:
        other_posts.append(row)
print(len(ask_posts))
print(len(show_posts))
print(len(other_posts))
1744 1162 17194
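As a quick sanity check, every row falls into exactly one of the three lists, so the counts above should sum to the total number of data rows:

```python
# Counts taken from the output above
ask_count, show_count, other_count = 1744, 1162, 17194

# The three lists partition the data set, so their lengths sum to len(hn)
total = ask_count + show_count + other_count
print(total)  # 20100
```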
Next, we'll calculate the average number of comments for Ask HN and Show HN posts.
total_ask_comments = 0
for row in ask_posts:
    total_ask_comments += int(row[4])
avg_ask_comments = total_ask_comments / len(ask_posts)
print(avg_ask_comments)
print("\n")

total_show_comments = 0
for row in show_posts:
    total_show_comments += int(row[4])
avg_show_comments = total_show_comments / len(show_posts)
print(avg_show_comments)
14.038417431192661 10.31669535283993
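The accumulator loops above can also be condensed with sum() over a generator expression; a sketch on hypothetical sample rows (index 4 holds num_comments, as in the data set):

```python
# Hypothetical rows in the data set's column order:
# [id, title, url, num_points, num_comments, author, created_at]
sample_ask = [
    ["1", "Ask HN: A?", "", "10", "5", "u1", "8/4/2016 11:52"],
    ["2", "Ask HN: B?", "", "3", "9", "u2", "8/5/2016 12:30"],
]

# sum() over a generator expression replaces the manual accumulator loop
avg = sum(int(row[4]) for row in sample_ask) / len(sample_ask)
print(avg)  # 7.0
```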
What we observe from this result is that ask posts receive on average approximately 14 comments, while show posts receive approximately 10 comments. Since ask posts are more likely to receive comments, we'll focus our remaining analysis just on these posts.
Next, we'll determine whether ask posts created at certain times are more likely to attract comments. To do this, we'll count the ask posts created in each hour of the day along with the comments they received, and then compute the average number of comments per post for each hour.
# Import the datetime module as dt
import datetime as dt

result_list = []

# Iterate over ask_posts and append a [created_at, num_comments] pair for each post
for row in ask_posts:
    created_at = row[6]
    num_comments = int(row[4])
    result_list.append([created_at, num_comments])

counts_by_hour = {}
comments_by_hour = {}

for result in result_list:
    date = dt.datetime.strptime(result[0], "%m/%d/%Y %H:%M")
    hour = date.strftime("%H")
    comment = result[1]
    if hour not in counts_by_hour:
        counts_by_hour[hour] = 1
        comments_by_hour[hour] = comment
    else:
        counts_by_hour[hour] += 1
        comments_by_hour[hour] += comment

print(counts_by_hour)
print(comments_by_hour)
{'09': 45, '13': 85, '10': 59, '14': 107, '16': 108, '23': 68, '12': 73, '17': 100, '15': 116, '21': 109, '20': 80, '02': 58, '18': 109, '03': 54, '05': 46, '19': 110, '01': 60, '22': 71, '08': 48, '04': 47, '00': 55, '06': 44, '07': 34, '11': 58} {'09': 251, '13': 1253, '10': 793, '14': 1416, '16': 1814, '23': 543, '12': 687, '17': 1146, '15': 4477, '21': 1745, '20': 1722, '02': 1381, '18': 1439, '03': 421, '05': 464, '19': 1188, '01': 683, '22': 479, '08': 492, '04': 337, '00': 447, '06': 397, '07': 267, '11': 641}
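The if/else branch that builds the two dictionaries can also be written with dict.get, which returns a default of 0 when a key is missing; a sketch on hypothetical (hour, comments) pairs:

```python
# Hypothetical (hour, comment-count) pairs
results = [("11", 5), ("11", 9), ("12", 2)]

counts_by_hour = {}
comments_by_hour = {}
for hour, comments in results:
    # dict.get(key, 0) avoids the explicit "if hour not in ..." branch
    counts_by_hour[hour] = counts_by_hour.get(hour, 0) + 1
    comments_by_hour[hour] = comments_by_hour.get(hour, 0) + comments

print(counts_by_hour)    # {'11': 2, '12': 1}
print(comments_by_hour)  # {'11': 14, '12': 2}
```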
We have calculated the number of ask posts created during each hour of the day, along with the total number of comments they received:

counts_by_hour
: contains the number of ask posts created during each hour of the day.

comments_by_hour
: contains the corresponding number of comments ask posts created at each hour received.

Next, we will use these two dictionaries (counts_by_hour and comments_by_hour) to calculate the average number of comments for posts created during each hour of the day.
# Calculate the average number of comments per post for each hour of the day
avg_by_hour = []
for hour in comments_by_hour:
    avg_by_hour.append([hour, comments_by_hour[hour] / counts_by_hour[hour]])
print(avg_by_hour)
[['09', 5.5777777777777775], ['13', 14.741176470588234], ['10', 13.440677966101696], ['14', 13.233644859813085], ['16', 16.796296296296298], ['23', 7.985294117647059], ['12', 9.41095890410959], ['17', 11.46], ['15', 38.5948275862069], ['21', 16.009174311926607], ['20', 21.525], ['02', 23.810344827586206], ['18', 13.20183486238532], ['03', 7.796296296296297], ['05', 10.08695652173913], ['19', 10.8], ['01', 11.383333333333333], ['22', 6.746478873239437], ['08', 10.25], ['04', 7.170212765957447], ['00', 8.127272727272727], ['06', 9.022727272727273], ['07', 7.852941176470588], ['11', 11.051724137931034]]
Although we now have the results we need, this format makes it hard to identify the hours with the highest values. Let's finish by sorting the list of lists and printing the five highest values in a format that's easier to read.
# Swap the columns of avg_by_hour so the average comes first
swap_avg_by_hour = []
for row in avg_by_hour:
    swap_avg_by_hour.append([row[1], row[0]])

# Sort swap_avg_by_hour in descending order
sorted_swap = sorted(swap_avg_by_hour, reverse=True)
print(sorted_swap)
print("\n")
print("Top 5 Hours for Ask Posts Comments")
print("\n")

# Print the 5 hours with the highest average comments
for avg, hr in sorted_swap[:5]:
    hr_min = dt.datetime.strptime(hr, "%H").strftime("%H:%M")
    print("{}: {:.2f} average comments per post".format(hr_min, avg))
[[38.5948275862069, '15'], [23.810344827586206, '02'], [21.525, '20'], [16.796296296296298, '16'], [16.009174311926607, '21'], [14.741176470588234, '13'], [13.440677966101696, '10'], [13.233644859813085, '14'], [13.20183486238532, '18'], [11.46, '17'], [11.383333333333333, '01'], [11.051724137931034, '11'], [10.8, '19'], [10.25, '08'], [10.08695652173913, '05'], [9.41095890410959, '12'], [9.022727272727273, '06'], [8.127272727272727, '00'], [7.985294117647059, '23'], [7.852941176470588, '07'], [7.796296296296297, '03'], [7.170212765957447, '04'], [6.746478873239437, '22'], [5.5777777777777775, '09']] Top 5 Hours for Ask Posts Comments 15:00: 38.59 average comments per post 02:00: 23.81 average comments per post 20:00: 21.52 average comments per post 16:00: 16.80 average comments per post 21:00: 16.01 average comments per post
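Incidentally, the swap step can be skipped by giving sorted() a key function that sorts on the average directly; a sketch on a few hypothetical [hour, average] pairs:

```python
# Hypothetical [hour, average] pairs like those in avg_by_hour
avg_by_hour = [["09", 5.58], ["15", 38.59], ["02", 23.81]]

# key=lambda selects the average (index 1) as the sort key; no swap needed
top = sorted(avg_by_hour, key=lambda pair: pair[1], reverse=True)
print(top)  # [['15', 38.59], ['02', 23.81], ['09', 5.58]]
```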
According to the data set documentation, the timezone used is US Eastern Time, so 15:00 is 3:00 p.m. Eastern (8:00 p.m. WAT during daylight saving time). The hour with the most comments per post on average is 15:00, with an average of 38.59 comments per post.
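The timezone conversion can be checked with Python's zoneinfo module (standard library since Python 3.9); the specific date below is an assumption, chosen from within the data set's range, and it matters because of daylight saving time:

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # standard library since Python 3.9

# 15:00 US Eastern on a hypothetical summer 2016 date (EDT, UTC-4)
eastern = datetime(2016, 8, 4, 15, 0, tzinfo=ZoneInfo("America/New_York"))

# Convert to West Africa Time (UTC+1, no daylight saving)
wat = eastern.astimezone(ZoneInfo("Africa/Lagos"))
print(wat.strftime("%H:%M"))  # 20:00
```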
In this project, we analyzed ask posts and show posts to determine which type of post, and which posting time, receives the most comments on average. Based on our analysis, ask posts received about four more comments on average than show posts, so we focused the remainder of our analysis on ask posts alone. To maximize the number of comments a post receives, we recommend categorizing it as an ask post and creating it between 15:00 and 16:00 (3:00 p.m. to 4:00 p.m. Eastern). Note, however, that the data set we analyzed excluded posts without any comments, so it is more accurate to say that of the posts that received comments, ask posts received more comments on average.