My Project: Hacker News

In this project, we will work with a dataset of submissions to the popular technology site Hacker News. Hacker News is a site where user-submitted stories ("posts") are voted and commented upon. It is extremely popular in technology and startup circles, and posts that make it to the top of Hacker News's listings can get hundreds of thousands of visitors as a result. We will work with the file 'hacker_news.csv', which has been reduced from almost 300,000 rows to approximately 20,000 rows by removing all posts that did not receive any comments and then randomly sampling from the remaining posts. Below are descriptions of the columns:

id : The unique identifier from Hacker News for the post

title : The title of the post

url : The URL that the post links to, if any

num_points : The number of points the post acquired, calculated as the total number of upvotes minus the total number of downvotes

num_comments : The number of comments that were made on the post

author : The username of the person who submitted the post

created_at : The date and time at which the post was submitted

We are specifically interested in posts whose titles begin with either Ask HN or Show HN. Users submit Ask HN posts to ask the Hacker News community a specific question, and Show HN posts to show the community a project, product, or just generally something interesting.

We will compare these two types of posts to determine the following:

  1. The average number of comments received on Ask HN and Show HN posts
  2. Whether the time a post is created affects the average number of comments it receives

Now, we will start by importing the libraries we need.

Step 1: The data file 'hacker_news.csv' is read in as a list of lists, and the result is assigned to the variable hn. The first five rows of hn are printed.

In [1]:
from csv import reader

# Read the dataset into a list of lists
opened_file = open('hacker_news.csv')
read_file = reader(opened_file)
hn = list(read_file)
print(hn[:5])
[['id', 'title', 'url', 'num_points', 'num_comments', 'author', 'created_at'], ['12224879', 'Interactive Dynamic Video', 'http://www.interactivedynamicvideo.com/', '386', '52', 'ne0phyte', '8/4/2016 11:52'], ['10975351', 'How to Use Open Source and Shut the Fuck Up at the Same Time', 'http://hueniverse.com/2016/01/26/how-to-use-open-source-and-shut-the-fuck-up-at-the-same-time/', '39', '10', 'josep2', '1/26/2016 19:30'], ['11964716', "Florida DJs May Face Felony for April Fools' Water Joke", 'http://www.thewire.com/entertainment/2013/04/florida-djs-april-fools-water-joke/63798/', '2', '1', 'vezycash', '6/23/2016 22:20'], ['11919867', 'Technology ventures: From Idea to Enterprise', 'https://www.amazon.com/Technology-Ventures-Enterprise-Thomas-Byers/dp/0073523429', '3', '1', 'hswarna', '6/17/2016 0:01']]
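As a side note, a common Python idiom is to open the file inside a 'with' block so that it is closed automatically; a minimal equivalent sketch of the cell above:

from csv import reader

# Equivalent read using a context manager, which closes the file for us
with open('hacker_news.csv') as opened_file:
    hn = list(reader(opened_file))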

Step 2: In this step, we extract the first row of the data and assign it to the variable headers. The headers and the remaining data are then separated, as shown below.

In [2]:
# Separate the header row from the data rows
headers = hn[0]
hn = hn[1:]

print(headers)
print(hn[:5])
['id', 'title', 'url', 'num_points', 'num_comments', 'author', 'created_at']
[['12224879', 'Interactive Dynamic Video', 'http://www.interactivedynamicvideo.com/', '386', '52', 'ne0phyte', '8/4/2016 11:52'], ['10975351', 'How to Use Open Source and Shut the Fuck Up at the Same Time', 'http://hueniverse.com/2016/01/26/how-to-use-open-source-and-shut-the-fuck-up-at-the-same-time/', '39', '10', 'josep2', '1/26/2016 19:30'], ['11964716', "Florida DJs May Face Felony for April Fools' Water Joke", 'http://www.thewire.com/entertainment/2013/04/florida-djs-april-fools-water-joke/63798/', '2', '1', 'vezycash', '6/23/2016 22:20'], ['11919867', 'Technology ventures: From Idea to Enterprise', 'https://www.amazon.com/Technology-Ventures-Enterprise-Thomas-Byers/dp/0073523429', '3', '1', 'hswarna', '6/17/2016 0:01'], ['10301696', 'Note by Note: The Making of Steinway L1037 (2007)', 'http://www.nytimes.com/2007/11/07/movies/07stein.html?_r=0', '8', '2', 'walterbell', '9/30/2015 4:12']]
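With the headers separated, a convenient way to inspect a single row is to pair each value with its column name. A small sketch, assuming 'headers' and 'hn' from the cell above (the variable name 'row_as_dict' is just illustrative, not part of the project):

# Pair column names with the first data row for readable inspection
row_as_dict = dict(zip(headers, hn[0]))
print(row_as_dict)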

Now that we have removed the headers from 'hn', we will filter the data. Since we are only concerned with post titles beginning with 'Ask HN' or 'Show HN', we'll create new lists of lists containing just the data for those titles.

Below we create three empty lists called 'ask_posts', 'show_posts' and 'other_posts', loop through each row in 'hn', and check the title at row[1]. If the title starts with:

  1. 'Ask HN' : append the row to 'ask_posts'
  2. 'Show HN' : append the row to 'show_posts'
  3. anything else : append the row to 'other_posts'

Finally, we compare the combined length of the three lists with the length of 'hn'.

In [3]:
ask_posts = []
show_posts = []
other_posts = []

# Classify each post by the prefix of its title
for row in hn:
    title = row[1]
    if title.startswith('Ask HN'):
        ask_posts.append(row)
    elif title.startswith('Show HN'):
        show_posts.append(row)
    else:
        other_posts.append(row)

print(len(ask_posts))
print(len(show_posts))
print(len(other_posts))
print(len(hn))

# Sanity check: the three lists together should account for every row
total = len(ask_posts) + len(show_posts) + len(other_posts)
print(total)
1742
1161
17197
20100
20100
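One caveat: str.startswith is case-sensitive, so a title such as 'ask hn: ...' would land in 'other_posts'. A variant that normalizes case first is sketched below (the '_ci' names are just illustrative; using it would change the counts above slightly):

ask_posts_ci = []
show_posts_ci = []
other_posts_ci = []

for row in hn:
    title = row[1].lower()  # normalize case before matching
    if title.startswith('ask hn'):
        ask_posts_ci.append(row)
    elif title.startswith('show hn'):
        show_posts_ci.append(row)
    else:
        other_posts_ci.append(row)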

Now that we have separated the posts by type, we will calculate the average number of comments for both ask posts and show posts.

To calculate the average number of comments for a particular category, we need the total number of comments and the length of 'ask_posts' or 'show_posts'. The steps are:

  1. Assign the value 0 to the total comments, then loop through the rows of 'ask_posts' (or 'show_posts') to extract the number of comments, which is at row[4].
  2. Add each post's number of comments to the running total.
  3. Calculate the average as total_ask_comments / len(ask_posts), and likewise for show posts, as in the cell below.
In [4]:
total_ask_comments = 0
for row in ask_posts:
    total_ask_comments += int(row[4])

# Average comments per ask post
avg_ask_comments = total_ask_comments / len(ask_posts)
print("{:.2f}".format(avg_ask_comments))

total_show_comments = 0
for row in show_posts:
    total_show_comments += int(row[4])

# Average comments per show post
avg_show_comments = total_show_comments / len(show_posts)
print("{:.2f}".format(avg_show_comments))
14.04
10.32
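The two loops above are identical apart from the list they read, so they could be folded into a single helper. A minimal sketch, assuming 'ask_posts' and 'show_posts' from the cell above ('avg_comments' and its parameter names are hypothetical, not part of the project):

def avg_comments(posts, index=4):
    # Sum the comment counts, stored as strings at the given column index
    total = 0
    for row in posts:
        total += int(row[index])
    return total / len(posts)

print("{:.2f}".format(avg_comments(ask_posts)))
print("{:.2f}".format(avg_comments(show_posts)))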

Here we saw that, on average, ask posts receive more comments than show posts (14.04 versus 10.32 comments per post). Since ask posts are more likely to receive comments, we'll focus our remaining analysis just on these posts.

Next, we'll determine if ask posts created at a certain time are more likely to attract comments. We'll use the following steps to perform this analysis:

  1. Calculate the number of ask posts created in each hour of the day, along with the number of comments received.
  2. Calculate the average number of comments ask posts receive by hour created.

We will proceed with the following steps:

  1. Import datetime as dt.
  2. Create an empty list 'result_list'.
  3. Loop through the rows of 'ask_posts' to extract the creation datetime and the number of comments each post received.
  4. Append the datetime and the number of comments to 'result_list' as a two-element list.
  5. 'result_list' is now a list of lists.

Now that we have a list of lists consisting of the datetime and the number of comments received, we have to extract the hour from each datetime in 'result_list' and total the number of comments for that hour, in the following steps:

  1. Create empty dictionaries 'counts_by_hour' and 'comments_by_hour'.
  2. Loop through the rows of 'result_list' to extract the hour using the datetime.strptime and datetime.strftime functions.
  3. If the hour is not in 'counts_by_hour': create the key in 'counts_by_hour' and set it equal to 1; create the key in 'comments_by_hour' and set it equal to the number of comments.
  4. If the hour is in 'counts_by_hour': increment the value in 'counts_by_hour' by 1; increment the value in 'comments_by_hour' by the number of comments.
In [5]:
import datetime as dt

# Build [created_at, number of comments] pairs for each ask post
result_list = []
for row in ask_posts:
    created_at = row[6]
    num_comments = int(row[4])
    result_list.append([created_at, num_comments])

counts_by_hour = {}
comments_by_hour = {}

# Tally posts and comments per hour of the day
for row in result_list:
    date = dt.datetime.strptime(row[0], "%m/%d/%Y %H:%M")
    hour = date.strftime("%H")
    if hour not in counts_by_hour:
        counts_by_hour[hour] = 1
        comments_by_hour[hour] = row[1]
    else:
        counts_by_hour[hour] += 1
        comments_by_hour[hour] += row[1]

print(counts_by_hour)
print(comments_by_hour)
{'08': 48, '06': 44, '14': 107, '18': 108, '00': 54, '19': 110, '07': 34, '20': 80, '21': 109, '16': 108, '12': 73, '09': 45, '03': 54, '22': 71, '15': 116, '01': 60, '13': 85, '05': 46, '11': 58, '10': 59, '04': 47, '23': 68, '17': 100, '02': 58}
{'08': 492, '06': 397, '14': 1416, '18': 1430, '00': 439, '19': 1188, '07': 267, '20': 1722, '21': 1745, '16': 1814, '12': 687, '09': 251, '03': 421, '22': 479, '15': 4477, '01': 683, '13': 1253, '05': 464, '11': 641, '10': 793, '04': 337, '23': 543, '17': 1146, '02': 1381}
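For reference, the same frequency tables can be built without the if/else branch; a minimal sketch using collections.Counter, assuming 'result_list' from the cell above (the names 'counts' and 'comments' are just illustrative):

from collections import Counter
import datetime as dt

counts = Counter()
comments = Counter()
for created_at, n_comments in result_list:
    # Counter returns 0 for missing keys, so no membership test is needed
    hour = dt.datetime.strptime(created_at, "%m/%d/%Y %H:%M").strftime("%H")
    counts[hour] += 1
    comments[hour] += n_comments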

Now that we have built 'counts_by_hour' and 'comments_by_hour', we'll calculate the average number of comments for posts created during each hour of the day.

In [6]:
avg_by_hour = []

# Average comments per post for each hour of the day
for hr in counts_by_hour:
    cmt_hour = comments_by_hour[hr]
    count_hour = counts_by_hour[hr]
    avg_by_hour.append([hr, cmt_hour / count_hour])

print(avg_by_hour)
[['08', 10.25], ['06', 9.022727272727273], ['14', 13.233644859813085], ['18', 13.24074074074074], ['00', 8.12962962962963], ['19', 10.8], ['07', 7.852941176470588], ['20', 21.525], ['21', 16.009174311926607], ['16', 16.796296296296298], ['12', 9.41095890410959], ['09', 5.5777777777777775], ['03', 7.796296296296297], ['22', 6.746478873239437], ['15', 38.5948275862069], ['01', 11.383333333333333], ['13', 14.741176470588234], ['05', 10.08695652173913], ['11', 11.051724137931034], ['10', 13.440677966101696], ['04', 7.170212765957447], ['23', 7.985294117647059], ['17', 11.46], ['02', 23.810344827586206]]

We now have a list of lists whose elements have the form ['hour', average number of comments], but this format makes it hard to identify the hours with the highest values. So we'll finish by sorting the list of lists and printing the values in descending order, in a format that is easier to read, using the following steps:

  1. Create 'swap_avg_by_hour' by swapping the two columns.
  2. Sort the swapped list in descending order using the sorted function.
  3. Print a headline describing the upcoming output.
  4. Loop through the top five rows of the sorted list, format the hour, and round the average value to two decimal places.
In [10]:
swap_avg_by_hour = []
for row in avg_by_hour:
    swap_avg_by_hour.append([row[1], row[0]])

# Sort by average comments, highest first
sorted_swap = sorted(swap_avg_by_hour, reverse=True)
print("Top 5 Hours for Ask Posts Comments")

for avg, hour in sorted_swap[:5]:
    hour_format = dt.datetime.strptime(hour, "%H").strftime("%H:%M")
    print("{}: {:.2f} average comments per post".format(hour_format, avg))
Top 5 Hours for Ask Posts Comments
15:00: 38.59 average comments per post
02:00: 23.81 average comments per post
20:00: 21.52 average comments per post
16:00: 16.80 average comments per post
21:00: 16.01 average comments per post
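As an aside, the column swap above can be avoided entirely by sorting with a key function; a minimal equivalent sketch, assuming 'avg_by_hour' from the earlier cell ('sorted_by_avg' is just an illustrative name):

# Sort by the average (second element of each pair) directly, highest first
sorted_by_avg = sorted(avg_by_hour, key=lambda pair: pair[1], reverse=True)
for hour, avg in sorted_by_avg[:5]:
    print("{}:00: {:.2f} average comments per post".format(hour, avg))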

Findings of the project

From our analysis, ask posts created at 15:00 received the most comments on average (38.59 per post), with the 02:00, 20:00, 16:00 and 21:00 hours also ranking high. To maximize the chance of receiving comments, we would therefore recommend creating an ask post around 15:00.