Feedback from employee exit surveys can provide powerful insights into a company's culture. No matter how excellent a company is, people eventually leave. Exit surveys let departing employees share their candid opinions, and that feedback can help a company mitigate the many costs of losing more employees in the future.
Image source: Skywalk Group
In this Project, we'll work with exit surveys from employees of the Department of Education, Training and Employment (DETE) and the Technical and Further Education (TAFE) institute in Queensland, Australia.
The DETE exit survey data can be found here. However, the original TAFE survey data is no longer available. Some modifications have been made to the original datasets to make them easier to work with, especially changing the encoding from cp1252 to UTF-8.
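As an aside, the re-encoding step mentioned above can be sketched roughly like this. The filenames and the sample row below are hypothetical, invented purely for illustration:

```python
# Hypothetical sketch of the cp1252 -> UTF-8 conversion described above:
# write a small sample file in cp1252, then re-save it as UTF-8.
sample = "ID,SeparationType\n1,Resignation–Other reasons\n"  # contains an en dash

with open('raw_cp1252.csv', 'w', encoding='cp1252') as f:
    f.write(sample)

with open('raw_cp1252.csv', encoding='cp1252') as src, \
        open('clean_utf8.csv', 'w', encoding='utf-8') as dst:
    dst.write(src.read())
```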
We will play the role of data analysts and pretend our stakeholders want to know the following:
The stakeholders want us to combine results from both surveys and answer these questions. Although both surveys used the same template, one of them had customized answers.
A data dictionary wasn't provided with the dataset. In a job setting, we'd make sure to meet with a manager and confirm the definitions of the data. For this project, we'll use our general knowledge to define the columns.
From dete_survey.csv, we will focus on the following columns:
- ID: An id used to identify the participant of the survey.
- SeparationType: The reason why the person's employment ended.
- Cease Date: The year or month the person's employment ended.
- DETE Start Date: The year the person began employment with the DETE.
From tafe_survey.csv, we will focus on the following columns:
- Record ID: An id used to identify the participant of the survey.
- Reason for ceasing employment: The reason why the person's employment ended.
- LengthofServiceOverall. Overall Length of Service at Institute (in years): The length of the person's employment (in years).
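As a quick illustration of selecting only the columns of interest (the rows below are made up, not taken from the real survey), pandas can restrict a load to specific columns via the usecols parameter of pd.read_csv():

```python
import io
import pandas as pd

# Toy CSV standing in for dete_survey.csv; the rows are invented.
csv_text = (
    "ID,SeparationType,Cease Date,DETE Start Date,Gender\n"
    "1,Resignation-Other reasons,05/2012,1994,Male\n"
)
wanted = ['ID', 'SeparationType', 'Cease Date', 'DETE Start Date']
subset = pd.read_csv(io.StringIO(csv_text), usecols=wanted)
print(subset.columns.tolist())  # ['ID', 'SeparationType', 'Cease Date', 'DETE Start Date']
```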
Age, gender, and length of service are important factors in employee satisfaction and retention, especially today, when employees have high expectations of a sound workplace culture.
Younger employees are generally less willing to leave a current employer, while older employees pose a higher flight risk, perhaps driven by a search for better career opportunities or a more challenging work environment with cross-functional collaboration. Younger employees typically seek to gain experience, acquire new skills and advance their careers, which might explain their lower tendency to resign out of dissatisfaction early in their careers.
In terms of gender, men pose a slightly higher flight risk than their female counterparts, possibly because they are in search of higher-paying, career-accelerating opportunities to provide for their families.
We will start by importing some useful Python libraries: NumPy and pandas for performing mathematical operations and manipulating data; tabulate for pretty-printing pandas series and dataframes; and the Plotly visualisation libraries for building informative visuals.
import numpy as np
import pandas as pd
from tabulate import tabulate
import plotly.express as px
import plotly.graph_objects as go
from plotly.subplots import make_subplots
#read the DETE dataset
dete_survey = pd.read_csv('./dete_survey.csv')
# Ensure that all columns are printed in our output
pd.set_option("display.max_columns", None)
# preview the DETE dataset
dete_survey.info()
dete_survey.head()
- Classification, Business Unit, Aboriginal, Torres Strait, South Sea, Disability and NESB have over 50% missing data.
- The ID column is stored as an integer. Other columns are stored as object/string data.
- The date columns (Cease Date, DETE Start Date and Role Start Date) are stored as object/string data instead of datetime or numerical data.

Let's also look at some quick descriptive statistics for the DETE data:
dete_survey.describe(include='all')
- The SeparationType column contains several distinct separation reasons.
- The Start Date and Role Start Date columns contain a lot of 'Not Stated' entries. There is a chance this information wasn't provided by respondents at the time of completing the survey.
- Aboriginal, Torres Strait, South Sea, Disability and NESB have only one unique value, 'Yes'. This might explain why they have the highest proportion of null values: null entries in these columns might have represented 'No' at the time the survey was administered.
- The most frequent value in every column from Professional Development to Health & Safety is 'A'. This seems quite unusual, as 'A' doesn't obviously represent anything. We will explore these columns further.

To investigate the unusual 'A' entries, we can define a function count_values() which computes the counts of all the unique values in a series. Next, we will apply the function to all columns from Professional Development to Health & Safety using the DataFrame.apply() method:
def count_values(column):
    '''Computes the count of all unique values in a series'''
    return column.value_counts()
# Extract the columns from Professional Development to Health and Safety using their indices.
flagged_columns = dete_survey.iloc[:, 28:49]
# Apply the function to the flagged columns
flagged_columns.apply(count_values)
The unique entries in these columns are A, D, M, N, SA and SD, most likely Likert-style abbreviations (e.g. A for Agree, SA for Strongly Agree, D for Disagree, SD for Strongly Disagree).

Next, we read in the TAFE dataset:
tafe_survey = pd.read_csv('./tafe_survey.csv')
# preview dataset info
tafe_survey.info()
- Many of the column names are long survey questions, including numerous 'Contributing Factor...' columns.
- The Record ID and CESSATION YEAR columns are stored as float types.
- The CESSATION YEAR, Reason for ceasing employment, Gender, CurrentAge and EmploymentType columns contain missing entries.

Let's look at some quick descriptive statistics for this dataset:
tafe_survey.describe(include='all')
- The most frequent value in many columns is "-". This might be a placeholder indicating that no answer was provided at the time the survey was administered.
- The Main Factor. Which of these was the main factor for leaving? column shows that the most frequent reason for employee exit is dissatisfaction. This column has over 80% missing entries.
- The CurrentAge column contains several age bins. Most respondents are 56 years or older.

Both the dete_survey and tafe_survey datasets contain many columns that we won't need to answer our stakeholder questions. The dete_survey data contains 'Not Stated' values that indicate missing data; these should be represented as NaN. In tafe_survey, many responses point to resignation caused by dissatisfaction.

Let's address these observations:
We can start by using the na_values parameter of pd.read_csv() to specify values that should be represented as NaN; we will use it to fix the missing values in dete_survey. Next, we will drop columns that we don't need for our analysis: columns that do not indicate whether an employee resigned due to dissatisfaction, columns that do not add relevant data to our analysis, and columns with too many missing entries.
dete_survey = pd.read_csv('./dete_survey.csv', na_values='Not Stated')
dete_survey.head()
For the DETE survey, we'll drop the object/string type columns from Professional Development [28]
to Health & Safety [48]
. These were the columns with the infamous Agree, Neutral, Strongly Agree, Disagree, Strongly Disagree and Not Applicable options.
# Verify and print out the unwanted columns
unwanted_dete = dete_survey.columns[28:49]
print('\033[1m' + '\033[4m' + '\033[95m' + 'Unwanted Columns in DETE Survey' + '\033[0m')
print(unwanted_dete)
# Remove unwanted columns
dete_survey.drop(unwanted_dete, axis=1, inplace=True)
dete_survey.head(3)
We will repeat the same process for TAFE, dropping columns containing similar "Agree/Disagree" data from Main Factor [17] to Workplace Topic [65].
unwanted_tafe = tafe_survey.columns[17:66]
tafe_survey.drop(unwanted_tafe, axis=1, inplace=True)
tafe_survey.head(3)
Now let's verify the number of remaining columns in both datasets:
print('\033[1m' + '\033[4m' + 'REMAINING COLUMNS' + '\033[0m')
print('\033[1m' + '\033[95m' + 'DETE: {} columns'.format(dete_survey.shape[1]) + '\033[0m')
print('\033[1m' + '\033[94m' + 'TAFE: {} columns'.format(tafe_survey.shape[1]) + '\033[0m')
As observed earlier, both datasets contain many of the same columns, but under different names. Here are some columns we'd like to use for our final analysis of both datasets:
The plan is to eventually combine the two datasets. To do this, we will have to standardize the column names. Let's start by formatting the DETE survey column names to the snake_case convention:
# Format column names: lowercase, strip stray whitespace, then snake_case
dete_survey.columns = (dete_survey.columns.str.lower()
                                          .str.strip()
                                          .str.replace('separationtype', 'separation_type')
                                          .str.replace(' ', '_')
                                          .str.replace('/', '_')
                       )
# Preview results
print('\033[1m' + '\033[4m' + '\033[95m' + 'Renamed DETE Columns' + '\033[0m')
print(dete_survey.columns)
Next, we will use the DataFrame.rename() method to update the columns in tafe_survey. We will focus on the similar columns for now, then handle the other columns later:
# Create a dictionary of columns to rename
similar_columns = {
'Record ID': 'id',
'CESSATION YEAR': 'cease_date',
'Reason for ceasing employment': 'separation_type',
'Gender. What is your Gender?': 'gender',
'CurrentAge. Current Age': 'age',
'Employment Type. Employment Type': 'employment_status',
'Classification. Classification': 'position',
'LengthofServiceOverall. Overall Length of Service at Institute (in years)': 'institute_service',
'LengthofServiceCurrent. Length of Service at current workplace (in years)': 'role_service'
}
# Rename the TAFE columns
tafe_survey.rename(similar_columns, axis=1, inplace=True)
# Preview renamed columns
print('\033[1m' + '\033[4m' + '\033[94m' + 'Renamed TAFE Columns' + '\033[0m')
print(tafe_survey.columns)
One of our goals is to answer the following question:
Is some dissatisfaction causing newer and older employees to resign from the institute?
If we look at the unique values in the separation_type columns in each dataframe, we'll see that each dataset contains varying entries for separation type:
names = ['DETE SURVEY DATA', 'TAFE SURVEY DATA']
# Create a selection of colors for output headers
colors = ['\033[95m','\033[94m']
# Pretty print unique values in the separation_type column of both datasets
for df, name, color in zip([dete_survey, tafe_survey], names, colors):
    print('\033[1m' + '\033[4m' + color + name + '\033[0m')
    print(tabulate(df['separation_type'].value_counts(dropna=False).to_frame(),
                   headers=['Separation Type', 'Count'], tablefmt='psql'))
We will only analyze survey respondents who resigned; their separation type contains the string 'Resignation'. We can see multiple uses of the word across the different separation types:
Resignation
Resignation-Other reasons
Resignation-Other employer
Resignation-Move overseas/interstate
We have to account for each of these variations so we don't unintentionally drop useful data.
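A toy series (invented entries, not the survey data) shows why the na=False argument matters when filtering with str.startswith(): missing entries would otherwise yield NaN in the boolean mask instead of False:

```python
import pandas as pd

# With na=False, a missing separation type counts as "not a resignation"
# rather than producing NaN in the filter mask.
s = pd.Series(['Resignation-Other reasons', 'Age Retirement', None])
print(s.str.startswith('Resignation', na=False).tolist())  # [True, False, False]
```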
# Select entries starting with resignation in both datasets.
dete_resignations = dete_survey[dete_survey['separation_type'].str.startswith('Resignation')].copy()
tafe_resignations = tafe_survey[tafe_survey['separation_type'].str.startswith('Resignation', na=False)].copy()
# .copy() was added above to avoid SettingWithCopyWarning
# Pretty print unique values in the separation_type column of both datasets
for df, name, color in zip([dete_resignations, tafe_resignations], names, colors):
    print('\033[1m' + '\033[4m' + color + name + '\033[0m')
    print(tabulate(df['separation_type'].value_counts(dropna=False).to_frame(),
                   headers=['Separation Type', 'Count'], tablefmt='psql'))
In this step, we'll focus on verifying that the years in the cease_date, dete_start_date and role_start_date columns are correctly entered. In particular, we will flag any record where the dete_start_date was before the year 1940. Let's start by taking a look at the cease_date column in the DETE dataset:
column in the DETE dataset:
# Define a function that pretty prints a pandas series to a readable format
def pretty_print(data, headings, color, title):
    """
    Pretty-prints a pandas series in a more readable format
    Params:
        :data (series): Pandas series of interest
        :headings (list): List of column names to use in output
        :color (string): Python formatted output color code
        :title (string): Title of output table
    Output:
        Prints the series as a table with the assigned column names.
    """
    print('\033[1m' + '\033[4m' + color + title + '\033[0m')
    print(tabulate(data.to_frame(), headers=headings, tablefmt='pretty', stralign='left'))
# Pretty print the cease dates in DETE data.
pretty_print(dete_resignations['cease_date'].value_counts(),
['cease_date', 'Count'], colors[0],
'DETE: Cease Date')
To avoid further confusion down the line, we will clean this column: extract only the year values and convert the datatype to float (floats make it easier to work with NaN entries).
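As a quick aside (toy values, not the survey data), this is why float is the right choice: pandas cannot store NaN in a plain int64 column, so a series of years with a missing entry is upcast to float64:

```python
import numpy as np
import pandas as pd

# A year column with a missing entry ends up as float64, not int64.
years = pd.Series([2012, 2013, np.nan])
print(years.dtype)  # float64
```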
# Create a regex to extract the year
year_pattern = r"([0-9]{4})"
# Extract the year and assign data type as float
dete_resignations['cease_date'] = dete_resignations['cease_date'].str.extract(year_pattern)
dete_resignations['cease_date'] = dete_resignations['cease_date'].astype(float)
# Preview the modified column
pretty_print(dete_resignations['cease_date'].value_counts().sort_index(),
['Cease_date', 'Count'], colors[0],
'DETE: Cease Date - After Cleaning')
print('Datatype: {}'.format(dete_resignations['cease_date'].dtype))
Next, we will explore the cease_date column of the TAFE resignation data.
pretty_print(tafe_resignations['cease_date'].value_counts().sort_index(),
['cease_date', 'Count'], colors[1],
'TAFE: Cease Date')
print('Datatype: {}'.format(tafe_resignations['cease_date'].dtype))
The TAFE cease dates look fine and are uniformly formatted. Let's dive in to explore the dete_start_date column of the DETE resignation data.
pretty_print(dete_resignations['dete_start_date'].value_counts().sort_index(),
['Start Date', 'Count'], colors[0],
'DETE: Start Date')
print('Datatype: {}'.format(dete_resignations['dete_start_date'].dtype))
Again, the dates seem realistic and uniformly formatted. Nothing to do here. Let's explore the role_start_date column of the DETE resignation data.
pretty_print(dete_resignations['role_start_date'].value_counts().sort_index(),
['Role Start Date', 'Count'], colors[0],
'DETE: Role Start Date')
print('Datatype: {}'.format(dete_resignations['role_start_date'].dtype))
One record contains a role_start_date of 200, clearly a data-entry error. Since there is only one entry with this error, we can safely remove the record from our dataset:
# Eliminate the entry where the role start date is 200
dete_resignations = dete_resignations.query('role_start_date != 200')
pretty_print(dete_resignations['role_start_date'].value_counts().sort_index(),
['Role Start Date', 'Count'], colors[0],
'DETE: Role Start Date - After cleaning')
dete_dates = dete_resignations[['dete_start_date', 'role_start_date', 'cease_date']]
fig = px.box(dete_dates, y=dete_dates.columns, width=500, height=600, template='plotly_white')
fig.update_layout(title='DETE Employees Who Resigned.<br><i>When did they join, when did they leave?')
fig.update_yaxes(dtick=5, color='gray', title='Year', showline=True, mirror=True)
fig.update_xaxes(title='', color='gray', showline=True, mirror=True)
fig.show('png')
Since we do not have detailed information on job start dates in the TAFE resignation data, we cannot build a comparable visualization for the TAFE survey.
Now that we've verified the date data from the two datasets, we can safely calculate the length of time each survey respondent (employee) spent at the institute.
The tafe_resignations dataframe already contains an institute_service column. However, dete_resignations does not contain such information at the moment. Luckily, we can derive it from the dete_start_date and cease_date columns. This will prove useful in the long run, when we have to analyze both surveys together.
# Compute the institute service years for the DETE resignation data
dete_resignations['institute_service'] = dete_resignations['cease_date'] - dete_resignations['dete_start_date']
pretty_print(dete_resignations['institute_service'].value_counts(bins=5),
['Institute service', 'Count'], colors[0],
'DETE: Institute Service')
Let's explore the institute service pattern at TAFE:
pretty_print(tafe_resignations['institute_service'].value_counts(dropna=False),
['Institute service', 'Count'], colors[1],
'TAFE: Institute Service')
Now, we will try to identify any employees who resigned because they were dissatisfied. Below are the columns we'll use to make this assessment:
TAFE
Contributing Factors. Dissatisfaction
Contributing Factors. Job Dissatisfaction
DETE
job_dissatisfaction
dissatisfaction_with_the_department
physical_work_environment
lack_of_recognition
lack_of_job_security
work_location
employment_conditions
work_life_balance
workload
If an employee indicated that any of the factors above caused them to resign, we'll mark them as dissatisfied in a new column. After our changes, the dissatisfied column will contain just the following values:
- True: indicates a person resigned because they were dissatisfied with the job
- False: indicates a person resigned because of a reason other than dissatisfaction with the job
# DETE columns related to dissatisfaction
dissatisfied_dete = [
'job_dissatisfaction',
'dissatisfaction_with_the_department',
'physical_work_environment',
'lack_of_recognition',
'lack_of_job_security',
'work_location',
'employment_conditions',
'work_life_balance',
'workload'
]
# TAFE columns related to dissatisfaction
dissatisfied_tafe = [
'Contributing Factors. Dissatisfaction',
'Contributing Factors. Job Dissatisfaction'
]
# Preview the unique entries in the DETE columns
for column in dissatisfied_dete:
    pretty_print(dete_resignations[column].value_counts(dropna=False),
                 ['Unique Values', 'Count'], colors[0],
                 'DETE: ' + column)
We won't need to clean these DETE resignation columns further; they all appear to be in the right format. For now, let's explore the TAFE columns that we are interested in:
for column in dissatisfied_tafe:
    pretty_print(tafe_resignations[column].value_counts(dropna=False),
                 ['Unique Values', 'Count'], colors[1],
                 'TAFE: ' + column)
We can easily intuit that the "-"
entries are analogous to a respondent answering as "False"
, while any other string entry will equate to "True"
. Let's update these columns to True
, False
or NaN
values:
# A function to update '-' as False and other string entries to True
def map_boolean(entry):
    if entry == '-':
        return False
    elif pd.isnull(entry):
        return np.nan
    else:
        return True
# Apply function and print preview
for column in dissatisfied_tafe:
    tafe_resignations[column] = tafe_resignations[column].map(map_boolean)
    pretty_print(tafe_resignations[column].value_counts(dropna=False),
                 ['Unique Values', 'Count'], colors[1],
                 'TAFE: ' + column)
Finally, we can create the dissatisfied column in both datasets. Remember: if any of the employee-dissatisfaction factors is True, the dissatisfied column should also be True; otherwise False. We will use the DataFrame.any() method to make this possible.
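As a toy illustration (invented flags, not the survey columns) of row-wise any(): a row evaluates to True if any of its factor columns is True:

```python
import pandas as pd

flags = pd.DataFrame({
    'factor_a': [True, False],
    'factor_b': [False, False],
})
# Row 0 has one True factor; row 1 has none.
print(flags.any(axis=1).tolist())  # [True, False]
```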
# Create a dissatisfied column and evaluate to True or False
tafe_resignations['dissatisfied'] = tafe_resignations[dissatisfied_tafe].any(axis=1, skipna=False)
dete_resignations['dissatisfied'] = dete_resignations[dissatisfied_dete].any(axis=1, skipna=False)
# Preview the newly created column
for df, name, color in zip([dete_resignations, tafe_resignations], ['DETE', 'TAFE'], [0, 1]):
    pretty_print(df['dissatisfied'].value_counts(dropna=False),
                 ['Unique Values', 'Count'], colors[color],
                 name + ': Dissatisfied Column')
Among others, our stakeholders expect us to answer the following question:
How does dissatisfaction, where present, vary across the different age groups at the institutes?
To answer this accurately, we need to ensure that age information is properly formatted in both datasets. Let's start by previewing the entries for age.
pretty_print(dete_resignations['age'].value_counts(dropna=False).sort_index(),
['Age group', 'Count'], colors[0],
'DETE: Age Groups')
pretty_print(tafe_resignations['age'].value_counts(dropna=False).sort_index(),
['Age group', 'Count'], colors[1],
'TAFE: Age Groups')
Although the age groups in the two datasets are mostly similar, the formats are not identical. The age groups in the TAFE dataset contain extra space characters, e.g. '21  25'. We should reformat these entries to match the DETE dataset, e.g. '21-25'.
In the TAFE data, age brackets end at 56 or older, while the DETE data has two extra age groups, 56-60 and 61 or older. We should make the age groups uniform in both datasets by reformatting the two extra DETE groups to 56 or older.
# Replace the double-space separators in the TAFE age data (e.g. '21  25') with hyphens
tafe_resignations['age'] = tafe_resignations['age'].str.replace('  ', '-')
# Format the extra age brackets in DETE data to 56 or older
dete_resignations['age'] = (dete_resignations['age'].str.replace('56-60', '56 or older')
.str.replace('61 or older', '56 or older')
)
# Re-examine the age columns again.
pretty_print(dete_resignations['age'].value_counts(dropna=False).sort_index(),
['Age group', 'Count'], colors[0],
'DETE: Age Groups - post cleaning')
pretty_print(tafe_resignations['age'].value_counts(dropna=False).sort_index(),
['Age group', 'Count'], colors[1],
'TAFE: Age Groups - post cleaning')
Note: The age groups are quite numerous, partly because most are spaced at intervals of 5 years. This might make it difficult to observe some trends during analysis (since each group is not quite large enough). We will correct for this by creating an age structure later on.
Combining the datasets as they are would also combine columns that are not common to both, producing a lot of null values. It is better to investigate each dataset's columns, then select only the common columns that are useful for our analysis. To find the common columns, we will use the np.intersect1d() function.
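For a quick illustration with made-up labels, np.intersect1d() returns the sorted, unique values common to both inputs:

```python
import numpy as np

left = ['cease_date', 'age', 'gender']
right = ['gender', 'position', 'age']
print(np.intersect1d(left, right).tolist())  # ['age', 'gender']
```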
# Preview all columns in DETE data
print('\033[1m' + '\033[4m' + colors[0] + 'DETE Resignation Columns' + '\033[0m')
print(dete_resignations.columns)
print('')
# Preview all columns in TAFE data
print('\033[1m' + '\033[4m' + colors[1] + 'TAFE Resignation Columns' + '\033[0m')
print(tafe_resignations.columns)
print('')
# Find the intersect (common items) in both columns
common_columns = np.intersect1d(dete_resignations.columns, tafe_resignations.columns)
# Preview the common columns
print('\033[1m' + '\033[4m' + '\033[91m' + 'COMMON COLUMNS' + '\033[0m')
for num, column in enumerate(common_columns, start=1):
    print('\033[91m' + str(num) + ': ' + column + '\033[0m')
We are almost ready to combine our datasets. In a two-step process, we will isolate the common columns from each dataset, then create an institute column. This will help us distinguish the source of each record after combining:
# Select only common columns from each dataset
dete_updated = dete_resignations[common_columns].copy()
tafe_updated = tafe_resignations[common_columns].copy()
# Add an institute column in each dataset
dete_updated['institute'] = 'DETE'
tafe_updated['institute'] = 'TAFE'
dete_updated.head()
tafe_updated.head()
From the output above, we notice that the institute_service column currently contains entries that are not uniformly formatted across the two datasets. We will deal with this later. For now, we are ready to combine our datasets.
We can use the pd.concat() function to stack our dataframes on top of one another, essentially combining them into one unit:
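A toy example (invented frames, not the survey data) of how pd.concat() stacks rows and fills columns missing from one frame with NaN:

```python
import pandas as pd

a = pd.DataFrame({'x': [1], 'y': [2]})
b = pd.DataFrame({'x': [3]})          # no 'y' column
stacked = pd.concat([a, b])
print(stacked.shape)                  # (2, 2)
print(stacked['y'].isnull().sum())    # 1 missing value introduced
```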
combined = pd.concat([dete_updated, tafe_updated])
combined.head(3)
combined.tail(3)
The ID column does not add anything of value to our analysis. Let's drop it before we proceed.
combined.drop('id', axis=1, inplace=True)
Now that we have combined our dataframes and removed the id column, the next step is to clean up institute_service. Let's preview this column to get an idea of what we will be working with.
pretty_print(combined['institute_service'].value_counts(dropna=False),
['Institute service', 'Count'], colors[1],
'Institute Service Entries')
This column is tricky to clean because it contains values formatted in several different ways. Rather than cleaning this column alone, we will also convert the numbers into service categories. We can draw insights from this article, which argues that understanding employees' needs according to career stage, rather than age, is more effective.
We'll use the following definitions:
- New: Less than 3 years at a company
- Experienced: 3-6 years at a company
- Established: 7-10 years at a company
- Veteran: 11 or more years at a company
We can now clean and categorize the values in the institute_service column using the definitions above:
pattern = r"(\d+)" # matches one or more consecutive digits
# Extract values from institute service based on the defined pattern
combined['institute_service'] = (combined['institute_service'].astype('str')
.str.extract(pattern)
.astype(float)
)
# Preview results
pretty_print(combined['institute_service'].value_counts(dropna=False),
['Institute service', 'Count'], colors[1],
'Institute Service Entries - After cleaning')
Next, we'll map each value to one of the career-stage definitions: New, Experienced, Established and Veteran.
def map_career_state(value):
    '''Maps a service value to its corresponding career-stage category'''
    if pd.isnull(value):
        return np.nan
    elif value < 3:
        return 'New'
    elif 3 <= value <= 6:
        return 'Experienced'
    elif 7 <= value <= 10:
        return 'Established'
    else:
        return 'Veteran'
# Apply function to the combined dataframe
combined['service_category'] = combined['institute_service'].apply(map_career_state)
pretty_print(combined['service_category'].value_counts(dropna=False),
['Category', 'Count'], colors[1],
'Entries For Service Category')
combined.isnull().sum()
Having some information about age can help us estimate how long an employee may have served at an institute, and vice versa. However, some records are missing both pieces of information, which makes them hard to analyze. Hence, we will remove records with missing information for both age and service category.
# Extract records where age and service category are missing
missing_service_data = combined[(combined['age'].isnull() & combined['service_category'].isnull())]
print('\033[1m' + '\033[91m' + str(missing_service_data.shape[0]) + ' records meet this criteria' + '\033[0m')
missing_service_data
These 53 records contain a lot of missing data. It is clear that they won't be useful for our analysis, so we will drop them all from our dataframe:
# Select only records with information on age and service category
combined = combined[(combined['age'].notnull() & combined['service_category'].notnull())]
print('\033[1m' + '\033[4m' + '\033[94m' + 'Null Values in our Combined Dataframe' + '\033[0m')
combined.isnull().sum()
By dealing with null values in the age and service_category columns, we have significantly reduced the number of null records in our dataset. Next, we will deal with the few null values left.
We won't need the cease_date column to answer our stakeholder questions, but the gender and position columns are important. We could fill the missing gender records with the most common gender; however, that would not be a safe way to extrapolate, so instead we will drop the 5 records with missing gender entries.
combined = combined[combined['gender'].notnull()]
print('\033[1m' + '\033[4m' + '\033[94m' + 'Null Values left after cleaning' + '\033[0m')
combined.isnull().sum()
We will categorize entries in the position column into teaching and non-teaching staff. We only have 3 missing records in this column, so it is safe to map them as non-teaching staff. Let's view the unique values in this column again:
pretty_print(combined['position'].value_counts(dropna=False),
['position', 'Count'], colors[1],
'Unique entries in the position column')
Here is how we will proceed with our mapping: any entry that contains the term Teacher, Tutor, Training, or Guidance will be mapped as Teaching staff. All other entries will be recorded as Non-Teaching staff. The mapped information will be stored in a new role column.
teaching_roles = ['Teacher', 'Teacher (including LVT)', 'Teacher Aide', 'Guidance Officer', 'Tutor',
'Workplace Training Officer']
def map_position(entry):
    '''Categorizes an entry as teaching or non-teaching staff'''
    if pd.isnull(entry):
        return 'Non-Teaching staff'
    elif entry.strip() in teaching_roles:
        return 'Teaching staff'
    else:
        return 'Non-Teaching staff'
# Apply function to the position column
combined['role'] = combined['position'].apply(map_position)
# Preview results
pretty_print(combined['role'].value_counts(dropna=False),
['Role', 'Count'], colors[1],
'Entries in the role column')
pretty_print(combined['employment_status'].value_counts(dropna=False),
['Status', 'Count'], colors[1],
'Entries in the employment status column')
There is not much to correct here. However, we can see an overlap between the Contract/casual group and the Casual group. For uniformity's sake, we will reformat all the Casual entries as Contract/casual:
# Replace casual with contract/casual
combined['employment_status'] = combined['employment_status'].str.replace('Casual', 'Contract/casual')
pretty_print(combined['employment_status'].value_counts(dropna=False),
['Status', 'Count'], colors[1],
'Entries in the employment status column - After cleaning')
As a final step in our data preparation and cleaning process, we will create a standard age structure to correct for the numerous age groups we currently have in our data. We will follow some standard structures used by indexmundi.com for classifying age in relation to demographics. A few modifications will be made to suit our use case:
- 25 years or less: Early working age
- 26-55 years: Prime working age
- 56 years or older: Mature or elderly working age
We can create a dictionary with this information, and map it to a new column in our dataframe:
# Create the mapping dictionary
age_structure = {
'20 or younger': 'Early working age',
'21-25': 'Early working age',
'26-30': 'Prime working age',
'31-35': 'Prime working age',
'36-40': 'Prime working age',
'41-45': 'Prime working age',
'46-50': 'Prime working age',
'51-55': 'Prime working age',
'56 or older': 'Elderly working age'
}
# Map dictionary contents to a new column
combined['age_structure'] = combined['age'].map(age_structure)
# Preview results
pretty_print(combined['age_structure'].value_counts(dropna=False),
['Age structure', 'Count'], colors[1],
'Entries in the age structure column')
It is not too surprising that the majority of resigning employees are in their prime working age; they are probably leaving to pursue other career interests, or to further their current careers. We also see an essentially even split between early-working-age and mature/elderly-working-age employees.
Let's take one final look at our combined and cleaned dataframe:
combined.reset_index(drop=True, inplace=True)
combined.head()
Great! Everything looks good. We are finally ready to dive into analysis.
To make working with the data easier, we will define two helper functions:
- generate_table(): Generates a pivot table based on provided inputs.
- plot_table(): Creates a pre-styled horizontal bar chart from passed arguments.
def generate_table(df, index_col, value_col):
    """
    Builds a pivot table from provided arguments.
    Params:
        :df (dataframe): dataframe of interest
        :index_col (string): name of index column
        :value_col (string): name of column to aggregate (value column)
    Output:
        Pivot table showing the percentage of dissatisfied and satisfied employees.
    """
    table = df.pivot_table(index=index_col, values=value_col).reset_index().sort_values(by=value_col)
    table[value_col] = round(table[value_col] * 100, 2)
    table['not_dissatisfied'] = 100 - table[value_col]
    return table
def plot_table(table, main_title=None, plot_color = '#0062CC'):
"""
Creates an horizontal bar chart formatted like a progress bar
Params:
:table(dataframe): dataframe of interest
:main_title(string): main chart title
:plot_color(string): hex code to style bar color
Output:
Bar chart generated from input table.
"""
x_val, y_val, ref_val = table.columns[1], table.columns[0], table.columns[2]
fig = px.bar(table, y=y_val, x=x_val,
orientation='h', text= x_val
)
fig.add_trace(go.Bar(y=table[y_val], x=table[ref_val],
orientation='h', marker_color='grey', opacity=0.1
))
    # Use the plot_color parameter rather than a hardcoded hex value
    fig.data[0].marker.color = plot_color
fig.data[0].texttemplate='%{text:.0f}%'
fig.update_yaxes(showline=False, title='', ticksuffix=' ')
fig.update_xaxes(title='', showticklabels=False, showgrid=False, zeroline=False)
fig.update_layout(template='plotly_white', showlegend=False, font_family='arial', font_size=13,
title= '<i>'+main_title, bargap=0.35, margin_b=0)
return fig
Now, we can proceed to answer some interesting questions:
institute_info = generate_table(combined, 'institute', 'dissatisfied')
fig = plot_table(institute_info, 'Percentage of dissatisfied employees by institute')
fig.update_layout(height=200, width=500)
fig.show('png')
The DETE institute recorded a higher rate of employees resigning due to dissatisfaction. In fact, the dissatisfaction rate observed in DETE is almost twice that recorded at the TAFE institute.
This large difference in dissatisfaction rates has the potential to significantly skew our analysis results. To ensure we obtain the best insights possible, we will first conduct a general analysis of the combined data, then dive deeper to explore the differences between DETE and TAFE for each analysis question.
service_info = generate_table(combined, 'service_category', 'dissatisfied')
fig = plot_table(service_info, 'Percentage of dissatisfied employees in each service category')
fig.update_layout(height=300, width=700)
fig.show('png')
It appears that employees who spend longer at these institutes are more likely to resign due to dissatisfaction. Established employees and veterans are more likely to be dissatisfied than experienced or new employees. In other words, employees who have served 7 years or more are more likely to resign due to dissatisfaction than those who have served fewer than 7 years.
age_info = generate_table(combined, ['age'], 'dissatisfied')
age_info
fig = plot_table(age_info, 'Percentage of dissatisfied employees across different age groups')
fig.update_layout(height=500, width=700)
fig.show('png')
Our plot doesn't show a clear pattern here. Although older employees appear more likely to resign due to dissatisfaction, we cannot conclude this confidently because younger employees aged 26-30 also show high dissatisfaction rates.
Perhaps we could turn our attention to the age structure column we previously created. This may give us clearer insight into the underlying pattern.
age_structure_info = generate_table(combined, ['age_structure'], 'dissatisfied')
age_structure_info
fig = plot_table(age_structure_info, 'Percentage of dissatisfied employees across different age structures')
fig.update_layout(height=250, width=700)
fig.show('png')
The age structure tells a much clearer story: employee dissatisfaction increases with age, leading to more resignations among older employees, while younger employees are less likely to resign due to dissatisfaction.
gender_info = generate_table(combined, ['gender'], 'dissatisfied')
gender_info
fig = plot_table(gender_info, 'Percentage of dissatisfied employees by gender')
fig.update_layout(height=200, width=500)
fig.show('png')
It appears that gender doesn't exert much influence on dissatisfaction. However, male employees show a marginally higher tendency to resign due to dissatisfaction than their female counterparts.
employment_info = generate_table(combined, ['employment_status'], 'dissatisfied')
employment_info
fig = plot_table(employment_info, 'Percentage of dissatisfied employees by contract type')
fig.update_layout(height=360, width=700)
fig.show('png')
As employee contracts become more permanent, dissatisfaction becomes more likely. Permanent employees account for a larger share of dissatisfaction than temporary employees and casual workers.
role_info = generate_table(combined, ['role'], 'dissatisfied')
role_info
fig = plot_table(role_info, 'Percentage of dissatisfied employees by role')
fig.update_layout(height=200, width=500)
fig.show('png')
Teaching staff are more likely to resign due to dissatisfaction than staff who carry out administrative or other non-teaching related roles.
In the second part of our analysis, we will consider and compare the influence of each factor on resignation across both institutes. This is primarily because of the largely unequal dissatisfaction rates that we observed between DETE and TAFE employees.
Again, we will define two helper functions. The create_subplot()
function generates high-quality side-by-side subplots from both datasets, while the sort_by_df()
function sorts a dataframe based on a specified column in another dataframe; this is especially useful when we want our graphs to share the same ordering of data labels.
def create_subplot(first, second, main_title):
"""
Builds a subplot from provided arguments.
Params:
:first(dataframe): first dataframe of interest
:second(dataframe): second dataframe of interest
:main_title(string): name of chart
Output:
Subplots containing barcharts from both dataframes.
"""
fig = make_subplots(rows=1, cols=2, horizontal_spacing=0.2,
subplot_titles=('DETE', 'TAFE'))
x_val, y_val, ref_val = first.columns[1], first.columns[0], first.columns[2]
fig.add_trace(go.Bar(y=first[y_val], x=first[x_val],
orientation='h', marker_color='#2E7D9E', text= first[x_val]
), row=1, col=1)
fig.add_trace(go.Bar(y=first[y_val], x=first[ref_val],
orientation='h', marker_color='grey', opacity=0.1
), row=1, col=1)
fig.add_trace(go.Bar(y=second[y_val], x=second[x_val],
orientation='h', marker_color='#A36A69', text= second[x_val]
), row=1, col=2)
fig.add_trace(go.Bar(y=second[y_val], x=second[ref_val],
orientation='h', marker_color='grey', opacity=0.1
), row=1, col=2)
fig.data[0].texttemplate='%{text:.0f}%'
fig.data[2].texttemplate='%{text:.0f}%'
fig.update_yaxes(showline=False, title='', ticksuffix=' ')
fig.update_xaxes(title='', showticklabels=False, showgrid=False, zeroline=False)
fig.update_layout(template='plotly_white', showlegend=False, font_family='arial', font_size=13,
title= '<i>'+main_title, bargap=0.35, margin_b=0, barmode='stack')
return fig
def sort_by_df(input_df, sorting_column, sorting_df):
"""
Sorts a dataframe to match the order of a column in another dataframe.
Params:
:input_df (dataframe): dataframe to sort
:sorting_column(string): name of column in sorting_df to sort by
:sorting_df(dataframe): dataframe to sort from
Output:
An input dataframe sorted by the column specified in the sorting dataframe.
"""
result = input_df.set_index(sorting_column)
result = result.reindex(index=sorting_df[sorting_column])
result = result.reset_index()
return result
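To see what `sort_by_df()` does, here is a minimal toy illustration of the set_index/reindex/reset_index pattern it relies on (toy frames, not the survey tables):

```python
import pandas as pd

# Toy frames sharing the same categories in different row orders
to_sort = pd.DataFrame({'category': ['b', 'a', 'c'], 'value': [2, 1, 3]})
reference = pd.DataFrame({'category': ['c', 'a', 'b'], 'value': [30, 10, 20]})

# Reorder to_sort's rows to follow reference's category order
result = (to_sort.set_index('category')
                 .reindex(index=reference['category'])
                 .reset_index())
```

Because `reindex` aligns on labels, the resulting rows follow the reference order c, a, b, which keeps paired subplots visually comparable.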
We will proceed to separate DETE-related data from TAFE-related data by filtering the combined
dataframe:
dete = combined.query("institute == 'DETE'")
tafe = combined.query("institute == 'TAFE'")
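`DataFrame.query` filters rows using a boolean expression string; a toy illustration of the same filter (toy data, not the combined survey frame):

```python
import pandas as pd

# Toy frame mimicking the institute column
toy = pd.DataFrame({'institute': ['DETE', 'TAFE', 'DETE'],
                    'dissatisfied': [True, False, False]})

# Keep only the rows where the expression evaluates to True
dete_toy = toy.query("institute == 'DETE'")
```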
# Use the generate_table function to pull relevant service_year information from both datasets
dete_service_info = generate_table(dete, ['service_category'], 'dissatisfied')
tafe_service_info = generate_table(tafe, ['service_category'], 'dissatisfied')
# Arrange the tafe service info dataset in the same order as dete service info
tafe_service_info = sort_by_df(tafe_service_info, 'service_category', dete_service_info)
# Generate subplot
fig = create_subplot(dete_service_info, tafe_service_info,
'Percentage of dissatisfied employees in each service category')
fig.update_layout(height=300, width=900)
fig.show('png')
Across both institutes, dissatisfaction tends to increase with the number of service years. This pattern is prominent among DETE employees (higher percentages) but subtle among TAFE employees (smaller percentages). New TAFE employees are slightly more likely to resign due to dissatisfaction than experienced employees, although the difference is only marginal.
dete_age_info = generate_table(dete, ['age'], 'dissatisfied')
tafe_age_info = generate_table(tafe, ['age'], 'dissatisfied')
tafe_age_info = sort_by_df(tafe_age_info, 'age', dete_age_info)
fig = create_subplot(dete_age_info, tafe_age_info,
'Percentage of dissatisfied employees across different age groups')
fig.update_layout(height=500, width=950)
fig.show('png')