#!/usr/bin/env python
# coding: utf-8

# # Data School: My top 25 pandas tricks ([video](https://www.youtube.com/watch?v=RlIiVeig3hc&list=PL5-da3qGB5ICCsgW1MxlZ0Hq8LL5U3u9y&index=35))
#
# **See also:** "21 more pandas tricks" [video](https://www.youtube.com/watch?v=tWFQqaRtSQA&list=PL5-da3qGB5ICCsgW1MxlZ0Hq8LL5U3u9y&index=36) and [notebook](https://nbviewer.org/github/justmarkham/pandas-videos/blob/master/21_more_pandas_tricks.ipynb)
#
# - Watch the [complete pandas video series](https://www.dataschool.io/easier-data-analysis-with-pandas/)
# - Connect on [Twitter](https://twitter.com/justmarkham), [Facebook](https://www.facebook.com/DataScienceSchool/), and [LinkedIn](https://www.linkedin.com/in/justmarkham/)
# - Subscribe on [YouTube](https://www.youtube.com/dataschool?sub_confirmation=1)
# - Join the [email newsletter](https://www.dataschool.io/subscribe/)

# ## Table of contents
#
# 1. Show installed versions
# 2. Create an example DataFrame
# 3. Rename columns
# 4. Reverse row order
# 5. Reverse column order
# 6. Select columns by data type
# 7. Convert strings to numbers
# 8. Reduce DataFrame size
# 9. Build a DataFrame from multiple files (row-wise)
# 10. Build a DataFrame from multiple files (column-wise)
# 11. Create a DataFrame from the clipboard
# 12. Split a DataFrame into two random subsets
# 13. Filter a DataFrame by multiple categories
# 14. Filter a DataFrame by largest categories
# 15. Handle missing values
# 16. Split a string into multiple columns
# 17. Expand a Series of lists into a DataFrame
# 18. Aggregate by multiple functions
# 19. Combine the output of an aggregation with a DataFrame
# 20. Select a slice of rows and columns
# 21. Reshape a MultiIndexed Series
# 22. Create a pivot table
# 23. Convert continuous data into categorical data
# 24. Change display options
# 25. Style a DataFrame
# 26. Bonus trick: Profile a DataFrame

# ## Load example datasets

# In[1]:

import pandas as pd
import numpy as np

# In[2]:

drinks = pd.read_csv('http://bit.ly/drinksbycountry')
movies = pd.read_csv('http://bit.ly/imdbratings')
orders = pd.read_csv('http://bit.ly/chiporders', sep='\t')
orders['item_price'] = orders.item_price.str.replace('$', '').astype('float')
stocks = pd.read_csv('http://bit.ly/smallstocks', parse_dates=['Date'])
titanic = pd.read_csv('http://bit.ly/kaggletrain')
ufo = pd.read_csv('http://bit.ly/uforeports', parse_dates=['Time'])

# ## 1. Show installed versions

# Sometimes you need to know the pandas version you're using, especially when reading the pandas documentation. You can show the pandas version by typing:

# In[3]:

pd.__version__

# But if you also need to know the versions of pandas' dependencies, you can use the `show_versions()` function:

# In[4]:

pd.show_versions()

# You can see the versions of Python, pandas, NumPy, matplotlib, and more.
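# If you just need the pandas and NumPy version strings programmatically (for example, to log them alongside your results rather than reading the full `show_versions()` output), a minimal sketch is to use the `__version__` attribute that both libraries expose:

print('pandas:', pd.__version__)   # same string shown by the cell above
print('NumPy: ', np.__version__)   # one of the dependencies listed by show_versions()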
# ## 2. Create an example DataFrame

# Let's say that you want to demonstrate some pandas code. You need an example DataFrame to work with.
#
# There are many ways to do this, but my favorite way is to pass a dictionary to the DataFrame constructor, in which the dictionary keys are the column names and the dictionary values are lists of column values:

# In[5]:

df = pd.DataFrame({'col one':[100, 200], 'col two':[300, 400]})
df

# Now if you need a much larger DataFrame, the above method will require way too much typing. In that case, you can use NumPy's `random.rand()` function, tell it the number of rows and columns, and pass that to the DataFrame constructor:

# In[6]:

pd.DataFrame(np.random.rand(4, 8))

# That's pretty good, but if you also want non-numeric column names, you can coerce a string of letters to a list and then pass that list to the columns parameter:

# In[7]:

pd.DataFrame(np.random.rand(4, 8), columns=list('abcdefgh'))

# As you might guess, your string will need to have the same number of characters as there are columns.

# ## 3. Rename columns

# Let's take a look at the example DataFrame we created in the last trick:

# In[8]:

df

# I prefer to use dot notation to select pandas columns, but that won't work since the column names have spaces. Let's fix this.
#
# The most flexible method for renaming columns is the `rename()` method. You pass it a dictionary in which the keys are the old names and the values are the new names, and you also specify the axis:

# In[9]:

df = df.rename({'col one':'col_one', 'col two':'col_two'}, axis='columns')

# The best thing about this method is that you can use it to rename any number of columns, whether it be just one column or all columns.
#
# Now if you're going to rename all of the columns at once, a simpler method is just to overwrite the columns attribute of the DataFrame:

# In[10]:

df.columns = ['col_one', 'col_two']

# Now if the only thing you're doing is replacing spaces with underscores, an even better method is to use the `str.replace()` method, since you don't have to type out all of the column names:

# In[11]:

df.columns = df.columns.str.replace(' ', '_')

# All three of these methods have the same result, which is to rename the columns so that they don't have any spaces:

# In[12]:

df

# Finally, if you just need to add a prefix or suffix to all of your column names, you can use the `add_prefix()` method...

# In[13]:

df.add_prefix('X_')

# ...or the `add_suffix()` method:

# In[14]:

df.add_suffix('_Y')
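# As a related sketch: `rename()` also accepts a function instead of a dictionary, which is handy when the new names can be computed from the old ones (standard pandas behavior, shown here on the same `df` as a generic "clean up column names" pattern):

df.rename(columns=lambda name: name.strip().lower().replace(' ', '_'))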
# ## 4. Reverse row order

# Let's take a look at the drinks DataFrame:

# In[15]:

drinks.head()

# This is a dataset of average alcohol consumption by country. What if you wanted to reverse the order of the rows?
#
# The most straightforward method is to use the `loc` accessor and pass it `::-1`, which is the same slicing notation used to reverse a Python list:

# In[16]:

drinks.loc[::-1].head()

# What if you also wanted to reset the index so that it starts at zero?
#
# You would use the `reset_index()` method and tell it to drop the old index entirely:

# In[17]:

drinks.loc[::-1].reset_index(drop=True).head()

# As you can see, the rows are in reverse order but the index has been reset to the default integer index.

# ## 5. Reverse column order

# Similar to the previous trick, you can also use `loc` to reverse the left-to-right order of your columns:

# In[18]:

drinks.loc[:, ::-1].head()

# The colon before the comma means "select all rows", and the `::-1` after the comma means "reverse the columns", which is why "country" is now on the right side.

# ## 6. Select columns by data type

# Here are the data types of the drinks DataFrame:

# In[19]:

drinks.dtypes

# Let's say you need to select only the numeric columns. You can use the `select_dtypes()` method:

# In[20]:

drinks.select_dtypes(include='number').head()

# This includes both int and float columns.
#
# You could also use this method to select just the object columns:

# In[21]:

drinks.select_dtypes(include='object').head()

# You can tell it to include multiple data types by passing a list:

# In[22]:

drinks.select_dtypes(include=['number', 'object', 'category', 'datetime']).head()

# You can also tell it to exclude certain data types:

# In[23]:

drinks.select_dtypes(exclude='number').head()

# ## 7. Convert strings to numbers

# Let's create another example DataFrame:

# In[24]:

df = pd.DataFrame({'col_one':['1.1', '2.2', '3.3'],
                   'col_two':['4.4', '5.5', '6.6'],
                   'col_three':['7.7', '8.8', '-']})
df

# These numbers are actually stored as strings, which results in object columns:

# In[25]:

df.dtypes

# In order to do mathematical operations on these columns, we need to convert the data types to numeric. You can use the `astype()` method on the first two columns:

# In[26]:

df.astype({'col_one':'float', 'col_two':'float'}).dtypes

# However, this would have resulted in an error if you tried to use it on the third column, because that column contains a dash to represent zero and pandas doesn't understand how to handle it.
#
# Instead, you can use the `to_numeric()` function on the third column and tell it to convert any invalid input into `NaN` values:

# In[27]:

pd.to_numeric(df.col_three, errors='coerce')

# If you know that the `NaN` values actually represent zeros, you can fill them with zeros using the `fillna()` method:

# In[28]:

pd.to_numeric(df.col_three, errors='coerce').fillna(0)

# Finally, you can apply this function to the entire DataFrame all at once by using the `apply()` method:

# In[29]:

df = df.apply(pd.to_numeric, errors='coerce').fillna(0)
df

# This one line of code accomplishes our goal, because all of the data types have now been converted to float:

# In[30]:

df.dtypes

# ## 8. Reduce DataFrame size

# pandas DataFrames are designed to fit into memory, and so sometimes you need to reduce the DataFrame size in order to work with it on your system.
#
# Here's the size of the drinks DataFrame:

# In[31]:

drinks.info(memory_usage='deep')

# You can see that it currently uses 30.4 KB.
#
# If you're having performance problems with your DataFrame, or you can't even read it into memory, there are two easy steps you can take during the file reading process to reduce the DataFrame size.
#
# The first step is to only read in the columns that you actually need, which we specify with the "usecols" parameter:

# In[32]:

cols = ['beer_servings', 'continent']
small_drinks = pd.read_csv('http://bit.ly/drinksbycountry', usecols=cols)
small_drinks.info(memory_usage='deep')

# By only reading in these two columns, we've reduced the DataFrame size to 13.6 KB.
#
# The second step is to convert any object columns containing categorical data to the category data type, which we specify with the "dtype" parameter:

# In[33]:

dtypes = {'continent':'category'}
smaller_drinks = pd.read_csv('http://bit.ly/drinksbycountry', usecols=cols, dtype=dtypes)
smaller_drinks.info(memory_usage='deep')

# By reading in the continent column as the category data type, we've further reduced the DataFrame size to 2.3 KB.
#
# Keep in mind that the category data type will only reduce memory usage if you have a small number of categories relative to the number of rows.
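# To judge whether a column is a good candidate for the category data type, a quick sketch (using the same drinks DataFrame) is to compare the number of unique values to the number of rows, and the per-column memory usage before and after the conversion:

print(drinks.continent.nunique(), 'unique values across', len(drinks), 'rows')
print('as object:  ', drinks.continent.memory_usage(deep=True), 'bytes')
print('as category:', drinks.continent.astype('category').memory_usage(deep=True), 'bytes')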
# ## 9. Build a DataFrame from multiple files (row-wise)

# Let's say that your dataset is spread across multiple files, but you want to read the dataset into a single DataFrame.
#
# For example, I have a small dataset of stock data in which each CSV file only includes a single day. Here's the first day:

# In[34]:

pd.read_csv('data/stocks1.csv')

# Here's the second day:

# In[35]:

pd.read_csv('data/stocks2.csv')

# And here's the third day:

# In[36]:

pd.read_csv('data/stocks3.csv')

# You could read each CSV file into its own DataFrame, combine them together, and then delete the original DataFrames, but that would be memory inefficient and require a lot of code.
#
# A better solution is to use the built-in glob module:

# In[37]:

from glob import glob

# You can pass a pattern to `glob()`, including wildcard characters, and it will return a list of all files that match that pattern.
#
# In this case, glob is looking in the "data" subdirectory for all CSV files that start with the word "stocks":

# In[38]:

stock_files = sorted(glob('data/stocks*.csv'))
stock_files

# glob returns filenames in an arbitrary order, which is why we sorted the list using Python's built-in `sorted()` function.
#
# We can then use a generator expression to read each of the files using `read_csv()` and pass the results to the `concat()` function, which will concatenate the rows into a single DataFrame:

# In[39]:

pd.concat((pd.read_csv(file) for file in stock_files))

# Unfortunately, there are now duplicate values in the index. To avoid that, we can tell the `concat()` function to ignore the index and instead use the default integer index:

# In[40]:

pd.concat((pd.read_csv(file) for file in stock_files), ignore_index=True)

# ## 10. Build a DataFrame from multiple files (column-wise)

# The previous trick is useful when each file contains rows from your dataset. But what if each file instead contains columns from your dataset?
#
# Here's an example in which the drinks dataset has been split into two CSV files, and each file contains three columns:

# In[41]:

pd.read_csv('data/drinks1.csv').head()

# In[42]:

pd.read_csv('data/drinks2.csv').head()

# Similar to the previous trick, we'll start by using `glob()`:

# In[43]:

drink_files = sorted(glob('data/drinks*.csv'))

# And this time, we'll tell the `concat()` function to concatenate along the columns axis:

# In[44]:

pd.concat((pd.read_csv(file) for file in drink_files), axis='columns').head()

# Now our DataFrame has all six columns.

# ## 11. Create a DataFrame from the clipboard

# Let's say that you have some data stored in an Excel spreadsheet or a [Google Sheet](https://docs.google.com/spreadsheets/d/1ipv_HAykbky8OXUubs9eLL-LQ1rAkexXG61-B4jd0Rc/edit?usp=sharing), and you want to get it into a DataFrame as quickly as possible.
#
# Just select the data and copy it to the clipboard. Then, you can use the `read_clipboard()` function to read it into a DataFrame:

# In[45]:

df = pd.read_clipboard()
df

# Just like the `read_csv()` function, `read_clipboard()` automatically detects the correct data type for each column:

# In[46]:

df.dtypes

# Let's copy one other dataset to the clipboard:

# In[47]:

df = pd.read_clipboard()
df

# Amazingly, pandas has even identified the first column as the index:

# In[48]:

df.index

# Keep in mind that if you want your work to be reproducible in the future, `read_clipboard()` is not the recommended approach.
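# If you do start from the clipboard, one way to keep the analysis reproducible is to immediately save a snapshot of what you pasted and re-read it from disk on future runs. This is just a sketch, and the filename is a made-up example:

df = pd.read_clipboard()
df.to_csv('data/clipboard_snapshot.csv', index=False)  # hypothetical path for the saved copy
df = pd.read_csv('data/clipboard_snapshot.csv')        # later runs can start from this file instead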
# ## 12. Split a DataFrame into two random subsets

# Let's say that you want to split a DataFrame into two parts, randomly assigning 75% of the rows to one DataFrame and the other 25% to a second DataFrame.
#
# For example, we have a DataFrame of movie ratings with 979 rows:

# In[49]:

len(movies)

# We can use the `sample()` method to randomly select 75% of the rows and assign them to the "movies_1" DataFrame:

# In[50]:

movies_1 = movies.sample(frac=0.75, random_state=1234)

# Then we can use the `drop()` method to drop all rows that are in "movies_1" and assign the remaining rows to "movies_2":

# In[51]:

movies_2 = movies.drop(movies_1.index)

# You can see that the total number of rows is correct:

# In[52]:

len(movies_1) + len(movies_2)

# And you can see from the index that every movie is in either "movies_1":

# In[53]:

movies_1.index.sort_values()

# ...or "movies_2":

# In[54]:

movies_2.index.sort_values()

# Keep in mind that this approach will not work if your index values are not unique.

# ## 13. Filter a DataFrame by multiple categories

# Let's take a look at the movies DataFrame:

# In[55]:

movies.head()

# One of the columns is genre:

# In[56]:

movies.genre.unique()

# If we wanted to filter the DataFrame to only show movies with the genre Action or Drama or Western, we could use multiple conditions separated by the "or" operator:

# In[57]:

movies[(movies.genre == 'Action') | (movies.genre == 'Drama') | (movies.genre == 'Western')].head()

# However, you can actually rewrite this code more clearly by using the `isin()` method and passing it a list of genres:

# In[58]:

movies[movies.genre.isin(['Action', 'Drama', 'Western'])].head()

# And if you want to reverse this filter, so that you are excluding (rather than including) those three genres, you can put a tilde in front of the condition:

# In[59]:

movies[~movies.genre.isin(['Action', 'Drama', 'Western'])].head()

# This works because the tilde is Python's inversion ("not") operator, which pandas applies element-wise to the boolean Series.

# ## 14. Filter a DataFrame by largest categories

# Let's say that you needed to filter the movies DataFrame by genre, but only include the 3 largest genres.
#
# We'll start by taking the `value_counts()` of genre and saving it as a Series called counts:

# In[60]:

counts = movies.genre.value_counts()
counts

# The Series method `nlargest()` makes it easy to select the 3 largest values in this Series:

# In[61]:

counts.nlargest(3)

# And all we actually need from this Series is the index:

# In[62]:

counts.nlargest(3).index

# Finally, we can pass the index object to `isin()`, and it will be treated like a list of genres:

# In[63]:

movies[movies.genre.isin(counts.nlargest(3).index)].head()

# Thus, only Drama and Comedy and Action movies remain in the DataFrame.
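# A related sketch: instead of dropping the smaller genres, you can keep the 3 largest and lump everything else into a single bucket, reusing the same index from `nlargest()` (the 'Other' label is just an example name):

top_genres = movies.genre.value_counts().nlargest(3).index
movies.genre.where(movies.genre.isin(top_genres), other='Other').value_counts()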
# ## 15. Handle missing values

# Let's look at a dataset of UFO sightings:

# In[64]:

ufo.head()

# You'll notice that some of the values are missing.
#
# To find out how many values are missing in each column, you can use the `isna()` method and then take the `sum()`:

# In[65]:

ufo.isna().sum()

# `isna()` generated a DataFrame of True and False values, and `sum()` converted all of the True values to 1 and added them up.
#
# Similarly, you can find out the percentage of values that are missing by taking the `mean()` of `isna()`:

# In[66]:

ufo.isna().mean()

# If you want to drop the columns that have any missing values, you can use the `dropna()` method:

# In[67]:

ufo.dropna(axis='columns').head()

# Or if you want to drop columns in which more than 10% of the values are missing, you can set a threshold for `dropna()`:

# In[68]:

ufo.dropna(thresh=len(ufo)*0.9, axis='columns').head()

# `len(ufo)` returns the total number of rows, and then we multiply that by 0.9 to tell pandas to only keep columns in which at least 90% of the values are not missing.

# ## 16. Split a string into multiple columns

# Let's create another example DataFrame:

# In[69]:

df = pd.DataFrame({'name':['John Arthur Doe', 'Jane Ann Smith'],
                   'location':['Los Angeles, CA', 'Washington, DC']})
df

# What if we wanted to split the "name" column into three separate columns, for first, middle, and last name? We would use the `str.split()` method and tell it to split on a space character and expand the results into a DataFrame:

# In[70]:

df.name.str.split(' ', expand=True)

# These three columns can actually be saved to the original DataFrame in a single assignment statement:

# In[71]:

df[['first', 'middle', 'last']] = df.name.str.split(' ', expand=True)
df

# What if we wanted to split a string, but only keep one of the resulting columns? For example, let's split the location column on "comma space":

# In[72]:

df.location.str.split(', ', expand=True)

# If we only cared about saving the city name in column 0, we can just select that column and save it to the DataFrame:

# In[73]:

df['city'] = df.location.str.split(', ', expand=True)[0]
df

# ## 17. Expand a Series of lists into a DataFrame

# Let's create another example DataFrame:

# In[74]:

df = pd.DataFrame({'col_one':['a', 'b', 'c'],
                   'col_two':[[10, 40], [20, 50], [30, 60]]})
df

# There are two columns, and the second column contains regular Python lists of integers.
#
# If we wanted to expand the second column into its own DataFrame, we can use the `apply()` method on that column and pass it the Series constructor:

# In[75]:

df_new = df.col_two.apply(pd.Series)
df_new

# And by using the `concat()` function, you can combine the original DataFrame with the new DataFrame:

# In[76]:

pd.concat([df, df_new], axis='columns')

# ## 18. Aggregate by multiple functions

# Let's look at a DataFrame of orders from the Chipotle restaurant chain:

# In[77]:

orders.head(10)

# Each order has an order_id and consists of one or more rows. To figure out the total price of an order, you sum the item_price for that order_id. For example, here's the total price of order number 1:

# In[78]:

orders[orders.order_id == 1].item_price.sum()

# If you wanted to calculate the total price of every order, you would `groupby()` order_id and then take the sum of item_price for each group:

# In[79]:

orders.groupby('order_id').item_price.sum().head()

# However, you're not actually limited to aggregating by a single function such as `sum()`. To aggregate by multiple functions, you use the `agg()` method and pass it a list of functions such as `sum()` and `count()`:

# In[80]:

orders.groupby('order_id').item_price.agg(['sum', 'count']).head()

# That gives us the total price of each order as well as the number of items in each order.
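# As a small sketch, newer pandas versions (0.25 and later) also support "named aggregation", which lets you choose the output column names while aggregating; the names total_price and num_items below are just examples:

orders.groupby('order_id').item_price.agg(total_price='sum', num_items='count').head()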
# ## 19. Combine the output of an aggregation with a DataFrame

# Let's take another look at the orders DataFrame:

# In[81]:

orders.head(10)

# What if we wanted to create a new column listing the total price of each order? Recall that we calculated the total price using the `sum()` method:

# In[82]:

orders.groupby('order_id').item_price.sum().head()

# `sum()` is an aggregation function, which means that it returns a reduced version of the input data.
#
# In other words, the output of the `sum()` function:

# In[83]:

len(orders.groupby('order_id').item_price.sum())

# ...is smaller than the input to the function:

# In[84]:

len(orders.item_price)

# The solution is to use the `transform()` method, which performs the same calculation but returns output data that is the same shape as the input data:

# In[85]:

total_price = orders.groupby('order_id').item_price.transform('sum')
len(total_price)

# We'll store the results in a new DataFrame column called total_price:

# In[86]:

orders['total_price'] = total_price
orders.head(10)

# As you can see, the total price of each order is now listed on every single line.
#
# That makes it easy to calculate the percentage of the total order price that each line represents:

# In[87]:

orders['percent_of_total'] = orders.item_price / orders.total_price
orders.head(10)

# ## 20. Select a slice of rows and columns

# Let's take a look at another dataset:

# In[88]:

titanic.head()

# This is the famous Titanic dataset, which shows information about passengers on the Titanic and whether or not they survived.
#
# If you wanted a numerical summary of the dataset, you would use the `describe()` method:

# In[89]:

titanic.describe()

# However, the resulting DataFrame might be displaying more information than you need.
#
# If you wanted to filter it to only show the "five-number summary", you can use the `loc` accessor and pass it a slice of the "min" through the "max" row labels:

# In[90]:

titanic.describe().loc['min':'max']

# And if you're not interested in all of the columns, you can also pass it a slice of column labels:

# In[91]:

titanic.describe().loc['min':'max', 'Pclass':'Parch']

# ## 21. Reshape a MultiIndexed Series

# The Titanic dataset has a "Survived" column made up of ones and zeros, so you can calculate the overall survival rate by taking a mean of that column:

# In[92]:

titanic.Survived.mean()

# If you wanted to calculate the survival rate by a single category such as "Sex", you would use a `groupby()`:

# In[93]:

titanic.groupby('Sex').Survived.mean()

# And if you wanted to calculate the survival rate across two different categories at once, you would `groupby()` both of those categories:

# In[94]:

titanic.groupby(['Sex', 'Pclass']).Survived.mean()

# This shows the survival rate for every combination of Sex and Passenger Class. It's stored as a MultiIndexed Series, meaning that it has multiple index levels to the left of the actual data.
#
# It can be hard to read and interact with data in this format, so it's often more convenient to reshape a MultiIndexed Series into a DataFrame by using the `unstack()` method:

# In[95]:

titanic.groupby(['Sex', 'Pclass']).Survived.mean().unstack()

# This DataFrame contains the same exact data as the MultiIndexed Series, except that now you can interact with it using familiar DataFrame methods.
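# By default, `unstack()` moves the innermost index level (Pclass here) into the columns. As a quick sketch, you can pass a level name to choose which level becomes the columns instead:

titanic.groupby(['Sex', 'Pclass']).Survived.mean().unstack(level='Sex')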
# ## 22. Create a pivot table

# If you often create DataFrames like the one above, you might find it more convenient to use the `pivot_table()` method instead:

# In[96]:

titanic.pivot_table(index='Sex', columns='Pclass', values='Survived', aggfunc='mean')

# With a pivot table, you directly specify the index, the columns, the values, and the aggregation function.
#
# An added benefit of a pivot table is that you can easily add row and column totals by setting `margins=True`:

# In[97]:

titanic.pivot_table(index='Sex', columns='Pclass', values='Survived', aggfunc='mean', margins=True)

# This shows the overall survival rate as well as the survival rate by Sex and Passenger Class.
#
# Finally, you can create a cross-tabulation just by changing the aggregation function from "mean" to "count":

# In[98]:

titanic.pivot_table(index='Sex', columns='Pclass', values='Survived', aggfunc='count', margins=True)

# This shows the number of records that appear in each combination of categories.

# ## 23. Convert continuous data into categorical data

# Let's take a look at the Age column from the Titanic dataset:

# In[99]:

titanic.Age.head(10)

# It's currently continuous data, but what if you wanted to convert it into categorical data?
#
# One solution would be to label the age ranges, such as "child", "young adult", and "adult". The best way to do this is by using the `cut()` function:

# In[100]:

pd.cut(titanic.Age, bins=[0, 18, 25, 99], labels=['child', 'young adult', 'adult']).head(10)

# This assigned each value to a bin with a label. Ages 0 to 18 were assigned the label "child", ages 18 to 25 were assigned the label "young adult", and ages 25 to 99 were assigned the label "adult" (each bin includes its right edge but not its left).
#
# Notice that the data type is now "category", and the categories are automatically ordered.

# ## 24. Change display options

# Let's take another look at the Titanic dataset:

# In[101]:

titanic.head()

# Notice that the Age column has 1 decimal place and the Fare column has 4 decimal places. What if you wanted to standardize the display to use 2 decimal places?
#
# You can use the `set_option()` function:

# In[102]:

pd.set_option('display.float_format', '{:.2f}'.format)

# The first argument is the name of the option, and the second argument is the new value, which in this case is the `format()` method of a Python format string.

# In[103]:

titanic.head()

# You can see that Age and Fare are now using 2 decimal places. Note that this did not change the underlying data, only the display of the data.
#
# You can also reset any option back to its default:

# In[104]:

pd.reset_option('display.float_format')

# There are many more options you can specify in a similar way.
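# If you only want an option changed for a single block of code, a minimal sketch is to use `option_context()`, which applies the setting temporarily and restores the previous value afterwards:

with pd.option_context('display.float_format', '{:.2f}'.format):
    print(titanic.describe())  # formatted with 2 decimal places only inside this block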
# ## 25. Style a DataFrame

# The previous trick is useful if you want to change the display of your entire notebook. However, a more flexible and powerful approach is to define the style of a particular DataFrame.
#
# Let's return to the stocks DataFrame:

# In[105]:

stocks

# We can create a dictionary of format strings that specifies how each column should be formatted:

# In[106]:

format_dict = {'Date':'{:%m/%d/%y}', 'Close':'${:.2f}', 'Volume':'{:,}'}

# And then we can pass it to the DataFrame's `style.format()` method:

# In[107]:

stocks.style.format(format_dict)

# Notice that the Date is now in month-day-year format, the closing price has a dollar sign, and the Volume has commas.
#
# We can apply more styling by chaining additional methods:

# In[108]:

(stocks.style.format(format_dict)
 .hide_index()
 .highlight_min('Close', color='red')
 .highlight_max('Close', color='lightgreen')
)

# We've now hidden the index, highlighted the minimum Close value in red, and highlighted the maximum Close value in green.
#
# Here's another example of DataFrame styling:

# In[109]:

(stocks.style.format(format_dict)
 .hide_index()
 .background_gradient(subset='Volume', cmap='Blues')
)

# The Volume column now has a background gradient to help you easily identify high and low values.
#
# And here's one final example:

# In[110]:

(stocks.style.format(format_dict)
 .hide_index()
 .bar('Volume', color='lightblue', align='zero')
 .set_caption('Stock Prices from October 2016')
)

# There's now a bar chart within the Volume column and a caption above the DataFrame.
#
# Note that there are many more options for how you can style your DataFrame.

# ## Bonus: Profile a DataFrame

# Let's say that you've got a new dataset, and you want to quickly explore it without too much work. There's a separate package called [pandas-profiling](https://github.com/pandas-profiling/pandas-profiling) that is designed for this purpose.
#
# First you have to install it using conda or pip. Once that's done, you import `pandas_profiling`:

# In[111]:

import pandas_profiling

# Then, simply run the `ProfileReport()` function and pass it any DataFrame. It returns an interactive HTML report:
#
# - The first section is an overview of the dataset and a list of possible issues with the data.
# - The next section gives a summary of each column. You can click "toggle details" for even more information.
# - The third section shows a heatmap of the correlation between columns.
# - And the fourth section shows the head of the dataset.

# In[112]:

pandas_profiling.ProfileReport(titanic)

# ### Want more tricks? Watch [21 more pandas tricks](https://www.youtube.com/watch?v=tWFQqaRtSQA&list=PL5-da3qGB5ICCsgW1MxlZ0Hq8LL5U3u9y&index=36) or [Read the notebook](https://nbviewer.org/github/justmarkham/pandas-videos/blob/master/21_more_pandas_tricks.ipynb)
#
# © 2019 [Data School](https://www.dataschool.io). All rights reserved.