Pandas Cheat Sheet: Guide
First, it may be a good idea to bookmark this page, which will be easy to search with Ctrl+F when you're looking for something specific. However, we've also created a PDF version of this cheat sheet that you can download from here in case you'd like to print it out.
In this cheat sheet, we'll use the following shorthand:
df
| Any pandas DataFrame object
s
| Any pandas Series object
As you scroll down, you'll see we've organized related commands using subheadings so that you can quickly search for and find the correct syntax based on the task you're trying to complete.
Also, a quick reminder — to make use of the commands listed below, you'll need to first import the relevant libraries like so:
import pandas as pd
import numpy as np
Importing Data
Use these commands to import data from a variety of different sources and formats.
pd.read_csv(filename)
| From a CSV file
pd.read_table(filename)
| From a delimited text file (like TSV)
pd.read_excel(filename)
| From an Excel file
pd.read_sql(query, connection_object)
| Read from a SQL table/database
pd.read_json(json_string)
| Read from a JSON-formatted string, URL, or file
pd.read_html(url)
| Parses an HTML URL, string, or file and extracts tables to a list of DataFrames
pd.read_clipboard()
| Takes the contents of your clipboard and passes it to read_table()
pd.DataFrame(dict)
| From a dict, keys for columns names, values for data as lists
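For example, a minimal read sketch (the filename sales.csv and the column names are made up for illustration):
import pandas as pd

# Load a hypothetical CSV file into a DataFrame
df = pd.read_csv('sales.csv')

# Most readers accept extra options, e.g. a custom separator or a subset of columns
df = pd.read_csv('sales.csv', sep=',', usecols=['date', 'amount'])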
Exporting Data
Use these commands to export a DataFrame to CSV, .xlsx, SQL, or JSON.
df.to_csv(filename)
| Write to a CSV file
df.to_excel(filename)
| Write to an Excel file
df.to_sql(table_name, connection_object)
| Write to a SQL table
df.to_json(filename)
| Write to a file in JSON format
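As a quick sketch, writing the same small DataFrame to a few formats (the output filenames are arbitrary examples):
import pandas as pd

df = pd.DataFrame({'name': ['Ada', 'Bob'], 'score': [91, 84]})

# index=False skips writing the row index, which is usually what you want for CSV
df.to_csv('scores.csv', index=False)

# Writing to .xlsx needs an Excel engine such as openpyxl installed
df.to_excel('scores.xlsx', index=False)

df.to_json('scores.json')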
Create Test Objects
These commands can be useful for creating test objects.
pd.DataFrame(np.random.rand(20,5))
| 5 columns and 20 rows of random floats
pd.Series(my_list)
| Create a series from an iterable my_list
df.index = pd.date_range('1900/1/30', periods=df.shape[0])
| Add a date index
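Putting those commands together, a small sketch that builds a throwaway DataFrame with a date index:
import pandas as pd
import numpy as np

# 20 rows x 5 columns of random floats
df = pd.DataFrame(np.random.rand(20, 5))

# A Series from a plain Python list
s = pd.Series([1, 2, 3, 4])

# Replace the default integer index with consecutive dates
df.index = pd.date_range('1900/1/30', periods=df.shape[0])
print(df.head())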
Viewing/Inspecting Data
Use these commands to take a look at specific sections of your pandas DataFrame or Series.
df.head(n)
| First n rows of the DataFrame
df.tail(n)
| Last n rows of the DataFrame
df.shape
| Number of rows and columns
df.info()
| Index, datatype, and memory information
df.describe()
| Summary statistics for numerical columns
s.value_counts(dropna=False)
| View unique values and counts
df.apply(pd.Series.value_counts)
| Unique values and counts for all columns
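A short inspection sketch, assuming df is a DataFrame you've already loaded (the toy data below is just for illustration):
import pandas as pd
import numpy as np

df = pd.DataFrame({'city': ['NY', 'NY', 'LA'], 'temp': [20.1, 21.5, np.nan]})

print(df.shape)       # (rows, columns)
print(df.head(2))     # first 2 rows
df.info()             # index, dtypes, and memory usage
print(df.describe())  # summary statistics for numeric columns
print(df['city'].value_counts(dropna=False))  # unique values and their counts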
Selection
Use these commands to select a specific subset of your data.
df[col]
| Returns column with label col as a Series
df[[col1, col2]]
| Returns columns as a new DataFrame
s.iloc[0]
| Selection by position
s.loc['index_one']
| Selection by index
df.iloc[0,:]
| First row
df.iloc[0,0]
| First element of first column
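With a small hypothetical DataFrame, the selection commands look like this:
import pandas as pd

df = pd.DataFrame(
    {'name': ['Ada', 'Bob', 'Cleo'], 'age': [36, 41, 29]},
    index=['index_one', 'index_two', 'index_three'],
)

ages = df['age']                 # one column as a Series
subset = df[['name', 'age']]     # several columns as a new DataFrame
by_label = df.loc['index_one']   # selection by index label
first_row = df.iloc[0, :]        # selection by position
first_cell = df.iloc[0, 0]       # first element of the first column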
Data Cleaning
Use these commands to perform a variety of data cleaning tasks.
df.columns = ['a','b','c']
| Rename columns
pd.isnull()
| Checks for null values, returns a Boolean array
pd.notnull()
| Opposite of pd.isnull()
df.dropna()
| Drop all rows that contain null values
df.dropna(axis=1)
| Drop all columns that contain null values
df.dropna(axis=1,thresh=n)
| Drop all columns that have fewer than n non-null values
df.fillna(x)
| Replace all null values with x
s.fillna(s.mean())
| Replace all null values with the mean (mean can be replaced with almost any function from the Statistics section)
s.astype(float)
| Convert the datatype of the series to float
s.replace(1,'one')
| Replace all values equal to 1 with 'one'
s.replace([1,3],['one','three'])
| Replace all 1 with 'one' and 3 with 'three'
df.rename(columns=lambda x: x + 1)
| Mass renaming of columns
df.rename(columns={'old_name': 'new_name'})
| Selective renaming
df.set_index('column_one')
| Change the index
df.rename(index=lambda x: x + 1)
| Mass renaming of index
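A cleaning sketch that chains a few of these together (the column names and fill strategy are just examples):
import pandas as pd
import numpy as np

df = pd.DataFrame({'a': [1, np.nan, 3], 'b': [np.nan, np.nan, 6], 'c': [7, 8, 9]})

df.columns = ['x', 'y', 'z']              # rename all columns at once
df['x'] = df['x'].fillna(df['x'].mean())  # fill nulls in one column with its mean
df = df.dropna(axis=1, thresh=2)          # keep columns with at least 2 non-null values
df = df.rename(columns={'z': 'score'})    # selective rename
df = df.set_index('score')                # use a column as the index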
Filter, Sort, and Groupby
Use these commands to filter, sort, and group your data.
df[df[col] > 0.5]
| Rows where the column col is greater than 0.5
df[(df[col] > 0.5) & (df[col] < 0.7)]
| Rows where 0.7 > col > 0.5
df.sort_values(col1)
| Sort values by col1 in ascending order
df.sort_values(col2,ascending=False)
| Sort values by col2 in descending order
df.sort_values([col1,col2],ascending=[True,False])
| Sort values by col1 in ascending order then col2 in descending order
df.groupby(col)
| Returns a groupby object for values from one column
df.groupby([col1,col2])
| Returns a groupby object for values from multiple columns
df.groupby(col1)[col2].mean()
| Returns the mean of the values in col2, grouped by the values in col1 (mean can be replaced with almost any function from the Statistics section)
df.pivot_table(index=col1,values=[col2,col3],aggfunc='mean')
| Create a pivot table that groups by col1 and calculates the mean of col2 and col3
df.groupby(col1).agg(np.mean)
| Find the average across all columns for every unique col1 group
df.apply(np.mean)
| Apply the function np.mean() across each column
df.apply(np.max,axis=1)
| Apply the function np.max() across each row
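A short sketch tying filtering, sorting, and grouping together (the team and score columns are invented for illustration):
import pandas as pd

df = pd.DataFrame({
    'team': ['red', 'red', 'blue', 'blue'],
    'score': [0.4, 0.9, 0.6, 0.55],
})

high = df[df['score'] > 0.5]                          # filter rows on a condition
mid = df[(df['score'] > 0.5) & (df['score'] < 0.7)]   # combine conditions with &
ranked = df.sort_values('score', ascending=False)     # sort descending
team_means = df.groupby('team')['score'].mean()       # mean score per team
summary = df.pivot_table(index='team', values='score', aggfunc='mean')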
Join/Combine
Use these commands to combine multiple dataframes into a single one.
df1.append(df2)
| Add the rows of df2 to the end of df1 (columns should be identical)
pd.concat([df1, df2],axis=1)
| Add the columns of df2 to the end of df1 (rows should be identical)
df1.join(df2,on=col1,how='inner')
| SQL-style join of the columns in df1 with the columns of df2 where the rows for col1 have identical values. 'how' can be one of 'left', 'right', 'outer', 'inner'
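A combining sketch with two made-up DataFrames. Note that DataFrame.append was removed in pandas 2.0, so pd.concat is the safer way to stack rows; merge is shown here for the key-based join because join matches on the right frame's index by default:
import pandas as pd

df1 = pd.DataFrame({'id': [1, 2], 'value': ['a', 'b']})
df2 = pd.DataFrame({'id': [3, 4], 'value': ['c', 'd']})

# Stack rows (the modern equivalent of df1.append(df2))
rows = pd.concat([df1, df2], ignore_index=True)

# Place the two frames side by side (rows should line up)
cols = pd.concat([df1, df2], axis=1)

# SQL-style join on a shared key column (lookup is a hypothetical reference table)
lookup = pd.DataFrame({'value': ['a', 'b', 'c', 'd'], 'label': ['A', 'B', 'C', 'D']})
joined = df1.merge(lookup, on='value', how='inner')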
Statistics
Use these commands to perform various statistical tests. (These can all be applied to a series as well.)
df.describe()
| Summary statistics for numerical columns
df.mean()
| Returns the mean of all columns
df.corr()
| Returns the correlation between columns in a DataFrame
df.count()
| Returns the number of non-null values in each DataFrame column
df.max()
| Returns the highest value in each column
df.min()
| Returns the lowest value in each column
df.median()
| Returns the median of each column
df.std()
| Returns the standard deviation of each column
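As a sketch, the summary methods applied to a tiny numeric DataFrame:
import pandas as pd

df = pd.DataFrame({'a': [1.0, 2.0, 3.0], 'b': [10.0, 20.0, 40.0]})

print(df.describe())  # count, mean, std, min, quartiles, and max per column
print(df.mean())      # mean of each column
print(df.corr())      # pairwise correlation between columns
print(df.count())     # non-null count per column
print(df.std())       # standard deviation of each column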