Introduction to Pandas

Pandas is a library providing high-performance, easy-to-use data structures and data analysis tools. The core of pandas is its DataFrame, which is essentially a table of data. Pandas provides easy and powerful ways to import data from a variety of sources and to export it to just as many. It is also explicitly designed to handle missing data elegantly, which is a very common problem in real-world data.

The official pandas documentation is very comprehensive and you will find answers to a lot of questions there; however, it can sometimes be hard to find the right page. Don't be afraid to use Google to find help.

Pandas has a standard import convention which you will see used in a lot of documentation, so we will follow it in this course:

In [1]:
import pandas as pd
from pandas import Series, DataFrame

Series

The simplest of pandas' data structures is the Series. It is a one-dimensional list-like structure. Let's create one from a list:

In [2]:
Series([14, 7, 3, -7, 8])
Out[2]:
0    14
1     7
2     3
3    -7
4     8
dtype: int64

There are three main components to this output. The first column (0, 1, 2, etc.) is the index; by default this numbers each row, starting from zero. The second column is our data, stored in the same order we entered it in our list. Finally, at the bottom is the dtype, which stands for 'data type' and tells us that all our data is stored as 64-bit integers. Usually you can ignore the dtype until you start doing more advanced things.
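
These components are available directly as attributes of the Series. A quick sketch (s0 is just a hypothetical name for the Series above; the exact reprs may vary between pandas versions):

s0 = Series([14, 7, 3, -7, 8])
s0.index   # the row labels, e.g. RangeIndex(start=0, stop=5, step=1)
s0.values  # the data as a NumPy array: array([14,  7,  3, -7,  8])
s0.dtype   # dtype('int64')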

In the first example above we allowed pandas to automatically create an index for our Series (this is the 0, 1, 2, etc. in the left column) but often you will want to specify one yourself:

In [3]:
s = Series([14, 7, 3, -7, 8], index=['a', 'b', 'c', 'd', 'e'])
print(s)
a    14
b     7
c     3
d    -7
e     8
dtype: int64

We can use this index to retrieve individual rows

In [4]:
s['a']
Out[4]:
14

to replace values in the series

In [5]:
s['c'] = -1

or to get a set of rows

In [6]:
s[['a', 'c', 'd']]
Out[6]:
a    14
c    -1
d    -7
dtype: int64

Exercise 1

  • Create a Pandas Series with 10 or so elements where the indices are years and the values are numbers.
  • Experiment with retrieving elements from the Series.
  • Try making another Series with duplicate values in the index, what happens when you access those elements?
  • How does a Pandas Series differ from a Python list or dict?
In [7]:
# Answer

ex1_1 = Series([4, 7.1, 7.3, 7.8, 8.1], index=[2000, 2001, 2002, 2003, 2004])

ex1_1
Out[7]:
2000    4.0
2001    7.1
2002    7.3
2003    7.8
2004    8.1
dtype: float64
In [8]:
# Answer

ex1_1[2002]
Out[8]:
7.2999999999999998
In [9]:
# Answer

ex1_1[[2001, 2004]]
Out[9]:
2001    7.1
2004    8.1
dtype: float64
In [10]:
# Answer

ex1_2 = Series([4, 7.1, 7.3, 7.8, 8.1], index=[2000, 2001, 2004, 2001, 2004])

ex1_2
Out[10]:
2000    4.0
2001    7.1
2004    7.3
2001    7.8
2004    8.1
dtype: float64
In [11]:
# Answer

ex1_2[2001]
Out[11]:
2001    7.1
2001    7.8
dtype: float64

Series operations

A Series is list-like in the sense that it is an ordered set of values. It is also dict-like, since its entries can be accessed via key lookup. One very important way in which it differs is that it allows operations to be applied over the whole Series in one go, a technique often referred to as 'broadcasting'.
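
For example, the dict-like side means that key-style membership tests and conversion to a real dict work much as you would expect (a small sketch using the Series s from earlier):

'a' in s     # True: index labels behave like dict keys
s.to_dict()  # e.g. {'a': 14, 'b': 7, 'c': -1, 'd': -7, 'e': 8}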

A simple example is wanting to double the value of every entry in a set of data. In standard Python, you might have a list like

In [12]:
my_list = [3, 6, 8, 4, 10]

If you wanted to double every entry you might try simply multiplying the list by 2:

In [13]:
my_list * 2
Out[13]:
[3, 6, 8, 4, 10, 3, 6, 8, 4, 10]

but as you can see, that simply duplicated the elements. Instead you would have to use a for loop or a list comprehension:

In [14]:
[i * 2 for i in my_list]
Out[14]:
[6, 12, 16, 8, 20]

With a pandas Series, however, you can perform bulk mathematical operations to the whole series in one go:

In [15]:
my_series = Series(my_list)
print(my_series)
0     3
1     6
2     8
3     4
4    10
dtype: int64
In [16]:
my_series * 2
Out[16]:
0     6
1    12
2    16
3     8
4    20
dtype: int64

As well as bulk modifications, you can perform bulk selections by putting more complex statements in the square brackets:

In [17]:
s[s < 0]  # All negative entries
Out[17]:
c   -1
d   -7
dtype: int64
In [18]:
s[(s * 2) > 4]  # All entries which, when doubled, are greater than 4
Out[18]:
a    14
b     7
e     8
dtype: int64

These operations work because the Series index selection can be passed a series of True and False values which it then uses to filter the result:

In [19]:
(s * 2) > 4
Out[19]:
a     True
b     True
c    False
d    False
e     True
dtype: bool

Here you can see that the rows a, b and e are True while the others are False. Passing this to s[...] will only show rows that are True.
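
In fact, any sequence of booleans of the right length will do, so you could build the mask by hand (a minimal sketch):

mask = [True, True, False, False, True]  # one boolean per row
s[mask]  # keeps only rows a, b and e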

Multi-Series operations

It is also possible to perform operations between two Series objects:

In [20]:
s2 = Series([23, 5, 34, 7, 5])
s3 = Series([7, 6, 5, 4, 3])
s2 - s3
Out[20]:
0    16
1    -1
2    29
3     3
4     2
dtype: int64

Exercise 2

  • Create two Series objects of equal length with no specified index and containing any values you like. Perform some mathematical operations on them and experiment to make sure it works how you think.
  • What happens when you perform an operation on two Series which have different lengths? How does this change when you give the Series some indices?
  • Using the Series from the first exercise with the years for the index, select all entries with even-numbered years. Also, select all those with odd-numbered years.
In [21]:
# Answer

ex2_1a = Series([1,3,5,7,4,3])
ex2_1b = Series([3,7,9,4,2,4])

ex2_1a + ex2_1b
Out[21]:
0     4
1    10
2    14
3    11
4     6
5     7
dtype: int64
In [22]:
# Answer

ex2_2a = Series([1,3,5,7,4,3])
ex2_2b = Series([3,7,9,4])

ex2_2a + ex2_2b
Out[22]:
0     4.0
1    10.0
2    14.0
3    11.0
4     NaN
5     NaN
dtype: float64
In [23]:
# Answer

ex2_2a = Series([1,3,5,7,4,3], index=[1,2,3,4,5,6])
ex2_2b = Series([3,7,9,4], index=[1,3,5,7])

ex2_2a + ex2_2b
Out[23]:
1     4.0
2     NaN
3    12.0
4     NaN
5    13.0
6     NaN
7     NaN
dtype: float64
In [24]:
ex1_1[ex1_1.index % 2 == 0]
Out[24]:
2000    4.0
2002    7.3
2004    8.1
dtype: float64
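
The odd-numbered years can be selected with the same pattern, just testing for a remainder of 1:

ex1_1[ex1_1.index % 2 == 1]  # 2001 and 2003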

DataFrame

While you can think of the Series as a one-dimensional list of data, pandas' DataFrame is a two-dimensional table of data. You can think of each column in the table as being a Series.

In [25]:
data = {'city': ['Paris', 'Paris', 'Paris', 'Paris',
                 'London', 'London', 'London', 'London',
                 'Rome', 'Rome', 'Rome', 'Rome'],
        'year': [2001, 2008, 2009, 2010,
                 2001, 2006, 2011, 2015,
                 2001, 2006, 2009, 2012],
        'pop': [2.148, 2.211, 2.234, 2.244,
                7.322, 7.657, 8.174, 8.615,
                2.547, 2.627, 2.734, 2.627]}
df = DataFrame(data)

This has created a DataFrame from the dictionary data. The keys will become the column headers and the values will be the values in each column. As with the Series, an index will be created automatically.
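
The row index and the column labels are both available as attributes of the DataFrame (a quick sketch; the exact reprs vary between pandas versions):

df.index    # e.g. RangeIndex(start=0, stop=12, step=1)
df.columns  # e.g. Index(['city', 'pop', 'year'], dtype='object')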

In [26]:
df
Out[26]:
      city    pop  year
0    Paris  2.148  2001
1    Paris  2.211  2008
2    Paris  2.234  2009
3    Paris  2.244  2010
4   London  7.322  2001
5   London  7.657  2006
6   London  8.174  2011
7   London  8.615  2015
8     Rome  2.547  2001
9     Rome  2.627  2006
10    Rome  2.734  2009
11    Rome  2.627  2012

Or, if you just want a peek at the data, you can grab the first few rows with:

In [27]:
df.head(3)
Out[27]:
    city    pop  year
0  Paris  2.148  2001
1  Paris  2.211  2008
2  Paris  2.234  2009
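
There is a matching tail() method if you would rather peek at the end of the data:

df.tail(3)  # the last three rows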

Since we passed in a dictionary to the DataFrame constructor, the order of the columns will not necessarily match the order in which you defined them. To enforce a certain order, you can pass a columns argument to the constructor giving a list of the columns in the order you want them:

In [28]:
DataFrame(data, columns=['year', 'city', 'pop'])
Out[28]:
    year    city    pop
0   2001   Paris  2.148
1   2008   Paris  2.211
2   2009   Paris  2.234
3   2010   Paris  2.244
4   2001  London  7.322
5   2006  London  7.657
6   2011  London  8.174
7   2015  London  8.615
8   2001    Rome  2.547
9   2006    Rome  2.627
10  2009    Rome  2.734
11  2012    Rome  2.627

When we accessed elements from a Series object, it selected elements by row. By default, however, DataFrames index primarily by column. You can access any column directly by using square brackets or by named attributes:

In [29]:
df['year']
Out[29]:
0     2001
1     2008
2     2009
3     2010
4     2001
5     2006
6     2011
7     2015
8     2001
9     2006
10    2009
11    2012
Name: year, dtype: int64
In [30]:
df.city
Out[30]:
0      Paris
1      Paris
2      Paris
3      Paris
4     London
5     London
6     London
7     London
8       Rome
9       Rome
10      Rome
11      Rome
Name: city, dtype: object

Accessing a column like this returns a Series which will act in the same way as those we were using earlier.

Note that there is one additional part to this output, Name: city. Pandas has remembered that this Series was created from the 'city' column in the DataFrame.
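
That name is also available as an attribute of the Series itself:

df.city.name  # 'city'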

In [31]:
type(df.city)
Out[31]:
pandas.core.series.Series
In [32]:
df.city == 'Paris'
Out[32]:
0      True
1      True
2      True
3      True
4     False
5     False
6     False
7     False
8     False
9     False
10    False
11    False
Name: city, dtype: bool

This has created a new Series which has True set where the city is Paris and False elsewhere.

We can use boolean Series like this to filter the DataFrame as a whole. df.city == 'Paris' has returned a Series containing booleans. Passing it back into df as an indexing operation will use it to filter based on the 'city' column.

In [33]:
df[df.city == 'Paris']
Out[33]:
    city    pop  year
0  Paris  2.148  2001
1  Paris  2.211  2008
2  Paris  2.234  2009
3  Paris  2.244  2010

You can then carry on and grab another column after that filter:

In [34]:
df[df.city == 'Paris'].year
Out[34]:
0    2001
1    2008
2    2009
3    2010
Name: year, dtype: int64

If you want to select a row from a DataFrame, you can use the .loc attribute, which allows you to look up rows by index value:

In [35]:
df.loc[2]
Out[35]:
city    Paris
pop     2.234
year     2009
Name: 2, dtype: object
In [36]:
df.loc[2]['city']
Out[36]:
'Paris'
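
.loc will also accept a row label and a column name together, which avoids the chained lookup above (a small sketch):

df.loc[2, 'city']  # 'Paris': row 2, column 'city' in a single step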

Adding new columns

New columns can be added to a DataFrame simply by assigning them by index (as you would for a Python dict) and can be deleted with the del keyword in the same way:

In [37]:
df['continental'] = df.city != 'London'
df
Out[37]:
      city    pop  year  continental
0    Paris  2.148  2001         True
1    Paris  2.211  2008         True
2    Paris  2.234  2009         True
3    Paris  2.244  2010         True
4   London  7.322  2001        False
5   London  7.657  2006        False
6   London  8.174  2011        False
7   London  8.615  2015        False
8     Rome  2.547  2001         True
9     Rome  2.627  2006         True
10    Rome  2.734  2009         True
11    Rome  2.627  2012         True
In [38]:
del df['continental']
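
Since each column is a Series, the broadcasting operations from earlier also work when building new columns. A small sketch (pop_thousands is just a hypothetical column name):

df['pop_thousands'] = df['pop'] * 1000  # population in thousands rather than millions
del df['pop_thousands']  # tidy up again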

Exercise 3

  • Create the DataFrame containing the census data for the three cities.
  • Select the data for the year 2001. Which city had the smallest population that year?
  • Find all the cities which had a population smaller than 2.6 million.
In [39]:
# Answer

print(df[df.year == 2001])
     city    pop  year
0   Paris  2.148  2001
4  London  7.322  2001
8    Rome  2.547  2001
In [40]:
# Answer

print(df[df['pop'] < 2.6])
    city    pop  year
0  Paris  2.148  2001
1  Paris  2.211  2008
2  Paris  2.234  2009
3  Paris  2.244  2010
8   Rome  2.547  2001

Reading from file

One of the most common situations is that you have some data file containing the data you want to read. Perhaps this is data you've produced yourself or maybe it's from a colleague. In an ideal world the file will be perfectly formatted and will be trivial to import into pandas, but since this is so often not the case, pandas provides a number of features to make your life easier.

Full information on reading and writing is available in the pandas manual on IO tools but first it's worth noting the common formats that pandas can work with:

  • Comma separated tables (or tab-separated or space-separated etc.)
  • Excel spreadsheets
  • HDF5 files
  • SQL databases

For this course we will focus on plain-text CSV files as they are perhaps the most common format. Imagine we have a CSV file like the following (you can download this file from city_pop.csv):

In [41]:
!cat city_pop.csv  # Uses IPython's '!' shell escape to print the file
This is an example CSV file
The text at the top here is not part of the data but instead is here
to describe the file. You'll see this quite often in real-world data.
A -1 signifies a missing value.

year;London;Paris;Rome
2001;7.322;2.148;2.547
2006;7.652;;2.627
2008;-1;2.211;
2009;-1;2.234;2.734
2011;8.174;;
2012;-1;2.244;2.627
2015;8.615;;

We can use the pandas function read_csv() to read the file and convert it to a DataFrame. Full documentation for this function can be found in the manual or, as with any Python object, directly in the notebook by putting a ? after the name or calling help() on it:

In [42]:
help(pd.read_csv)
Help on function read_csv in module pandas.io.parsers:

read_csv(filepath_or_buffer, sep=',', delimiter=None, header='infer', names=None, index_col=None, usecols=None, squeeze=False, prefix=None, mangle_dupe_cols=True, dtype=None, engine=None, converters=None, true_values=None, false_values=None, skipinitialspace=False, skiprows=None, nrows=None, na_values=None, keep_default_na=True, na_filter=True, verbose=False, skip_blank_lines=True, parse_dates=False, infer_datetime_format=False, keep_date_col=False, date_parser=None, dayfirst=False, iterator=False, chunksize=None, compression='infer', thousands=None, decimal=b'.', lineterminator=None, quotechar='"', quoting=0, escapechar=None, comment=None, encoding=None, dialect=None, tupleize_cols=False, error_bad_lines=True, warn_bad_lines=True, skipfooter=0, skip_footer=0, doublequote=True, delim_whitespace=False, as_recarray=False, compact_ints=False, use_unsigned=False, low_memory=True, buffer_lines=None, memory_map=False, float_precision=None)
    Read CSV (comma-separated) file into DataFrame
    
    Also supports optionally iterating or breaking of the file
    into chunks.
    
    Additional help can be found in the `online docs for IO Tools
    <http://pandas.pydata.org/pandas-docs/stable/io.html>`_.
    
    Parameters
    ----------
    filepath_or_buffer : str, pathlib.Path, py._path.local.LocalPath or any object with a read() method (such as a file handle or StringIO)
        The string could be a URL. Valid URL schemes include http, ftp, s3, and
        file. For file URLs, a host is expected. For instance, a local file could
        be file ://localhost/path/to/table.csv
    sep : str, default ','
        Delimiter to use. If sep is None, the C engine cannot automatically detect
        the separator, but the Python parsing engine can, meaning the latter will
        be used automatically. In addition, separators longer than 1 character and
        different from ``'\s+'`` will be interpreted as regular expressions and
        will also force the use of the Python parsing engine. Note that regex
        delimiters are prone to ignoring quoted data. Regex example: ``'\r\t'``
    delimiter : str, default ``None``
        Alternative argument name for sep.
    delim_whitespace : boolean, default False
        Specifies whether or not whitespace (e.g. ``' '`` or ``'    '``) will be
        used as the sep. Equivalent to setting ``sep='\s+'``. If this option
        is set to True, nothing should be passed in for the ``delimiter``
        parameter.
    
        .. versionadded:: 0.18.1 support for the Python parser.
    
    header : int or list of ints, default 'infer'
        Row number(s) to use as the column names, and the start of the data.
        Default behavior is as if set to 0 if no ``names`` passed, otherwise
        ``None``. Explicitly pass ``header=0`` to be able to replace existing
        names. The header can be a list of integers that specify row locations for
        a multi-index on the columns e.g. [0,1,3]. Intervening rows that are not
        specified will be skipped (e.g. 2 in this example is skipped). Note that
        this parameter ignores commented lines and empty lines if
        ``skip_blank_lines=True``, so header=0 denotes the first line of data
        rather than the first line of the file.
    names : array-like, default None
        List of column names to use. If file contains no header row, then you
        should explicitly pass header=None. Duplicates in this list are not
        allowed unless mangle_dupe_cols=True, which is the default.
    index_col : int or sequence or False, default None
        Column to use as the row labels of the DataFrame. If a sequence is given, a
        MultiIndex is used. If you have a malformed file with delimiters at the end
        of each line, you might consider index_col=False to force pandas to _not_
        use the first column as the index (row names)
    usecols : array-like or callable, default None
        Return a subset of the columns. If array-like, all elements must either
        be positional (i.e. integer indices into the document columns) or strings
        that correspond to column names provided either by the user in `names` or
        inferred from the document header row(s). For example, a valid array-like
        `usecols` parameter would be [0, 1, 2] or ['foo', 'bar', 'baz'].
    
        If callable, the callable function will be evaluated against the column
        names, returning names where the callable function evaluates to True. An
        example of a valid callable argument would be ``lambda x: x.upper() in
        ['AAA', 'BBB', 'DDD']``. Using this parameter results in much faster
        parsing time and lower memory usage.
    as_recarray : boolean, default False
        DEPRECATED: this argument will be removed in a future version. Please call
        `pd.read_csv(...).to_records()` instead.
    
        Return a NumPy recarray instead of a DataFrame after parsing the data.
        If set to True, this option takes precedence over the `squeeze` parameter.
        In addition, as row indices are not available in such a format, the
        `index_col` parameter will be ignored.
    squeeze : boolean, default False
        If the parsed data only contains one column then return a Series
    prefix : str, default None
        Prefix to add to column numbers when no header, e.g. 'X' for X0, X1, ...
    mangle_dupe_cols : boolean, default True
        Duplicate columns will be specified as 'X.0'...'X.N', rather than
        'X'...'X'. Passing in False will cause data to be overwritten if there
        are duplicate names in the columns.
    dtype : Type name or dict of column -> type, default None
        Data type for data or columns. E.g. {'a': np.float64, 'b': np.int32}
        Use `str` or `object` to preserve and not interpret dtype.
        If converters are specified, they will be applied INSTEAD
        of dtype conversion.
    engine : {'c', 'python'}, optional
        Parser engine to use. The C engine is faster while the python engine is
        currently more feature-complete.
    converters : dict, default None
        Dict of functions for converting values in certain columns. Keys can either
        be integers or column labels
    true_values : list, default None
        Values to consider as True
    false_values : list, default None
        Values to consider as False
    skipinitialspace : boolean, default False
        Skip spaces after delimiter.
    skiprows : list-like or integer or callable, default None
        Line numbers to skip (0-indexed) or number of lines to skip (int)
        at the start of the file.
    
        If callable, the callable function will be evaluated against the row
        indices, returning True if the row should be skipped and False otherwise.
        An example of a valid callable argument would be ``lambda x: x in [0, 2]``.
    skipfooter : int, default 0
        Number of lines at bottom of file to skip (Unsupported with engine='c')
    skip_footer : int, default 0
        DEPRECATED: use the `skipfooter` parameter instead, as they are identical
    nrows : int, default None
        Number of rows of file to read. Useful for reading pieces of large files
    na_values : scalar, str, list-like, or dict, default None
        Additional strings to recognize as NA/NaN. If dict passed, specific
        per-column NA values.  By default the following values are interpreted as
        NaN: '', '#N/A', '#N/A N/A', '#NA', '-1.#IND', '-1.#QNAN', '-NaN', '-nan',
        '1.#IND', '1.#QNAN', 'N/A', 'NA', 'NULL', 'NaN', 'nan'`.
    keep_default_na : bool, default True
        If na_values are specified and keep_default_na is False the default NaN
        values are overridden, otherwise they're appended to.
    na_filter : boolean, default True
        Detect missing value markers (empty strings and the value of na_values). In
        data without any NAs, passing na_filter=False can improve the performance
        of reading a large file
    verbose : boolean, default False
        Indicate number of NA values placed in non-numeric columns
    skip_blank_lines : boolean, default True
        If True, skip over blank lines rather than interpreting as NaN values
    parse_dates : boolean or list of ints or names or list of lists or dict, default False
    
        * boolean. If True -> try parsing the index.
        * list of ints or names. e.g. If [1, 2, 3] -> try parsing columns 1, 2, 3
          each as a separate date column.
        * list of lists. e.g.  If [[1, 3]] -> combine columns 1 and 3 and parse as
          a single date column.
        * dict, e.g. {'foo' : [1, 3]} -> parse columns 1, 3 as date and call result
          'foo'
    
        If a column or index contains an unparseable date, the entire column or
        index will be returned unaltered as an object data type. For non-standard
        datetime parsing, use ``pd.to_datetime`` after ``pd.read_csv``
    
        Note: A fast-path exists for iso8601-formatted dates.
    infer_datetime_format : boolean, default False
        If True and parse_dates is enabled, pandas will attempt to infer the format
        of the datetime strings in the columns, and if it can be inferred, switch
        to a faster method of parsing them. In some cases this can increase the
        parsing speed by 5-10x.
    keep_date_col : boolean, default False
        If True and parse_dates specifies combining multiple columns then
        keep the original columns.
    date_parser : function, default None
        Function to use for converting a sequence of string columns to an array of
        datetime instances. The default uses ``dateutil.parser.parser`` to do the
        conversion. Pandas will try to call date_parser in three different ways,
        advancing to the next if an exception occurs: 1) Pass one or more arrays
        (as defined by parse_dates) as arguments; 2) concatenate (row-wise) the
        string values from the columns defined by parse_dates into a single array
        and pass that; and 3) call date_parser once for each row using one or more
        strings (corresponding to the columns defined by parse_dates) as arguments.
    dayfirst : boolean, default False
        DD/MM format dates, international and European format
    iterator : boolean, default False
        Return TextFileReader object for iteration or getting chunks with
        ``get_chunk()``.
    chunksize : int, default None
        Return TextFileReader object for iteration.
        See the `IO Tools docs
        <http://pandas.pydata.org/pandas-docs/stable/io.html#io-chunking>`_
        for more information on ``iterator`` and ``chunksize``.
    compression : {'infer', 'gzip', 'bz2', 'zip', 'xz', None}, default 'infer'
        For on-the-fly decompression of on-disk data. If 'infer', then use gzip,
        bz2, zip or xz if filepath_or_buffer is a string ending in '.gz', '.bz2',
        '.zip', or 'xz', respectively, and no decompression otherwise. If using
        'zip', the ZIP file must contain only one data file to be read in.
        Set to None for no decompression.
    
        .. versionadded:: 0.18.1 support for 'zip' and 'xz' compression.
    
    thousands : str, default None
        Thousands separator
    decimal : str, default '.'
        Character to recognize as decimal point (e.g. use ',' for European data).
    float_precision : string, default None
        Specifies which converter the C engine should use for floating-point
        values. The options are `None` for the ordinary converter,
        `high` for the high-precision converter, and `round_trip` for the
        round-trip converter.
    lineterminator : str (length 1), default None
        Character to break file into lines. Only valid with C parser.
    quotechar : str (length 1), optional
        The character used to denote the start and end of a quoted item. Quoted
        items can include the delimiter and it will be ignored.
    quoting : int or csv.QUOTE_* instance, default 0
        Control field quoting behavior per ``csv.QUOTE_*`` constants. Use one of
        QUOTE_MINIMAL (0), QUOTE_ALL (1), QUOTE_NONNUMERIC (2) or QUOTE_NONE (3).
    doublequote : boolean, default ``True``
       When quotechar is specified and quoting is not ``QUOTE_NONE``, indicate
       whether or not to interpret two consecutive quotechar elements INSIDE a
       field as a single ``quotechar`` element.
    escapechar : str (length 1), default None
        One-character string used to escape delimiter when quoting is QUOTE_NONE.
    comment : str, default None
        Indicates remainder of line should not be parsed. If found at the beginning
        of a line, the line will be ignored altogether. This parameter must be a
        single character. Like empty lines (as long as ``skip_blank_lines=True``),
        fully commented lines are ignored by the parameter `header` but not by
        `skiprows`. For example, if comment='#', parsing '#empty\na,b,c\n1,2,3'
        with `header=0` will result in 'a,b,c' being
        treated as the header.
    encoding : str, default None
        Encoding to use for UTF when reading/writing (ex. 'utf-8'). `List of Python
        standard encodings
        <https://docs.python.org/3/library/codecs.html#standard-encodings>`_
    dialect : str or csv.Dialect instance, default None
        If provided, this parameter will override values (default or not) for the
        following parameters: `delimiter`, `doublequote`, `escapechar`,
        `skipinitialspace`, `quotechar`, and `quoting`. If it is necessary to
        override values, a ParserWarning will be issued. See csv.Dialect
        documentation for more details.
    tupleize_cols : boolean, default False
        Leave a list of tuples on columns as is (default is to convert to
        a Multi Index on the columns)
    error_bad_lines : boolean, default True
        Lines with too many fields (e.g. a csv line with too many commas) will by
        default cause an exception to be raised, and no DataFrame will be returned.
        If False, then these "bad lines" will dropped from the DataFrame that is
        returned.
    warn_bad_lines : boolean, default True
        If error_bad_lines is False, and warn_bad_lines is True, a warning for each
        "bad line" will be output.
    low_memory : boolean, default True
        Internally process the file in chunks, resulting in lower memory use
        while parsing, but possibly mixed type inference.  To ensure no mixed
        types either set False, or specify the type with the `dtype` parameter.
        Note that the entire file is read into a single DataFrame regardless,
        use the `chunksize` or `iterator` parameter to return the data in chunks.
        (Only valid with C parser)
    buffer_lines : int, default None
        DEPRECATED: this argument will be removed in a future version because its
        value is not respected by the parser
    compact_ints : boolean, default False
        DEPRECATED: this argument will be removed in a future version
    
        If compact_ints is True, then for any column that is of integer dtype,
        the parser will attempt to cast it as the smallest integer dtype possible,
        either signed or unsigned depending on the specification from the
        `use_unsigned` parameter.
    use_unsigned : boolean, default False
        DEPRECATED: this argument will be removed in a future version
    
        If integer columns are being compacted (i.e. `compact_ints=True`), specify
        whether the column should be compacted to the smallest signed or unsigned
        integer dtype.
    memory_map : boolean, default False
        If a filepath is provided for `filepath_or_buffer`, map the file object
        directly onto memory and access the data directly from there. Using this
        option can improve performance because there is no longer any I/O overhead.
    
    Returns
    -------
    result : DataFrame or TextParser

In [43]:
pd.read_csv('city_pop.csv')
Out[43]:
                          This is an example CSV file
0   The text at the top here is not part of the da...
1   to describe the file. You'll see this quite of...
2                     A -1 signifies a missing value.
3                              year;London;Paris;Rome
4                              2001;7.322;2.148;2.547
5                                   2006;7.652;;2.627
6                                      2008;-1;2.211;
7                                 2009;-1;2.234;2.734
8                                        2011;8.174;;
9                                 2012;-1;2.244;2.627
10                                       2015;8.615;;

We can see that by default it's done a fairly bad job of parsing the file (this is mostly because I've constructed the city_pop.csv file to be as obtuse as possible). It's making a lot of assumptions about the structure of the file but in general it's taking quite a naïve approach.

The first thing we notice is that it's treating the text at the top of the file as though it were data. Checking the documentation, we see that the simplest way to solve this is to use the skiprows argument, to which we give the number of rows to skip:

In [44]:
pd.read_csv(
    'city_pop.csv',
    skiprows=5,
)
Out[44]:
   year;London;Paris;Rome
0  2001;7.322;2.148;2.547
1       2006;7.652;;2.627
2          2008;-1;2.211;
3     2009;-1;2.234;2.734
4            2011;8.174;;
5     2012;-1;2.244;2.627
6            2015;8.615;;

The next most obvious problem is that it is not separating the columns at all. This is controlled by the sep argument, which is set to ',' by default (hence comma-separated values). We can simply set it to the appropriate semicolon:

In [45]:
pd.read_csv(
    'city_pop.csv',
    skiprows=5,
    sep=';'
)
Out[45]:
year London Paris Rome
0 2001 7.322 2.148 2.547
1 2006 7.652 NaN 2.627
2 2008 -1.000 2.211 NaN
3 2009 -1.000 2.234 2.734
4 2011 8.174 NaN NaN
5 2012 -1.000 2.244 2.627
6 2015 8.615 NaN NaN

Reading the descriptive header of our data file, we see that a value of -1 signifies a missing reading, so we should mark those as missing too. This can be done after the fact but it is simplest to do it at import time using the na_values argument:

In [46]:
pd.read_csv(
    'city_pop.csv',
    skiprows=5,
    sep=';',
    na_values='-1'
)
Out[46]:
   year  London  Paris   Rome
0  2001   7.322  2.148  2.547
1  2006   7.652    NaN  2.627
2  2008     NaN  2.211    NaN
3  2009     NaN  2.234  2.734
4  2011   8.174    NaN    NaN
5  2012     NaN  2.244  2.627
6  2015   8.615    NaN    NaN

The last thing we want to do is use the year column as the index for the DataFrame. This can be done by passing the name of the column to the index_col argument:

In [47]:
df3 = pd.read_csv(
    'city_pop.csv',
    skiprows=5,
    sep=';',
    na_values='-1',
    index_col='year'
)
df3
Out[47]:
      London  Paris   Rome
year
2001   7.322  2.148  2.547
2006   7.652    NaN  2.627
2008     NaN  2.211    NaN
2009     NaN  2.234  2.734
2011   8.174    NaN    NaN
2012     NaN  2.244  2.627
2015   8.615    NaN    NaN
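
With year as the index, lookups now work by year, and each city is a column (a quick sketch):

df3.loc[2001]   # the 2001 populations, indexed by city name
df3['London']   # London's population over time as a Series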

Exercise 4

  • Alongside city_pop.csv there is another file called cetml1659on.dat (also available from here). This contains some historical weather data for a location in the UK. Import that file as a Pandas DataFrame using read_csv(), making sure that you cover all the NaN values.
  • How many years had a negative average temperature in January?
  • What was the average temperature in June over the years in the data set? Tip: look in the documentation for which method to call.

We will come back to this data set in a later stage.

In [48]:
# Answer

weather = pd.read_csv(
    'cetml1659on.dat',  # file name
    skiprows=6,  # skip header
    sep='\s+',  # whitespace separated
    na_values=['-99.9', '-99.99'],  # NaNs
)
weather.head()
Out[48]:
       JAN   FEB   MAR   APR   MAY   JUN   JUL   AUG   SEP   OCT   NOV   DEC  YEAR
1659   3.0   4.0   6.0   7.0  11.0  13.0  16.0  16.0  13.0  10.0   5.0   2.0  8.87
1660   0.0   4.0   6.0   9.0  11.0  14.0  15.0  16.0  13.0  10.0   6.0   5.0  9.10
1661   5.0   5.0   6.0   8.0  11.0  14.0  15.0  15.0  13.0  11.0   8.0   6.0  9.78
1662   5.0   6.0   6.0   8.0  11.0  15.0  15.0  15.0  13.0  11.0   6.0   3.0  9.52
1663   1.0   1.0   5.0   7.0  10.0  14.0  15.0  15.0  13.0  10.0   7.0   5.0  8.63
In [49]:
# Answer

len(weather[weather.JAN < 0])
Out[49]:
20
In [50]:
# Answer

weather.JUN.mean()
Out[50]:
14.325977653631282