#!/usr/bin/env python
# coding: utf-8
# In[1]:
from IPython.core.display import display, HTML
# # The Euro on Forex
# ## Exploring Twenty Years of Volatility
# ### History
# The currency exchange market has existed since coins were first used. At the end of the 1800s the development of the gold standard fixed currency values to a certain amount of gold, allowing for an internationally stable monetary system (1). Eventually the amount of gold was not enough to keep up with the demand for currency. After World War II the Bretton Woods Accord set the US dollar as the global fixed currency. This rate was adjustable, but still determined by central banks and state actors (2). The US dollar was pegged to gold at the time, but during the 1970s the same problem (the amount of gold versus the currency in circulation) occurred with the dollar. In August of 1971 President Richard Nixon unilaterally canceled direct international convertibility of the US dollar to gold, inadvertently beginning a regime of free-floating currencies (3).
#
# ### Today
# The modern foreign exchange market (Forex, FX, or currency market) is a direct broker-to-dealer global marketplace that determines the market value and exchange rate of currencies all over the world. It is open 24 hours a day, 5 days a week. Currencies are always traded in pairs, and the value expressed is of one currency relative to another. For example, at the end of the day on 11/19/21 the Euro to US dollar rate was 1.13, meaning $1.13 equaled €1. All of the twenty most traded currencies are free-floating: their value is determined by political conditions, economic factors, and market psychology.
#
# Forex is by far the largest financial market in the world, with a daily volume of over $6.6 trillion in 2019 (4). The Euro to US dollar (EURUSD) is the most traded currency pair, accounting for about 30% of Forex turnover (5).
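# Since a pair quote is just a ratio, converting an amount is a single multiplication. A minimal sketch using the 11/19/21 closing rate quoted above (the amount converted is made up for illustration):

```python
eurusd = 1.13  # dollars per euro, the 11/19/21 closing rate quoted above
euros = 250.0  # hypothetical amount to convert
dollars = euros * eurusd
print(f"€{euros:,.2f} = ${dollars:,.2f}")  # €250.00 = $282.50
```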
#
# ### Methodology
# This study examines the volatility of the value of the Euro to eight other currencies. It uses the 2019 Triennial Central Bank Survey by the Bank for International Settlements to determine four currencies that represent large portions of the market and four currencies that represent small portions of the market. The highly traded currencies are the US dollar, Japanese yen, UK pound, and Australian dollar. The Israeli shekel, Philippine peso, Malaysian ringgit, and Romanian leu comprise the group of least traded currencies.
#
# The data set used is based on daily closing values provided by the European Central Bank and compiled by Daria Chemkaeva. It is updated weekly, the version used for this study was downloaded from Kaggle in November 2021.
#
# ### Summary
# Text with conclusion.
#
#
# ## Initial Look
# The first steps are to import the basic libraries, open the file, and take an initial look at the dataset.
#
#
# In[2]:
# import libraries and set display options
import pandas as pd
import pprint
pd.options.display.float_format = '{:20,.4f}'.format
pd.set_option('display.max_rows', None)
pd.set_option('display.max_columns', None)
pd.set_option('display.width', 3000)
pd.set_option('display.colheader_justify', 'center')
pd.set_option('display.precision', 3)
# open file
x_rates = pd.read_csv("euro-daily-hist_1999_2020.csv")
# head, tail, and basic info
display(x_rates.head())
display(x_rates.tail())
x_rates.info()
# ## Initial Look Continued
# The dataset covers the period from January 1999 to January 2021. It has 5699 rows and 41 columns. Some of these columns have fewer than half as many entries as others. In the original data set, cells with missing entries contain a hyphen (-), which breaks many of the functions used below. For the time being the hyphens will be converted to `NaN`, which preserves the rest of the row.
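# A toy example of why the hyphens matter: a column containing "-" cannot be cast to float, while replacing the hyphens with `NaN` allows the cast and keeps every row (the rate values below are made up):

```python
import numpy as np
import pandas as pd

rates = pd.Series(["1.1789", "-", "1.1790"])  # made-up rates; "-" marks a missing day
# rates.astype(float) would raise a ValueError because of the hyphen
cleaned = rates.replace("-", np.nan).astype(float)
print(cleaned)  # three rows kept, one of them NaN
```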
# ## Data Cleaning
# The first steps will be:
# * Clean and modify column names
# * Convert to `datetime` data type
# * Sort the values with oldest first and reset the index
# * Clean currency columns
# * Convert exchange rates to floats
# In[3]:
# strip brackets and trailing space from country names
# replace remaining space with underscore
# lower case all column names
x_rates.columns = x_rates.columns.str.replace("[", "", regex=False).str.replace("]", "", regex=False)
x_rates.columns = x_rates.columns.str.rstrip()
x_rates.columns = x_rates.columns.str.replace(" ","_")
x_rates.columns = x_rates.columns.str.lower()
# rename columns
x_rates.rename(columns={"period\\unit:":"date",
                        "chinese_yuan_renminbi":"chinese_yuan",
                        "uk_pound_sterling":"uk_pound"}, inplace=True)
# convert datetime
x_rates["date"] = pd.to_datetime(x_rates["date"])
# resort and reindex
x_rates.sort_values("date", inplace=True)
x_rates.reset_index(drop=True, inplace=True)
# convert hyphens in currency columns to NaN
import numpy as np
x_rates = x_rates.replace("-", np.nan)
# convert exchange rate values to float
x_rates[x_rates.columns[1:]] = x_rates[x_rates.columns[1:]].astype(float)
# x_rates head
display(x_rates.head())
# ## Dollar to Euro
# The change over time in the Euro exchange rate can be visualized in a line graph. The US dollar is the first currency to be examined.
# In[4]:
# create a us dollar to euro dataframe and add 30 day rolling mean column
us_dollar = x_rates[["date","us_dollar"]].copy()
us_dollar["rolling_mean_30"] = us_dollar["us_dollar"].rolling(30).mean()
# line graph for dollar to euro exchange rate
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
get_ipython().run_line_magic('matplotlib', 'inline')
fig, ax = plt.subplots(figsize=(10,3.75))
ax.plot(us_dollar["date"], us_dollar["rolling_mean_30"])
ax.set(xlabel="Year",
       ylabel="US dollar",
       title="US Dollar to Euro Exchange Rates")
ax.set_xticks(pd.to_datetime(["2000-01-04", "2005-01-04", "2010-01-04",
                              "2015-01-04", "2020-01-04"]))
ax.set_xticklabels([2000,2005,2010,2015,2020])
plt.show()
# ## Dollar to Euro Continued
# The line graph shows that the exchange rate between the Euro and US dollar is quite variable. There are also many gaps in the graph, which represent days where information is missing from the data frame. This is the most accurate representation of the data, but it is not easy to read. A quick visual check can confirm that the gaps line up with the missing dates, and then the original data frame can be compared with one where the missing dates are removed.
# In[5]:
# create nan group and get dates
nan_dollar_group = us_dollar[us_dollar["us_dollar"].isnull()]
nan_days = nan_dollar_group["date"]
# line graph showing dollar to euro showing NaN values with and without highlight
fig, (ax1, ax2) = plt.subplots(nrows=2, ncols=1, figsize=(10,7.5))
ax1.plot(us_dollar["date"], us_dollar["rolling_mean_30"])
ax1.set(title="dataset w/ NaN values")
ax2.plot(us_dollar["date"], us_dollar["rolling_mean_30"])
ax2.set(title="dataset w/ corresponding dates in light blue")
for ax in ax1, ax2:
    ax.set_xticks(pd.to_datetime(["2000-01-04", "2005-01-04", "2010-01-04",
                                  "2015-01-04", "2020-01-04"]))
    ax.set_xticklabels([2000,2005,2010,2015,2020])
for xc in nan_days:
    plt.axvline(x=xc, color="lightcyan")
fig.suptitle("US dollar to Euro exchange rates", fontsize=20)
fig.tight_layout(pad=3.0)
plt.show()
# create a dollar to euro dataframe with NaN dates removed
US_dollar_nonull = x_rates[x_rates["us_dollar"].notna()].copy()
US_dollar_nonull["rolling_mean_30"] = US_dollar_nonull["us_dollar"].rolling(30).mean()
# line graph showing 30 day rolling mean with and without NaN rows
fig, (ax1, ax2) = plt.subplots(nrows=2, ncols=1, figsize=(10,7.5))
ax1.plot(us_dollar["date"], us_dollar["rolling_mean_30"])
ax1.set(title="NaN rows included")
ax2.plot(US_dollar_nonull["date"], US_dollar_nonull["rolling_mean_30"])
ax2.set(title="NaN rows removed")
for ax in ax1, ax2:
    ax.set_xticks(pd.to_datetime(["2000-01-04", "2005-01-04", "2010-01-04",
                                  "2015-01-04", "2020-01-04"]))
    ax.set_xticklabels([2000,2005,2010,2015,2020])
fig.suptitle("US dollar to Euro exchange rates", fontsize=20)
fig.tight_layout(pad=3.0)
plt.show()
# ## Dollar to Euro Conclusion
# The third graph shows vertical blue stripes for all dates with missing information. These correspond perfectly with the gaps in the original chart. The final chart, with those rows removed, appears to be a good match and is easier to read. This also demonstrates how the rolling mean handles NaN values within a data frame. In either case the shape of the line is identical where the information exists, therefore both are similarly accurate representations of the available data. All further currency data frames will be handled the same way.
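# The NaN behavior described above can be checked on a toy series: any window that touches a NaN averages to NaN, leaving a gap, while dropping the NaN rows first closes the gaps (a minimal sketch with made-up numbers):

```python
import numpy as np
import pandas as pd

s = pd.Series([1.0, 2.0, np.nan, 4.0, 5.0])
print(s.rolling(2).mean())           # windows touching the NaN produce gaps
print(s.dropna().rolling(2).mean())  # removing the NaN row first closes them
```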
#
# ## Euro & Traded Pairs
# ### Euro to Highly Traded Currencies
# The line graphs represent the exchange rates for the Euro to the US dollar, Japanese yen, UK pound, and Australian dollar. These four currencies, in addition to the Euro, are the top five currencies in the Forex market. All five together comprise over 75% of daily trades (4).
# ### Euro to Least Traded Currencies
# The line graphs represent the exchange rates for the Euro to the Israeli shekel, Philippine peso, Malaysian ringgit, and Romanian leu. These four currencies have low trade volumes; together they comprise less than 0.4% of daily trades (4).
# In[6]:
# build the 30 day rolling mean for every currency into one data frame
mean_rates = x_rates.drop(columns="date").rolling(30).mean()
mean_rates = mean_rates.add_suffix("_rolling_30")
new_frame = pd.concat([x_rates, mean_rates], axis=1)
# interleave each currency column with its rolling mean column
new_frame = new_frame[["date"] + [item for items in zip(x_rates.columns[1:], mean_rates.columns) for item in items]]
# display(new_frame.head())
# function to build data frame for Euro to other half of trading pair
# this one uses new_frame, which has the 30 day rolling mean incorporated, and then removes the NaN dates
framer_list = []
def framer(col):
    framer_list.append(col)
    df = new_frame[["date", col, col + "_rolling_30"]].copy()
    df = df[df[col + "_rolling_30"].notna()]
    return df
australian_dollar = framer("australian_dollar")
display(australian_dollar.head())
print(framer_list)
plt.plot(australian_dollar["date"], australian_dollar["australian_dollar_rolling_30"])
plt.show()
# In[7]:
# function to build data frame for Euro to other half of trading pair
# this one uses x_rates, removes the NaN values, and then adds a rolling 30 mean
xchange_list = []
def xchange(col):
    xchange_list.append(col)
    df = x_rates[["date", col]].copy()
    df = df[df[col].notna()]
    df["rolling_mean_30"] = df[col].rolling(30).mean()
    return df
# canadian_dollar = xchange("canadian_dollar")
# display(canadian_dollar.head())
# print(xchange_list)
# plt.plot(canadian_dollar["date"], canadian_dollar["rolling_mean_30"])
# plt.show()
# In[8]:
# calling xchange function on four highly traded currencies
us_dollar = xchange("us_dollar")
japanese_yen = xchange("japanese_yen")
uk_pound = xchange("uk_pound")
australian_dollar = xchange("australian_dollar")
# calling xchange function on four least traded currencies
israeli_shekel = xchange("israeli_shekel")
philippine_peso = xchange("philippine_peso")
malaysian_ringgit = xchange("malaysian_ringgit")
romanian_leu = xchange("romanian_leu")
# collect the eight currency data frames for plotting
c_list = [us_dollar, japanese_yen, uk_pound, australian_dollar,
israeli_shekel, philippine_peso, malaysian_ringgit, romanian_leu]
# line graph showing Euro to each high frequency trading pairs
fig, ((ax1, ax2), (ax3, ax4)) = plt.subplots(nrows=2, ncols=2, figsize=(12,8))
fig.tight_layout(pad=3)
years = mdates.YearLocator() # every year
years_fmt = mdates.DateFormatter('%Y')
for ax in ax1, ax2, ax3, ax4:
    ax.xaxis.set_major_formatter(years_fmt)
    ax.xaxis.set_major_locator(mdates.MonthLocator(interval=36))  # major ticks every 3 years
    ax.set(yticks=[],
           xlabel="Year")
for ax, currency in zip((ax1, ax2, ax3, ax4), c_list[:4]):
    ax.plot(currency["date"], currency["rolling_mean_30"])
ax1.set(ylabel="US dollar")
ax2.set(ylabel="Japanese yen")
ax3.set(ylabel="UK pound")
ax4.set(ylabel="Australian dollar")
fig.suptitle("Euro to High Frequency Pairs", fontsize=20)
plt.show()
# line graph showing Euro to each low frequency trading pairs
fig, ((ax1, ax2), (ax3, ax4)) = plt.subplots(nrows=2, ncols=2, figsize=(12,8))
fig.tight_layout(pad=3)
for ax in ax1, ax2, ax3, ax4:
    ax.xaxis.set_major_formatter(years_fmt)
    ax.xaxis.set_major_locator(mdates.MonthLocator(interval=36))  # major ticks every 3 years
    ax.set(yticks=[],
           xlabel="Year")
for ax, currency in zip((ax1, ax2, ax3, ax4), c_list[4:]):
    ax.plot(currency["date"], currency["rolling_mean_30"])
ax1.set(ylabel="Israeli shekel")
ax2.set(ylabel="Philippine peso")
ax3.set(ylabel="Malaysian ringgit")
ax4.set(ylabel="Romanian leu")
fig.suptitle("Euro to Low Frequency Pairs", fontsize=20)
plt.show()
# ## Euro to Traded Pairs Conclusion
#
# ### High Frequency Pairs
# * The US dollar, Japanese yen, and UK pound all show a tight ratio with the Euro in the early 2000s.
# * Both the Japanese yen and Australian dollar show similar trade ratios in late 2011.
# ### Low Frequency Pairs
# * The Israeli shekel appears to have increased in value, and the Romanian leu has clearly lost value.
# * The shekel and leu appear to be fairly stable, or have low volatility.
# * The Israeli shekel and Malaysian ringgit also show the tight ratio of the early 2000s. This is seen to a lesser extent in the Philippine peso.
#
# ## The Early Struggle
# An overlay of all currencies focused on the first decade of the Euro shows its initial struggle and eventual success.
# In[9]:
x_values1 = us_dollar["date"]
y_values1 = us_dollar["rolling_mean_30"]
x_values2 = japanese_yen["date"]
y_values2 = japanese_yen["rolling_mean_30"]
x_values3 = uk_pound["date"]
y_values3 = uk_pound["rolling_mean_30"]
fig = plt.figure()
ax1 = fig.add_subplot(111, label="1")
ax2 = fig.add_subplot(111, label="2", frame_on=False)
ax3 = fig.add_subplot(111, label="3", frame_on=False)
for ax in ax1, ax2, ax3:
    ax.xaxis.set_major_formatter(years_fmt)
    ax.xaxis.set_major_locator(mdates.MonthLocator(interval=36))  # major ticks every 3 years
    ax.set(yticks=[],
           xlabel="Year")
ax1.plot(x_values1, y_values1, color="blue")
ax2.plot(x_values2, y_values2, color="green")
ax3.plot(x_values3, y_values3, color="yellow")
plt.show()
# In[10]:
# [us_dollar, japanese_yen, uk_pound, australian_dollar,
# israeli_shekel, philippine_peso, malaysian_ringgit, romanian_leu]
for country in c_list:
    plt.plot(country["date"], country["rolling_mean_30"])
# plt.legend()
plt.show()
# In[11]:
first_decade = x_rates[x_rates["date"].dt.year < 2009]
display(first_decade.head())
display(first_decade.tail())
# ## Late 2011
# An overlay of the Japanese yen and Australian dollar will show if the changes in 2011 are concurrent.
# In[12]:
# line graph showing Euro to yen and aus dollar
fig, ax = plt.subplots(figsize=(9,5.5))
ax.plot(japanese_yen["date"], japanese_yen["rolling_mean_30"], color="green")
ax.set_ylabel('Japanese yen', color="green")
ax.set(yticks=[])
ax2 = ax.twinx()
ax2.plot(australian_dollar["date"], australian_dollar["rolling_mean_30"], color="blue")
ax2.set_ylabel('Australian dollar', color="blue")
ax2.set(yticks=[])
ax.set(xlabel='Year')
ax.xaxis.set_major_locator(years) # format the x ticks
ax.xaxis.set_major_formatter(years_fmt)
fmt_three_years = mdates.MonthLocator(interval = 36) # Major ticks every 3 years
ax.xaxis.set_major_locator(fmt_three_years)
fig.suptitle("Euro to Two Traded Currencies", fontsize=20)
plt.show()
# In[14]:
fig, ax = plt.subplots()
ax.plot(new_frame["date"],new_frame["chinese_yuan_rolling_30"])
plt.show()
# In[16]:
late_2000s = x_rates[x_rates["date"].dt.year.between(2009,2013)]
late_2000s.head()
# ## Volatility
# Determining volatility of the eight currencies for the past two decades illustrates part of the story of the twenty-first century.
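# The measure used in the next cell is the annualized standard deviation of log daily returns, with 253 trading days per year (matching the constant in the code):
#
# $$ r_t = \ln\frac{p_t}{p_{t-1}}, \qquad \sigma_{\text{annual}} = \operatorname{std}(r_t)\,\sqrt{253} $$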
# In[17]:
# function to get the log of the daily rate change
def logarizer(dataset):
    dataset["log_rate"] = np.log(dataset.iloc[:,1] / dataset.iloc[:,1].shift())  # log of the daily return
    return dataset["log_rate"]
# function to get overall volatility
volatility_dict = {}
def volatizer(dataset):
    currency_name = dataset.columns[1]
    volatility = dataset["log_rate"].std() * 253 ** .5  # std of log returns * sqrt(253 trading days per year)
    volatility_dict[currency_name] = round(volatility, 4) * 100
# calling logarizer on all eight currencies
for currency in c_list:
    logarizer(currency)
# calling volatizer on all eight currencies
for currency in c_list:
    volatizer(currency)
# histograms of the log daily returns for each currency
plt.figure(figsize=(10,14))
for i, currency in zip(range(1,9), c_list):
    plt.subplot(4,2,i)
    currency["log_rate"].plot.hist(bins=50)
    plt.title(currency.columns[1])  # the currency's column name
plt.show()
# In[18]:
volatility_dict
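# To rank the pairs, the dictionary can be sorted by value. A sketch with made-up volatility numbers standing in for the computed ones:

```python
# made-up values standing in for the real volatility_dict computed above
sample = {"us_dollar": 8.1, "japanese_yen": 10.2, "romanian_leu": 4.9}
for name, vol in sorted(sample.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name:<16}{vol:>6.1f}%")
```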
# In[19]:
# import geopandas and other libraries
# import geopandas as gpd
# import seaborn as sns
# import os
# map_data = gpd.read_file("World_Countries__Generalized_.shp")
# map_data["Volatility"] = None
# map_data.head()
# map_data.shape
# map_data.plot(figsize=(15,13))
# map_data[map_data["COUNTRY"] == "United States"].plot(color = "green")
# ## Volatility Continued
# stuff
# ## Mapping
# stuff
# ## Citations
# 1. Eichengreen, Barry (2019). Globalizing Capital: A History of the International Monetary System (3rd ed.). Princeton University Press. pp. 7, 79. ISBN 978-0-691-19390-8. JSTOR j.ctvd58rxg.
# 2. "Bretton Woods system." Wikipedia. https://en.wikipedia.org/wiki/Bretton_Woods_system
# 3. Lowenstein, Roger (August 5, 2011). "The Nixon Shock". www.bloomberg.com. Bloomberg. Retrieved March 25, 2019.
# 4. Bank for International Settlements. "Foreign Exchange Turnover in April 2019." Accessed Oct. 3, 2020.
# 5. "The Most Traded Currency Pairs." FXSSI. https://fxssi.com/the-most-traded-currency-pairs