!pip install pydbtools
!pip install numpy==1.24.3 --user --force-reinstall
!pip install "pybind11>=2.12" -force-reinstall
Requirement already satisfied: pydbtools in /opt/conda/lib/python3.9/site-packages (5.6.4)
Requirement already satisfied: awswrangler>=2.12.0 in /opt/conda/lib/python3.9/site-packages (from pydbtools) (3.11.0)
Requirement already satisfied: arrow-pd-parser>=1.3.9 in /opt/conda/lib/python3.9/site-packages (from pydbtools) (2.2.0)
... (further transitive "Requirement already satisfied" lines omitted) ...
Collecting numpy<2.1.0,>=1.26
  Using cached numpy-2.0.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (19.5 MB)
Installing collected packages: numpy
  Attempting uninstall: numpy
    Found existing installation: numpy 1.24.3
    Uninstalling numpy-1.24.3:
      Successfully uninstalled numpy-1.24.3
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
scipy 1.7.1 requires numpy<1.23.0,>=1.16.5, but you have numpy 2.0.2 which is incompatible.
numba 0.54.0 requires numpy<1.21,>=1.17, but you have numpy 2.0.2 which is incompatible.
Successfully installed numpy-2.0.2
Collecting numpy==1.24.3
  Using cached numpy-1.24.3-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (17.3 MB)
Installing collected packages: numpy
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
scipy 1.7.1 requires numpy<1.23.0,>=1.16.5, but you have numpy 1.24.3 which is incompatible.
numba 0.54.0 requires numpy<1.21,>=1.17, but you have numpy 1.24.3 which is incompatible.
awswrangler 3.11.0 requires numpy<2.1.0,>=1.26; python_version < "3.10", but you have numpy 1.24.3 which is incompatible.
Successfully installed numpy-1.24.3
Requirement already satisfied: pybind11>=2.12 in /opt/conda/lib/python3.9/site-packages (2.13.6)
Requirement already satisfied: openpyxl in /opt/conda/lib/python3.9/site-packages (3.1.5)
Requirement already satisfied: et-xmlfile in /opt/conda/lib/python3.9/site-packages (from openpyxl) (2.0.0)
import pydbtools as pydb
import pandas as pd
# Query to get the dataset from Athena
q_query = "SELECT * FROM courts_caseload_rpt_dev_dbt.rpt_crown_court_caseload_quarterly_summary"
dfq = pydb.read_sql_query(q_query)
# Get the first two rows of the dataset to explore
dfq.head(2)
 | id | year | quarter | all_cases_receipts | all_cases_disposals | all_cases_opens | total_all_trial_receipts | total_all_trial_disposals | total_all_trial_opens | triable_either_way_receipts | ... | indictable_only_opens | committed_for_sentence_receipts | committed_for_sentence_disposals | committed_for_sentence_opens | appeals_receipts | appeals_disposals | appeals_opens | unknown_receipts | unknown_disposals | unknown_opens
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
0 | 2f8d1c1e93f31d0765194a95718f891f | 2025 | Q4 | <NA> | <NA> | <NA> | <NA> | <NA> | <NA> | <NA> | ... | <NA> | <NA> | <NA> | <NA> | <NA> | <NA> | <NA> | <NA> | <NA> | <NA> |
1 | 755b7c10d16531b8a315aad797aa801c | 2025 | Q3 | <NA> | 1 | <NA> | <NA> | <NA> | <NA> | <NA> | ... | <NA> | <NA> | 1 | <NA> | <NA> | <NA> | <NA> | <NA> | <NA> | <NA> |
2 rows × 24 columns
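Beyond `head()`, a couple of cheap checks help confirm the pull worked. A self-contained sketch on a toy frame (hypothetical values, since the Athena table isn't available here):

```python
import pandas as pd

# Toy frame standing in for dfq (hypothetical values)
df_check = pd.DataFrame({
    "year": [2025, 2025],
    "quarter": ["Q4", "Q3"],
    "all_cases_disposals": [pd.NA, 1],
})
print(df_check.shape)                    # (2, 3)
print(int(df_check.isna().sum().sum()))  # 1 missing cell
```

On the real `dfq`, `shape` confirms the row and column counts and `isna().sum()` flags columns that came back entirely empty, like several in the preview above.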
Get Column Names from dbt config
!pip install pyyaml
import yaml
Requirement already satisfied: pyyaml in /opt/conda/lib/python3.9/site-packages (6.0.2)
# Import the dbt config file
file_path = r'courts_caseload_rpt__rpt_crown_court_caseload_properties.yml'
with open(file_path, 'r') as file:
    data = yaml.safe_load(file)
print(data)
{'version': 2, 'models': [{'name': 'courts_caseload_rpt__rpt_crown_court_caseload_quarterly_summary', 'description': 'This model represents the staging component of the reporting layer, utilizing data from the OneCrown dimensional model', 'columns': [{'name': 'id', 'description': 'Unique Identifier for each row', 'tests': ['unique', 'not_null']}, {'name': 'year', 'description': 'Year', 'tests': ['not_null']}, {'name': 'quarter', 'description': 'Quarter', 'tests': ['not_null']}, {'name': 'all_cases_receipts', 'description': 'All cases:receipts'}, {'name': 'all_cases_disposals', 'description': 'All cases:disposals'}, {'name': 'all_cases_opens', 'description': 'All cases:open'}, {'name': 'total_all_trial_receipts', 'description': 'All trials:receipts'}, {'name': 'total_all_trial_disposals', 'description': 'All trials:disposals'}, {'name': 'total_all_trial_opens', 'description': 'All trials:open'}, {'name': 'triable_either_way_receipts', 'description': 'Triable-either-way trials:receipts'}, {'name': 'triable_either_way_disposals', 'description': 'Triable-either-way trials:disposals'}, {'name': 'triable_either_way_opens', 'description': 'Triable-either-way trials:open'}, {'name': 'indictable_only_receipts', 'description': 'Indictable only trials:receipts'}, {'name': 'indictable_only_disposals', 'description': 'Indictable only trials:disposals'}, {'name': 'indictable_only_opens', 'description': 'Indictable only trials:open'}, {'name': 'committed_for_sentence_receipts', 'description': 'Committed for sentence:receipts'}, {'name': 'committed_for_sentence_disposals', 'description': 'Committed for sentence:disposals'}, {'name': 'committed_for_sentence_opens', 'description': 'Committed for sentence:open'}, {'name': 'appeals_receipts', 'description': 'Appeals:receipts'}, {'name': 'appeals_disposals', 'description': 'Appeals:disposals'}, {'name': 'appeals_opens', 'description': 'Appeals:open'}, {'name': 'unknown_receipts', 'description': 'Unknown:receipts'}, {'name': 
'unknown_disposals', 'description': 'Unknown:disposals'}, {'name': 'unknown_opens', 'description': 'Unknown:open'}]}]}
# Get output column names from the .yml description tags
updated_column_name = []
for model in data['models']:
    if 'description' in model:
        updated_column_name.append(model['description'])
    if 'columns' in model:
        for column in model['columns']:
            if 'description' in column:
                updated_column_name.append(column['description'])
# Drop the first two entries (the model description and the 'id' column)
del updated_column_name[:2]
print(updated_column_name)
['Year', 'Quarter', 'All cases:receipts', 'All cases:disposals', 'All cases:open', 'All trials:receipts', 'All trials:disposals', 'All trials:open', 'Triable-either-way trials:receipts', 'Triable-either-way trials:disposals', 'Triable-either-way trials:open', 'Indictable only trials:receipts', 'Indictable only trials:disposals', 'Indictable only trials:open', 'Committed for sentence:receipts', 'Committed for sentence:disposals', 'Committed for sentence:open', 'Appeals:receipts', 'Appeals:disposals', 'Appeals:open', 'Unknown:receipts', 'Unknown:disposals', 'Unknown:open']
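The same extraction can be exercised end to end without the real dbt properties file. A minimal sketch using an inline YAML string (the model and column names here are hypothetical):

```python
import yaml

# Inline stand-in for the dbt properties file
cfg = yaml.safe_load("""
version: 2
models:
  - name: example_model
    description: Example model description
    columns:
      - name: id
        description: Unique Identifier for each row
      - name: year
        description: Year
      - name: receipts
        description: "All cases:receipts"
""")

names = []
for model in cfg["models"]:
    if "description" in model:
        names.append(model["description"])
    for column in model.get("columns", []):
        if "description" in column:
            names.append(column["description"])

del names[:2]  # drop the model description and the 'id' entry
print(names)   # ['Year', 'All cases:receipts']
```

Using `model.get("columns", [])` also keeps the loop safe for models that declare no columns.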
# Count the items in the list to check against the dataframe's column count
count = len(updated_column_name)
print(count)
23
# Map the dataframe columns to the updated names from the list
dfq = dfq.drop(columns='id')
dfq.columns = updated_column_name
column_list = dfq.columns.to_list()
print(column_list)
['Year', 'Quarter', 'All cases:receipts', 'All cases:disposals', 'All cases:open', 'All trials:receipts', 'All trials:disposals', 'All trials:open', 'Triable-either-way trials:receipts', 'Triable-either-way trials:disposals', 'Triable-either-way trials:open', 'Indictable only trials:receipts', 'Indictable only trials:disposals', 'Indictable only trials:open', 'Committed for sentence:receipts', 'Committed for sentence:disposals', 'Committed for sentence:open', 'Appeals:receipts', 'Appeals:disposals', 'Appeals:open', 'Unknown:receipts', 'Unknown:disposals', 'Unknown:open']
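A positional rename like the one above silently misaligns if the YAML order and the table's column order ever drift apart, so it is worth asserting the lengths match first. A sketch on a toy frame (hypothetical data, not the notebook's table):

```python
import pandas as pd

# Toy frame standing in for dfq after dropping 'id'
df_demo = pd.DataFrame({"year": [2024], "quarter": ["Q1"], "receipts": [10]})
new_names = ["Year", "Quarter", "All cases:receipts"]

# Fail loudly if the YAML-derived list and the table have drifted apart
assert len(new_names) == len(df_demo.columns), (
    f"expected {len(df_demo.columns)} names, got {len(new_names)}"
)
df_demo.columns = new_names
print(df_demo.columns.to_list())  # ['Year', 'Quarter', 'All cases:receipts']
```

The assertion turns a subtle mislabelling bug into an immediate, explained failure.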
!pip install gptables
import gptables as gpt
Collecting gptables
  Using cached gptables-1.2.0-py3-none-any.whl (72 kB)
Requirement already satisfied: xlrd>=1.2.0 in /opt/conda/lib/python3.9/site-packages (from gptables) (2.0.1)
Requirement already satisfied: pyyaml>=3.12 in /opt/conda/lib/python3.9/site-packages (from gptables) (6.0.2)
Requirement already satisfied: pandas>=0.25.3 in /opt/conda/lib/python3.9/site-packages (from gptables) (1.3.3)
Collecting XlsxWriter>=1.2.6
  Using cached XlsxWriter-3.2.2-py3-none-any.whl (165 kB)
... (further transitive "Requirement already satisfied" lines omitted) ...
Installing collected packages: XlsxWriter, gptables
Successfully installed XlsxWriter-3.2.2 gptables-1.2.0
# Define metadata for the quarterly table
metadata = {
    "title": "Table C1: Receipts, disposals and open criminal cases in the Crown Court in England and Wales, annually 2016 - 2023, quarterly Q1 2016 - Q3 2024 [note 13][note 14][note 15][note 16][note 118] (“One Crown”)",
    "source": "Source: XHIBIT system and Common Platform, HMCTS",
}
# Create a GPTable object
q_table = gpt.GPTable(
    table=dfq,
    index_columns={1: 0},
    title=metadata["title"],
    table_name="Crown_Court_Cases",
    # subtitle=metadata["subtitle"],
    source=metadata["source"],
)
# GPTable.table is already a pandas DataFrame; take a copy for the Excel export
q_data = q_table.table
dfql = pd.DataFrame(q_data)
# Query to get the dataset from Athena
y_query = "SELECT * FROM courts_caseload_rpt_dev_dbt.rpt_crown_court_caseload_yearly_summary"
dfy = pydb.read_sql_query(y_query)
# Get the first two rows of the dataset to explore
dfy.head(2)
 | id | year | all_cases_receipts | all_cases_disposals | all_cases_opens | total_all_trial_receipts | total_all_trial_disposals | total_all_trial_opens | triable_either_way_receipts | triable_either_way_disposals | ... | indictable_only_opens | committed_for_sentence_receipts | committed_for_sentence_disposals | committed_for_sentence_opens | appeals_receipts | appeals_disposals | appeals_opens | unknown_receipts | unknown_disposals | unknown_opens
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
0 | 312351bff07989769097660a56395065 | 2025 | 14928 | 14455 | 75261 | <NA> | <NA> | <NA> | 6241 | 5594 | ... | 21871 | 5202 | 5301 | 11599 | 571 | 754 | 2589 | 2 | 3 | 6 |
1 | 07811dc6c422334ce36a09ff5cd6fe71 | 2024 | 121571 | 113936 | 174941 | <NA> | <NA> | <NA> | 49267 | 43936 | ... | 42049 | 43399 | 42505 | 46094 | 5419 | 5627 | 7586 | 23 | 20 | 23 |
2 rows × 23 columns
# Import the dbt config file for the yearly summary
file_path_y = r'courts_caseload_rpt__rpt_crown_court_caseload_Y_properties.yml'
with open(file_path_y, 'r') as file:
    data_y = yaml.safe_load(file)
print(data_y)
{'version': 2, 'models': [{'name': 'courts_caseload_rpt__rpt_crown_court_caseload_yearly_summary', 'description': 'This model represents the base component of the reporting layer, utilizing data from the OneCrown dimensional model to create a yearly summary of the caseload report', 'columns': [{'name': 'id', 'description': 'Unique Identifier for each row', 'tests': ['unique', 'not_null']}, {'name': 'year', 'description': 'Year', 'tests': ['not_null']}, {'name': 'all_cases_receipts', 'description': 'All cases:receipts'}, {'name': 'all_cases_disposals', 'description': 'All cases:disposals'}, {'name': 'all_cases_opens', 'description': 'All cases:open'}, {'name': 'total_all_trial_receipts', 'description': 'All trials:receipts'}, {'name': 'total_all_trial_disposals', 'description': 'All trials:disposals'}, {'name': 'total_all_trial_opens', 'description': 'All trials:open'}, {'name': 'triable_either_way_receipts', 'description': 'Triable-either-way trials:receipts'}, {'name': 'triable_either_way_disposals', 'description': 'Triable-either-way trials:disposals'}, {'name': 'triable_either_way_opens', 'description': 'Triable-either-way trials:open'}, {'name': 'indictable_only_receipts', 'description': 'Indictable only trials:receipts'}, {'name': 'indictable_only_disposals', 'description': 'Indictable only trials:disposals'}, {'name': 'indictable_only_opens', 'description': 'Indictable only trials:open'}, {'name': 'committed_for_sentence_receipts', 'description': 'Committed for sentence:receipts'}, {'name': 'committed_for_sentence_disposals', 'description': 'Committed for sentence:disposals'}, {'name': 'committed_for_sentence_opens', 'description': 'Committed for sentence:open'}, {'name': 'appeals_receipts', 'description': 'Appeals:receipts'}, {'name': 'appeals_disposals', 'description': 'Appeals:disposals'}, {'name': 'appeals_opens', 'description': 'Appeals:open'}, {'name': 'unknown_receipts', 'description': 'Unknown:receipts'}, {'name': 'unknown_disposals', 'description': 
'Unknown:disposals'}, {'name': 'unknown_opens', 'description': 'Unknown:open'}]}]}
# Get output column names from the .yml description tags
updated_column_name_y = []
for model in data_y['models']:
    if 'description' in model:
        updated_column_name_y.append(model['description'])
    if 'columns' in model:
        for column in model['columns']:
            if 'description' in column:
                updated_column_name_y.append(column['description'])
# Drop the first two entries (the model description and the 'id' column)
del updated_column_name_y[:2]
print(updated_column_name_y)
['Year', 'All cases:receipts', 'All cases:disposals', 'All cases:open', 'All trials:receipts', 'All trials:disposals', 'All trials:open', 'Triable-either-way trials:receipts', 'Triable-either-way trials:disposals', 'Triable-either-way trials:open', 'Indictable only trials:receipts', 'Indictable only trials:disposals', 'Indictable only trials:open', 'Committed for sentence:receipts', 'Committed for sentence:disposals', 'Committed for sentence:open', 'Appeals:receipts', 'Appeals:disposals', 'Appeals:open', 'Unknown:receipts', 'Unknown:disposals', 'Unknown:open']
# Count the items in the list to check against the dataframe's column count
count = len(updated_column_name_y)
print(count)
22
# Map the dataframe columns to the updated names from the list
dfy = dfy.drop(columns='id')
dfy.columns = updated_column_name_y
column_list_y = dfy.columns.to_list()
print(column_list_y)
['Year', 'All cases:receipts', 'All cases:disposals', 'All cases:open', 'All trials:receipts', 'All trials:disposals', 'All trials:open', 'Triable-either-way trials:receipts', 'Triable-either-way trials:disposals', 'Triable-either-way trials:open', 'Indictable only trials:receipts', 'Indictable only trials:disposals', 'Indictable only trials:open', 'Committed for sentence:receipts', 'Committed for sentence:disposals', 'Committed for sentence:open', 'Appeals:receipts', 'Appeals:disposals', 'Appeals:open', 'Unknown:receipts', 'Unknown:disposals', 'Unknown:open']
# Define metadata for the yearly table
metadata = {
    "title": "Table C1: Receipts, disposals and open criminal cases in the Crown Court in England and Wales, annually 2016 - 2023, quarterly Q1 2016 - Q3 2024 [note 13][note 14][note 15][note 16][note 118] (“One Crown”)",
    "subtitle1": "This worksheet contains one table.",
    "subtitle2": "This table contains notes, which can be found in the Notes worksheet.",
    "source": "Source: XHIBIT system and Common Platform, HMCTS",
}
# Create a GPTable object
y_table = gpt.GPTable(
    table=dfy,
    index_columns={1: 0},
    title=metadata["title"],
    table_name="Crown_Court_Cases",
    # subtitle=metadata["subtitle"],
    source=metadata["source"],
)
# GPTable.table is already a pandas DataFrame; take a copy for the Excel export
y_data = y_table.table
dfyl = pd.DataFrame(y_data)
!pip install openpyxl
!pip install XlsxWriter
import openpyxl
import xlsxwriter
from openpyxl import load_workbook
from openpyxl.utils import get_column_letter
from openpyxl.styles import Font, Alignment
Requirement already satisfied: openpyxl in /opt/conda/lib/python3.9/site-packages (3.1.5)
Requirement already satisfied: et-xmlfile in /opt/conda/lib/python3.9/site-packages (from openpyxl) (2.0.0)
Requirement already satisfied: XlsxWriter in /opt/conda/lib/python3.9/site-packages (3.2.2)
# Save both tables to a single Excel file
output_filename = "GPTable_Output.xlsx"
with pd.ExcelWriter(output_filename, engine="openpyxl") as writer:
    dfyl.to_excel(writer, sheet_name="Yearly_Crown_Court_Cases", startrow=5, index=False)
    dfql.to_excel(writer, sheet_name="Quarterly_Crown_Court_Cases", startrow=5, index=False)
# Edit the workbook: insert metadata rows into each sheet
# Load the workbook
wb = load_workbook(output_filename)
def insert_metadata(sheet_name):
    ws = wb[sheet_name]
    ws["A1"] = metadata["title"]      # Title in the first row
    ws["A2"] = metadata["subtitle1"]  # Subtitle 1 in the second row
    ws["A3"] = metadata["subtitle2"]  # Subtitle 2 in the third row
    ws["A4"] = metadata["source"]     # Source in the fourth row
    ws["A1"].font = Font(bold=True)   # Make the title bold
    # Wrap text on row 6, the column headers (startrow=5 is zero-based)
    for col in range(1, ws.max_column + 1):
        ws.cell(row=6, column=col).alignment = Alignment(wrap_text=True)
# Apply metadata to both sheets
insert_metadata("Yearly_Crown_Court_Cases")
insert_metadata("Quarterly_Crown_Court_Cases")
# Save the updated file
wb.save(output_filename)
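The layout logic above hinges on `startrow=5` being zero-based in pandas while openpyxl rows are one-based, so the header row written by `to_excel` lands on Excel row 6. A minimal self-contained sketch of the same pattern (hypothetical title and file name):

```python
from openpyxl import Workbook, load_workbook
from openpyxl.styles import Font, Alignment

# Metadata in rows 1-4, column headers on Excel row 6, as in the sheets above
wb_demo = Workbook()
ws_demo = wb_demo.active
ws_demo["A1"] = "Example title"
ws_demo["A1"].font = Font(bold=True)
ws_demo.cell(row=6, column=1, value="Year")
ws_demo.cell(row=6, column=1).alignment = Alignment(wrap_text=True)
wb_demo.save("metadata_demo.xlsx")

# Reload to confirm the values and styles survived the round trip
check = load_workbook("metadata_demo.xlsx")["Sheet"]
print(check["A1"].value, check["A6"].value)  # Example title Year
```

Reloading the saved file is a quick way to verify that styling (bold, wrap text) was actually persisted and not just set in memory.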
Download Link
from IPython.display import HTML
def create_download_link(filename):
    return HTML(f'<a href="{filename}" download>Click here to download {filename}</a>')
create_download_link(output_filename)
Copy Workbook To S3 Location
!pip install boto3
Requirement already satisfied: boto3 in /opt/conda/lib/python3.9/site-packages (1.24.82)
Requirement already satisfied: botocore<1.28.0,>=1.27.82 in /opt/conda/lib/python3.9/site-packages (from boto3) (1.27.82)
Requirement already satisfied: jmespath<2.0.0,>=0.7.1 in /opt/conda/lib/python3.9/site-packages (from boto3) (1.0.1)
Requirement already satisfied: s3transfer<0.7.0,>=0.6.0 in /opt/conda/lib/python3.9/site-packages (from boto3) (0.6.0)
Requirement already satisfied: urllib3<1.27,>=1.25.4 in /opt/conda/lib/python3.9/site-packages (from botocore<1.28.0,>=1.27.82->boto3) (1.26.6)
Requirement already satisfied: python-dateutil<3.0.0,>=2.1 in /opt/conda/lib/python3.9/site-packages (from botocore<1.28.0,>=1.27.82->boto3) (2.8.2)
Requirement already satisfied: six>=1.5 in /opt/conda/lib/python3.9/site-packages (from python-dateutil<3.0.0,>=2.1->botocore<1.28.0,>=1.27.82->boto3) (1.16.0)
import boto3
# Set up an AWS session with explicit credentials (left commented out;
# credentials are picked up from the environment instead)
# session = boto3.Session(
#     aws_access_key_id="",
#     aws_secret_access_key="",
#     region_name="",
# )
# Create an S3 client using the session
s3_client = boto3.client("s3")
# Prepare the file and the bucket name
bucket_name = "alpha-fotest"
output_filename = "GPTable_Output.xlsx"  # The file being uploaded
# Open the file in binary mode and upload it, closing the handle afterwards
with open(output_filename, "rb") as data:
    response = s3_client.put_object(Bucket=bucket_name, Body=data, Key=output_filename)
print(f"AWS response code for uploading file is {response['ResponseMetadata']['HTTPStatusCode']}")
AWS response code for uploading file is 200