#!/usr/bin/env python
# coding: utf-8

# # Start-to-Finish Example: [TOV](https://en.wikipedia.org/wiki/Tolman%E2%80%93Oppenheimer%E2%80%93Volkoff_equation) Neutron Star Simulation: The "Hydro without Hydro" Test
#
# ## Authors: Zach Etienne & Phil Chang
# ### Formatting improvements courtesy Brandon Clark
#
# ## This module sets up initial data for a neutron star on a spherical numerical grid, using the approach [documented in the previous NRPy+ module](Tutorial-Start_to_Finish-BSSNCurvilinear-TOV_initial_data.ipynb), and then evolves these initial data forward in time. The aim is to reproduce the results from [Baumgarte, Hughes, and Shapiro](https://arxiv.org/abs/gr-qc/9902024) (which were performed using Cartesian grids), demonstrating that the extrinsic curvature and the Hamiltonian constraint violation converge to zero with increasing numerical resolution.
#
# **Notebook Status:** Validated
#
# **Validation Notes:** This module has been validated to exhibit convergence of the Hamiltonian constraint violation to zero at the expected order (see the [plot](#convergence) at the bottom). Note that the region causally influenced by the surface of the star will exhibit a lower convergence order, due to the sharp drop to zero in $T^{\mu\nu}$ there.
#
# ### NRPy+ Source Code for this module:
#
# * [TOV/TOV_Solver.py](../edit/TOV/TOV_Solver.py); ([**NRPy+ Tutorial module reviewing mathematical formulation and equations solved**](Tutorial-ADM_Initial_Data-TOV.ipynb)); ([**start-to-finish NRPy+ Tutorial module demonstrating that initial data satisfy Hamiltonian constraint**](Tutorial-Start_to_Finish-BSSNCurvilinear-TOV_initial_data.ipynb)): Tolman-Oppenheimer-Volkoff (TOV) initial data; defines all ADM variables and nonzero $T^{\mu\nu}$ components in the Spherical basis.
# * [BSSN/ADM_Initial_Data_Reader__BSSN_Converter.py](../edit/BSSN/ADM_Initial_Data_Reader__BSSN_Converter.py); ([**NRPy+ Tutorial documentation**](Tutorial-ADM_Initial_Data_Reader__BSSN_Converter.ipynb)): Registers the C function for our "universal" initial data reader/converter initial_data_reader__convert_ADM_Cartesian_to_BSSN().
# * [MoLtimestepping/MoL.py](../edit/MoLtimestepping/MoL.py); ([**NRPy+ Tutorial documentation**](Tutorial-Method_of_Lines-C_Code_Generation.ipynb)): Registers C functions for Method of Lines time integration, to push our initial data forward in time.
# * [CurviBoundaryConditions/CurviBoundaryConditions.py](../edit/CurviBoundaryConditions/CurviBoundaryConditions.py); ([**NRPy+ Tutorial documentation**](Tutorial-Start_to_Finish-Curvilinear_BCs.ipynb)): Registers the C functions that apply boundary conditions on our singular, curvilinear grids.
# * [BSSN/BSSN_Ccodegen_library.py](../edit/BSSN/BSSN_Ccodegen_library.py); [\[**tutorial**\]](Tutorial-BSSN_time_evolution-C_codegen_library.ipynb): Implements a number of helper functions for generating C codes from symbolic expressions generated in the following modules/tutorials: # * [BSSN/BSSN_constraints.py](../edit/BSSN/BSSN_constraints.py); [\[**tutorial**\]](Tutorial-BSSN_constraints.ipynb): Hamiltonian constraint in BSSN curvilinear basis/coordinates # * [BSSN/BSSN_RHSs.py](../edit/BSSN/BSSN_RHSs.py); [\[**tutorial**\]](Tutorial-BSSN_time_evolution-BSSN_RHSs.ipynb): Generates the right-hand sides for the BSSN evolution equations in singular, curvilinear coordinates # * [BSSN/BSSN_gauge_RHSs.py](../edit/BSSN/BSSN_gauge_RHSs.py); [\[**tutorial**\]](Tutorial-BSSN_time_evolution-BSSN_gauge_RHSs.ipynb): Generates the right-hand sides for the BSSN gauge evolution equations in singular, curvilinear coordinates # # # ## Introduction: # Here we use NRPy+ to evolve initial data for a [simple polytrope TOV star](https://en.wikipedia.org/wiki/Tolman%E2%80%93Oppenheimer%E2%80%93Volkoff_equation), keeping the $T^{\mu\nu}$ source terms fixed. As the hydrodynamical fields that go into $T^{\mu\nu}$ are not updated, this is called the "Hydro without Hydro" test. # # The entire algorithm is outlined as follows, with links to the relevant NRPy+ tutorial notebooks listed at each step: # # 1. Allocate memory for gridfunctions, including temporary storage for the Method of Lines time integration [(**NRPy+ tutorial on NRPy+ Method of Lines algorithm**)](Tutorial-Method_of_Lines-C_Code_Generation.ipynb). # 1. Set gridfunction values to initial data # * [**NRPy+ tutorial on TOV initial data**](Tutorial-ADM_Initial_Data-TOV.ipynb) # * [**NRPy+ tutorial on validating TOV initial data**](Tutorial-Start_to_Finish-BSSNCurvilinear-TOV_initial_data.ipynb). # 1. Next, integrate the initial data forward in time using the Method of Lines coupled to a Runge-Kutta explicit timestepping algorithm: # 1. At the start of each iteration in time, output the Hamiltonian constraint violation # * [**NRPy+ tutorial on BSSN constraints**](Tutorial-BSSN_constraints.ipynb). # 1. At each RK time substep, do the following: # 1. Evaluate BSSN RHS expressions # * [**NRPy+ tutorial on BSSN right-hand sides**](Tutorial-BSSN_time_evolution-BSSN_RHSs.ipynb) # * [**NRPy+ tutorial on BSSN gauge condition right-hand sides**](Tutorial-BSSN_time_evolution-BSSN_gauge_RHSs.ipynb) # * [**NRPy+ tutorial on adding stress-energy source terms to BSSN RHSs**](Tutorial-BSSN_stress_energy_source_terms.ipynb). # 1. Apply singular, curvilinear coordinate boundary conditions [*a la* the SENR/NRPy+ paper](https://arxiv.org/abs/1712.07658) # * [**NRPy+ tutorial on setting up singular, curvilinear boundary conditions**](Tutorial-Start_to_Finish-Curvilinear_BCs.ipynb) # 1. Enforce constraint on conformal 3-metric: $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ # * [**NRPy+ tutorial on enforcing $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint**](Tutorial-BSSN_enforcing_determinant_gammabar_equals_gammahat_constraint.ipynb) # 1. Repeat the above steps at two numerical resolutions to confirm convergence to zero. # # # # Table of Contents # $$\label{toc}$$ # # This notebook is organized as follows # # 1. [Step 1](#initializenrpy): Set core NRPy+ parameters for numerical grids and reference metric # 1. [Step 2](#tov_id_setup): Set up TOV initial data and register C functions for reading # 1. 
[Step 2.a](#tov_solve): Solve TOV equations to generate neutron star initial data # 1. [Step 2.b](#adm_id_spacetime): Register C functions for interpolating TOV initial data onto our numerical grids, converting TOV metric quantities to BSSN, and constructing $T^{\mu\nu}$ # 1. [Step 3](#ccodegen): Generate C code kernels for BSSN expressions, in parallel if possible # 1. [Step 3.a](#rfm_ccodegen): Generate C code kernels for reference metric # 1. [Step 4](#cparams_rfm_and_domainsize): Set `free_parameters.h`; also output C codes needed for declaring and setting Cparameters # 1. [Step 5](#bc_functs): Set up boundary condition functions for chosen singular, curvilinear coordinate system # 1. [Step 6](#mainc): `Hydro_without_Hydro_Playground`: The C code `main()` function # 1. [Step 7](#compileexec): Compile generated C codes & perform the hydro-without-hydro evolution # 1. [Step 8](#visualize): Visualize the output! # 1. [Step 8.a](#installdownload): Install `scipy` and download `ffmpeg` if they are not yet installed/downloaded # 1. [Step 8.b](#genimages): Generate images for visualization animation # 1. [Step 8.c](#genvideo): Generate visualization animation # 1. [Step 9](#convergence): Plot the numerical error at the end of the simulation, and confirm that it converges to zero with increasing numerical resolution (sampling) # 1. [Step 10](#latex_pdf_output): Output this notebook to $\LaTeX$-formatted PDF file # # # # Step 1: Set core NRPy+ parameters for numerical grids and reference metric \[Back to [top](#toc)\] # $$\label{initializenrpy}$$ # In[1]: # Step P1: Import needed NRPy+ core modules: from outputC import add_to_Cfunction_dict, outC_function_master_list # NRPy+: Core C code output module import finite_difference as fin # NRPy+: Finite difference C code generation module import NRPy_param_funcs as par # NRPy+: Parameter interface import grid as gri # NRPy+: Functions having to do with numerical grids import reference_metric as rfm # NRPy+: Reference metric support import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface from pickling import unpickle_NRPy_env # NRPy+: Pickle/unpickle NRPy+ environment, for parallel codegen import shutil, os, sys # Standard Python modules for multiplatform OS-level functions, benchmarking # Step P2: Create C code output directory: Ccodesrootdir = os.path.join("BSSN_Hydro_without_Hydro_Ccodes") # First remove C code output directory if it exists # Courtesy https://stackoverflow.com/questions/303200/how-do-i-remove-delete-a-folder-that-is-not-empty shutil.rmtree(Ccodesrootdir, ignore_errors=True) # Then create a fresh directory cmd.mkdir(Ccodesrootdir) # Step 1.a: Enable SIMD-optimized code? # I.e., generate BSSN and Ricci C code kernels using SIMD-vectorized # compiler intrinsics, which *greatly improve the code's performance*, # though at the expense of making the C-code kernels less # human-readable. # * Important note in case you wish to modify the BSSN/Ricci kernels # here by adding expressions containing transcendental functions # (e.g., certain scalar fields): # Note that SIMD-based transcendental function intrinsics are not # supported by the default installation of gcc or clang (you will # need to use e.g., the SLEEF library from sleef.org, for this # purpose). The Intel compiler suite does support these intrinsics # however without the need for external libraries. enable_SIMD = True # Step 1.b: Enable reference metric precomputation. 
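#           When enabled, reference-metric quantities involving transcendental functions
#           (e.g., trig functions of the angular coordinates in Spherical-like coordinates)
#           are computed once, stored in the rfmstruct arrays, and simply read back inside
#           the finite-difference kernels. This is also what makes the SIMD kernels of
#           Step 1.a viable, since the SIMD intrinsics do not support transcendental functions.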
enable_rfm_precompute = True if enable_SIMD and not enable_rfm_precompute: print("ERROR: SIMD does not currently handle transcendental functions,\n") print(" like those found in rfmstruct (rfm_precompute).\n") print(" Therefore, enable_SIMD==True and enable_rfm_precompute==False\n") print(" is not supported.\n") sys.exit(1) # Step 1.c: Enable "FD functions". In other words, all finite-difference stencils # will be output as inlined static functions. This is essential for # compiling highly complex FD kernels with using certain versions of GCC; # GCC 10-ish will choke on BSSN FD kernels at high FD order, sometimes # taking *hours* to compile. Unaffected GCC versions compile these kernels # in seconds. FD functions do not slow the code performance, but do add # another header file to the C source tree. # With gcc 7.5.0, enable_FD_functions=True decreases performance by 10% enable_FD_functions = False # Step 2: Set some core parameters, including CoordSystem MoL timestepping algorithm, # FD order, floating point precision, and CFL factor: # Choices are: Spherical, SinhSpherical, SinhSphericalv2, Cylindrical, SinhCylindrical, # SymTP, SinhSymTP CoordSystem = "Spherical" par.set_parval_from_str("reference_metric::CoordSystem", CoordSystem) rfm.reference_metric() # Step 2.a: Set outer boundary condition outer_bc_type = "RADIATION_OUTER_BCS" # can be EXTRAPOLATION_OUTER_BCS or RADIATION_OUTER_BCS radiation_BC_FD_order = 2 # Step 2.b: Set defaults for Coordinate system parameters. # These are perhaps the most commonly adjusted parameters, # so we enable modifications at this high level. # domain_size sets the default value for: # * Spherical's params.RMAX # * SinhSpherical*'s params.AMAX # * Cartesians*'s -params.{x,y,z}min & .{x,y,z}max # * Cylindrical's -params.ZMIN & .{Z,RHO}MAX # * SinhCylindrical's params.AMPL{RHO,Z} # * *SymTP's params.AMAX # domain_size = 7.5 # SET BELOW BASED ON TOV STELLAR RADIUS # sinh_width sets the default value for: # * SinhSpherical's params.SINHW # * SinhCylindrical's params.SINHW{RHO,Z} # * SinhSymTP's params.SINHWAA sinh_width = 0.4 # If Sinh* coordinates chosen # sinhv2_const_dr sets the default value for: # * SinhSphericalv2's params.const_dr # * SinhCylindricalv2's params.const_d{rho,z} sinhv2_const_dr = 0.05 # If Sinh*v2 coordinates chosen # SymTP_bScale sets the default value for: # * SinhSymTP's params.bScale SymTP_bScale = 0.5 # If SymTP chosen # Step 2.c: Set the order of spatial and temporal derivatives; # the core data type, and the CFL factor. # RK_method choices include: Euler, "RK2 Heun", "RK2 MP", "RK2 Ralston", RK3, "RK3 Heun", "RK3 Ralston", # SSPRK3, RK4, DP5, DP5alt, CK5, DP6, L6, DP8 RK_method = "RK4" FD_order = 4 # Finite difference order: even numbers only, starting with 2. 12 is generally unstable REAL = "double" # Best to use double here. default_CFL_FACTOR= 0.5 # (GETS OVERWRITTEN IF SPECIFIED AT COMMAND LINE.) # In pure axisymmetry (symmetry_axes = 2 below) 1.0 works fine. Otherwise 0.5 or lower. # Set the lapse & shift evolution equations to be consistent with the original Hydro without Hydro paper. 
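# ("HarmonicSlicing" is the Bona-Masso-type lapse condition with f(alpha)=1, i.e.,
#  partial_t alpha = -alpha^2 K for vanishing shift; "Frozen" holds the shift fixed at
#  its initial value, beta^i = 0, throughout the evolution.)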
LapseCondition = "HarmonicSlicing" ShiftCondition = "Frozen" # Step 3: Polytropic EOS setup # For EOS_type, choose either "SimplePolytrope" or "PiecewisePolytrope" EOS_type = "SimplePolytrope" # If "PiecewisePolytrope" is chosen as EOS_type, you # must also choose the name of the EOS, which can # be any of the following: # 'PAL6', 'SLy', 'APR1', 'APR2', 'APR3', 'APR4', # 'FPS', 'WFF1', 'WFF2', 'WFF3', 'BBB2', 'BPAL12', # 'ENG', 'MPA1', 'MS1', 'MS2', 'MS1b', 'PS', 'GS1', # 'GS2', 'BGN1H1', 'GNH3', 'H1', 'H2', 'H3', 'H4', # 'H5', 'H6', 'H7', 'PCL2', 'ALF1', 'ALF2', 'ALF3', # 'ALF4' EOS_name = 'SLy' # <-- IGNORED IF EOS_type is not PiecewisePolytrope. # In[2]: # Step 5: Set the finite differencing order to FD_order (set above). par.set_parval_from_str("finite_difference::FD_CENTDERIVS_ORDER", FD_order) # Directory for reference_metric precomputation header files: rfm_precompute_Ccode_outdir = os.path.join(Ccodesrootdir, "rfm_files/") if enable_rfm_precompute: cmd.mkdir(os.path.join(Ccodesrootdir, "rfm_files/")) par.set_parval_from_str("reference_metric::rfm_precompute_Ccode_outdir", rfm_precompute_Ccode_outdir) # Step 6: Copy SIMD/SIMD_intrinsics.h to $Ccodesrootdir/SIMD/SIMD_intrinsics.h if enable_SIMD: cmd.mkdir(os.path.join(Ccodesrootdir,"SIMD")) shutil.copy(os.path.join("SIMD/")+"SIMD_intrinsics.h",os.path.join(Ccodesrootdir,"SIMD/")) # Step 7: Set finite_difference::enable_FD_functions appropriately. Defaults to False if enable_FD_functions: par.set_parval_from_str("finite_difference::enable_FD_functions", enable_FD_functions) # Step 8: If enable_SIMD, then copy SIMD/SIMD_intrinsics.h to $Ccodesrootdir/SIMD/SIMD_intrinsics.h cmd.mkdir(os.path.join(Ccodesrootdir,"SIMD")) if enable_SIMD: shutil.copy(os.path.join("SIMD", "SIMD_intrinsics.h"), os.path.join(Ccodesrootdir, "SIMD")) # Step 9: Set the direction=2 (phi) axis to be the symmetry axis; i.e., # axis "2", corresponding to the i2 direction. # This sets all spatial derivatives in the phi direction to zero. par.set_parval_from_str("indexedexp::symmetry_axes", "2") OMP_pragma_on = "i1" # structure OpenMP loops to parallelize, not over i2 (phi direction), but i1 (theta direction) # In[3]: # Step 10: Generate Runge-Kutta-based (RK-based) timestepping code. 
# As described in the Introduction (above the Table of Contents), this is a 3-step process:
# 10.A: Evaluate RHSs (RHS_string)
# 10.B: Apply boundary conditions (post_RHS_string, pt 1)
# 10.C: Enforce det(gammabar) = det(gammahat) constraint (post_RHS_string, pt 2)
import MoLtimestepping.MoL as MoL
# from MoLtimestepping.RK_Butcher_Table_Dictionary import Butcher_dict
# RK_order = Butcher_dict[RK_method][1]
RHS_string = """
Ricci_eval(params, rfmstruct, RK_INPUT_GFS, auxevol_gfs);
rhs_eval(  params, rfmstruct, auxevol_gfs, RK_INPUT_GFS, RK_OUTPUT_GFS);"""
RHS_string += """
if(params->outer_bc_type == RADIATION_OUTER_BCS)
  apply_bcs_outerradiation_and_inner(params, bcstruct, griddata->xx,
                                     gridfunctions_wavespeed,gridfunctions_f_infinity,
                                     RK_INPUT_GFS, RK_OUTPUT_GFS);"""

# Extrapolation BCs are applied to the evolved gridfunctions themselves after the MoL update
post_RHS_string = """if(params->outer_bc_type == EXTRAPOLATION_OUTER_BCS)
  apply_bcs_outerextrap_and_inner(params, bcstruct, RK_OUTPUT_GFS);
"""
post_RHS_string += "enforce_detgammahat_constraint(params, rfmstruct, RK_OUTPUT_GFS);\n"

if not enable_rfm_precompute:
    RHS_string = RHS_string.replace("rfmstruct", "xx")
    post_RHS_string = post_RHS_string.replace("rfmstruct", "xx")

MoL.register_C_functions_and_NRPy_basic_defines(RK_method,
                                                RHS_string=RHS_string, post_RHS_string=post_RHS_string,
                                                enable_rfm=enable_rfm_precompute, enable_curviBCs=True,
                                                enable_SIMD=False)


# # Step 2: Set up TOV initial data and register C functions for reading \[Back to [top](#toc)\]
# $$\label{tov_id_setup}$$
#
# ## Step 2.a: Solve TOV equations to generate neutron star initial data \[Back to [top](#toc)\]
# $$\label{tov_solve}$$
#
# As documented [in the TOV Initial Data NRPy+ Tutorial Module](Tutorial-TOV_Initial_Data.ipynb) ([older version here](Tutorial-GRMHD_UnitConversion.ipynb)), we will first set up TOV initial data, storing the densely-sampled result to file (***Courtesy Phil Chang***).
#
# `scipy` provides the ODE integration routine used in our TOV solver, so we first make sure that it is installed:

# In[4]:

get_ipython().system('pip install scipy > /dev/null')


# Next we call the [`TOV.TOV_Solver()` function](../edit/TOV/TOV_Solver.py) ([NRPy+ Tutorial module](Tutorial-ADM_Initial_Data-TOV.ipynb)) to set up the initial data, using the default initial data parameters. This function writes the solution to the file `TOVdata.txt` in the C-code output directory (the `outfile` argument below), which the compiled C code reads at startup.

# In[5]:

##########################
# Polytropic EOS example #
##########################
import TOV.Polytropic_EOSs as ppeos

if EOS_type == "SimplePolytrope":
    # Set neos = 1 (single polytrope)
    neos = 1

    # Set rho_poly_tab (not needed for a single polytrope)
    rho_poly_tab = []

    # Set Gamma_poly_tab
    Gamma_poly_tab = [2.0]

    # Set K_poly_tab0
    K_poly_tab0 = 1. # ZACH NOTES: CHANGED FROM 100.

    # Set the eos quantities
    eos = ppeos.set_up_EOS_parameters__complete_set_of_input_variables(neos,rho_poly_tab,Gamma_poly_tab,K_poly_tab0)
    rho_baryon_central = 0.129285
elif EOS_type == "PiecewisePolytrope":
    eos = ppeos.set_up_EOS_parameters__Read_et_al_input_variables(EOS_name)
    rho_baryon_central=2.0
else:
    print("""Error: unknown EOS_type.
Valid types are 'SimplePolytrope' and 'PiecewisePolytrope' """) sys.exit(1) import TOV.TOV_Solver as TOV TOV_Mass, R_Schw_TOV, R_iso_TOV = TOV.TOV_Solver(eos, outfile=os.path.join(Ccodesrootdir, "TOVdata.txt"), rho_baryon_central=rho_baryon_central, return_M_RSchw_and_Riso = True, verbose = True) # domain_size sets the default value for: # * Spherical's params.RMAX # * SinhSpherical*'s params.AMAX # * Cartesians*'s -params.{x,y,z}min & .{x,y,z}max # * Cylindrical's -params.ZMIN & .{Z,RHO}MAX # * SinhCylindrical's params.AMPL{RHO,Z} # * *SymTP's params.AMAX domain_size = 2.0 * R_iso_TOV # # # ## Step 2.b: Register C functions for interpolating TOV initial data onto our numerical grids, converting TOV metric quantities to BSSN, and constructing $T^{\mu\nu}$ \[Back to [top](#toc)\] # $$\label{adm_id_spacetime}$$ # # First, we register all the C functions needed to read the TOV data file, interpolate the TOV solution to our numerical grid, and convert the TOV quantities to ADM variables and $T^{\mu\nu}$. # In[6]: import TOV.TOV_Ccodegen_library as TOVCL # NRPy+: TOV C codegen library TOVCL.add_to_Cfunction_dict_TOV_read_data_file_set_ID_persist(interp_stencil_size=12) TOVCL.add_to_Cfunction_dict_TOV_ID_function() TOVCL.add_to_Cfunction_dict_TOV_interpolate_1D() # # # # Step 3: Generate C code kernels for BSSN expressions, in parallel if possible \[Back to [top](#toc)\] # $$\label{ccodegen}$$ # # In the following code cell, we create a list of Python functions, which each registers a single C code function in `outputC`'s `outC_function_dict` dictionary. These Python functions are defined in # # 1. [`BSSN.Initial_Data_Reader__BSSN_Converter`](../edit/BSSN.Initial_Data_Reader__BSSN_Converter.py); ([\[**tutorial**\]](Tutorial-ADM_Initial_Data_Reader__BSSN_Converter.ipynb)), which registers the C function for our "universal" initial data reader/converter `initial_data_reader__convert_ADM_spherical_to_BSSN()`. # * This function first calls `TOV_ID_function()` to read the initial data from a file. Spacetime data are given in terms of ADM quantities, and fluid quantity data are given in terms of the stress-energy tensor. Then the function converts the ADM quantities to BSSN in our desired reference metric and performs the basis transform on the stress-energy tensor as well. # 1. the [`BSSN.BSSN_Ccodegen_library`](../edit/BSSN/BSSN_Ccodegen_library.py); [\[**tutorial**\]](Tutorial-BSSN_time_evolution-C_codegen_library.ipynb), which contains Python functions for generating C code from symbolic expressions constructed within the following NRPy+ modules/tutorials: # 1. [BSSN/BSSN_constraints](../edit/BSSN/BSSN_constraints.py); [\[**tutorial**\]](Tutorial-BSSN_constraints.ipynb): Hamiltonian constraint in BSSN curvilinear basis/coordinates # 1. [BSSN/BSSN_RHSs](../edit/BSSN/BSSN_RHSs.py); [\[**tutorial**\]](Tutorial-BSSN_time_evolution-BSSN_RHSs.ipynb): Generates the right-hand sides for the BSSN evolution equations in singular, curvilinear coordinates # 1. [BSSN/BSSN_gauge_RHSs](../edit/BSSN/BSSN_gauge_RHSs.py); [\[**tutorial**\]](Tutorial-BSSN_time_evolution-BSSN_gauge_RHSs.ipynb): Generates the right-hand sides for the BSSN gauge evolution equations in singular, curvilinear coordinates # 1. 
[BSSN/Enforce_Detgammahat_Constraint](../edit/BSSN/Enforce_Detgammahat_Constraint.py); [**tutorial**](Tutorial-BSSN_enforcing_determinant_gammabar_equals_gammahat_constraint.ipynb): Generates symbolic expressions for enforcing the $\det{\bar{\gamma}}=\det{\hat{\gamma}}$ constraint # # # Next, from within a `multiprocessing` environment, we then call all the Python C-code generation functions in this list in parallel (if `multiprocessing` is supported). This is quite useful, as these functions take several seconds to complete. # # Within each `multiprocessing` process, the current NRPy+ environment is cloned, and a new function is registered to the `outC_function_dict` dictionary. Thus when each process completes, it contains a unique NRPy+ environment, with only its function registered. We address this by saving each process' NRPy+ environment and sending it back in a common binary format known as a `pickle`, using NRPy+'s [`pickling`](../edit/pickling.py) module. The environments are combined in an unpickling such that all functions exist within the same `outC_function_dict` dictionary. # # To make the current environment fully consistent, we call `reference_metric.py` to register all its associated C functions (stored in globals) and contributions to `NRPy_basic_defines.h`. # In[7]: # Step 2: Generate C code kernels for BSSN expressions, in parallel if possible; import BSSN.BSSN_Ccodegen_library as BCL # Step 2.a: Create a list of functions we wish to evaluate in parallel (if possible) # Create lists for all functions we wish to register codegen_funcs = [] codegen_funcs.append(BCL.add_rhs_eval_to_Cfunction_dict) codegen_funcs.append(BCL.add_Ricci_eval_to_Cfunction_dict) codegen_funcs.append(BCL.add_BSSN_constraints_to_Cfunction_dict) codegen_funcs.append(BCL.add_enforce_detgammahat_constraint_to_Cfunction_dict) import BSSN.ADM_Initial_Data_Reader__BSSN_Converter as IDread codegen_funcs.append(IDread.add_to_Cfunction_dict_initial_data_reader__convert_ADM_Sph_or_Cart_to_BSSN) # Step 2.b: Define master functions for parallelization. 
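#           master_func(arg) dispatches on codegen_funcs[arg].__name__: the initial-data
#           reader/converter is called with its own argument list, while the BSSN kernels
#           (RHSs, Ricci, constraints, det(gammabar)=det(gammahat) enforcement) are generated
#           with rfm_precompute temporarily enabled, after which the rfm environment is restored.
#           Each call returns a pickled NRPy+ environment, which the parent process merges below.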
# Note that lambdifying this doesn't work in Python 3 def master_func(arg): if codegen_funcs[arg].__name__ == "add_to_Cfunction_dict_initial_data_reader__convert_ADM_Sph_or_Cart_to_BSSN": ret = codegen_funcs[arg](addl_includes=None, rel_path_to_Cparams=os.path.join("."), input_Coord="Spherical", include_T4UU=True) else: if enable_rfm_precompute: # We use rfm_precompute for all BSSN functions: par.set_parval_from_str("reference_metric::enable_rfm_precompute", "True") rfm.reference_metric() if codegen_funcs[arg].__name__ == "add_BSSN_constraints_to_Cfunction_dict": ret = codegen_funcs[arg](includes=["NRPy_basic_defines.h"], enable_stress_energy_source_terms=True, rel_path_to_Cparams=os.path.join("."), output_H_only=True, enable_rfm_precompute=enable_rfm_precompute, enable_SIMD=enable_SIMD, OMP_pragma_on=OMP_pragma_on) elif codegen_funcs[arg].__name__ == "add_rhs_eval_to_Cfunction_dict": ret = codegen_funcs[arg](includes=["NRPy_basic_defines.h"], rel_path_to_Cparams=os.path.join("."), enable_rfm_precompute=enable_rfm_precompute, enable_SIMD=enable_SIMD, enable_stress_energy_source_terms=True, OMP_pragma_on=OMP_pragma_on, LapseCondition=LapseCondition, ShiftCondition=ShiftCondition) elif codegen_funcs[arg].__name__ == "add_Ricci_eval_to_Cfunction_dict": ret = codegen_funcs[arg](includes=["NRPy_basic_defines.h"], rel_path_to_Cparams=os.path.join("."), enable_rfm_precompute=enable_rfm_precompute, enable_SIMD=enable_SIMD, OMP_pragma_on=OMP_pragma_on) elif codegen_funcs[arg].__name__ == "add_enforce_detgammahat_constraint_to_Cfunction_dict": ret = codegen_funcs[arg](includes=["NRPy_basic_defines.h"], rel_path_to_Cparams=os.path.join("."), enable_rfm_precompute=enable_rfm_precompute, OMP_pragma_on=OMP_pragma_on) else: print("ERROR: DID NOT RECOGNIZE FUNCTION " + codegen_funcs[arg].__name__ + "\n") sys.exit(1) if enable_rfm_precompute: par.set_parval_from_str("reference_metric::enable_rfm_precompute", "False") rfm.ref_metric__hatted_quantities() return ret NRPyEnvVars = [] raised_exception = False try: if os.name == 'nt': # It's a mess to get working in Windows, so we don't bother. :/ # https://medium.com/@grvsinghal/speed-up-your-python-code-using-multiprocessing-on-windows-and-jupyter-or-ipython-2714b49d6fac raise Exception("Parallel codegen currently not available in certain environments, e.g., Windows") # Step 2.d: Import the multiprocessing module. import multiprocessing # Step 2.e: Evaluate list of functions in parallel if possible; # otherwise fallback to serial evaluation: pool = multiprocessing.Pool() NRPyEnvVars.append(pool.map(master_func, range(len(codegen_funcs)))) pool.terminate() pool.join() except: print("FAILED PARALLEL CODEGEN!") NRPyEnvVars = [] # Reset, as pickling/unpickling unnecessary for serial codegen (see next line) # Steps 2.d-e, alternate: As fallback, evaluate functions in serial. 
# This will happen on Android and Windows systems for i, func in enumerate(codegen_funcs): master_func(i) raised_exception = True outCfunc_master_list = outC_function_master_list if not raised_exception: outCfunc_master_list = unpickle_NRPy_env(NRPyEnvVars) for el in outCfunc_master_list: if el not in outC_function_master_list: # in case there are duplicate funcs, which can happen # if finite_difference_functions = True outC_function_master_list += [el] # # # ## Step 3.a: Generate C code kernels for reference metric \[Back to [top](#toc)\] # $$\label{rfm_ccodegen}$$ # # In the [reference_metric](../edit/reference_metric.py) NRPy+ module, `register_C_functions_and_NRPy_basic_defines()` registers the following C functions to `outC_Cfunction_dict`: # # 1. `find_timestep()`: Finds the minimum spacing between adjacent gridpoints on our numerical grid $\min(ds_i)$, and sets the timestep according to the [CFL](https://en.wikipedia.org/w/index.php?title=Courant%E2%80%93Friedrichs%E2%80%93Lewy_condition&oldid=806430673) condition: $\Delta t \le \frac{\min(ds_i)}{c}$, where $c$ is the wavespeed, and $ds_i = h_i \Delta x^i$ is the proper distance between neighboring gridpoints in the $i$th direction (in 3D, there are 3 directions), $h_i$ is the $i$th reference metric scale factor, and $\Delta x^i$ is the uniform grid spacing in the $i$th direction. # 1. `xx_to_Cart()`: Input = uniformly sampled coordinate xx0,xx1,xx2 (e.g., r,theta,phi in Spherical coordinates). Output = Cartesian coordinate (x,y,z). # 1. `set_Nxx_dxx_invdx_params__and__xx()`: Sets `Nxx{0,1,2}`, `Nxx_plus_2NGHOSTS{0,1,2}`, `dxx{0,1,2}`, and `invdx{0,1,2}`; and defines `xx[3][]`. # 1. `Cart_to_xx_and_nearest_i0i1i2()`: Input = Cartesian coordinate (x,y,z). Output = uniformly sampled coordinate xx0,xx1,xx2 (e.g., r,theta,phi in Spherical coordinates), as well as corresponding grid index `i0,i1,i2`. # In[8]: # Generate & register C function set_Nxx_dxx_invdx_params__and__xx() # Generate & register C function xx_to_Cart() for # (the mapping from xx->Cartesian) for the chosen # CoordSystem: # Generate & register the find_timestep() function # Sets reference_metric globals: NRPy_basic_defines_str, rfm_struct__malloc, rfm_struct__define, rfm_struct__freemem if enable_rfm_precompute: par.set_parval_from_str("reference_metric::rfm_precompute_Ccode_outdir", rfm_precompute_Ccode_outdir) par.set_parval_from_str("reference_metric::enable_rfm_precompute", "True") par.set_parval_from_str("reference_metric::rfm_precompute_to_Cfunctions_and_NRPy_basic_defines", "True") rfm.reference_metric() rfm.register_C_functions(enable_rfm_precompute=enable_rfm_precompute, use_unit_wavespeed_for_find_timestep=True) rfm.register_NRPy_basic_defines(enable_rfm_precompute=enable_rfm_precompute) if enable_rfm_precompute: par.set_parval_from_str("reference_metric::enable_rfm_precompute", "False") rfm.ref_metric__hatted_quantities() # # # # Step 4: Set `free_parameters.h`; also output C codes needed for declaring and setting Cparameters \[Back to [top](#toc)\] # $$\label{cparams_rfm_and_domainsize}$$ # # First we output `free_parameters.h`, which sets initial data parameters, as well as grid domain & reference metric parameters, applying `domain_size` and `sinh_width`/`SymTP_bScale` (if applicable) as set above. # In[9]: # Step 3.e.i: Set free_parameters.h outstr = r"""// Set free-parameter values. // Outer boundary condition choice: params.outer_bc_type = """+outer_bc_type+r"""; // Set the default CFL Factor. Can be overwritten at command line. 
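//   (CFL_FACTOR is passed to find_timestep() in main(), which sets
//    dt = CFL_FACTOR * min(ds_i), with ds_i the proper distance between
//    neighboring gridpoints in the i-th direction.)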
REAL CFL_FACTOR = """+str(default_CFL_FACTOR)+r"""; // Set free-parameter values for BSSN evolution: //params.eta = 1.0; // Frozen shift in hydro-without-hydro evolution // Part P0.d: Set TOV stellar parameters #define TOV_Mass """+str(TOV_Mass)+""" #define TOV_Riso """+str(R_iso_TOV)+"\n" # Append to $Ccodesrootdir/free_parameters.h reference metric parameters based on generic # domain_size,sinh_width,sinhv2_const_dr,SymTP_bScale, # parameters set above. outstr += rfm.out_default_free_parameters_for_rfm("returnstring", domain_size,sinh_width,sinhv2_const_dr,SymTP_bScale) with open(os.path.join(Ccodesrootdir,"free_parameters.h"),"w") as file: file.write(outstr.replace("params.", "griddata.params.")) # # # # Step 5: Set up boundary condition functions for chosen singular, curvilinear coordinate system \[Back to [top](#toc)\] # $$\label{bc_functs}$$ # # Next apply singular, curvilinear coordinate boundary conditions [as documented in the corresponding NRPy+ tutorial notebook](Tutorial-Start_to_Finish-Curvilinear_BCs.ipynb) # In[10]: import CurviBoundaryConditions.CurviBoundaryConditions as CBC CBC.CurviBoundaryConditions_register_NRPy_basic_defines() CBC.CurviBoundaryConditions_register_C_functions(radiation_BC_FD_order=radiation_BC_FD_order) # # # # Step 6: The C code `main()` function for `Hydro_without_Hydro_Playground` \[Back to [top](#toc)\] # $$\label{mainc}$$ # In[11]: def add_to_Cfunction_dict_main__Hydro_without_Hydro_Playground(): includes = ["NRPy_basic_defines.h", "NRPy_function_prototypes.h", "time.h"] desc = """// main() function: // Step 0: Read command-line input, set up grid structure, allocate memory for gridfunctions, set up coordinates // Step 1: Set up initial data to an exact solution // Step 2: Start the timer, for keeping track of how fast the simulation is progressing. // Step 3: Integrate the initial data forward in time using the chosen RK-like Method of // Lines timestepping algorithm, and output periodic simulation diagnostics // Step 3.a: Output 2D data file periodically, for visualization // Step 3.b: Step forward one timestep (t -> t+dt) in time using // chosen RK-like MoL timestepping algorithm // Step 3.c: If t=t_final, output conformal factor & Hamiltonian // constraint violation to 2D data file // Step 3.d: Progress indicator printing to stderr // Step 4: Free all allocated memory """ c_type = "int" name = "main" params = "int argc, const char *argv[]" body = r""" griddata_struct griddata; set_Cparameters_to_default(&griddata.params); // Step 0.a: Set free parameters, overwriting Cparameters defaults // by hand or with command-line input, as desired. 
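  //           (free_parameters.h is generated in Step 4 of this notebook; it sets the outer
  //            boundary condition type, the default CFL_FACTOR, the TOV_Mass/TOV_Riso macros,
  //            and the grid-domain/reference-metric parameters such as RMAX.)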
#include "free_parameters.h" // Step 0.b: Read command-line input, error out if nonconformant if((argc != 4 && argc != 5) || atoi(argv[1]) < NGHOSTS || atoi(argv[2]) < NGHOSTS || atoi(argv[3]) < 2 /* FIXME; allow for axisymmetric sims */) { fprintf(stderr,"Error: Expected three command-line arguments: ./Hydro_without_Hydro_Playground Nx0 Nx1 Nx2,\n"); fprintf(stderr,"where Nx[0,1,2] is the number of grid points in the 0, 1, and 2 directions.\n"); fprintf(stderr,"Nx[] MUST BE larger than NGHOSTS (= %d)\n",NGHOSTS); exit(1); } if(argc == 5) { CFL_FACTOR = strtod(argv[4],NULL); if(CFL_FACTOR > 0.5 && atoi(argv[3])!=2) { fprintf(stderr,"WARNING: CFL_FACTOR was set to %e, which is > 0.5.\n",CFL_FACTOR); fprintf(stderr," This will generally only be stable if the simulation is purely axisymmetric\n"); fprintf(stderr," However, Nx2 was set to %d>2, which implies a non-axisymmetric simulation\n",atoi(argv[3])); } } // Step 0.c: Set up numerical grid structure, first in space... const int Nxx[3] = { atoi(argv[1]), atoi(argv[2]), atoi(argv[3]) }; if(Nxx[0]%2 != 0 || Nxx[1]%2 != 0 || Nxx[2]%2 != 0) { fprintf(stderr,"Error: Cannot guarantee a proper cell-centered grid if number of grid cells not set to even number.\n"); fprintf(stderr," For example, in case of angular directions, proper symmetry zones will not exist.\n"); exit(1); } // Step 0.d: Uniform coordinate grids are stored to *xx[3] // Step 0.d.i: Set bcstruct { int EigenCoord; EigenCoord = 1; // Step 0.d.ii: Call set_Nxx_dxx_invdx_params__and__xx(), which sets // params Nxx,Nxx_plus_2NGHOSTS,dxx,invdx, and xx[] for the // chosen Eigen-CoordSystem. set_Nxx_dxx_invdx_params__and__xx(EigenCoord, Nxx, &griddata.params, griddata.xx); // Step 0.e: Find ghostzone mappings; set up bcstruct bcstruct_set_up(&griddata.params, griddata.xx, &griddata.bcstruct); // Step 0.e.i: Free allocated space for xx[][] array for(int i=0;i<3;i++) free(griddata.xx[i]); // Step 0.f: Call set_Nxx_dxx_invdx_params__and__xx(), which sets // params Nxx,Nxx_plus_2NGHOSTS,dxx,invdx, and xx[] for the // chosen (non-Eigen) CoordSystem. EigenCoord = 0; set_Nxx_dxx_invdx_params__and__xx(EigenCoord, Nxx, &griddata.params, griddata.xx); } // Step 0.g: Time coordinate parameters griddata.params.t_final = 1.8*TOV_Mass; /* Final time is set so that at t=t_final, * data at the origin have not been corrupted * by the approximate outer boundary condition */ // Step 0.h: Set timestep based on smallest proper distance between gridpoints and CFL factor griddata.params.dt = find_timestep(&griddata.params, griddata.xx, CFL_FACTOR); //fprintf(stderr,"# Timestep set to = %e\n",(double)dt); // Ntot = N_outputs * output_every_N -> output_every_N = Ntot/N_outputs; set N_outputs=800 int output_every_N = (int)((griddata.params.t_final/griddata.params.dt + 0.5) / 800.0); if(output_every_N == 0) output_every_N = 1; // Step 0.i: Error out if the number of auxiliary gridfunctions outnumber evolved gridfunctions. // This is a limitation of the RK method. You are always welcome to declare & allocate // additional gridfunctions by hand. if(NUM_AUX_GFS > NUM_EVOL_GFS) { fprintf(stderr,"Error: NUM_AUX_GFS > NUM_EVOL_GFS. Either reduce the number of auxiliary gridfunctions,\n"); fprintf(stderr," or allocate (malloc) by hand storage for *diagnostic_output_gfs. 
\n"); exit(1); } // Step 0.j: Declare struct for gridfunctions and allocate memory for y_n_gfs gridfunctions MoL_malloc_y_n_gfs(&griddata.params, &griddata.gridfuncs); """ if enable_rfm_precompute: body += """ // Step 0.k: Set up precomputed reference metric arrays // Step 0.k.i: Allocate space for precomputed reference metric arrays. rfm_precompute_rfmstruct_malloc(&griddata.params, &griddata.rfmstruct); // Step 0.k.ii: Define precomputed reference metric arrays. rfm_precompute_rfmstruct_define(&griddata.params, griddata.xx, &griddata.rfmstruct);""" body += r""" // Step 0.l: Allocate memory for non-y_n_gfs gridfunctions MoL_malloc_non_y_n_gfs(&griddata.params, &griddata.gridfuncs); // Step 0.m: Set up TOV initial data ID_persist_struct ID_persist; TOV_read_data_file_set_ID_persist("TOVdata.txt", &ID_persist); initial_data_reader__convert_ADM_Spherical_to_BSSN(&griddata, &ID_persist, TOV_ID_function); // Step 0.n: Apply boundary conditions, as initial data // are sometimes ill-defined in ghost zones. // E.g., spherical initial data might not be // properly defined at points where r=-1. apply_bcs_outerextrap_and_inner(&griddata.params, &griddata.bcstruct, griddata.gridfuncs.y_n_gfs); enforce_detgammahat_constraint(&griddata.params, &griddata.rfmstruct, griddata.gridfuncs.y_n_gfs); // Step 1: Start the timer, for keeping track of how fast the simulation is progressing. #ifdef __linux__ // Use high-precision timer in Linux. struct timespec start, end; clock_gettime(CLOCK_REALTIME, &start); #else // Resort to low-resolution, standards-compliant timer in non-Linux OSs // http://www.cplusplus.com/reference/ctime/time/ time_t start_timer,end_timer; time(&start_timer); // Resolution of one second... #endif const int Nxx_plus_2NGHOSTS0 = griddata.params.Nxx_plus_2NGHOSTS0; const int Nxx_plus_2NGHOSTS1 = griddata.params.Nxx_plus_2NGHOSTS1; const int Nxx_plus_2NGHOSTS2 = griddata.params.Nxx_plus_2NGHOSTS2; // Step 3: Integrate the initial data forward in time using the chosen RK-like Method of // Lines timestepping algorithm, and output periodic simulation diagnostics while(griddata.params.time < griddata.params.t_final) { // Main loop to progress forward in time. // Step 3.a: Output 2D data file periodically, for visualization if(griddata.params.n % 100 == 0) { // Evaluate BSSN constraints (currently only Hamiltonian constraint violation computed). Ricci_eval(&griddata.params, &griddata.rfmstruct, griddata.gridfuncs.y_n_gfs, griddata.gridfuncs.auxevol_gfs); // <- depends on Ricci. 
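      // (BSSN_constraints() below reads the conformal Ricci tensor that Ricci_eval()
      //  just stored in auxevol_gfs, so these two calls must remain in this order;
      //  the Hamiltonian constraint violation is written to diagnostic_output_gfs.)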
BSSN_constraints(&griddata.params, &griddata.rfmstruct, griddata.gridfuncs.y_n_gfs, griddata.gridfuncs.auxevol_gfs, griddata.gridfuncs.diagnostic_output_gfs); char filename[100]; sprintf(filename,"out%d-%08d.txt",Nxx[0],griddata.params.n); FILE *out2D = fopen(filename, "w"); LOOP_REGION(NGHOSTS,Nxx_plus_2NGHOSTS0-NGHOSTS, NGHOSTS,Nxx_plus_2NGHOSTS1-NGHOSTS, NGHOSTS,Nxx_plus_2NGHOSTS2-NGHOSTS) { const int idx = IDX3S(i0,i1,i2); REAL xCart[3]; xx_to_Cart(&griddata.params,griddata.xx,i0,i1,i2, xCart); fprintf(out2D,"%e %e %e %e\n", xCart[1]/TOV_Mass,xCart[2]/TOV_Mass, griddata.gridfuncs.y_n_gfs[IDX4ptS(CFGF,idx)],log10(fabs(griddata.gridfuncs.diagnostic_output_gfs[IDX4ptS(HGF,idx)]))); } fclose(out2D); } // Step 3.b: Step forward one timestep (t -> t+dt) in time using // chosen RK-like MoL timestepping algorithm MoL_step_forward_in_time(&griddata); // Step 3.c: At t_final, output conformal factor & Hamiltonian // constraint violation to 2D data file if(griddata.params.time > (griddata.params.t_final - griddata.params.dt)) { // Evaluate BSSN constraints (currently only Hamiltonian constraint violation computed). Ricci_eval(&griddata.params, &griddata.rfmstruct, griddata.gridfuncs.y_n_gfs, griddata.gridfuncs.auxevol_gfs); // <- depends on Ricci. BSSN_constraints(&griddata.params, &griddata.rfmstruct, griddata.gridfuncs.y_n_gfs, griddata.gridfuncs.auxevol_gfs, griddata.gridfuncs.diagnostic_output_gfs); char filename[100]; sprintf(filename,"out%d.txt",Nxx[0]); FILE *out2D = fopen(filename, "w"); LOOP_REGION(NGHOSTS,Nxx_plus_2NGHOSTS0-NGHOSTS, NGHOSTS,Nxx_plus_2NGHOSTS1-NGHOSTS, NGHOSTS,Nxx_plus_2NGHOSTS2-NGHOSTS) { int idx = IDX3S(i0,i1,i2); REAL xCart[3]; xx_to_Cart(&griddata.params,griddata.xx,i0,i1,i2, xCart); fprintf(out2D,"%e %e %e %e\n", xCart[1]/TOV_Mass,xCart[2]/TOV_Mass, griddata.gridfuncs.y_n_gfs[IDX4ptS(CFGF,idx)],log10(fabs(griddata.gridfuncs.diagnostic_output_gfs[IDX4ptS(HGF,idx)]))); } fclose(out2D); } // Step 3.d: Progress indicator printing to stderr. // Note that we're at the end of an iteration, so n=0 // indicates we are actually at the start of iteration 1. if((griddata.params.n + 1) % 10 == 0 || (griddata.params.time > griddata.params.t_final + griddata.params.dt) ) { // Step 3.d.i: Measure average time per iteration #ifdef __linux__ // Use high-precision timer in Linux. clock_gettime(CLOCK_REALTIME, &end); const long long unsigned int time_in_ns = 1000000000L * (end.tv_sec - start.tv_sec) + end.tv_nsec - start.tv_nsec; #else // Resort to low-resolution, standards-compliant timer in non-Linux OSs time(&end_timer); // Resolution of one second... REAL time_in_ns = difftime(end_timer,start_timer)*1.0e9+0.5; // Round up to avoid divide-by-zero. 
#endif
      const REAL s_per_iteration_avg = ((REAL)time_in_ns / (REAL)(griddata.params.n + 1)) / 1.0e9;
      // We are at the n+1st iteration now ---------------------v -------^
      const int iterations_remaining = (int)(griddata.params.t_final/griddata.params.dt + 0.5) - (griddata.params.n + 1);
      const REAL seconds_remaining = (int)(s_per_iteration_avg * (REAL)iterations_remaining);
      const int time_remaining__hrs  = (int)(seconds_remaining / 3600.0);
      const int time_remaining__mins = (int)(seconds_remaining / 60.0) % 60;
      const int time_remaining__secs = (int)(seconds_remaining) - time_remaining__mins*60 - time_remaining__hrs*3600;
      //const REAL num_RHS_pt_evals = (REAL)(Nxx[0]*Nxx[1]*Nxx[2]) * 4.0 * (REAL)(n+1); // 4 RHS evals per gridpoint for RK4
      //const REAL RHS_pt_evals_per_sec = num_RHS_pt_evals / ((REAL)time_in_ns / 1.0e9);

      // Step 3.d.ii: Output simulation progress to stderr
      fprintf(stderr,"%c[2K", 27); // Clear the line
      fprintf(stderr,"It: %d t/M=%.2f dt/M=%.2e | log10H: %.1f | %.1f%%; ETA %dh%dm%ds | t/M/h %.2f\r",
              (griddata.params.n), (double)((griddata.params.n)*griddata.params.dt/TOV_Mass), (double)griddata.params.dt/TOV_Mass,
              (double)log10(fabs(griddata.gridfuncs.diagnostic_output_gfs[IDX4ptS(HGF,(int)(Nxx_plus_2NGHOSTS0*Nxx_plus_2NGHOSTS1*Nxx_plus_2NGHOSTS2*0.5))])),
              (double)(100.0*(REAL)(griddata.params.n)/(REAL)(int)((griddata.params.t_final/griddata.params.dt + 0.5))),
              time_remaining__hrs, time_remaining__mins, time_remaining__secs,
              (double)(griddata.params.dt/TOV_Mass * 3600.0 / s_per_iteration_avg));
      fflush(stderr); // Flush the stderr buffer
    } // End progress indicator if((griddata.params.n + 1) % 10 == 0)
  } // End main loop to progress forward in time.
  fprintf(stderr,"\n"); // Clear the final line of output from progress indicator.

  // Step 4: Free all allocated memory
"""
    if enable_rfm_precompute:
        body += "  rfm_precompute_rfmstruct_freemem(&griddata.params, &griddata.rfmstruct);\n"
    body += r"""
  free(griddata.bcstruct.inner_bc_array);
  for(int ng=0;ng<NGHOSTS*3;ng++) free(griddata.bcstruct.pure_outer_bc_array[ng]);
  MoL_free_memory_y_n_gfs(&griddata.params, &griddata.gridfuncs);
  MoL_free_memory_non_y_n_gfs(&griddata.params, &griddata.gridfuncs);
  for(int i=0;i<3;i++) free(griddata.xx[i]);
  return 0;
"""
    add_to_Cfunction_dict(
        includes=includes,
        desc=desc,
        c_type=c_type, name=name, params=params,
        body=body)


# # Step 7: Compile generated C codes & perform the hydro-without-hydro evolution \[Back to [top](#toc)\]
# $$\label{compileexec}$$
#
# First we register remaining C functions and contributions to `NRPy_basic_defines.h`, then we output `NRPy_basic_defines.h` and `NRPy_function_prototypes.h`.

# In[12]:

add_to_Cfunction_dict_main__Hydro_without_Hydro_Playground()


# In[13]:

import outputC as outC
outC.outputC_register_C_functions_and_NRPy_basic_defines()  # #define M_PI, etc.
# Declare paramstruct, register set_Cparameters_to_default(),
#   and output declare_Cparameters_struct.h and set_Cparameters[].h:
outC.NRPy_param_funcs_register_C_functions_and_NRPy_basic_defines(os.path.join(Ccodesrootdir))
par.register_NRPy_basic_defines()  # add `paramstruct params` to griddata struct.
gri.register_C_functions_and_NRPy_basic_defines()  # #define IDX3S(), etc.
fin.register_C_functions_and_NRPy_basic_defines(NGHOSTS_account_for_onezone_upwind=True,
                                                enable_SIMD=enable_SIMD)  # #define NGHOSTS, and UPWIND() macro if SIMD disabled
IDread.register_NRPy_basic_defines(ID_persist_struct_contents_str=TOVCL.ID_persist_str(),
                                   include_T4UU=True)

# Output functions for computing all finite-difference stencils.
# Must be called after defining all functions depending on FD stencils.
if enable_FD_functions:
    fin.output_finite_difference_functions_h(path=Ccodesrootdir)

# Call this last: Set up NRPy_basic_defines.h and NRPy_function_prototypes.h.
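# (These two headers aggregate, respectively, all registered #define/struct contributions and
#  all registered C-function prototypes, so they can only be constructed after every
#  add_to_Cfunction_dict()/register_*() call above has completed.)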
outC.construct_NRPy_basic_defines_h(Ccodesrootdir, enable_SIMD=enable_SIMD) outC.construct_NRPy_function_prototypes_h(Ccodesrootdir) # Finally, we output all the C codes in `outC_function_dict` to files in the directory `Ccodesrootdir`, generate a `Makefile`, and compile the project using a parallel `make` command. If the `make` command fails, a backup serial compilation script is run. # # To aid in the cross-platform-compatible (with Windows, MacOS, & Linux) compilation and execution, we make use of `cmdline_helper` [(**Tutorial**)](Tutorial-cmdline_helper.ipynb). # In[14]: import cmdline_helper as cmd cmd.new_C_compile(Ccodesrootdir, "Hydro_without_Hydro_Playground", uses_free_parameters_h=True, compiler_opt_option="fast") # fastdebug or debug also supported # Change directory to Ccodesrootdir os.chdir(Ccodesrootdir) # Clean up existing output files cmd.delete_existing_files("out*.txt") cmd.delete_existing_files("out*.png") # Run executable with CFL_FACTOR = 1.0, which is allowed since # simulation is axisymmetric and all phi derivs are set to zero. CFL_FACTOR=0.5 cmd.Execute("Hydro_without_Hydro_Playground", "72 12 2 "+str(CFL_FACTOR)) cmd.Execute("Hydro_without_Hydro_Playground", "96 16 2 "+str(CFL_FACTOR)) os.chdir(os.path.join("..")) # # # # Step 8: Visualize the output! \[Back to [top](#toc)\] # $$\label{visualize}$$ # # In this section we will generate visualizations of various quantities in our neutron star simulation. As usual, our formulation of Einstein's equations adopt $G=c=1$ [geometrized units](https://en.wikipedia.org/w/index.php?title=Geometrized_unit_system&oldid=861682626). # # # ## Step 8.a: Install `scipy` and download `ffmpeg` if they are not yet installed/downloaded \[Back to [top](#toc)\] # $$\label{installdownload}$$ # # Note that if you are not running this within `mybinder`, but on a Windows system, `ffmpeg` must be installed using a separate package (on [this site](http://ffmpeg.org/)), or if running Jupyter within Anaconda, use the command: `conda install -c conda-forge ffmpeg`. # In[15]: get_ipython().system('pip install scipy > /dev/null') check_for_ffmpeg = get_ipython().getoutput('which ffmpeg >/dev/null && echo $?') if check_for_ffmpeg != ['0']: print("Couldn't find ffmpeg, so I'll download it.") # Courtesy https://johnvansickle.com/ffmpeg/ get_ipython().system('wget https://etienneresearch.com/ffmpeg-static-amd64-johnvansickle.tar.xz') get_ipython().system('tar Jxf ffmpeg-static-amd64-johnvansickle.tar.xz') print("Copying ffmpeg to ~/.local/bin/. Assumes ~/.local/bin is in the PATH.") get_ipython().system('mkdir ~/.local/bin/') get_ipython().system('cp ffmpeg-static-amd64-johnvansickle/ffmpeg ~/.local/bin/') print("If this doesn't work, then install ffmpeg yourself. It should work fine on mybinder.") # # # ## Step 8.b: Generate images for visualization animation \[Back to [top](#toc)\] # $$\label{genimages}$$ # # Here we loop through the data files output by the executable compiled and run in [the previous step](#mainc), generating a [png](https://en.wikipedia.org/wiki/Portable_Network_Graphics) image for each data file. # # **Special thanks to Terrence Pierre Jacques. 
# His work with the first versions of these scripts greatly contributed to the scripts as they exist below.**

# In[16]:

## VISUALIZATION ANIMATION, PART 1: Generate PNGs, one per frame of movie ##

import diagnostics_generic.process_2D_data as plot2D

import matplotlib.pyplot as plt
import glob

globby = glob.glob(os.path.join(Ccodesrootdir,'out96-00*.txt'))
file_list = []
for x in sorted(globby):
    file_list.append(x)

print(TOV_Mass)

xy_extent=domain_size/TOV_Mass

for filename in file_list:
    output_grid_x, output_grid_y, output_grid_data = \
        plot2D.generate_uniform_2D_grid(filename, 0,1,3, [-xy_extent,xy_extent], [-xy_extent,xy_extent])

    fig = plt.figure()
    colorbar_description = r"$\log_{10}$ |Hamiltonian constraint|"
    plt.title(r"Neutron star (hydro-without-hydro), $\log_{10} \mathcal{H}$")
    plt.xlabel(r"$y/M$")
    plt.ylabel(r"$z/M$")

    im = plt.imshow(output_grid_data, extent=(-xy_extent,xy_extent, -xy_extent,xy_extent))
    ax = plt.colorbar()
    ax.set_label(colorbar_description)
    plt.savefig(os.path.join(filename+".png"),dpi=150)
    plt.close(fig)
    sys.stdout.write("%c[2K" % 27)
    sys.stdout.write("Processing file "+filename+"\r")
    sys.stdout.flush()


# ## Step 8.c: Generate visualization animation \[Back to [top](#toc)\]
# $$\label{genvideo}$$
#
# In the following step, [ffmpeg](http://ffmpeg.org) is used to generate an [mp4](https://en.wikipedia.org/wiki/MPEG-4) video file, which can be played directly from this Jupyter notebook.

# In[17]:

## VISUALIZATION ANIMATION, PART 2: Combine PNGs to generate movie ##

from matplotlib import animation
from IPython.display import HTML
import matplotlib.image as mgimg

# https://stackoverflow.com/questions/14908576/how-to-remove-frame-from-matplotlib-pyplot-figure-vs-matplotlib-figure-frame
# https://stackoverflow.com/questions/23176161/animating-pngs-in-matplotlib-using-artistanimation

fig = plt.figure(frameon=False)
ax  = fig.add_axes([0, 0, 1, 1])
ax.axis('off')

myimages = []

for i in range(len(file_list)):
    img = mgimg.imread(file_list[i]+".png")
    imgplot = plt.imshow(img)
    myimages.append([imgplot])

ani = animation.ArtistAnimation(fig, myimages, interval=100, repeat_delay=1000)
plt.close()
ani.save(os.path.join(Ccodesrootdir,'hydro_without_hydro.mp4'), fps=5, dpi=150)

## VISUALIZATION ANIMATION, PART 3: Display movie as embedded HTML5 (see next cell) ##

# https://stackoverflow.com/questions/18019477/how-can-i-play-a-local-video-in-my-ipython-notebook
# Embed video based on suggestion:
#  https://stackoverflow.com/questions/39900173/jupyter-notebook-html-cell-magic-with-python-variable
HTML(""" """)


# # Step 9: Plot the numerical error at the end of the simulation, and confirm that it converges to zero with increasing numerical resolution (sampling) \[Back to [top](#toc)\]
# $$\label{convergence}$$
#
# First we plot $\log_{10}$ of the absolute value of the Hamiltonian constraint violation on the $y$-$z$ plane, near the neutron star. Notice the constraint violation is largest near the surface of the neutron star, where the fields are sharpest.
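# As a quick quantitative companion to the plots below, here is a minimal sketch (an optional
# helper, not part of the generated C-code pipeline) that estimates the observed convergence
# order pointwise from the two final-time data files, reusing the `plot2D` helper imported
# above; at fourth order the median estimate should land near 4, away from the stellar surface.

def estimate_observed_convergence_order(file_lo, file_hi, x_extent, N_lo=72.0, N_hi=96.0, numpts=100):
    """Median of (log10|H_lo| - log10|H_hi|) / log10(N_hi/N_lo) along a 1D slice through the star."""
    import numpy as np
    _, logH_lo = plot2D.extract_1D_slice_from_2D_data(file_lo, 0.0, 0,1,3, [-x_extent, x_extent],
                                                      sample_numpts_x=numpts, interp_method="linear")
    _, logH_hi = plot2D.extract_1D_slice_from_2D_data(file_hi, 0.0, 0,1,3, [-x_extent, x_extent],
                                                      sample_numpts_x=numpts, interp_method="linear")
    return np.median((np.array(logH_lo) - np.array(logH_hi)) / np.log10(N_hi/N_lo))

# Example usage, with the out72.txt/out96.txt files produced by the runs above:
# print(estimate_observed_convergence_order(os.path.join(Ccodesrootdir,'out72.txt'),
#                                           os.path.join(Ccodesrootdir,'out96.txt'),
#                                           x_extent=domain_size/TOV_Mass))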
# In[18]:

xy_extent=domain_size/TOV_Mass
output_grid_x, output_grid_y, output_grid_H_data = \
    plot2D.generate_uniform_2D_grid(os.path.join(Ccodesrootdir,'out96.txt'),
                                    0,1,3, [-xy_extent,xy_extent], [-xy_extent,xy_extent])

plt.clf()
plt.title(r"$96\times 16$ Numerical Error: $\log_{10}$|Ham|")
plt.xlabel(r"$y/M$")
plt.ylabel(r"$z/M$")

fig96 = plt.imshow(output_grid_H_data, extent=(-xy_extent,xy_extent, -xy_extent,xy_extent))
cb = plt.colorbar(fig96)


# Next, we confirm that the numerical error converges to zero at the expected (fourth) order. Since the dominant truncation error of our FD_order=4 scheme scales as $(\Delta x)^4 \propto N_r^{-4}$, the $\log_{10}|\mathcal{H}|$ data from the $N_r=72$ run, shifted by $4\log_{10}(72/96)\approx -0.50$, should lie on top of the $N_r=96$ data away from the stellar surface (where the convergence order drops). See [the Scalar Wave Curvilinear tutorial notebook](Tutorial-Start_to_Finish-ScalarWaveCurvilinear.ipynb) for more documentation on measuring numerical convergence.

# In[19]:

# Plot settings
x_extent=domain_size/TOV_Mass      # plot from -x_extent to +x_extent
sample_numpts_x = 100              # number of points to plot
interp_method = "linear"           # Could be linear (recommended), nearest (don't use; gridpoints are off-axis), or cubic

output_grid_x, output_grid_data72 = \
    plot2D.extract_1D_slice_from_2D_data(os.path.join(Ccodesrootdir,'out72.txt'), 0.0,
                                         0,1,3, [-x_extent, x_extent],
                                         sample_numpts_x=sample_numpts_x, interp_method=interp_method)
output_grid_x, output_grid_data96 = \
    plot2D.extract_1D_slice_from_2D_data(os.path.join(Ccodesrootdir,'out96.txt'), 0.0,
                                         0,1,3, [-x_extent, x_extent],
                                         sample_numpts_x=sample_numpts_x, interp_method=interp_method)

plt.clf()
fig, ax = plt.subplots()
plt.title(r"$4^{\rm th}$-order Convergence, at $t/M$=1.8")
plt.xlabel(r"$y/M$")
plt.ylabel(r"$\log_{10}$(Ham. Constraint Violation)")

import numpy as np
ax.plot(output_grid_x, output_grid_data96, 'k-', label='Nr=96')
ax.plot(output_grid_x, output_grid_data72 + 4*np.log10(72.0/96.0), 'k--', label='Nr=72, mult by (72/96)^4')
ax.set_ylim([-10.5,-2.5])

legend = ax.legend(loc='lower right', shadow=True, fontsize='x-large')
legend.get_frame().set_facecolor('C1')
plt.show()


# # Step 10: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](#toc)\]
# $$\label{latex_pdf_output}$$
#
# The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename
# [Tutorial-Start_to_Finish-BSSNCurvilinear-Neutron_Star-Hydro_without_Hydro.pdf](Tutorial-Start_to_Finish-BSSNCurvilinear-Neutron_Star-Hydro_without_Hydro.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)

# In[20]:

import cmdline_helper as cmd    # NRPy+: Multi-platform Python command-line interface
cmd.output_Jupyter_notebook_to_LaTeXed_PDF("Tutorial-Start_to_Finish-BSSNCurvilinear-Neutron_Star-Hydro_without_Hydro")