3. Genus two with real coefficients#

For a curve of genus \(g=2\), the defining polynomial takes the form

(3.1)#\[ f(x,y) = -y^2 + x^5 + \lambda_2 x^4 + \lambda_4 x^3+ \lambda_6 x^2+ \lambda_8 x+ \lambda_{10}.\]

Let’s load the appropriate SageMath library and define the curve \(\mathscr{C}\).

from sage.schemes.riemann_surfaces.riemann_surface import RiemannSurface
# Defines the lambda coefficients
lambda2 = 12.0
lambda4 = 2.0
lambda6  = 0.9
lambda8  = 21.0
lambda10  = 10.0

Important

The coefficients above must be real floating-point numbers.

Since the current Sage library works reliably only over the field of rational numbers, we approximate all coefficients of the curve by rationals.

# Rational approximation
l2 = lambda2.nearby_rational(max_error=1e-10)
l4 = lambda4.nearby_rational(max_error=1e-10)
l6 = lambda6.nearby_rational(max_error=1e-10)
l8 = lambda8.nearby_rational(max_error=1e-10)
l10 = lambda10.nearby_rational(max_error=1e-10)
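nearby_rational is Sage-specific, but the same idea is available in plain Python through the standard library. A minimal sketch (the helper to_rational is hypothetical, not part of the notebook):

```python
from fractions import Fraction

# Rational approximation of a floating-point coefficient, mirroring
# Sage's nearby_rational by bounding the denominator instead of the error
def to_rational(x, max_den=10**10):
    return Fraction(x).limit_denominator(max_den)

l6_approx = to_rational(0.9)
print(l6_approx)  # -> 9/10
assert abs(float(l6_approx) - 0.9) < 1e-10
```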

Next, we need to define the variables \(x\) and \(y\) in the ring of polynomials over the rational numbers.

R.<x, y> = PolynomialRing(QQ, 2)

Note

The order of variables can be important depending on how you write your polynomial. Make sure you use the same order in your polynomial expression.

# Defining the polynomial f
f = -y^2 + x^5 + l2*x^4 + l4*x^3 + l6*x^2 + l8*x + l10

3.1. Riemann surface#

Definition 3.1

A Riemann surface \(X\) is a connected two-dimensional topological manifold with a complex-analytic structure on it.

The SageMath RiemannSurface library provides a function to generate the appropriate Riemann surface, on which we will continue our work.

S = RiemannSurface(f, prec=100)

3.2. Branch points#

We define a function to compute and display all branch points \(e_i\) for the given coefficients \(\lambda_{2i}\). The finite branch points are the values of \(x\) at which \(y = 0\) on the curve \(f(x, y) = 0\). Specifically, writing the curve as:

(3.2)#\[ y^2 = (x - e_1)(x - e_2)(x - e_3)(x - e_4)(x - e_5).\]

Here, the branch points \(e_i\) are the roots of the quintic on the right-hand side; together with the point at infinity, they describe the structure of the Riemann surface associated with \(f(x, y) = 0\).

def find_branch_points(ll2, ll4, ll6, ll8, ll10):
    # We define a ring of polynomials over the field of complex numbers
    CC_poly.<x> = PolynomialRing(CC)
    
    # We create a polynomial
    pol = x^5 + ll2*x^4 + ll4*x^3 + ll6*x^2 + ll8*x + ll10
    
    # We find roots
    roots = pol.roots(multiplicities=False)
    
    # We add a point at infinity if the degree of the polynomial is odd
    if pol.degree() % 2 == 1:
        roots.append(infinity)
    
    return roots

Note

The multiplicities=False parameter in the roots() method in Sage has the following meaning:

  • When multiplicities=False: The method returns only the roots of the polynomial, without information about their multiplicities. The result is a list of distinct root values.

  • When multiplicities=True (default): The method returns pairs (root, multiplicity) for each root. The result is a list of tuples, where each tuple contains the root and its multiplicity.

# Example of use:

branch_points = find_branch_points(l2, l4, l6, l8, l10)
print("Branch points:", *branch_points, sep='\n')
Branch points:
-11.8251161409669
-1.05264146874943
-0.512317960119317
0.695037784917798 - 1.04164546422316*I
0.695037784917798 + 1.04164546422316*I
+Infinity
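As a cross-check outside Sage, the finite values above can be confirmed to be roots of the quintic by direct evaluation. A stdlib-only sketch using Horner's scheme:

```python
# Coefficients of x^5 + 12 x^4 + 2 x^3 + 0.9 x^2 + 21 x + 10, highest degree first
coeffs = [1, 12, 2, 0.9, 21, 10]

def horner(cs, x):
    # Evaluate the polynomial at x by Horner's scheme
    acc = 0
    for c in cs:
        acc = acc * x + c
    return acc

# The finite branch points printed above
finite_roots = [
    -11.8251161409669,
    -1.05264146874943,
    -0.512317960119317,
    complex(0.695037784917798, -1.04164546422316),
    complex(0.695037784917798, 1.04164546422316),
]
for r in finite_roots:
    assert abs(horner(coeffs, r)) < 1e-6
print("all finite branch points verified")
```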

3.3. First and Second Kind Periods#

To construct periodic functions, we first define a canonical basis for the space of holomorphic differentials, \(\{du_i \mid i = 1, \ldots, g\}\), and the associated meromorphic differentials, \(\{dr_i \mid i = 1, \ldots, g\}\), on the Riemann surface as follows:

(3.3)#\[ du_{2i-1} := \frac{x^{g-i} dx}{\partial_y f(x)},\]
(3.4)#\[ dr_{2i-1} := \frac{\mathcal{R}_{2i-1}(x) dx}{\partial_y f(x)}, \]

where

(3.5)#\[ \mathcal{R}_{2i-1}(x)=\sum_{k=1}^{2i-1}k\lambda_{4i-2k-2}x^{g-i+k}.\]

For \(g = 2\), these expressions can be written in vector form as:

(3.6)#\[\begin{split} du= \begin{pmatrix} x\\ 1 \end{pmatrix} \frac{dx}{-2\sqrt{f(x)}},\end{split}\]
(3.7)#\[\begin{split} dr= \begin{pmatrix} \mathcal{R}_{1}(x)\\ \mathcal{R}_{3}(x) \end{pmatrix} \frac{dx}{-2\sqrt{f(x)}} = \begin{pmatrix} x^2\\ 3x^3 + 2\lambda_2 x^2 + \lambda_4 x \end{pmatrix} \frac{dx}{-2\sqrt{f(x)}},\end{split}\]
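Formula (3.5) can be cross-checked against the explicit genus-2 expressions in (3.7). A small plain-Python sketch, assuming the convention \(\lambda_0 = 1\) for the monic leading term (the helper R is hypothetical, for illustration only):

```python
def R(i, g=2):
    # Coefficient dictionary {power of x: coefficient string} for R_{2i-1},
    # following (3.5): sum over k = 1, ..., 2i-1 of k * lambda_{4i-2k-2} * x^{g-i+k}
    terms = {}
    for k in range(1, 2 * i):
        idx = 4 * i - 2 * k - 2          # subscript of lambda
        coef = "1" if idx == 0 else f"lam{idx}"   # lambda_0 = 1 (monic curve)
        terms[g - i + k] = f"{k}*{coef}"
    return terms

assert R(1) == {2: "1*1"}                            # R_1 = x^2
assert R(2) == {1: "1*lam4", 2: "2*lam2", 3: "3*1"}  # R_3 = 3x^3 + 2*lam2*x^2 + lam4*x
print(R(1), R(2))
```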

The holomorphic basis defined above can be compared to the output of the cohomology_basis() function. In SageMath, S.cohomology_basis() returns a list of holomorphic differentials, represented by polynomials \(g(x,y)\) corresponding to the differentials:

(3.8)#\[ \omega = g(x,y)\frac{dx}{\partial f / \partial y}\]

where \(f(x, y) = 0\) defines the curve.

S.cohomology_basis()
[1, x]

Note

The order of elements clearly differs from the convention adopted in [1]. In this notebook, we adopt Julia Bernatska’s convention, as it provides a consistent framework for our computations.

3.3.1. First Kind Periods#

First we define our holomorphic basis.

# holomorphic differentials basis
holbais = [x, x^0]

To compute the period matrices of the first kind, we evaluate the following integrals along the canonical homology cycles \(\{\mathfrak{a}_i, \mathfrak{b}_i\}_{i=1}^g\):

(3.9)#\[ \omega = (\omega_{ij})= \left( \int_{\mathfrak{a}_j}du_i \right), \quad \omega' = (\omega'_{ij})= \left( \int_{\mathfrak{b}_j}du_i \right). \]

Here, \(\omega\) and \(\omega'\) are the period matrices corresponding to the \(\mathfrak{a}\) and \(\mathfrak{b}\) cycles, respectively.

The SageMath function matrix_of_integral_values(differentials, integration_method='heuristic') can be used to compute the path integrals of the given differentials along the homology basis. The result is a matrix, where each row corresponds to a differential.

If the Riemann surface is given by the equation \(f(x,y)=0\), the differentials are encoded by:

(3.10)#\[ g(x,y)\frac{dx}{(df/dy)}.\]

Input:

  • differentials – a list of polynomials.

  • integration_method – (default: ‘heuristic’). String specifying the integration method to use. The options are ‘heuristic’ and ‘rigorous’.

Output:
A matrix, one row per differential, containing the values of the path integrals along the homology basis of the Riemann surface.

MofInt1=S.matrix_of_integral_values(holbais)
# Let's display the matrix in a shortened form so that it will be easy to see its structure
print(MofInt1.n(digits=5))
[     0.13760 - 0.66033*I  -4.9304e-32 - 0.91722*I  -3.9443e-31 + 0.40344*I       1.1746 + 0.25689*I]
[    -0.62448 + 0.28220*I -5.5467e-32 + 0.038862*I  -4.9304e-32 - 0.52554*I      0.14607 + 0.24334*I]

The structure of the returned matrix

(3.11)#\[\begin{split} MofInt1 = \begin{pmatrix} \omega_{1,1} & \omega_{1,2} & \omega'_{1,1} & \omega'_{1,2} \\ \omega_{3,1} & \omega_{3,2} & \omega'_{3,1} & \omega'_{3,2} \end{pmatrix} = \begin{array}{|c|c|c|c|c|} \hline & \mathfrak{a}_1 & \mathfrak{a}_2 & \mathfrak{b}_1 & \mathfrak{b}_2 \\ \hline du_1 & \omega_{1,1} = \int_{\mathfrak{a}_1} du_1 & \omega_{1,2} = \int_{\mathfrak{a}_2} du_1 & \omega'_{1,1} = \int_{\mathfrak{b}_1} du_1 & \omega'_{1,2} = \int_{\mathfrak{b}_2} du_1 \\ \hline du_3 & \omega_{3,1} = \int_{\mathfrak{a}_1} du_3 & \omega_{3,2} = \int_{\mathfrak{a}_2} du_3 & \omega'_{3,1} = \int_{\mathfrak{b}_1} du_3 & \omega'_{3,2} = \int_{\mathfrak{b}_2} du_3 \\ \hline \end{array}\end{split}\]

We can compare this with the result of the built-in Sage function period_matrix(), which, for the adopted notational convention, returns a period matrix of the form

(3.12)#\[\begin{split} pM= \begin{pmatrix} \omega_{3,1} & \omega_{3,2} & \omega'_{3,1} & \omega'_{3,2} \\ \omega_{1,1} & \omega_{1,2} & \omega'_{1,1} & \omega'_{1,2} \end{pmatrix} \end{split}\]
pM=S.period_matrix()
print(pM.n(digits=5))
[   -0.62448 + 0.28220*I 4.3141e-32 + 0.038862*I -4.9304e-32 - 0.52554*I     0.14607 + 0.24334*I]
[    0.13760 - 0.66033*I  2.2187e-31 - 0.91722*I -5.9165e-31 + 0.40344*I      1.1746 + 0.25689*I]

Before going any further, let’s define a function that displays matrices in rounded form so that we can compare them more easily.

def format_complex(z, digits=5, threshold=1e-10):
    real = float(z.real())
    imag = float(z.imag())
    
    # We round very small values to zero
    if abs(real) < threshold:
        real = 0
        if abs(imag) < threshold:
            return "0"
        # We format the result  
        return f"{imag:.{digits}f}*I"
    
    if abs(imag) < threshold:
        # We format the result
        return f"{real:.{digits}f}"

    sign = "+" if imag > 0 else "-"
    return f"{real:.{digits}f} {sign} {abs(imag):.{digits}f}*I"


def ApproxM(matrix, digits=5, threshold=1e-10):
    rows, cols = matrix.nrows(), matrix.ncols()
    
    for i in range(rows):
        formatted_row = [format_complex(matrix[i,j], digits, threshold) for j in range(cols)]
        print("\t".join(formatted_row))
ApproxM(pM)
-0.62448 + 0.28220*I	0.03886*I	-0.52554*I	0.14607 + 0.24334*I
0.13760 - 0.66033*I	-0.91722*I	0.40344*I	1.17459 + 0.25689*I
ApproxM(MofInt1)
0.13760 - 0.66033*I	-0.91722*I	0.40344*I	1.17459 + 0.25689*I
-0.62448 + 0.28220*I	0.03886*I	-0.52554*I	0.14607 + 0.24334*I

In what follows we use the function matrix_of_integral_values() instead of period_matrix(), because it also allows us to calculate the period matrices of the second kind.

# Extract the omega-periods (first two columns)
omega = MofInt1[:, 0:2]

# Extract the omega'-periods (last two columns)
omegaP = MofInt1[:, 2:4]

ApproxM(omega)
print()
ApproxM(omegaP)
0.13760 - 0.66033*I	-0.91722*I
-0.62448 + 0.28220*I	0.03886*I

0.40344*I	1.17459 + 0.25689*I
-0.52554*I	0.14607 + 0.24334*I

Now we calculate the matrix

(3.13)#\[ \tau=\omega^{-1}\omega'\]

which belongs to the Siegel upper half-space. Hence it should satisfy two conditions:

  • Symmetry:

(3.14)#\[ \tau^T = \tau\]
  • Positive definiteness of the imaginary part

(3.15)#\[ \mathrm{Im}(\tau)>0\]
tau= omega.inverse() * omegaP

# Displaying the result
print(tau.n(digits=5))
[-0.28894 + 0.70313*I -0.12636 - 0.46286*I]
[-0.12636 - 0.46286*I  -0.25854 + 1.6328*I]

Here we can also compare this with the results of the built-in Sage function: riemann_matrix():

S.riemann_matrix().n(digits=5)
[-0.28894 + 0.70313*I -0.12636 - 0.46286*I]
[-0.12636 - 0.46286*I  -0.25854 + 1.6328*I]
# Test of the symmetry
print((tau - tau.transpose()).n(digits=5))
[0.00000 0.00000]
[0.00000 0.00000]
# Test of positivity
# Calculating the complex part of the tau matrix
tauImag = tau.apply_map(lambda x: x.imag())

# Calculate the eigenvalues
eigenvalues = tauImag.eigenvalues()

# Checking if all eigenvalues are positive
all_positive = all(e > 0 for e in eigenvalues)

# Displaying the result
eigenvalues, all_positive
([1.8239307353738361091933820951, 0.51198440541283078719843599741], True)
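The positivity test can be reproduced outside Sage: for a symmetric \(2\times 2\) matrix the eigenvalues have a closed form. A stdlib sketch using the printed entries of \(\mathrm{Im}(\tau)\) from the output above:

```python
import math

# Imaginary part of tau, transcribed from the printed values above
a, b, d = 0.70313, -0.46286, 1.6328

# Eigenvalues of the symmetric matrix [[a, b], [b, d]] in closed form
mean = (a + d) / 2
disc = math.sqrt(((a - d) / 2) ** 2 + b ** 2)
eig = (mean + disc, mean - disc)
print(eig)
assert all(e > 0 for e in eig)  # Im(tau) is positive definite
```

The two values agree with the Sage eigenvalues above to the displayed precision.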

3.3.2. Second Kind Periods#

To calculate the second kind period matrices, we need to calculate the following integrals of the meromorphic differentials along the canonical homology cycles \(\{ \mathfrak{a}_i, \mathfrak{b}_i\}_{i=1}^g\)

(3.16)#\[ \eta = (\eta_{ij})= \left( \int_{\mathfrak{a}_j}dr_i \right), \quad \eta' = (\eta'_{ij})= \left( \int_{\mathfrak{b}_j}dr_i \right). \]
# meromorphic differentials basis
merbais=[x^2, 3*x^3 + 2*l2*x^2 + l4*x]
MofInt2=S.matrix_of_integral_values(merbais)
# Let's display the matrix in a shortened form so that it will be easy to see its structure
ApproxM(MofInt2)
0.15574 + 0.20627*I	0.08372*I	-0.32882*I	-7.07532 + 0.12255*I
4.74867 + 3.09497*I	-0.04832*I	-6.23827*I	-2.96268 + 3.14329*I
# Extract the eta-periods (first two columns)
eta = MofInt2[:, 0:2]

# Extract the eta'-periods (last two columns)
etaP = MofInt2[:, 2:4]

ApproxM(eta)
print()
ApproxM(etaP)
0.15574 + 0.20627*I	0.08372*I
4.74867 + 3.09497*I	-0.04832*I

-0.32882*I	-7.07532 + 0.12255*I
-6.23827*I	-2.96268 + 3.14329*I

We can compute \(\kappa\), given by

(3.17)#\[ \kappa=\eta\; \omega^{-1}\]
omega_inv=Matrix(omega).inverse()
kappa = eta*omega_inv
ApproxM(kappa)
-0.09763 - 0.01261*I	-0.14978 - 0.29754*I
-0.14978 - 0.29754*I	-4.77835 - 7.02263*I

3.4. Legendre relation#

Next, we perform another test. The unnormalised period matrices of the first kind, \(\omega\) and \(\omega'\), and the second kind, \(\eta\) and \(\eta'\), should satisfy the Legendre relation:

(3.18)#\[ \Omega^T J \Omega = 2\pi i J\]

where

(3.19)#\[\begin{split} \Omega= \begin{pmatrix} \omega & \omega'\\ \eta & \eta'\\ \end{pmatrix}, \quad J= \begin{pmatrix} 0 & -1_g\\ 1_g & 0\\ \end{pmatrix}\end{split}\]
# Omega matrix
Omega = block_matrix([
    [omega, omegaP],
    [eta, etaP]
])

# Blocks for the J matrix
zeroM = Matrix([[0.0, 0.0], [0.0, 0.0]])
mOneg = Matrix([[-1.0, 0.0], [0.0, -1.0]])
Oneg = Matrix([[1.0, 0.0], [0.0, 1.0]])

# J matrix
J = block_matrix([
    [zeroM, mOneg],
    [Oneg, zeroM]
])    


print("Omega Matrix:")
print(Omega.n(digits=5))
print()
print("J Matrix:")
print(J.n(digits=5))
Omega Matrix:
[     0.13760 - 0.66033*I  -4.9304e-32 - 0.91722*I| -3.9443e-31 + 0.40344*I       1.1746 + 0.25689*I]
[    -0.62448 + 0.28220*I -5.5467e-32 + 0.038862*I| -4.9304e-32 - 0.52554*I      0.14607 + 0.24334*I]
[-------------------------------------------------+-------------------------------------------------]
[     0.15574 + 0.20627*I  7.8886e-31 + 0.083724*I| -8.3816e-31 - 0.32882*I      -7.0753 + 0.12255*I]
[       4.7487 + 3.0950*I  1.1360e-28 - 0.048319*I|   2.2088e-29 - 6.2383*I       -2.9627 + 3.1433*I]

J Matrix:
[0.00000 0.00000|-1.0000 0.00000]
[0.00000 0.00000|0.00000 -1.0000]
[---------------+---------------]
[ 1.0000 0.00000|0.00000 0.00000]
[0.00000  1.0000|0.00000 0.00000]
import numpy as np

pi = np.pi
left=Omega.transpose()*J*Omega
right = 2*pi*I*J
result = left - right
ApproxM(result)
0	0	0	0
0	0	0	0
0	0	0	0
0	0	0	0
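As a quick sanity check on the symplectic form itself, \(J\) should satisfy \(J^T = -J\) and \(J^2 = -1\). A stdlib sketch with nested lists (not the Sage matrices above):

```python
g = 2
n = 2 * g

# Standard symplectic form: [[0, -I_g], [I_g, 0]]
J = [[0] * n for _ in range(n)]
for i in range(g):
    J[i][g + i] = -1
    J[g + i][i] = 1

def matmul(A, B):
    # Naive matrix product for small integer matrices
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

JT = [list(col) for col in zip(*J)]
assert all(JT[i][j] == -J[i][j] for i in range(n) for j in range(n))  # antisymmetric
J2 = matmul(J, J)
assert all(J2[i][j] == (-1 if i == j else 0) for i in range(n) for j in range(n))  # J^2 = -1
print("J is a symplectic form")
```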

3.5. Theta function#

Definition 3.2

A Riemann theta function \(\theta(\mathbf{v};\tau)\), defined in terms of normalized coordinates \(\mathbf{v}\) and the normalized period matrix \(\tau\), is canonically given by

(3.20)#\[ \theta(\mathbf{v};\tau) = \sum_{n\in\mathbb{Z}^g} e^{i\pi \mathbf{n}^T \tau \mathbf{n} + 2i \pi \mathbf{n}^T \mathbf{v}}.\]

A theta function with characteristic \([\varepsilon ]\) is defined by

(3.21)#\[\begin{split} \theta[\varepsilon](\mathbf{v};\tau) = \theta \begin{bmatrix} \varepsilon'_1 & \ldots & \varepsilon'_g\\ \varepsilon_1 & \ldots & \varepsilon_g\\ \end{bmatrix} (\mathbf{v};\tau)= \sum_{n\in\mathbb{Z}^g} e^{i\pi\{ (\mathbf{n}+ \frac{1}{2}\mathbf{\varepsilon}'^T ) \tau (\mathbf{n}+ \frac{1}{2}\mathbf{\varepsilon}') +2 (\mathbf{v}+\frac{1}{2}\mathbf{\varepsilon})^T (\mathbf{n}+\frac{1}{2}\mathbf{\varepsilon}') \} }.\end{split}\]

Important

In the literature, it is common to use a convention with slightly different characteristics:

(3.22)#\[ (\varepsilon_i, \varepsilon'_j) \to 2 (\varepsilon_i, \varepsilon'_j)\]

in consequence

(3.23)#\[\begin{split} \theta[\varepsilon](\mathbf{v};\tau) = \theta \begin{bmatrix} \varepsilon'_1 & \ldots & \varepsilon'_g\\ \varepsilon_1 & \ldots & \varepsilon_g\\ \end{bmatrix} (\mathbf{v};\tau)= \sum_{n\in\mathbb{Z}^g} e^{i\pi\{ (\mathbf{n}+ \mathbf{\varepsilon}'^T ) \tau (\mathbf{n}+ \mathbf{\varepsilon}') +2 (\mathbf{v}+\mathbf{\varepsilon})^T (\mathbf{n}+\mathbf{\varepsilon}') \} }.\end{split}\]
def Theta(v, ttau, NAcc):
    # NAcc controls the number of terms in the sum, i.e. the precision of the result.
    # Experimentally, a good approximation is obtained for NAcc>4, but this can be increased as needed.
    total_sum = 0
    
    # We iterate over two indices from -NAcc to NAcc (inclusive)
    for n1 in range(-NAcc, NAcc + 1):
        for n2 in range(-NAcc, NAcc + 1):
            # We create the vector n
            n = vector([n1, n2])
                    
            # The first term of the exponent
            term1 = I * pi * n * (ttau * n)
                    
            # The second term of the exponent
            term2 = 2 * I * pi * n * v
                    
            # We add the exp of these terms to the total
            total_sum += exp(term1 + term2)
    
    return total_sum
def ThetaCh(epsilon_m, v, ttau, NAcc):
    # NAcc controls the number of terms in the sum, i.e. the precision of the result.
    # Experimentally, a good approximation is obtained for NAcc>4, but this can be increased as needed.
    total_sum = 0
    # epsilon_m is the list [epsilon1, epsilon2], where epsilon1 and epsilon2 are vectors
    epsilon1 = epsilon_m[0]
    epsilon2 = epsilon_m[1]
    
    # We iterate over two indices from -NAcc to NAcc (inclusive)
    for n1 in range(-NAcc, NAcc + 1):
        for n2 in range(-NAcc, NAcc + 1):
            # We create the vector n
            n = vector([n1, n2])
                    
            # The first term of the exponent
            term1 = I * pi * (n + 1/2 * vector(epsilon1)) * (ttau * (n + 1/2 * vector(epsilon1)))
                    
            # The second term of the exponent
            term2 = 2 * I * pi * (n + 1/2 * vector(epsilon1)) * (v + 1/2 * vector(epsilon2))
                    
            # We add the exp of these terms to the total
            total_sum += exp(term1 + term2)
    
    return total_sum
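A quick cross-check of the lattice sum outside Sage: for \(\mathbf{v} = 0\) and \(\tau = i\,1_2\) the genus-2 theta series factorizes into the square of the one-dimensional sum \(\sum_n e^{-\pi n^2}\). A stdlib sketch of the analogous truncated sum:

```python
import cmath
import math

def theta2(v, tau, N=10):
    # Truncated genus-2 theta series: sum over n in Z^2 of exp(i pi n.tau.n + 2 i pi n.v)
    total = 0
    for n1 in range(-N, N + 1):
        for n2 in range(-N, N + 1):
            n = (n1, n2)
            ntn = sum(n[i] * tau[i][j] * n[j] for i in range(2) for j in range(2))
            nv = n[0] * v[0] + n[1] * v[1]
            total += cmath.exp(1j * math.pi * ntn + 2j * math.pi * nv)
    return total

tau_iI = [[1j, 0], [0, 1j]]
val = theta2((0, 0), tau_iI)
one_dim = sum(math.exp(-math.pi * n * n) for n in range(-10, 11))
assert abs(val - one_dim ** 2) < 1e-12   # factorization at tau = i*Identity
assert abs(val.real - 1.18034) < 1e-4    # theta3(e^{-pi})^2
print(val.real)
```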

Note

In Wolfram Mathematica, a corresponding function exists under the name SiegelTheta[\(\nu\), \(\Omega\), \(s\)], where the characteristic is \(\nu=\{\nu_1,\nu_2\}\). The relations between our variables and those in Mathematica are as follows

  • \(\Omega=\tau\)

  • \(s=v\)

  • \(\nu_1 = \frac{1}{2} epsilon1\)

  • \(\nu_2 = \frac{1}{2} epsilon2\)

To test this, you can check the following code in Mathematica

Listing 3.1 Mathematica Code#
            SiegelTheta[{{2, 4}, {4, 2}}, IdentityMatrix[2] I, {1, 2}] // N

and compare it with the one below in SageMath

Listing 3.2 SageMath Code#
    # Defining sample data
    epsilon_m = [vector([4, 8]), vector([8, 4])]
    v = vector([1, 2])
    tau = Matrix([[I, 0], [0, I]])
    NAcc = 5
    # Function call
    result = ThetaCh(epsilon_m, v, tau, NAcc)
    print(result)

Additionally, one can define the \(\theta\) function in Mathematica in a similar way with the following code:

Listing 3.3 Mathematica Code#
    ThetaCh[\[Epsilon]m_, v_, \[Tau]_, NAcc_] :=
    Sum[
     Exp[
      I Pi ({Subscript[n, 1], Subscript[n, 2]} + 1/2 \[Epsilon]m[[1]]) . (\[Tau] . ({Subscript[n, 1], Subscript[n, 2]} + 1/2 \[Epsilon]m[[1]])) + 2 I Pi ({Subscript[n, 1], Subscript[n, 2]} + 1/2 \[Epsilon]m[[1]]) . (v + 1/2 \[Epsilon]m[[2]])
      ],
     {Subscript[n, 2], -NAcc, NAcc}, {Subscript[n, 1], -NAcc, NAcc}
    ]

And check the result of the above test with:

Listing 3.4 Mathematica Code#
    ThetaCh[2 {{2, 4}, {4, 2}}, {1, 2}, IdentityMatrix[2] I, 5] // N

3.5.1. Tests#

Test of the formula ([1], p.4, eq. 7)

(3.24)#\[ \theta[\varepsilon](\mathbf{v};\tau) = e^{i\pi\left(\frac{1}{2} \mathbf{\varepsilon}'^T\right) \tau \left(\frac{1}{2} \mathbf{\varepsilon}'\right) + 2i\pi \left( \mathbf{v} + \frac{1}{2} \mathbf{\varepsilon}\right)^T \left(\frac{1}{2} \mathbf{\varepsilon}'\right)} \theta\left(\mathbf{v} + \frac{1}{2}\mathbf{\varepsilon} + \tau\left(\frac{1}{2}\mathbf{\varepsilon}'\right);\tau \right). \]

where a characteristic is a \(2\times g\) matrix \([\varepsilon] =(\mathbf{\varepsilon}',\mathbf{\varepsilon})^T \).

# Define the genus
g = 2  # For genus 2

Acc=20

# Define the period matrix tau
### tau = Matrix(CC, [[I, 0.5], [0.5, I]])
tau = omega.inverse() * omegaP

# Define the vector v
v = vector(CC, [0.1, 0.2])

# Define the characteristic
eps_prime = [1, 0]
eps = [0, 1]
N = 2  # Level of the characteristic
char = Matrix([eps_prime,eps])

# Compute theta with characteristic at v
theta_char = ThetaCh(char, v, tau, Acc)

# Compute half of the characteristic vectors
eps_vec = vector(CC, eps)
eps_prime_vec = vector(CC, eps_prime)
eps_half = 0.5 * eps_vec
eps_prime_half = 0.5 * eps_prime_vec

# Compute the shifted vector v_shifted
v_shifted = v + eps_half + tau * eps_prime_half

# Compute the terms for the exponential factor
term1 = (eps_prime_half) * tau * (eps_prime_half)
term2 = (v + eps_half) * (eps_prime_half)

# Compute the exponential factor
exp_factor = exp(I * pi * term1 + 2 * I * pi * term2)

# Compute theta at the shifted point without characteristic
theta_standard = Theta(v_shifted, tau, Acc)


# Compute the RHS of the relation
RHS = exp_factor * theta_standard

# Compute the difference
difference = theta_char - RHS

# Print the results
print("Theta with characteristic:\n", theta_char)
print()
print("Exponential factor:\n", exp_factor)
print()
print("Theta at shifted point:\n", theta_standard)
print()
print("RHS:\n", RHS)
print()
print("Difference:\n", difference)

# Check if the difference is within an acceptable tolerance
tolerance = 1e-12  # Adjust based on the precision
if abs(difference) < tolerance:
    print("The relation is verified within the given tolerance.")
else:
    print("The relation is not satisfied within the given tolerance.")
Theta with characteristic:
 1.0600858517074845 - 0.2501023731212625*I

Exponential factor:
 0.5734712442681018 + 0.050148801540618036*I

Theta at shifted point:
 1.7966652347912195 - 0.5932345951363659*I

RHS:
 1.0600858517074845 - 0.2501023731212626*I

Difference:
 1.1102230246251565e-16*I
The relation is verified within the given tolerance.

3.6. \(\sigma\)-Functions#

Definition 3.3

The sigma function (Kleinian sigma) is a modular-invariant entire function on \(\mathrm{Jac}(\mathscr{C})\). It is defined by a relation with the theta function:

(3.25)#\[ \sigma(\mathbf{u}) = C \tilde{\sigma}(\mathbf{u})\]

where

(3.26)#\[ \tilde{\sigma}(\mathbf{u})= e^{-\frac{1}{2}\mathbf{u}^T \kappa \mathbf{u}} \theta[ K ] (\omega^{-1} \mathbf{u}, \tau),\]
(3.27)#\[ C= \sqrt{\frac{\pi^g}{\det{\omega}}} \left( \prod_{1\leq i<j \leq 2g+1} (e_i - e_j) \right)^{-1/4},\]

and \([K]\) denotes the characteristic of the vector of Riemann constants. (The expression for \(C\) comes from [5], p. 9, Eq. II.41.)

# We define variables
var('U1 U3')

# We define the accuracy of theta function
Acc=20


# sigma
def Tsigma(U1, U3):
    e = exp(-(1/2)*(vector([U1, U3])*kappa*vector([U1, U3])))
    theta = ThetaCh(KCh, omega_inv * vector([U1, U3]), tau, Acc)
    return e*theta

# C constant
det_omega = omega.determinant()
g = 2
#Branch points
BP = find_branch_points(l2, l4, l6, l8, l10)
# Calculating the product of branch point differences
prod = 1
for i in range(len(BP)):
    for j in range(i+1, len(BP)):
        if BP[i] != infinity and BP[j] != infinity:
            prod *= (BP[i] - BP[j])
C = sqrt(pi**g / det_omega) * prod**(-1/4)


def sigma(U1, U3):
    return C*Tsigma(U1, U3)

Let’s define the derivatives of the sigma function, which will be useful in the following sections.

# sigma1
def Tsigma1(u1_val, u3_val):
    U1, U3 = var('U1 U3')
    sigma_expr = Tsigma(U1, U3)
    sigma1 =diff (sigma_expr, U1)
    return sigma1.subs({U1: u1_val, U3: u3_val}).n()

# sigma3
def Tsigma3(u1_val, u3_val):
    U1, U3 = var('U1 U3')
    sigma_expr = Tsigma(U1, U3)
    sigma3 =diff (sigma_expr, U3)
    return sigma3.subs({U1: u1_val, U3: u3_val}).n()

3.6.1. \(\sigma\)-divisors#

Definition 3.4

\(\sigma\)-divisors (or special divisors) are points \(\mathbf{u}\) for which

(3.28)#\[ \sigma(\mathbf{u})=0\]

3.7. Characteristics of branch points and Bolza formula#

Definition 3.5

A characteristic is a \(2\times g\) matrix \([\varepsilon] = (\mathbf{\varepsilon}', \mathbf{\varepsilon})^T\) with real values within the interval \([0,2)\). Every point \(\mathbf{u}\) in the fundamental domain of \(\mathrm{Jac}(\mathscr{C})\) can be represented by its characteristic \([\varepsilon]\):

(3.29)#\[ \mathbf{u}=\mathbf{u}[\varepsilon] = \frac{1}{2} \omega \mathbf{\varepsilon} + \frac{1}{2}\omega' \mathbf{\varepsilon}'.\]

In the hyperelliptic case, the Abel images of branch points, and of any combination of branch points, are described by characteristics with entries 0 or 1, which are called half-integer characteristics.

The half-integer characteristics are odd whenever

(3.30)#\[ \varepsilon^T \varepsilon'=1 \pmod{2},\]

and even when

(3.31)#\[ \varepsilon^T \varepsilon'= 0\pmod{2}.\]
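For genus 2 there are \(2^{2g} = 16\) half-integer characteristics in total, of which \(2^{g-1}(2^g - 1) = 6\) are odd and \(2^{g-1}(2^g + 1) = 10\) are even. A quick stdlib enumeration of the standard parity \(\varepsilon^T \varepsilon' \bmod 2\) (odd when the product is 1):

```python
from itertools import product

g = 2
odd = even = 0
for eps in product((0, 1), repeat=g):
    for epsp in product((0, 1), repeat=g):
        # Parity of the half-integer characteristic [eps', eps]
        if sum(e * ep for e, ep in zip(eps, epsp)) % 2 == 1:
            odd += 1
        else:
            even += 1
assert (odd, even) == (6, 10)
print(odd, even)
```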

For hyperelliptic curves of genus 2, the characteristics are widely known in the literature [3], [6]. To write them down, we can use the code proposed by Julia Bernatska [2].

# We define the genus variable
genus = S.genus  


eChars = [[[0 for k in range(genus)], [1 for k in range(genus)]], 
          [[0 for k in range(genus)] for i in range(2)]]

# Analogue of the Do loop in Julia Bernatska's Mathematica notebook
for l in range(genus):
    # note that the indexing is shifted, because Python counts from 0
    eChars.insert(0, 
        [[(eChars[0][0][k] + kronecker_delta(k+1, genus - l) + 
           kronecker_delta(k+1, genus - l + 1)) % 2 for k in range(genus)],
         [(eChars[0][1][k] + 0) % 2 for k in range(genus)]]
    )

    eChars.insert(0,
        [[(eChars[0][0][k] + 0) % 2 for k in range(genus)],
         [(eChars[0][1][k] + kronecker_delta(k+1, genus - l)) % 2 for k in range(genus)]]
    )

# We display matrices
seen_matrices = []
for i in range(len(eChars)):
    current_matrix = matrix(eChars[i])
    if current_matrix not in seen_matrices:
        seen_matrices.append(current_matrix)
        print(current_matrix)
        print()
[1 0]
[0 0]

[1 0]
[1 0]

[0 1]
[1 0]

[0 1]
[1 1]

[0 0]
[1 1]

[0 0]
[0 0]

There are two odd characteristics among them, and their sum is also odd. The corresponding vector, the vector of Riemann constants, is denoted by \(\mathbf{K}\), and

(3.32)#\[ [\mathbf{K}] = \sum_{\text{all odd}\; [\varepsilon_i]} [\varepsilon_i]\]
# We sum the eChars elements with indices 2*i +1 (because python counts from 0) and take Mod 2
KCh = sum(matrix(eChars[2 * i+1]) for i in range(genus)) % 2

print(KCh)
[1 1]
[0 1]
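The generating loop above uses nothing essentially Sage-specific, so it can be ported to plain Python (replacing kronecker_delta with a small helper) and checked against the printed output:

```python
genus = 2
delta = lambda a, b: 1 if a == b else 0   # stand-in for kronecker_delta

# Same construction as above, with plain nested lists
eChars = [[[0] * genus, [1] * genus], [[0] * genus, [0] * genus]]
for l in range(genus):
    eChars.insert(0, [
        [(eChars[0][0][k] + delta(k + 1, genus - l) + delta(k + 1, genus - l + 1)) % 2
         for k in range(genus)],
        [eChars[0][1][k] % 2 for k in range(genus)],
    ])
    eChars.insert(0, [
        [eChars[0][0][k] % 2 for k in range(genus)],
        [(eChars[0][1][k] + delta(k + 1, genus - l)) % 2 for k in range(genus)],
    ])

# Matches the six matrices printed above, in the same order
assert eChars == [
    [[1, 0], [0, 0]], [[1, 0], [1, 0]], [[0, 1], [1, 0]],
    [[0, 1], [1, 1]], [[0, 0], [1, 1]], [[0, 0], [0, 0]],
]

# [K] = sum of the two odd characteristics (indices 1 and 3), mod 2
KCh = [[(eChars[1][r][k] + eChars[3][r][k]) % 2 for k in range(genus)] for r in range(2)]
assert KCh == [[1, 1], [0, 1]]
print(KCh)
```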

As a first test of the above functions and the selected characteristics, we can use the Bolza formula, given as follows.

Definition 3.6

From [1](Sec. 2.4), we have the Bolza formula:

(3.33)#\[ e_\iota = - \frac{\partial_{u_3}\theta[{\iota}](\omega^{-1}\mathbf{u})}{\partial_{u_1}\theta[{\iota}](\omega^{-1}\mathbf{u})}|_{\mathbf{u}=0},\]

where \([{\iota}] = [\varepsilon_\iota] +[K]\). It can also be defined in an equivalent way:

(3.34)#\[ e_\iota = - \frac{\sigma_3(\mathbf{u}_\iota)}{\sigma_1(\mathbf{u}_\iota)},\]

where \(\mathbf{u}_\iota=\mathbf{u}[\varepsilon_\iota]=\frac{1}{2}\omega \varepsilon_\iota + \frac{1}{2}\omega' \varepsilon'_\iota\), and \(\sigma_i(\mathbf{u})\) denotes the derivative of the sigma function with respect to the \(i\)-th component of \(\mathbf{u}\).

We will start by coding the second definition, which is easier to interpret.

# Characteristic vectors
epsp1 = vector(eChars[0][0])
eps1 = vector(eChars[0][1])

epsp2 = vector(eChars[1][0])
eps2 = vector(eChars[1][1])

epsp3 = vector(eChars[2][0])
eps3 = vector(eChars[2][1])

epsp4 = vector(eChars[3][0])
eps4 = vector(eChars[3][1])

epsp5 = vector(eChars[4][0])
eps5 = vector(eChars[4][1])

epsp6 = vector(eChars[5][0])
eps6 = vector(eChars[5][1])

# Points u
up1 = (1/2)*omega*eps1 + (1/2)*omegaP*epsp1
up2 = (1/2)*omega*eps2 + (1/2)*omegaP*epsp2
up3 = (1/2)*omega*eps3 + (1/2)*omegaP*epsp3
up4 = (1/2)*omega*eps4 + (1/2)*omegaP*epsp4
up5 = (1/2)*omega*eps5 + (1/2)*omegaP*epsp5
up6 = (1/2)*omega*eps6 + (1/2)*omegaP*epsp6

# ei based on sigma functions
def e(u1_val, u3_val):
    sigma1 = Tsigma1(u1_val, u3_val)
    sigma3 = Tsigma3(u1_val, u3_val)
    expr = -sigma3/sigma1
    return expr.n()
print("Branch points from Bolza formula:", 
      e(up1[0],up1[1]), 
      e(up2[0],up2[1]), 
      e(up3[0],up3[1]), 
      e(up4[0],up4[1]), 
      e(up5[0],up5[1]),
      e(up6[0],up6[1]),sep='\n')
Branch points from Bolza formula:
-1.05264146874943 - 5.55111512312578e-16*I
0.695037784917798 + 1.04164546422316*I
-11.8251161409668 - 4.34097202628436e-14*I
2.53231492223134e15 - 6.84742393413673e15*I
0.695037784917799 - 1.04164546422316*I
-0.512317960119317

We can see two things:

  • the roots are unordered

  • one of the roots has a very large magnitude.

Below, we present a simple program that reorders the characteristics listed above so as to arrange the roots \(e_i\) in ascending order. The fact that one of the roots has such a large magnitude (a numerical stand-in for infinity) is a consequence of the fact that the corresponding point \(\mathbf{u}\) is a special divisor, i.e. a point for which

(3.35)#\[ \sigma_1(\mathbf{u})=0\]

The characteristic associated with this point will be placed last, corresponding to the root at infinity.

# List of characteristics and their corresponding branch points
char = [eChars[0],eChars[1],eChars[2],eChars[3],eChars[4],eChars[5]]
values = [e(up1[0],up1[1]), e(up2[0],up2[1]), e(up3[0],up3[1]), e(up4[0],up4[1]), e(up5[0],up5[1]), e(up6[0],up6[1])]

# Bubble Sort Algorithm Implementation
def bubble_sort(char, values):
    n = len(values)
    for i in range(n):
        for j in range(0, n-i-1):
            if values[j] > values[j+1]:  # Value comparison
                # Swap values
                values[j], values[j+1] = values[j+1], values[j]
                # Swap the corresponding characteristics
                char[j], char[j+1] = char[j+1], char[j]
    
    # Make sure the numerically infinite element is at the end
    if values[0].real().abs() > 10**10:
        values = values[1:] + values[:1]
        char = char[1:] + char[:1]
    
    return char, values

# Sorting lists
char, values = bubble_sort(char, values)    

# We display sorted characteristics matrices
seen_matrices = []
for i in range(len(char)):
    current_matrix = matrix(char[i])
    if current_matrix not in seen_matrices:
        seen_matrices.append(current_matrix)
        print(current_matrix)
        print()
[0 1]
[1 0]

[1 0]
[0 0]

[0 0]
[0 0]

[1 0]
[1 0]

[0 0]
[1 1]

[0 1]
[1 1]
print("Sorted values:", values)
Sorted values: [-11.8251161409668 - 4.34097202628436e-14*I, -1.05264146874943 - 5.55111512312578e-16*I, -0.512317960119317, 0.695037784917798 + 1.04164546422316*I, 0.695037784917799 - 1.04164546422316*I, 2.53231492223134e15 - 6.84742393413673e15*I]
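The bubble sort works, but Python's built-in sorted with a key function is more idiomatic for this. A stdlib sketch on the printed values, with hypothetical integer labels standing in for the characteristics (the near-infinite root is pushed to the end):

```python
# Bolza values printed above, as plain complex numbers, paired with labels 0..5
values = [
    complex(-1.05264146874943, 0),
    complex(0.695037784917798, 1.04164546422316),
    complex(-11.8251161409668, 0),
    complex(2.53231492223134e15, -6.84742393413673e15),
    complex(0.695037784917799, -1.04164546422316),
    complex(-0.512317960119317, 0),
]
labels = list(range(6))

# Sort by (is numerically infinite, real part, imaginary part):
# finite roots come first in ascending order, the huge-magnitude root goes last
pairs = sorted(zip(values, labels),
               key=lambda p: (abs(p[0]) > 1e10, p[0].real, p[0].imag))
sorted_values, sorted_labels = zip(*pairs)

assert abs(sorted_values[-1]) > 1e10            # "infinite" root is last
assert sorted_values[0].real < sorted_values[1].real
print(sorted_labels)
```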

Now, we can redefine the characteristic vectors based on the reordered characteristics.

# New characteristic vectors
epsp1 = vector(char[0][0])
eps1 = vector(char[0][1])

epsp2 = vector(char[1][0])
eps2 = vector(char[1][1])

epsp3 = vector(char[2][0])
eps3 = vector(char[2][1])

epsp4 = vector(char[3][0])
eps4 = vector(char[3][1])

epsp5 = vector(char[4][0])
eps5 = vector(char[4][1])

epsp6 = vector(char[5][0])
eps6 = vector(char[5][1])

# Points u
up1 = (1/2)*omega*eps1 + (1/2)*omegaP*epsp1
up2 = (1/2)*omega*eps2 + (1/2)*omegaP*epsp2
up3 = (1/2)*omega*eps3 + (1/2)*omegaP*epsp3
up4 = (1/2)*omega*eps4 + (1/2)*omegaP*epsp4
up5 = (1/2)*omega*eps5 + (1/2)*omegaP*epsp5
up6 = (1/2)*omega*eps6 + (1/2)*omegaP*epsp6

print("Branch points from Bolza formula:", 
      e(up1[0],up1[1]), 
      e(up2[0],up2[1]), 
      e(up3[0],up3[1]), 
      e(up4[0],up4[1]), 
      e(up5[0],up5[1]), 
      e(up6[0],up6[1]),sep='\n')
Branch points from Bolza formula:
-11.8251161409668 - 4.34097202628436e-14*I
-1.05264146874943 - 5.55111512312578e-16*I
-0.512317960119317
0.695037784917798 + 1.04164546422316*I
0.695037784917799 - 1.04164546422316*I
2.53231492223134e15 - 6.84742393413673e15*I

Before we go any further, we can perform one more test. Based on Julia Bernatska’s discussion in the Wolfram community about the vector of Riemann constants, for a computed characteristic \([\mathbf{K}]\) we should check whether

(3.36)#\[ \sigma\left( \mathbf{u}[\mathbf{K}]\right) =0 \;\; \text{and} \;\; \sigma_1\left( \mathbf{u}[\mathbf{K}]\right) =0,\]

where

(3.37)#\[\begin{split} \mathbf{u}[\mathbf{K}]= \frac{1}{2}\omega \begin{pmatrix} K_{1,1}\\ K_{2,1} \end{pmatrix} + \frac{1}{2}\omega' \begin{pmatrix} K_{1,2}\\ K_{2,2} \end{pmatrix} \end{split}\]
vK1 = vector(KCh[0])
vK2 = vector(KCh[1])
uK = (1/2)*omega*vK1 + (1/2)*omegaP*vK2

print("sigma(u[K])=",Tsigma(uK[0], uK[1]))
print("sigma1(u[K])=",Tsigma1(uK[0], uK[1]))
sigma(u[K])= 4.168869563487747e-16 + 4.900022606845388e-16*I
sigma1(u[K])= 1.23432198410658e-15 + 3.52430396044750e-15*I

We will now define and test the branch-point function expressed through theta functions, which will be used in the following parts of the code.

def e(i):
    # Declare variables
    U1, U3 = var('U1 U3')
    CCh=(matrix(char[i])+KCh)%2
    # Partial derivatives of the ThetaCh function
    numerator = diff(ThetaCh(CCh, omega_inv * vector([U1, U3]), tau, Acc), U3)
    denominator = diff(ThetaCh(CCh, omega_inv * vector([U1, U3]), tau, Acc), U1)

    # Simplified ratio of derivatives
    result = -numerator / denominator

    # Substitute U1, U3, -> 0
    result = result.subs({U1: 0, U3: 0})
    return result.n()
print("Branch points from Bolza formula:", 
      e(0),
      e(1), 
      e(2), 
      e(3), 
      e(4), 
      e(5), sep='\n')
Branch points from Bolza formula:
-11.8251161409669 + 1.46420013867900e-15*I
-1.05264146874943 - 3.37524010891116e-16*I
-0.512317960119317 + 3.11685236212985e-17*I
0.695037784917798 + 1.04164546422316*I
0.695037784917798 - 1.04164546422316*I
-2.69482572518105e17 - 2.55665399144444e17*I

We can see that the last element has changed its value. This is, of course, a numerical error arising from computations involving very large numbers.
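
The characteristic arithmetic used inside `e(i)` — adding a half-period characteristic to \([\mathbf{K}]\) entrywise modulo 2 — can be sketched in plain Python; the sample matrices below are hypothetical illustrations, not values from this notebook.

```python
# Half-period characteristics are 2x2 integer matrices (rows: eps', eps).
# Inside e(i) the shifted characteristic is (char[i] + KCh) % 2, taken
# entrywise; the sample matrices below are hypothetical.
char_i = [[1, 0], [1, 1]]
K = [[0, 1], [1, 1]]

shifted = [[(a + b) % 2 for a, b in zip(row_c, row_k)]
           for row_c, row_k in zip(char_i, K)]
print(shifted)  # [[1, 1], [0, 0]]
```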

3.8. \(\wp\)-Functions#

Definition 3.7

Let \(\Sigma = \{ \mathbf{u}\in \mathrm{Jac}(\mathscr{C}) \,|\, \sigma(\mathbf{u})=0 \} \). Then we define the multiply periodic Klein-Weierstrass \(\wp\)-functions on \(\mathrm{Jac}(\mathscr{C}) \setminus \Sigma\) by

(3.38)#\[ \wp_{ij}(\mathbf{u}):= - \frac{\partial^2\log{\sigma(\mathbf{u})}}{\partial u_i \partial u_j}, \quad \wp_{ijk}:=- \frac{\partial^3\log{\sigma(\mathbf{u})}}{\partial u_i \partial u_j \partial u_k},\]

where \(\sigma(\mathbf{u})\) is the sigma function. This can also be written as

(3.39)#\[ \wp_{ij} = \frac{\sigma_i(\mathbf{u})\sigma_j(\mathbf{u}) - \sigma(\mathbf{u})\sigma_{ij}(\mathbf{u})}{\sigma^2(\mathbf{u})} = \frac{\tilde{\sigma}_i(\mathbf{u})\tilde{\sigma}_j(\mathbf{u}) - \tilde{\sigma}(\mathbf{u})\tilde{\sigma}_{ij}(\mathbf{u})}{\tilde{\sigma}^2(\mathbf{u})},\]

where \(\sigma_i(\mathbf{u})\) denotes the derivative of the sigma function with respect to the \(i\)-th component of \(\mathbf{u}\).

It can be shown that the above definitions can be written in the form

(3.40)#\[ \wp_{ij}(\mathbf{u}):=\kappa_{ij} - \frac{\partial^2}{\partial u_i \partial u_j}\log{\theta[K](\omega^{-1}\mathbf{u};\tau)}, \quad \wp_{ijk}(\mathbf{u}):=- \frac{\partial^3}{\partial u_i \partial u_j \partial u_k}\log{\theta[K](\omega^{-1}\mathbf{u};\tau)}.\]
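
Before moving to the theta-based implementation below, definition (3.38) can be illustrated numerically with central finite differences on a toy sigma function (plain Python, not the actual genus-two sigma): for \(\sigma(u_1,u_3)=\sin(u_1)\, e^{u_3^2}\) one has \(\wp_{11}=1/\sin^2 u_1\) and \(\wp_{33}=-2\).

```python
import math

# Toy sigma: sigma(u1, u3) = sin(u1) * exp(u3^2).  By definition (3.38),
#   wp_11 = -d^2 log(sigma)/du1^2 = 1/sin(u1)^2,
#   wp_33 = -d^2 log(sigma)/du3^2 = -2,
# which central finite differences reproduce.
def log_sigma(u1, u3):
    return math.log(math.sin(u1)) + u3**2

def p11(u1, u3, h=1e-4):
    return -(log_sigma(u1 + h, u3) - 2*log_sigma(u1, u3)
             + log_sigma(u1 - h, u3)) / h**2

def p33(u1, u3, h=1e-4):
    return -(log_sigma(u1, u3 + h) - 2*log_sigma(u1, u3)
             + log_sigma(u1, u3 - h)) / h**2

u1, u3 = 0.7, 0.3
print(p11(u1, u3))  # ~ 1/sin(0.7)^2
print(p33(u1, u3))  # ~ -2
```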
# Definition with thetas
# We define variables
var('U1 U3')

# We define the accuracy of theta function
Acc=20


# WeierstrassP11
def WeierstrassP11(u1_val, u3_val):
    symbolic_expr = kappa[0, 0] - diff(log(ThetaCh(KCh, omega_inv * vector([U1, U3]), tau, Acc)), U1, 2)
    return symbolic_expr.subs({U1: u1_val, U3: u3_val}).n()

# WeierstrassP13
def WeierstrassP13(u1_val, u3_val):
    symbolic_expr = kappa[0, 1] - diff(log(ThetaCh(KCh, omega_inv * vector([U1, U3]), tau, Acc)),  U1, U3)
    return symbolic_expr.subs({U1: u1_val, U3: u3_val}).n()

# WeierstrassP33
def WeierstrassP33(u1_val, u3_val):
    symbolic_expr = kappa[1, 1] - diff(log(ThetaCh(KCh, omega_inv * vector([U1, U3]), tau, Acc)), U3, 2)
    return symbolic_expr.subs({U1: u1_val, U3: u3_val}).n()

#####
# Derivatives of these functions useful in subsequent sections
#####

# WeierstrassP111
def WeierstrassP111(u1_val, u3_val):
    symbolic_expr = - diff(log(ThetaCh(KCh, omega_inv * vector([U1, U3]), tau, Acc)), U1, 3)
    return symbolic_expr.subs({U1: u1_val, U3: u3_val}).n()

# WeierstrassP113
def WeierstrassP113(u1_val, u3_val):
    symbolic_expr = - diff(log(ThetaCh(KCh, omega_inv * vector([U1, U3]), tau, Acc)), U1, U1, U3)
    return symbolic_expr.subs({U1: u1_val, U3: u3_val}).n()

# WeierstrassP133
def WeierstrassP133(u1_val, u3_val):
    symbolic_expr = - diff(log(ThetaCh(KCh, omega_inv * vector([U1, U3]), tau, Acc)),  U1, U3, U3)
    return symbolic_expr.subs({U1: u1_val, U3: u3_val}).n()

# WeierstrassP1111
def WeierstrassP1111(u1_val, u3_val):
    symbolic_expr = - diff(log(ThetaCh(KCh, omega_inv * vector([U1, U3]), tau, Acc)), U1, 4)
    return symbolic_expr.subs({U1: u1_val, U3: u3_val}).n()

# WeierstrassP1113
def WeierstrassP1113(u1_val, u3_val):
    symbolic_expr = - diff(log(ThetaCh(KCh, omega_inv * vector([U1, U3]), tau, Acc)), U1, U1, U1, U3)
    return symbolic_expr.subs({U1: u1_val, U3: u3_val}).n()

# WeierstrassP3333
def WeierstrassP3333(u1_val, u3_val):
    symbolic_expr = - diff(log(ThetaCh(KCh, omega_inv * vector([U1, U3]), tau, Acc)), U3, 4)
    return symbolic_expr.subs({U1: u1_val, U3: u3_val}).n()

3.8.1. Tests#

3.8.1.1. Periodicity#

As a first test of these functions, we can check if they satisfy the key property

(3.41)#\[ \wp_{ij}(\mathbf{u}+2\omega \mathbf{n} + 2\omega'\mathbf{n}') = \wp_{ij}(\mathbf{u}),\]

where

(3.42)#\[\begin{split} \mathbf{n} = \begin{pmatrix} n_1\\ n_2 \end{pmatrix}, \quad \mathbf{n}' = \begin{pmatrix} n'_1\\ n'_2 \end{pmatrix} \in \mathbb{Z}^2.\end{split}\]
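
The shift-and-compare structure of the test below can be sketched on a toy doubly periodic function (a hypothetical stand-in for \(\wp_{ij}\), with period matrix \(\omega = \pi \cdot \mathrm{Id}\)):

```python
import math

# Toy doubly periodic function on R^2 with period lattice 2*pi*Z^2,
# mimicking the shift-and-compare test below.
def f(u1, u3):
    return math.cos(u1) * math.cos(u3)

n  = (1, 2)     # integer vector n
nP = (-3, -5)   # integer vector n'

u1, u3 = 0.212131, -2.231
diff = f(u1, u3) - f(u1 + 2*math.pi*(n[0] + nP[0]),
                     u3 + 2*math.pi*(n[1] + nP[1]))
print(diff)  # ~ 0 up to rounding
```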
u1 = 0.212131
u3 = -2.231

ntest = vector([1, 2])
nPtest = vector([-3, -5])

wn = omega*ntest
wPn= omegaP*nPtest

print("Theta based definitions")
print()
print("Test P11:")
print(WeierstrassP11(u1, u3) - WeierstrassP11(u1 + 2*wn[0] + 2*wPn[0], u3 + 2*wn[1] + 2*wPn[1]))
print()
print("Test P13:")
print(WeierstrassP13(u1, u3) - WeierstrassP13(u1 + 2*wn[0] + 2*wPn[0], u3 + 2*wn[1] + 2*wPn[1]))
print()
print("Test P33:")
print(WeierstrassP33(u1, u3) - WeierstrassP33(u1 + 2*wn[0] + 2*wPn[0], u3 + 2*wn[1] + 2*wPn[1]))
Theta based definitions

Test P11:
5456.64098472225 - 129.384240060038*I

Test P13:
1691.55457358245 - 1547.13199513522*I

Test P33:
108.402730941999 - 956.649120634020*I

Let’s check this for another value of the vector \(\mathbf{u}\)

u1 = 2
u3 = 3

ntest = vector([1, 2])
nPtest = vector([-3, -5])

wn = omega*ntest
wPn= omegaP*nPtest

print("Theta based definitions")
print()
print("Test P11:")
print(WeierstrassP11(u1, u3) - WeierstrassP11(u1 + 2*wn[0] + 2*wPn[0], u3 + 2*wn[1] + 2*wPn[1]))
print()
print("Test P13:")
print(WeierstrassP13(u1, u3) - WeierstrassP13(u1 + 2*wn[0] + 2*wPn[0], u3 + 2*wn[1] + 2*wPn[1]))
print()
print("Test P33:")
print(WeierstrassP33(u1, u3) - WeierstrassP33(u1 + 2*wn[0] + 2*wPn[0], u3 + 2*wn[1] + 2*wPn[1]))
Theta based definitions

Test P11:
9.82254277914762e-11 + 4.30588897870621e-12*I

Test P13:
7.81597009336110e-12 + 1.25623955682386e-11*I

Test P33:
1.39550593303284e-11 - 1.67688085639384e-11*I

Let’s check whether this is a special divisor

print("sigma(u)=",Tsigma(u1, u3))
print("sigma1(u)=",Tsigma1(u1, u3))
print("sigma3(u)=",Tsigma3(u1, u3))
sigma(u)= 1.106709952459739e+24 + 1.6613439432427985e+24*I
sigma1(u)= 2.17016860035032e25 + 3.25776094449540e25*I
sigma3(u)= 2.55082918890276e25 + 3.82919175327891e25*I

Observation 3.1

As in the case with real branch points, errors occur; this time it is the first set of coordinates that causes them.

3.8.1.2. KdV equation#

Definition 3.8

The multiply periodic Klein-Weierstrass \(\wp\)-functions should satisfy the KdV equation given by ([7] Eq. 44 and [4] Eq. 2.20):

(3.43)#\[ \wp_{1111}(\mathbf{u}) = 6\wp^2_{11}(\mathbf{u}) + 4 \wp_{13}(\mathbf{u}) + 4\lambda_2 \wp_{11}(\mathbf{u}) + 2 \lambda_4 \]
u1 = 0.212131
u3 = -2.231

LHS = WeierstrassP1111(u1,u3)
RHS = 6 * WeierstrassP11(u1,u3)^2 + 4 * l2 * WeierstrassP11(u1,u3) + 4 * WeierstrassP13(u1,u3) + 2*l4

LHS - RHS
-1.93267624126747e-12 - 2.87112418060853e-12*I

Let’s check this for another value of the vector \(\mathbf{u}\)

u1 = -1.98213
u3 = 3.76213

LHS = WeierstrassP1111(u1,u3)
RHS = 6 * WeierstrassP11(u1,u3)^2 + 4 * l2 * WeierstrassP11(u1,u3) + 4 * WeierstrassP13(u1,u3) + 2*l4

LHS - RHS
9.86233317235019e-12 + 9.37916411203332e-12*I

3.8.1.3. Basic relations#

Definition 3.9

The multiply periodic Klein-Weierstrass \(\wp\)-functions should satisfy the following relations ([4] Eq. 2.20):

(3.44)#\[ \wp_{133}(\mathbf{u}) = \wp_{13}(\mathbf{u})\wp_{111}(\mathbf{u}) - \wp_{11}(\mathbf{u})\wp_{113}(\mathbf{u})\]
(3.45)#\[ \wp_{1113}(\mathbf{u}) = 6 \wp_{13}(\mathbf{u}) \wp_{11}(\mathbf{u}) - 2\wp_{33}(\mathbf{u}) + 4\lambda_2 \wp_{13}(\mathbf{u})\]
(3.46)#\[ \wp_{33}(\mathbf{u}) = \frac{1}{4} \wp_{111}^2(\mathbf{u})- \wp_{11}^3(\mathbf{u}) - \wp_{13}(\mathbf{u}) \wp_{11}(\mathbf{u}) - \lambda_2 \wp_{11}^2(\mathbf{u}) - \lambda_4\wp_{11}(\mathbf{u}) - \lambda_6\]
u1 = 0.212131
u3 = -2.231

LHS1 = WeierstrassP133(u1,u3)
RHS1 = WeierstrassP13(u1,u3) * WeierstrassP111(u1,u3) - WeierstrassP11(u1,u3)*WeierstrassP113(u1,u3)

LHS2 = WeierstrassP1113(u1,u3)
RHS2 = 6*WeierstrassP13(u1,u3) * WeierstrassP11(u1,u3) - 2*WeierstrassP33(u1,u3) + 4*l2*WeierstrassP13(u1,u3)

LHS3 = WeierstrassP33(u1,u3)
RHS3 = (1/4) * WeierstrassP111(u1,u3)^2 - WeierstrassP11(u1,u3)^3 - WeierstrassP13(u1,u3)*WeierstrassP11(u1,u3) - l2*WeierstrassP11(u1,u3)^2 - l4*WeierstrassP11(u1,u3) - l6

print("Test of the first eq.:")
print(LHS1-RHS1)
print()
print("Test of the second eq.:")
print(LHS2-RHS2)
print()
print("Test of the third eq.:")
print(LHS3-RHS3)
Test of the first eq.:
-2.78532752417959e-12 - 4.88393017313736e-12*I

Test of the second eq.:
-1.38129507831763e-11 - 2.24519098308128e-11*I

Test of the third eq.:
3.66995323020092e-12 - 3.55449029934026e-12*I

Let’s check this for another value of the vector \(\mathbf{u}\)

u1 = -1.98213
u3 = 3.76213

LHS1 = WeierstrassP133(u1,u3)
RHS1 = WeierstrassP13(u1,u3) * WeierstrassP111(u1,u3) - WeierstrassP11(u1,u3)*WeierstrassP113(u1,u3)

LHS2 = WeierstrassP1113(u1,u3)
RHS2 = 6*WeierstrassP13(u1,u3) * WeierstrassP11(u1,u3) - 2*WeierstrassP33(u1,u3) + 4*l2*WeierstrassP13(u1,u3)

LHS3 = WeierstrassP33(u1,u3)
RHS3 = (1/4) * WeierstrassP111(u1,u3)^2 - WeierstrassP11(u1,u3)^3 - WeierstrassP13(u1,u3)*WeierstrassP11(u1,u3) - l2*WeierstrassP11(u1,u3)^2 - l4*WeierstrassP11(u1,u3) - l6

print("Test of the first eq.:")
print(LHS1-RHS1)
print()
print("Test of the second eq.:")
print(LHS2-RHS2)
print()
print("Test of the third eq.:")
print(LHS3-RHS3)
Test of the first eq.:
-2.08046913030557e-11 + 3.70053756862678e-11*I

Test of the second eq.:
-9.37916411203332e-11 + 7.85378716560857e-11*I

Test of the third eq.:
1.17434950652751e-11 + 1.07062536925189e-11*I

Observation 3.2

In this case there are no problems, but you can probably find points where they occur.

3.9. Jacobi inversion problem#

Definition 3.10

Let

(3.47)#\[ D=\sum_{i=1}^n P_i,\]

be a divisor on a curve \(\mathscr{C}\). We assume that \(D\) is non-special, that is

  • \( n\geq g\),

  • \(D\) does not contain pairs of points connected by the hyperelliptic involution.

Definition 3.11

Let \(du\) be the normalised differentials of the first kind; then the Abel map \(\mathcal{A}\) is constructed as:

(3.48)#\[ \mathcal{A}(P) = \int_{\infty}^P du, \quad P = (x,y) \in \mathscr{C}.\]

Here infinity is used as the base point. The Abel image of divisor \(D\) is computed by

(3.49)#\[ \mathcal{A}(D) = \sum_{i=1}^n \mathcal{A}(P_i).\]

3.9.1. Jacobi inversion problem on branch points#

Based on the Riemann vanishing theorem and the Jacobi inversion problem, we can state the following properties.

Definition 3.12

Based on [3](5.10-13) let’s take

(3.50)#\[ \Omega_{ij} = \frac{1}{2} \omega (\varepsilon_i + \varepsilon_j) + \frac{1}{2} \omega' (\varepsilon'_i + \varepsilon'_j)+ \mathbf{K},\]

then

(3.51)#\[ \wp_{11}(\Omega_{ij}) = e_i + e_j, \quad \wp_{13}(\Omega_{ij}) = -e_i e_j, \quad i,j=1,\ldots,5, \; i\neq j\]

and

(3.52)#\[ \wp_{33}(\Omega_{ij}) = e_i e_j (e_p + e_q + e_r) + e_p e_q e_r\]
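
As a quick plain-Python cross-check, the right-hand sides of (3.51) and (3.52) for \(i=1, j=2\) can be evaluated directly from the branch points computed earlier (values copied from the output above):

```python
# Branch points copied from the earlier output (e4, e5 are conjugate).
e1 = -11.8251161409669
e2 = -1.05264146874943
e3 = -0.512317960119317
e4 = complex(0.695037784917798,  1.04164546422316)
e5 = complex(0.695037784917798, -1.04164546422316)

# Expected values of wp_11, wp_13, wp_33 at Omega_12 by (3.51)-(3.52):
p11_12 = e1 + e2
p13_12 = -e1 * e2
p33_12 = e1 * e2 * (e3 + e4 + e5) + e3 * e4 * e5

print(p11_12)  # ≈ -12.8778
print(p13_12)  # ≈ -12.4476
print(p33_12)  # ≈ 10.1226
```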

Now we can test the above definition using \(\Omega_{ij}\) calculated from the characteristics \(\varepsilon\), but also using the built-in Sage function abel_jacobi(), which expects a list of tuples in the format (v,P), where v is the multiplicity of the point in the divisor (in our case \(\pm 1\)) and P is a tuple (x,y) representing a point on the curve.

The multiplicity v (also called valuation) in the context of divisors determines how many times a given point appears in the divisor. Here are some key points:

  • For regular points on the curve that are neither singular points nor points at infinity, typically \(v = 1\) or \(v = -1\) is used. \(v = 1\) means the point appears positively in the divisor (it is “added”).

  • \(v = -1\) means the point appears negatively in the divisor (it is “subtracted”).

  • If a point appears multiple times, \(v\) can be greater than 1 or less than -1.

  • For special points (e.g., points at infinity or singular points), v may take other values depending on the local structure of the curve at that point.
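
A minimal plain-Python sketch of this divisor data format (the point coordinates below are hypothetical placeholders for two branch points, where \(y=0\)):

```python
# Divisors passed to abel_jacobi() are lists of (v, P) pairs: v is the
# multiplicity and P = (x, y) a point of the curve.  The coordinates
# below are hypothetical placeholders for two branch points (y = 0).
P1 = (-11.825, 0.0)
P2 = (-1.0526, 0.0)
divisor = [(-1, P1), (1, P2)]

# Such divisors have degree zero: the multiplicities sum to 0.
degree = sum(v for v, _ in divisor)
print(degree)  # 0
```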

Important

abel_jacobi() returns the \(\mathbf{u}\) coordinates in the reverse order of the convention we adopted.

## Definition of Omega_ij
vK1 = vector(KCh[0])
vK2 = vector(KCh[1])

Omega12 = (1/2)*omega*(eps1 + eps2 + vK1) + (1/2)*omegaP*(epsp1 + epsp2 + vK2)
Omega13 = (1/2)*omega*(eps1 + eps3 + vK1) + (1/2)*omegaP*(epsp1 + epsp3 + vK2)
Omega14 = (1/2)*omega*(eps1 + eps4 + vK1) + (1/2)*omegaP*(epsp1 + epsp4 + vK2) 
Omega15 = (1/2)*omega*(eps1 + eps5 + vK1) + (1/2)*omegaP*(epsp1 + epsp5 + vK2)
Omega23 = (1/2)*omega*(eps2 + eps3 + vK1) + (1/2)*omegaP*(epsp2 + epsp3 + vK2)
Omega24 = (1/2)*omega*(eps2 + eps4 + vK1) + (1/2)*omegaP*(epsp2 + epsp4 + vK2)
Omega25 = (1/2)*omega*(eps2 + eps5 + vK1) + (1/2)*omegaP*(epsp2 + epsp5 + vK2)
Omega34 = (1/2)*omega*(eps3 + eps4 + vK1) + (1/2)*omegaP*(epsp3 + epsp4 + vK2)
Omega35 = (1/2)*omega*(eps3 + eps5 + vK1) + (1/2)*omegaP*(epsp3 + epsp5 + vK2)
Omega45 = (1/2)*omega*(eps4 + eps5 + vK1) + (1/2)*omegaP*(epsp4 + epsp5 + vK2)
## Code for abel_jacobi()

# Branch points
e1 = e(0)
e2 = e(1)
e3 = e(2)
e4 = e(3)
e5 = e(4)

# Divisors

divisor12 = [(-1, (e1,0)),(1, (e2,0))]
divisor13 = [(-1, (e1,0)),(1, (e3,0))] 
divisor14 = [(-1, (e1,0)),(1, (e4,0))] 
divisor15 = [(-1, (e1,0)),(1, (e5,0))] 
divisor23 = [(-1, (e2,0)),(1, (e3,0))] 
divisor24 = [(-1, (e2,0)),(1, (e4,0))]
divisor25 = [(-1, (e2,0)),(1, (e5,0))] 
divisor34 = [(-1, (e3,0)),(1, (e4,0))]
divisor35 = [(-1, (e3,0)),(1, (e5,0))] 
divisor45 = [(-1, (e4,0)),(1, (e5,0))]

# Abels images
ai12 = S.abel_jacobi(divisor12)
ai13 = S.abel_jacobi(divisor13)
ai14 = S.abel_jacobi(divisor14)
ai15 = S.abel_jacobi(divisor15)
ai23 = S.abel_jacobi(divisor23)
ai24 = S.abel_jacobi(divisor24)
ai25 = S.abel_jacobi(divisor25)
ai34 = S.abel_jacobi(divisor34)
ai35 = S.abel_jacobi(divisor35)
ai45 = S.abel_jacobi(divisor45)

# Reversing the order of coordinates
AJ12 = (ai12[1],ai12[0])
AJ13 = (ai13[1],ai13[0])
AJ14 = (ai14[1],ai14[0])
AJ15 = (ai15[1],ai15[0])
AJ23 = (ai23[1],ai23[0])
AJ24 = (ai24[1],ai24[0])
AJ25 = (ai25[1],ai25[0])
AJ34 = (ai34[1],ai34[0])
AJ35 = (ai35[1],ai35[0])
AJ45 = (ai45[1],ai45[0])


################
# Adding the Riemann vector shift
################
vK1 = vector(KCh[0])
vK2 = vector(KCh[1])
uK = (1/2)*omega*vK1 + (1/2)*omegaP*vK2

AOmega12 = (AJ12[0]+uK[0], AJ12[1]+uK[1])
AOmega13 = (AJ13[0]+uK[0], AJ13[1]+uK[1])
AOmega14 = (AJ14[0]+uK[0], AJ14[1]+uK[1])
AOmega15 = (AJ15[0]+uK[0], AJ15[1]+uK[1])
AOmega23 = (AJ23[0]+uK[0], AJ23[1]+uK[1])
AOmega24 = (AJ24[0]+uK[0], AJ24[1]+uK[1])
AOmega25 = (AJ25[0]+uK[0], AJ25[1]+uK[1])
AOmega34 = (AJ34[0]+uK[0], AJ34[1]+uK[1])
AOmega35 = (AJ35[0]+uK[0], AJ35[1]+uK[1])
AOmega45 = (AJ45[0]+uK[0], AJ45[1]+uK[1])

Let’s compare the values of \(\Omega_{ij}\) calculated in both ways

print("Omega12 - AOmega12=", (Omega12[0].n(digits=5) - AOmega12[0].n(digits=5),Omega12[1].n(digits=5) - AOmega12[1].n(digits=5)))
print("Omega13 - AOmega13=", (Omega13[0].n(digits=5) - AOmega13[0].n(digits=5),Omega13[1].n(digits=5) - AOmega13[1].n(digits=5)))
print("Omega14 - AOmega14=", (Omega14[0].n(digits=5) - AOmega14[0].n(digits=5),Omega14[1].n(digits=5) - AOmega14[1].n(digits=5)))
print("Omega15 - AOmega15=", (Omega15[0].n(digits=5) - AOmega15[0].n(digits=5),Omega15[1].n(digits=5) - AOmega15[1].n(digits=5)))
print("Omega23 - AOmega23=", (Omega23[0].n(digits=5) - AOmega23[0].n(digits=5),Omega23[1].n(digits=5) - AOmega23[1].n(digits=5)))
print("Omega24 - AOmega24=", (Omega24[0].n(digits=5) - AOmega24[0].n(digits=5),Omega24[1].n(digits=5) - AOmega24[1].n(digits=5)))
print("Omega25 - AOmega25=", (Omega25[0].n(digits=5) - AOmega25[0].n(digits=5),Omega25[1].n(digits=5) - AOmega25[1].n(digits=5)))
print("Omega34 - AOmega34=", (Omega34[0].n(digits=5) - AOmega34[0].n(digits=5),Omega34[1].n(digits=5) - AOmega34[1].n(digits=5))) 
print("Omega35 - AOmega35=", (Omega35[0].n(digits=5) - AOmega35[0].n(digits=5),Omega35[1].n(digits=5) - AOmega35[1].n(digits=5)))
print("Omega45 - AOmega45=", (Omega45[0].n(digits=5) - AOmega45[0].n(digits=5),Omega45[1].n(digits=5) - AOmega45[1].n(digits=5))) 
Omega12 - AOmega12= (1.3122, -0.47840)
Omega13 - AOmega13= (1.1399 - 0.66033*I, 0.14607 + 0.28220*I)
Omega14 - AOmega14= (1.3122 - 0.91722*I, -0.47840 + 0.038862*I)
Omega15 - AOmega15= (1.3122 - 1.3207*I, -0.47840 + 0.56440*I)
Omega23 - AOmega23= (-0.17232 - 0.25689*I, 0.62448 - 0.24334*I)
Omega24 - AOmega24= (-0.51377*I, -0.48668*I)
Omega25 - AOmega25= (-0.91722*I, 0.038862*I)
Omega34 - AOmega34= (0.17232 - 0.25689*I, -0.62448 - 0.24334*I)
Omega35 - AOmega35= (0.17232 - 0.66033*I, -0.62448 + 0.28220*I)
Omega45 - AOmega45= (0.13760 - 0.66033*I, -0.62448 + 0.28220*I)

Let’s check whether, despite these differences, we get the correct results for the inverse Jacobi problem.

print("P11(Omega12)=",WeierstrassP11(Omega12[0],Omega12[1]))
print("P11(AOmega12)=",WeierstrassP11(AOmega12[0],AOmega12[1]))
print("From theory: e1+e2=",e1+ e2)
P11(Omega12)= -12.8777576097163 - 4.44089209850063e-15*I
P11(AOmega12)= -12.8777576097163 + 6.52256026967279e-16*I
From theory: e1+e2= -12.8777576097163 + 1.12667612778789e-15*I
print("P13(Omega12)=",WeierstrassP13(Omega12[0],Omega12[1]))
print("P13(AOmega12)=",WeierstrassP13(AOmega12[0],AOmega12[1]))
print("From theory: -e1*e2=",-e1* e2)
P13(Omega12)= -12.4476076227599 + 1.77635683940025e-15*I
P13(AOmega12)= -12.4476076227599 - 4.10782519111308e-15*I
From theory: -e1*e2= -12.4476076227599 - 2.44998284463022e-15*I
print("P33(Omega12)=",WeierstrassP33(Omega12[0],Omega12[1]))
print("P33(AOmega12)=",WeierstrassP33(AOmega12[0],AOmega12[1]))
print("From theory: e1*e2(e3+e4+e5) + e3*e4*e5=",e1*e2*(e3+e4+e5) + e3*e4*e5)
P33(Omega12)= 10.1226150881406 - 1.77635683940025e-14*I
P33(AOmega12)= 10.1226150881406 + 1.77635683940025e-14*I
From theory: e1*e2(e3+e4+e5) + e3*e4*e5= 10.1226150881406 + 4.96992635363731e-15*I
print("P11(Omega13)=",WeierstrassP11(Omega13[0],Omega13[1]))
print("P11(AOmega13)=",WeierstrassP11(AOmega13[0],AOmega13[1]))
print("From theory: e1+e3=",e1+ e3)
P11(Omega13)= -12.3374341010861 - 4.44089209850063e-15*I
P11(AOmega13)= -12.1578646219938 + 4.08842072453786e-8*I
From theory: e1+e3= -12.3374341010862 + 1.49536866230030e-15*I
print("P11(Omega14)=",WeierstrassP11(Omega14[0],Omega14[1]))
print("P11(AOmega14)=",WeierstrassP11(AOmega14[0],AOmega14[1]))
print("From theory: e1+e4=", e1+ e4)
P11(Omega14)= -11.1300783560491 + 1.04164546422316*I
P11(AOmega14)= -11.1300783560491 + 1.04164546422316*I
From theory: e1+e4= -11.1300783560491 + 1.04164546422316*I
print("P11(Omega15)=",WeierstrassP11(Omega15[0],Omega15[1]))
print("P11(AOmega15)=",WeierstrassP11(AOmega15[0],AOmega15[1]))
print("From theory: e1+e5=", e1+ e5)
P11(Omega15)= -11.1300783560490 - 1.04164546422317*I
P11(AOmega15)= -11.1300783560491 - 1.04164546422316*I
From theory: e1+e5= -11.1300783560491 - 1.04164546422316*I
print("P11(Omega23)=",WeierstrassP11(Omega23[0],Omega23[1]))
print("P11(AOmega23)=",WeierstrassP11(AOmega23[0],AOmega23[1]))
print("From theory: e2+e3=", e2+ e3)
P11(Omega23)= -1.56495942886875 - 3.88578058618805e-16*I
P11(AOmega23)= -1.60036675615352 + 6.99244995239212e-9*I
From theory: e2+e3= -1.56495942886875 - 3.06355487269817e-16*I
print("P11(Omega24)=",WeierstrassP11(Omega24[0],Omega24[1]))
print("P11(AOmega24)=",WeierstrassP11(AOmega24[0],AOmega24[1]))
print("From theory: e2+e4=",e2+ e4)
P11(Omega24)= -0.357603683831629 + 1.04164546422316*I
P11(AOmega24)= -0.357603683831641 + 1.04164546422316*I
From theory: e2+e4= -0.357603683831631 + 1.04164546422316*I
print("P11(Omega25)=",WeierstrassP11(Omega25[0],Omega25[1]))
print("P11(AOmega25)=",WeierstrassP11(AOmega25[0],AOmega25[1]))
print("From theory: e2+e5=",e2+ e5)
P11(Omega25)= -0.357603683831631 - 1.04164546422316*I
P11(AOmega25)= -0.357603683831631 - 1.04164546422316*I
From theory: e2+e5= -0.357603683831632 - 1.04164546422316*I
print("P11(Omega34)=",WeierstrassP11(Omega34[0],Omega34[1]))
print("P11(AOmega34)=",WeierstrassP11(AOmega34[0],AOmega34[1]))
print("From theory: e3+e4=",e3+ e4)
P11(Omega34)= 0.182719824798481 + 1.04164546422316*I
P11(AOmega34)= 0.187452462896973 + 1.07462830934531*I
From theory: e3+e4= 0.182719824798481 + 1.04164546422316*I
print("P11(Omega35)=",WeierstrassP11(Omega35[0],Omega35[1]))
print("P11(AOmega35)=",WeierstrassP11(AOmega35[0],AOmega35[1]))
print("From theory: e3+e5=",e3+ e5)
P11(Omega35)= 0.182719824798477 - 1.04164546422316*I
P11(AOmega35)= 0.187452457028380 - 1.07462830675917*I
From theory: e3+e5= 0.182719824798481 - 1.04164546422316*I
print("P11(Omega45)=",WeierstrassP11(Omega45[0],Omega45[1]))
print("P11(AOmega45)=",WeierstrassP11(AOmega45[0],AOmega45[1]))
print("From theory: e4+e5=",e4+ e5)
P11(Omega45)= 1.39007556983559 - 6.71684929898220e-15*I
P11(AOmega45)= 1.39007556983560 + 9.43689570931383e-16*I
From theory: e4+e5= 1.39007556983560 + 2.22044604925031e-16*I

3.9.2. General case#

As a point \(P_i\) we choose

(3.53)#\[ P_i = (x_i,y_i) = (x_i, y_{+}(x_i)),\]

where \(y_{+}(x) = + \sqrt{y^2(x)}\) (the second option is \(y_{-}(x) = - \sqrt{y^2(x)}\)).

# Definition of polynomial
def Poly(x):
    return x^5 + l2*x^4 + l4*x^3 + l6*x^2 + l8*x + l10

# Definition of y
def y(x):
    # Polynomial module
    modulus = abs(Poly(x))
    
    # Polynomial argument
    angle = arg(Poly(x))  
    
    # Correction of angle according to conditions
    corrected_angle = angle / 2 + pi if angle < 0 else angle / 2
    
    # Assembly of module and phase
    root = sqrt(modulus) * exp(I * corrected_angle)
    
    return root
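
The same branch-selection logic can be checked in plain Python with cmath: squaring \(y_{+}\) must recover the polynomial value. This is a sketch using the document’s \(\lambda\) values, not the Sage code above.

```python
import cmath
import math

# Plain-Python sketch of y(x) above, with the document's lambda values
# (lambda2=12, lambda4=2, lambda6=0.9, lambda8=21, lambda10=10).
def poly(x):
    return x**5 + 12*x**4 + 2*x**3 + 0.9*x**2 + 21*x + 10

def y_plus(x):
    w = complex(poly(x))
    modulus = abs(w)
    angle = cmath.phase(w)
    # Halving the argument (shifted by pi when negative) picks the
    # y_+ branch; squaring must recover the polynomial value.
    corrected = angle/2 + math.pi if angle < 0 else angle/2
    return math.sqrt(modulus) * cmath.exp(1j*corrected)

# The branch choice squares back to the polynomial:
for x in (-2.3, -1.0, 0.5):
    assert abs(y_plus(x)**2 - poly(x)) < 1e-9
```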

From the theory

(3.54)#\[ \wp_{11}(\mathbf{u}) = x_1 + x_2, \quad \wp_{13}(\mathbf{u}) = -x_1 x_2, \]

and for

(3.55)#\[ F(x,z) = \left[2\lambda_{10} + \lambda_8 (x+z)\right] + xz\left[ 2\lambda_6 + \lambda_4(x+z) \right] + x^2 z^2 \left[ 2\lambda_2 + (x+z) \right]\]
(3.56)#\[ \wp_{33}(\mathbf{u}) = \frac{F(x_1,x_2) - 2y_1 y_2}{4(x_1 - x_2)^2}\]
def fun(x,z):
    return 2*l10 + l8*(x+z) + x*z*(2*l6 + l4*(x+z)) + x^2 * z^2 *(2*l2+(x+z))

def p33(x1,x2):
    numerator = fun(x1,x2) - 2*y(x1)*y(x2)
    denominator = 4*(x1-x2)^2
    return numerator/denominator
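
One property worth noting: \(F(x,z)\) in (3.55) is symmetric under \(x \leftrightarrow z\), as a quick plain-Python check confirms (using the document’s \(\lambda\) values):

```python
# F(x, z) from (3.55), written in plain Python with the document's
# lambda values (12, 2, 0.9, 21, 10); it is symmetric in x and z.
def F(x, z):
    return ((2*10 + 21*(x + z))
            + x*z*(2*0.9 + 2*(x + z))
            + x**2 * z**2 * (2*12 + (x + z)))

print(F(1.3, -2.7) - F(-2.7, 1.3))  # 0.0
```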

Let’s recall the branch points

print("Branch points:", *branch_points, sep='\n')
Branch points:
-11.8251161409669
-1.05264146874943
-0.512317960119317
0.695037784917798 - 1.04164546422316*I
0.695037784917798 + 1.04164546422316*I
+Infinity
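
Since the quintic is monic, the five finite branch points above must satisfy Vieta’s formula \(\sum_i e_i = -\lambda_2 = -12\); a plain-Python check on the printed values:

```python
# Vieta check: for the monic quintic x^5 + 12 x^4 + ... the finite
# branch points printed above must sum to -lambda2 = -12.
roots = [-11.8251161409669,
         -1.05264146874943,
         -0.512317960119317,
         complex(0.695037784917798, -1.04164546422316),
         complex(0.695037784917798,  1.04164546422316)]
total = sum(roots)
print(total.real)  # ≈ -12.0
```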

We compare the values of \(\mathbf{u}\) calculated from abel_jacobi() with the values from the theory for arbitrary points \(x_1\) and \(x_2\)

x1 = 6.5
x2 = 5

y1 = y(x1)
y2 = y(x2)


# Divisors
divisor = [(-1, (Infinity,0)),(1, (e5,  0)),(-1, (x1, y1)),(1, (x2, y2))]

# Abel map
aj = S.abel_jacobi(divisor)
# Reversing the order of coordinates
AJ = (aj[1],aj[0])

u1=AJ[0]
u3=AJ[1]

print("P11(u):=",WeierstrassP11(u1, u3))
print("From theory: x1+x2=",x1+ x2)
print()
print("P13(u)=",WeierstrassP13(u1, u3))
print("From theory: -x1*x2=",-x1*x2)
print()     
print("P33(u)=",WeierstrassP33(u1, u3))
print("From theory:",p33(x1,x2) )
P11(u):= 0.184891289907970 - 1.07467781012325*I
From theory: x1+x2= 11.5000000000000
P13(u)= 0.353993474282754 - 0.557285599518726*I
From theory: -x1*x2= -32.5000000000000
P33(u)= 12.4404821020362 + 6.09804458536710*I
From theory: -204.146616551396*sqrt(881/2) + 4284.93055555556

Let’s change the points \(x_1\) and \(x_2\)

x1 = 1.5
x2 = -0.5

y1 = y(x1)
y2 = y(x2)


# Divisors
divisor = [(-1, (Infinity,0)),(1, (e5,  0)),(-1, (x1, y1)),(1, (x2, y2))]

# Abel map
aj = S.abel_jacobi(divisor)
# Reversing the order of coordinates
AJ = (aj[1],aj[0])

u1=AJ[0]
u3=AJ[1]

print("P11(u):=",WeierstrassP11(u1, u3))
print("From theory: x1+x2=",x1+ x2)
print()
print("P13(u)=",WeierstrassP13(u1, u3))
print("From theory: -x1*x2=",-x1*x2)
print()     
print("P33(u)=",WeierstrassP33(u1, u3))
print("From theory:",p33(x1,x2) )
P11(u):= 1.77557997339084 - 1.22425273828305*I
From theory: x1+x2= 1.00000000000000
P13(u)= -0.552540873788430 + 1.23859942939941*I
From theory: -x1*x2= 0.750000000000000
P33(u)= -59.6555863818557 + 27.4989143355881*I
From theory: 2.66403194341274

Let’s change the branch point in the divisor definition

x1 = 1.5
x2 = -0.5

y1 = y(x1)
y2 = y(x2)


# Divisors
divisor = [(-1, (Infinity,0)),(1, (e4,  0)),(-1, (x1, y1)),(1, (x2, y2))]

# Abel map
aj = S.abel_jacobi(divisor)
# Reversing the order of coordinates
AJ = (aj[1],aj[0])

u1=AJ[0]
u3=AJ[1]

print("P11(u):=",WeierstrassP11(u1, u3))
print("From theory: x1+x2=",x1+ x2)
print()
print("P13(u)=",WeierstrassP13(u1, u3))
print("From theory: -x1*x2=",-x1*x2)
print()     
print("P33(u)=",WeierstrassP33(u1, u3))
print("From theory:",p33(x1,x2) )
P11(u):= 1.77557996983274 + 1.22425274112177*I
From theory: x1+x2= 1.00000000000000
P13(u)= -0.552540869826764 - 1.23859943205059*I
From theory: -x1*x2= 0.750000000000000
P33(u)= -59.6555880494280 - 27.4989154654669*I
From theory: 2.66403194341274

Let’s change the branch point in the divisor definition to \(e_3\), which is real

x1 = 1.5
x2 = -0.5

y1 = y(x1)
y2 = y(x2)


# Divisors
divisor = [(-1, (Infinity,0)),(1, (e3,  0)),(-1, (x1, y1)),(1, (x2, y2))]

# Abel map
aj = S.abel_jacobi(divisor)
# Reversing the order of coordinates
AJ = (aj[1],aj[0])

u1=AJ[0]
u3=AJ[1]

print("P11(u):=",WeierstrassP11(u1, u3))
print("From theory: x1+x2=",x1+ x2)
print()
print("P13(u)=",WeierstrassP13(u1, u3))
print("From theory: -x1*x2=",-x1*x2)
print()     
print("P33(u)=",WeierstrassP33(u1, u3))
print("From theory:",p33(x1,x2) )
P11(u):= 0.999818095342272 - 9.42324884789514e-10*I
From theory: x1+x2= 1.00000000000000
P13(u)= 0.749923179704208 + 1.16553522389040e-9*I
From theory: -x1*x2= 0.750000000000000
P33(u)= 15.4500625172065 - 8.65000444605357e-8*I
From theory: 2.66403194341274

OK, let’s change the points \(x_1\) and \(x_2\) for this divisor

x1 = 10.5
x2 = -8.5

y1 = y(x1)
y2 = y(x2)


# Divisors
divisor = [(-1, (Infinity,0)),(1, (e3,  0)),(-1, (x1, y1)),(1, (x2, y2))]

# Abel map
aj = S.abel_jacobi(divisor)
# Reversing the order of coordinates
AJ = (aj[1],aj[0])

u1=AJ[0]
u3=AJ[1]

print("P11(u):=",WeierstrassP11(u1, u3))
print("From theory: x1+x2=",x1+ x2)
print()
print("P13(u)=",WeierstrassP13(u1, u3))
print("From theory: -x1*x2=",-x1*x2)
print()     
print("P33(u)=",WeierstrassP33(u1, u3))
print("From theory:",p33(x1,x2) )
P11(u):= 4.73719784630354 + 4.00053699792235e-7*I
From theory: x1+x2= 2.00000000000000
P13(u)= 103.024851216997 + 6.27970203170847e-6*I
From theory: -x1*x2= 89.2500000000000
P33(u)= 1167.35969687998 + 0.0000704657058179237*I
From theory: 48.3847049517949

Let’s try the remaining branch points in the divisor definition

x1 = 10.5
x2 = -8.5

y1 = y(x1)
y2 = y(x2)


# Divisors
divisor = [(-1, (Infinity,0)),(1, (e2,  0)),(-1, (x1, y1)),(1, (x2, y2))]

# Abel map
aj = S.abel_jacobi(divisor)
# Reversing the order of coordinates
AJ = (aj[1],aj[0])

u1=AJ[0]
u3=AJ[1]

print("P11(u):=",WeierstrassP11(u1, u3))
print("From theory: x1+x2=",x1+ x2)
print()
print("P13(u)=",WeierstrassP13(u1, u3))
print("From theory: -x1*x2=",-x1*x2)
print()     
print("P33(u)=",WeierstrassP33(u1, u3))
print("From theory:",p33(x1,x2) )
P11(u):= 1.63612121389863 - 2.08993484740461e-8*I
From theory: x1+x2= 2.00000000000000
P13(u)= -1.85649109962442 + 4.48041603817728e-9*I
From theory: -x1*x2= 89.2500000000000
P33(u)= -26.4254878930462 + 1.54122091089448e-7*I
From theory: 48.3847049517949
x1 = 10.5
x2 = -8.5

y1 = y(x1)
y2 = y(x2)


# Divisors
divisor = [(-1, (Infinity,0)),(1, (e1,  0)),(-1, (x1, y1)),(1, (x2, y2))]

# Abel map
aj = S.abel_jacobi(divisor)
# Reversing the order of coordinates
AJ = (aj[1],aj[0])

u1=AJ[0]
u3=AJ[1]

print("P11(u):=",WeierstrassP11(u1, u3))
print("From theory: x1+x2=",x1+ x2)
print()
print("P13(u)=",WeierstrassP13(u1, u3))
print("From theory: -x1*x2=",-x1*x2)
print()     
print("P33(u)=",WeierstrassP33(u1, u3))
print("From theory:",p33(x1,x2) )
P11(u):= 83.3574365361692 - 6.51322716094249e-6*I
From theory: x1+x2= 2.00000000000000
P13(u)= 42.2292611986710 - 3.25483494734158e-6*I
From theory: -x1*x2= 89.2500000000000
P33(u)= 33.3170685572608 - 1.97860730399846e-6*I
From theory: 48.3847049517949

Observation 3.3

As for now, there are no clear rules for selecting divisors.

Note

Sage still has a problem with situations where \(y\) has a large imaginary part, e.g. \(x_2=-7\), for which we get \(y_2=82.8492607088319\;i\). But in the physical applications that interest us this does not happen, because \(y\) always belongs to the real axis.

3.10. Bibliography#

[1]

Julia Bernatska. Computation of \$\wp\$-functions on plane algebraic curves. July 2024. URL: https://arxiv.org/abs/2407.05632v3 (visited on 2024-10-17).

[2]

Uniformization of a genus 4 hyperelliptic curve with arbitrary complex branch points. Wolfram Community. URL: https://community.wolfram.com/groups/-/m/t/3243472 (visited on 2024-10-17).

[3]

V.Z. Enolski, E. Hackmann, V. Kagramanova, J. Kunz, and C. Lämmerzahl. Inversion of hyperelliptic integrals of arbitrary genus with application to particle motion in general relativity. Journal of Geometry and Physics, 61(5):899–921, May 2011. URL: https://linkinghub.elsevier.com/retrieve/pii/S0393044011000027 (visited on 2024-10-28), doi:10.1016/j.geomphys.2011.01.001.

[4]

Julia Bernatska. An analogue of Vélu's formulae in genus two. December 2024. arXiv:2412.10284 [math]. URL: http://arxiv.org/abs/2412.10284 (visited on 2025-01-06), doi:10.48550/arXiv.2412.10284.

[5]

Inversion of a general hyperelliptic integral and particle motion in Hořava–Lifshitz black hole space-times. Journal of Mathematical Physics, AIP Publishing. URL: https://pubs.aip.org/aip/jmp/article-abstract/53/1/012504/232886/Inversion-of-a-general-hyperelliptic-integral-and?redirectedFrom=fulltext (visited on 2024-10-17).

[6]

Victor Enolski and Peter Richter. Periods of hyperelliptic integrals expressed in terms of θ-constants by means of Thomae formulae. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 366(1867):1005–1024, June 2007. Publisher: Royal Society. URL: https://royalsocietypublishing.org/doi/abs/10.1098/rsta.2007.2059 (visited on 2024-10-17), doi:10.1098/rsta.2007.2059.

[7]

Julia Bernatska. Reality conditions for the KdV equation and exact quasi-periodic solutions in finite phase spaces. Journal of Geometry and Physics, 206:105322, December 2024. URL: https://linkinghub.elsevier.com/retrieve/pii/S0393044024002237 (visited on 2024-10-28), doi:10.1016/j.geomphys.2024.105322.