Dataclass for solver kwargs #77

Merged (75 commits) on Jan 10, 2025
bf4605f
put if __main__ guard to test
mcocdawc Dec 19, 2024
eb2969d
use h5py contextmanager nearly everywhere
mcocdawc Dec 19, 2024
fa6cc27
changed mbe.StoreBE into attrs defined class
mcocdawc Dec 19, 2024
be6176b
ignore broken link to numpy.float64 in documentation
mcocdawc Dec 19, 2024
86fa44b
fixed a bug from wrong close statement
mcocdawc Dec 19, 2024
97925ff
Merge branch 'h5py_use_contextmanager' into final_scratch_dir_attempt
mcocdawc Dec 19, 2024
3942320
testsuite don't wait for analysis
mcocdawc Dec 19, 2024
499aa5b
added types for molbe BE object
mcocdawc Dec 19, 2024
3492c66
fixed wrong close statements
mcocdawc Dec 19, 2024
35adb0c
fixed scratch-dir for molbe-be
mcocdawc Dec 19, 2024
b0d8817
added typing for molbe.BE.optimize
mcocdawc Dec 19, 2024
1bb1105
renamed _opt.py to opt.py
mcocdawc Dec 19, 2024
b46479b
pass on solver_kwargs to be_func
mcocdawc Dec 19, 2024
bcabc40
added types to be_func
mcocdawc Dec 19, 2024
0517d47
added delete multiple_files function
mcocdawc Dec 19, 2024
a458330
use the new scratch dir in be_func
mcocdawc Dec 19, 2024
5c384c0
added types to molbe.BE.optimize + call self.scratch_dir.cleanup
mcocdawc Dec 19, 2024
f4cb9fd
moved schmidt_decomposition to prevent circular import
mcocdawc Dec 19, 2024
83e6773
added type hints to be_func_parallel
mcocdawc Dec 19, 2024
2cdf8ff
Merge branch 'main' of github.com:troyvvgroup/quemb into final_scratc…
mcocdawc Dec 19, 2024
e24a5e3
fixed typo
mcocdawc Dec 20, 2024
b5ea348
fixed several small errors in the code
mcocdawc Dec 20, 2024
2fce4b9
fixed be_func_parallel calls
mcocdawc Dec 20, 2024
f5255c9
simplified be_func_parallel
mcocdawc Dec 20, 2024
17f443f
added types to run_solver
mcocdawc Dec 20, 2024
f6f93c5
use frag_scratch in be_func_parallel
mcocdawc Dec 20, 2024
14caf5c
Merge branch 'main' of github.com:troyvvgroup/quemb into final_scratc…
mcocdawc Dec 20, 2024
6904c62
use frag_scratch in run_solver
mcocdawc Dec 20, 2024
fe7bf55
added typehints to kbe BE class
mcocdawc Dec 20, 2024
9086c9c
simplified a few boolean expressions
mcocdawc Dec 20, 2024
9251873
the tests should be working now
mcocdawc Dec 20, 2024
a329d32
ensure scratch cleanup for kbe
mcocdawc Dec 20, 2024
28ac432
removed call to self.compute_energy_full (related to #35)
mcocdawc Dec 20, 2024
044707f
write nicer Pool statements
mcocdawc Dec 20, 2024
12bba3b
fixed naming error
mcocdawc Dec 20, 2024
182a015
use more explicit way of freeing memory
mcocdawc Dec 20, 2024
5f851b6
refactored expression
mcocdawc Dec 20, 2024
be0c403
use Tuple[int, ...] for numpy shapes :-(
mcocdawc Dec 20, 2024
b6a187c
refactored WorkDir to use atexit.register for cleanup
mcocdawc Dec 20, 2024
c8cf15a
added better docstring for config
mcocdawc Dec 20, 2024
d372022
require static analysis to be ok for running test suite
mcocdawc Dec 20, 2024
d86d09b
renamed DMRG specific solver kwargs to its proper name
mcocdawc Dec 20, 2024
ece5156
better naming for DMRG stuff
mcocdawc Dec 20, 2024
2a7514a
added types to block2 DMRG function
mcocdawc Dec 20, 2024
fc4dda7
refactor some DMRG arguments
mcocdawc Dec 20, 2024
4d04279
change behaviour of scratch contextmanager
mcocdawc Dec 21, 2024
9e7b26f
fixed the deadlock in the test requirements
mcocdawc Dec 21, 2024
69755ef
added type annotations to lo.py
mcocdawc Dec 23, 2024
f1d316b
added new scratch dir also to ube
mcocdawc Dec 23, 2024
34a1eb9
Merge branch 'final_scratch_dir_attempt' into more_type_annotations
mcocdawc Dec 23, 2024
e818f45
removed some run_solver args that are always true
mcocdawc Dec 23, 2024
57a2c11
simplified boolean expressions in molbe
mcocdawc Dec 23, 2024
07569c0
simplified boolean expressions in kbe
mcocdawc Dec 23, 2024
0577e97
introduced DMRG data class for solver args
mcocdawc Dec 23, 2024
9a7ea7a
Use a solver_args abstract class
mcocdawc Dec 23, 2024
997dc38
finished SHCI solver args dataclass
mcocdawc Dec 23, 2024
2e9c60e
fixed typing issues in examples
mcocdawc Dec 23, 2024
bce7ef7
made solver_args immutable and improved their docstrings
mcocdawc Dec 23, 2024
02e7ac2
incremented the python version for type checking
mcocdawc Dec 23, 2024
5118a0c
Merge branch 'more_type_annotations' into dataclass_for_solver_kwargs
mcocdawc Dec 23, 2024
128fffb
try floating instead of float64
mcocdawc Dec 23, 2024
cbc1d7d
don't overspecify the types
mcocdawc Dec 23, 2024
0204600
Merge branch 'more_type_annotations' into dataclass_for_solver_kwargs
mcocdawc Dec 23, 2024
6e7dc84
added correct link to doc
mcocdawc Dec 23, 2024
570a9d2
Merge branch 'more_type_annotations' into dataclass_for_solver_kwargs
mcocdawc Dec 23, 2024
e88d6e4
simplified DMRG args
mcocdawc Dec 23, 2024
271b1b0
use a factory for mutable default attributes
mcocdawc Dec 23, 2024
4198d9b
removed unused setting and changed default for SCRATCH to /tmp
mcocdawc Dec 23, 2024
214b091
added types to kbe.BE.optimize
mcocdawc Dec 23, 2024
d0f4c26
Merge branch 'main' of github.com:troyvvgroup/quemb into dataclass_fo…
mcocdawc Jan 9, 2025
a29ec7a
fixed small error
mcocdawc Jan 9, 2025
6d9bdb7
use tempfile.gettempdir()
mcocdawc Jan 9, 2025
5601fa4
fixed J0[-1, -1] also for kbe
mcocdawc Jan 9, 2025
8f080e7
addressed Shaun's comment
mcocdawc Jan 10, 2025
65ec4b4
properly hide _DMRG_Args and _SHCI_Args
mcocdawc Jan 10, 2025
9 changes: 5 additions & 4 deletions example/molbe_dmrg_block2.py
Original file line number Diff line number Diff line change
@@ -7,6 +7,7 @@
from pyscf import cc, fci, gto, scf

from quemb.molbe import BE, fragpart
from quemb.molbe.solver import DMRG_ArgsUser

# We'll consider the dissociation curve for a 1D chain of 8 H-atoms:
num_points = 3
@@ -52,7 +53,7 @@
# Next, run BE-DMRG with default parameters and maxM=100.
mybe.oneshot(
solver="block2", # or 'DMRG', 'DMRGSCF', 'DMRGCI'
DMRG_solver_kwargs=dict(
solver_args=DMRG_ArgsUser(
maxM=100, # Max fragment bond dimension
force_cleanup=True, # Remove all fragment DMRG tmpfiles
),
@@ -100,7 +101,7 @@
solver="block2", # or 'DMRG', 'DMRGSCF', 'DMRGCI'
max_iter=60, # Max number of sweeps
only_chem=True,
DMRG_solver_kwargs=dict(
solver_args=DMRG_ArgsUser(
startM=20, # Initial fragment bond dimension (1st sweep)
maxM=200, # Maximum fragment bond dimension
twodot_to_onedot=50, # Sweep num to switch from two- to one-dot algo.
)

# Or, alternatively, we can construct a full schedule by hand:
schedule = {
schedule: dict[str, list[int] | list[float]] = {
"scheduleSweeps": [0, 10, 20, 30, 40, 50], # Sweep indices
"scheduleMaxMs": [25, 50, 100, 200, 500, 500], # Sweep maxMs
"scheduleTols": [1e-5, 1e-5, 1e-6, 1e-6, 1e-8, 1e-8], # Sweep Davidson tolerances
mybe.optimize(
solver="block2",
only_chem=True,
DMRG_solver_kwargs=dict(
solver_args=DMRG_ArgsUser(
schedule_kwargs=schedule,
block_extra_keyword=["fiedler"],
force_cleanup=True,
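One commit above ("refactored WorkDir to use atexit.register for cleanup") changes how scratch directories are reclaimed: registering the cleanup at construction time means the directory is removed at interpreter exit even when no explicit cleanup call is reached. A simplified sketch of that idea, assuming a pared-down WorkDir (quemb's real class has more behaviour):

```python
import atexit
import shutil
import tempfile
from pathlib import Path

class WorkDirSketch:
    """Simplified stand-in for quemb's WorkDir; illustrative only."""

    def __init__(self, root=None, do_cleanup=True):
        self.path = Path(tempfile.mkdtemp(dir=root))
        if do_cleanup:
            # Registered once at construction: the directory is removed
            # at interpreter exit even if cleanup() is never called.
            atexit.register(self.cleanup)

    def __truediv__(self, other):
        # mirrors the `scratch_dir / dname` usage seen in the diff
        return self.path / other

    def cleanup(self):
        shutil.rmtree(self.path, ignore_errors=True)

w = WorkDirSketch()
frag = w / "frag_0"
```

Because `rmtree` is called with `ignore_errors=True`, an explicit early `cleanup()` followed by the atexit-triggered one is harmless.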
69 changes: 25 additions & 44 deletions src/quemb/kbe/pbe.py
@@ -7,6 +7,7 @@
import h5py
import numpy
from libdmet.basis_transform.eri_transform import get_emb_eri_fast_gdf
from numpy import array, floating
from pyscf import ao2mo, pbc
from pyscf.pbc import df, gto
from pyscf.pbc.df.df_jk import _ewald_exxdiv_for_G0
@@ -18,13 +19,13 @@
from quemb.molbe.be_parallel import be_func_parallel
from quemb.molbe.helper import get_eri, get_scfObj, get_veff
from quemb.molbe.opt import BEOPT
from quemb.molbe.solver import be_func
from quemb.molbe.solver import UserSolverArgs, be_func
from quemb.shared.external.optqn import (
get_be_error_jacobian as _ext_get_be_error_jacobian,
)
from quemb.shared.helper import copy_docstring
from quemb.shared.manage_scratch import WorkDir
from quemb.shared.typing import KwargDict, PathLike
from quemb.shared.typing import Matrix, PathLike


class BE(Mixin_k_Localize):
@@ -58,12 +59,8 @@ def __init__(
save: bool = False,
restart_file: PathLike = "storebe.pk",
save_file: PathLike = "storebe.pk",
hci_pt: bool = False,
nproc: int = 1,
ompnum: int = 4,
hci_cutoff: float = 0.001,
ci_coeff_cutoff: float | None = None,
select_cutoff: float | None = None,
iao_val_core: bool = True,
exxdiv: str = "ewald",
kpts: list[list[float]] | None = None,
@@ -159,12 +156,6 @@ def __init__(
self.nkpt = nkpts_
self.kpts = kpts

# HCI parameters
self.hci_cutoff = hci_cutoff
self.ci_coeff_cutoff = ci_coeff_cutoff
self.select_cutoff = select_cutoff
self.hci_pt = hci_pt

if not restart:
self.mo_energy = mf.mo_energy
mf.exxdiv = None
@@ -325,17 +316,17 @@

def optimize(
self,
solver="MP2",
method="QN",
only_chem=False,
use_cumulant=True,
conv_tol=1.0e-6,
relax_density=False,
J0=None,
nproc=1,
ompnum=4,
max_iter=500,
):
solver: str = "MP2",
method: str = "QN",
only_chem: bool = False,
use_cumulant: bool = True,
conv_tol: float = 1.0e-6,
relax_density: bool = False,
J0: Matrix[floating] | None = None,
nproc: int = 1,
ompnum: int = 4,
max_iter: int = 500,
) -> None:
"""BE optimization function

Interfaces BEOPT to perform bootstrap embedding optimization.
@@ -393,20 +384,17 @@
conv_tol=conv_tol,
only_chem=only_chem,
use_cumulant=use_cumulant,
hci_cutoff=self.hci_cutoff,
ci_coeff_cutoff=self.ci_coeff_cutoff,
relax_density=relax_density,
select_cutoff=self.select_cutoff,
solver=solver,
ebe_hf=self.ebe_hf,
)

if method == "QN":
# Prepare the initial Jacobian matrix
if only_chem:
J0 = [[0.0]]
J0 = array([[0.0]])
J0 = self.get_be_error_jacobian(jac_solver="HF")
J0 = [[J0[-1, -1]]]
J0 = J0[-1:, -1:]
else:
J0 = self.get_be_error_jacobian(jac_solver="HF")

raise ValueError("This optimization method for BE is not supported")

@copy_docstring(_ext_get_be_error_jacobian)
def get_be_error_jacobian(self, jac_solver="HF"):
def get_be_error_jacobian(self, jac_solver: str = "HF") -> Matrix[floating]:
return _ext_get_be_error_jacobian(self.Nfrag, self.Fobjs, jac_solver)

def print_ini(self):
def print_ini(self) -> None:
"""
Print initialization banner for the kBE calculation.
"""
@@ -683,7 +671,7 @@ def oneshot(
use_cumulant: bool = True,
nproc: int = 1,
ompnum: int = 4,
DMRG_solver_kwargs: KwargDict | None = None,
solver_args: UserSolverArgs | None = None,
) -> None:
"""
Perform a one-shot bootstrap embedding calculation.
@@ -711,14 +699,11 @@
solver,
self.enuc,
nproc=ompnum,
use_cumulant=use_cumulant,
eeval=True,
return_vec=False,
hci_cutoff=self.hci_cutoff,
ci_coeff_cutoff=self.ci_coeff_cutoff,
select_cutoff=self.select_cutoff,
scratch_dir=self.scratch_dir,
DMRG_solver_kwargs=DMRG_solver_kwargs,
solver_args=solver_args,
use_cumulant=use_cumulant,
return_vec=False,
)
else:
rets = be_func_parallel(
self.Nocc,
solver,
self.enuc,
eeval=True,
nproc=nproc,
ompnum=ompnum,
scratch_dir=self.scratch_dir,
solver_args=solver_args,
use_cumulant=use_cumulant,
eeval=True,
return_vec=False,
hci_cutoff=self.hci_cutoff,
ci_coeff_cutoff=self.ci_coeff_cutoff,
select_cutoff=self.select_cutoff,
scratch_dir=self.scratch_dir,
)

print("-----------------------------------------------------", flush=True)
flush=True,
)

self.ebe_tot = rets[0]

def update_fock(self, heff=None):
"""
Update the Fock matrix for each fragment with the effective Hamiltonian.
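The `J0` change in `optimize` above swaps a nested Python list for numpy slicing: `[[J0[-1, -1]]]` builds a plain list of lists around a single element, while `J0[-1:, -1:]` keeps a genuine 1×1 numpy array with the original dtype. A quick illustration of the difference (assuming numpy is available):

```python
import numpy as np

J0 = np.arange(9.0).reshape(3, 3)

# old style: a plain nested list holding one scalar
old = [[J0[-1, -1]]]

# new style: slicing keeps a 1x1 numpy array (a view, dtype preserved)
new = J0[-1:, -1:]

assert old == [[8.0]]
assert new.shape == (1, 1)
assert new[0, 0] == 8.0
```

Keeping the result as an array means downstream code can rely on `.shape`, dtype, and matrix indexing without special-casing the `only_chem` path.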
66 changes: 35 additions & 31 deletions src/quemb/molbe/be_parallel.py
@@ -17,6 +17,9 @@
)
from quemb.molbe.pfrag import Frags
from quemb.molbe.solver import (
SHCI_ArgsUser,
UserSolverArgs,
_SHCI_Args,
make_rdm1_ccsd_t1,
make_rdm2_urlx,
solve_ccsd,
@@ -46,15 +49,13 @@ def run_solver(
eri_file: str = "eri_file.h5",
veff: Matrix[float64] | None = None,
veff0: Matrix[float64] | None = None,
hci_cutoff: float = 0.001,
ci_coeff_cutoff: float | None = None,
select_cutoff: float | None = None,
ompnum: int = 4,
writeh1: bool = False,
eeval: bool = True,
ret_vec: bool = False,
use_cumulant: bool = True,
relax_density: bool = False,
solver_args: UserSolverArgs | None = None,
):
"""
Run a quantum chemistry solver to compute the reduced density matrices.
@@ -67,9 +68,12 @@
Initial guess for the density matrix.
scratch_dir :
The scratch dir root.
Fragment files will be stored in :code:`scratch_dir / dname`.
dname :
Directory name for storing intermediate files.
Fragment files will be stored in :code:`scratch_dir / dname`.
scratch_dir :
The scratch directory.
Fragment files will be stored in :code:`scratch_dir / dname`.
nao :
Number of atomic orbitals.
nocc :
@@ -95,12 +99,12 @@
Number of OpenMP threads. Default is 4.
writeh1 :
If True, write the one-electron integrals to a file. Default is False.
use_cumulant :
If True, use the cumulant approximation for RDM2. Default is True.
eeval :
If True, evaluate the electronic energy. Default is True.
ret_vec :
If True, return vector with error and rdms. Default is True.
use_cumulant :
If True, use the cumulant approximation for RDM2. Default is True.
relax_density :
If True, use CCSD relaxed density. Default is False

@@ -144,19 +148,17 @@
# pylint: disable-next=E0611
from pyscf import hci # noqa: PLC0415 # hci is an optional module

assert isinstance(solver_args, SHCI_ArgsUser)
SHCI_args = _SHCI_Args.from_user_input(solver_args)

nao, nmo = mf_.mo_coeff.shape
eri = ao2mo.kernel(mf_._eri, mf_.mo_coeff, aosym="s4", compact=False).reshape(
4 * ((nmo),)
)
ci_ = hci.SCI(mf_.mol)
if select_cutoff is None and ci_coeff_cutoff is None:
select_cutoff = hci_cutoff
ci_coeff_cutoff = hci_cutoff
elif select_cutoff is None or ci_coeff_cutoff is None:
raise ValueError

ci_.select_cutoff = select_cutoff
ci_.ci_coeff_cutoff = ci_coeff_cutoff
ci_.select_cutoff = SHCI_args.select_cutoff
ci_.ci_coeff_cutoff = SHCI_args.ci_coeff_cutoff

nelec = (nocc, nocc)
h1_ = multi_dot((mf_.mo_coeff.T, h1, mf_.mo_coeff))
@@ -174,6 +176,9 @@

frag_scratch = WorkDir(scratch_dir / dname)

assert isinstance(solver_args, SHCI_ArgsUser)
SHCI_args = _SHCI_Args.from_user_input(solver_args)

nao, nmo = mf_.mo_coeff.shape
nelec = (nocc, nocc)
mch = shci.SHCISCF(mf_, nmo, nelec, orbpath=frag_scratch)
Expand All @@ -182,7 +187,7 @@ def run_solver(
mch.fcisolver.nPTiter = 0
mch.fcisolver.sweep_iter = [0]
mch.fcisolver.DoRDM = True
mch.fcisolver.sweep_epsilon = [hci_cutoff]
mch.fcisolver.sweep_epsilon = [solver_args.hci_cutoff]
mch.fcisolver.scratchDirectory = frag_scratch
if not writeh1:
mch.fcisolver.restart = True
@@ -193,6 +198,9 @@
# pylint: disable-next=E0611
from pyscf import cornell_shci # noqa: PLC0415 # optional module

assert isinstance(solver_args, SHCI_ArgsUser)
SHCI_args = _SHCI_Args.from_user_input(solver_args)

frag_scratch = WorkDir(scratch_dir / dname)

nao, nmo = mf_.mo_coeff.shape
Expand All @@ -208,7 +216,7 @@ def run_solver(
ci.runtimedir = frag_scratch
ci.restart = True
ci.config["var_only"] = True
ci.config["eps_vars"] = [hci_cutoff]
ci.config["eps_vars"] = [solver_args.hci_cutoff]
ci.config["get_1rdm_csv"] = True
ci.config["get_2rdm_csv"] = True
ci.kernel(h1, eri, nmo, nelec)
@@ -382,16 +390,14 @@
solver: str,
enuc: float, # noqa: ARG001
scratch_dir: WorkDir,
only_chem: bool = False,
solver_args: UserSolverArgs | None,
nproc: int = 1,
ompnum: int = 4,
only_chem: bool = False,
relax_density: bool = False,
use_cumulant: bool = True,
eeval: bool = True,
return_vec: bool = True,
hci_cutoff: float = 0.001,
ci_coeff_cutoff: float | None = None,
select_cutoff: float | None = None,
eeval: bool = False,
return_vec: bool = False,
writeh1: bool = False,
):
"""
@@ -416,21 +422,21 @@
'FCI', 'HCI', 'SHCI', and 'SCI'.
enuc :
Nuclear component of the energy.
scratch_dir :
Scratch directory root
only_chem :
Whether to perform chemical potential optimization only.
Refer to bootstrap embedding literature. Defaults to False.
nproc :
Total number of processors assigned for the optimization. Defaults to 1.
When nproc > 1, Python multithreading is invoked.
ompnum :
If nproc > 1, sets the number of cores for OpenMP parallelization.
Defaults to 4.
use_cumulant :
Use cumulant energy expression. Defaults to True
only_chem :
Whether to perform chemical potential optimization only.
Refer to bootstrap embedding literature. Defaults to False.
eeval :
Whether to evaluate energies. Defaults to False.
scratch_dir :
Scratch directory root
use_cumulant :
Use cumulant energy expression. Defaults to True
return_vec :
Whether to return the error vector. Defaults to False.
writeh1 :
@@ -472,15 +478,13 @@
fobj.eri_file,
fobj.veff if not use_cumulant else None,
fobj.veff0,
hci_cutoff,
ci_coeff_cutoff,
select_cutoff,
ompnum,
writeh1,
eeval,
return_vec,
use_cumulant,
relax_density,
solver_args,
],
)
