Merge branch 'dev' into phase_contrast
gvarnavi authored Jan 16, 2025
2 parents b87e71b + 966a41c commit d7ebd28
Showing 6 changed files with 131 additions and 44 deletions.
16 changes: 8 additions & 8 deletions CONTRIBUTORS.md
@@ -20,11 +20,11 @@ There are many ways to contribute to py4DSTEM, including:

### Coding Guidelines

- * **Code style:** py4DSTEM uses the black code formatter and flake8 linter. All code must pass these checks without error before it can be merged. We suggest using `pre-commit` to help ensure any code commited follows these practices, checkout the [setting up developer environment section below](#install). We also try to abide by PEP8 coding style guide where possible.
+ * **Code style:** py4DSTEM uses the black code formatter and flake8 linter. All code must pass these checks without error before it can be merged. We suggest using `pre-commit` to help ensure any code committed follows these practices, checkout the [setting up developer environment section below](#install). We also try to abide by PEP8 coding style guide where possible.

* **Documentation:** All code should be well-documented, and use Numpy style docstrings. Use docstrings to document functions and classes, add comments to explain complex code both blocks and individual lines, and use informative variable names.

* **Testing:** Ideally all new code should be accompanied by tests using pyTest framework; at the least we require examples of old and new behaviour caused by the PR. For bug fixes this can be a block of code which currently fails and works with the proposed changes. For new workflows or extensive feature additions, please also include a Jupyter notebook demonstrating the changes for an entire workflow i.e. from loading the input data to visualizing and saving any processed results.

* **Dependencies:** New dependencies represent a significant change to the package, and any PRs which add new dependencies will require discussion and agreement from the development team. If a new dependency is required, please prioritize adding dependencies that are actively maintained, have permissive installation requirements, and are accessible through both pip and conda.
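For the testing guideline above, a PR-sized bug-fix test could look like the following sketch. `normalize_counts` is a hypothetical stand-in for whatever function a PR fixes, not a py4DSTEM API; only the pytest pattern itself is the point.

```python
import numpy as np
import pytest


def normalize_counts(counts):
    # Hypothetical function under test -- stands in for the code a PR fixes.
    total = counts.sum()
    if total == 0:
        raise ValueError("cannot normalize an all-zero pattern")
    return counts / total


def test_normalize_counts_sums_to_one():
    counts = np.array([1.0, 3.0, 6.0])
    assert np.isclose(normalize_counts(counts).sum(), 1.0)


def test_normalize_counts_rejects_all_zero_pattern():
    # Old behaviour (hypothetically) returned NaNs here; the fixed version raises.
    with pytest.raises(ValueError):
        normalize_counts(np.zeros(4))
```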

@@ -107,20 +107,20 @@ You can now make changes to the code and test them using your favorite Python IDE
cd <path-to-your-fork-of-the-py4DSTEM-git-repo> # go to your py4DSTEM repo
pre-commit install
```

This will set up pre-commit to work on this repo by creating/changing a file in .git/hooks/pre-commit, which tells `pre-commit` to automatically run flake8 and black when you try to commit code. It won't affect any other repos.
**_extra tips_:**
```bash
# You can call pre-commit manually at any time without committing
pre-commit run # will check any staged files
pre-commit run -a # will run on all files in repo
# you can bypass the hook and commit files without the checks
# (this isn't best practice and should be avoided, but there are times it can be useful)

git add file # stage file as usual
git commit -m "your commit message" --no-verify # commit without running checks
git push # push to repo.
```
1 change: 0 additions & 1 deletion py4DSTEM/io/filereaders/read_mib.py
@@ -163,7 +163,6 @@ def scan_size(path, scan):
    header_path = path[:-3] + "hdr"
    result = {}
    if os.path.exists(header_path):
-
        with open(header_path, encoding="UTF-8") as f:
            for line in f:
                k, v = line.split("\t", 1)
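The hunk above only drops a stray blank line, but for context, the surrounding loop parses the tab-separated `.hdr` file into a dictionary. A standalone sketch of that pattern (the file name, the `"\t" in line` guard, and the `strip()` calls are illustrative additions, not necessarily what `read_mib.py` does):

```python
import os

header_path = "example.hdr"  # hypothetical header file
result = {}
if os.path.exists(header_path):
    with open(header_path, encoding="UTF-8") as f:
        for line in f:
            if "\t" not in line:
                continue  # skip lines without a key/value separator
            k, v = line.split("\t", 1)  # split on the first tab only
            result[k.strip()] = v.strip()
print(result)
```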
7 changes: 1 addition & 6 deletions py4DSTEM/process/diffraction/WK_scattering_factors.py
@@ -174,14 +174,9 @@ def compute_WK_factor(
    B1 = B / (4.0 * np.pi) ** 2

    for jj in range(4):
-        Fphon += (
-            A1[jj]
-            * A1[jj]
-            * (DWF * RI1(B1[jj], B1[jj], G) - RI2(B1[jj], B1[jj], G, UL))
-        )
        for ii in range(jj + 1):
            Fphon += (
-                2.0
+                (2.0 if jj != ii else 1.0)
                * A1[jj]
                * A1[ii]
                * (DWF * RI1(B1[ii], B1[jj], G) - RI2(B1[ii], B1[jj], G, UL))
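The refactored inner loop above weights off-diagonal (`ii != jj`) terms by 2 and diagonal terms by 1, which is exactly a full symmetric double sum written over the lower triangle. A quick standalone check of that identity (toy numbers; `A` and `K` stand in for the `A1` coefficients and the `RI1`/`RI2` kernel, which this sketch assumes is symmetric in its first two arguments):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random(4)
K = rng.random((4, 4))
K = 0.5 * (K + K.T)  # symmetric kernel, like RI1/RI2 with swapped B1 arguments

# Full double sum over all (i, j) pairs
full_sum = sum(A[i] * A[j] * K[i, j] for i in range(4) for j in range(4))

# Lower-triangle sum with the same weighting as the refactored loop
triangular_sum = sum(
    (2.0 if jj != ii else 1.0) * A[jj] * A[ii] * K[ii, jj]
    for jj in range(4)
    for ii in range(jj + 1)
)

assert np.isclose(full_sum, triangular_sum)
```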
124 changes: 111 additions & 13 deletions py4DSTEM/process/diffraction/digital_dark_field.py
@@ -334,7 +334,7 @@ def pointlist_to_array(
if True, applies rotational calibration to bragg_peaks
rphi: bool
if True, generates two extra columns of Qr and Qphi for addressing in polar
- coordinates, Qphi is the angle anticlockwise from horizontal to the right
+ coordinates, Qphi is the angle in degrees anticlockwise from horizontal to the right
Returns
----------
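For reference, the polar columns described above amount to a Cartesian-to-polar conversion along these lines. This is an illustrative sketch only; the exact axis orientation and angle range py4DSTEM uses are assumptions here, not taken from the source:

```python
import numpy as np

qx = np.array([1.0, 0.0, -1.0])
qy = np.array([0.0, 1.0, 1.0])

qr = np.hypot(qx, qy)                        # radial distance |q|
qphi = np.degrees(np.arctan2(qy, qx)) % 360  # degrees anticlockwise from "horizontal to the right"
print(qr, qphi)
```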
@@ -482,7 +482,7 @@ def DDFimage(points_array, aperture_positions, Rshape=None, tol=1):
aperture_position = aperture_positions[aperture_index]
intensities = np.vstack(
(
- points_array[:, 2:].T,
+ points_array[:, 2:5].T,
pointlist_differences(aperture_position, points_array),
)
).T
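The slice change above matters because, with `rphi=True`, the points array carries two extra polar columns. A minimal illustration of why only columns 2–4 should feed the intensity stack (the single made-up row follows the column layout documented below):

```python
import numpy as np

# Columns: qx, qy, I, Rx, Ry, qr, qphi
row = np.array([[0.1, 0.2, 10.0, 3, 7, 2.9, 15.0]])

assert row[:, 2:5].shape[1] == 3  # I, Rx, Ry only
assert row[:, 2:].shape[1] == 5   # would also drag in qr and qphi
```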
@@ -494,47 +494,145 @@ def DDFimage(points_array, aperture_positions, Rshape=None, tol=1):
return image


-def DDF_radial_image(points_array, radius, Rshape, tol=1):
+def radial_filtered_array(points_array_w_rphi, radius, tol=1):
"""
Calculates a Filtered points array from a list of detected diffraction peak positions in a points_array
matching a specific qr radius, within a defined matching tolerance
Parameters
----------
points_array_w_rphi: numpy array
as produced by pointlist_to_array with rphi=True and defined in docstring for that function
radius: float
the radius of diffraction spot you wish to filter by in pixels or calibrated units
tol: float
the tolerance in pixels or calibrated units for a point of qr in the points_array to be considered to match to the radius
Returns
----------
radial_filtered_points_array: numpy array
This will be a 2D numpy array of n points x 7 columns:
qx
qy
I
Rx
Ry
qr
qphi
"""
radial_filtered_points_array = np.delete(
points_array_w_rphi,
np.where(np.abs(points_array_w_rphi[:, 5] - radius) > tol),
axis=0,
)
return radial_filtered_points_array
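A toy check of the tolerance filter implemented above (made-up values; column 5 holds qr, per the docstring). The `np.delete`/`np.where` form used in the function keeps exactly the rows a boolean mask on `|qr - radius| <= tol` would keep:

```python
import numpy as np

# Columns: qx, qy, I, Rx, Ry, qr, qphi (values are made up)
pts = np.array(
    [
        [0.1, 0.2, 10.0, 0, 0, 2.90, 15.0],  # |qr - 3.0| = 0.10 <= tol -> kept
        [0.3, 0.1, 20.0, 0, 1, 5.00, 40.0],  # far from the ring -> dropped
        [0.2, 0.4, 30.0, 1, 0, 3.05, 80.0],  # kept
    ]
)
radius, tol = 3.0, 0.2

kept = np.delete(pts, np.where(np.abs(pts[:, 5] - radius) > tol), axis=0)
assert np.array_equal(kept, pts[np.abs(pts[:, 5] - radius) <= tol])
```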


def DDF_radial_image(points_array_w_rphi, radius, Rshape, tol=1):
"""
Calculates a Digital Dark Field image from a list of detected diffraction peak positions in a points_array matching a specific qr radius, within a defined matching tolerance
Parameters
----------
- points_array: numpy array
- as produced by pointlist_to_array and defined in docstring for that function, must be the version with r and phi included
+ points_array_w_rphi: numpy array
+ as produced by pointlist_to_array with rphi=True and defined in docstring for that function
radius: float
the radius of diffraction spot you wish to image in pixels or calibrated units
Rshape: tuple, list, array
a 2 element vector giving the real space dimensions. If not specified, this is determined from the max along points_array
tol: float
- the tolerance in pixels or calibrated units for a point in the points_array to be considered to match to an aperture position in the aperture_positions array
+ the tolerance in pixels or calibrated units for a point of qr in the points_array to be considered to match to the radius
Returns
----------
- image: numpy array
+ radialimage: numpy array
2D numpy array with dimensions determined by Rshape
"""

if Rshape is None:
Rshape = (
- np.max(np.max(points_array[:, 3])).astype("int") + 1,
- np.max(np.max(points_array[:, 4])).astype("int") + 1,
+ np.max(np.max(points_array_w_rphi[:, 3])).astype("int") + 1,
+ np.max(np.max(points_array_w_rphi[:, 4])).astype("int") + 1,
)

- points_array_edit = np.delete(
- points_array, np.where(np.abs(points_array[:, 5] - radius) > tol), axis=0
+ radial_filtered_points_array = radial_filtered_array(
+ points_array_w_rphi, radius, tol
)

radialimage = np.zeros(shape=Rshape)

for i in range(Rshape[0]):
for j in range(Rshape[1]):
radialimage[i, j] = np.where(
np.logical_and(
- points_array_edit[:, 3] == i, points_array_edit[:, 4] == j
+ radial_filtered_points_array[:, 3] == i,
+ radial_filtered_points_array[:, 4] == j,
),
- points_array_edit[:, 2],
+ radial_filtered_points_array[:, 2],
0,
).sum()

return radialimage
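The per-pixel double loop above can also be written as a single vectorized accumulation; a sketch of an equivalent form (illustration only, not a drop-in replacement from the library -- it assumes the Rx/Ry columns are integer scan indices within Rshape, as in the docstring):

```python
import numpy as np

def radial_image_vectorized(filtered_points, Rshape):
    # filtered_points columns: qx, qy, I, Rx, Ry, qr, qphi
    image = np.zeros(Rshape)
    rx = filtered_points[:, 3].astype(int)
    ry = filtered_points[:, 4].astype(int)
    # Accumulate each peak's intensity into its scan-position pixel
    np.add.at(image, (rx, ry), filtered_points[:, 2])
    return image
```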


def DDFradialazimuthimage(points_array_w_rphi, radius, phi0, phi1, Rshape, tol=1):
"""
Calculates a Digital Dark Field image from a list of detected diffraction peak positions in a points_array
matching a specific qr radius, within a defined matching tolerance, and only within a defined azimuthal range
Parameters
----------
points_array_w_rphi: numpy array
as produced by pointlist_to_array with rphi=True and defined in docstring for that function
radius: float
the radius of diffraction spot you wish to image in pixels or calibrated units
phi0: float
Angle in degrees anticlockwise from horizontal-right for setting minimum qphi for inclusion in the image calculation
phi1: float
Angle in degrees anticlockwise from horizontal-right for setting maximum qphi for inclusion in the image calculation
Rshape: tuple, list, array
a 2 element vector giving the real space dimensions. If not specified, this is determined from the max along points_array
tol: float
the tolerance in pixels or calibrated units for a point of qr in the points_array to be considered to match to the radius
Returns
----------
image: numpy array
2D numpy array with dimensions determined by Rshape
"""
if Rshape is None:
Rshape = (
np.max(np.max(points_array_w_rphi[:, 3])).astype("int") + 1,
np.max(np.max(points_array_w_rphi[:, 4])).astype("int") + 1,
)

radial_filtered_points_array = radial_filtered_array(
points_array_w_rphi, radius, tol
)

rphi_filtered_points_array = np.delete(
radial_filtered_points_array,
np.where(
np.logical_or(
radial_filtered_points_array[:, 6] < phi0,
radial_filtered_points_array[:, 6] >= phi1,
)
),
axis=0,
)
radiusazimuthimage = np.zeros(shape=Rshape)

for i in range(Rshape[0]):
for j in range(Rshape[1]):
radiusazimuthimage[i, j] = np.where(
np.logical_and(
rphi_filtered_points_array[:, 3] == i,
rphi_filtered_points_array[:, 4] == j,
),
rphi_filtered_points_array[:, 2],
0,
).sum()
return radiusazimuthimage
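Taken together, the new functions support a workflow along the following lines. This is a hypothetical usage sketch: the import path is read off the file path in this diff, the synthetic points array just mimics the documented column layout, and in practice `pointlist_to_array(bragg_peaks, ..., rphi=True)` would supply the real data:

```python
import numpy as np
from py4DSTEM.process.diffraction.digital_dark_field import (
    DDF_radial_image,
    DDFradialazimuthimage,
)

# Synthetic 7-column points array (qx, qy, I, Rx, Ry, qr, qphi) on a 32x32 scan,
# with peaks scattered over two rings at |q| = 1.8 and 2.7
rng = np.random.default_rng(0)
n = 500
rx = rng.integers(0, 32, n)
ry = rng.integers(0, 32, n)
qphi = rng.uniform(0, 360, n)
qr = rng.choice([1.8, 2.7], n) + rng.normal(0, 0.02, n)
qx = qr * np.cos(np.deg2rad(qphi))
qy = qr * np.sin(np.deg2rad(qphi))
intensity = rng.uniform(0.5, 1.5, n)
points = np.column_stack([qx, qy, intensity, rx, ry, qr, qphi])

# Dark-field image from the ring near |q| = 2.7 ...
ring_image = DDF_radial_image(points, radius=2.7, Rshape=(32, 32), tol=0.1)

# ... and from just the 30-60 degree azimuthal wedge of that ring
wedge_image = DDFradialazimuthimage(points, 2.7, 30, 60, Rshape=(32, 32), tol=0.1)
```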
21 changes: 11 additions & 10 deletions py4DSTEM/process/phase/parallax.py
@@ -979,15 +979,17 @@ def guess_common_aberrations(
sampling = 1 / (
np.array(self._reciprocal_sampling) * self._region_of_interest_shape
)
-        aberrations_basis, aberrations_basis_du, aberrations_basis_dv = (
-            calculate_aberration_gradient_basis(
-                aberrations_mn,
-                sampling,
-                self._region_of_interest_shape,
-                self._wavelength,
-                rotation_angle=np.deg2rad(rotation_angle_deg),
-                xp=xp,
-            )
+        (
+            aberrations_basis,
+            aberrations_basis_du,
+            aberrations_basis_dv,
+        ) = calculate_aberration_gradient_basis(
+            aberrations_mn,
+            sampling,
+            self._region_of_interest_shape,
+            self._wavelength,
+            rotation_angle=np.deg2rad(rotation_angle_deg),
+            xp=xp,
        )

# shifts
@@ -3198,7 +3200,6 @@ def show_shifts(
shifts = shifts_px * scale_arrows * xp.array(self._reciprocal_sampling)

if plot_rotated_shifts and hasattr(self, "rotation_Q_to_R_rads"):

if figax is None:
figsize = kwargs.pop("figsize", (8, 4))
fig, ax = plt.subplots(1, 2, figsize=figsize)
6 changes: 0 additions & 6 deletions py4DSTEM/process/phase/xray_magnetic_ptychography.py
@@ -892,7 +892,6 @@ def _gradient_descent_adjoint(

match (self._recon_mode, self._active_measurement_index):
case (0, 0) | (1, 0): # reverse

magnetic_conj = xp.exp(1.0j * xp.conj(object_patches[1]))

probe_magnetic_abs = xp.abs(shifted_probes * magnetic_conj)
@@ -930,7 +929,6 @@
)

if not fix_probe:

electrostatic_magnetic_abs = xp.abs(
electrostatic_conj * magnetic_conj
)
@@ -962,7 +960,6 @@
)

case (0, 1) | (1, 2) | (2, 1): # forward

magnetic_conj = xp.exp(-1.0j * xp.conj(object_patches[1]))

probe_magnetic_abs = xp.abs(shifted_probes * magnetic_conj)
@@ -992,7 +989,6 @@
)

if not fix_probe:

electrostatic_magnetic_abs = xp.abs(
electrostatic_conj * magnetic_conj
)
@@ -1024,7 +1020,6 @@
)

case (1, 1) | (2, 0): # neutral

probe_abs = xp.abs(shifted_probes)
probe_normalization = self._sum_overlapping_patches_bincounts(
probe_abs**2,
Expand All @@ -1047,7 +1042,6 @@ def _gradient_descent_adjoint(
)

if not fix_probe:

electrostatic_abs = xp.abs(electrostatic_conj)
electrostatic_normalization = xp.sum(
electrostatic_abs**2,
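The adjoint update above dispatches on the pair (recon_mode, measurement_index) with structural pattern matching; the blank lines removed in this diff are purely cosmetic. For readers less familiar with tuple and or-patterns, a minimal standalone sketch of that dispatch style (toy labels only, none of the ptychography math; requires Python 3.10+):

```python
def update_direction(recon_mode: int, measurement_index: int) -> str:
    # Toy version of the (mode, index) dispatch in _gradient_descent_adjoint;
    # the pairings mirror the cases shown above, the returned labels do not.
    match (recon_mode, measurement_index):
        case (0, 0) | (1, 0):
            return "reverse"
        case (0, 1) | (1, 2) | (2, 1):
            return "forward"
        case (1, 1) | (2, 0):
            return "neutral"
        case _:
            raise ValueError(f"unsupported combination: {(recon_mode, measurement_index)}")


assert update_direction(1, 2) == "forward"
```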
