Dataset columns: docstring (string, lengths 52 to 499), function (string, lengths 67 to 35.2k), __index_level_0__ (int64, values 52.6k to 1.16M)
Returns the fractional coordinates given cartesian coordinates. Args: cart_coords (3x1 array): Cartesian coords. Returns: Fractional coordinates.
def get_fractional_coords(self, cart_coords: Vector3Like) -> np.ndarray: return dot(cart_coords, self.inv_matrix)
141,639
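A minimal usage sketch (illustrative values; the import path is the usual pymatgen one and is an assumption here): for a 4 Å cubic cell, the cartesian point (2, 2, 2) maps to fractional (0.5, 0.5, 0.5).
>>> from pymatgen.core.lattice import Lattice  # assumed module path
>>> latt = Lattice.cubic(4.0)
>>> latt.get_fractional_coords([2.0, 2.0, 2.0])  # -> array([0.5, 0.5, 0.5])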
Returns the distance between the hkl plane and the origin Args: miller_index ([h,k,l]): Miller index of plane Returns: d_hkl (float)
def d_hkl(self, miller_index: Vector3Like) -> float:
    gstar = self.reciprocal_lattice_crystallographic.metric_tensor
    hkl = np.array(miller_index)
    return 1 / ((dot(dot(hkl, gstar), hkl.T)) ** (1 / 2))
141,640
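A quick sanity check (illustrative values, reusing the Lattice from the sketch above): for a cubic cell, d_hkl = a / sqrt(h^2 + k^2 + l^2).
>>> latt = Lattice.cubic(4.0)
>>> latt.d_hkl([1, 0, 0])  # -> 4.0 (up to floating-point noise)
>>> latt.d_hkl([1, 1, 0])  # -> 4 / sqrt(2), i.e. ~2.828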
Convenience constructor for a tetragonal lattice. Args: a (float): *a* lattice parameter of the tetragonal cell. c (float): *c* lattice parameter of the tetragonal cell. Returns: Tetragonal lattice of dimensions a x a x c.
def tetragonal(a: float, c: float): return Lattice.from_parameters(a, a, c, 90, 90, 90)
141,641
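A usage sketch with made-up lattice parameters; the two a parameters are tied together and all angles are 90 degrees.
>>> latt = Lattice.tetragonal(4.2, 6.7)
>>> latt.abc     # -> (4.2, 4.2, 6.7), up to floating-point noise
>>> latt.angles  # -> (90.0, 90.0, 90.0)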
Convenience constructor for an orthorhombic lattice. Args: a (float): *a* lattice parameter of the orthorhombic cell. b (float): *b* lattice parameter of the orthorhombic cell. c (float): *c* lattice parameter of the orthorhombic cell. Returns: Orthorhombic lattice of dimensions a x b x c.
def orthorhombic(a: float, b: float, c: float): return Lattice.from_parameters(a, b, c, 90, 90, 90)
141,642
Convenience constructor for a hexagonal lattice. Args: a (float): *a* lattice parameter of the hexagonal cell. c (float): *c* lattice parameter of the hexagonal cell. Returns: Hexagonal lattice of dimensions a x a x c.
def hexagonal(a: float, c: float): return Lattice.from_parameters(a, a, c, 90, 90, 120)
141,644
Convenience constructor for a rhombohedral lattice. Args: a (float): *a* lattice parameter of the rhombohedral cell. alpha (float): Angle for the rhombohedral lattice in degrees. Returns: Rhombohedral lattice of dimensions a x a x a.
def rhombohedral(a: float, alpha: float): return Lattice.from_parameters(a, a, a, alpha, alpha, alpha)
141,645
Create a Lattice using unit cell lengths and angles (in degrees). Args: abc (3x1 array): Lattice parameters, e.g. (4, 4, 5). ang (3x1 array): Lattice angles in degrees, e.g., (90,90,120). Returns: A Lattice with the specified lattice parameters.
def from_lengths_and_angles(abc: List[float], ang: List[float]): return Lattice.from_parameters(abc[0], abc[1], abc[2], ang[0], ang[1], ang[2])
141,646
Json-serialization dict representation of the Lattice. Args: verbosity (int): Verbosity level. Default of 0 only includes the matrix representation. Set to 1 for more details.
def as_dict(self, verbosity: int = 0) -> Dict:
    d = {
        "@module": self.__class__.__module__,
        "@class": self.__class__.__name__,
        "matrix": self._matrix.tolist(),
    }
    (a, b, c), (alpha, beta, gamma) = self.lengths_and_angles
    if verbosity > 0:
        d.update(
            {
                "a": a,
                "b": b,
                "c": c,
                "alpha": alpha,
                "beta": beta,
                "gamma": gamma,
                "volume": self.volume,
            }
        )

    return d
141,658
Get the Niggli reduced lattice using the numerically stable algo proposed by R. W. Grosse-Kunstleve, N. K. Sauter, & P. D. Adams, Acta Crystallographica Section A Foundations of Crystallography, 2003, 60(1), 1-6. doi:10.1107/S010876730302186X Args: tol (float): The numerical tolerance. The default of 1e-5 should result in stable behavior for most cases. Returns: Niggli-reduced lattice.
def get_niggli_reduced_lattice(self, tol: float = 1e-5) -> "Lattice":
    # lll reduction is more stable for skewed cells
    matrix = self.lll_matrix
    a = matrix[0]
    b = matrix[1]
    c = matrix[2]
    e = tol * self.volume ** (1 / 3)

    # Define metric tensor
    G = [
        [dot(a, a), dot(a, b), dot(a, c)],
        [dot(a, b), dot(b, b), dot(b, c)],
        [dot(a, c), dot(b, c), dot(c, c)],
    ]
    G = np.array(G)

    # This sets an upper limit on the number of iterations.
    for count in range(100):
        # The steps are labelled as Ax as per the labelling scheme in the
        # paper.
        (A, B, C, E, N, Y) = (
            G[0, 0],
            G[1, 1],
            G[2, 2],
            2 * G[1, 2],
            2 * G[0, 2],
            2 * G[0, 1],
        )

        if A > B + e or (abs(A - B) < e and abs(E) > abs(N) + e):
            # A1
            M = [[0, -1, 0], [-1, 0, 0], [0, 0, -1]]
            G = dot(transpose(M), dot(G, M))

        if (B > C + e) or (abs(B - C) < e and abs(N) > abs(Y) + e):
            # A2
            M = [[-1, 0, 0], [0, 0, -1], [0, -1, 0]]
            G = dot(transpose(M), dot(G, M))
            continue

        l = 0 if abs(E) < e else E / abs(E)
        m = 0 if abs(N) < e else N / abs(N)
        n = 0 if abs(Y) < e else Y / abs(Y)
        if l * m * n == 1:
            # A3
            i = -1 if l == -1 else 1
            j = -1 if m == -1 else 1
            k = -1 if n == -1 else 1
            M = [[i, 0, 0], [0, j, 0], [0, 0, k]]
            G = dot(transpose(M), dot(G, M))
        elif l * m * n == 0 or l * m * n == -1:
            # A4
            i = -1 if l == 1 else 1
            j = -1 if m == 1 else 1
            k = -1 if n == 1 else 1

            if i * j * k == -1:
                if n == 0:
                    k = -1
                elif m == 0:
                    j = -1
                elif l == 0:
                    i = -1
            M = [[i, 0, 0], [0, j, 0], [0, 0, k]]
            G = dot(transpose(M), dot(G, M))

        (A, B, C, E, N, Y) = (
            G[0, 0],
            G[1, 1],
            G[2, 2],
            2 * G[1, 2],
            2 * G[0, 2],
            2 * G[0, 1],
        )

        # A5
        if (
            abs(E) > B + e
            or (abs(E - B) < e and 2 * N < Y - e)
            or (abs(E + B) < e and Y < -e)
        ):
            M = [[1, 0, 0], [0, 1, -E / abs(E)], [0, 0, 1]]
            G = dot(transpose(M), dot(G, M))
            continue

        # A6
        if (
            abs(N) > A + e
            or (abs(A - N) < e and 2 * E < Y - e)
            or (abs(A + N) < e and Y < -e)
        ):
            M = [[1, 0, -N / abs(N)], [0, 1, 0], [0, 0, 1]]
            G = dot(transpose(M), dot(G, M))
            continue

        # A7
        if (
            abs(Y) > A + e
            or (abs(A - Y) < e and 2 * E < N - e)
            or (abs(A + Y) < e and N < -e)
        ):
            M = [[1, -Y / abs(Y), 0], [0, 1, 0], [0, 0, 1]]
            G = dot(transpose(M), dot(G, M))
            continue

        # A8
        if E + N + Y + A + B < -e or (abs(E + N + Y + A + B) < e < Y + (A + N) * 2):
            M = [[1, 0, 1], [0, 1, 1], [0, 0, 1]]
            G = dot(transpose(M), dot(G, M))
            continue

        break

    A = G[0, 0]
    B = G[1, 1]
    C = G[2, 2]
    E = 2 * G[1, 2]
    N = 2 * G[0, 2]
    Y = 2 * G[0, 1]
    a = math.sqrt(A)
    b = math.sqrt(B)
    c = math.sqrt(C)
    alpha = math.acos(E / 2 / b / c) / math.pi * 180
    beta = math.acos(N / 2 / a / c) / math.pi * 180
    gamma = math.acos(Y / 2 / a / b) / math.pi * 180

    latt = Lattice.from_parameters(a, b, c, alpha, beta, gamma)

    mapped = self.find_mapping(latt, e, skip_rotation_matrix=True)
    if mapped is not None:
        if np.linalg.det(mapped[0].matrix) > 0:
            return mapped[0]
        else:
            return Lattice(-mapped[0].matrix)

    raise ValueError("can't find niggli")
141,665
Return a new Lattice with volume new_volume by performing a scaling of the lattice vectors so that length proportions and angles are preserved. Args: new_volume: New volume to scale to. Returns: New lattice with desired volume.
def scale(self, new_volume: float) -> "Lattice":
    versors = self.matrix / self.abc
    geo_factor = abs(dot(np.cross(versors[0], versors[1]), versors[2]))
    ratios = np.array(self.abc) / self.c
    new_c = (new_volume / (geo_factor * np.prod(ratios))) ** (1 / 3.0)
    return Lattice(versors * (new_c * ratios))
141,666
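A hedged example (illustrative values): scaling a unit cube to volume 8 doubles each lattice vector while the angles are preserved.
>>> latt = Lattice.cubic(1.0)
>>> latt.scale(8.0).abc  # -> (2.0, 2.0, 2.0); angles stay (90, 90, 90)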
Compute the scalar product of vector(s). Args: coords_a, coords_b: Array-like objects with the coordinates. frac_coords (bool): Boolean stating whether the vector corresponds to fractional or cartesian coordinates. Returns: one-dimensional `numpy` array.
def dot(
    self, coords_a: Vector3Like, coords_b: Vector3Like, frac_coords: bool = False
) -> np.ndarray:
    coords_a, coords_b = (
        np.reshape(coords_a, (-1, 3)),
        np.reshape(coords_b, (-1, 3)),
    )

    if len(coords_a) != len(coords_b):
        raise ValueError("Coordinates must have the same length!")

    if np.iscomplexobj(coords_a) or np.iscomplexobj(coords_b):
        raise TypeError("Complex array!")

    if not frac_coords:
        cart_a, cart_b = coords_a, coords_b
    else:
        cart_a = np.reshape(
            [self.get_cartesian_coords(vec) for vec in coords_a], (-1, 3)
        )
        cart_b = np.reshape(
            [self.get_cartesian_coords(vec) for vec in coords_b], (-1, 3)
        )

    return np.array([dot(a, b) for a, b in zip(cart_a, cart_b)])
141,668
Compute the norm of vector(s). Args: coords: Array-like object with the coordinates. frac_coords: Boolean stating whether the vector corresponds to fractional or cartesian coordinates. Returns: one-dimensional `numpy` array.
def norm(self, coords: Vector3Like, frac_coords: bool = True) -> float: return np.sqrt(self.dot(coords, coords, frac_coords=frac_coords))
141,669
Read the route line in a Gaussian input/output file and return the functional, basis_set, and a dictionary of other route parameters. Args: route (str): the route line Returns: functional (str): the method (HF, PBE ...) basis_set (str): the basis set route (dict): dictionary of parameters
def read_route_line(route):
    scrf_patt = re.compile(r"^([sS][cC][rR][fF])\s*=\s*(.+)")
    multi_params_patt = re.compile(r"^([A-z]+[0-9]*)[\s=]+\((.*)\)$")
    functional = None
    basis_set = None
    route_params = {}
    dieze_tag = None
    if route:
        if "/" in route:
            tok = route.split("/")
            functional = tok[0].split()[-1]
            basis_set = tok[1].split()[0]
            for tok in [functional, basis_set, "/"]:
                route = route.replace(tok, "")

        for tok in route.split():
            if scrf_patt.match(tok):
                m = scrf_patt.match(tok)
                route_params[m.group(1)] = m.group(2)
            elif tok.upper() in ["#", "#N", "#P", "#T"]:
                # does not store # in route to avoid error in input
                if tok == "#":
                    dieze_tag = "#N"
                else:
                    dieze_tag = tok
                continue
            else:
                m = re.match(multi_params_patt, tok.strip("#"))
                if m:
                    pars = {}
                    for par in m.group(2).split(","):
                        p = par.split("=")
                        pars[p[0]] = None if len(p) == 1 else p[1]
                    route_params[m.group(1)] = pars
                else:
                    d = tok.strip("#").split("=")
                    route_params[d[0]] = None if len(d) == 1 else d[1]

    return functional, basis_set, route_params, dieze_tag
141,675
Creates GaussianInput from a string. Args: contents: String representing a Gaussian input file. Returns: GaussianInput object
def from_string(contents):
    lines = [l.strip() for l in contents.split("\n")]

    link0_patt = re.compile(r"^(%.+)\s*=\s*(.+)")
    link0_dict = {}
    for i, l in enumerate(lines):
        if link0_patt.match(l):
            m = link0_patt.match(l)
            link0_dict[m.group(1).strip("=")] = m.group(2)

    route_patt = re.compile(r"^#[sSpPnN]*.*")
    route = ""
    route_index = None
    for i, l in enumerate(lines):
        if route_patt.match(l):
            route += " " + l
            route_index = i
        # This condition allows for route cards spanning multiple lines
        elif (l == "" or l.isspace()) and route_index:
            break
    functional, basis_set, route_paras, dieze_tag = read_route_line(route)
    ind = 2
    title = []
    while lines[route_index + ind].strip():
        title.append(lines[route_index + ind].strip())
        ind += 1
    title = ' '.join(title)
    ind += 1
    toks = re.split(r"[,\s]+", lines[route_index + ind])
    charge = int(toks[0])
    spin_mult = int(toks[1])
    coord_lines = []
    spaces = 0
    input_paras = {}
    ind += 1
    for i in range(route_index + ind, len(lines)):
        if lines[i].strip() == "":
            spaces += 1
        if spaces >= 2:
            d = lines[i].split("=")
            if len(d) == 2:
                input_paras[d[0]] = d[1]
        else:
            coord_lines.append(lines[i].strip())
    mol = GaussianInput._parse_coords(coord_lines)
    mol.set_charge_and_spin(charge, spin_mult)

    return GaussianInput(mol, charge=charge, spin_multiplicity=spin_mult,
                         title=title, functional=functional,
                         basis_set=basis_set, route_parameters=route_paras,
                         input_parameters=input_paras,
                         link0_parameters=link0_dict, dieze_tag=dieze_tag)
141,678
Get a matplotlib plot of the potential energy surface. Args: coords: internal coordinate name to use as abscissa.
def get_scan_plot(self, coords=None):
    from pymatgen.util.plotting import pretty_plot
    plt = pretty_plot(12, 8)

    d = self.read_scan()

    if coords and coords in d["coords"]:
        x = d["coords"][coords]
        plt.xlabel(coords)
    else:
        x = range(len(d["energies"]))
        plt.xlabel("points")

    plt.ylabel("Energy (eV)")

    e_min = min(d["energies"])
    y = [(e - e_min) * Ha_to_eV for e in d["energies"]]

    plt.plot(x, y, "ro--")
    return plt
141,690
Save matplotlib plot of the potential energy surface to a file. Args: filename: Filename to write to. img_format: Image format to use. Defaults to pdf. coords: internal coordinate name to use as abscissa.
def save_scan_plot(self, filename="scan.pdf", img_format="pdf", coords=None):
    plt = self.get_scan_plot(coords)
    plt.savefig(filename, format=img_format)
141,691
Save matplotlib plot of the spectrum to a file. Args: filename: Filename to write to. img_format: Image format to use. Defaults to pdf. sigma: Full width at half maximum in eV for normal functions. step: bin interval in eV
def save_spectre_plot(self, filename="spectre.pdf", img_format="pdf",
                      sigma=0.05, step=0.01):
    d, plt = self.get_spectre_plot(sigma, step)
    plt.savefig(filename, format=img_format)
141,694
Cluster sites based on distance and species type. Args: mol (Molecule): Molecule **with origin at center of mass**. tol (float): Tolerance to use. Returns: (origin_site, clustered_sites): origin_site is a site at the center of mass (None if there are no origin atoms). clustered_sites is a dict of {(avg_dist, species_and_occu): [list of sites]}
def cluster_sites(mol, tol, give_only_index=False):
    # Cluster works for dim > 2 data. We just add a dummy 0 for second
    # coordinate.
    dists = [[np.linalg.norm(site.coords), 0] for site in mol]
    import scipy.cluster as spcluster
    f = spcluster.hierarchy.fclusterdata(dists, tol, criterion='distance')
    clustered_dists = defaultdict(list)
    for i, site in enumerate(mol):
        clustered_dists[f[i]].append(dists[i])
    avg_dist = {label: np.mean(val) for label, val in clustered_dists.items()}
    clustered_sites = defaultdict(list)
    origin_site = None
    for i, site in enumerate(mol):
        if avg_dist[f[i]] < tol:
            if give_only_index:
                origin_site = i
            else:
                origin_site = site
        else:
            if give_only_index:
                clustered_sites[
                    (avg_dist[f[i]], site.species)].append(i)
            else:
                clustered_sites[
                    (avg_dist[f[i]], site.species)].append(site)
    return origin_site, clustered_sites
141,708
Recursive algorithm to permute through all possible combinations of the initially supplied symmetry operations to arrive at a complete set of operations mapping a single atom to all other equivalent atoms in the point group. This assumes that the initial number already uniquely identifies all operations. Args: symmops ([SymmOp]): Initial set of symmetry operations. Returns: Full set of symmetry operations.
def generate_full_symmops(symmops, tol):
    # Uses an algorithm described in:
    # Gregory Butler. Fundamental Algorithms for Permutation Groups.
    # Lecture Notes in Computer Science (Book 559). Springer, 1991. page 15
    UNIT = np.eye(4)
    generators = [op.affine_matrix for op in symmops
                  if not np.allclose(op.affine_matrix, UNIT)]
    if not generators:
        # C1 symmetry breaks assumptions in the algorithm afterwards
        return symmops
    else:
        full = list(generators)

        for g in full:
            for s in generators:
                op = np.dot(g, s)
                d = np.abs(full - op) < tol
                if not np.any(np.all(np.all(d, axis=2), axis=1)):
                    full.append(op)

        d = np.abs(full - UNIT) < tol
        if not np.any(np.all(np.all(d, axis=2), axis=1)):
            full.append(UNIT)
        return [SymmOp(op) for op in full]
141,709
Calculate the weights for a list of kpoints. Args: kpoints (Sequence): Sequence of kpoints. np.arrays is fine. Note that the code does not check that the list of kpoints provided does not contain duplicates. atol (float): Tolerance for fractional coordinates comparisons. Returns: List of weights, in the SAME order as kpoints.
def get_kpoint_weights(self, kpoints, atol=1e-5):
    kpts = np.array(kpoints)
    shift = []
    mesh = []
    for i in range(3):
        nonzero = [i for i in kpts[:, i] if abs(i) > 1e-5]
        if len(nonzero) != len(kpts):
            # gamma centered
            if not nonzero:
                mesh.append(1)
            else:
                m = np.abs(np.round(1 / np.array(nonzero)))
                mesh.append(int(max(m)))
            shift.append(0)
        else:
            # Monk
            m = np.abs(np.round(0.5 / np.array(nonzero)))
            mesh.append(int(max(m)))
            shift.append(1)

    mapping, grid = spglib.get_ir_reciprocal_mesh(
        np.array(mesh), self._cell, is_shift=shift, symprec=self._symprec)
    mapping = list(mapping)
    grid = (np.array(grid) + np.array(shift) * (0.5, 0.5, 0.5)) / mesh
    weights = []
    mapped = defaultdict(int)
    for k in kpoints:
        for i, g in enumerate(grid):
            if np.allclose(pbc_diff(k, g), (0, 0, 0), atol=atol):
                mapped[tuple(g)] += 1
                weights.append(mapping.count(mapping[i]))
                break
    if (len(mapped) != len(set(mapping))) or (
            not all([v == 1 for v in mapped.values()])):
        raise ValueError("Unable to find 1:1 corresponding between input "
                         "kpoints and irreducible grid!")
    return [w / sum(weights) for w in weights]
141,723
Check if a particular symmetry operation is a valid symmetry operation for a molecule, i.e., the operation maps all atoms to another equivalent atom. Args: symmop (SymmOp): Symmetry operation to test. Returns: (bool): Whether SymmOp is valid for Molecule.
def is_valid_op(self, symmop):
    coords = self.centered_mol.cart_coords
    for site in self.centered_mol:
        coord = symmop.operate(site.coords)
        ind = find_in_coord_list(coords, coord, self.tol)
        if not (len(ind) == 1
                and self.centered_mol[ind[0]].species == site.species):
            return False
    return True
141,739
Find the indices of matches of a particular coord in a coord_list. Args: coord_list: List of coords to test coord: Specific coordinates atol: Absolute tolerance. Defaults to 1e-8. Accepts both scalar and array. Returns: Indices of matches, e.g., [0, 1, 2, 3]. Empty list if not found.
def find_in_coord_list(coord_list, coord, atol=1e-8):
    if len(coord_list) == 0:
        return []
    diff = np.array(coord_list) - np.array(coord)[None, :]
    return np.where(np.all(np.abs(diff) < atol, axis=1))[0]
141,746
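A short sketch exercising the function as defined above (made-up coordinates); duplicate rows yield multiple indices.
>>> coords = [[0, 0, 0], [0.5, 0.5, 0.5], [0.5, 0.5, 0.5]]
>>> find_in_coord_list(coords, [0.5, 0.5, 0.5])  # -> array([1, 2])
>>> find_in_coord_list(coords, [0.9, 0.9, 0.9])  # -> empty array (no match)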
Tests if a particular coord is within a coord_list. Args: coord_list: List of coords to test coord: Specific coordinates atol: Absolute tolerance. Defaults to 1e-8. Accepts both scalar and array. Returns: True if coord is in the coord list.
def in_coord_list(coord_list, coord, atol=1e-8): return len(find_in_coord_list(coord_list, coord, atol=atol)) > 0
141,747
Tests if all coords in subset are contained in superset. Doesn't use periodic boundary conditions Args: subset, superset: List of coords Returns: True if all of subset is in superset.
def is_coord_subset(subset, superset, atol=1e-8):
    c1 = np.array(subset)
    c2 = np.array(superset)
    is_close = np.all(np.abs(c1[:, None, :] - c2[None, :, :]) < atol, axis=-1)
    any_close = np.any(is_close, axis=-1)
    return np.all(any_close)
141,748
Gives the index mapping from a subset to a superset. Subset and superset cannot contain duplicate rows Args: subset, superset: List of coords Returns: list of indices such that superset[indices] = subset
def coord_list_mapping(subset, superset, atol=1e-8):
    c1 = np.array(subset)
    c2 = np.array(superset)
    inds = np.where(np.all(np.isclose(c1[:, None, :], c2[None, :, :],
                                      atol=atol), axis=2))[1]
    result = c2[inds]
    if not np.allclose(c1, result, atol=atol):
        if not is_coord_subset(subset, superset):
            raise ValueError("subset is not a subset of superset")
    if not result.shape == c1.shape:
        raise ValueError("Something wrong with the inputs, likely duplicates "
                         "in superset")
    return inds
141,749
Gives the index mapping from a subset to a superset. Superset cannot contain duplicate matching rows Args: subset, superset: List of frac_coords Returns: list of indices such that superset[indices] = subset
def coord_list_mapping_pbc(subset, superset, atol=1e-8):
    atol = np.array([1., 1., 1.]) * atol
    return cuc.coord_list_mapping_pbc(subset, superset, atol)
141,750
Returns an interpolated value by linear interpolation between two values. This method is written to avoid dependency on scipy, which causes issues on threading servers. Args: x_values: Sequence of x values. y_values: Corresponding sequence of y values x: Get value at particular x Returns: Value at x.
def get_linear_interpolated_value(x_values, y_values, x):
    a = np.array(sorted(zip(x_values, y_values), key=lambda d: d[0]))

    ind = np.where(a[:, 0] >= x)[0]

    if len(ind) == 0 or ind[0] == 0:
        raise ValueError("x is out of range of provided x_values")

    i = ind[0]
    x1, x2 = a[i - 1][0], a[i][0]
    y1, y2 = a[i - 1][1], a[i][1]

    return y1 + (y2 - y1) / (x2 - x1) * (x - x1)
141,751
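A minimal example of the interpolation (made-up data): x = 1.5 falls halfway between the samples at x = 1 and x = 2.
>>> get_linear_interpolated_value([0, 1, 2], [0, 10, 20], 1.5)  # -> 15.0
>>> get_linear_interpolated_value([0, 1, 2], [0, 10, 20], 3.0)  # raises ValueError (out of range)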
Returns the distances between two lists of coordinates Args: coords1: First set of cartesian coordinates. coords2: Second set of cartesian coordinates. Returns: 2d array of cartesian distances. E.g., the distance between coords1[i] and coords2[j] is distances[i,j]
def all_distances(coords1, coords2):
    c1 = np.array(coords1)
    c2 = np.array(coords2)
    z = (c1[:, None, :] - c2[None, :, :]) ** 2
    return np.sum(z, axis=-1) ** 0.5
141,752
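A small illustration (arbitrary points): the result is an n x m matrix of pairwise distances.
>>> all_distances([[0, 0, 0], [1, 0, 0]], [[3, 4, 0]])  # -> array([[5.0], [4.472...]])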
Get the indices of all points in a fractional coord list that are equal to a fractional coord (with a tolerance), taking into account periodic boundary conditions. Args: fcoord_list: List of fractional coords fcoord: A specific fractional coord to test. atol: Absolute tolerance. Defaults to 1e-8. Returns: Indices of matches, e.g., [0, 1, 2, 3]. Empty list if not found.
def find_in_coord_list_pbc(fcoord_list, fcoord, atol=1e-8):
    if len(fcoord_list) == 0:
        return []
    fcoords = np.tile(fcoord, (len(fcoord_list), 1))
    fdist = fcoord_list - fcoords
    fdist -= np.round(fdist)
    return np.where(np.all(np.abs(fdist) < atol, axis=1))[0]
141,755
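An illustrative check of the periodic wrap-around (made-up tolerance): (0.99, 0.99, 0.99) and (0.01, 0.01, 0.01) differ by only 0.02 in each coordinate across the cell boundary.
>>> find_in_coord_list_pbc([[0.99, 0.99, 0.99]], [0.01, 0.01, 0.01], atol=0.05)  # -> array([0])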
Tests if a particular fractional coord is within a fractional coord_list. Args: fcoord_list: List of fractional coords to test fcoord: A specific fractional coord to test. atol: Absolute tolerance. Defaults to 1e-8. Returns: True if coord is in the coord list.
def in_coord_list_pbc(fcoord_list, fcoord, atol=1e-8): return len(find_in_coord_list_pbc(fcoord_list, fcoord, atol=atol)) > 0
141,756
Tests if all fractional coords in subset are contained in superset. Args: subset, superset: List of fractional coords atol (float or size 3 array): Tolerance for matching mask (boolean array): Mask of matches that are not allowed. i.e. if mask[1,2] == True, then subset[1] cannot be matched to superset[2] Returns: True if all of subset is in superset.
def is_coord_subset_pbc(subset, superset, atol=1e-8, mask=None):
    c1 = np.array(subset, dtype=np.float64)
    c2 = np.array(superset, dtype=np.float64)
    if mask is not None:
        # plain int replaces the removed np.int alias
        m = np.array(mask, dtype=int)
    else:
        m = np.zeros((len(subset), len(superset)), dtype=int)
    atol = np.zeros(3, dtype=np.float64) + atol
    return cuc.is_coord_subset_pbc(c1, c2, atol, m)
141,757
Returns the list of points on the original lattice contained in the supercell in fractional coordinates (with the supercell basis). e.g. [[2,0,0],[0,1,0],[0,0,1]] returns [[0,0,0],[0.5,0,0]] Args: supercell_matrix: 3x3 matrix describing the supercell Returns: numpy array of the fractional coordinates
def lattice_points_in_supercell(supercell_matrix):
    diagonals = np.array(
        [[0, 0, 0], [0, 0, 1], [0, 1, 0], [0, 1, 1], [1, 0, 0], [1, 0, 1],
         [1, 1, 0], [1, 1, 1]])
    d_points = np.dot(diagonals, supercell_matrix)

    mins = np.min(d_points, axis=0)
    maxes = np.max(d_points, axis=0) + 1

    ar = np.arange(mins[0], maxes[0])[:, None] * np.array([1, 0, 0])[None, :]
    br = np.arange(mins[1], maxes[1])[:, None] * np.array([0, 1, 0])[None, :]
    cr = np.arange(mins[2], maxes[2])[:, None] * np.array([0, 0, 1])[None, :]

    all_points = ar[:, None, None] + br[None, :, None] + cr[None, None, :]
    all_points = all_points.reshape((-1, 3))

    frac_points = np.dot(all_points, np.linalg.inv(supercell_matrix))

    tvects = frac_points[np.all(frac_points < 1 - 1e-10, axis=1)
                         & np.all(frac_points >= -1e-10, axis=1)]
    assert len(tvects) == round(abs(np.linalg.det(supercell_matrix)))
    return tvects
141,758
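The docstring's own example, run through the function as defined above:
>>> lattice_points_in_supercell([[2, 0, 0], [0, 1, 0], [0, 0, 1]])  # -> array([[0, 0, 0], [0.5, 0, 0]])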
Converts a list of coordinates to barycentric coordinates, given a simplex with d+1 points. Only works for d >= 2. Args: coords: list of n coords to transform, shape should be (n,d) simplex: list of coordinates that form the simplex, shape should be (d+1, d) Returns: a LIST of barycentric coordinates (even if the original input was 1d)
def barycentric_coords(coords, simplex):
    coords = np.atleast_2d(coords)

    t = np.transpose(simplex[:-1, :]) - np.transpose(simplex[-1, :])[:, None]
    all_but_one = np.transpose(
        np.linalg.solve(t, np.transpose(coords - simplex[-1])))
    last_coord = 1 - np.sum(all_but_one, axis=-1)[:, None]
    return np.append(all_but_one, last_coord, axis=-1)
141,759
Calculates the angle between two vectors. Args: v1: Vector 1 v2: Vector 2 units: "degrees" or "radians". Defaults to "degrees". Returns: Angle between them in degrees.
def get_angle(v1, v2, units="degrees"):
    d = np.dot(v1, v2) / np.linalg.norm(v1) / np.linalg.norm(v2)
    d = min(d, 1)
    d = max(d, -1)
    angle = math.acos(d)
    if units == "degrees":
        return math.degrees(angle)
    elif units == "radians":
        return angle
    else:
        raise ValueError("Invalid units {}".format(units))
141,760
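Two quick checks with arbitrary vectors:
>>> get_angle([1, 0, 0], [0, 1, 0])                   # -> 90.0
>>> get_angle([1, 1, 0], [1, 0, 0], units="radians")  # -> ~0.7854 (pi/4)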
Initializes a Simplex from vertex coordinates. Args: coords ([[float]]): Coords of the vertices of the simplex. E.g., [[1, 2, 3], [2, 4, 5], [6, 7, 8], [8, 9, 10]].
def __init__(self, coords):
    self._coords = np.array(coords)
    self.space_dim, self.simplex_dim = self._coords.shape
    self.origin = self._coords[-1]
    if self.space_dim == self.simplex_dim + 1:
        # precompute augmented matrix for calculating bary_coords
        self._aug = np.concatenate([coords, np.ones((self.space_dim, 1))],
                                   axis=-1)
        self._aug_inv = np.linalg.inv(self._aug)
141,761
Computes the intersection points of a line with a simplex Args: point1, point2 ([float]): Points that determine the line Returns: points where the line intersects the simplex (0, 1, or 2)
def line_intersection(self, point1, point2, tolerance=1e-8):
    b1 = self.bary_coords(point1)
    b2 = self.bary_coords(point2)
    l = b1 - b2
    # don't use barycentric dimension where line is parallel to face
    valid = np.abs(l) > 1e-10
    # array of all the barycentric coordinates on the line where
    # one of the values is 0
    possible = b1 - (b1[valid] / l[valid])[:, None] * l
    barys = []
    for p in possible:
        # it's only an intersection if its in the simplex
        if (p >= -tolerance).all():
            found = False
            # don't return duplicate points
            for b in barys:
                if np.all(np.abs(b - p) < tolerance):
                    found = True
                    break
            if not found:
                barys.append(p)
    assert len(barys) < 3
    return [self.point_from_bary_coords(b) for b in barys]
141,765
Use pybtex to validate that a reference is in proper BibTeX format Args: reference: A String reference in BibTeX format. Returns: Boolean indicating if reference is valid bibtex.
def is_valid_bibtex(reference):
    # str is necessary since pybtex seems to have an issue with unicode. The
    # filter expression removes all non-ASCII characters.
    sio = StringIO(remove_non_ascii(reference))
    parser = bibtex.Parser()
    errors.set_strict_mode(False)
    bib_data = parser.parse_stream(sio)
    return len(bib_data.entries) > 0
141,768
Parses a History Node object from either a dict or a tuple. Args: h_node: A dict with name/url/description fields or a 3-element tuple. Returns: History node.
def parse_history_node(h_node):
    if isinstance(h_node, dict):
        return HistoryNode.from_dict(h_node)
    else:
        if len(h_node) != 3:
            raise ValueError("Invalid History node, "
                             "should be dict or (name, version, "
                             "description) tuple: {}".format(h_node))
        return HistoryNode(h_node[0], h_node[1], h_node[2])
141,769
Parses an Author object from either a String, dict, or tuple Args: author: A String formatted as "NAME <[email protected]>", (name, email) tuple, or a dict with name and email keys. Returns: An Author object.
def parse_author(author):
    if isinstance(author, str):
        # Regex looks for whitespace, (any name), whitespace, <, (email),
        # >, whitespace
        m = re.match(r'\s*(.*?)\s*<(.*?@.*?)>\s*', author)
        if not m or m.start() != 0 or m.end() != len(author):
            raise ValueError("Invalid author format! {}".format(author))
        return Author(m.groups()[0], m.groups()[1])
    elif isinstance(author, dict):
        return Author.from_dict(author)
    else:
        if len(author) != 2:
            raise ValueError("Invalid author, should be String or (name, "
                             "email) tuple: {}".format(author))
        return Author(author[0], author[1])
141,770
This function converts a strain to a deformation gradient that will produce that strain. Supports two methods: Args: strain (3x3 array-like): strain matrix shape (string): method for determining deformation; "upper" produces an upper triangular defo, "symmetric" produces a symmetric defo
def convert_strain_to_deformation(strain, shape="upper"):
    strain = SquareTensor(strain)
    ftdotf = 2 * strain + np.eye(3)
    if shape == "upper":
        result = scipy.linalg.cholesky(ftdotf)
    elif shape == "symmetric":
        result = scipy.linalg.sqrtm(ftdotf)
    else:
        raise ValueError("shape must be \"upper\" or \"symmetric\"")
    return Deformation(result)
141,776
Create a Deformation object. Note that the constructor uses __new__ rather than __init__ according to the standard method of subclassing numpy ndarrays. Args: deformation_gradient (3x3 array-like): the 3x3 array-like representing the deformation gradient
def __new__(cls, deformation_gradient):
    obj = super().__new__(cls, deformation_gradient)
    return obj.view(cls)
141,777
Apply the deformation gradient to a structure. Args: structure (Structure object): the structure object to be modified by the deformation
def apply_to_structure(self, structure):
    def_struct = structure.copy()
    old_latt = def_struct.lattice.matrix
    new_latt = np.transpose(np.dot(self, np.transpose(old_latt)))
    def_struct.lattice = Lattice(new_latt)
    return def_struct
141,779
Factory method for constructing a Deformation object from a matrix position and amount Args: matrixpos (tuple): tuple corresponding the matrix position to have a perturbation added amt (float): amount to add to the identity matrix at position matrixpos
def from_index_amount(cls, matrixpos, amt):
    f = np.identity(3)
    f[matrixpos] += amt
    return cls(f)
141,780
Constructs the deformed geometries of a structure. Generates m + n deformed structures according to the supplied parameters, where m and n are the numbers of norm and shear strains. Args: structure (Structure): structure to undergo deformation norm_strains (list of floats): strain values to apply to each normal mode. shear_strains (list of floats): strain values to apply to each shear mode. symmetry (bool): whether or not to use symmetry reduction.
def __init__(self, structure, norm_strains=None, shear_strains=None,
             symmetry=False):
    norm_strains = norm_strains or [-0.01, -0.005, 0.005, 0.01]
    shear_strains = shear_strains or [-0.06, -0.03, 0.03, 0.06]

    self.undeformed_structure = structure
    self.deformations = []
    self.def_structs = []

    # Generate deformations
    for ind in [(0, 0), (1, 1), (2, 2)]:
        for amount in norm_strains:
            strain = Strain.from_index_amount(ind, amount)
            self.deformations.append(strain.get_deformation_matrix())

    for ind in [(0, 1), (0, 2), (1, 2)]:
        for amount in shear_strains:
            strain = Strain.from_index_amount(ind, amount)
            self.deformations.append(strain.get_deformation_matrix())

    # Perform symmetry reduction if specified
    if symmetry:
        self.sym_dict = symmetry_reduce(self.deformations, structure)
        self.deformations = list(self.sym_dict.keys())
    self.deformed_structures = [defo.apply_to_structure(structure)
                                for defo in self.deformations]
141,781
Create a Strain object. Note that the constructor uses __new__ rather than __init__ according to the standard method of subclassing numpy ndarrays. Note also that the default constructor does not include the deformation gradient Args: strain_matrix (3x3 array-like): the 3x3 array-like representing the Green-Lagrange strain
def __new__(cls, strain_matrix):
    vscale = np.ones((6,))
    vscale[3:] *= 2
    obj = super().__new__(cls, strain_matrix, vscale=vscale)
    if not obj.is_symmetric():
        raise ValueError("Strain objects must be initialized "
                         "with a symmetric array or a voigt-notation "
                         "vector with six entries.")
    return obj.view(cls)
141,782
Factory method that returns a Strain object from a deformation gradient Args: deformation (3x3 array-like):
def from_deformation(cls, deformation):
    dfm = Deformation(deformation)
    return cls(0.5 * (np.dot(dfm.trans, dfm) - np.eye(3)))
141,783
Like Deformation.from_index_amount, except generates a strain from the zero 3x3 tensor or voigt vector with the amount specified in the index location. Ensures symmetric strain. Args: idx (tuple or integer): index to be perturbed, can be voigt or full-tensor notation amount (float): amount to perturb selected index
def from_index_amount(cls, idx, amount):
    if np.array(idx).ndim == 0:
        v = np.zeros(6)
        v[idx] = amount
        return cls.from_voigt(v)
    elif np.array(idx).ndim == 1:
        v = np.zeros((3, 3))
        for i in itertools.permutations(idx):
            v[i] = amount
        return cls(v)
    else:
        raise ValueError("Index must either be 2-tuple or integer "
                         "corresponding to full-tensor or voigt index")
141,784
Given a list of coords for 3 points, compute the area of this triangle. Args: pts: [a, b, c] three points
def get_tri_area(pts):
    a, b, c = pts[0], pts[1], pts[2]
    v1 = np.array(b) - np.array(a)
    v2 = np.array(c) - np.array(a)
    # np.cross/np.linalg.norm replace the removed scipy aliases (sp.cross)
    area_tri = abs(np.linalg.norm(np.cross(v1, v2)) / 2)
    return area_tri
141,787
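A unit right triangle as a sanity check of the function above:
>>> get_tri_area([[0, 0, 0], [1, 0, 0], [0, 1, 0]])  # -> 0.5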
Finds the element from "allowed elements" that (i) possesses the desired "oxidation state" and (ii) is closest in ionic radius to the target specie Args: target: (Specie) provides target ionic radius. oxidation_state: (float) codopant oxidation state. allowed_elements: ([str]) List of allowed elements. If None, all elements are tried. Returns: (Specie) with oxidation_state that has ionic radius closest to target.
def _find_codopant(target, oxidation_state, allowed_elements=None):
    ref_radius = target.ionic_radius
    candidates = []
    symbols = allowed_elements or [el.symbol for el in Element]
    for sym in symbols:
        try:
            with warnings.catch_warnings():
                warnings.simplefilter("ignore")
                sp = Specie(sym, oxidation_state)
                r = sp.ionic_radius
                if r is not None:
                    candidates.append((r, sp))
        except Exception:
            pass
    return min(candidates, key=lambda l: abs(l[0] / ref_radius - 1))[1]
141,811
This substitutor uses the substitution probability class to find good substitutions for a given chemistry or structure. Args: threshold: probability threshold for predictions symprec: symmetry precision to determine if two structures are duplicates kwargs: kwargs for the SubstitutionProbability object lambda_table, alpha
def __init__(self, threshold=1e-3, symprec=0.1, **kwargs):
    self._kwargs = kwargs
    self._sp = SubstitutionProbability(**kwargs)
    self._threshold = threshold
    self._symprec = symprec
141,848
Convenience method to plot data with trend lines based on polynomial fit. Args: x: Sequence of x data. y: Sequence of y data. deg (int): Degree of polynomial. Defaults to 1. xlabel (str): Label for x-axis. ylabel (str): Label for y-axis. **kwargs: Keyword args passed to pretty_plot. Returns: matplotlib.pyplot object.
def pretty_polyfit_plot(x, y, deg=1, xlabel=None, ylabel=None, **kwargs):
    plt = pretty_plot(**kwargs)
    pp = np.polyfit(x, y, deg)
    xp = np.linspace(min(x), max(x), 200)
    plt.plot(xp, np.polyval(pp, xp), 'k--', x, y, 'o')
    if xlabel:
        plt.xlabel(xlabel)
    if ylabel:
        plt.ylabel(ylabel)
    return plt
141,858
Converts str of chemical formula into latex format for labelling purposes Args: formula (str): Chemical formula
def format_formula(formula):
    formatted_formula = ""
    number_format = ""
    for i, s in enumerate(formula):
        if s.isdigit():
            if not number_format:
                number_format = "_{"
            number_format += s
            if i == len(formula) - 1:
                number_format += "}"
                formatted_formula += number_format
        else:
            if number_format:
                number_format += "}"
                formatted_formula += number_format
                number_format = ""
            formatted_formula += s

    return r"$%s$" % (formatted_formula)
141,860
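Illustrative inputs and the resulting LaTeX labels:
>>> format_formula("Fe2O3")    # -> '$Fe_{2}O_{3}$'
>>> format_formula("LiFePO4")  # -> '$LiFePO_{4}$'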
Helper function used in plot functions supporting an optional Axes argument. If ax is None, we build the `matplotlib` figure and create the Axes; else we return the current active figure. Args: kwargs: keyword arguments are passed to plt.figure if ax is None. Returns: ax: :class:`Axes` object figure: matplotlib figure plt: matplotlib pyplot module.
def get_ax_fig_plt(ax=None, **kwargs):
    import matplotlib.pyplot as plt
    if ax is None:
        fig = plt.figure(**kwargs)
        ax = fig.add_subplot(1, 1, 1)
    else:
        fig = plt.gcf()
    return ax, fig, plt
141,862
Helper function used in plot functions supporting an optional Axes3D argument. If ax is None, we build the `matplotlib` figure and create the Axes3D; else we return the current active figure. Args: kwargs: keyword arguments are passed to plt.figure if ax is None. Returns: ax: :class:`Axes` object figure: matplotlib figure plt: matplotlib pyplot module.
def get_ax3d_fig_plt(ax=None, **kwargs):
    import matplotlib.pyplot as plt
    from mpl_toolkits.mplot3d import axes3d
    if ax is None:
        fig = plt.figure(**kwargs)
        ax = axes3d.Axes3D(fig)
    else:
        fig = plt.gcf()
    return ax, fig, plt
141,863
Method to sum two densities. Args: density1: First density. density2: Second density. Returns: Dict of {spin: density}.
def add_densities(density1, density2): return {spin: np.array(density1[spin]) + np.array(density2[spin]) for spin in density1.keys()}
141,866
Returns the equilibrium Fermi-Dirac occupation. Args: E (float): energy in eV fermi (float): the fermi level in eV T (float): the temperature in kelvin
def f0(E, fermi, T): return 1. / (1. + np.exp((E - fermi) / (_cd("Boltzmann constant in eV/K") * T)))
141,867
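Sanity checks (illustrative energies; _cd is assumed to resolve the CODATA "Boltzmann constant in eV/K" as used in the function above):
>>> f0(0.5, 0.5, 300)  # -> 0.5 exactly when E == fermi
>>> f0(0.6, 0.5, 300)  # -> ~0.02, for E 0.1 eV above the fermi level at 300 K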
Expects a DOS object and finds the cbm and vbm. Args: tol: tolerance in occupations for determining the gap abs_tol: An absolute tolerance (True) and a relative one (False) spin: Possible values are None - finds the gap in the summed densities, Up - finds the gap in the up spin channel, Down - finds the gap in the down spin channel. Returns: (cbm, vbm): floats in eV corresponding to the conduction band minimum and valence band maximum
def get_cbm_vbm(self, tol=0.001, abs_tol=False, spin=None):
    # determine tolerance
    if spin is None:
        tdos = self.y if len(self.ydim) == 1 else np.sum(self.y, axis=1)
    elif spin == Spin.up:
        tdos = self.y[:, 0]
    else:
        tdos = self.y[:, 1]

    if not abs_tol:
        tol = tol * tdos.sum() / tdos.shape[0]

    # find index of fermi energy
    i_fermi = 0
    while self.x[i_fermi] <= self.efermi:
        i_fermi += 1

    # work backwards until tolerance is reached
    i_gap_start = i_fermi
    while i_gap_start - 1 >= 0 and tdos[i_gap_start - 1] <= tol:
        i_gap_start -= 1

    # work forwards until tolerance is reached
    i_gap_end = i_gap_start
    while i_gap_end < tdos.shape[0] and tdos[i_gap_end] <= tol:
        i_gap_end += 1
    i_gap_end -= 1
    return self.x[i_gap_end], self.x[i_gap_start]
141,872
Expects a DOS object and finds the gap. Args: tol: tolerance in occupations for determining the gap abs_tol: An absolute tolerance (True) and a relative one (False) spin: Possible values are None - finds the gap in the summed densities, Up - finds the gap in the up spin channel, Down - finds the gap in the down spin channel. Returns: gap in eV
def get_gap(self, tol=0.001, abs_tol=False, spin=None):
    (cbm, vbm) = self.get_cbm_vbm(tol, abs_tol, spin)
    return max(cbm - vbm, 0.0)
141,873
Returns the density of states for a particular spin. Args: spin: Spin Returns: Returns the density of states for a particular spin. If Spin is None, the sum of all spins is returned.
def get_densities(self, spin=None):
    if self.densities is None:
        result = None
    elif spin is None:
        if Spin.down in self.densities:
            result = self.densities[Spin.up] + self.densities[Spin.down]
        else:
            result = self.densities[Spin.up]
    else:
        result = self.densities[spin]
    return result
141,876
Returns the Dict representation of the densities, {Spin: densities}, but with a Gaussian smearing of std dev sigma applied about the fermi level. Args: sigma: Std dev of Gaussian smearing function. Returns: Dict of Gaussian-smeared densities.
def get_smeared_densities(self, sigma):
    # modern import path; scipy.ndimage.filters has been removed
    from scipy.ndimage import gaussian_filter1d
    smeared_dens = {}
    diff = [self.energies[i + 1] - self.energies[i]
            for i in range(len(self.energies) - 1)]
    avgdiff = sum(diff) / len(diff)
    for spin, dens in self.densities.items():
        smeared_dens[spin] = gaussian_filter1d(dens, sigma / avgdiff)
    return smeared_dens
141,877
Adds two DOS together. Checks that energy scales are the same. Otherwise, a ValueError is thrown. Args: other: Another DOS object. Returns: Sum of the two DOSs.
def __add__(self, other):
    if not all(np.equal(self.energies, other.energies)):
        raise ValueError("Energies of both DOS are not compatible!")
    densities = {spin: self.densities[spin] + other.densities[spin]
                 for spin in self.densities.keys()}
    return Dos(self.efermi, self.energies, densities)
141,878
Returns interpolated density for a particular energy. Args: energy: Energy to return the density for.
def get_interpolated_value(self, energy):
    f = {}
    for spin in self.densities.keys():
        f[spin] = get_linear_interpolated_value(self.energies,
                                                self.densities[spin],
                                                energy)
    return f
141,879
Expects a DOS object and finds the cbm and vbm. Args: tol: tolerance in occupations for determining the gap abs_tol: An absolute tolerance (True) and a relative one (False) spin: Possible values are None - finds the gap in the summed densities, Up - finds the gap in the up spin channel, Down - finds the gap in the down spin channel. Returns: (cbm, vbm): floats in eV corresponding to the conduction band minimum and valence band maximum
def get_cbm_vbm(self, tol=0.001, abs_tol=False, spin=None):
    # determine tolerance
    tdos = self.get_densities(spin)
    if not abs_tol:
        tol = tol * tdos.sum() / tdos.shape[0]

    # find index of fermi energy
    i_fermi = 0
    while self.energies[i_fermi] <= self.efermi:
        i_fermi += 1

    # work backwards until tolerance is reached
    i_gap_start = i_fermi
    while i_gap_start - 1 >= 0 and tdos[i_gap_start - 1] <= tol:
        i_gap_start -= 1

    # work forwards until tolerance is reached
    i_gap_end = i_gap_start
    while i_gap_end < tdos.shape[0] and tdos[i_gap_end] <= tol:
        i_gap_end += 1
    i_gap_end -= 1
    return self.energies[i_gap_end], self.energies[i_gap_start]
141,880
Get the Dos for a particular orbital of a particular site. Args: site: Site in Structure associated with CompleteDos. orbital: Orbital in the site. Returns: Dos containing densities for orbital of site.
def get_site_orbital_dos(self, site, orbital): return Dos(self.efermi, self.energies, self.pdos[site][orbital])
141,888
Get the total Dos for a site (all orbitals). Args: site: Site in Structure associated with CompleteDos. Returns: Dos containing summed orbital densities for site.
def get_site_dos(self, site):
    site_dos = functools.reduce(add_densities, self.pdos[site].values())
    return Dos(self.efermi, self.energies, site_dos)
141,889
Get orbital projected Dos of a particular site Args: site: Site in Structure associated with CompleteDos. Returns: dict of {orbital: Dos}, e.g. {"s": Dos object, ...}
def get_site_spd_dos(self, site):
    spd_dos = dict()
    for orb, pdos in self.pdos[site].items():
        orbital_type = _get_orb_type(orb)
        if orbital_type in spd_dos:
            spd_dos[orbital_type] = add_densities(spd_dos[orbital_type],
                                                  pdos)
        else:
            spd_dos[orbital_type] = pdos
    return {orb: Dos(self.efermi, self.energies, densities)
            for orb, densities in spd_dos.items()}
141,890
Get the t2g, eg projected DOS for a particular site. Args: site: Site in Structure associated with CompleteDos. Returns: A dict {"e_g": Dos, "t2g": Dos} containing summed e_g and t2g DOS for the site.
def get_site_t2g_eg_resolved_dos(self, site):
    t2g_dos = []
    eg_dos = []
    for s, atom_dos in self.pdos.items():
        if s == site:
            for orb, pdos in atom_dos.items():
                if orb in (Orbital.dxy, Orbital.dxz, Orbital.dyz):
                    t2g_dos.append(pdos)
                elif orb in (Orbital.dx2, Orbital.dz2):
                    eg_dos.append(pdos)
    return {"t2g": Dos(self.efermi, self.energies,
                       functools.reduce(add_densities, t2g_dos)),
            "e_g": Dos(self.efermi, self.energies,
                       functools.reduce(add_densities, eg_dos))}
141,891
Get element and spd projected Dos Args: el: Element in Structure.composition associated with LobsterCompleteDos Returns: dict of {Element: {"S": densities, "P": densities, "D": densities}}
def get_element_spd_dos(self, el):
    el = get_el_sp(el)
    el_dos = {}
    for site, atom_dos in self.pdos.items():
        if site.specie == el:
            for orb, pdos in atom_dos.items():
                orbital_type = _get_orb_type_lobster(orb)
                if orbital_type not in el_dos:
                    el_dos[orbital_type] = pdos
                else:
                    el_dos[orbital_type] = \
                        add_densities(el_dos[orbital_type], pdos)
    return {orb: Dos(self.efermi, self.energies, densities)
            for orb, densities in el_dos.items()}
141,896
Tolerances as defined in StructureMatcher. Tolerances will be gradually decreased until only a single match is found (if possible). Args: initial_ltol: fractional length tolerance initial_stol: site tolerance initial_angle_tol: angle tolerance
def __init__(self, initial_ltol=0.2, initial_stol=0.3,
             initial_angle_tol=5):
    self.initial_ltol = initial_ltol
    self.initial_stol = initial_stol
    self.initial_angle_tol = initial_angle_tol
141,898
Generates the transformation matrices that convert a set of 2D vectors into a super lattice of integer area multiple as proven in Cassels: Cassels, John William Scott. An introduction to the geometry of numbers. Springer Science & Business Media, 2012. Args: area_multiple (int): integer multiple of unit cell area for super lattice area Returns: matrix_list: transformation matrices to convert unit vectors to super lattice vectors
def gen_sl_transform_matricies(area_multiple):
    return [np.array(((i, j), (0, area_multiple / i)))
            for i in get_factors(area_multiple)
            for j in range(area_multiple // i)]
141,906
Determine if two sets of vectors are the same within length and angle tolerances Args: vec_set1(array[array]): an array of two vectors vec_set2(array[array]): second array of two vectors
def is_same_vectors(self, vec_set1, vec_set2):
    if (np.absolute(rel_strain(vec_set1[0], vec_set2[0])) >
            self.max_length_tol):
        return False
    elif (np.absolute(rel_strain(vec_set1[1], vec_set2[1])) >
            self.max_length_tol):
        return False
    elif (np.absolute(rel_angle(vec_set1, vec_set2)) >
            self.max_angle_tol):
        return False
    else:
        return True
141,910
Returns a dict which contains the ZSL match. Args: film_sl_vectors (array), substrate_sl_vectors (array), film_vectors (array), substrate_vectors (array), match_area (float)
def match_as_dict(self, film_sl_vectors, substrate_sl_vectors, film_vectors,
                  substrate_vectors, match_area):
    d = {}
    d["film_sl_vecs"] = np.asarray(film_sl_vectors)
    d["sub_sl_vecs"] = np.asarray(substrate_sl_vectors)
    d["match_area"] = match_area
    d["film_vecs"] = np.asarray(film_vectors)
    d["sub_vecs"] = np.asarray(substrate_vectors)
    return d
141,914
Initializes the substrate analyzer Args: zslgen(ZSLGenerator): Defaults to a ZSLGenerator with standard tolerances, but can be fed one with custom tolerances film_max_miller(int): maximum miller index to generate for film surfaces substrate_max_miller(int): maximum miller index to generate for substrate surfaces
def __init__(self, zslgen=ZSLGenerator(), film_max_miller=1,
             substrate_max_miller=1):
    self.zsl = zslgen
    self.film_max_miller = film_max_miller
    self.substrate_max_miller = substrate_max_miller
141,915
Generates the film/substrate slab combinations for a set of given miller indices Args: film_millers (array): all miller indices to generate slabs for the film substrate_millers (array): all miller indices to generate slabs for the substrate
def generate_surface_vectors(self, film_millers, substrate_millers):
    vector_sets = []
    for f in film_millers:
        film_slab = SlabGenerator(self.film, f, 20, 15,
                                  primitive=False).get_slab()
        film_vectors = reduce_vectors(film_slab.lattice.matrix[0],
                                      film_slab.lattice.matrix[1])

        for s in substrate_millers:
            substrate_slab = SlabGenerator(self.substrate, s, 20, 15,
                                           primitive=False).get_slab()
            substrate_vectors = reduce_vectors(
                substrate_slab.lattice.matrix[0],
                substrate_slab.lattice.matrix[1])

            vector_sets.append((film_vectors, substrate_vectors, f, s))

    return vector_sets
141,916
Creates XYZ object from a string. Args: contents: String representing an XYZ file. Returns: XYZ object
def from_string(contents):
    if contents[-1] != "\n":
        contents += "\n"
    white_space = r"[ \t\r\f\v]"
    natoms_line = white_space + r"*\d+" + white_space + r"*\n"
    comment_line = r"[^\n]*\n"
    coord_lines = r"(\s*\w+\s+[0-9\-\+\.eEdD]+\s+[0-9\-\+\.eEdD]+\s+[0-9\-\+\.eEdD]+\s*\n)+"
    frame_pattern_text = natoms_line + comment_line + coord_lines
    pat = re.compile(frame_pattern_text, re.MULTILINE)
    mols = []
    for xyz_match in pat.finditer(contents):
        xyz_text = xyz_match.group(0)
        mols.append(XYZ._from_frame_string(xyz_text))
    return XYZ(mols)
141,921
Returns Ea, c, standard error of Ea from the Arrhenius fit: D = c * exp(-Ea/kT) Args: temps ([float]): A sequence of temperatures. units: K diffusivities ([float]): A sequence of diffusivities (e.g., from DiffusionAnalyzer.diffusivity). units: cm^2/s
def fit_arrhenius(temps, diffusivities):
    t_1 = 1 / np.array(temps)
    logd = np.log(diffusivities)
    # Do a least squares regression of log(D) vs 1/T
    a = np.array([t_1, np.ones(len(temps))]).T
    w, res, _, _ = np.linalg.lstsq(a, logd, rcond=None)
    w = np.array(w)
    n = len(temps)
    if n > 2:
        std_Ea = (res[0] / (n - 2) / (
            n * np.var(t_1))) ** 0.5 * const.k / const.e
    else:
        std_Ea = None
    return -w[0] * const.k / const.e, np.exp(w[1]), std_Ea
141,955
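A hedged round-trip sketch (synthetic data generated from made-up Ea and c, so the fit recovers them almost exactly and the residual-based error is ~0):
>>> import numpy as np
>>> from scipy import constants as const
>>> temps = np.array([600.0, 800.0, 1000.0])
>>> Ea, c = 0.3, 1e-4  # eV, cm^2/s (made-up)
>>> D = c * np.exp(-Ea / (const.k / const.e * temps))
>>> fit_arrhenius(temps, D)  # -> (~0.3, ~1e-4, ~0.0)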
Returns (Arrhenius) extrapolated diffusivity at new_temp Args: temps ([float]): A sequence of temperatures. units: K diffusivities ([float]): A sequence of diffusivities (e.g., from DiffusionAnalyzer.diffusivity). units: cm^2/s new_temp (float): desired temperature. units: K Returns: (float) Diffusivity at extrapolated temp in cm^2/s.
def get_extrapolated_diffusivity(temps, diffusivities, new_temp):
    Ea, c, _ = fit_arrhenius(temps, diffusivities)
    return c * np.exp(-Ea / (const.k / const.e * new_temp))
141,956
Returns an iterator for the drift-corrected structures. Use of iterator is to reduce memory usage as # of structures in MD can be huge. You don't often need all the structures all at once. Args: start, stop, step (int): applies a start/stop/step to the iterator. Faster than applying it after generation, as it reduces the number of structures created.
def get_drift_corrected_structures(self, start=None, stop=None, step=None):
    coords = np.array(self.structure.cart_coords)
    species = self.structure.species_and_occu
    lattices = self.lattices
    nsites, nsteps, dim = self.corrected_displacements.shape

    for i in range(start or 0, stop or nsteps, step or 1):
        latt = lattices[0] if len(lattices) == 1 else lattices[i]
        yield Structure(
            latt, species,
            coords + self.corrected_displacements[:, i, :],
            coords_are_cartesian=True)
141,960
Provides a summary of diffusion information. Args: include_msd_t (bool): Whether to include mean square displacement and time data with the data. include_mscd_t (bool): Whether to include mean square charge displacement and time data with the data. Returns: (dict) of diffusion and conductivity data.
def get_summary_dict(self, include_msd_t=False, include_mscd_t=False):
    d = {
        "D": self.diffusivity,
        "D_sigma": self.diffusivity_std_dev,
        "D_charge": self.chg_diffusivity,
        "D_charge_sigma": self.chg_diffusivity_std_dev,
        "S": self.conductivity,
        "S_sigma": self.conductivity_std_dev,
        "S_charge": self.chg_conductivity,
        "D_components": self.diffusivity_components.tolist(),
        "S_components": self.conductivity_components.tolist(),
        "D_components_sigma": self.diffusivity_components_std_dev.tolist(),
        "S_components_sigma": self.conductivity_components_std_dev.tolist(),
        "specie": str(self.specie),
        "step_skip": self.step_skip,
        "time_step": self.time_step,
        "temperature": self.temperature,
        "max_framework_displacement": self.max_framework_displacement,
        "Haven_ratio": self.haven_ratio
    }
    if include_msd_t:
        d["msd"] = self.msd.tolist()
        d["msd_components"] = self.msd_components.tolist()
        d["dt"] = self.dt.tolist()
    if include_mscd_t:
        d["mscd"] = self.mscd.tolist()
    return d
141,961
Get the plot of the smoothed msd vs time graph. Useful for checking convergence. This can be written to an image file. Args: plt: A plot object. Defaults to None, which means one will be generated. mode (str): Determines type of msd plot. Can be "species", "sites", or direction (default). If mode = "mscd", the smoothed mscd vs. time will be plotted.
def get_msd_plot(self, plt=None, mode="specie"):
    from pymatgen.util.plotting import pretty_plot
    plt = pretty_plot(12, 8, plt=plt)
    if np.max(self.dt) > 100000:
        plot_dt = self.dt / 1000
        unit = 'ps'
    else:
        plot_dt = self.dt
        unit = 'fs'

    if mode == "species":
        for sp in sorted(self.structure.composition.keys()):
            indices = [i for i, site in enumerate(self.structure)
                       if site.specie == sp]
            sd = np.average(self.sq_disp_ions[indices, :], axis=0)
            plt.plot(plot_dt, sd, label=sp.__str__())
        plt.legend(loc=2, prop={"size": 20})
    elif mode == "sites":
        for i, site in enumerate(self.structure):
            sd = self.sq_disp_ions[i, :]
            plt.plot(plot_dt, sd, label="%s - %d" % (
                site.specie.__str__(), i))
        plt.legend(loc=2, prop={"size": 20})
    elif mode == "mscd":
        plt.plot(plot_dt, self.mscd, 'r')
        plt.legend(["Overall"], loc=2, prop={"size": 20})
    else:
        # Handle default / invalid mode case
        plt.plot(plot_dt, self.msd, 'k')
        plt.plot(plot_dt, self.msd_components[:, 0], 'r')
        plt.plot(plot_dt, self.msd_components[:, 1], 'g')
        plt.plot(plot_dt, self.msd_components[:, 2], 'b')
        plt.legend(["Overall", "a", "b", "c"], loc=2, prop={"size": 20})

    plt.xlabel("Timestep ({})".format(unit))
    if mode == "mscd":
        plt.ylabel("MSCD ($\\AA^2$)")
    else:
        plt.ylabel("MSD ($\\AA^2$)")
    plt.tight_layout()
    return plt
141,963
Writes MSD data to a csv file that can be easily plotted in other software. Args: filename (str): Filename. Supported formats are csv and dat. If the extension is csv, a csv file is written. Otherwise, a dat format is assumed.
def export_msdt(self, filename):
    fmt = "csv" if filename.lower().endswith(".csv") else "dat"
    delimiter = ", " if fmt == "csv" else " "
    with open(filename, "wt") as f:
        if fmt == "dat":
            f.write("# ")
        f.write(delimiter.join(["t", "MSD", "MSD_a", "MSD_b", "MSD_c",
                                "MSCD"]))
        f.write("\n")
        for dt, msd, msdc, mscd in zip(self.dt, self.msd,
                                       self.msd_components, self.mscd):
            f.write(delimiter.join(["%s" % v for v in [dt, msd] + list(
                msdc) + [mscd]]))
            f.write("\n")
141,964
True if element:amounts are exactly the same, i.e., oxidation state is not considered. Args: sp1: First species. A dict of {specie/element: amt} as per the definition in Site and PeriodicSite. sp2: Second species. A dict of {specie/element: amt} as per the definition in Site and PeriodicSite. Returns: Boolean indicating whether species are the same based on element and amounts.
def are_equal(self, sp1, sp2):
    comp1 = Composition(sp1)
    comp2 = Composition(sp2)
    return comp1.get_el_amt_dict() == comp2.get_el_amt_dict()
142,097
True if there is some overlap in composition between the species Args: sp1: First species. A dict of {specie/element: amt} as per the definition in Site and PeriodicSite. sp2: Second species. A dict of {specie/element: amt} as per the definition in Site and PeriodicSite. Returns: True if one element set is a subset of the other.
def are_equal(self, sp1, sp2):
    set1 = set(sp1.elements)
    set2 = set(sp2.elements)
    return set1.issubset(set2) or set2.issubset(set1)
142,098
Yields lattices for s with lengths and angles close to the target lattice. If supercell_size is specified, the returned lattice will have that number of primitive cells in it Args: target_lattice (Lattice): lattice to match s (Structure): structure whose lattice mappings are generated supercell_size (int): number of primitive cells in the matched lattice
def _get_lattices(self, target_lattice, s, supercell_size=1):
    lattices = s.lattice.find_all_mappings(
        target_lattice, ltol=self.ltol, atol=self.angle_tol,
        skip_rotation_matrix=True)
    for l, _, scale_m in lattices:
        if abs(abs(np.linalg.det(scale_m)) - supercell_size) < 0.5:
            yield l, scale_m
142,102
Fit two structures. Args: struct1 (Structure): 1st structure struct2 (Structure): 2nd structure Returns: True or False.
def fit(self, struct1, struct2):
    struct1, struct2 = self._process_species([struct1, struct2])

    if not self._subset and self._comparator.get_hash(struct1.composition) \
            != self._comparator.get_hash(struct2.composition):
        return None

    struct1, struct2, fu, s1_supercell = self._preprocess(struct1, struct2)
    match = self._match(struct1, struct2, fu, s1_supercell,
                        break_on_match=True)

    if match is None:
        return False
    else:
        return match[0] <= self.stol
142,107
Calculate RMS displacement between two structures Args: struct1 (Structure): 1st structure struct2 (Structure): 2nd structure Returns: rms displacement normalized by (Vol / nsites) ** (1/3) and maximum distance between paired sites. If no matching lattice is found None is returned.
def get_rms_dist(self, struct1, struct2):
    struct1, struct2 = self._process_species([struct1, struct2])
    struct1, struct2, fu, s1_supercell = self._preprocess(struct1, struct2)
    match = self._match(struct1, struct2, fu, s1_supercell, use_rms=True,
                        break_on_match=False)

    if match is None:
        return None
    else:
        return match[0], max(match[1])
142,108
Matches struct2 onto struct1 (which should contain all sites in struct2). Args: struct1, struct2 (Structure): structures to be matched fu (int): size of supercell to create s1_supercell (bool): whether to create the supercell of struct1 (vs struct2) use_rms (bool): whether to minimize the rms of the matching break_on_match (bool): whether to stop search at first valid match
def _strict_match(self, struct1, struct2, fu, s1_supercell=True,
                  use_rms=False, break_on_match=False):
    if fu < 1:
        raise ValueError("fu cannot be less than 1")

    mask, s1_t_inds, s2_t_ind = self._get_mask(struct1, struct2,
                                               fu, s1_supercell)

    if mask.shape[0] > mask.shape[1]:
        raise ValueError('after supercell creation, struct1 must '
                         'have more sites than struct2')

    # check that a valid mapping exists
    if (not self._subset) and mask.shape[1] != mask.shape[0]:
        return None

    if LinearAssignment(mask).min_cost > 0:
        return None

    best_match = None
    # loop over all lattices
    for s1fc, s2fc, avg_l, sc_m in \
            self._get_supercells(struct1, struct2, fu, s1_supercell):
        # compute fractional tolerance
        normalization = (len(s1fc) / avg_l.volume) ** (1 / 3)
        inv_abc = np.array(avg_l.reciprocal_lattice.abc)
        frac_tol = inv_abc * self.stol / (np.pi * normalization)
        # loop over all translations
        for s1i in s1_t_inds:
            t = s1fc[s1i] - s2fc[s2_t_ind]
            t_s2fc = s2fc + t
            if self._cmp_fstruct(s1fc, t_s2fc, frac_tol, mask):
                inv_lll_abc = np.array(
                    avg_l.get_lll_reduced_lattice().reciprocal_lattice.abc)
                lll_frac_tol = inv_lll_abc * self.stol / (np.pi * normalization)
                dist, t_adj, mapping = self._cart_dists(
                    s1fc, t_s2fc, avg_l, mask, normalization, lll_frac_tol)
                if use_rms:
                    val = np.linalg.norm(dist) / len(dist) ** 0.5
                else:
                    val = max(dist)
                if best_match is None or val < best_match[0]:
                    total_t = t + t_adj
                    total_t -= np.round(total_t)
                    best_match = val, dist, sc_m, total_t, mapping
                    if (break_on_match or val < 1e-5) and val < self.stol:
                        return best_match

    if best_match and best_match[0] < self.stol:
        return best_match
142,112
Given a list of structures, use fit to group them by structural equality. Args: s_list ([Structure]): List of structures to be grouped anonymous (bool): Whether to use anonymous mode. Returns: A list of lists of matched structures Assumption: if s1 == s2 but s1 != s3, then s2 and s3 will be put in different groups without comparison.
def group_structures(self, s_list, anonymous=False):
    if self._subset:
        raise ValueError("allow_subset cannot be used with "
                         "group_structures")

    original_s_list = list(s_list)
    s_list = self._process_species(s_list)

    # Use structure hash to pre-group structures
    if anonymous:
        c_hash = lambda c: c.anonymized_formula
    else:
        c_hash = self._comparator.get_hash
    s_hash = lambda s: c_hash(s[1].composition)

    sorted_s_list = sorted(enumerate(s_list), key=s_hash)
    all_groups = []

    # For each pre-grouped list of structures, perform actual matching.
    for k, g in itertools.groupby(sorted_s_list, key=s_hash):
        unmatched = list(g)
        while len(unmatched) > 0:
            i, refs = unmatched.pop(0)
            matches = [i]
            if anonymous:
                inds = filter(
                    lambda i: self.fit_anonymous(refs, unmatched[i][1]),
                    list(range(len(unmatched))))
            else:
                inds = filter(lambda i: self.fit(refs, unmatched[i][1]),
                              list(range(len(unmatched))))
            inds = list(inds)
            matches.extend([unmatched[i][0] for i in inds])
            unmatched = [unmatched[i] for i in range(len(unmatched))
                         if i not in inds]
            all_groups.append([original_s_list[i] for i in matches])

    return all_groups
142,113
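A usage sketch; structures is assumed to be a list of pymatgen Structure objects built elsewhere:

groups = sm.group_structures(structures)
for g in groups:
    print(len(g), g[0].composition.reduced_formula)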
Tries all permutations of matching struct1 to struct2. Args: struct1, struct2 (Structure): Preprocessed input structures Returns: List of (mapping, match)
def _anonymous_match(self, struct1, struct2, fu, s1_supercell=True,
                     use_rms=False, break_on_match=False,
                     single_match=False):
    if not isinstance(self._comparator, SpeciesComparator):
        raise ValueError('Anonymous fitting currently requires '
                         'SpeciesComparator')

    # check that species lists are comparable
    sp1 = struct1.composition.elements
    sp2 = struct2.composition.elements
    if len(sp1) != len(sp2):
        return None

    ratio = fu if s1_supercell else 1 / fu
    swapped = len(struct1) * ratio < len(struct2)

    s1_comp = struct1.composition
    s2_comp = struct2.composition
    matches = []
    for perm in itertools.permutations(sp2):
        sp_mapping = dict(zip(sp1, perm))

        # do quick check that compositions are compatible
        mapped_comp = Composition({sp_mapping[k]: v
                                   for k, v in s1_comp.items()})
        if (not self._subset) and (
                self._comparator.get_hash(mapped_comp) !=
                self._comparator.get_hash(s2_comp)):
            continue

        mapped_struct = struct1.copy()
        mapped_struct.replace_species(sp_mapping)
        if swapped:
            m = self._strict_match(struct2, mapped_struct, fu,
                                   (not s1_supercell), use_rms,
                                   break_on_match)
        else:
            m = self._strict_match(mapped_struct, struct2, fu, s1_supercell,
                                   use_rms, break_on_match)
        if m:
            matches.append((sp_mapping, m))
            if single_match:
                break
    return matches
142,116
Performs an anonymous fitting, which allows distinct species in one structure to map to those in another. E.g., to compare if the Li2O and Na2O structures are similar. Args: struct1 (Structure): 1st structure struct2 (Structure): 2nd structure Returns: True/False: Whether a species mapping can map struct1 to struct2
def fit_anonymous(self, struct1, struct2, niggli=True):
    struct1, struct2 = self._process_species([struct1, struct2])
    struct1, struct2, fu, s1_supercell = self._preprocess(struct1, struct2,
                                                          niggli)

    matches = self._anonymous_match(struct1, struct2, fu, s1_supercell,
                                    break_on_match=True, single_match=True)

    if matches:
        return True
    else:
        return False
142,120
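A sketch of the Li2O/Na2O case from the docstring; li2o and na2o are assumed to be Structure objects for the two compounds:

sm = StructureMatcher()  # default comparator is SpeciesComparator
print(sm.fit(li2o, na2o))            # False: species differ
print(sm.fit_anonymous(li2o, na2o))  # True: the Li->Na mapping matches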
Convenience constructor to make a ConversionElectrode from a composition and a phase diagram. Args: comp: Starting composition for ConversionElectrode, e.g., Composition("FeF3") pd: A PhaseDiagram of the relevant system (e.g., Li-Fe-F) working_ion_symbol: Element symbol of working ion. Defaults to Li.
def from_composition_and_pd(comp, pd, working_ion_symbol="Li"):
    working_ion = Element(working_ion_symbol)
    entry = None
    working_ion_entry = None
    for e in pd.stable_entries:
        if e.composition.reduced_formula == comp.reduced_formula:
            entry = e
        elif e.is_element and \
                e.composition.reduced_formula == working_ion_symbol:
            working_ion_entry = e

    if not entry:
        raise ValueError("No stable compound found at composition {}."
                         .format(comp))

    profile = pd.get_element_profile(working_ion, comp)
    # Need to reverse because voltage goes from most charged to most
    # discharged.
    profile.reverse()
    if len(profile) < 2:
        return None

    working_ion = working_ion_entry.composition.elements[0].symbol
    normalization_els = {}
    for el, amt in comp.items():
        if el != Element(working_ion):
            normalization_els[el] = amt
    vpairs = [ConversionVoltagePair.from_steps(profile[i], profile[i + 1],
                                               normalization_els)
              for i in range(len(profile) - 1)]
    return ConversionElectrode(vpairs, working_ion_entry, comp)
142,128
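A construction sketch; entries is assumed to be all computed entries in the Li-Fe-F system, and the module paths follow recent pymatgen (older versions may differ):

from pymatgen import Composition
from pymatgen.analysis.phase_diagram import PhaseDiagram
from pymatgen.apps.battery.conversion_battery import ConversionElectrode

pd = PhaseDiagram(entries)
electrode = ConversionElectrode.from_composition_and_pd(
    Composition("FeF3"), pd, working_ion_symbol="Li")
print(electrode.get_average_voltage())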
Convenience constructor to make a ConversionElectrode from a composition and all entries in a chemical system. Args: comp: Starting composition for ConversionElectrode, e.g., Composition("FeF3") entries_in_chemsys: Sequence containing all entries in a chemical system. E.g., all Li-Fe-F containing entries. working_ion_symbol: Element symbol of working ion. Defaults to Li.
def from_composition_and_entries(comp, entries_in_chemsys,
                                 working_ion_symbol="Li"):
    pd = PhaseDiagram(entries_in_chemsys)
    return ConversionElectrode.from_composition_and_pd(comp, pd,
                                                       working_ion_symbol)
142,129
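The same electrode in one step, skipping the explicit PhaseDiagram (continuing the sketch above):

electrode = ConversionElectrode.from_composition_and_entries(
    Composition("FeF3"), entries, working_ion_symbol="Li")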
Creates a ConversionVoltagePair from two steps in the element profile from a PD analysis. Args: step1: Starting step step2: Ending step normalization_els: Elements to normalize the reaction by, to ensure correct capacities.
def from_steps(step1, step2, normalization_els):
    working_ion_entry = step1["element_reference"]
    working_ion = working_ion_entry.composition.elements[0].symbol
    working_ion_valence = max(Element(working_ion).oxidation_states)
    voltage = (-step1["chempot"] + working_ion_entry.energy_per_atom) \
        / working_ion_valence
    mAh = (step2["evolution"] - step1["evolution"]) \
        * Charge(1, "e").to("C") * Time(1, "s").to("h") * N_A * 1000 \
        * working_ion_valence
    licomp = Composition(working_ion)
    prev_rxn = step1["reaction"]
    reactants = {comp: abs(prev_rxn.get_coeff(comp))
                 for comp in prev_rxn.products if comp != licomp}

    curr_rxn = step2["reaction"]
    products = {comp: abs(curr_rxn.get_coeff(comp))
                for comp in curr_rxn.products if comp != licomp}

    reactants[licomp] = (step2["evolution"] - step1["evolution"])

    rxn = BalancedReaction(reactants, products)

    for el, amt in normalization_els.items():
        if rxn.get_el_amount(el) > 1e-6:
            rxn.normalize_to_element(el, amt)
            break

    prev_mass_dischg = sum([prev_rxn.all_comp[i].weight
                            * abs(prev_rxn.coeffs[i])
                            for i in range(len(prev_rxn.all_comp))]) / 2
    vol_charge = sum([abs(prev_rxn.get_coeff(e.composition))
                      * e.structure.volume
                      for e in step1["entries"]
                      if e.composition.reduced_formula != working_ion])
    mass_discharge = sum([curr_rxn.all_comp[i].weight
                          * abs(curr_rxn.coeffs[i])
                          for i in range(len(curr_rxn.all_comp))]) / 2
    mass_charge = prev_mass_dischg
    vol_discharge = sum([abs(curr_rxn.get_coeff(e.composition))
                         * e.structure.volume
                         for e in step2["entries"]
                         if e.composition.reduced_formula != working_ion])

    totalcomp = Composition({})
    for comp in prev_rxn.products:
        if comp.reduced_formula != working_ion:
            totalcomp += comp * abs(prev_rxn.get_coeff(comp))
    frac_charge = totalcomp.get_atomic_fraction(Element(working_ion))

    totalcomp = Composition({})
    for comp in curr_rxn.products:
        if comp.reduced_formula != working_ion:
            totalcomp += comp * abs(curr_rxn.get_coeff(comp))
    frac_discharge = totalcomp.get_atomic_fraction(Element(working_ion))

    entries_charge = step2["entries"]
    entries_discharge = step1["entries"]

    return ConversionVoltagePair(rxn, voltage, mAh, vol_charge,
                                 vol_discharge, mass_charge,
                                 mass_discharge, frac_charge,
                                 frac_discharge, entries_charge,
                                 entries_discharge, working_ion_entry)
142,137
Initializes a PCAN Channel Parameters: Channel : A TPCANHandle representing a PCAN Channel Btr0Btr1 : The speed for the communication (BTR0BTR1 code) HwType : NON PLUG&PLAY: The type of hardware and operation mode IOPort : NON PLUG&PLAY: The I/O address for the parallel port Interrupt: NON PLUG&PLAY: Interrupt number of the parallel port Returns: A TPCANStatus error code
def Initialize(self, Channel, Btr0Btr1, HwType=TPCANType(0),
               IOPort=c_uint(0), Interrupt=c_ushort(0)):
    try:
        res = self.__m_dllBasic.CAN_Initialize(Channel, Btr0Btr1, HwType,
                                               IOPort, Interrupt)
        return TPCANStatus(res)
    except:
        logger.error("Exception on PCANBasic.Initialize")
        raise
142,310
Uninitializes one or all PCAN Channels initialized by CAN_Initialize Remarks: Passing the TPCANHandle value "PCAN_NONEBUS" uninitializes all initialized channels Parameters: Channel : A TPCANHandle representing a PCAN Channel Returns: A TPCANStatus error code
def Uninitialize(self, Channel):
    try:
        res = self.__m_dllBasic.CAN_Uninitialize(Channel)
        return TPCANStatus(res)
    except:
        logger.error("Exception on PCANBasic.Uninitialize")
        raise
142,312
Resets the receive and transmit queues of the PCAN Channel Remarks: A reset of the CAN controller is not performed Parameters: Channel : A TPCANHandle representing a PCAN Channel Returns: A TPCANStatus error code
def Reset(self, Channel):
    try:
        res = self.__m_dllBasic.CAN_Reset(Channel)
        return TPCANStatus(res)
    except:
        logger.error("Exception on PCANBasic.Reset")
        raise
142,313
Gets the current status of a PCAN Channel Parameters: Channel : A TPCANHandle representing a PCAN Channel Returns: A TPCANStatus error code
def GetStatus(self, Channel):
    try:
        res = self.__m_dllBasic.CAN_GetStatus(Channel)
        return TPCANStatus(res)
    except:
        logger.error("Exception on PCANBasic.GetStatus")
        raise
142,314
Transmits a CAN message Parameters: Channel : A TPCANHandle representing a PCAN Channel MessageBuffer: A TPCANMsg representing the CAN message to be sent Returns: A TPCANStatus error code
def Write(self, Channel, MessageBuffer):
    try:
        res = self.__m_dllBasic.CAN_Write(Channel, byref(MessageBuffer))
        return TPCANStatus(res)
    except:
        logger.error("Exception on PCANBasic.Write")
        raise
142,317
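A sketch of a typical session built on these wrappers: initialize a channel, send one standard frame, and clean up. The channel/bitrate constants and the TPCANMsg fields follow PEAK's PCANBasic Python header; the ID and payload here are illustrative:

pcan = PCANBasic()
if pcan.Initialize(PCAN_USBBUS1, PCAN_BAUD_500K) == PCAN_ERROR_OK:
    msg = TPCANMsg()
    msg.ID = 0x100
    msg.MSGTYPE = PCAN_MESSAGE_STANDARD
    msg.LEN = 2
    msg.DATA[0] = 0x01
    msg.DATA[1] = 0x02
    pcan.Write(PCAN_USBBUS1, msg)
    pcan.Uninitialize(PCAN_USBBUS1)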
Transmits a CAN message over a FD capable PCAN Channel Parameters: Channel : The handle of a FD capable PCAN Channel MessageBuffer: A TPCANMsgFD buffer with the message to be sent Returns: A TPCANStatus error code
def WriteFD(self, Channel, MessageBuffer):
    try:
        res = self.__m_dllBasic.CAN_WriteFD(Channel, byref(MessageBuffer))
        return TPCANStatus(res)
    except:
        logger.error("Exception on PCANBasic.WriteFD")
        raise
142,318
User authenticate method. --- description: Authenticate user with supplied credentials. parameters: - name: username in: formData type: string required: true - name: password in: formData type: string required: true responses: 200: description: User successfully logged in. 401: description: User login failed.
def login():
    try:
        username = request.form.get("username")
        password = request.form.get("password")
        user = authenticate(username, password)
        if not user:
            raise Exception("User not found!")
        resp = jsonify({"message": "User authenticated"})
        resp.status_code = 200
        access_token = jwt.jwt_encode_callback(user)

        # add token to response headers - so SwaggerUI can use it
        resp.headers.extend({'jwt-token': access_token})
    except Exception:
        resp = jsonify({"message": "Bad username and/or password"})
        resp.status_code = 401
    return resp
142,512
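A client-side sketch against this endpoint; the URL is a placeholder, and the jwt-token header name matches the handler above:

import requests

resp = requests.post("http://localhost:5000/login",
                     data={"username": "alice", "password": "secret"})
if resp.status_code == 200:
    token = resp.headers.get("jwt-token")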