def _fastq_sanger_convert_fastq_sanger(
in_file: _TextIOSource, out_file: _TextIOSource
) -> int:
"""Fast Sanger FASTQ to Sanger FASTQ conversion (PRIVATE).
Useful for removing line wrapping and the redundant second identifier
    on the plus lines. Will also check that the quality string is valid.
Avoids creating SeqRecord and Seq objects in order to speed up this
conversion.
"""
# Map unexpected chars to null
mapping = "".join(
[chr(0) for ascii in range(33)]
+ [chr(ascii) for ascii in range(33, 127)]
+ [chr(0) for ascii in range(127, 256)]
)
assert len(mapping) == 256
    return _fastq_generic(in_file, out_file, mapping)
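

# Illustrative sketch (not part of Biopython): the converters above depend on
# str.translate() accepting a 256-character table indexed by ordinal, so each
# quality string is validated and remapped in a single C-level pass, with
# chr(0) marking any unexpected character.
def _demo_quality_translate_table():
    """Show the 256-entry translate-table trick (hypothetical helper).

    >>> table = "".join(chr(c) if 33 <= c < 127 else chr(0) for c in range(256))
    >>> "IIII#".translate(table)
    'IIII#'
    >>> chr(0) in "II II".translate(table)  # space is not a valid quality
    True
    """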


def _fastq_solexa_convert_fastq_solexa(
in_file: _TextIOSource, out_file: _TextIOSource
) -> int:
"""Fast Solexa FASTQ to Solexa FASTQ conversion (PRIVATE).
Useful for removing line wrapping and the redundant second identifier
    on the plus lines. Will also check that the quality string is valid.
Avoids creating SeqRecord and Seq objects in order to speed up this
conversion.
"""
# Map unexpected chars to null
mapping = "".join(
[chr(0) for ascii in range(59)]
+ [chr(ascii) for ascii in range(59, 127)]
+ [chr(0) for ascii in range(127, 256)]
)
assert len(mapping) == 256
    return _fastq_generic(in_file, out_file, mapping)


def _fastq_illumina_convert_fastq_illumina(
in_file: _TextIOSource, out_file: _TextIOSource
) -> int:
"""Fast Illumina 1.3+ FASTQ to Illumina 1.3+ FASTQ conversion (PRIVATE).
Useful for removing line wrapping and the redundant second identifier
    on the plus lines. Will also check that the quality string is valid.
Avoids creating SeqRecord and Seq objects in order to speed up this
conversion.
"""
# Map unexpected chars to null
mapping = "".join(
[chr(0) for ascii in range(64)]
+ [chr(ascii) for ascii in range(64, 127)]
+ [chr(0) for ascii in range(127, 256)]
)
assert len(mapping) == 256
    return _fastq_generic(in_file, out_file, mapping)


def _fastq_illumina_convert_fastq_sanger(
in_file: _TextIOSource, out_file: _TextIOSource
) -> int:
"""Fast Illumina 1.3+ FASTQ to Sanger FASTQ conversion (PRIVATE).
Avoids creating SeqRecord and Seq objects in order to speed up this
conversion.
"""
# Map unexpected chars to null
mapping = "".join(
[chr(0) for ascii in range(64)]
+ [chr(33 + q) for q in range(62 + 1)]
+ [chr(0) for ascii in range(127, 256)]
)
assert len(mapping) == 256
    return _fastq_generic(in_file, out_file, mapping)


def _fastq_sanger_convert_fastq_illumina(
in_file: _TextIOSource, out_file: _TextIOSource
) -> int:
"""Fast Sanger FASTQ to Illumina 1.3+ FASTQ conversion (PRIVATE).
Avoids creating SeqRecord and Seq objects in order to speed up this
conversion. Will issue a warning if the scores had to be truncated at 62
    (maximum possible in the Illumina 1.3+ FASTQ format).
"""
# Map unexpected chars to null
trunc_char = chr(1)
mapping = "".join(
[chr(0) for ascii in range(33)]
+ [chr(64 + q) for q in range(62 + 1)]
+ [trunc_char for ascii in range(96, 127)]
+ [chr(0) for ascii in range(127, 256)]
)
assert len(mapping) == 256
return _fastq_generic2(
in_file,
out_file,
mapping,
trunc_char,
"Data loss - max PHRED quality 62 in Illumina 1.3+ FASTQ",
    )


def _fastq_solexa_convert_fastq_sanger(
in_file: _TextIOSource, out_file: _TextIOSource
) -> int:
"""Fast Solexa FASTQ to Sanger FASTQ conversion (PRIVATE).
Avoids creating SeqRecord and Seq objects in order to speed up this
conversion.
"""
# Map unexpected chars to null
mapping = "".join(
[chr(0) for ascii in range(59)]
+ [
chr(33 + int(round(phred_quality_from_solexa(q))))
for q in range(-5, 62 + 1)
]
+ [chr(0) for ascii in range(127, 256)]
)
assert len(mapping) == 256
    return _fastq_generic(in_file, out_file, mapping)
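

# Hedged sketch (not part of Biopython): the table above leans on
# phred_quality_from_solexa(). Since PHRED = -10*log10(p) while
# Solexa = -10*log10(p/(1-p)), the conversion works out to
# 10*log10(10**(solexa/10) + 1), and the two scales converge at high quality.
def _demo_phred_from_solexa():
    """Recompute the Solexa-to-PHRED formula directly (hypothetical helper).

    >>> from math import log10
    >>> def phred_from_solexa(q):
    ...     return 10 * log10(10 ** (q / 10.0) + 1)
    >>> round(phred_from_solexa(-5), 2)  # lowest Solexa quality
    1.19
    >>> round(phred_from_solexa(40), 2)  # scales agree at high quality
    40.0
    """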


def _fastq_sanger_convert_fastq_solexa(
in_file: _TextIOSource, out_file: _TextIOSource
) -> int:
"""Fast Sanger FASTQ to Solexa FASTQ conversion (PRIVATE).
Avoids creating SeqRecord and Seq objects in order to speed up this
conversion. Will issue a warning if the scores had to be truncated at 62
    (maximum possible in the Solexa FASTQ format).
"""
# Map unexpected chars to null
trunc_char = chr(1)
mapping = "".join(
[chr(0) for ascii in range(33)]
+ [chr(64 + int(round(solexa_quality_from_phred(q)))) for q in range(62 + 1)]
+ [trunc_char for ascii in range(96, 127)]
+ [chr(0) for ascii in range(127, 256)]
)
assert len(mapping) == 256
return _fastq_generic2(
in_file,
out_file,
mapping,
trunc_char,
"Data loss - max Solexa quality 62 in Solexa FASTQ",
    )


def _fastq_solexa_convert_fastq_illumina(
in_file: _TextIOSource, out_file: _TextIOSource
) -> int:
"""Fast Solexa FASTQ to Illumina 1.3+ FASTQ conversion (PRIVATE).
Avoids creating SeqRecord and Seq objects in order to speed up this
conversion.
"""
# Map unexpected chars to null
mapping = "".join(
[chr(0) for ascii in range(59)]
+ [
chr(64 + int(round(phred_quality_from_solexa(q))))
for q in range(-5, 62 + 1)
]
+ [chr(0) for ascii in range(127, 256)]
)
assert len(mapping) == 256
    return _fastq_generic(in_file, out_file, mapping)


def _fastq_illumina_convert_fastq_solexa(
in_file: _TextIOSource, out_file: _TextIOSource
) -> int:
"""Fast Illumina 1.3+ FASTQ to Solexa FASTQ conversion (PRIVATE).
Avoids creating SeqRecord and Seq objects in order to speed up this
conversion.
"""
# Map unexpected chars to null
mapping = "".join(
[chr(0) for ascii in range(64)]
+ [chr(64 + int(round(solexa_quality_from_phred(q)))) for q in range(62 + 1)]
+ [chr(0) for ascii in range(127, 256)]
)
assert len(mapping) == 256
    return _fastq_generic(in_file, out_file, mapping)


def _fastq_convert_fasta(in_file: _TextIOSource, out_file: _TextIOSource) -> int:
"""Fast FASTQ to FASTA conversion (PRIVATE).
Avoids dealing with the FASTQ quality encoding, and creating SeqRecord and
Seq objects in order to speed up this conversion.
NOTE - This does NOT check the characters used in the FASTQ quality string
are valid!
"""
# For real speed, don't even make SeqRecord and Seq objects!
count = 0
with as_handle(out_file, "w") as out_handle:
for title, seq, qual in FastqGeneralIterator(in_file):
count += 1
out_handle.write(f">{title}\n")
# Do line wrapping
for i in range(0, len(seq), 60):
out_handle.write(seq[i : i + 60] + "\n")
    return count


def _fastq_convert_tab(in_file: _TextIOSource, out_file: _TextIOSource) -> int:
"""Fast FASTQ to simple tabbed conversion (PRIVATE).
Avoids dealing with the FASTQ quality encoding, and creating SeqRecord and
Seq objects in order to speed up this conversion.
NOTE - This does NOT check the characters used in the FASTQ quality string
are valid!
"""
# For real speed, don't even make SeqRecord and Seq objects!
count = 0
with as_handle(out_file, "w") as out_handle:
for title, seq, qual in FastqGeneralIterator(in_file):
count += 1
out_handle.write(f"{title.split(None, 1)[0]}\t{seq}\n")
    return count


def _fastq_convert_qual(
in_file: _TextIOSource,
out_file: _TextIOSource,
mapping: Mapping[str, str],
) -> int:
"""FASTQ helper function for QUAL output (PRIVATE).
Mapping should be a dictionary mapping expected ASCII characters from the
FASTQ quality string to PHRED quality scores (as strings).
"""
# For real speed, don't even make SeqRecord and Seq objects!
count = 0
with as_handle(out_file, "w") as out_handle:
for title, seq, qual in FastqGeneralIterator(in_file):
count += 1
out_handle.write(f">{title}\n")
# map the qual... note even with Sanger encoding max 2 digits
try:
qualities_strs = [mapping[ascii_] for ascii_ in qual]
except KeyError:
raise ValueError("Invalid character in quality string") from None
data = " ".join(qualities_strs)
while len(data) > 60:
# Know quality scores are either 1 or 2 digits, so there
# must be a space in any three consecutive characters.
if data[60] == " ":
out_handle.write(data[:60] + "\n")
data = data[61:]
elif data[59] == " ":
out_handle.write(data[:59] + "\n")
data = data[60:]
else:
assert data[58] == " ", "Internal logic failure in wrapping"
out_handle.write(data[:58] + "\n")
data = data[59:]
out_handle.write(data + "\n")
    return count
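

# Illustrative sketch (not part of Biopython): the wrapping loop above relies
# on every PHRED score being one or two digits, so any three consecutive
# characters of the space-joined string must include a space. The invariant
# can be checked directly:
def _demo_qual_wrap_invariant():
    """Verify the one-or-two digit spacing invariant (hypothetical helper).

    >>> data = " ".join(str(q) for q in range(94))  # Sanger PHRED range 0..93
    >>> all(" " in data[i : i + 3] for i in range(len(data) - 2))
    True
    """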


def _fastq_sanger_convert_qual(in_file: _TextIOSource, out_file: _TextIOSource) -> int:
"""Fast Sanger FASTQ to QUAL conversion (PRIVATE)."""
mapping = {chr(q + 33): str(q) for q in range(93 + 1)}
    return _fastq_convert_qual(in_file, out_file, mapping)


def _fastq_solexa_convert_qual(in_file: _TextIOSource, out_file: _TextIOSource) -> int:
"""Fast Solexa FASTQ to QUAL conversion (PRIVATE)."""
mapping = {
chr(q + 64): str(int(round(phred_quality_from_solexa(q))))
for q in range(-5, 62 + 1)
}
    return _fastq_convert_qual(in_file, out_file, mapping)


def _fastq_illumina_convert_qual(
in_file: _TextIOSource, out_file: _TextIOSource
) -> int:
"""Fast Illumina 1.3+ FASTQ to QUAL conversion (PRIVATE)."""
mapping = {chr(q + 64): str(q) for q in range(62 + 1)}
    return _fastq_convert_qual(in_file, out_file, mapping)


def _sff_file_header(handle):
"""Read in an SFF file header (PRIVATE).
Assumes the handle is at the start of the file, will read forwards
    through the header and leave the handle pointing at the first record.
Returns a tuple of values from the header (header_length, index_offset,
index_length, number_of_reads, flows_per_read, flow_chars, key_sequence)
>>> with open("Roche/greek.sff", "rb") as handle:
... values = _sff_file_header(handle)
...
>>> print(values[0])
840
>>> print(values[1])
65040
>>> print(values[2])
256
>>> print(values[3])
24
>>> print(values[4])
800
>>> values[-1]
'TCAG'
"""
# file header (part one)
    # uses big endian encoding ">"
# magic_number I
# version 4B
# index_offset Q
# index_length I
# number_of_reads I
# header_length H
# key_length H
# number_of_flows_per_read H
# flowgram_format_code B
# [rest of file header depends on the number of flows and how many keys]
fmt = ">4s4BQIIHHHB"
assert 31 == struct.calcsize(fmt)
data = handle.read(31)
if not data:
raise ValueError("Empty file.")
elif len(data) < 31:
raise ValueError("File too small to hold a valid SFF header.")
try:
(
magic_number,
ver0,
ver1,
ver2,
ver3,
index_offset,
index_length,
number_of_reads,
header_length,
key_length,
number_of_flows_per_read,
flowgram_format,
) = struct.unpack(fmt, data)
except TypeError:
raise StreamModeError("SFF files must be opened in binary mode.") from None
if magic_number in [_hsh, _srt, _mft]:
# Probably user error, calling Bio.SeqIO.parse() twice!
raise ValueError("Handle seems to be at SFF index block, not start")
if magic_number != _sff: # 779314790
raise ValueError(f"SFF file did not start '.sff', but {magic_number!r}")
if (ver0, ver1, ver2, ver3) != (0, 0, 0, 1):
raise ValueError(
"Unsupported SFF version in header, %i.%i.%i.%i" % (ver0, ver1, ver2, ver3)
)
if flowgram_format != 1:
raise ValueError("Flowgram format code %i not supported" % flowgram_format)
if (index_offset != 0) ^ (index_length != 0):
raise ValueError(
"Index offset %i but index length %i" % (index_offset, index_length)
)
flow_chars = handle.read(number_of_flows_per_read).decode("ASCII")
key_sequence = handle.read(key_length).decode("ASCII")
# According to the spec, the header_length field should be the total number
# of bytes required by this set of header fields, and should be equal to
# "31 + number_of_flows_per_read + key_length" rounded up to the next value
# divisible by 8.
assert header_length % 8 == 0
padding = header_length - number_of_flows_per_read - key_length - 31
assert 0 <= padding < 8, padding
if handle.read(padding).count(_null) != padding:
import warnings
from Bio import BiopythonParserWarning
warnings.warn(
"Your SFF file is invalid, post header %i byte "
"null padding region contained data." % padding,
BiopythonParserWarning,
)
return (
header_length,
index_offset,
index_length,
number_of_reads,
number_of_flows_per_read,
flow_chars,
key_sequence,
    )
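

# Illustrative sketch (not part of Biopython): the fixed 31-byte part of an
# SFF header can be round-tripped with struct using the same ">4s4BQIIHHHB"
# layout the parser above unpacks (all field values here are hypothetical).
def _demo_sff_header_pack():
    """Pack and unpack a fake SFF fixed header (hypothetical helper).

    >>> import struct
    >>> raw = struct.pack(
    ...     ">4s4BQIIHHHB",
    ...     b".sff",  # magic number
    ...     0, 0, 0, 1,  # version 0.0.0.1
    ...     0,  # index_offset (zero means no index block)
    ...     0,  # index_length
    ...     10,  # number_of_reads
    ...     40,  # header_length, 31 + flows + key rounded up to 8
    ...     4,  # key_length
    ...     4,  # number_of_flows_per_read
    ...     1,  # flowgram_format_code
    ... )
    >>> len(raw)
    31
    >>> struct.unpack(">4s4BQIIHHHB", raw)[8]  # the header_length field
    40
    """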


def _sff_do_slow_index(handle):
"""Generate an index by scanning though all the reads in an SFF file (PRIVATE).
This is a slow but generic approach if we can't parse the provided index
(if present).
Will use the handle seek/tell functions.
"""
handle.seek(0)
(
header_length,
index_offset,
index_length,
number_of_reads,
number_of_flows_per_read,
flow_chars,
key_sequence,
) = _sff_file_header(handle)
# Now on to the reads...
read_header_fmt = ">2HI4H"
read_header_size = struct.calcsize(read_header_fmt)
# NOTE - assuming flowgram_format==1, which means struct type H
read_flow_fmt = ">%iH" % number_of_flows_per_read
read_flow_size = struct.calcsize(read_flow_fmt)
assert 1 == struct.calcsize(">B")
assert 1 == struct.calcsize(">s")
assert 1 == struct.calcsize(">c")
assert read_header_size % 8 == 0 # Important for padding calc later!
for read in range(number_of_reads):
record_offset = handle.tell()
if record_offset == index_offset:
# Found index block within reads, ignore it:
offset = index_offset + index_length
if offset % 8:
offset += 8 - (offset % 8)
assert offset % 8 == 0
handle.seek(offset)
record_offset = offset
# assert record_offset%8 == 0 # Worth checking, but slow
# First the fixed header
data = handle.read(read_header_size)
(
read_header_length,
name_length,
seq_len,
clip_qual_left,
clip_qual_right,
clip_adapter_left,
clip_adapter_right,
) = struct.unpack(read_header_fmt, data)
if read_header_length < 10 or read_header_length % 8 != 0:
raise ValueError(
"Malformed read header, says length is %i:\n%r"
% (read_header_length, data)
)
# now the name and any padding (remainder of header)
name = handle.read(name_length).decode()
padding = read_header_length - read_header_size - name_length
if handle.read(padding).count(_null) != padding:
import warnings
from Bio import BiopythonParserWarning
warnings.warn(
"Your SFF file is invalid, post name %i byte "
"padding region contained data" % padding,
BiopythonParserWarning,
)
assert record_offset + read_header_length == handle.tell()
# now the flowgram values, flowgram index, bases and qualities
size = read_flow_size + 3 * seq_len
handle.seek(size, 1)
# now any padding...
padding = size % 8
if padding:
padding = 8 - padding
if handle.read(padding).count(_null) != padding:
import warnings
from Bio import BiopythonParserWarning
warnings.warn(
"Your SFF file is invalid, post quality %i "
"byte padding region contained data" % padding,
BiopythonParserWarning,
)
yield name, record_offset
if handle.tell() % 8 != 0:
raise ValueError("After scanning reads, did not end on a multiple of 8") |


def _sff_find_roche_index(handle):
"""Locate any existing Roche style XML meta data and read index (PRIVATE).
Makes a number of hard coded assumptions based on reverse engineered SFF
files from Roche 454 machines.
Returns a tuple of read count, SFF "index" offset and size, XML offset
and size, and the actual read index offset and size.
Raises a ValueError for unsupported or non-Roche index blocks.
"""
handle.seek(0)
(
header_length,
index_offset,
index_length,
number_of_reads,
number_of_flows_per_read,
flow_chars,
key_sequence,
) = _sff_file_header(handle)
assert handle.tell() == header_length
if not index_offset or not index_length:
raise ValueError("No index present in this SFF file")
# Now jump to the header...
handle.seek(index_offset)
fmt = ">4s4B"
fmt_size = struct.calcsize(fmt)
data = handle.read(fmt_size)
if not data:
raise ValueError(
"Premature end of file? Expected index of size %i at offset %i, found nothing"
% (index_length, index_offset)
)
if len(data) < fmt_size:
raise ValueError(
"Premature end of file? Expected index of size %i at offset %i, found %r"
% (index_length, index_offset, data)
)
magic_number, ver0, ver1, ver2, ver3 = struct.unpack(fmt, data)
if magic_number == _mft: # 778921588
# Roche 454 manifest index
# This is typical from raw Roche 454 SFF files (2009), and includes
# both an XML manifest and the sorted index.
if (ver0, ver1, ver2, ver3) != (49, 46, 48, 48):
# This is "1.00" as a string
raise ValueError(
"Unsupported version in .mft index header, %i.%i.%i.%i"
% (ver0, ver1, ver2, ver3)
)
fmt2 = ">LL"
fmt2_size = struct.calcsize(fmt2)
xml_size, data_size = struct.unpack(fmt2, handle.read(fmt2_size))
if index_length != fmt_size + fmt2_size + xml_size + data_size:
raise ValueError(
"Problem understanding .mft index header, %i != %i + %i + %i + %i"
% (index_length, fmt_size, fmt2_size, xml_size, data_size)
)
return (
number_of_reads,
header_length,
index_offset,
index_length,
index_offset + fmt_size + fmt2_size,
xml_size,
index_offset + fmt_size + fmt2_size + xml_size,
data_size,
)
elif magic_number == _srt: # 779317876
# Roche 454 sorted index
# I've had this from Roche tool sfffile when the read identifiers
# had nonstandard lengths and there was no XML manifest.
if (ver0, ver1, ver2, ver3) != (49, 46, 48, 48):
# This is "1.00" as a string
raise ValueError(
"Unsupported version in .srt index header, %i.%i.%i.%i"
% (ver0, ver1, ver2, ver3)
)
data = handle.read(4)
if data != _null * 4:
raise ValueError("Did not find expected null four bytes in .srt index")
return (
number_of_reads,
header_length,
index_offset,
index_length,
0,
0,
index_offset + fmt_size + 4,
index_length - fmt_size - 4,
)
elif magic_number == _hsh:
raise ValueError(
"Hash table style indexes (.hsh) in SFF files are not (yet) supported"
)
else:
raise ValueError(
f"Unknown magic number {magic_number!r} in SFF index header:\n{data!r}"
        )


def ReadRocheXmlManifest(handle):
"""Read any Roche style XML manifest data in the SFF "index".
The SFF file format allows for multiple different index blocks, and Roche
took advantage of this to define their own index block which also embeds
an XML manifest string. This is not a publicly documented extension to
the SFF file format, this was reverse engineered.
The handle should be to an SFF file opened in binary mode. This function
will use the handle seek/tell functions and leave the handle in an
arbitrary location.
Any XML manifest found is returned as a Python string, which you can then
parse as appropriate, or reuse when writing out SFF files with the
SffWriter class.
    Returns a string, or raises a ValueError if a Roche manifest could not be
found.
"""
(
number_of_reads,
header_length,
index_offset,
index_length,
xml_offset,
xml_size,
read_index_offset,
read_index_size,
) = _sff_find_roche_index(handle)
if not xml_offset or not xml_size:
raise ValueError("No XML manifest found")
handle.seek(xml_offset)
    return handle.read(xml_size).decode()


def _sff_read_roche_index(handle):
"""Read any existing Roche style read index provided in the SFF file (PRIVATE).
Will use the handle seek/tell functions.
This works on ".srt1.00" and ".mft1.00" style Roche SFF index blocks.
    Roche SFF indices use base 255 not 256, meaning we see bytes in the
range 0 to 254 only. This appears to be so that byte 0xFF (character 255)
can be used as a marker character to separate entries (required if the
read name lengths vary).
Note that since only four bytes are used for the read offset, this is
limited to 255^4 bytes (nearly 4GB). If you try to use the Roche sfffile
tool to combine SFF files beyond this limit, they issue a warning and
omit the index (and manifest).
"""
(
number_of_reads,
header_length,
index_offset,
index_length,
xml_offset,
xml_size,
read_index_offset,
read_index_size,
) = _sff_find_roche_index(handle)
# Now parse the read index...
handle.seek(read_index_offset)
fmt = ">5B"
for read in range(number_of_reads):
# TODO - Be more aware of when the index should end?
data = handle.read(6)
while True:
more = handle.read(1)
if not more:
raise ValueError("Premature end of file!")
data += more
if more == _flag:
break
assert data[-1:] == _flag, data[-1:]
name = data[:-6].decode()
off4, off3, off2, off1, off0 = struct.unpack(fmt, data[-6:-1])
offset = off0 + 255 * off1 + 65025 * off2 + 16581375 * off3
if off4:
# Could in theory be used as a fifth piece of offset information,
            # i.e. offset += 4228250625 * off4, but testing the Roche tools this
            # is not the case. They simply don't support such large indexes.
raise ValueError("Expected a null terminator to the read name.")
yield name, offset
if handle.tell() != read_index_offset + read_index_size:
raise ValueError(
"Problem with index length? %i vs %i"
% (handle.tell(), read_index_offset + read_index_size)
        )
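

# Illustrative sketch (not part of Biopython): each Roche index entry ends
# with five offset bytes read in base 255, least significant byte last as
# unpacked above; 0xFF never appears, as it is reserved for the terminator.
def _demo_base255_offset(off4, off3, off2, off1, off0):
    """Decode a Roche index offset from its five bytes (hypothetical helper).

    >>> _demo_base255_offset(0, 0, 0, 1, 10)  # 1*255 + 10
    265
    >>> _demo_base255_offset(0, 1, 2, 3, 4)  # 255**3 + 2*255**2 + 3*255 + 4
    16712194
    """
    assert off4 == 0, "Roche tools do not use the fifth offset byte"
    return off0 + 255 * off1 + 255**2 * off2 + 255**3 * off3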


def _sff_read_seq_record(
handle, number_of_flows_per_read, flow_chars, key_sequence, trim=False
):
"""Parse the next read in the file, return data as a SeqRecord (PRIVATE)."""
# Now on to the reads...
# the read header format (fixed part):
# read_header_length H
# name_length H
# seq_len I
# clip_qual_left H
# clip_qual_right H
# clip_adapter_left H
# clip_adapter_right H
# [rest of read header depends on the name length etc]
read_header_fmt = ">2HI4H"
read_header_size = struct.calcsize(read_header_fmt)
read_flow_fmt = ">%iH" % number_of_flows_per_read
read_flow_size = struct.calcsize(read_flow_fmt)
(
read_header_length,
name_length,
seq_len,
clip_qual_left,
clip_qual_right,
clip_adapter_left,
clip_adapter_right,
) = struct.unpack(read_header_fmt, handle.read(read_header_size))
if clip_qual_left:
clip_qual_left -= 1 # python counting
if clip_adapter_left:
clip_adapter_left -= 1 # python counting
if read_header_length < 10 or read_header_length % 8 != 0:
raise ValueError(
"Malformed read header, says length is %i" % read_header_length
)
# now the name and any padding (remainder of header)
name = handle.read(name_length).decode()
padding = read_header_length - read_header_size - name_length
if handle.read(padding).count(_null) != padding:
import warnings
from Bio import BiopythonParserWarning
warnings.warn(
"Your SFF file is invalid, post name %i "
"byte padding region contained data" % padding,
BiopythonParserWarning,
)
# now the flowgram values, flowgram index, bases and qualities
# NOTE - assuming flowgram_format==1, which means struct type H
flow_values = handle.read(read_flow_size) # unpack later if needed
temp_fmt = ">%iB" % seq_len # used for flow index and quals
flow_index = handle.read(seq_len) # unpack later if needed
seq = handle.read(seq_len) # Leave as bytes for Seq object
quals = list(struct.unpack(temp_fmt, handle.read(seq_len)))
# now any padding...
padding = (read_flow_size + seq_len * 3) % 8
if padding:
padding = 8 - padding
if handle.read(padding).count(_null) != padding:
import warnings
from Bio import BiopythonParserWarning
warnings.warn(
"Your SFF file is invalid, post quality %i "
"byte padding region contained data" % padding,
BiopythonParserWarning,
)
# Follow Roche and apply most aggressive of qual and adapter clipping.
# Note Roche seems to ignore adapter clip fields when writing SFF,
# and uses just the quality clipping values for any clipping.
clip_left = max(clip_qual_left, clip_adapter_left)
# Right clipping of zero means no clipping
if clip_qual_right:
if clip_adapter_right:
clip_right = min(clip_qual_right, clip_adapter_right)
else:
# Typical case with Roche SFF files
clip_right = clip_qual_right
elif clip_adapter_right:
clip_right = clip_adapter_right
else:
clip_right = seq_len
# Now build a SeqRecord
if trim:
if clip_left >= clip_right:
# Raise an error?
import warnings
from Bio import BiopythonParserWarning
warnings.warn(
"Overlapping clip values in SFF record, trimmed to nothing",
BiopythonParserWarning,
)
seq = ""
quals = []
else:
seq = seq[clip_left:clip_right].upper()
quals = quals[clip_left:clip_right]
# Don't record the clipping values, flow etc, they make no sense now:
annotations = {}
else:
if clip_left >= clip_right:
import warnings
from Bio import BiopythonParserWarning
warnings.warn(
"Overlapping clip values in SFF record", BiopythonParserWarning
)
seq = seq.lower()
else:
# This use of mixed case mimics the Roche SFF tool's FASTA output
seq = (
seq[:clip_left].lower()
+ seq[clip_left:clip_right].upper()
+ seq[clip_right:].lower()
)
annotations = {
"flow_values": struct.unpack(read_flow_fmt, flow_values),
"flow_index": struct.unpack(temp_fmt, flow_index),
"flow_chars": flow_chars,
"flow_key": key_sequence,
"clip_qual_left": clip_qual_left,
"clip_qual_right": clip_qual_right,
"clip_adapter_left": clip_adapter_left,
"clip_adapter_right": clip_adapter_right,
}
if re.match(_valid_UAN_read_name, name):
annotations["time"] = _get_read_time(name)
annotations["region"] = _get_read_region(name)
annotations["coords"] = _get_read_xy(name)
annotations["molecule_type"] = "DNA"
record = SeqRecord(
Seq(seq), id=name, name=name, description="", annotations=annotations
)
# Dirty trick to speed up this line:
# record.letter_annotations["phred_quality"] = quals
dict.__setitem__(record._per_letter_annotations, "phred_quality", quals)
# Return the record and then continue...
    return record


def _string_as_base_36(string):
"""Interpret a string as a base-36 number as per 454 manual (PRIVATE)."""
total = 0
for c, power in zip(string[::-1], _powers_of_36):
# For reference: ord('0') = 48, ord('9') = 57
# For reference: ord('A') = 65, ord('Z') = 90
# For reference: ord('a') = 97, ord('z') = 122
if 48 <= ord(c) <= 57:
val = ord(c) - 22 # equivalent to: - ord('0') + 26
elif 65 <= ord(c) <= 90:
val = ord(c) - 65
elif 97 <= ord(c) <= 122:
val = ord(c) - 97
else:
# Invalid character
val = 0
total += val * power
    return total


def _get_read_xy(read_name):
"""Extract coordinates from last 5 characters of read name (PRIVATE)."""
number = _string_as_base_36(read_name[9:])
    return divmod(number, 4096)


def _get_read_time(read_name):
"""Extract time from first 6 characters of read name (PRIVATE)."""
time_list = []
remainder = _string_as_base_36(read_name[:6])
for denominator in _time_denominators:
this_term, remainder = divmod(remainder, denominator)
time_list.append(this_term)
time_list.append(remainder)
time_list[0] += 2000
    return time_list


def _get_read_region(read_name):
"""Extract region from read name (PRIVATE)."""
    return int(read_name[8])
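

# Illustrative sketch (not part of Biopython): the helpers above decode
# 454-style names where characters 0-5 encode the timestamp, character 8 the
# region, and characters 9-13 the X/Y coordinates (x = number // 4096,
# y = number % 4096). The base-36 alphabet puts letters before digits:
def _demo_base36_digit(c):
    """Decode one character of the 454 base-36 scheme (hypothetical helper).

    >>> [_demo_base36_digit(c) for c in "AZ09az"]
    [0, 25, 26, 35, 0, 25]
    >>> _demo_base36_digit("B") * 36 + _demo_base36_digit("9")  # "B9"
    71
    """
    if c.isdigit():
        return ord(c) - ord("0") + 26  # digits follow the letters
    return ord(c.upper()) - ord("A")  # letters map to 0-25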


def _sff_read_raw_record(handle, number_of_flows_per_read):
"""Extract the next read in the file as a raw (bytes) string (PRIVATE)."""
read_header_fmt = ">2HI"
read_header_size = struct.calcsize(read_header_fmt)
read_flow_fmt = ">%iH" % number_of_flows_per_read
read_flow_size = struct.calcsize(read_flow_fmt)
raw = handle.read(read_header_size)
read_header_length, name_length, seq_len = struct.unpack(read_header_fmt, raw)
if read_header_length < 10 or read_header_length % 8 != 0:
raise ValueError(
"Malformed read header, says length is %i" % read_header_length
)
# now the four clip values (4H = 8 bytes), and read name
raw += handle.read(8 + name_length)
# and any padding (remainder of header)
padding = read_header_length - read_header_size - 8 - name_length
pad = handle.read(padding)
if pad.count(_null) != padding:
import warnings
from Bio import BiopythonParserWarning
warnings.warn(
"Your SFF file is invalid, post name %i "
"byte padding region contained data" % padding,
BiopythonParserWarning,
)
raw += pad
# now the flowgram values, flowgram index, bases and qualities
raw += handle.read(read_flow_size + seq_len * 3)
padding = (read_flow_size + seq_len * 3) % 8
# now any padding...
if padding:
padding = 8 - padding
pad = handle.read(padding)
if pad.count(_null) != padding:
import warnings
from Bio import BiopythonParserWarning
warnings.warn(
"Your SFF file is invalid, post quality %i "
"byte padding region contained data" % padding,
BiopythonParserWarning,
)
raw += pad
# Return the raw bytes
    return raw


def _check_eof(handle, index_offset, index_length):
"""Check final padding is OK (8 byte alignment) and file ends (PRIVATE).
Will attempt to spot apparent SFF file concatenation and give an error.
Will not attempt to seek, only moves the handle forward.
"""
offset = handle.tell()
extra = b""
padding = 0
if index_offset and offset <= index_offset:
# Index block then end of file...
if offset < index_offset:
raise ValueError(
"Gap of %i bytes after final record end %i, "
"before %i where index starts?"
% (index_offset - offset, offset, index_offset)
)
# Doing read to jump the index rather than a seek
# in case this is a network handle or similar
handle.read(index_offset + index_length - offset)
offset = index_offset + index_length
if offset != handle.tell():
raise ValueError(
"Wanted %i, got %i, index is %i to %i"
% (offset, handle.tell(), index_offset, index_offset + index_length)
)
if offset % 8:
padding = 8 - (offset % 8)
extra = handle.read(padding)
if padding >= 4 and extra[-4:] == _sff:
# Seen this in one user supplied file, should have been
# four bytes of null padding but was actually .sff and
# the start of a new concatenated SFF file!
raise ValueError(
"Your SFF file is invalid, post index %i byte "
"null padding region ended '.sff' which could "
"be the start of a concatenated SFF file? "
"See offset %i" % (padding, offset)
)
if padding and not extra:
# TODO - Is this error harmless enough to just ignore?
import warnings
from Bio import BiopythonParserWarning
warnings.warn(
"Your SFF file is technically invalid as it is missing "
"a terminal %i byte null padding region." % padding,
BiopythonParserWarning,
)
return
if extra.count(_null) != padding:
import warnings
from Bio import BiopythonParserWarning
warnings.warn(
"Your SFF file is invalid, post index %i byte "
"null padding region contained data: %r" % (padding, extra),
BiopythonParserWarning,
)
offset = handle.tell()
if offset % 8 != 0:
raise ValueError("Wanted offset %i %% 8 = %i to be zero" % (offset, offset % 8))
# Should now be at the end of the file...
extra = handle.read(4)
if extra == _sff:
raise ValueError(
"Additional data at end of SFF file, "
"perhaps multiple SFF files concatenated? "
"See offset %i" % offset
)
elif extra:
raise ValueError("Additional data at end of SFF file, see offset %i" % offset) |
Iterate over the packets of a SnapGene file.
A SnapGene file is made of packets, each packet being a TLV-like
structure comprising:
- 1 single byte indicating the packet's type;
- 1 big-endian long integer (4 bytes) indicating the length of the
packet's data;
- the actual data. | def _iterate(handle):
"""Iterate over the packets of a SnapGene file.
A SnapGene file is made of packets, each packet being a TLV-like
structure comprising:
- 1 single byte indicating the packet's type;
- 1 big-endian long integer (4 bytes) indicating the length of the
packet's data;
- the actual data.
"""
while True:
packet_type = handle.read(1)
if len(packet_type) < 1: # No more packet
return
packet_type = unpack(">B", packet_type)[0]
length = handle.read(4)
if len(length) < 4:
raise ValueError("Unexpected end of packet")
length = unpack(">I", length)[0]
data = handle.read(length)
if len(data) < length:
raise ValueError("Unexpected end of packet")
        yield (packet_type, length, data)
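

# Illustrative sketch (not part of Biopython): a SnapGene packet is one type
# byte, a four-byte big-endian length, then the payload, so a stream built
# with struct.pack can be walked with the iterator above (the payload here
# is hypothetical).
def _demo_tlv_roundtrip():
    """Build and re-parse one TLV packet (hypothetical helper).

    >>> from io import BytesIO
    >>> from struct import pack
    >>> payload = b"\\x01ACGT"  # flag byte plus sequence, as in a DNA packet
    >>> stream = BytesIO(pack(">BI", 0, len(payload)) + payload)
    >>> list(_iterate(stream))
    [(0, 5, b'\\x01ACGT')]
    """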


def _parse_dna_packet(length, data, record):
"""Parse a DNA sequence packet.
A DNA sequence packet contains a single byte flag followed by the
sequence itself.
"""
if record.seq:
raise ValueError("The file contains more than one DNA packet")
flags, sequence = unpack(">B%ds" % (length - 1), data)
record.seq = Seq(sequence.decode("ASCII"))
record.annotations["molecule_type"] = "DNA"
if flags & 0x01:
record.annotations["topology"] = "circular"
else:
record.annotations["topology"] = "linear" |
Parse a 'Notes' packet.
This type of packet contains some metadata about the sequence. They
are stored as a XML string with a 'Notes' root node. | def _parse_notes_packet(length, data, record):
"""Parse a 'Notes' packet.
    This type of packet contains some metadata about the sequence. These
    are stored as an XML string with a 'Notes' root node.
"""
xml = parseString(data.decode("UTF-8"))
type = _get_child_value(xml, "Type")
if type == "Synthetic":
record.annotations["data_file_division"] = "SYN"
else:
record.annotations["data_file_division"] = "UNC"
date = _get_child_value(xml, "LastModified")
if date:
record.annotations["date"] = datetime.strptime(date, "%Y.%m.%d")
acc = _get_child_value(xml, "AccessionNumber")
if acc:
record.id = acc
comment = _get_child_value(xml, "Comments")
if comment:
record.name = comment.split(" ", 1)[0]
record.description = comment
if not acc:
        record.id = record.name


def _parse_cookie_packet(length, data, record):
"""Parse a SnapGene cookie packet.
Every SnapGene file starts with a packet of this type. It acts as
a magic cookie identifying the file as a SnapGene file.
"""
cookie, seq_type, exp_version, imp_version = unpack(">8sHHH", data)
if cookie.decode("ASCII") != "SnapGene":
raise ValueError("The file is not a valid SnapGene file") |
Parse a sequence features packet.
This packet stores sequence features (except primer binding sites,
which are in a dedicated Primers packet). The data is a XML string
starting with a 'Features' root node. | def _parse_features_packet(length, data, record):
"""Parse a sequence features packet.
This packet stores sequence features (except primer binding sites,
    which are in a dedicated Primers packet). The data is an XML string
starting with a 'Features' root node.
"""
xml = parseString(data.decode("UTF-8"))
for feature in xml.getElementsByTagName("Feature"):
quals = {}
type = _get_attribute_value(feature, "type", default="misc_feature")
strand = +1
directionality = int(
_get_attribute_value(feature, "directionality", default="1")
)
if directionality == 2:
strand = -1
location = None
subparts = []
n_parts = 0
for segment in feature.getElementsByTagName("Segment"):
if _get_attribute_value(segment, "type", "standard") == "gap":
continue
rng = _get_attribute_value(segment, "range")
n_parts += 1
next_location = _parse_location(rng, strand, record)
if not location:
location = next_location
elif strand == -1:
# Reverse segments order for reverse-strand features
location = next_location + location
else:
location = location + next_location
name = _get_attribute_value(segment, "name")
if name:
subparts.append([n_parts, name])
if len(subparts) > 0:
# Add a "parts" qualifiers to represent "named subfeatures"
if strand == -1:
# Reverse segment indexes and order for reverse-strand features
subparts = reversed([[n_parts - i + 1, name] for i, name in subparts])
quals["parts"] = [";".join(f"{i}:{name}" for i, name in subparts)]
if not location:
raise ValueError("Missing feature location")
for qualifier in feature.getElementsByTagName("Q"):
qname = _get_attribute_value(
qualifier, "name", error="Missing qualifier name"
)
qvalues = []
for value in qualifier.getElementsByTagName("V"):
if value.hasAttribute("text"):
qvalues.append(_decode(value.attributes["text"].value))
elif value.hasAttribute("predef"):
qvalues.append(_decode(value.attributes["predef"].value))
elif value.hasAttribute("int"):
qvalues.append(int(value.attributes["int"].value))
quals[qname] = qvalues
name = _get_attribute_value(feature, "name")
if name:
if "label" not in quals:
# No explicit label attribute, use the SnapGene name
quals["label"] = [name]
elif name not in quals["label"]:
# The SnapGene name is different from the label,
# add a specific attribute to represent it
quals["name"] = [name]
feature = SeqFeature(location, type=type, qualifiers=quals)
        record.features.append(feature)


def _parse_primers_packet(length, data, record):
"""Parse a Primers packet.
A Primers packet is similar to a Features packet but specifically
    stores primer binding features. The data is an XML string starting
with a 'Primers' root node.
"""
xml = parseString(data.decode("UTF-8"))
for primer in xml.getElementsByTagName("Primer"):
quals = {}
name = _get_attribute_value(primer, "name")
if name:
quals["label"] = [name]
locations = []
for site in primer.getElementsByTagName("BindingSite"):
rng = _get_attribute_value(
site, "location", error="Missing binding site location"
)
strand = int(_get_attribute_value(site, "boundStrand", default="0"))
if strand == 1:
strand = -1
else:
strand = +1
location = _parse_location(rng, strand, record, is_primer=True)
simplified = int(_get_attribute_value(site, "simplified", default="0")) == 1
if simplified and location in locations:
# Duplicate "simplified" binding site, ignore
continue
locations.append(location)
feature = SeqFeature(
location,
type="primer_bind",
qualifiers=quals,
)
        record.features.append(feature)


def SwissIterator(source):
"""Break up a Swiss-Prot/UniProt file into SeqRecord objects.
Argument source is a file-like object or a path to a file.
Every section from the ID line to the terminating // becomes
a single SeqRecord with associated annotation and features.
This parser is for the flat file "swiss" format as used by:
- Swiss-Prot aka SwissProt
- TrEMBL
- UniProtKB aka UniProt Knowledgebase
For consistency with BioPerl and EMBOSS we call this the "swiss"
format. See also the SeqIO support for "uniprot-xml" format.
Rather than calling it directly, you are expected to use this
parser via Bio.SeqIO.parse(..., format="swiss") instead.
"""
swiss_records = SwissProt.parse(source)
for swiss_record in swiss_records:
# Convert the SwissProt record to a SeqRecord
record = SeqRecord(
Seq(swiss_record.sequence),
id=swiss_record.accessions[0],
name=swiss_record.entry_name,
description=swiss_record.description,
features=swiss_record.features,
)
for cross_reference in swiss_record.cross_references:
if len(cross_reference) < 2:
continue
database, accession = cross_reference[:2]
dbxref = f"{database}:{accession}"
if dbxref not in record.dbxrefs:
record.dbxrefs.append(dbxref)
annotations = record.annotations
annotations["molecule_type"] = "protein"
annotations["accessions"] = swiss_record.accessions
if swiss_record.protein_existence:
annotations["protein_existence"] = swiss_record.protein_existence
if swiss_record.created:
date, version = swiss_record.created
annotations["date"] = date
annotations["sequence_version"] = version
if swiss_record.sequence_update:
date, version = swiss_record.sequence_update
annotations["date_last_sequence_update"] = date
annotations["sequence_version"] = version
if swiss_record.annotation_update:
date, version = swiss_record.annotation_update
annotations["date_last_annotation_update"] = date
annotations["entry_version"] = version
if swiss_record.gene_name:
annotations["gene_name"] = swiss_record.gene_name
annotations["organism"] = swiss_record.organism.rstrip(".")
annotations["taxonomy"] = swiss_record.organism_classification
annotations["ncbi_taxid"] = swiss_record.taxonomy_id
if swiss_record.host_organism:
annotations["organism_host"] = swiss_record.host_organism
if swiss_record.host_taxonomy_id:
annotations["host_ncbi_taxid"] = swiss_record.host_taxonomy_id
if swiss_record.comments:
annotations["comment"] = "\n".join(swiss_record.comments)
if swiss_record.references:
annotations["references"] = []
for reference in swiss_record.references:
feature = SeqFeature.Reference()
feature.comment = " ".join("%s=%s;" % k_v for k_v in reference.comments)
for key, value in reference.references:
if key == "PubMed":
feature.pubmed_id = value
elif key == "MEDLINE":
feature.medline_id = value
elif key == "DOI":
pass
elif key == "AGRICOLA":
pass
else:
raise ValueError(f"Unknown key {key} found in references")
feature.authors = reference.authors
feature.title = reference.title
feature.journal = reference.location
annotations["references"].append(feature)
if swiss_record.keywords:
record.annotations["keywords"] = swiss_record.keywords
        yield record


def as_tab(record):
"""Return record as tab separated (id(tab)seq) string."""
title = _clean(record.id)
seq = _get_seq_string(record) # Catches sequence being None
assert "\t" not in title
assert "\n" not in title
assert "\r" not in title
assert "\t" not in seq
assert "\n" not in seq
assert "\r" not in seq
return f"{title}\t{seq}\n" |
Iterate over UniProt XML as SeqRecord objects.
parses an XML entry at a time from any UniProt XML file
returns a SeqRecord for each iteration
This generator can be used in Bio.SeqIO
Argument source is a file-like object or a path to a file.
Optional argument alphabet should not be used anymore.
return_raw_comments = True --> comment fields are returned as complete XML to allow further processing
skip_parsing_errors = True --> if parsing errors are found, skip to next entry | def UniprotIterator(source, alphabet=None, return_raw_comments=False):
"""Iterate over UniProt XML as SeqRecord objects.
    Parses one XML entry at a time from any UniProt XML file and yields a
    SeqRecord for each entry. This generator can be used in Bio.SeqIO.
    Argument source is a file-like object or a path to a file.
    The optional alphabet argument is no longer supported and must be None.
    If return_raw_comments is True, comment fields are returned as complete
    XML to allow further processing.
"""
if alphabet is not None:
raise ValueError("The alphabet argument is no longer supported")
try:
for event, elem in ElementTree.iterparse(
source, events=("start", "start-ns", "end")
):
if event == "start-ns" and not (
elem[1].startswith("http://www.w3.org/") or NS == f"{{{elem[1]}}}"
):
raise ValueError(
f"SeqIO format 'uniprot-xml' only parses xml with namespace: {NS} but xml has namespace: {{{elem[1]}}}"
)
if event == "end" and elem.tag == NS + "entry":
yield Parser(elem, return_raw_comments=return_raw_comments).parse()
elem.clear()
except ElementTree.ParseError as exception:
if errors.messages[exception.code] == errors.XML_ERROR_NO_ELEMENTS:
assert exception.position == (1, 0) # line 1, column 0
raise ValueError("Empty file.") from None
else:
            raise


def _read(handle, length):
"""Read the specified number of bytes from the given handle."""
data = handle.read(length)
if len(data) < length:
raise ValueError("Cannot read %d bytes from handle" % length)
    return data


def _read_pstring(handle):
"""Read a Pascal string.
A Pascal string comprises a single byte giving the length of the string
followed by as many bytes.
"""
length = unpack(">B", _read(handle, 1))[0]
return unpack("%ds" % length, _read(handle, length))[0].decode("ASCII") |


def _read_overhang(handle):
"""Read an overhang specification.
    An overhang is represented in an XDNA file as:
- a Pascal string containing the text representation of the overhang
length, which also indicates the nature of the overhang:
- a length of zero means no overhang,
- a negative length means a 3' overhang,
- a positive length means a 5' overhang;
- the actual overhang sequence.
Examples:
- 0x01 0x30: no overhang ("0", as a P-string)
- 0x01 0x32 0x41 0x41: 5' AA overhang (P-string "2", then "AA")
- 0x02 0x2D 0x31 0x43: 3' C overhang (P-string "-1", then "C")
Returns a tuple (length, sequence).
"""
length = _read_pstring_as_integer(handle)
if length != 0:
overhang = _read(handle, abs(length))
return (length, overhang)
else:
        return (None, None)


def _parse_feature_description(desc, qualifiers):
"""Parse the description field of a Xdna feature.
The 'description' field of a feature sometimes contains several
GenBank-like qualifiers, separated by carriage returns (CR, 0x0D).
"""
# Split the field's value in CR-separated lines, skipping empty lines
for line in [x for x in desc.split("\x0d") if len(x) > 0]:
# Is it a qualifier="value" line?
m = match('^([^=]+)="([^"]+)"?$', line)
if m:
# Store the qualifier as provided
qual, value = m.groups()
qualifiers[qual] = [value]
elif '"' not in line: # Reject ill-formed qualifiers
# Store the entire line as a generic note qualifier
qualifiers["note"] = [line] |


def _read_feature(handle, record):
"""Read a single sequence feature."""
name = _read_pstring(handle)
desc = _read_pstring(handle)
type = _read_pstring(handle) or "misc_feature"
start = _read_pstring_as_integer(handle)
end = _read_pstring_as_integer(handle)
# Feature flags (4 bytes):
# byte 1 is the strand (0: reverse strand, 1: forward strand);
# byte 2 tells whether to display the feature;
# byte 4 tells whether to draw an arrow when displaying the feature;
# meaning of byte 3 is unknown.
(forward, display, arrow) = unpack(">BBxB", _read(handle, 4))
if forward:
strand = 1
else:
strand = -1
start, end = end, start
# The last field is a Pascal string usually containing a
# comma-separated triplet of numbers ranging from 0 to 255.
# I suspect this represents the RGB color to use when displaying
# the feature. Skip it as we have no need for it.
_read_pstring(handle)
# Assemble the feature
# Shift start by -1 as XDNA feature coordinates are 1-based
# while Biopython uses 0-based counting.
location = SimpleLocation(start - 1, end, strand=strand)
qualifiers = {}
if name:
qualifiers["label"] = [name]
_parse_feature_description(desc, qualifiers)
feature = SeqFeature(location, type=type, qualifiers=qualifiers)
    record.features.append(feature)


def write(
sequences: Union[Iterable[SeqRecord], SeqRecord],
handle: _TextIOSource,
format: str,
) -> int:
"""Write complete set of sequences to a file.
Arguments:
- sequences - A list (or iterator) of SeqRecord objects, or a single
SeqRecord.
- handle - File handle object to write to, or filename as string.
- format - lower case string describing the file format to write.
Note if providing a file handle, your code should close the handle
after calling this function (to ensure the data gets flushed to disk).
Returns the number of records written (as an integer).
"""
from Bio import AlignIO
# Try and give helpful error messages:
if not isinstance(format, str):
raise TypeError("Need a string for the file format (lower case)")
if not format:
raise ValueError("Format required (lower case string)")
if not format.islower():
raise ValueError(f"Format string '{format}' should be lower case")
if isinstance(handle, SeqRecord):
raise TypeError("Check arguments, handle should NOT be a SeqRecord")
if isinstance(handle, list):
# e.g. list of SeqRecord objects
raise TypeError("Check arguments, handle should NOT be a list")
if isinstance(sequences, SeqRecord):
# This raised an exception in older versions of Biopython
sequences = [sequences]
# Map the file format to a writer function/class
format_function = _FormatToString.get(format)
if format_function is not None:
count = 0
with as_handle(handle, "w") as fp:
for record in sequences:
fp.write(format_function(record))
count += 1
return count
writer_class = _FormatToWriter.get(format)
if writer_class is not None:
count = writer_class(handle).write_file(sequences)
if not isinstance(count, int):
raise RuntimeError(
"Internal error - the underlying %s writer "
"should have returned the record count, not %r" % (format, count)
)
return count
if format in AlignIO._FormatToWriter:
# Try and turn all the records into a single alignment,
# and write that using Bio.AlignIO
# Using a lazy import as most users won't need this loaded:
from Bio.Align import MultipleSeqAlignment
alignment = MultipleSeqAlignment(sequences)
alignment_count = AlignIO.write([alignment], handle, format)
if alignment_count != 1:
raise RuntimeError(
"Internal error - the underlying writer "
"should have returned 1, not %r" % alignment_count
)
count = len(alignment)
return count
if format in _FormatToIterator or format in AlignIO._FormatToIterator:
raise ValueError(f"Reading format '{format}' is supported, but not writing")
raise ValueError(f"Unknown format '{format}'") |


def parse(handle, format, alphabet=None):
r"""Turn a sequence file into an iterator returning SeqRecords.
Arguments:
- handle - handle to the file, or the filename as a string
(note older versions of Biopython only took a handle).
- format - lower case string describing the file format.
- alphabet - no longer used, should be None.
Typical usage, opening a file to read in, and looping over the record(s):
>>> from Bio import SeqIO
>>> filename = "Fasta/sweetpea.nu"
>>> for record in SeqIO.parse(filename, "fasta"):
... print("ID %s" % record.id)
... print("Sequence length %i" % len(record))
ID gi|3176602|gb|U78617.1|LOU78617
Sequence length 309
For lazy-loading file formats such as twobit, for which the file contents
is read on demand only, ensure that the file remains open while extracting
sequence data.
If you have a string 'data' containing the file contents, you must
first turn this into a handle in order to parse it:
>>> data = ">Alpha\nACCGGATGTA\n>Beta\nAGGCTCGGTTA\n"
>>> from Bio import SeqIO
>>> from io import StringIO
>>> for record in SeqIO.parse(StringIO(data), "fasta"):
... print("%s %s" % (record.id, record.seq))
Alpha ACCGGATGTA
Beta AGGCTCGGTTA
Use the Bio.SeqIO.read(...) function when you expect a single record
only.
"""
# NOTE - The above docstring has some raw \n characters needed
# for the StringIO example, hence the whole docstring is in raw
# string mode (see the leading r before the opening quote).
from Bio import AlignIO
# Try and give helpful error messages:
if not isinstance(format, str):
raise TypeError("Need a string for the file format (lower case)")
if not format:
raise ValueError("Format required (lower case string)")
if not format.islower():
raise ValueError(f"Format string '{format}' should be lower case")
if alphabet is not None:
raise ValueError("The alphabet argument is no longer supported")
iterator_generator = _FormatToIterator.get(format)
if iterator_generator:
return iterator_generator(handle)
if format in AlignIO._FormatToIterator:
# Use Bio.AlignIO to read in the alignments
return (r for alignment in AlignIO.parse(handle, format) for r in alignment)
raise ValueError(f"Unknown format '{format}'") |
Turn a sequence file into a single SeqRecord.
Arguments:
- handle - handle to the file, or the filename as a string
(note older versions of Biopython only took a handle).
- format - string describing the file format.
- alphabet - no longer used, should be None.
This function is for use parsing sequence files containing
exactly one record. For example, reading a GenBank file:
>>> from Bio import SeqIO
>>> record = SeqIO.read("GenBank/arab1.gb", "genbank")
>>> print("ID %s" % record.id)
ID AC007323.5
>>> print("Sequence length %i" % len(record))
Sequence length 86436
If the handle contains no records, or more than one record,
an exception is raised. For example:
>>> from Bio import SeqIO
>>> record = SeqIO.read("GenBank/cor6_6.gb", "genbank")
Traceback (most recent call last):
...
ValueError: More than one record found in handle
If however you want the first record from a file containing
multiple records this function would raise an exception (as
shown in the example above). Instead use:
>>> from Bio import SeqIO
>>> record = next(SeqIO.parse("GenBank/cor6_6.gb", "genbank"))
>>> print("First record's ID %s" % record.id)
First record's ID X55053.1
Use the Bio.SeqIO.parse(handle, format) function if you want
to read multiple records from the handle. | def read(handle, format, alphabet=None):
"""Turn a sequence file into a single SeqRecord.
Arguments:
- handle - handle to the file, or the filename as a string
(note older versions of Biopython only took a handle).
- format - string describing the file format.
- alphabet - no longer used, should be None.
This function is for use parsing sequence files containing
exactly one record. For example, reading a GenBank file:
>>> from Bio import SeqIO
>>> record = SeqIO.read("GenBank/arab1.gb", "genbank")
>>> print("ID %s" % record.id)
ID AC007323.5
>>> print("Sequence length %i" % len(record))
Sequence length 86436
If the handle contains no records, or more than one record,
an exception is raised. For example:
>>> from Bio import SeqIO
>>> record = SeqIO.read("GenBank/cor6_6.gb", "genbank")
Traceback (most recent call last):
...
ValueError: More than one record found in handle
If however you want the first record from a file containing
multiple records this function would raise an exception (as
shown in the example above). Instead use:
>>> from Bio import SeqIO
>>> record = next(SeqIO.parse("GenBank/cor6_6.gb", "genbank"))
>>> print("First record's ID %s" % record.id)
First record's ID X55053.1
Use the Bio.SeqIO.parse(handle, format) function if you want
to read multiple records from the handle.
"""
iterator = parse(handle, format, alphabet)
try:
record = next(iterator)
except StopIteration:
raise ValueError("No records found in handle") from None
try:
next(iterator)
raise ValueError("More than one record found in handle")
except StopIteration:
pass
return record |
Turn a sequence iterator or list into a dictionary.
Arguments:
- sequences - An iterator that returns SeqRecord objects,
or simply a list of SeqRecord objects.
- key_function - Optional callback function which when given a
SeqRecord should return a unique key for the dictionary.
e.g. key_function = lambda rec : rec.name
or, key_function = lambda rec : rec.description.split()[0]
If key_function is omitted then record.id is used, on the assumption
that the record objects returned are SeqRecords with a unique id.
If there are duplicate keys, an error is raised.
Since Python 3.7, the default dict class maintains key order, meaning
this dictionary will reflect the order of records given to it. For
CPython and PyPy, this was already implemented for Python 3.6, so
effectively you can always assume the record order is preserved.
Example usage, defaulting to using the record.id as key:
>>> from Bio import SeqIO
>>> filename = "GenBank/cor6_6.gb"
>>> format = "genbank"
>>> id_dict = SeqIO.to_dict(SeqIO.parse(filename, format))
>>> print(list(id_dict))
['X55053.1', 'X62281.1', 'M81224.1', 'AJ237582.1', 'L31939.1', 'AF297471.1']
>>> print(id_dict["L31939.1"].description)
Brassica rapa (clone bif72) kin mRNA, complete cds
A more complex example, using the key_function argument in order to
use a sequence checksum as the dictionary key:
>>> from Bio import SeqIO
>>> from Bio.SeqUtils.CheckSum import seguid
>>> filename = "GenBank/cor6_6.gb"
>>> format = "genbank"
>>> seguid_dict = SeqIO.to_dict(SeqIO.parse(filename, format),
... key_function = lambda rec : seguid(rec.seq))
>>> for key, record in sorted(seguid_dict.items()):
... print("%s %s" % (key, record.id))
/wQvmrl87QWcm9llO4/efg23Vgg AJ237582.1
BUg6YxXSKWEcFFH0L08JzaLGhQs L31939.1
SabZaA4V2eLE9/2Fm5FnyYy07J4 X55053.1
TtWsXo45S3ZclIBy4X/WJc39+CY M81224.1
l7gjJFE6W/S1jJn5+1ASrUKW/FA X62281.1
uVEYeAQSV5EDQOnFoeMmVea+Oow AF297471.1
This approach is not suitable for very large sets of sequences, as all
the SeqRecord objects are held in memory. Instead, consider using the
Bio.SeqIO.index() function (if it supports your particular file format).
This dictionary will reflect the order of records given to it. | def to_dict(sequences, key_function=None):
"""Turn a sequence iterator or list into a dictionary.
Arguments:
- sequences - An iterator that returns SeqRecord objects,
or simply a list of SeqRecord objects.
- key_function - Optional callback function which when given a
SeqRecord should return a unique key for the dictionary.
e.g. key_function = lambda rec : rec.name
or, key_function = lambda rec : rec.description.split()[0]
If key_function is omitted then record.id is used, on the assumption
that the record objects returned are SeqRecords with a unique id.
If there are duplicate keys, an error is raised.
Since Python 3.7, the default dict class maintains key order, meaning
this dictionary will reflect the order of records given to it. For
CPython and PyPy, this was already implemented for Python 3.6, so
effectively you can always assume the record order is preserved.
Example usage, defaulting to using the record.id as key:
>>> from Bio import SeqIO
>>> filename = "GenBank/cor6_6.gb"
>>> format = "genbank"
>>> id_dict = SeqIO.to_dict(SeqIO.parse(filename, format))
>>> print(list(id_dict))
['X55053.1', 'X62281.1', 'M81224.1', 'AJ237582.1', 'L31939.1', 'AF297471.1']
>>> print(id_dict["L31939.1"].description)
Brassica rapa (clone bif72) kin mRNA, complete cds
A more complex example, using the key_function argument in order to
use a sequence checksum as the dictionary key:
>>> from Bio import SeqIO
>>> from Bio.SeqUtils.CheckSum import seguid
>>> filename = "GenBank/cor6_6.gb"
>>> format = "genbank"
>>> seguid_dict = SeqIO.to_dict(SeqIO.parse(filename, format),
... key_function = lambda rec : seguid(rec.seq))
>>> for key, record in sorted(seguid_dict.items()):
... print("%s %s" % (key, record.id))
/wQvmrl87QWcm9llO4/efg23Vgg AJ237582.1
BUg6YxXSKWEcFFH0L08JzaLGhQs L31939.1
SabZaA4V2eLE9/2Fm5FnyYy07J4 X55053.1
TtWsXo45S3ZclIBy4X/WJc39+CY M81224.1
l7gjJFE6W/S1jJn5+1ASrUKW/FA X62281.1
uVEYeAQSV5EDQOnFoeMmVea+Oow AF297471.1
This approach is not suitable for very large sets of sequences, as all
the SeqRecord objects are held in memory. Instead, consider using the
Bio.SeqIO.index() function (if it supports your particular file format).
This dictionary will reflect the order of records given to it.
"""
# This is to avoid a lambda function:
def _default_key_function(rec):
return rec.id
if key_function is None:
key_function = _default_key_function
d = {}
for record in sequences:
key = key_function(record)
if key in d:
raise ValueError(f"Duplicate key '{key}'")
d[key] = record
return d |
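# A hedged variant sketch (not part of Bio.SeqIO): if duplicate ids are
# expected and keeping the first record is preferable to raising an error,
# build the dictionary manually with setdefault().
def first_record_wins(records):
    """Collect SeqRecords by id, keeping the first record for each id."""
    d = {}
    for record in records:
        d.setdefault(record.id, record)
    return d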
Indexes a sequence file and returns a dictionary like object.
Arguments:
- filename - string giving name of file to be indexed
- format - lower case string describing the file format
- alphabet - no longer used, leave as None
- key_function - Optional callback function which when given a
SeqRecord identifier string should return a unique key for the
dictionary.
This indexing function will return a dictionary like object, giving the
SeqRecord objects as values.
As of Biopython 1.69, this will preserve the ordering of the records in
the file when iterating over the entries.
>>> from Bio import SeqIO
>>> records = SeqIO.index("Quality/example.fastq", "fastq")
>>> len(records)
3
>>> list(records) # make a list of the keys
['EAS54_6_R1_2_1_413_324', 'EAS54_6_R1_2_1_540_792', 'EAS54_6_R1_2_1_443_348']
>>> print(records["EAS54_6_R1_2_1_540_792"].format("fasta"))
>EAS54_6_R1_2_1_540_792
TTGGCAGGCCAAGGCCGATGGATCA
<BLANKLINE>
>>> "EAS54_6_R1_2_1_540_792" in records
True
>>> print(records.get("Missing", None))
None
>>> records.close()
If the file is BGZF compressed, this is detected automatically. Ordinary
GZIP files are not supported:
>>> from Bio import SeqIO
>>> records = SeqIO.index("Quality/example.fastq.bgz", "fastq")
>>> len(records)
3
>>> print(records["EAS54_6_R1_2_1_540_792"].seq)
TTGGCAGGCCAAGGCCGATGGATCA
>>> records.close()
When you call the index function, it will scan through the file, noting
the location of each record. When you access a particular record via the
dictionary methods, the code will jump to the appropriate part of the
file and then parse that section into a SeqRecord.
Note that not all the input formats supported by Bio.SeqIO can be used
with this index function. It is designed to work only with sequential
file formats (e.g. "fasta", "gb", "fastq") and is not suitable for any
interlaced file format (e.g. alignment formats such as "clustal").
For small files, it may be more efficient to use an in memory Python
dictionary, e.g.
>>> from Bio import SeqIO
>>> records = SeqIO.to_dict(SeqIO.parse("Quality/example.fastq", "fastq"))
>>> len(records)
3
>>> list(records) # make a list of the keys
['EAS54_6_R1_2_1_413_324', 'EAS54_6_R1_2_1_540_792', 'EAS54_6_R1_2_1_443_348']
>>> print(records["EAS54_6_R1_2_1_540_792"].format("fasta"))
>EAS54_6_R1_2_1_540_792
TTGGCAGGCCAAGGCCGATGGATCA
<BLANKLINE>
As with the to_dict() function, by default the id string of each record
is used as the key. You can specify a callback function to transform
this (the record identifier string) into your preferred key. For example:
>>> from Bio import SeqIO
>>> def make_tuple(identifier):
... parts = identifier.split("_")
... return int(parts[-2]), int(parts[-1])
>>> records = SeqIO.index("Quality/example.fastq", "fastq",
... key_function=make_tuple)
>>> len(records)
3
>>> list(records) # make a list of the keys
[(413, 324), (540, 792), (443, 348)]
>>> print(records[(540, 792)].format("fasta"))
>EAS54_6_R1_2_1_540_792
TTGGCAGGCCAAGGCCGATGGATCA
<BLANKLINE>
>>> (540, 792) in records
True
>>> "EAS54_6_R1_2_1_540_792" in records
False
>>> print(records.get("Missing", None))
None
>>> records.close()
Another common use case would be indexing an NCBI style FASTA file,
where you might want to extract the GI number from the FASTA identifier
to use as the dictionary key.
Notice that unlike the to_dict() function, here the key_function does
not get given the full SeqRecord to use to generate the key. Doing so
would impose a severe performance penalty as it would require the file
to be completely parsed while building the index. Right now this is
usually avoided.
See Also: Bio.SeqIO.index_db() and Bio.SeqIO.to_dict() | def index(filename, format, alphabet=None, key_function=None):
"""Indexes a sequence file and returns a dictionary like object.
Arguments:
- filename - string giving name of file to be indexed
- format - lower case string describing the file format
- alphabet - no longer used, leave as None
- key_function - Optional callback function which when given a
SeqRecord identifier string should return a unique key for the
dictionary.
This indexing function will return a dictionary like object, giving the
SeqRecord objects as values.
As of Biopython 1.69, this will preserve the ordering of the records in
the file when iterating over the entries.
>>> from Bio import SeqIO
>>> records = SeqIO.index("Quality/example.fastq", "fastq")
>>> len(records)
3
>>> list(records) # make a list of the keys
['EAS54_6_R1_2_1_413_324', 'EAS54_6_R1_2_1_540_792', 'EAS54_6_R1_2_1_443_348']
>>> print(records["EAS54_6_R1_2_1_540_792"].format("fasta"))
>EAS54_6_R1_2_1_540_792
TTGGCAGGCCAAGGCCGATGGATCA
<BLANKLINE>
>>> "EAS54_6_R1_2_1_540_792" in records
True
>>> print(records.get("Missing", None))
None
>>> records.close()
If the file is BGZF compressed, this is detected automatically. Ordinary
GZIP files are not supported:
>>> from Bio import SeqIO
>>> records = SeqIO.index("Quality/example.fastq.bgz", "fastq")
>>> len(records)
3
>>> print(records["EAS54_6_R1_2_1_540_792"].seq)
TTGGCAGGCCAAGGCCGATGGATCA
>>> records.close()
When you call the index function, it will scan through the file, noting
the location of each record. When you access a particular record via the
dictionary methods, the code will jump to the appropriate part of the
file and then parse that section into a SeqRecord.
Note that not all the input formats supported by Bio.SeqIO can be used
with this index function. It is designed to work only with sequential
file formats (e.g. "fasta", "gb", "fastq") and is not suitable for any
interlaced file format (e.g. alignment formats such as "clustal").
For small files, it may be more efficient to use an in memory Python
dictionary, e.g.
>>> from Bio import SeqIO
>>> records = SeqIO.to_dict(SeqIO.parse("Quality/example.fastq", "fastq"))
>>> len(records)
3
>>> list(records) # make a list of the keys
['EAS54_6_R1_2_1_413_324', 'EAS54_6_R1_2_1_540_792', 'EAS54_6_R1_2_1_443_348']
>>> print(records["EAS54_6_R1_2_1_540_792"].format("fasta"))
>EAS54_6_R1_2_1_540_792
TTGGCAGGCCAAGGCCGATGGATCA
<BLANKLINE>
As with the to_dict() function, by default the id string of each record
is used as the key. You can specify a callback function to transform
this (the record identifier string) into your preferred key. For example:
>>> from Bio import SeqIO
>>> def make_tuple(identifier):
... parts = identifier.split("_")
... return int(parts[-2]), int(parts[-1])
>>> records = SeqIO.index("Quality/example.fastq", "fastq",
... key_function=make_tuple)
>>> len(records)
3
>>> list(records) # make a list of the keys
[(413, 324), (540, 792), (443, 348)]
>>> print(records[(540, 792)].format("fasta"))
>EAS54_6_R1_2_1_540_792
TTGGCAGGCCAAGGCCGATGGATCA
<BLANKLINE>
>>> (540, 792) in records
True
>>> "EAS54_6_R1_2_1_540_792" in records
False
>>> print(records.get("Missing", None))
None
>>> records.close()
Another common use case would be indexing an NCBI style FASTA file,
where you might want to extract the GI number from the FASTA identifier
to use as the dictionary key.
Notice that unlike the to_dict() function, here the key_function does
not get given the full SeqRecord to use to generate the key. Doing so
would impose a severe performance penalty as it would require the file
to be completely parsed while building the index. Right now this is
usually avoided.
See Also: Bio.SeqIO.index_db() and Bio.SeqIO.to_dict()
"""
# Try and give helpful error messages:
if not isinstance(format, str):
raise TypeError("Need a string for the file format (lower case)")
if not format:
raise ValueError("Format required (lower case string)")
if not format.islower():
raise ValueError(f"Format string '{format}' should be lower case")
if alphabet is not None:
raise ValueError("The alphabet argument is no longer supported")
# Map the file format to a sequence iterator:
from ._index import _FormatToRandomAccess # Lazy import
from Bio.File import _IndexedSeqFileDict
try:
proxy_class = _FormatToRandomAccess[format]
except KeyError:
raise ValueError(f"Unsupported format {format!r}") from None
repr = "SeqIO.index(%r, %r, alphabet=%r, key_function=%r)" % (
filename,
format,
alphabet,
key_function,
)
try:
random_access_proxy = proxy_class(filename, format)
except TypeError:
raise TypeError(
"Need a string or path-like object for the filename (not a handle)"
) from None
return _IndexedSeqFileDict(random_access_proxy, key_function, repr, "SeqRecord") |
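# A short sketch combining index() with write(): extract selected records
# from a larger file by key without parsing everything (the input file and
# keys come from the doctests above; "subset.fastq" is a hypothetical
# output filename).
from Bio import SeqIO
records = SeqIO.index("Quality/example.fastq", "fastq")
wanted = ["EAS54_6_R1_2_1_540_792", "EAS54_6_R1_2_1_443_348"]
SeqIO.write((records[key] for key in wanted), "subset.fastq", "fastq")
records.close()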
Index several sequence files and return a dictionary like object.
The index is stored in an SQLite database rather than in memory (as in the
Bio.SeqIO.index(...) function).
Arguments:
- index_filename - Where to store the SQLite index
- filenames - list of strings specifying file(s) to be indexed, or when
indexing a single file this can be given as a string.
(optional if reloading an existing index, but must match)
- format - lower case string describing the file format
(optional if reloading an existing index, but must match)
- alphabet - no longer used, leave as None.
- key_function - Optional callback function which when given a
SeqRecord identifier string should return a unique
key for the dictionary.
This indexing function will return a dictionary like object, giving the
SeqRecord objects as values:
>>> from Bio import SeqIO
>>> files = ["GenBank/NC_000932.faa", "GenBank/NC_005816.faa"]
>>> def get_gi(name):
... parts = name.split("|")
... i = parts.index("gi")
... assert i != -1
... return parts[i+1]
>>> idx_name = ":memory:" #use an in memory SQLite DB for this test
>>> records = SeqIO.index_db(idx_name, files, "fasta", key_function=get_gi)
>>> len(records)
95
>>> records["7525076"].description
'gi|7525076|ref|NP_051101.1| Ycf2 [Arabidopsis thaliana]'
>>> records["45478717"].description
'gi|45478717|ref|NP_995572.1| pesticin [Yersinia pestis biovar Microtus str. 91001]'
>>> records.close()
In this example the two files contain 85 and 10 records respectively.
BGZF compressed files are supported, and detected automatically. Ordinary
GZIP compressed files are not supported.
See Also: Bio.SeqIO.index() and Bio.SeqIO.to_dict(), and the Python module
glob which is useful for building lists of files. | def index_db(
index_filename, filenames=None, format=None, alphabet=None, key_function=None
):
"""Index several sequence files and return a dictionary like object.
The index is stored in an SQLite database rather than in memory (as in the
Bio.SeqIO.index(...) function).
Arguments:
- index_filename - Where to store the SQLite index
- filenames - list of strings specifying file(s) to be indexed, or when
indexing a single file this can be given as a string.
(optional if reloading an existing index, but must match)
- format - lower case string describing the file format
(optional if reloading an existing index, but must match)
- alphabet - no longer used, leave as None.
- key_function - Optional callback function which when given a
SeqRecord identifier string should return a unique
key for the dictionary.
This indexing function will return a dictionary like object, giving the
SeqRecord objects as values:
>>> from Bio import SeqIO
>>> files = ["GenBank/NC_000932.faa", "GenBank/NC_005816.faa"]
>>> def get_gi(name):
... parts = name.split("|")
... i = parts.index("gi")
... assert i != -1
... return parts[i+1]
>>> idx_name = ":memory:" #use an in memory SQLite DB for this test
>>> records = SeqIO.index_db(idx_name, files, "fasta", key_function=get_gi)
>>> len(records)
95
>>> records["7525076"].description
'gi|7525076|ref|NP_051101.1| Ycf2 [Arabidopsis thaliana]'
>>> records["45478717"].description
'gi|45478717|ref|NP_995572.1| pesticin [Yersinia pestis biovar Microtus str. 91001]'
>>> records.close()
In this example the two files contain 85 and 10 records respectively.
BGZF compressed files are supported, and detected automatically. Ordinary
GZIP compressed files are not supported.
See Also: Bio.SeqIO.index() and Bio.SeqIO.to_dict(), and the Python module
glob which is useful for building lists of files.
"""
from os import fspath
def is_pathlike(obj):
"""Test if the given object can be accepted as a path."""
try:
fspath(obj)
return True
except TypeError:
return False
# Try and give helpful error messages:
if not is_pathlike(index_filename):
raise TypeError("Need a string or path-like object for filename (not a handle)")
if is_pathlike(filenames):
# Make the API a little more friendly, and more similar
# to Bio.SeqIO.index(...) for indexing just one file.
filenames = [filenames]
if filenames is not None and not isinstance(filenames, list):
raise TypeError(
"Need a list of filenames (as strings or path-like "
"objects), or one filename"
)
if format is not None and not isinstance(format, str):
raise TypeError("Need a string for the file format (lower case)")
if format and not format.islower():
raise ValueError(f"Format string '{format}' should be lower case")
if alphabet is not None:
raise ValueError("The alphabet argument is no longer supported")
# Map the file format to a sequence iterator:
from ._index import _FormatToRandomAccess # Lazy import
from Bio.File import _SQLiteManySeqFilesDict
repr = "SeqIO.index_db(%r, filenames=%r, format=%r, key_function=%r)" % (
index_filename,
filenames,
format,
key_function,
)
def proxy_factory(format, filename=None):
"""Given a filename returns proxy object, else boolean if format OK."""
if filename:
return _FormatToRandomAccess[format](filename, format)
else:
return format in _FormatToRandomAccess
return _SQLiteManySeqFilesDict(
index_filename, filenames, proxy_factory, format, key_function, repr
) |
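# A hedged sketch of reloading: an existing SQLite index can be reopened
# later without repeating the filenames or format, per the argument notes
# above ("gi_index.idx" is a hypothetical index built in an earlier run).
from Bio import SeqIO
records = SeqIO.index_db("gi_index.idx")
print(len(records))
records.close()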
Convert between two sequence file formats, return number of records.
Arguments:
- in_file - an input handle or filename
- in_format - input file format, lower case string
- out_file - an output handle or filename
- out_format - output file format, lower case string
- molecule_type - optional molecule type to apply, string containing
"DNA", "RNA" or "protein".
**NOTE** - If you provide an output filename, it will be opened which will
overwrite any existing file without warning.
The idea here is that while doing this will work::
from Bio import SeqIO
records = SeqIO.parse(in_handle, in_format)
count = SeqIO.write(records, out_handle, out_format)
it is shorter to write::
from Bio import SeqIO
count = SeqIO.convert(in_handle, in_format, out_handle, out_format)
Also, Bio.SeqIO.convert is faster for some conversions as it can make some
optimisations.
For example, going from a filename to a handle:
>>> from Bio import SeqIO
>>> from io import StringIO
>>> handle = StringIO("")
>>> SeqIO.convert("Quality/example.fastq", "fastq", handle, "fasta")
3
>>> print(handle.getvalue())
>EAS54_6_R1_2_1_413_324
CCCTTCTTGTCTTCAGCGTTTCTCC
>EAS54_6_R1_2_1_540_792
TTGGCAGGCCAAGGCCGATGGATCA
>EAS54_6_R1_2_1_443_348
GTTGCTTCTGGCGTGGGTGGGGGGG
<BLANKLINE>
Note some formats like SeqXML require you to specify the molecule type
when it cannot be determined by the parser:
>>> from Bio import SeqIO
>>> from io import BytesIO
>>> handle = BytesIO()
>>> SeqIO.convert("Quality/example.fastq", "fastq", handle, "seqxml", "DNA")
3 | def convert(in_file, in_format, out_file, out_format, molecule_type=None):
"""Convert between two sequence file formats, return number of records.
Arguments:
- in_file - an input handle or filename
- in_format - input file format, lower case string
- out_file - an output handle or filename
- out_format - output file format, lower case string
- molecule_type - optional molecule type to apply, string containing
"DNA", "RNA" or "protein".
**NOTE** - If you provide an output filename, it will be opened which will
overwrite any existing file without warning.
The idea here is that while doing this will work::
from Bio import SeqIO
records = SeqIO.parse(in_handle, in_format)
count = SeqIO.write(records, out_handle, out_format)
it is shorter to write::
from Bio import SeqIO
count = SeqIO.convert(in_handle, in_format, out_handle, out_format)
Also, Bio.SeqIO.convert is faster for some conversions as it can make some
optimisations.
For example, going from a filename to a handle:
>>> from Bio import SeqIO
>>> from io import StringIO
>>> handle = StringIO("")
>>> SeqIO.convert("Quality/example.fastq", "fastq", handle, "fasta")
3
>>> print(handle.getvalue())
>EAS54_6_R1_2_1_413_324
CCCTTCTTGTCTTCAGCGTTTCTCC
>EAS54_6_R1_2_1_540_792
TTGGCAGGCCAAGGCCGATGGATCA
>EAS54_6_R1_2_1_443_348
GTTGCTTCTGGCGTGGGTGGGGGGG
<BLANKLINE>
Note some formats like SeqXML require you to specify the molecule type
when it cannot be determined by the parser:
>>> from Bio import SeqIO
>>> from io import BytesIO
>>> handle = BytesIO()
>>> SeqIO.convert("Quality/example.fastq", "fastq", handle, "seqxml", "DNA")
3
"""
if molecule_type:
if not isinstance(molecule_type, str):
raise TypeError(f"Molecule type should be a string, not {molecule_type!r}")
elif (
"DNA" in molecule_type
or "RNA" in molecule_type
or "protein" in molecule_type
):
pass
else:
raise ValueError(f"Unexpected molecule type, {molecule_type!r}")
f = _converter.get((in_format, out_format))
if f:
count = f(in_file, out_file)
else:
records = parse(in_file, in_format)
if molecule_type:
# Edit the records on the fly to set molecule type
def over_ride(record):
"""Over-ride molecule in-place."""
record.annotations["molecule_type"] = molecule_type
return record
records = (over_ride(_) for _ in records)
count = write(records, out_file, out_format)
return count |
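# The fallback branch above is equivalent to the explicit parse/write pair
# shown in the docstring; convert() merely adds optimised converters where
# available ("example.fasta" is a hypothetical output filename).
from Bio import SeqIO
records = SeqIO.parse("Quality/example.fastq", "fastq")
count = SeqIO.write(records, "example.fasta", "fasta")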
Iterate over an ACE file contig by contig.
Argument source is a file-like object or a path to a file.
This function returns an iterator that allows you to iterate
over the ACE file record by record::
records = parse(source)
for record in records:
# do something with the record
where each record is a Contig object. | def parse(source):
"""Iterate of ACE file contig by contig.
Argument source is a file-like object or a path to a file.
This function returns an iterator that allows you to iterate
over the ACE file record by record::
records = parse(source)
for record in records:
# do something with the record
where each record is a Contig object.
"""
try:
handle = open(source)
except TypeError:
handle = source
if handle.read(0) != "":
raise ValueError("Ace files must be opened in text mode.") from None
try:
line = ""
while True:
# at beginning, skip the AS and look for first CO command
try:
while True:
if line.startswith("CO"):
break
line = next(handle)
except StopIteration:
return
record = Contig(line)
for line in handle:
line = line.strip()
if not line:
break
record.sequence += line
for line in handle:
if line.strip():
break
if not line.startswith("BQ"):
raise ValueError("Failed to find BQ line")
for line in handle:
if not line.strip():
break
record.quality.extend(int(x) for x in line.split())
for line in handle:
if line.strip():
break
while True:
if not line.startswith("AF "):
break
record.af.append(af(line))
try:
line = next(handle)
except StopIteration:
raise ValueError("Unexpected end of AF block") from None
while True:
if line.strip():
break
try:
line = next(handle)
except StopIteration:
raise ValueError("Unexpected end of file") from None
while True:
if not line.startswith("BS "):
break
record.bs.append(bs(line))
try:
line = next(handle)
except StopIteration:
raise ValueError("Failed to find end of BS block") from None
# now read all the read data
# it starts with an 'RD' line, then a mandatory QA line,
# and then an optional DS line.
# CT,RT,WA,WR may or may not be there in unlimited quantity.
# They might refer to the actual read or contig, or, if
# encountered at the end of file, to any previous read or contig.
# The sort() method deals with that later.
while True:
# each read must have a rd and qa
try:
while True:
# Stop scanning once an RD line is found.
if line.startswith("RD "):
break
line = next(handle)
except StopIteration:
raise ValueError("Failed to find RD line") from None
record.reads.append(Reads(line))
for line in handle:
line = line.strip()
if not line:
break
record.reads[-1].rd.sequence += line
for line in handle:
if line.strip():
break
if not line.startswith("QA "):
raise ValueError("Failed to find QA line")
record.reads[-1].qa = qa(line)
# now one ds can follow
for line in handle:
if line.strip():
break
else:
break
if line.startswith("DS "):
record.reads[-1].ds = ds(line)
line = ""
# the file could just end, or there's some more stuff.
# In ace files, anything can happen.
# the following tags are interspersed between reads and can appear multiple times.
while True:
# something left
try:
while True:
if line.strip():
break
line = next(handle)
except StopIteration:
# file ends here
break
if line.startswith("RT{"):
# now if we're at the end of the file, this rt could
# belong to a previous read, not the actual one.
# we store it here where it appears; the user can sort later.
if record.reads[-1].rt is None:
record.reads[-1].rt = []
for line in handle:
line = line.strip()
# if line=="COMMENT{":
if line.startswith("COMMENT{"):
if line[8:].strip():
# MIRA 3.0.5 would miss the new line out :(
record.reads[-1].rt[-1].comment.append(line[8:])
for line in handle:
line = line.strip()
if line.endswith("C}"):
break
record.reads[-1].rt[-1].comment.append(line)
elif line == "}":
break
else:
record.reads[-1].rt.append(rt(line))
line = ""
elif line.startswith("WR{"):
if record.reads[-1].wr is None:
record.reads[-1].wr = []
for line in handle:
line = line.strip()
if line == "}":
break
record.reads[-1].wr.append(wr(line))
line = ""
elif line.startswith("WA{"):
if record.wa is None:
record.wa = []
try:
line = next(handle)
except StopIteration:
raise ValueError("Failed to read WA block") from None
record.wa.append(wa(line))
for line in handle:
line = line.strip()
if line == "}":
break
record.wa[-1].info.append(line)
line = ""
elif line.startswith("CT{"):
if record.ct is None:
record.ct = []
try:
line = next(handle)
except StopIteration:
raise ValueError("Failed to read CT block") from None
record.ct.append(ct(line))
for line in handle:
line = line.strip()
if line == "COMMENT{":
for line in handle:
line = line.strip()
if line.endswith("C}"):
break
record.ct[-1].comment.append(line)
elif line == "}":
break
else:
record.ct[-1].info.append(line)
line = ""
else:
break
if not line.startswith("RD"): # another read?
break
yield record
finally:
if handle is not source:
handle.close() |
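# A minimal usage sketch (the path "Ace/contig1.ace" is hypothetical):
# iterate contig by contig without holding the whole assembly in memory.
from Bio.Sequencing import Ace
for contig in Ace.parse("Ace/contig1.ace"):
    print(contig.name, len(contig.reads))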
Parse a full ACE file into a list of contigs. | def read(handle):
"""Parse a full ACE file into a list of contigs."""
handle = iter(handle)
record = ACEFileRecord()
try:
line = next(handle)
except StopIteration:
raise ValueError("Premature end of file") from None
# check if the file starts correctly
if not line.startswith("AS"):
raise ValueError("File does not start with 'AS'.")
words = line.split()
record.ncontigs = int(words[1])
record.nreads = int(words[2])
# now read all the records
record.contigs = list(parse(handle))
# wa, ct, rt tags are usually at the end of the file, but not necessarily.
# If the iterator is used, the tags are returned with the contig or the read
# after which they appear; if all tags are at the end, they are read with the
# last contig. The concept of an iterator leaves no other choice. But if the
# user uses the ACEParser, we can check them and put them into the
# appropriate contig/read instance.
# Conclusion: An ACE file is not a filetype for which iteration is 100% suitable.
record.sort()
return record |
Read one PHD record from the file and return it as a Record object.
Argument source is a file-like object opened in text mode, or a path
to a file.
This function reads PHD file data line by line from the source, and
returns a single Record object. A ValueError is raised if more than
one record is found in the file. | def read(source):
"""Read one PHD record from the file and return it as a Record object.
Argument source is a file-like object opened in text mode, or a path
to a file.
This function reads PHD file data line by line from the source, and
returns a single Record object. A ValueError is raised if more than
one record is found in the file.
"""
handle = _open(source)
try:
record = _read(handle)
try:
next(handle)
except StopIteration:
return record
else:
raise ValueError("More than one PHD record found")
finally:
if handle is not source:
handle.close() |
Iterate over a file yielding multiple PHD records.
Argument source is a file-like object opened in text mode, or a path
to a file.
The data is read line by line from the source.
Typical usage::
records = parse(handle)
for record in records:
# do something with the record object | def parse(source):
"""Iterate over a file yielding multiple PHD records.
Argument source is a file-like object opened in text mode, or a path
to a file.
The data is read line by line from the source.
Typical usage::
records = parse(handle)
for record in records:
# do something with the record object
"""
handle = _open(source)
try:
while True:
record = _read(handle)
if not record:
return
yield record
finally:
if handle is not source:
handle.close() |
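# A minimal usage sketch (the path "Phd/phd1" is a hypothetical PHD file):
# use read() for exactly one record and parse() to iterate over several.
from Bio.Sequencing import Phd
for record in Phd.parse("Phd/phd1"):
    print(record.file_name, len(record.sites))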
Return the crc32 checksum for a sequence (string or Seq object).
Note that the case is important:
>>> crc32("ACGTACGTACGT")
20049947
>>> crc32("acgtACGTacgt")
1688586483 | def crc32(seq):
"""Return the crc32 checksum for a sequence (string or Seq object).
Note that the case is important:
>>> crc32("ACGTACGTACGT")
20049947
>>> crc32("acgtACGTacgt")
1688586483
"""
try:
# Assume it's a Seq object
s = bytes(seq)
except TypeError:
# Assume it's a string
s = seq.encode()
return binascii.crc32(s) |
Return the crc64 checksum for a sequence (string or Seq object).
Note that the case is important:
>>> crc64("ACGTACGTACGT")
'CRC-C4FBB762C4A87EBD'
>>> crc64("acgtACGTacgt")
'CRC-DA4509DC64A87EBD' | def crc64(s):
"""Return the crc64 checksum for a sequence (string or Seq object).
Note that the case is important:
>>> crc64("ACGTACGTACGT")
'CRC-C4FBB762C4A87EBD'
>>> crc64("acgtACGTacgt")
'CRC-DA4509DC64A87EBD'
"""
crcl = 0
crch = 0
for c in s:
shr = (crch & 0xFF) << 24
temp1h = crch >> 8
temp1l = (crcl >> 8) | shr
idx = (crcl ^ ord(c)) & 0xFF
crch = temp1h ^ _table_h[idx]
crcl = temp1l
return f"CRC-{crch:08X}{crcl:08X}" |
Return the GCG checksum (int) for a sequence (string or Seq object).
Given a nucleotide or amino-acid sequence (or any string),
returns the GCG checksum (int). Checksum used by GCG program.
seq type = str.
Based on BioPerl GCG_checksum. Adapted by Sebastian Bassi
with the help of John Lenton, Pablo Ziliani, and Gabriel Genellina.
All sequences are converted to uppercase.
>>> gcg("ACGTACGTACGT")
5688
>>> gcg("acgtACGTacgt")
5688 | def gcg(seq):
"""Return the GCG checksum (int) for a sequence (string or Seq object).
Given a nucleotide or amino-acid sequence (or any string),
returns the GCG checksum (int). Checksum used by GCG program.
seq type = str.
Based on BioPerl GCG_checksum. Adapted by Sebastian Bassi
with the help of John Lenton, Pablo Ziliani, and Gabriel Genellina.
All sequences are converted to uppercase.
>>> gcg("ACGTACGTACGT")
5688
>>> gcg("acgtACGTacgt")
5688
"""
index = checksum = 0
for char in seq:
index += 1
checksum += index * ord(char.upper())
if index == 57:
index = 0
return checksum % 10000 |
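# Worked example of the rolling index above: positions count 1, 2, ..., 57
# and then wrap back to 1, so a two-letter sequence gives
# (1 * ord("A") + 2 * ord("C")) % 10000 = (65 + 134) % 10000 = 199.
from Bio.SeqUtils.CheckSum import gcg
print(gcg("AC"))  # 199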
Return the SEGUID (string) for a sequence (string or Seq object).
Given a nucleotide or amino-acid sequence (or any string),
returns the SEGUID string (A SEquence Globally Unique IDentifier).
seq type = str.
Note that the case is not important:
>>> seguid("ACGTACGTACGT")
'If6HIvcnRSQDVNiAoefAzySc6i4'
>>> seguid("acgtACGTacgt")
'If6HIvcnRSQDVNiAoefAzySc6i4'
For more information about SEGUID, see:
http://bioinformatics.anl.gov/seguid/
https://doi.org/10.1002/pmic.200600032 | def seguid(seq):
"""Return the SEGUID (string) for a sequence (string or Seq object).
Given a nucleotide or amino-acid sequence (or any string),
returns the SEGUID string (A SEquence Globally Unique IDentifier).
seq type = str.
Note that the case is not important:
>>> seguid("ACGTACGTACGT")
'If6HIvcnRSQDVNiAoefAzySc6i4'
>>> seguid("acgtACGTacgt")
'If6HIvcnRSQDVNiAoefAzySc6i4'
For more information about SEGUID, see:
http://bioinformatics.anl.gov/seguid/
https://doi.org/10.1002/pmic.200600032
"""
import hashlib
import base64
m = hashlib.sha1()
try:
# Assume it's a Seq object
seq = bytes(seq)
except TypeError:
# Assume it's a string
seq = seq.encode()
m.update(seq.upper())
tmp = base64.encodebytes(m.digest())
return tmp.decode().replace("\n", "").rstrip("=") |
Calculate Local Composition Complexity (LCC) values over a sliding window.
Returns a list of floats, the LCC values for a sliding window over
the sequence.
seq - an unambiguous DNA sequence (a string or Seq object)
wsize - window size, integer
The result is the same as applying lcc_simp multiple times, but this
version is optimized for speed. The optimization works by using the
value of the previous window as a base to compute the next one. | def lcc_mult(seq, wsize):
"""Calculate Local Composition Complexity (LCC) values over sliding window.
Returns a list of floats, the LCC values for a sliding window over
the sequence.
seq - an unambiguous DNA sequence (a string or Seq object)
wsize - window size, integer
The result is the same as applying lcc_simp multiple times, but this
version is optimized for speed. The optimization works by using the
value of the previous window as a base to compute the next one.
"""
l4 = math.log(4)
seq = seq.upper()
tamseq = len(seq)
compone = [0]
lccsal = []
for i in range(wsize):
compone.append(((i + 1) / wsize) * math.log((i + 1) / wsize) / l4)
window = seq[0:wsize]
cant_a = window.count("A")
cant_c = window.count("C")
cant_t = window.count("T")
cant_g = window.count("G")
term_a = compone[cant_a]
term_c = compone[cant_c]
term_t = compone[cant_t]
term_g = compone[cant_g]
lccsal.append(-(term_a + term_c + term_t + term_g))
tail = seq[0]
for x in range(tamseq - wsize):
window = seq[x + 1 : wsize + x + 1]
if tail == window[-1]:
lccsal.append(lccsal[-1])
elif tail == "A":
cant_a -= 1
if window.endswith("C"):
cant_c += 1
term_a = compone[cant_a]
term_c = compone[cant_c]
lccsal.append(-(term_a + term_c + term_t + term_g))
elif window.endswith("T"):
cant_t += 1
term_a = compone[cant_a]
term_t = compone[cant_t]
lccsal.append(-(term_a + term_c + term_t + term_g))
elif window.endswith("G"):
cant_g += 1
term_a = compone[cant_a]
term_g = compone[cant_g]
lccsal.append(-(term_a + term_c + term_t + term_g))
elif tail == "C":
cant_c -= 1
if window.endswith("A"):
cant_a += 1
term_a = compone[cant_a]
term_c = compone[cant_c]
lccsal.append(-(term_a + term_c + term_t + term_g))
elif window.endswith("T"):
cant_t += 1
term_c = compone[cant_c]
term_t = compone[cant_t]
lccsal.append(-(term_a + term_c + term_t + term_g))
elif window.endswith("G"):
cant_g += 1
term_c = compone[cant_c]
term_g = compone[cant_g]
lccsal.append(-(term_a + term_c + term_t + term_g))
elif tail == "T":
cant_t -= 1
if window.endswith("A"):
cant_a += 1
term_a = compone[cant_a]
term_t = compone[cant_t]
lccsal.append(-(term_a + term_c + term_t + term_g))
elif window.endswith("C"):
cant_c += 1
term_c = compone[cant_c]
term_t = compone[cant_t]
lccsal.append(-(term_a + term_c + term_t + term_g))
elif window.endswith("G"):
cant_g += 1
term_t = compone[cant_t]
term_g = compone[cant_g]
lccsal.append(-(term_a + term_c + term_t + term_g))
elif tail == "G":
cant_g -= 1
if window.endswith("A"):
cant_a += 1
term_a = compone[cant_a]
term_g = compone[cant_g]
lccsal.append(-(term_a + term_c + term_t + term_g))
elif window.endswith("C"):
cant_c += 1
term_c = compone[cant_c]
term_g = compone[cant_g]
lccsal.append(-(term_a + term_c + term_t + term_g))
elif window.endswith("T"):
cant_t += 1
term_t = compone[cant_t]
term_g = compone[cant_g]
lccsal.append(-(term_a + term_c + term_t + term_g))
tail = window[0]
return lccsal |
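# Quick usage sketch: a sliding LCC profile over an unambiguous DNA
# sequence; the result holds len(seq) - wsize + 1 values, one per window.
from Bio.SeqUtils.lcc import lcc_mult
values = lcc_mult("ACGTAGCTAGCTTACGGATCACGT", 6)
print(len(values))  # 19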
Calculate Local Composition Complexity (LCC) for a sequence.
seq - an unambiguous DNA sequence (a string or Seq object)
Returns the Local Composition Complexity (LCC) value for the entire
sequence (as a float).
Reference:
Andrzej K Konopka (2005) Sequence Complexity and Composition
https://doi.org/10.1038/npg.els.0005260 | def lcc_simp(seq):
"""Calculate Local Composition Complexity (LCC) for a sequence.
seq - an unambiguous DNA sequence (a string or Seq object)
Returns the Local Composition Complexity (LCC) value for the entire
sequence (as a float).
Reference:
Andrzej K Konopka (2005) Sequence Complexity and Composition
https://doi.org/10.1038/npg.els.0005260
"""
wsize = len(seq)
seq = seq.upper()
l4 = math.log(4)
# Check to avoid calculating the log of 0.
if "A" not in seq:
term_a = 0
else:
term_a = (seq.count("A") / wsize) * math.log(seq.count("A") / wsize) / l4
if "C" not in seq:
term_c = 0
else:
term_c = (seq.count("C") / wsize) * math.log(seq.count("C") / wsize) / l4
if "T" not in seq:
term_t = 0
else:
term_t = (seq.count("T") / wsize) * math.log(seq.count("T") / wsize) / l4
if "G" not in seq:
term_g = 0
else:
term_g = (seq.count("G") / wsize) * math.log(seq.count("G") / wsize) / l4
return -(term_a + term_c + term_t + term_g) |
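# Worked example: a window with uniform base composition has the maximum
# complexity of 1.0, since each of the four terms equals
# 0.25 * log(0.25) / log(4) = -0.25 and the negated sum is 1.0.
from Bio.SeqUtils.lcc import lcc_mult, lcc_simp
print(lcc_simp("ACGT"))  # 1.0
# Hedged sanity check of the equivalence claimed in lcc_mult's docstring:
seq = "ACGTAGCTAGCTTACGGATC"
expected = [lcc_simp(seq[i:i + 5]) for i in range(len(seq) - 5 + 1)]
assert all(abs(a - b) < 1e-9 for a, b in zip(lcc_mult(seq, 5), expected))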
Return a table with thermodynamic parameters (as dictionary).
Arguments:
- oldtable: An existing dictionary with thermodynamic parameters.
- values: A dictionary with new or updated values.
E.g., to replace the initiation parameters in the Sugimoto '96 dataset with
the initiation parameters from Allawi & SantaLucia '97:
>>> from Bio.SeqUtils.MeltingTemp import make_table, DNA_NN2
>>> table = DNA_NN2 # Sugimoto '96
>>> table['init_A/T']
(0, 0)
>>> newtable = make_table(oldtable=DNA_NN2, values={'init': (0, 0),
... 'init_A/T': (2.3, 4.1),
... 'init_G/C': (0.1, -2.8)})
>>> print("%0.1f, %0.1f" % newtable['init_A/T'])
2.3, 4.1 | def make_table(oldtable=None, values=None):
"""Return a table with thermodynamic parameters (as dictionary).
Arguments:
- oldtable: An existing dictionary with thermodynamic parameters.
- values: A dictionary with new or updated values.
E.g., to replace the initiation parameters in the Sugimoto '96 dataset with
the initiation parameters from Allawi & SantaLucia '97:
>>> from Bio.SeqUtils.MeltingTemp import make_table, DNA_NN2
>>> table = DNA_NN2 # Sugimoto '96
>>> table['init_A/T']
(0, 0)
>>> newtable = make_table(oldtable=DNA_NN2, values={'init': (0, 0),
... 'init_A/T': (2.3, 4.1),
... 'init_G/C': (0.1, -2.8)})
>>> print("%0.1f, %0.1f" % newtable['init_A/T'])
2.3, 4.1
"""
if oldtable is None:
table = {
"init": (0, 0),
"init_A/T": (0, 0),
"init_G/C": (0, 0),
"init_oneG/C": (0, 0),
"init_allA/T": (0, 0),
"init_5T/A": (0, 0),
"sym": (0, 0),
"AA/TT": (0, 0),
"AT/TA": (0, 0),
"TA/AT": (0, 0),
"CA/GT": (0, 0),
"GT/CA": (0, 0),
"CT/GA": (0, 0),
"GA/CT": (0, 0),
"CG/GC": (0, 0),
"GC/CG": (0, 0),
"GG/CC": (0, 0),
}
else:
table = oldtable.copy()
if values:
table.update(values)
return table |
Return a sequence which fulfills the requirements of the given method (PRIVATE).
All Tm methods in this package require the sequence in uppercase format.
Most methods make use of the length of the sequence (directly or
indirectly), which can only be expressed as len(seq) if the sequence does
not contain whitespaces and other non-base characters. RNA sequences are
backtranscribed to DNA. This method is PRIVATE.
Arguments:
- seq: The sequence as given by the user (passed as string).
- method: Tm_Wallace, Tm_GC or Tm_NN.
>>> from Bio.SeqUtils import MeltingTemp as mt
>>> mt._check('10 ACGTTGCAAG tccatggtac', 'Tm_NN')
'ACGTTGCAAGTCCATGGTAC' | def _check(seq, method):
"""Return a sequence which fulfills the requirements of the given method (PRIVATE).
All Tm methods in this package require the sequence in uppercase format.
Most methods make use of the length of the sequence (directly or
indirectly), which can only be expressed as len(seq) if the sequence does
not contain whitespaces and other non-base characters. RNA sequences are
backtranscribed to DNA. This method is PRIVATE.
Arguments:
- seq: The sequence as given by the user (passed as string).
- method: Tm_Wallace, Tm_GC or Tm_NN.
>>> from Bio.SeqUtils import MeltingTemp as mt
>>> mt._check('10 ACGTTGCAAG tccatggtac', 'Tm_NN')
'ACGTTGCAAGTCCATGGTAC'
"""
seq = "".join(seq.split()).upper()
seq = str(Seq.Seq(seq).back_transcribe())
if method == "Tm_Wallace":
return seq
if method == "Tm_GC":
baseset = (
"A",
"B",
"C",
"D",
"G",
"H",
"I",
"K",
"M",
"N",
"R",
"S",
"T",
"V",
"W",
"X",
"Y",
)
if method == "Tm_NN":
baseset = ("A", "C", "G", "T", "I")
seq = "".join([base for base in seq if base in baseset])
return seq |
Calculate a term to correct Tm for salt ions.
Depending on the Tm calculation, the term will correct Tm or entropy. To
calculate corrected Tm values, different operations need to be applied:
- methods 1-4: Tm(new) = Tm(old) + corr
- method 5: deltaS(new) = deltaS(old) + corr
- methods 6+7: Tm(new) = 1/(1/Tm(old) + corr)
Arguments:
- Na, K, Tris, Mg, dNTPs: Millimolar concentration of the respective ion. To
have a simple 'salt correction', just pass Na. If any of K, Tris, Mg and
dNTPs is non-zero, a 'sodium-equivalent' concentration is calculated
according to von Ahsen et al. (2001, Clin Chem 47: 1956-1961):
[Na_eq] = [Na+] + [K+] + [Tris]/2 + 120*([Mg2+] - [dNTPs])^0.5
If [dNTPs] >= [Mg2+]: [Na_eq] = [Na+] + [K+] + [Tris]/2
- method: Which method to be applied. Methods 1-4 correct Tm, method 5
corrects deltaS, methods 6 and 7 correct 1/Tm. The methods are:
1. 16.6 x log[Na+]
(Schildkraut & Lifson (1965), Biopolymers 3: 195-208)
2. 16.6 x log([Na+]/(1.0 + 0.7*[Na+]))
(Wetmur (1991), Crit Rev Biochem Mol Biol 126: 227-259)
3. 12.5 x log[Na+]
(SantaLucia et al. (1996), Biochemistry 35: 3555-3562)
4. 11.7 x log[Na+]
(SantaLucia (1998), Proc Natl Acad Sci USA 95: 1460-1465)
5. Correction for deltaS: 0.368 x (N-1) x ln[Na+]
(SantaLucia (1998), Proc Natl Acad Sci USA 95: 1460-1465)
6. (4.29(%GC)-3.95)x1e-5 x ln[Na+] + 9.40e-6 x ln[Na+]^2
(Owczarzy et al. (2004), Biochemistry 43: 3537-3554)
7. Complex formula with decision tree and 7 empirical constants.
Mg2+ is corrected for dNTPs binding (if present)
(Owczarzy et al. (2008), Biochemistry 47: 5336-5353)
Examples
--------
>>> from Bio.SeqUtils.MeltingTemp import salt_correction
>>> print('%0.2f' % salt_correction(Na=50, method=1))
-21.60
>>> print('%0.2f' % salt_correction(Na=50, method=2))
-21.85
>>> print('%0.2f' % salt_correction(Na=100, Tris=20, method=2))
-16.45
>>> print('%0.2f' % salt_correction(Na=100, Tris=20, Mg=1.5, method=2))
-10.99 | def salt_correction(Na=0, K=0, Tris=0, Mg=0, dNTPs=0, method=1, seq=None):
"""Calculate a term to correct Tm for salt ions.
Depending on the Tm calculation, the term will correct Tm or entropy. To
calculate corrected Tm values, different operations need to be applied:
- methods 1-4: Tm(new) = Tm(old) + corr
- method 5: deltaS(new) = deltaS(old) + corr
- methods 6+7: Tm(new) = 1/(1/Tm(old) + corr)
Arguments:
- Na, K, Tris, Mg, dNTPs: Millimolar concentration of the respective ion. To
have a simple 'salt correction', just pass Na. If any of K, Tris, Mg and
dNTPs is non-zero, a 'sodium-equivalent' concentration is calculated
according to von Ahsen et al. (2001, Clin Chem 47: 1956-1961):
[Na_eq] = [Na+] + [K+] + [Tris]/2 + 120*([Mg2+] - [dNTPs])^0.5
If [dNTPs] >= [Mg2+]: [Na_eq] = [Na+] + [K+] + [Tris]/2
- method: Which method to be applied. Methods 1-4 correct Tm, method 5
corrects deltaS, methods 6 and 7 correct 1/Tm. The methods are:
1. 16.6 x log[Na+]
(Schildkraut & Lifson (1965), Biopolymers 3: 195-208)
2. 16.6 x log([Na+]/(1.0 + 0.7*[Na+]))
(Wetmur (1991), Crit Rev Biochem Mol Biol 126: 227-259)
3. 12.5 x log[Na+]
(SantaLucia et al. (1996), Biochemistry 35: 3555-3562)
4. 11.7 x log[Na+]
(SantaLucia (1998), Proc Natl Acad Sci USA 95: 1460-1465)
5. Correction for deltaS: 0.368 x (N-1) x ln[Na+]
(SantaLucia (1998), Proc Natl Acad Sci USA 95: 1460-1465)
6. (4.29(%GC)-3.95)x1e-5 x ln[Na+] + 9.40e-6 x ln[Na+]^2
(Owczarzy et al. (2004), Biochemistry 43: 3537-3554)
7. Complex formula with decision tree and 7 empirical constants.
Mg2+ is corrected for dNTPs binding (if present)
(Owczarzy et al. (2008), Biochemistry 47: 5336-5353)
Examples
--------
>>> from Bio.SeqUtils.MeltingTemp import salt_correction
>>> print('%0.2f' % salt_correction(Na=50, method=1))
-21.60
>>> print('%0.2f' % salt_correction(Na=50, method=2))
-21.85
>>> print('%0.2f' % salt_correction(Na=100, Tris=20, method=2))
-16.45
>>> print('%0.2f' % salt_correction(Na=100, Tris=20, Mg=1.5, method=2))
-10.99
"""
if method in (5, 6, 7) and not seq:
raise ValueError(
"sequence is missing (is needed to calculate GC content or sequence length)."
)
corr = 0
if not method:
return corr
Mon = Na + K + Tris / 2.0 # Note: all these values are millimolar
mg = Mg * 1e-3 # Lowercase ions (mg, mon, dntps) are molar
# Na equivalent according to von Ahsen et al. (2001):
if sum((K, Mg, Tris, dNTPs)) > 0 and method != 7 and dNTPs < Mg:
# dNTPs bind Mg2+ strongly. If [dNTPs] is larger or equal than
# [Mg2+], free Mg2+ is considered not to be relevant.
Mon += 120 * math.sqrt(Mg - dNTPs)
mon = Mon * 1e-3
# Note: math.log = ln(), math.log10 = log()
if method in range(1, 7) and not mon:
raise ValueError(
"Total ion concentration of zero is not allowed in this method."
)
if method == 1:
corr = 16.6 * math.log10(mon)
if method == 2:
corr = 16.6 * math.log10(mon / (1.0 + 0.7 * mon))
if method == 3:
corr = 12.5 * math.log10(mon)
if method == 4:
corr = 11.7 * math.log10(mon)
if method == 5:
corr = 0.368 * (len(seq) - 1) * math.log(mon)
if method == 6:
corr = (
(4.29 * SeqUtils.gc_fraction(seq, "ignore") - 3.95) * 1e-5 * math.log(mon)
) + 9.40e-6 * math.log(mon) ** 2
# Turn black code style off
# fmt: off
if method == 7:
a, b, c, d = 3.92, -0.911, 6.26, 1.42
e, f, g = -48.2, 52.5, 8.31
if dNTPs > 0:
dntps = dNTPs * 1e-3
ka = 3e4 # Dissociation constant for Mg:dNTP
# Free Mg2+ calculation:
mg = (-(ka * dntps - ka * mg + 1.0)
+ math.sqrt((ka * dntps - ka * mg + 1.0) ** 2
+ 4.0 * ka * mg)) / (2.0 * ka)
if Mon > 0:
R = math.sqrt(mg) / mon
if R < 0.22:
corr = (4.29 * SeqUtils.gc_fraction(seq, "ignore") - 3.95) * \
1e-5 * math.log(mon) + 9.40e-6 * math.log(mon) ** 2
return corr
elif R < 6.0:
a = 3.92 * (0.843 - 0.352 * math.sqrt(mon) * math.log(mon))
d = 1.42 * (1.279 - 4.03e-3 * math.log(mon)
- 8.03e-3 * math.log(mon) ** 2)
g = 8.31 * (0.486 - 0.258 * math.log(mon)
+ 5.25e-3 * math.log(mon) ** 3)
corr = (a + b * math.log(mg) + (SeqUtils.gc_fraction(seq, "ignore"))
* (c + d * math.log(mg)) + (1 / (2.0 * (len(seq) - 1)))
* (e + f * math.log(mg) + g * math.log(mg) ** 2)) * 1e-5
# Turn black code style on
# fmt: on
if method > 7:
raise ValueError("Allowed values for parameter 'method' are 1-7.")
return corr |
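# Hedged worked example for method 5 (the entropy correction): with
# Na=50 mM and a 20-mer, corr = 0.368 * (20 - 1) * ln(0.05), about -20.95.
from Bio.SeqUtils.MeltingTemp import salt_correction
print("%0.2f" % salt_correction(Na=50, method=5, seq="A" * 20))  # -20.95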
Correct a given Tm for DMSO and formamide.
Please note that these corrections are +/- rough approximations.
Arguments:
- melting_temp: Melting temperature.
- DMSO: Percent DMSO.
- fmd: Formamide concentration in %(fmdmethod=1) or molar (fmdmethod=2).
- DMSOfactor: How much Tm should decrease per percent DMSO. Default=0.75
(von Ahsen et al. 2001). Other published values are 0.5, 0.6 and 0.675.
- fmdfactor: How much should Tm decrease per percent formamide.
Default=0.65. Several papers report factors between 0.6 and 0.72.
- fmdmethod:
1. Tm = Tm - factor(%formamide) (Default)
2. Tm = Tm + (0.453(f(GC)) - 2.88) x [formamide]
Here f(GC) is fraction of GC.
Note (again) that in fmdmethod=1 formamide concentration is given in %,
while in fmdmethod=2 it is given in molar.
- GC: GC content in percent.
Examples:
>>> from Bio.SeqUtils import MeltingTemp as mt
>>> mt.chem_correction(70)
70
>>> print('%0.2f' % mt.chem_correction(70, DMSO=3))
67.75
>>> print('%0.2f' % mt.chem_correction(70, fmd=5))
66.75
>>> print('%0.2f' % mt.chem_correction(70, fmdmethod=2, fmd=1.25,
... GC=50))
66.68 | def chem_correction(
melting_temp, DMSO=0, fmd=0, DMSOfactor=0.75, fmdfactor=0.65, fmdmethod=1, GC=None
):
"""Correct a given Tm for DMSO and formamide.
Please note that these corrections are +/- rough approximations.
Arguments:
- melting_temp: Melting temperature.
- DMSO: Percent DMSO.
- fmd: Formamide concentration in %(fmdmethod=1) or molar (fmdmethod=2).
- DMSOfactor: How much Tm should decrease per percent DMSO. Default=0.75
(von Ahsen et al. 2001). Other published values are 0.5, 0.6 and 0.675.
- fmdfactor: How much should Tm decrease per percent formamide.
Default=0.65. Several papers report factors between 0.6 and 0.72.
- fmdmethod:
1. Tm = Tm - factor(%formamide) (Default)
2. Tm = Tm + (0.453(f(GC)) - 2.88) x [formamide]
Here f(GC) is fraction of GC.
Note (again) that in fmdmethod=1 formamide concentration is given in %,
while in fmdmethod=2 it is given in molar.
- GC: GC content in percent.
Examples:
>>> from Bio.SeqUtils import MeltingTemp as mt
>>> mt.chem_correction(70)
70
>>> print('%0.2f' % mt.chem_correction(70, DMSO=3))
67.75
>>> print('%0.2f' % mt.chem_correction(70, fmd=5))
66.75
>>> print('%0.2f' % mt.chem_correction(70, fmdmethod=2, fmd=1.25,
... GC=50))
66.68
"""
if DMSO:
melting_temp -= DMSOfactor * DMSO
if fmd:
# McConaughy et al. (1969), Biochemistry 8: 3289-3295
if fmdmethod == 1:
# Note: Here fmd is given in percent
melting_temp -= fmdfactor * fmd
# Blake & Delcourt (1996), Nucl Acids Res 11: 2095-2103
if fmdmethod == 2:
if GC is None or GC < 0:
raise ValueError("'GC' is missing or negative")
# Note: Here fmd is given in molar
melting_temp += (0.453 * (GC / 100.0) - 2.88) * fmd
if fmdmethod not in (1, 2):
raise ValueError("'fmdmethod' must be 1 or 2")
return melting_temp |
Calculate and return the Tm using the 'Wallace rule'.
Tm = 4 degC * (G + C) + 2 degC * (A+T)
The Wallace rule (Thein & Wallace 1986, in Human genetic diseases: a
practical approach, 33-50) is often used as rule of thumb for approximate
Tm calculations for primers of 14 to 20 nt length.
Non-DNA characters (e.g., E, F, J, !, 1, etc.) are ignored by this method.
Examples:
>>> from Bio.SeqUtils import MeltingTemp as mt
>>> mt.Tm_Wallace('ACGTTGCAATGCCGTA')
48.0
>>> mt.Tm_Wallace('ACGT TGCA ATGC CGTA')
48.0
>>> mt.Tm_Wallace('1ACGT2TGCA3ATGC4CGTA')
48.0 | def Tm_Wallace(seq, check=True, strict=True):
"""Calculate and return the Tm using the 'Wallace rule'.
Tm = 4 degC * (G + C) + 2 degC * (A+T)
The Wallace rule (Thein & Wallace 1986, in Human genetic diseases: a
practical approach, 33-50) is often used as rule of thumb for approximate
Tm calculations for primers of 14 to 20 nt length.
Non-DNA characters (e.g., E, F, J, !, 1, etc.) are ignored by this method.
Examples:
>>> from Bio.SeqUtils import MeltingTemp as mt
>>> mt.Tm_Wallace('ACGTTGCAATGCCGTA')
48.0
>>> mt.Tm_Wallace('ACGT TGCA ATGC CGTA')
48.0
>>> mt.Tm_Wallace('1ACGT2TGCA3ATGC4CGTA')
48.0
"""
seq = str(seq)
if check:
seq = _check(seq, "Tm_Wallace")
melting_temp = 2 * (sum(map(seq.count, ("A", "T", "W")))) + 4 * (
sum(map(seq.count, ("C", "G", "S")))
)
# Intermediate values for ambiguous positions:
tmp = (
3 * (sum(map(seq.count, ("K", "M", "N", "R", "Y"))))
+ 10 / 3.0 * (sum(map(seq.count, ("B", "V"))))
+ 8 / 3.0 * (sum(map(seq.count, ("D", "H"))))
)
if strict and tmp:
raise ValueError(
"ambiguous bases B, D, H, K, M, N, R, V, Y not allowed when strict=True"
)
else:
melting_temp += tmp
return melting_temp |
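# Hedged worked example of the ambiguity weighting with strict=False:
# A and T add 2 degC each, C and G add 4 degC each, and the ambiguous
# K and N add 3 degC each, so "ACGTKN" gives 2+2+4+4+3+3 = 18.0.
from Bio.SeqUtils.MeltingTemp import Tm_Wallace
print(Tm_Wallace("ACGTKN", strict=False))  # 18.0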
Return the Tm using empirical formulas based on GC content.
General format: Tm = A + B(%GC) - C/N + salt correction - D(%mismatch)
A, B, C, D: empirical constants, N: primer length
D (amount of decrease in Tm per % mismatch) is often 1, but sometimes other
values have been used (0.6-1.5). Use 'X' to indicate the mismatch position
in the sequence. Note that this mismatch correction is a rough estimate.
>>> from Bio.SeqUtils import MeltingTemp as mt
>>> print("%0.2f" % mt.Tm_GC('CTGCTGATXGCACGAGGTTATGG', valueset=2))
69.20
Arguments:
- valueset: A few often cited variants are included:
1. Tm = 69.3 + 0.41(%GC) - 650/N
(Marmur & Doty 1962, J Mol Biol 5: 109-118; Chester & Marshak 1993,
Anal Biochem 209: 284-290)
2. Tm = 81.5 + 0.41(%GC) - 675/N - %mismatch
'QuikChange' formula. Recommended (by the manufacturer) for the
design of primers for QuikChange mutagenesis.
3. Tm = 81.5 + 0.41(%GC) - 675/N + 16.6 x log[Na+]
(Marmur & Doty 1962, J Mol Biol 5: 109-118; Schildkraut & Lifson
1965, Biopolymers 3: 195-208)
4. Tm = 81.5 + 0.41(%GC) - 500/N + 16.6 x log([Na+]/(1.0 + 0.7 x
[Na+])) - %mismatch
(Wetmur 1991, Crit Rev Biochem Mol Biol 26: 227-259). This is the
standard formula in approximative mode of MELTING 4.3.
5. Tm = 78 + 0.7(%GC) - 500/N + 16.6 x log([Na+]/(1.0 + 0.7 x [Na+]))
- %mismatch
(Wetmur 1991, Crit Rev Biochem Mol Biol 26: 227-259). For RNA.
6. Tm = 67 + 0.8(%GC) - 500/N + 16.6 x log([Na+]/(1.0 + 0.7 x [Na+]))
- %mismatch
(Wetmur 1991, Crit Rev Biochem Mol Biol 26: 227-259). For RNA/DNA
hybrids.
7. Tm = 81.5 + 0.41(%GC) - 600/N + 16.6 x log[Na+]
Used by Primer3Plus to calculate the product Tm. Default set.
8. Tm = 77.1 + 0.41(%GC) - 528/N + 11.7 x log[Na+]
(von Ahsen et al. 2001, Clin Chem 47: 1956-1961). Recommended 'as a
tradeoff between accuracy and ease of use'.
- userset: Tuple of four values for A, B, C, and D. Usersets override
valuesets.
- Na, K, Tris, Mg, dNTPs: Concentration of the respective ions [mM]. If
any of K, Tris, Mg and dNTPs is non-zero, a 'sodium-equivalent'
concentration is calculated and used for salt correction (von Ahsen et
al., 2001).
- saltcorr: Type of salt correction (see method salt_correction).
Default=0. 0 or None means no salt correction.
- mismatch: If 'True' (default) every 'X' in the sequence is counted as
a mismatch.
seq,
check=True,
strict=True,
valueset=7,
userset=None,
Na=50,
K=0,
Tris=0,
Mg=0,
dNTPs=0,
saltcorr=0,
mismatch=True,
):
"""Return the Tm using empirical formulas based on GC content.
General format: Tm = A + B(%GC) - C/N + salt correction - D(%mismatch)
A, B, C, D: empirical constants, N: primer length
D (amount of decrease in Tm per % mismatch) is often 1, but sometimes other
values have been used (0.6-1.5). Use 'X' to indicate the mismatch position
in the sequence. Note that this mismatch correction is a rough estimate.
>>> from Bio.SeqUtils import MeltingTemp as mt
>>> print("%0.2f" % mt.Tm_GC('CTGCTGATXGCACGAGGTTATGG', valueset=2))
69.20
Arguments:
- valueset: A few often cited variants are included:
1. Tm = 69.3 + 0.41(%GC) - 650/N
(Marmur & Doty 1962, J Mol Biol 5: 109-118; Chester & Marshak 1993,
Anal Biochem 209: 284-290)
2. Tm = 81.5 + 0.41(%GC) - 675/N - %mismatch
'QuikChange' formula. Recommended (by the manufacturer) for the
design of primers for QuikChange mutagenesis.
3. Tm = 81.5 + 0.41(%GC) - 675/N + 16.6 x log[Na+]
(Marmur & Doty 1962, J Mol Biol 5: 109-118; Schildkraut & Lifson
1965, Biopolymers 3: 195-208)
4. Tm = 81.5 + 0.41(%GC) - 500/N + 16.6 x log([Na+]/(1.0 + 0.7 x
[Na+])) - %mismatch
(Wetmur 1991, Crit Rev Biochem Mol Biol 26: 227-259). This is the
standard formula in approximative mode of MELTING 4.3.
5. Tm = 78 + 0.7(%GC) - 500/N + 16.6 x log([Na+]/(1.0 + 0.7 x [Na+]))
- %mismatch
(Wetmur 1991, Crit Rev Biochem Mol Biol 26: 227-259). For RNA.
6. Tm = 67 + 0.8(%GC) - 500/N + 16.6 x log([Na+]/(1.0 + 0.7 x [Na+]))
- %mismatch
(Wetmur 1991, Crit Rev Biochem Mol Biol 26: 227-259). For RNA/DNA
hybrids.
7. Tm = 81.5 + 0.41(%GC) - 600/N + 16.6 x log[Na+]
Used by Primer3Plus to calculate the product Tm. Default set.
8. Tm = 77.1 + 0.41(%GC) - 528/N + 11.7 x log[Na+]
(von Ahsen et al. 2001, Clin Chem 47: 1956-1961). Recommended 'as a
tradeoff between accuracy and ease of use'.
- userset: Tuple of four values for A, B, C, and D. Usersets override
valuesets.
- Na, K, Tris, Mg, dNTPs: Concentration of the respective ions [mM]. If
any of K, Tris, Mg and dNTPs is non-zero, a 'sodium-equivalent'
concentration is calculated and used for salt correction (von Ahsen et
al., 2001).
- saltcorr: Type of salt correction (see method salt_correction).
Default=0. 0 or None means no salt correction.
- mismatch: If 'True' (default) every 'X' in the sequence is counted as
a mismatch.
"""
if saltcorr == 5:
raise ValueError("salt-correction method 5 not applicable to Tm_GC")
seq = str(seq)
if check:
seq = _check(seq, "Tm_GC")
if strict and any(x in seq for x in "KMNRYBVDH"):
raise ValueError(
"ambiguous bases B, D, H, K, M, N, R, V, Y not allowed when 'strict=True'"
)
# Ambiguous bases: add 0.5, 0.67 or 0.33% depending on G+C probability:
percent_gc = SeqUtils.gc_fraction(seq, "weighted") * 100
# gc_fraction counts X as 0.5
if mismatch:
percent_gc -= seq.count("X") * 50.0 / len(seq)
if userset:
A, B, C, D = userset
else:
if valueset == 1:
A, B, C, D = (69.3, 0.41, 650, 1)
saltcorr = 0
if valueset == 2:
A, B, C, D = (81.5, 0.41, 675, 1)
saltcorr = 0
if valueset == 3:
A, B, C, D = (81.5, 0.41, 675, 1)
saltcorr = 1
if valueset == 4:
A, B, C, D = (81.5, 0.41, 500, 1)
saltcorr = 2
if valueset == 5:
A, B, C, D = (78.0, 0.7, 500, 1)
saltcorr = 2
if valueset == 6:
A, B, C, D = (67.0, 0.8, 500, 1)
saltcorr = 2
if valueset == 7:
A, B, C, D = (81.5, 0.41, 600, 1)
saltcorr = 1
if valueset == 8:
A, B, C, D = (77.1, 0.41, 528, 1)
saltcorr = 4
if valueset < 1 or valueset > 8:
raise ValueError("allowed values for parameter 'valueset' are 1-8.")
melting_temp = A + B * percent_gc - C / len(seq)
if saltcorr:
melting_temp += salt_correction(
Na=Na, K=K, Tris=Tris, Mg=Mg, dNTPs=dNTPs, seq=seq, method=saltcorr
)
if mismatch:
melting_temp -= D * (seq.count("X") * 100.0 / len(seq))
return melting_temp |
Throw an error or a warning if there is no data for the neighbors (PRIVATE). | def _key_error(neighbors, strict):
"""Throw an error or a warning if there is no data for the neighbors (PRIVATE)."""
# We haven't found the key in the tables
if strict:
raise ValueError(f"no thermodynamic data for neighbors {neighbors!r} available")
else:
warnings.warn(
"no themodynamic data for neighbors %r available. "
"Calculation will be wrong" % neighbors,
BiopythonWarning,
) |
Return the Tm using nearest neighbor thermodynamics.
Arguments:
- seq: The primer/probe sequence as string or Biopython sequence object.
For RNA/DNA hybridizations seq must be the RNA sequence.
- c_seq: Complementary sequence. The sequence of the template/target in
3'->5' direction. c_seq is necessary for mismatch correction and
dangling-ends correction. Both corrections will automatically be
applied if mismatches or dangling ends are present. Default=None.
- shift: Shift of the primer/probe sequence on the template/target
sequence, e.g.::
shift=0 shift=1 shift= -1
Primer (seq): 5' ATGC... 5' ATGC... 5' ATGC...
Template (c_seq): 3' TACG... 3' CTACG... 3' ACG...
The shift parameter is necessary to align seq and c_seq if they have
different lengths or if they should have dangling ends. Default=0
- nn_table: Thermodynamic NN values, eight tables are implemented:
For DNA/DNA hybridizations:
- DNA_NN1: values from Breslauer et al. (1986)
- DNA_NN2: values from Sugimoto et al. (1996)
- DNA_NN3: values from Allawi & SantaLucia (1997) (default)
- DNA_NN4: values from SantaLucia & Hicks (2004)
For RNA/RNA hybridizations:
- RNA_NN1: values from Freier et al. (1986)
- RNA_NN2: values from Xia et al. (1998)
- RNA_NN3: values from Chen et al. (2012)
For RNA/DNA hybridizations:
- R_DNA_NN1: values from Sugimoto et al. (1995)
Note that ``seq`` must be the RNA sequence.
Use the module's maketable method to make a new table or to update one
of the implemented tables.
- tmm_table: Thermodynamic values for terminal mismatches.
Default: DNA_TMM1 (SantaLucia & Peyret, 2001)
- imm_table: Thermodynamic values for internal mismatches, may include
inosine mismatches. Default: DNA_IMM1 (Allawi & SantaLucia, 1997-1998;
Peyret et al., 1999; Watkins & SantaLucia, 2005)
- de_table: Thermodynamic values for dangling ends:
- DNA_DE1: for DNA. Values from Bommarito et al. (2000) (default)
- RNA_DE1: for RNA. Values from Turner & Mathews (2010)
- dnac1: Concentration of the higher concentrated strand [nM]. Typically
this will be the primer (for PCR) or the probe. Default=25.
- dnac2: Concentration of the lower concentrated strand [nM]. In PCR this
is the template strand, whose concentration is typically very low and may
be ignored (dnac2=0). In oligo/oligo hybridization experiments, dnac1
equals dnac2. Default=25.
MELTING and Primer3Plus use k = [Oligo(Total)]/4 by default. To mimic
this behaviour, you have to divide [Oligo(Total)] by 2 and assign this
concentration to dnac1 and dnac2. E.g., Total oligo concentration of
50 nM in Primer3Plus means dnac1=25, dnac2=25.
- selfcomp: Is the sequence self-complementary? Default=False. If 'True'
the primer is assumed to bind to itself, and dnac2 is not considered.
- Na, K, Tris, Mg, dNTPs: See method 'Tm_GC' for details. Defaults: Na=50,
K=0, Tris=0, Mg=0, dNTPs=0.
- saltcorr: See method 'Tm_GC'. Default=5. 0 means no salt correction. | def Tm_NN(
seq,
check=True,
strict=True,
c_seq=None,
shift=0,
nn_table=None,
tmm_table=None,
imm_table=None,
de_table=None,
dnac1=25,
dnac2=25,
selfcomp=False,
Na=50,
K=0,
Tris=0,
Mg=0,
dNTPs=0,
saltcorr=5,
):
"""Return the Tm using nearest neighbor thermodynamics.
Arguments:
- seq: The primer/probe sequence as string or Biopython sequence object.
For RNA/DNA hybridizations seq must be the RNA sequence.
- c_seq: Complementary sequence. The sequence of the template/target in
3'->5' direction. c_seq is necessary for mismatch correction and
dangling-ends correction. Both corrections will automatically be
applied if mismatches or dangling ends are present. Default=None.
- shift: Shift of the primer/probe sequence on the template/target
sequence, e.g.::
shift=0 shift=1 shift= -1
Primer (seq): 5' ATGC... 5' ATGC... 5' ATGC...
Template (c_seq): 3' TACG... 3' CTACG... 3' ACG...
The shift parameter is necessary to align seq and c_seq if they have
different lengths or if they should have dangling ends. Default=0
- nn_table: Thermodynamic NN values, eight tables are implemented:
For DNA/DNA hybridizations:
- DNA_NN1: values from Breslauer et al. (1986)
- DNA_NN2: values from Sugimoto et al. (1996)
- DNA_NN3: values from Allawi & SantaLucia (1997) (default)
- DNA_NN4: values from SantaLucia & Hicks (2004)
For RNA/RNA hybridizations:
- RNA_NN1: values from Freier et al. (1986)
- RNA_NN2: values from Xia et al. (1998)
- RNA_NN3: values from Chen et al. (2012)
For RNA/DNA hybridizations:
- R_DNA_NN1: values from Sugimoto et al. (1995)
Note that ``seq`` must be the RNA sequence.
Use the module's maketable method to make a new table or to update one
of the implemented tables.
- tmm_table: Thermodynamic values for terminal mismatches.
Default: DNA_TMM1 (SantaLucia & Peyret, 2001)
- imm_table: Thermodynamic values for internal mismatches, may include
inosine mismatches. Default: DNA_IMM1 (Allawi & SantaLucia, 1997-1998;
Peyret et al., 1999; Watkins & SantaLucia, 2005)
- de_table: Thermodynamic values for dangling ends:
- DNA_DE1: for DNA. Values from Bommarito et al. (2000) (default)
- RNA_DE1: for RNA. Values from Turner & Mathews (2010)
- dnac1: Concentration of the higher concentrated strand [nM]. Typically
this will be the primer (for PCR) or the probe. Default=25.
- dnac2: Concentration of the lower concentrated strand [nM]. In PCR this
is the template strand, whose concentration is typically very low and may
be ignored (dnac2=0). In oligo/oligo hybridization experiments, dnac1
equals dnac2. Default=25.
MELTING and Primer3Plus use k = [Oligo(Total)]/4 by default. To mimic
this behaviour, you have to divide [Oligo(Total)] by 2 and assign this
concentration to dnac1 and dnac2. E.g., Total oligo concentration of
50 nM in Primer3Plus means dnac1=25, dnac2=25.
- selfcomp: Is the sequence self-complementary? Default=False. If 'True'
the primer is assumed to bind to itself, and dnac2 is not considered.
- Na, K, Tris, Mg, dNTPs: See method 'Tm_GC' for details. Defaults: Na=50,
K=0, Tris=0, Mg=0, dNTPs=0.
- saltcorr: See method 'Tm_GC'. Default=5. 0 means no salt correction.
"""
# Set defaults
if not nn_table:
nn_table = DNA_NN3
if not tmm_table:
tmm_table = DNA_TMM1
if not imm_table:
imm_table = DNA_IMM1
if not de_table:
de_table = DNA_DE1
seq = str(seq)
if not c_seq:
# c_seq must be provided by user if dangling ends or mismatches should
# be taken into account. Otherwise take perfect complement.
c_seq = Seq.Seq(seq).complement()
c_seq = str(c_seq)
if check:
seq = _check(seq, "Tm_NN")
c_seq = _check(c_seq, "Tm_NN")
tmp_seq = seq
tmp_cseq = c_seq
delta_h = 0
delta_s = 0
d_h = 0 # Names for indexes
d_s = 1 # 0 and 1
# Dangling ends?
if shift or len(seq) != len(c_seq):
# Align both sequences using the shift parameter
if shift > 0:
tmp_seq = "." * shift + seq
if shift < 0:
tmp_cseq = "." * abs(shift) + c_seq
if len(tmp_cseq) > len(tmp_seq):
tmp_seq += (len(tmp_cseq) - len(tmp_seq)) * "."
if len(tmp_cseq) < len(tmp_seq):
tmp_cseq += (len(tmp_seq) - len(tmp_cseq)) * "."
# Remove 'over-dangling' ends
while tmp_seq.startswith("..") or tmp_cseq.startswith(".."):
tmp_seq = tmp_seq[1:]
tmp_cseq = tmp_cseq[1:]
while tmp_seq.endswith("..") or tmp_cseq.endswith(".."):
tmp_seq = tmp_seq[:-1]
tmp_cseq = tmp_cseq[:-1]
# Now for the dangling ends
if tmp_seq.startswith(".") or tmp_cseq.startswith("."):
left_de = tmp_seq[:2] + "/" + tmp_cseq[:2]
try:
delta_h += de_table[left_de][d_h]
delta_s += de_table[left_de][d_s]
except KeyError:
_key_error(left_de, strict)
tmp_seq = tmp_seq[1:]
tmp_cseq = tmp_cseq[1:]
if tmp_seq.endswith(".") or tmp_cseq.endswith("."):
right_de = tmp_cseq[-2:][::-1] + "/" + tmp_seq[-2:][::-1]
try:
delta_h += de_table[right_de][d_h]
delta_s += de_table[right_de][d_s]
except KeyError:
_key_error(right_de, strict)
tmp_seq = tmp_seq[:-1]
tmp_cseq = tmp_cseq[:-1]
# Now for terminal mismatches
left_tmm = tmp_cseq[:2][::-1] + "/" + tmp_seq[:2][::-1]
if left_tmm in tmm_table:
delta_h += tmm_table[left_tmm][d_h]
delta_s += tmm_table[left_tmm][d_s]
tmp_seq = tmp_seq[1:]
tmp_cseq = tmp_cseq[1:]
right_tmm = tmp_seq[-2:] + "/" + tmp_cseq[-2:]
if right_tmm in tmm_table:
delta_h += tmm_table[right_tmm][d_h]
delta_s += tmm_table[right_tmm][d_s]
tmp_seq = tmp_seq[:-1]
tmp_cseq = tmp_cseq[:-1]
# Now everything 'unusual' at the ends is handled and removed and we can
# look at the initiation.
# One or several of the following initiation types may apply:
# Type: General initiation value
delta_h += nn_table["init"][d_h]
delta_s += nn_table["init"][d_s]
# Type: Duplex with no (allA/T) or at least one (oneG/C) GC pair
if SeqUtils.gc_fraction(seq, "ignore") == 0:
delta_h += nn_table["init_allA/T"][d_h]
delta_s += nn_table["init_allA/T"][d_s]
else:
delta_h += nn_table["init_oneG/C"][d_h]
delta_s += nn_table["init_oneG/C"][d_s]
# Type: Penalty if 5' end is T (the 3' A check below covers a 5' T on the complementary strand)
if seq.startswith("T"):
delta_h += nn_table["init_5T/A"][d_h]
delta_s += nn_table["init_5T/A"][d_s]
if seq.endswith("A"):
delta_h += nn_table["init_5T/A"][d_h]
delta_s += nn_table["init_5T/A"][d_s]
# Type: Different values for G/C or A/T terminal basepairs
ends = seq[0] + seq[-1]
AT = ends.count("A") + ends.count("T")
GC = ends.count("G") + ends.count("C")
delta_h += nn_table["init_A/T"][d_h] * AT
delta_s += nn_table["init_A/T"][d_s] * AT
delta_h += nn_table["init_G/C"][d_h] * GC
delta_s += nn_table["init_G/C"][d_s] * GC
# Finally, the 'zipping'
for basenumber in range(len(tmp_seq) - 1):
neighbors = (
tmp_seq[basenumber : basenumber + 2]
+ "/"
+ tmp_cseq[basenumber : basenumber + 2]
)
if neighbors in imm_table:
delta_h += imm_table[neighbors][d_h]
delta_s += imm_table[neighbors][d_s]
elif neighbors[::-1] in imm_table:
delta_h += imm_table[neighbors[::-1]][d_h]
delta_s += imm_table[neighbors[::-1]][d_s]
elif neighbors in nn_table:
delta_h += nn_table[neighbors][d_h]
delta_s += nn_table[neighbors][d_s]
elif neighbors[::-1] in nn_table:
delta_h += nn_table[neighbors[::-1]][d_h]
delta_s += nn_table[neighbors[::-1]][d_s]
else:
# We haven't found the key...
_key_error(neighbors, strict)
k = (dnac1 - (dnac2 / 2.0)) * 1e-9
if selfcomp:
k = dnac1 * 1e-9
delta_h += nn_table["sym"][d_h]
delta_s += nn_table["sym"][d_s]
R = 1.987 # universal gas constant in cal/(K * mol)
if saltcorr:
corr = salt_correction(
Na=Na, K=K, Tris=Tris, Mg=Mg, dNTPs=dNTPs, method=saltcorr, seq=seq
)
if saltcorr == 5:
delta_s += corr
melting_temp = (1000 * delta_h) / (delta_s + (R * (math.log(k)))) - 273.15
if saltcorr in (1, 2, 3, 4):
melting_temp += corr
if saltcorr in (6, 7):
# Tm = 1/(1/Tm + corr)
melting_temp = 1 / (1 / (melting_temp + 273.15) + corr) - 273.15
return melting_temp |
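A minimal usage sketch (added for illustration, not from the original docstring). It assumes the thermodynamic tables such as DNA_NN4 are exposed as module attributes, as the table list above suggests; expected outputs are omitted because they depend on the chosen table and salt correction.
>>> from Bio.SeqUtils import MeltingTemp as mt
>>> tm_default = mt.Tm_NN("AGTCTGGGACGGCGCGGCAATCGCA")
>>> tm_pcr = mt.Tm_NN("AGTCTGGGACGGCGCGGCAATCGCA",
...                   nn_table=mt.DNA_NN4, dnac1=250, dnac2=0)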
Calculate G+C percentage in seq (float between 0 and 1).
Copes with mixed case sequences. Ambiguous nucleotides in this context are
those different from ATCGSW (S is G or C, and W is A or T).
If ambiguous equals "remove" (default), will only count GCS and will only
include ACTGSW when calculating the sequence length. Equivalent to removing
all characters in the set BDHKMNRVXY before calculating the GC content, as
each of these ambiguous nucleotides can either be in (A,T) or in (C,G).
If ambiguous equals "ignore", it will treat only unambiguous nucleotides (GCS)
as counting towards the GC percentage, but will include all ambiguous and
unambiguous nucleotides when calculating the sequence length.
If ambiguous equals "weighted", will use a "mean" value when counting the
ambiguous characters, for example, G and C will be counted as 1, N and X will
be counted as 0.5, D will be counted as 0.33 etc. See Bio.SeqUtils._gc_values
for a full list.
Will raise a ValueError for any other value of the ambiguous parameter.
>>> from Bio.SeqUtils import gc_fraction
>>> seq = "ACTG"
>>> print(f"GC content of {seq} : {gc_fraction(seq):.2f}")
GC content of ACTG : 0.50
S and W are not ambiguous for the purposes of calculating the GC content: S counts as G or C, and W as A or T.
>>> seq = "ACTGSSSS"
>>> gc = gc_fraction(seq, "remove")
>>> print(f"GC content of {seq} : {gc:.2f}")
GC content of ACTGSSSS : 0.75
>>> gc = gc_fraction(seq, "ignore")
>>> print(f"GC content of {seq} : {gc:.2f}")
GC content of ACTGSSSS : 0.75
>>> gc = gc_fraction(seq, "weighted")
>>> print(f"GC content with ambiguous counting: {gc:.2f}")
GC content with ambiguous counting: 0.75
Some examples with ambiguous nucleotides.
>>> seq = "ACTGN"
>>> gc = gc_fraction(seq, "ignore")
>>> print(f"GC content of {seq} : {gc:.2f}")
GC content of ACTGN : 0.40
>>> gc = gc_fraction(seq, "weighted")
>>> print(f"GC content with ambiguous counting: {gc:.2f}")
GC content with ambiguous counting: 0.50
>>> gc = gc_fraction(seq, "remove")
>>> print(f"GC content with ambiguous removing: {gc:.2f}")
GC content with ambiguous removing: 0.50
Ambiguous nucleotides are also removed from the length of the sequence.
>>> seq = "GDVV"
>>> gc = gc_fraction(seq, "ignore")
>>> print(f"GC content of {seq} : {gc:.2f}")
GC content of GDVV : 0.25
>>> gc = gc_fraction(seq, "weighted")
>>> print(f"GC content with ambiguous counting: {gc:.4f}")
GC content with ambiguous counting: 0.6667
>>> gc = gc_fraction(seq, "remove")
>>> print(f"GC content with ambiguous removing: {gc:.2f}")
GC content with ambiguous removing: 1.00
Note that this will return zero for an empty sequence. | def gc_fraction(seq, ambiguous="remove"):
"""Calculate G+C percentage in seq (float between 0 and 1).
Copes with mixed case sequences. Ambiguous nucleotides in this context are
those different from ATCGSW (S is G or C, and W is A or T).
If ambiguous equals "remove" (default), will only count GCS and will only
include ACTGSW when calculating the sequence length. Equivalent to removing
all characters in the set BDHKMNRVXY before calculating the GC content, as
each of these ambiguous nucleotides can either be in (A,T) or in (C,G).
If ambiguous equals "ignore", it will treat only unambiguous nucleotides (GCS)
as counting towards the GC percentage, but will include all ambiguous and
unambiguous nucleotides when calculating the sequence length.
If ambiguous equals "weighted", will use a "mean" value when counting the
ambiguous characters, for example, G and C will be counted as 1, N and X will
be counted as 0.5, D will be counted as 0.33 etc. See Bio.SeqUtils._gc_values
for a full list.
Will raise a ValueError for any other value of the ambiguous parameter.
>>> from Bio.SeqUtils import gc_fraction
>>> seq = "ACTG"
>>> print(f"GC content of {seq} : {gc_fraction(seq):.2f}")
GC content of ACTG : 0.50
S and W are not ambiguous for the purposes of calculating the GC content: S counts as G or C, and W as A or T.
>>> seq = "ACTGSSSS"
>>> gc = gc_fraction(seq, "remove")
>>> print(f"GC content of {seq} : {gc:.2f}")
GC content of ACTGSSSS : 0.75
>>> gc = gc_fraction(seq, "ignore")
>>> print(f"GC content of {seq} : {gc:.2f}")
GC content of ACTGSSSS : 0.75
>>> gc = gc_fraction(seq, "weighted")
>>> print(f"GC content with ambiguous counting: {gc:.2f}")
GC content with ambiguous counting: 0.75
Some examples with ambiguous nucleotides.
>>> seq = "ACTGN"
>>> gc = gc_fraction(seq, "ignore")
>>> print(f"GC content of {seq} : {gc:.2f}")
GC content of ACTGN : 0.40
>>> gc = gc_fraction(seq, "weighted")
>>> print(f"GC content with ambiguous counting: {gc:.2f}")
GC content with ambiguous counting: 0.50
>>> gc = gc_fraction(seq, "remove")
>>> print(f"GC content with ambiguous removing: {gc:.2f}")
GC content with ambiguous removing: 0.50
Ambiguous nucleotides are also removed from the length of the sequence.
>>> seq = "GDVV"
>>> gc = gc_fraction(seq, "ignore")
>>> print(f"GC content of {seq} : {gc:.2f}")
GC content of GDVV : 0.25
>>> gc = gc_fraction(seq, "weighted")
>>> print(f"GC content with ambiguous counting: {gc:.4f}")
GC content with ambiguous counting: 0.6667
>>> gc = gc_fraction(seq, "remove")
>>> print(f"GC content with ambiguous removing: {gc:.2f}")
GC content with ambiguous removing: 1.00
Note that this will return zero for an empty sequence.
"""
if ambiguous not in ("weighted", "remove", "ignore"):
raise ValueError(f"ambiguous value '{ambiguous}' not recognized")
gc = sum(seq.count(x) for x in "CGScgs")
if ambiguous == "remove":
length = gc + sum(seq.count(x) for x in "ATWatw")
else:
length = len(seq)
if ambiguous == "weighted":
gc += sum(
(seq.count(x) + seq.count(x.lower())) * _gc_values[x] for x in "BDHKMNRVXY"
)
if length == 0:
return 0
return gc / length |
Calculate G+C content: total, for first, second and third positions.
Returns a tuple of four floats (percentages between 0 and 100) for the
entire sequence, and the three codon positions. e.g.
>>> from Bio.SeqUtils import GC123
>>> GC123("ACTGTN")
(40.0, 50.0, 50.0, 0.0)
Copes with mixed case sequences, but does NOT deal with ambiguous
nucleotides. | def GC123(seq):
"""Calculate G+C content: total, for first, second and third positions.
Returns a tuple of four floats (percentages between 0 and 100) for the
entire sequence, and the three codon positions. e.g.
>>> from Bio.SeqUtils import GC123
>>> GC123("ACTGTN")
(40.0, 50.0, 50.0, 0.0)
Copes with mixed case sequences, but does NOT deal with ambiguous
nucleotides.
"""
d = {}
for nt in ["A", "T", "G", "C"]:
d[nt] = [0, 0, 0]
for i in range(0, len(seq), 3):
codon = seq[i : i + 3]
if len(codon) < 3:
codon += "  "  # pad with two spaces so codon[pos] below never raises IndexError
for pos in range(3):
for nt in ["A", "T", "G", "C"]:
if codon[pos] == nt or codon[pos] == nt.lower():
d[nt][pos] += 1
gc = {}
gcall = 0
nall = 0
for i in range(3):
try:
n = d["G"][i] + d["C"][i] + d["T"][i] + d["A"][i]
gc[i] = (d["G"][i] + d["C"][i]) * 100.0 / n
except ZeroDivisionError:  # no unambiguous nucleotides at this position
gc[i] = 0
gcall = gcall + d["G"][i] + d["C"][i]
nall = nall + n
gcall = 100.0 * gcall / nall
return gcall, gc[0], gc[1], gc[2] |
Calculate GC skew (G-C)/(G+C) for multiple windows along the sequence.
Returns a list of ratios (floats), controlled by the length of the sequence
and the size of the window.
Returns 0 for windows without any G/C by handling zero division errors.
Does NOT look at any ambiguous nucleotides. | def GC_skew(seq, window=100):
"""Calculate GC skew (G-C)/(G+C) for multiple windows along the sequence.
Returns a list of ratios (floats), controlled by the length of the sequence
and the size of the window.
Returns 0 for windows without any G/C by handling zero division errors.
Does NOT look at any ambiguous nucleotides.
"""
# 8/19/03: Iddo: added lowercase
values = []
for i in range(0, len(seq), window):
s = seq[i : i + window]
g = s.count("G") + s.count("g")
c = s.count("C") + s.count("c")
try:
skew = (g - c) / (g + c)
except ZeroDivisionError:
skew = 0.0
values.append(skew)
return values |
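A short usage sketch (added for illustration): one ratio is returned per window, so a 300 nt sequence with a window of 100 yields three values. The toy sequence is chosen so each window holds 75 G and 25 C, giving a skew of (75 - 25)/100 = 0.5.
>>> from Bio.SeqUtils import GC_skew
>>> GC_skew("GGGC" * 75, window=100)
[0.5, 0.5, 0.5]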
Calculate and plot normal and accumulated GC skew (GRAPHICS !!!). | def xGC_skew(seq, window=1000, zoom=100, r=300, px=100, py=100):
"""Calculate and plot normal and accumulated GC skew (GRAPHICS !!!)."""
import tkinter
yscroll = tkinter.Scrollbar(orient=tkinter.VERTICAL)
xscroll = tkinter.Scrollbar(orient=tkinter.HORIZONTAL)
canvas = tkinter.Canvas(
yscrollcommand=yscroll.set, xscrollcommand=xscroll.set, background="white"
)
win = canvas.winfo_toplevel()
win.geometry("700x700")
yscroll.config(command=canvas.yview)
xscroll.config(command=canvas.xview)
yscroll.pack(side=tkinter.RIGHT, fill=tkinter.Y)
xscroll.pack(side=tkinter.BOTTOM, fill=tkinter.X)
canvas.pack(fill=tkinter.BOTH, side=tkinter.LEFT, expand=1)
canvas.update()
X0, Y0 = r + px, r + py
x1, x2, y1, y2 = X0 - r, X0 + r, Y0 - r, Y0 + r
ty = Y0
canvas.create_text(X0, ty, text="%s...%s (%d nt)" % (seq[:7], seq[-7:], len(seq)))
ty += 20
canvas.create_text(X0, ty, text=f"GC {gc_fraction(seq):3.2f}%")
ty += 20
canvas.create_text(X0, ty, text="GC Skew", fill="blue")
ty += 20
canvas.create_text(X0, ty, text="Accumulated GC Skew", fill="magenta")
ty += 20
canvas.create_oval(x1, y1, x2, y2)
acc = 0
start = 0
for gc in GC_skew(seq, window):
r1 = r
acc += gc
# GC skew
alpha = pi - (2 * pi * start) / len(seq)
r2 = r1 - gc * zoom
x1 = X0 + r1 * sin(alpha)
y1 = Y0 + r1 * cos(alpha)
x2 = X0 + r2 * sin(alpha)
y2 = Y0 + r2 * cos(alpha)
canvas.create_line(x1, y1, x2, y2, fill="blue")
# accumulated GC skew
r1 = r - 50
r2 = r1 - acc
x1 = X0 + r1 * sin(alpha)
y1 = Y0 + r1 * cos(alpha)
x2 = X0 + r2 * sin(alpha)
y2 = Y0 + r2 * cos(alpha)
canvas.create_line(x1, y1, x2, y2, fill="magenta")
canvas.update()
start += window
canvas.configure(scrollregion=canvas.bbox(tkinter.ALL)) |
Search for a DNA subseq in seq, return list of [subseq, positions].
Use ambiguous values (like N = A or T or C or G, R = A or G etc.),
searches only on forward strand. | def nt_search(seq, subseq):
"""Search for a DNA subseq in seq, return list of [subseq, positions].
Use ambiguous values (like N = A or T or C or G, R = A or G etc.),
searches only on forward strand.
"""
pattern = ""
for nt in subseq:
value = IUPACData.ambiguous_dna_values[nt]
if len(value) == 1:
pattern += value
else:
pattern += f"[{value}]"
pos = -1
result = [pattern]
while True:
pos += 1
s = seq[pos:]
m = re.search(pattern, s)
if not m:
break
pos += int(m.start(0))
result.append(pos)
return result |
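A brief example of the return format (added for illustration): the first list element is the regular expression built from the ambiguous subsequence ('R' expands to [AG]), followed by the 0-based position of each hit.
>>> from Bio.SeqUtils import nt_search
>>> nt_search("ACTGGGTACGT", "GRG")
['G[AG]G', 3]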
Convert protein sequence from one-letter to three-letter code.
The single required input argument 'seq' should be a protein sequence using
single letter codes, either as a Python string or as a Seq or MutableSeq
object.
This function returns the amino acid sequence as a string using the three
letter amino acid codes. Output follows the IUPAC standard (including
ambiguous characters B for "Asx", J for "Xle" and X for "Xaa", and also U
for "Sel" and O for "Pyl") plus "Ter" for a terminator given as an
asterisk. Any unknown character (including possible gap characters)
is changed into 'Xaa' by default.
e.g.
>>> from Bio.SeqUtils import seq3
>>> seq3("MAIVMGRWKGAR*")
'MetAlaIleValMetGlyArgTrpLysGlyAlaArgTer'
You can set a custom translation of the codon termination code using the
dictionary "custom_map" argument (which defaults to {'*': 'Ter'}), e.g.
>>> seq3("MAIVMGRWKGAR*", custom_map={"*": "***"})
'MetAlaIleValMetGlyArgTrpLysGlyAlaArg***'
You can also set a custom translation for non-amino acid characters, such
as '-', using the "undef_code" argument, e.g.
>>> seq3("MAIVMGRWKGA--R*", undef_code='---')
'MetAlaIleValMetGlyArgTrpLysGlyAla------ArgTer'
If not given, "undef_code" defaults to "Xaa", e.g.
>>> seq3("MAIVMGRWKGA--R*")
'MetAlaIleValMetGlyArgTrpLysGlyAlaXaaXaaArgTer'
This function was inspired by BioPerl's seq3. | def seq3(seq, custom_map=None, undef_code="Xaa"):
"""Convert protein sequence from one-letter to three-letter code.
The single required input argument 'seq' should be a protein sequence using
single letter codes, either as a Python string or as a Seq or MutableSeq
object.
This function returns the amino acid sequence as a string using the three
letter amino acid codes. Output follows the IUPAC standard (including
ambiguous characters B for "Asx", J for "Xle" and X for "Xaa", and also U
for "Sel" and O for "Pyl") plus "Ter" for a terminator given as an
asterisk. Any unknown character (including possible gap characters)
is changed into 'Xaa' by default.
e.g.
>>> from Bio.SeqUtils import seq3
>>> seq3("MAIVMGRWKGAR*")
'MetAlaIleValMetGlyArgTrpLysGlyAlaArgTer'
You can set a custom translation of the codon termination code using the
dictionary "custom_map" argument (which defaults to {'*': 'Ter'}), e.g.
>>> seq3("MAIVMGRWKGAR*", custom_map={"*": "***"})
'MetAlaIleValMetGlyArgTrpLysGlyAlaArg***'
You can also set a custom translation for non-amino acid characters, such
as '-', using the "undef_code" argument, e.g.
>>> seq3("MAIVMGRWKGA--R*", undef_code='---')
'MetAlaIleValMetGlyArgTrpLysGlyAla------ArgTer'
If not given, "undef_code" defaults to "Xaa", e.g.
>>> seq3("MAIVMGRWKGA--R*")
'MetAlaIleValMetGlyArgTrpLysGlyAlaXaaXaaArgTer'
This function was inspired by BioPerl's seq3.
"""
if custom_map is None:
custom_map = {"*": "Ter"}
# not doing .update() on IUPACData dict with custom_map dict
# to preserve its initial state (may be imported in other modules)
threecode = dict(
list(IUPACData.protein_letters_1to3_extended.items()) + list(custom_map.items())
)
# We use a default of 'Xaa' for undefined letters
# Note this will map '-' to 'Xaa' which may be undesirable!
return "".join(threecode.get(aa, undef_code) for aa in seq) |
Convert protein sequence from three-letter to one-letter code.
The single required input argument 'seq' should be a protein sequence
using three-letter codes, either as a Python string or as a Seq or
MutableSeq object.
This function returns the amino acid sequence as a string using the one
letter amino acid codes. Output follows the IUPAC standard (including
ambiguous characters "B" for "Asx", "J" for "Xle", "X" for "Xaa", "U" for
"Sel", and "O" for "Pyl") plus "*" for a terminator given the "Ter" code.
Any unknown character (including possible gap characters) is changed
into '-' by default.
e.g.
>>> from Bio.SeqUtils import seq1
>>> seq1("MetAlaIleValMetGlyArgTrpLysGlyAlaArgTer")
'MAIVMGRWKGAR*'
The input is case insensitive, e.g.
>>> from Bio.SeqUtils import seq1
>>> seq1("METalaIlEValMetGLYArgtRplysGlyAlaARGTer")
'MAIVMGRWKGAR*'
You can set a custom translation of the codon termination code using the
dictionary "custom_map" argument (defaulting to {'Ter': '*'}), e.g.
>>> seq1("MetAlaIleValMetGlyArgTrpLysGlyAla***", custom_map={"***": "*"})
'MAIVMGRWKGA*'
You can also set a custom translation for non-amino acid characters, such
as '-', using the "undef_code" argument, e.g.
>>> seq1("MetAlaIleValMetGlyArgTrpLysGlyAla------ArgTer", undef_code='?')
'MAIVMGRWKGA??R*'
If not given, "undef_code" defaults to "X", e.g.
>>> seq1("MetAlaIleValMetGlyArgTrpLysGlyAla------ArgTer")
'MAIVMGRWKGAXXR*' | def seq1(seq, custom_map=None, undef_code="X"):
"""Convert protein sequence from three-letter to one-letter code.
The single required input argument 'seq' should be a protein sequence
using three-letter codes, either as a Python string or as a Seq or
MutableSeq object.
This function returns the amino acid sequence as a string using the one
letter amino acid codes. Output follows the IUPAC standard (including
ambiguous characters "B" for "Asx", "J" for "Xle", "X" for "Xaa", "U" for
"Sel", and "O" for "Pyl") plus "*" for a terminator given the "Ter" code.
Any unknown character (including possible gap characters) is changed
into '-' by default.
e.g.
>>> from Bio.SeqUtils import seq1
>>> seq1("MetAlaIleValMetGlyArgTrpLysGlyAlaArgTer")
'MAIVMGRWKGAR*'
The input is case insensitive, e.g.
>>> from Bio.SeqUtils import seq1
>>> seq1("METalaIlEValMetGLYArgtRplysGlyAlaARGTer")
'MAIVMGRWKGAR*'
You can set a custom translation of the codon termination code using the
dictionary "custom_map" argument (defaulting to {'Ter': '*'}), e.g.
>>> seq1("MetAlaIleValMetGlyArgTrpLysGlyAla***", custom_map={"***": "*"})
'MAIVMGRWKGA*'
You can also set a custom translation for non-amino acid characters, such
as '-', using the "undef_code" argument, e.g.
>>> seq1("MetAlaIleValMetGlyArgTrpLysGlyAla------ArgTer", undef_code='?')
'MAIVMGRWKGA??R*'
If not given, "undef_code" defaults to "X", e.g.
>>> seq1("MetAlaIleValMetGlyArgTrpLysGlyAla------ArgTer")
'MAIVMGRWKGAXXR*'
"""
if custom_map is None:
custom_map = {"Ter": "*"}
# reverse map of threecode
# upper() on all keys to enable caps-insensitive input seq handling
onecode = {k.upper(): v for k, v in IUPACData.protein_letters_3to1_extended.items()}
# add the given termination codon code and custom maps
onecode.update((k.upper(), v) for k, v in custom_map.items())
seqlist = [seq[3 * i : 3 * (i + 1)] for i in range(len(seq) // 3)]
return "".join(onecode.get(aa.upper(), undef_code) for aa in seqlist) |
Calculate the molecular mass of DNA, RNA or protein sequences as float.
Only unambiguous letters are allowed. Nucleotide sequences are assumed to
have a 5' phosphate.
Arguments:
- seq: string, Seq, or SeqRecord object.
- seq_type: The default is to assume DNA; override this with a string
"DNA", "RNA", or "protein".
- double_stranded: Calculate the mass for the double stranded molecule?
- circular: Is the molecule circular (has no ends)?
- monoisotopic: Use the monoisotopic mass tables?
>>> print("%0.2f" % molecular_weight("AGC"))
949.61
>>> print("%0.2f" % molecular_weight(Seq("AGC")))
949.61
However, it is better to be explicit - for example with strings:
>>> print("%0.2f" % molecular_weight("AGC", "DNA"))
949.61
>>> print("%0.2f" % molecular_weight("AGC", "RNA"))
997.61
>>> print("%0.2f" % molecular_weight("AGC", "protein"))
249.29 | def molecular_weight(
seq, seq_type="DNA", double_stranded=False, circular=False, monoisotopic=False
):
"""Calculate the molecular mass of DNA, RNA or protein sequences as float.
Only unambiguous letters are allowed. Nucleotide sequences are assumed to
have a 5' phosphate.
Arguments:
- seq: string, Seq, or SeqRecord object.
- seq_type: The default is to assume DNA; override this with a string
"DNA", "RNA", or "protein".
- double_stranded: Calculate the mass for the double stranded molecule?
- circular: Is the molecule circular (has no ends)?
- monoisotopic: Use the monoisotopic mass tables?
>>> print("%0.2f" % molecular_weight("AGC"))
949.61
>>> print("%0.2f" % molecular_weight(Seq("AGC")))
949.61
However, it is better to be explicit - for example with strings:
>>> print("%0.2f" % molecular_weight("AGC", "DNA"))
949.61
>>> print("%0.2f" % molecular_weight("AGC", "RNA"))
997.61
>>> print("%0.2f" % molecular_weight("AGC", "protein"))
249.29
"""
try:
seq = seq.seq
except AttributeError: # not a SeqRecord object
pass
seq = "".join(str(seq).split()).upper() # Do the minimum formatting
if seq_type == "DNA":
if monoisotopic:
weight_table = IUPACData.monoisotopic_unambiguous_dna_weights
else:
weight_table = IUPACData.unambiguous_dna_weights
elif seq_type == "RNA":
if monoisotopic:
weight_table = IUPACData.monoisotopic_unambiguous_rna_weights
else:
weight_table = IUPACData.unambiguous_rna_weights
elif seq_type == "protein":
if monoisotopic:
weight_table = IUPACData.monoisotopic_protein_weights
else:
weight_table = IUPACData.protein_weights
else:
raise ValueError(f"Allowed seq_types are DNA, RNA or protein, not {seq_type!r}")
if monoisotopic:
water = 18.010565
else:
water = 18.0153
try:
weight = sum(weight_table[x] for x in seq) - (len(seq) - 1) * water
if circular:
weight -= water
except KeyError as e:
raise ValueError(
f"'{e}' is not a valid unambiguous letter for {seq_type}"
) from None
if double_stranded:
if seq_type == "protein":
raise ValueError("protein sequences cannot be double-stranded")
elif seq_type == "DNA":
seq = complement(seq)
elif seq_type == "RNA":
seq = complement_rna(seq)
weight += sum(weight_table[x] for x in seq) - (len(seq) - 1) * water
if circular:
weight -= water
return weight |
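A sketch of the remaining options (added for illustration; expected values omitted, as they depend on the weight tables): double_stranded adds the mass of the complementary strand, and circular removes one water per strand.
>>> from Bio.SeqUtils import molecular_weight
>>> ds_linear = molecular_weight("AGC", "DNA", double_stranded=True)
>>> ds_circular = molecular_weight("AGC", "DNA", double_stranded=True,
...                                circular=True)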
Return pretty string showing the 6 frame translations and GC content.
Nice looking 6 frame translation with GC content - code from xbbtools
similar to DNA Striders six-frame translation
>>> from Bio.SeqUtils import six_frame_translations
>>> print(six_frame_translations("AUGGCCAUUGUAAUGGGCCGCUGA"))
GC_Frame: a:5 t:0 g:8 c:5
Sequence: auggccauug ... gggccgcuga, 24 nt, 54.17 %GC
<BLANKLINE>
<BLANKLINE>
1/1
G H C N G P L
W P L * W A A
M A I V M G R *
auggccauuguaaugggccgcuga 54 %
uaccgguaacauuacccggcgacu
A M T I P R Q
H G N Y H A A S
P W Q L P G S
<BLANKLINE>
<BLANKLINE> | def six_frame_translations(seq, genetic_code=1):
"""Return pretty string showing the 6 frame translations and GC content.
Nice looking 6 frame translation with GC content - code from xbbtools
similar to DNA Striders six-frame translation
>>> from Bio.SeqUtils import six_frame_translations
>>> print(six_frame_translations("AUGGCCAUUGUAAUGGGCCGCUGA"))
GC_Frame: a:5 t:0 g:8 c:5
Sequence: auggccauug ... gggccgcuga, 24 nt, 54.17 %GC
<BLANKLINE>
<BLANKLINE>
1/1
G H C N G P L
W P L * W A A
M A I V M G R *
auggccauuguaaugggccgcuga 54 %
uaccgguaacauuacccggcgacu
A M T I P R Q
H G N Y H A A S
P W Q L P G S
<BLANKLINE>
<BLANKLINE>
""" # noqa for pep8 W291 trailing whitespace
from Bio.Seq import reverse_complement, reverse_complement_rna
if "u" in seq.lower():
anti = reverse_complement_rna(seq)
else:
anti = reverse_complement(seq)
comp = anti[::-1]
length = len(seq)
frames = {}
for i in range(3):
fragment_length = 3 * ((length - i) // 3)
frames[i + 1] = translate(seq[i : i + fragment_length], genetic_code)
frames[-(i + 1)] = translate(anti[i : i + fragment_length], genetic_code)[::-1]
# create header
if length > 20:
short = f"{seq[:10]} ... {seq[-10:]}"
else:
short = seq
header = "GC_Frame:"
for nt in ["a", "t", "g", "c"]:
header += " %s:%d" % (nt, seq.count(nt.upper()))
gc = 100 * gc_fraction(seq, ambiguous="ignore")
header += "\nSequence: %s, %d nt, %0.2f %%GC\n\n\n" % (
short.lower(),
length,
gc,
)
res = header
for i in range(0, length, 60):
subseq = seq[i : i + 60]
csubseq = comp[i : i + 60]
p = i // 3
res += "%d/%d\n" % (i + 1, i / 3 + 1)
res += " " + " ".join(frames[3][p : p + 20]) + "\n"
res += " " + " ".join(frames[2][p : p + 20]) + "\n"
res += " ".join(frames[1][p : p + 20]) + "\n"
# seq
res += subseq.lower() + "%5d %%\n" % int(gc)
res += csubseq.lower() + "\n"
# - frames
res += " ".join(frames[-2][p : p + 20]) + "\n"
res += " " + " ".join(frames[-1][p : p + 20]) + "\n"
res += " " + " ".join(frames[-3][p : p + 20]) + "\n\n"
return res |
Parse the keyword list from file handle.
Returns a generator object which yields keyword entries as
Bio.SwissProt.KeyWList.Record() object. | def parse(handle):
"""Parse the keyword list from file handle.
Returns a generator object which yields keyword entries as
Bio.SwissProt.KeyWList.Record() object.
"""
record = Record()
# First, skip the header - look for start of a record
for line in handle:
if line.startswith("ID "):
# Looks like there was no header
record["ID"] = line[5:].strip()
break
if line.startswith("IC "):
# Looks like there was no header
record["IC"] = line[5:].strip()
break
# Now parse the records
for line in handle:
if line.startswith("-------------------------------------"):
# We have reached the footer
break
key = line[:2]
if key == "//":
record["DE"] = " ".join(record["DE"])
record["SY"] = " ".join(record["SY"])
yield record
record = Record()
elif line[2:5] == "   ":
value = line[5:].strip()
if key in ("ID", "IC", "AC", "CA"):
record[key] = value
elif key in ("DE", "SY", "GO", "HI", "WW"):
record[key].append(value)
else:
raise ValueError(f"Cannot parse line '{line.strip()}'")
# Read the footer and throw it away
for line in handle:
pass |
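A usage sketch (added for illustration); the filename "keywlist.txt" is hypothetical and stands for a local copy of the UniProt keyword list. Keyword entries carry an "ID" key, category entries an "IC" key, and both carry "AC".
>>> from Bio.SwissProt import KeyWList
>>> with open("keywlist.txt") as handle:  # hypothetical path
...     for record in KeyWList.parse(handle):
...         print(record["AC"], record.get("ID") or record.get("IC"))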
Read multiple SwissProt records from file.
Argument source is a file-like object or a path to a file.
Returns a generator object which yields Bio.SwissProt.Record() objects. | def parse(source):
"""Read multiple SwissProt records from file.
Argument source is a file-like object or a path to a file.
Returns a generator object which yields Bio.SwissProt.Record() objects.
"""
handle = _open(source)
try:
while True:
record = _read(handle)
if not record:
return
yield record
finally:
if handle is not source:
handle.close() |
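A usage sketch (added for illustration); the path "uniprot_sprot.dat" is hypothetical and stands for any UniProt/SwissProt flat file. Since source may be a path, no explicit handle is needed.
>>> from Bio import SwissProt
>>> for record in SwissProt.parse("uniprot_sprot.dat"):  # hypothetical path
...     print(record.entry_name, len(record.sequence))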
Read one SwissProt record from file.
Argument source is a file-like object or a path to a file.
Returns a Record() object. | def read(source):
"""Read one SwissProt record from file.
Argument source is a file-like object or a path to a file.
Returns a Record() object.
"""
handle = _open(source)
try:
record = _read(handle)
if not record:
raise ValueError("No SwissProt record found")
# We should have reached the end of the record by now.
# Try to read one more line to be sure:
try:
next(handle)
except StopIteration:
return record
raise ValueError("More than one SwissProt record found")
finally:
if handle is not source:
handle.close() |
Query a TogoWS URL for a plain text list of values (PRIVATE). | def _get_fields(url):
"""Query a TogoWS URL for a plain text list of values (PRIVATE)."""
handle = _open(url)
fields = handle.read().strip().split()
handle.close()
return fields |
Call TogoWS 'entry' to fetch a record.
Arguments:
- db - database (string), see list below.
- id - identifier (string) or a list of identifiers (either as a list of
strings or a single string with comma separators).
- format - return data file format (string), options depend on the database
e.g. "xml", "json", "gff", "fasta", "ttl" (RDF Turtle)
- field - specific field from within the database record (string)
e.g. "au" or "authors" for pubmed.
At the time of writing, this includes the following::
KEGG: compound, drug, enzyme, genes, glycan, orthology, reaction,
module, pathway
DDBj: ddbj, dad, pdb
NCBI: nuccore, nucest, nucgss, nucleotide, protein, gene, onim,
homologue, snp, mesh, pubmed
EBI: embl, uniprot, uniparc, uniref100, uniref90, uniref50
For the current list, please see http://togows.dbcls.jp/entry/
This function is essentially equivalent to the NCBI Entrez service
EFetch, available in Biopython as Bio.Entrez.efetch(...), but that
does not offer field extraction. | def entry(db, id, format=None, field=None):
"""Call TogoWS 'entry' to fetch a record.
Arguments:
- db - database (string), see list below.
- id - identifier (string) or a list of identifiers (either as a list of
strings or a single string with comma separators).
- format - return data file format (string), options depend on the database
e.g. "xml", "json", "gff", "fasta", "ttl" (RDF Turtle)
- field - specific field from within the database record (string)
e.g. "au" or "authors" for pubmed.
At the time of writing, this includes the following::
KEGG: compound, drug, enzyme, genes, glycan, orthology, reaction,
module, pathway
DDBj: ddbj, dad, pdb
NCBI: nuccore, nucest, nucgss, nucleotide, protein, gene, onim,
homologue, snp, mesh, pubmed
EBI: embl, uniprot, uniparc, uniref100, uniref90, uniref50
For the current list, please see http://togows.dbcls.jp/entry/
This function is essentially equivalent to the NCBI Entrez service
EFetch, available in Biopython as Bio.Entrez.efetch(...), but that
does not offer field extraction.
"""
global _entry_db_names, _entry_db_fields, _entry_db_formats
if _entry_db_names is None:
_entry_db_names = _get_entry_dbs()
if db not in _entry_db_names:
raise ValueError(
f"TogoWS entry fetch does not officially support database '{db}'."
)
if field:
try:
fields = _entry_db_fields[db]
except KeyError:
fields = _get_entry_fields(db)
_entry_db_fields[db] = fields
if db == "pubmed" and field == "ti" and "title" in fields:
# Backwards compatibility fix for TogoWS change Nov/Dec 2013
field = "title"
import warnings
warnings.warn(
"TogoWS dropped 'pubmed' field alias 'ti', please use 'title' instead."
)
if field not in fields:
raise ValueError(
"TogoWS entry fetch does not explicitly support "
"field '%s' for database '%s'. Only: %s"
% (field, db, ", ".join(sorted(fields)))
)
if format:
try:
formats = _entry_db_formats[db]
except KeyError:
formats = _get_entry_formats(db)
_entry_db_formats[db] = formats
if format not in formats:
raise ValueError(
"TogoWS entry fetch does not explicitly support "
"format '%s' for database '%s'. Only: %s"
% (format, db, ", ".join(sorted(formats)))
)
if isinstance(id, list):
id = ",".join(id)
url = _BASE_URL + f"/entry/{db}/{quote(id)}"
if field:
url += "/" + field
if format:
url += "." + format
return _open(url) |
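A usage sketch (requires network access; the PubMed identifier is arbitrary). The 'title' field is used here since the code above confirms it exists for the pubmed database.
>>> from Bio import TogoWS
>>> handle = TogoWS.entry("pubmed", "16381885", field="title")
>>> title = handle.read().strip()
>>> handle.close()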
Call TogoWS search count to see how many matches a search gives.
Arguments:
- db - database (string), see http://togows.dbcls.jp/search
- query - search term (string)
You could then use the count to download a large set of search results in
batches using the offset and limit options to Bio.TogoWS.search(). In
general however the Bio.TogoWS.search_iter() function is simpler to use. | def search_count(db, query):
"""Call TogoWS search count to see how many matches a search gives.
Arguments:
- db - database (string), see http://togows.dbcls.jp/search
- query - search term (string)
You could then use the count to download a large set of search results in
batches using the offset and limit options to Bio.TogoWS.search(). In
general however the Bio.TogoWS.search_iter() function is simpler to use.
"""
global _search_db_names
if _search_db_names is None:
_search_db_names = _get_fields(_BASE_URL + "/search")
if db not in _search_db_names:
# TODO - Make this a ValueError? Right now despite the HTML website
# claiming to, the "gene" or "ncbi-gene" don't work and are not listed.
import warnings
warnings.warn(
"TogoWS search does not officially support database '%s'. "
"See %s/search/ for options." % (db, _BASE_URL)
)
url = _BASE_URL + f"/search/{db}/{quote(query)}/count"
handle = _open(url)
data = handle.read()
handle.close()
if not data:
raise ValueError(f"TogoWS returned no data from URL {url}")
try:
return int(data.strip())
except ValueError:
raise ValueError(f"Expected an integer from URL {url}, got: {data!r}") from None |
Call TogoWS search iterating over the results (generator function).
Arguments:
- db - database (string), see http://togows.dbcls.jp/search
- query - search term (string)
- limit - optional upper bound on number of search results
- batch - number of search results to pull back each time we talk to
TogoWS (currently limited to 100).
You would use this function within a for loop, e.g.
>>> from Bio import TogoWS
>>> for id in TogoWS.search_iter("pubmed", "diabetes+human", limit=10):
... print("PubMed ID: %s" %id) # maybe fetch data with entry?
PubMed ID: ...
Internally this first calls the Bio.TogoWS.search_count() and then
uses Bio.TogoWS.search() to get the results in batches. | def search_iter(db, query, limit=None, batch=100):
"""Call TogoWS search iterating over the results (generator function).
Arguments:
- db - database (string), see http://togows.dbcls.jp/search
- query - search term (string)
- limit - optional upper bound on number of search results
- batch - number of search results to pull back each time we talk to
TogoWS (currently limited to 100).
You would use this function within a for loop, e.g.
>>> from Bio import TogoWS
>>> for id in TogoWS.search_iter("pubmed", "diabetes+human", limit=10):
... print("PubMed ID: %s" %id) # maybe fetch data with entry?
PubMed ID: ...
Internally this first calls the Bio.TogoWS.search_count() and then
uses Bio.TogoWS.search() to get the results in batches.
"""
count = search_count(db, query)
if not count:
return
# NOTE - We leave it to TogoWS to enforce any upper bound on each
# batch, they currently return an HTTP 400 Bad Request if above 100.
remain = count
if limit is not None:
remain = min(remain, limit)
offset = 1 # They don't use zero based counting
prev_ids = [] # Just cache the last batch for error checking
while remain:
batch = min(batch, remain)
# print("%r left, asking for %r" % (remain, batch))
ids = search(db, query, offset, batch).read().strip().split()
assert len(ids) == batch, "Got %i, expected %i" % (len(ids), batch)
# print("offset %i, %s ... %s" % (offset, ids[0], ids[-1]))
if ids == prev_ids:
raise RuntimeError("Same search results for previous offset")
for identifier in ids:
if identifier in prev_ids:
raise RuntimeError(f"Result {identifier} was in previous batch")
yield identifier
offset += batch
remain -= batch
prev_ids = ids |
Call TogoWS search.
This is a low level wrapper for the TogoWS search function, which
can return results in several formats. In general, the search_iter
function is more suitable for end users.
Arguments:
- db - database (string), see http://togows.dbcls.jp/search/
- query - search term (string)
- offset, limit - optional integers specifying which result to start from
(1 based) and the number of results to return.
- format - return data file format (string), e.g. "json", "ttl" (RDF)
By default plain text is returned, one result per line.
At the time of writing, TogoWS applies a default count limit of 100
search results, and this is an upper bound. To access more results,
use the offset argument or the search_iter(...) function.
TogoWS supports a long list of databases, including many from the NCBI
(e.g. "ncbi-pubmed" or "pubmed", "ncbi-genbank" or "genbank", and
"ncbi-taxonomy"), EBI (e.g. "ebi-ebml" or "embl", "ebi-uniprot" or
"uniprot, "ebi-go"), and KEGG (e.g. "kegg-compound" or "compound").
For the current list, see http://togows.dbcls.jp/search/
The NCBI provide the Entrez Search service (ESearch) which is similar,
available in Biopython as the Bio.Entrez.esearch() function.
See also the function Bio.TogoWS.search_count() which returns the number
of matches found, and the Bio.TogoWS.search_iter() function which allows
you to iterate over the search results (taking care of batching for you). | def search(db, query, offset=None, limit=None, format=None):
"""Call TogoWS search.
This is a low level wrapper for the TogoWS search function, which
can return results in several formats. In general, the search_iter
function is more suitable for end users.
Arguments:
- db - database (string), see http://togows.dbcls.jp/search/
- query - search term (string)
- offset, limit - optional integers specifying which result to start from
(1 based) and the number of results to return.
- format - return data file format (string), e.g. "json", "ttl" (RDF)
By default plain text is returned, one result per line.
At the time of writing, TogoWS applies a default count limit of 100
search results, and this is an upper bound. To access more results,
use the offset argument or the search_iter(...) function.
TogoWS supports a long list of databases, including many from the NCBI
(e.g. "ncbi-pubmed" or "pubmed", "ncbi-genbank" or "genbank", and
"ncbi-taxonomy"), EBI (e.g. "ebi-ebml" or "embl", "ebi-uniprot" or
"uniprot, "ebi-go"), and KEGG (e.g. "kegg-compound" or "compound").
For the current list, see http://togows.dbcls.jp/search/
The NCBI provide the Entrez Search service (ESearch) which is similar,
available in Biopython as the Bio.Entrez.esearch() function.
See also the function Bio.TogoWS.search_count() which returns the number
of matches found, and the Bio.TogoWS.search_iter() function which allows
you to iterate over the search results (taking care of batching for you).
"""
global _search_db_names
if _search_db_names is None:
_search_db_names = _get_fields(_BASE_URL + "/search")
if db not in _search_db_names:
# TODO - Make this a ValueError? Right now despite the HTML website
# claiming to, the "gene" or "ncbi-gene" don't work and are not listed.
import warnings
warnings.warn(
"TogoWS search does not explicitly support database '%s'. "
"See %s/search/ for options." % (db, _BASE_URL)
)
url = _BASE_URL + f"/search/{db}/{quote(query)}"
if offset is not None and limit is not None:
try:
offset = int(offset)
except ValueError:
raise ValueError(
f"Offset should be an integer (at least one), not {offset!r}"
) from None
try:
limit = int(limit)
except ValueError:
raise ValueError(
f"Limit should be an integer (at least one), not {limit!r}"
) from None
if offset <= 0:
raise ValueError("Offset should be at least one, not %i" % offset)
if limit <= 0:
raise ValueError("Count should be at least one, not %i" % limit)
url += "/%i,%i" % (offset, limit)
elif offset is not None or limit is not None:
raise ValueError("Expect BOTH offset AND limit to be provided (or neither)")
if format:
url += "." + format
# print(url)
return _open(url) |
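A sketch of manual paging with the 1-based offset (requires network access); in practice search_iter handles this batching for you.
>>> from Bio import TogoWS
>>> first = TogoWS.search("pubmed", "diabetes+human", offset=1, limit=20)
>>> ids = first.read().strip().split()
>>> second = TogoWS.search("pubmed", "diabetes+human", offset=21, limit=20)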
Call TogoWS for file format conversion.
Arguments:
- data - string or handle containing input record(s)
- in_format - string describing the input file format (e.g. "genbank")
- out_format - string describing the requested output format (e.g. "fasta")
For a list of supported conversions (e.g. "genbank" to "fasta"), see
http://togows.dbcls.jp/convert/
Note that Biopython has built in support for conversion of sequence and
alignment file formats (functions Bio.SeqIO.convert and Bio.AlignIO.convert).
"""Call TogoWS for file format conversion.
Arguments:
- data - string or handle containing input record(s)
- in_format - string describing the input file format (e.g. "genbank")
- out_format - string describing the requested output format (e.g. "fasta")
For a list of supported conversions (e.g. "genbank" to "fasta"), see
http://togows.dbcls.jp/convert/
Note that Biopython has built in support for conversion of sequence and
alignment file formats (functions Bio.SeqIO.convert and Bio.AlignIO.convert).
"""
global _convert_formats
if not _convert_formats:
_convert_formats = _get_convert_formats()
if [in_format, out_format] not in _convert_formats:
msg = "\n".join("%s -> %s" % tuple(pair) for pair in _convert_formats)
raise ValueError(f"Unsupported conversion. Choose from:\n{msg}")
url = _BASE_URL + f"/convert/{in_format}.{out_format}"
# TODO - Should we just accept a string not a handle? What about a filename?
try:
# Handle
data = data.read()
except AttributeError:
# String
pass
return _open(url, post=data) |
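A usage sketch (requires network access), assuming GenBank-to-FASTA is among the conversions listed at http://togows.dbcls.jp/convert/ and that entry() returns the nucleotide record in GenBank format by default:
>>> from Bio import TogoWS
>>> gb = TogoWS.entry("nucleotide", "X52960")
>>> fasta = TogoWS.convert(gb, "genbank", "fasta")
>>> data = fasta.read()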
Build the URL and open a handle to it (PRIVATE).
Open a handle to TogoWS, will raise an IOError if it encounters an error.
In the absence of clear guidelines, this function enforces a limit of
"up to three queries per second" to avoid abusing the TogoWS servers. | def _open(url, post=None):
"""Build the URL and open a handle to it (PRIVATE).
Open a handle to TogoWS, will raise an IOError if it encounters an error.
In the absence of clear guidelines, this function enforces a limit of
"up to three queries per second" to avoid abusing the TogoWS servers.
"""
delay = 0.333333333 # one third of a second
current = time.time()
wait = _open.previous + delay - current
if wait > 0:
time.sleep(wait)
_open.previous = current + wait
else:
_open.previous = current
if post:
handle = urlopen(url, post.encode())
else:
handle = urlopen(url)
# We now trust TogoWS to have set an HTTP error code, that
# suffices for my current unit tests. Previously we would
# examine the start of the data returned back.
text_handle = io.TextIOWrapper(handle, encoding="UTF-8")
text_handle.url = handle.url
return text_handle |
Read and load a UniGene records, for files containing multiple records. | def parse(handle):
"""Read and load a UniGene records, for files containing multiple records."""
while True:
record = _read(handle)
if not record:
return
yield record |
Read and load a UniGene record, one record per file. | def read(handle):
"""Read and load a UniGene record, one record per file."""
record = _read(handle)
if not record:
raise ValueError("No SwissProt record found")
# We should have reached the end of the record by now
remainder = handle.read()
if remainder:
raise ValueError("More than one SwissProt record found")
return record |
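A usage sketch (added for illustration); the path "Hs.data" is hypothetical and stands for a UniGene flat file.
>>> from Bio import UniGene
>>> with open("Hs.data") as handle:  # hypothetical path
...     for record in UniGene.parse(handle):
...         print(record.ID, record.title)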
Read GPI 1.0 format files (PRIVATE).
This iterator is used to read a gp_information.goa_uniprot
file which is in the GPI 1.0 format. | def _gpi10iterator(handle):
"""Read GPI 1.0 format files (PRIVATE).
This iterator is used to read a gp_information.goa_uniprot
file which is in the GPI 1.0 format.
"""
for inline in handle:
if inline[0] == "!":
continue
inrec = inline.rstrip("\n").split("\t")
if len(inrec) == 1:
continue
inrec[5] = inrec[5].split("|") # DB_Object_Synonym(s)
inrec[8] = inrec[8].split("|") # Annotation_Target_Set
yield dict(zip(GPI10FIELDS, inrec)) |
Read GPI 1.1 format files (PRIVATE).
This iterator is used to read a gp_information.goa_uniprot
file which is in the GPI 1.1 format. | def _gpi11iterator(handle):
"""Read GPI 1.1 format files (PRIVATE).
This iterator is used to read a gp_information.goa_uniprot
file which is in the GPI 1.1 format.
"""
for inline in handle:
if inline[0] == "!":
continue
inrec = inline.rstrip("\n").split("\t")
if len(inrec) == 1:
continue
inrec[2] = inrec[2].split("|") # DB_Object_Name
inrec[3] = inrec[3].split("|") # DB_Object_Synonym(s)
inrec[7] = inrec[7].split("|") # DB_Xref(s)
inrec[8] = inrec[8].split("|") # Properties
yield dict(zip(GPI11FIELDS, inrec)) |
Read GPI 1.2 format files (PRIVATE).
This iterator is used to read a gp_information.goa_uniprot
file which is in the GPI 1.2 format. | def _gpi12iterator(handle):
"""Read GPI 1.2 format files (PRIVATE).
This iterator is used to read a gp_information.goa_uniprot
file which is in the GPI 1.2 format.
"""
for inline in handle:
if inline[0] == "!":
continue
inrec = inline.rstrip("\n").split("\t")
if len(inrec) == 1:
continue
inrec[3] = inrec[3].split("|") # DB_Object_Name
inrec[4] = inrec[4].split("|") # DB_Object_Synonym(s)
inrec[8] = inrec[8].split("|") # DB_Xref(s)
inrec[9] = inrec[9].split("|") # Properties
yield dict(zip(GPI12FIELDS, inrec)) |
Read GPI format files.
This function should be called to read a
gp_information.goa_uniprot file. It reads the !gpi-version header line
and returns the matching version-specific iterator; GPI 2.1 is
recognised but parsing it is not yet implemented. | def gpi_iterator(handle):
"""Read GPI format files.
This function should be called to read a
gp_information.goa_uniprot file. It reads the !gpi-version header line
and returns the matching version-specific iterator; GPI 2.1 is
recognised but parsing it is not yet implemented.
"""
inline = handle.readline()
if inline.strip() == "!gpi-version: 1.2":
return _gpi12iterator(handle)
elif inline.strip() == "!gpi-version: 1.1":
# sys.stderr.write("gpi 1.1\n")
return _gpi11iterator(handle)
elif inline.strip() == "!gpi-version: 1.0":
# sys.stderr.write("gpi 1.0\n")
return _gpi10iterator(handle)
elif inline.strip() == "!gpi-version: 2.1":
# sys.stderr.write("gpi 2.1\n")
# return _gpi20iterator(handle)
raise NotImplementedError("Sorry, parsing GPI version 2 not implemented yet.")
else:
raise ValueError(f"Unknown GPI version {inline}\n") |
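A hedged usage sketch of the dispatcher above; the dictionary keys come from the GPI*FIELDS constants referenced by the iterators, and the file name is hypothetical:

from Bio.UniProt import GOA

with open("gp_information.goa_uniprot.gpi") as handle:
    for entry in GOA.gpi_iterator(handle):
        # Each entry is a dict keyed by the GPI field names
        print(entry["DB_Object_ID"], entry["DB_Object_Symbol"])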
Read GPA 1.0 format files (PRIVATE).
This iterator is used to read a gp_association.*
file which is in the GPA 1.0 format. Do not call directly. Rather,
use the gpa_iterator function. | def _gpa10iterator(handle):
"""Read GPA 1.0 format files (PRIVATE).
This iterator is used to read a gp_association.*
file which is in the GPA 1.0 format. Do not call directly. Rather,
use the gpa_iterator function.
"""
for inline in handle:
if inline[0] == "!":
continue
inrec = inline.rstrip("\n").split("\t")
if len(inrec) == 1:
continue
inrec[2] = inrec[2].split("|") # Qualifier
inrec[4] = inrec[4].split("|") # DB:Reference(s)
inrec[6] = inrec[6].split("|") # With
inrec[10] = inrec[10].split("|") # Annotation extension
yield dict(zip(GPA10FIELDS, inrec)) |
Read GPA 1.1 format files (PRIVATE).
This iterator is used to read a gp_association.goa_uniprot
file which is in the GPA 1.1 format. Do not call directly. Rather,
use the gpa_iterator function. | def _gpa11iterator(handle):
"""Read GPA 1.1 format files (PRIVATE).
This iterator is used to read a gp_association.goa_uniprot
file which is in the GPA 1.1 format. Do not call directly. Rather,
use the gpa_iterator function.
"""
for inline in handle:
if inline[0] == "!":
continue
inrec = inline.rstrip("\n").split("\t")
if len(inrec) == 1:
continue
inrec[2] = inrec[2].split("|") # Qualifier
inrec[4] = inrec[4].split("|") # DB:Reference(s)
inrec[6] = inrec[6].split("|") # With
inrec[10] = inrec[10].split("|") # Annotation extension
yield dict(zip(GPA11FIELDS, inrec)) |
Read GPA format files.
This function should be called to read a
gp_association.goa_uniprot file. It reads the version header line and
returns a GPA 1.1 or GPA 1.0 iterator as needed. | def gpa_iterator(handle):
"""Read GPA format files.
This function should be called to read a
gp_association.goa_uniprot file. It reads the version header line and
returns a GPA 1.1 or GPA 1.0 iterator as needed.
"""
inline = handle.readline()
if inline.strip() == "!gpa-version: 1.1":
# sys.stderr.write("gpa 1.1\n")
return _gpa11iterator(handle)
elif inline.strip() == "!gpa-version: 1.0":
# sys.stderr.write("gpa 1.0\n")
return _gpa10iterator(handle)
else:
raise ValueError(f"Unknown GPA version {inline}\n") |
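Usage mirrors the GPI dispatcher (a sketch; the key names are assumed from the GPA10FIELDS/GPA11FIELDS constants used above, and the file name is hypothetical):

from Bio.UniProt import GOA

with open("gp_association.goa_uniprot") as handle:
    for annotation in GOA.gpa_iterator(handle):
        # Each annotation is a dict keyed by the GPA field names
        print(annotation["DB_Object_ID"], annotation["GO_ID"])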
Iterate over records in a gene association file.
Yields lists of all consecutive records sharing the same DB_Object_ID.
This function should be called to read a
gene_association.goa_uniprot file. It reads the version header line and
returns a GAF 2.x or GAF 1.0 iterator as needed.
2016-04-09: added GAF 2.1 support & fixed a bug in iterator assignment;
for now GAF 2.1 and 2.2 use the GAF 2.0 iterator. | def gafbyproteiniterator(handle):
"""Iterate over records in a gene association file.
Yields lists of all consecutive records sharing the same DB_Object_ID.
This function should be called to read a
gene_association.goa_uniprot file. It reads the version header line and
returns a GAF 2.x or GAF 1.0 iterator as needed.
2016-04-09: added GAF 2.1 support & fixed a bug in iterator assignment;
for now GAF 2.1 and 2.2 use the GAF 2.0 iterator.
"""
inline = handle.readline()
if inline.strip() == "!gaf-version: 2.0":
# sys.stderr.write("gaf 2.0\n")
return _gaf20byproteiniterator(handle)
elif inline.strip() == "!gaf-version: 1.0":
# sys.stderr.write("gaf 1.0\n")
return _gaf10byproteiniterator(handle)
elif inline.strip() == "!gaf-version: 2.1":
# Handle GAF 2.1 as GAF 2.0 for now TODO: fix
# sys.stderr.write("gaf 2.1\n")
return _gaf20byproteiniterator(handle)
elif inline.strip() == "!gaf-version: 2.2":
# Handle GAF 2.2 as GAF 2.0 for now. The change from
# 2.1 to 2.2 is that the Qualifier field is no longer optional.
# As that check has not been done before, we can
# continue to use the GAF 2.0 parser.
return _gaf20byproteiniterator(handle)
else:
raise ValueError(f"Unknown GAF version {inline}\n") |
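A sketch of the per-protein grouping this iterator provides; each yielded item is a list of consecutive records sharing one DB_Object_ID (the file name is hypothetical):

from Bio.UniProt import GOA

with open("goa_yeast.gaf") as handle:
    for protein_records in GOA.gafbyproteiniterator(handle):
        # All records in the list belong to the same protein
        print(protein_records[0]["DB_Object_ID"], len(protein_records))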
Iterate over a GAF 1.0 or 2.x file.
This function should be called to read a
gene_association.goa_uniprot file. It reads the version header line and
returns a GAF 2.x or GAF 1.0 iterator as needed.
Example: open, read, iterate and filter results.
Original data file has been trimmed to ~600 rows.
Original source ftp://ftp.ebi.ac.uk/pub/databases/GO/goa/YEAST/goa_yeast.gaf.gz
>>> from Bio.UniProt.GOA import gafiterator, record_has
>>> Evidence = {'Evidence': set(['ND'])}
>>> Synonym = {'Synonym': set(['YA19A_YEAST', 'YAL019W-A'])}
>>> Taxon_ID = {'Taxon_ID': set(['taxon:559292'])}
>>> with open('UniProt/goa_yeast.gaf', 'r') as handle:
... for rec in gafiterator(handle):
... if record_has(rec, Taxon_ID) and record_has(rec, Evidence) and record_has(rec, Synonym):
... for key in ('DB_Object_Name', 'Evidence', 'Synonym', 'Taxon_ID'):
... print(rec[key])
...
Putative uncharacterized protein YAL019W-A
ND
['YA19A_YEAST', 'YAL019W-A']
['taxon:559292']
Putative uncharacterized protein YAL019W-A
ND
['YA19A_YEAST', 'YAL019W-A']
['taxon:559292']
Putative uncharacterized protein YAL019W-A
ND
['YA19A_YEAST', 'YAL019W-A']
['taxon:559292'] | def gafiterator(handle):
"""Iterate over a GAF 1.0 or 2.x file.
This function should be called to read a
gene_association.goa_uniprot file. It reads the version header line and
returns a GAF 2.x or GAF 1.0 iterator as needed.
Example: open, read, iterate and filter results.
Original data file has been trimmed to ~600 rows.
Original source ftp://ftp.ebi.ac.uk/pub/databases/GO/goa/YEAST/goa_yeast.gaf.gz
>>> from Bio.UniProt.GOA import gafiterator, record_has
>>> Evidence = {'Evidence': set(['ND'])}
>>> Synonym = {'Synonym': set(['YA19A_YEAST', 'YAL019W-A'])}
>>> Taxon_ID = {'Taxon_ID': set(['taxon:559292'])}
>>> with open('UniProt/goa_yeast.gaf', 'r') as handle:
... for rec in gafiterator(handle):
... if record_has(rec, Taxon_ID) and record_has(rec, Evidence) and record_has(rec, Synonym):
... for key in ('DB_Object_Name', 'Evidence', 'Synonym', 'Taxon_ID'):
... print(rec[key])
...
Putative uncharacterized protein YAL019W-A
ND
['YA19A_YEAST', 'YAL019W-A']
['taxon:559292']
Putative uncharacterized protein YAL019W-A
ND
['YA19A_YEAST', 'YAL019W-A']
['taxon:559292']
Putative uncharacterized protein YAL019W-A
ND
['YA19A_YEAST', 'YAL019W-A']
['taxon:559292']
"""
inline = handle.readline()
if inline.strip() == "!gaf-version: 2.0":
# sys.stderr.write("gaf 2.0\n")
return _gaf20iterator(handle)
elif inline.strip() == "!gaf-version: 2.1":
# sys.stderr.write("gaf 2.1\n")
# Handle GAF 2.1 as GAF 2.0 for now. TODO: fix
return _gaf20iterator(handle)
elif inline.strip() == "!gaf-version: 2.2":
# Handle GAF 2.2 as GAF 2.0 for now. The change from
# 2.1 to 2.2 is that the Qualifier field is no longer optional.
# As that check has not been done before, we can
# continue to use the GAF 2.0 parser.
return _gaf20iterator(handle)
elif inline.strip() == "!gaf-version: 1.0":
# sys.stderr.write("gaf 1.0\n")
return _gaf10iterator(handle)
else:
raise ValueError(f"Unknown GAF version {inline}\n") |
Write a single UniProt-GOA record to an output stream.
Caller should know the format version; the default field list is
GAF 2.0 (GAF20FIELDS). Note that no version header is written, so the
caller is responsible for writing one if required. | def writerec(outrec, handle, fields=GAF20FIELDS):
"""Write a single UniProt-GOA record to an output stream.
Caller should know the format version; the default field list is
GAF 2.0 (GAF20FIELDS). Note that no version header is written, so the
caller is responsible for writing one if required.
"""
outstr = ""
for field in fields[:-1]:
if isinstance(outrec[field], list):
for subfield in outrec[field]:
outstr += subfield + "|"
outstr = outstr[:-1] + "\t"
else:
outstr += outrec[field] + "\t"
outstr += outrec[fields[-1]] + "\n"
handle.write(outstr) |
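A hedged round-trip sketch pairing the GAF reader with this writer; as noted above, the version header is the caller's job (file names hypothetical):

from Bio.UniProt import GOA

with open("goa_yeast.gaf") as inp, open("out.gaf", "w") as out:
    out.write("!gaf-version: 2.0\n")  # header must be written by the caller
    for rec in GOA.gafiterator(inp):
        GOA.writerec(rec, out, fields=GOA.GAF20FIELDS)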
Write a list of GAF records to an output stream.
Caller should know the format version; the default field list is
GAF 2.0 (GAF20FIELDS). Typically the list is one yielded by
gafbyproteiniterator, which contains all consecutive records with the
same DB_Object_ID. | def writebyproteinrec(outprotrec, handle, fields=GAF20FIELDS):
"""Write a list of GAF records to an output stream.
Caller should know the format version; the default field list is
GAF 2.0 (GAF20FIELDS). Typically the list is one yielded by
gafbyproteiniterator, which contains all consecutive records with the
same DB_Object_ID.
"""
for outrec in outprotrec:
writerec(outrec, handle, fields=fields) |
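Paired with gafbyproteiniterator, this gives a simple per-protein copy loop (a sketch; file names hypothetical):

from Bio.UniProt import GOA

with open("goa_yeast.gaf") as inp, open("copy.gaf", "w") as out:
    for protein_records in GOA.gafbyproteiniterator(inp):
        # Write each protein's block of records in one call
        GOA.writebyproteinrec(protein_records, out)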
Accept a record, and a dictionary of field values.
The format is {'field_name': set([val1, val2])}.
If any field in the record has a matching value, the function returns
True. Otherwise, returns False. | def record_has(inrec, fieldvals):
"""Accept a record, and a dictionary of field values.
The format is {'field_name': set([val1, val2])}.
If any field in the record has a matching value, the function returns
True. Otherwise, returns False.
"""
retval = False
for field in fieldvals:
if isinstance(inrec[field], str):
set1 = {inrec[field]}
else:
set1 = set(inrec[field])
if set1 & fieldvals[field]:
retval = True
break
return retval |
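A tiny illustration of the filter-dictionary convention (values are sets; a record matches when any named field intersects its set):

rec = {"Evidence": "ND", "Synonym": ["YA19A_YEAST", "YAL019W-A"]}
print(record_has(rec, {"Evidence": {"ND"}}))   # True: "ND" matches
print(record_has(rec, {"Synonym": {"FOO"}}))   # False: no overlap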
Search the UniProt database.
Consider using `query syntax <https://www.uniprot.org/help/text-search>`_ and
`query fields <https://www.uniprot.org/help/query-fields>`_ to refine your search.
See the API details `here <https://www.uniprot.org/help/api_queries>`_.
>>> from Bio import UniProt
>>> from itertools import islice
>>> # Get the first 10 results
>>> results = UniProt.search("(organism_id:2697049) AND (reviewed:true)")[:10]
:param query: The query string to search UniProt with
:type query: str
:param fields: The columns to retrieve in the results, defaults to all fields
:type fields: List[str], optional
:param batch_size: The number of results to retrieve in each batch, defaults to 500
:type batch_size: int
:return: An iterator over the search results
:rtype: _UniProtSearchResults | def search(
query: str, fields: Optional[List[str]] = None, batch_size: int = 500
) -> _UniProtSearchResults:
"""Search the UniProt database.
Consider using `query syntax <https://www.uniprot.org/help/text-search>`_ and
`query fields <https://www.uniprot.org/help/query-fields>`_ to refine your search.
See the API details `here <https://www.uniprot.org/help/api_queries>`_.
>>> from Bio import UniProt
>>> from itertools import islice
>>> # Get the first 10 results
>>> results = UniProt.search("(organism_id:2697049) AND (reviewed:true)")[:10]
:param query: The query string to search UniProt with
:type query: str
:param fields: The columns to retrieve in the results, defaults to all fields
:type fields: List[str], optional
:param batch_size: The number of results to retrieve in each batch, defaults to 500
:type batch_size: int
:return: An iterator over the search results
:rtype: _UniProtSearchResults
"""
parameters = {
"query": query,
"size": batch_size,
"format": "json",
}
if fields:
parameters["fields"] = ",".join(fields)
url = f"https://rest.uniprot.org/uniprotkb/search?{urllib.parse.urlencode(parameters)}"
return _UniProtSearchResults(url) |
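A hedged example restricting the returned columns; the field names follow UniProt's query-field documentation linked above, and the JSON key "primaryAccession" is assumed from UniProt's REST response schema rather than from this module:

from Bio import UniProt

for entry in UniProt.search("insulin AND (reviewed:true)", fields=["accession", "gene_names"]):
    print(entry["primaryAccession"])
    break  # just the first hit, for brevity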