<div id="top"></div>
<!-- PROJECT LOGO -->
<br />
<div align="center">
<a href="https://gitlab.com/justin_lehnen/zsim-cli">
<img src="images/logo.png" alt="Logo" width="80" height="80">
</a>
<h3 align="center">zsim-cli</h3>
<p align="center">
A simulator for an assembly-like toy language
<br />
<a href="https://gitlab.com/justin_lehnen/zsim-cli"><strong>Explore the docs »</strong></a>
<br />
<br />
<a href="https://gitlab.com/justin_lehnen/zsim-cli">View Demo</a>
·
<a href="https://gitlab.com/justin_lehnen/zsim-cli/issues">Report Bug</a>
·
<a href="https://gitlab.com/justin_lehnen/zsim-cli/issues">Request Feature</a>
</p>
</div>
<!-- TABLE OF CONTENTS -->
<details>
<summary>Table of Contents</summary>
<ol>
<li>
<a href="#about-the-project">About The Project</a>
<ul>
<li><a href="#built-with">Built With</a></li>
</ul>
</li>
<li>
<a href="#getting-started">Getting Started</a>
<ul>
<li><a href="#prerequisites">Prerequisites</a></li>
<li><a href="#installation">Installation</a></li>
</ul>
</li>
<li>
<a href="#usage">Usage</a>
<ul>
<li><a href="#tokenization">Tokenization</a></li>
<li><a href="#validation">Validation</a></li>
<li><a href="#simulating">Simulating</a></li>
<li><a href="#using-interactive-mode">Using interactive mode</a></li>
</ul>
</li>
<li><a href="#roadmap">Roadmap</a></li>
<li><a href="#contributing">Contributing</a></li>
<li><a href="#license">License</a></li>
<li><a href="#contact">Contact</a></li>
<li><a href="#acknowledgments">Acknowledgments</a></li>
</ol>
</details>
<!-- ABOUT THE PROJECT -->
## About The Project
[![zsim-cli Demo][zsim-demo]](https://gitlab.com/justin_lehnen/zsim-cli)
Implemented for the compiler construction course following the winter semester 2020 at the Aachen University of Applied Sciences.<br />
Z-Code is a simplified, assembly-like toy language created to prove that compiling to an intermediate language like Java bytecode can be much easier than going from a high-level language directly to assembly.
Check out the syntax diagrams [here](ZCODE.md) for detailed information on how the syntax of Z-Code is defined.
Sublime Text syntax highlighting for Z-Code is available [here](zcode.sublime-syntax)!
[![Z-Code syntax highlighting][zcode-syntax-highlighting]](zcode.sublime-syntax)
It even works on [Termux](https://termux.com/)!
[![zsim on Termux][zsim-termux-screenshot]](https://termux.com/)
<div align="right">(<a href="#top">back to top</a>)</div>
### Built With
zsim-cli relies heavily on the following libraries.
* [Click](https://click.palletsprojects.com)
* [pytest](https://pytest.org)
<div align="right">(<a href="#top">back to top</a>)</div>
<!-- GETTING STARTED -->
## Getting Started
To get a local copy up and running, follow these simple steps.
### Prerequisites
* Python 3.6
- Head over to _[https://python.org](https://www.python.org/downloads/)_ and install the binary suitable for your operating system. Make sure you can run it:
```sh
python --version
```
* pip
- Check if pip is already installed:
```sh
pip --version
```
- If your Python installation does not come with pip pre-installed, visit _[https://pip.pypa.io](https://pip.pypa.io/en/stable/installation/)_ to install it, then check again.
### Installation
1. Clone the repository
```sh
git clone https://gitlab.com/justin_lehnen/zsim-cli.git
cd zsim-cli
```
1. (Optionally) create a Python virtual environment and enter it
```sh
python -m venv venv
# Windows
venv\Scripts\activate.bat
# Unix or MacOS
source venv/bin/activate
```
1. Install using pip
```sh
pip install -e .
```
1. Run unit tests
```sh
pytest
```
And you're set!
<div align="right">(<a href="#top">back to top</a>)</div>
<!-- USAGE EXAMPLES -->
## Usage
_For more examples, please refer to the [Documentation](https://gitlab.com/justin_lehnen/zsim-cli)_
### Tokenization
`zsim-cli tokenize [ZCODE]`
#### --json / --no-json
Type: `Boolean`
Default: `False`
If the flag is set, the output will be in JSON format.<br />
The JSON schema is `[ { "type": "<type>", "value": "<value>" }, ... ]`.
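For illustration, `zsim-cli tokenize "LIT 7;" --json` could produce output along these lines (the token type names shown here are illustrative; the authoritative names come from the tokenizer):
```json
[
  { "type": "command1", "value": "LIT" },
  { "type": "number", "value": "7" },
  { "type": "semicolon", "value": ";" }
]
```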
Examples:
```bash
# Tokenize code in JSON
zsim-cli tokenize "LIT 7; PRT;" --json
```
```bash
# Pipe output to a JSON parser like jq for powerful postprocessing!
zsim-cli tokenize "LIT 7; PRT;" --json | jq -r .[].type
```
#### -i, --input FILENAME
Type: `String`
Default: `None`
Set an input file to read the Z-Code from. `-` will read from stdin.
>If you're using Windows, remember that the default `cmd.exe` requires two EOF characters followed by return (see examples).
Examples:
```bash
# Tokenize file programs/test.zc
zsim-cli tokenize -i programs/test.zc
```
```bash
# Tokenize from stdin
zsim-cli tokenize -i -
LIT 7;
PRT;
# Windows
^Z <ENTER>
^Z <ENTER>
# Unix or MacOS
^D^D
```
#### --encoding TEXT
Type: `String`
Default: `"utf_8"`
Encoding to use when opening files with `-i, --input`.<br />
Refer to the [Python docs](https://docs.python.org/3/library/codecs.html#standard-encodings) for possible values.
Examples:
```bash
# Use ASCII encoding
zsim-cli tokenize -i ascii_encoded.zc --encoding ascii
```
### Validation
`zsim-cli validate [ZCODE]`
#### --json / --no-json
Type: `Boolean`
Default: `False`
If the flag is set, the output will be in JSON format.<br />
The JSON schema is `{ "success": <boolean>, "message": "<string>" }`.
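For illustration, a successful validation could return something like:
```json
{ "success": true, "message": "Code is syntactically correct." }
```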
Examples:
```bash
# Validate code in JSON
zsim-cli validate "LIT 7; PRT;" --json
```
```bash
# Pipe output to a JSON parser like jq for powerful postprocessing!
zsim-cli validate "LIT 7; PRT;" --json | jq -r .message
```
#### -i, --input FILENAME
Type: `String`
Default: `None`
Set an input file to read the Z-Code from. `-` will read from stdin.
>If you're using Windows, remember that the default `cmd.exe` requires two EOF characters followed by return (see examples).
Examples:
```bash
# Validate file programs/test.zc
zsim-cli validate -i programs/test.zc
```
```bash
# Validate from stdin
zsim-cli validate -i -
LIT 7;
PRT;
# Windows
^Z <ENTER>
^Z <ENTER>
# Unix or MacOS
^D^D
```
#### --encoding TEXT
Type: `String`
Default: `"utf_8"`
Encoding to use when opening files with `-i, --input`.<br />
Refer to the [Python docs](https://docs.python.org/3/library/codecs.html#standard-encodings) for possible values.
Examples:
```bash
# Use ASCII encoding
zsim-cli validate -i ascii_encoded.zc --encoding ascii
```
### Simulating
`zsim-cli run [ZCODE]`
#### --json / --no-json
Type: `Boolean`
Default: `False`
If the flag is set, the output will be in JSON format.<br />
This flag is **not compatible** with `--step`!<br />
The JSON schema is either `{ "success": <boolean>, "message": "<string>" }` for invalid Z-Code or <br />
```json
{
"success": true,
"instruction_set": "<z|zfp|zds|zfpds>",
"initial_memory": { "..." },
"code": "...",
"final_state": {
"m": 1,
"d": [],
"b": [],
"h": {},
"o": "",
},
"states": [
{
"state": { "Same as final_state" },
"next_instruction": {
"command": "...",
"mnemonic": "...",
"parameters": [ 1, 2, 3 ],
},
},
"..."
],
}
```
when the execution was successful.
Examples:
```bash
# Run code and output information about the states in JSON
zsim-cli run "LIT 7; PRT;" --json
```
```bash
# Pipe output to a JSON parser like jq for powerful postprocessing!
zsim-cli run "LIT 7; PRT;" --json | jq -r .final_state.o
```
#### -i, --input FILENAME
Type: `String`
Default: `None`
Set an input file to read the Z-Code from. `-` will read from stdin.
>If you're using Windows, remember that the default `cmd.exe` requires two EOF characters followed by return (see examples).
Examples:
```bash
# Run file programs/test.zc
zsim-cli run -i programs/test.zc
```
```bash
# Run from stdin
zsim-cli run -i -
LIT 7;
PRT;
# Windows
^Z <ENTER>
^Z <ENTER>
# Unix or MacOS
^D^D
```
#### --encoding TEXT
Type: `String`
Default: `"utf_8"`
Encoding to use when opening files with `-i, --input`.<br />
Refer to the [Python docs](https://docs.python.org/3/library/codecs.html#standard-encodings) for possible values.
Examples:
```bash
# Use ASCII encoding
zsim-cli run -i ascii_encoded.zc --encoding ascii
```
#### -h, --memory TEXT
Type: `String`
Default: `"[]"`
Optionally overwrite the memory configuration for the executing code.<br />
The format is `[ <addr>/<value>, ... ]`.<br />
<!--Many -h or --memory can be used.-->
Examples:
```bash
# 10 + 5
zsim-cli run "LOD 1; LOD 2; ADD;" -h "[1/10, 2/5]"
```
#### --instructionset
Type: `String`
Default: `"zfpds"`
Set the instruction set. Each instruction set offers a different selection of instructions.<br />
For example, `LODI` comes from `zds`, while `LODLOCAL` is defined in `zfp`.<br />
The available sets are `z`, `zfp`, `zds` and `zfpds`; when you are unsure, stick with `zfpds`, where all instructions are available.
Examples:
```bash
# Use a specific instruction-set
zsim-cli run "LIT 7; LIT 3; ADD;" --instructionset "z"
```
#### --step / --no-step
Type: `Boolean`
Default: `False`
If this flag is set, the simulator will ask for confirmation after each step of the execution.<br />
This flag is **not compatible** with `--json` or `--full-output`!<br />
Examples:
```bash
# Go through Z-Code instruction by instruction
zsim-cli run "LIT 7; POP;" --step
```
#### --format
Type: `String`
Default: `"({m}, {d}, {b}, {h}, {o})"`
The `--format` option allows you to customize the regular output of the simulation.
Available placeholders:
- `{m}` = instruction counter
- `{d}` = data stack
- `{b}` = procedure stack
- `{h}` = heap memory
- `{o}` = output
Examples:
```bash
# Use fewer components of the machine. This will yield "m: 5, h: [1/7], output: '7'"
zsim-cli run "LIT 7; STO 1; LOD 1; PRT;" --format "m: {m}, h: {h}, output: '{o}'"
```
#### --full-output / --no-full-output
Type: `Boolean`
Default: `False`
If this flag is given, all states are printed on the standard output.<br />
`--step` will ignore this option.
Examples:
```bash
# This will print all five states on the standard output
zsim-cli run "LIT 7; STO 1; LOD 1; PRT;" --full-output
```
#### -it, --interactive / --no-interactive
Type: `Boolean`
Default: `False`
Use this flag to start a Z-Code interpreter.<br />
Only `--format`, `--instructionset` and `-h, --memory` are compatible with this option.
Examples:
```bash
zsim-cli run -it
```
<div align="right">(<a href="#top">back to top</a>)</div>
### Using interactive mode
With `zsim-cli run -it` you can start an interactive interpreter to execute Z-Code line by line.
[![zsim-cli interactive mode Demo][zsim-interactive-demo]](https://gitlab.com/justin_lehnen/zsim-cli)
Using the interactive mode might present you with difficulties when using jumps or function calls.
The following code will **not** work in interactive mode:
```
LIT 6;
CALL(increment, 1,);
HALT;
increment: LODLOCAL 1;
LIT 1;
ADD;
RET;
```
`CALL(increment, 1,);` will fail since `increment` is not defined until later.<br />
To circumvent this issue two special commands have been added: `#noexec` and `#exec`.
These commands push and pop frames in which commands are not directly executed but parsed and saved.
The following example does the same as the Z-Code above, but uses `#noexec` and `#exec`:
```
> LIT 6;
> #noexec
#> f: LODLOCAL 1;
#> LIT 1;
#> ADD;
#> RET;
#> #exec
> CALL(f, 1,);
```
>Note the `#` characters in front of the prompt, which tell you how many `#noexec` frames are active.
You are not limited to defining functions that way either! The next example uses `#noexec` differently:
```
> #noexec
#> add_and_sto: ADD;
#> STO 1;
#> HALT;
#> #exec
> LIT 4;
> LIT 1;
> JMP add_and_sto;
> LOD 1;
> PRT;
```
In the standard simulation mode `HALT` would jump past `PRT`, but since the last command we typed was `JMP add_and_sto;`, execution continues right after the instruction we just sent off!<br />
<div align="right">(<a href="#top">back to top</a>)</div>
<!-- ROADMAP -->
## Roadmap
- [x] Code execution
- [x] Memory allocation in code
- [x] Comments
- [x] Interactive mode
- [ ] Better -h, --memory parsing
- [ ] Error handling
- [ ] More instruction sets
- [ ] Documentation
- [ ] More sample programs
See the [open issues](https://gitlab.com/justin_lehnen/zsim-cli/issues) for a full list of proposed features (and known issues).
<div align="right">(<a href="#top">back to top</a>)</div>
<!-- CONTRIBUTING -->
## Contributing
Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are **greatly appreciated**.
If you have a suggestion that would make this better, please fork the repo and create a merge request. You can also simply open an issue with the label "enhancement".
Don't forget to give the project a star! Thanks again!
1. Fork the project
1. Create your feature branch (`git checkout -b feature/AmazingFeature`)
1. Commit your changes (`git commit -m 'Add some AmazingFeature'`)
1. Use pytest to run unit tests (`pytest`)
1. Push to the branch (`git push origin feature/AmazingFeature`)
1. Open a merge request
### Codestyle
* Four space indentation
* One class per file
* Variables and functions are written in **snake_case**
* Class names are written in **PascalCase**
<div align="right">(<a href="#top">back to top</a>)</div>
<!-- LICENSE -->
## License
Distributed under the Unlicense. See [LICENSE][license-url] for more information.
<div align="right">(<a href="#top">back to top</a>)</div>
<!-- CONTACT -->
## Contact
Justin Lehnen - [email protected] - [email protected]
Project Link: [https://gitlab.com/justin_lehnen/zsim-cli](https://gitlab.com/justin_lehnen/zsim-cli)
<div align="right">(<a href="#top">back to top</a>)</div>
<!-- ACKNOWLEDGMENTS -->
## Acknowledgments
* [Aachen University of Applied Sciences](https://www.fh-aachen.de/en/)
* [Choose an Open Source License](https://choosealicense.com)
* [GitHub Emoji Cheat Sheet](https://www.webpagefx.com/tools/emoji-cheat-sheet)
* [Img Shields](https://shields.io)
<div align="right">(<a href="#top">back to top</a>)</div>
<!-- MARKDOWN LINKS & IMAGES -->
<!-- https://www.markdownguide.org/basic-syntax/#reference-style-links -->
[issues-url]: https://gitlab.com/justin_lehnen/zsim-cli/issues
[license-url]: https://gitlab.com/justin_lehnen/zsim-cli/blob/main/LICENSE
[zsim-screenshot]: images/screenshot.png
[zsim-demo]: images/demo.gif
[zsim-interactive-demo]: images/interactive.gif
[zsim-termux-screenshot]: images/termux.png
[zcode-syntax-highlighting]: images/syntax_highlighting.png
| zsim-cli | /zsim-cli-0.6.1.tar.gz/zsim-cli-0.6.1/README.md | README.md |
from .parser import Tokenizer, Token
from .instructions import Instruction, JumpInstruction, CallInstruction
from typing import List, Union, Optional
from copy import deepcopy
class Machine(object):
'''Executes code based on an instruction set'''
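    # Typical usage, as a sketch (mirrors how cli.py drives the machine):
    #     from . import tokenizer
    #     from .instruction_sets import ZFPDSInstructions
    #     machine = Machine(ZFPDSInstructions(), tokenizer.tokenize('LIT 7; PRT;'))
    #     machine.run()
    #     machine.o  # -> '7'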
    memory_scanner = (Tokenizer(eof=None)
.token('open', r'\[')
.token('close', r'\]')
.token('number', r'0|-?[1-9][0-9]*')
.token('assign', r'/')
.token('comma', r',')
.skip(r'\s+')
.skip(r'\r')
.skip(r'\n'))
    def __init__(self, instruction_set, tokens: List[Token], h: Optional[Union[str, dict]] = None, format: Optional[str] = '({m}, {d}, {b}, {h}, {o})'):
super(Machine, self).__init__()
self.m = 1
self.d = []
self.b = []
self.o = ''
self.format = format
self.jumps = {}
self.instructions = []
if isinstance(h, str):
self.h = self.__parse_memory(h)
elif isinstance(h, dict):
self.h = h
elif not h:
self.h = {}
else:
raise Exception('Cannot parse memory allocation')
if len(tokens) > 0:
self.__parse_instructions(tokens, instruction_set)
self.initial_h = deepcopy(self.h)
def done(self):
return self.m - 1 >= len(self.instructions)
def step(self):
instruction = self.get_next_instruction()
if instruction:
instruction(self)
return deepcopy(self)
def run(self, reset=True):
if reset:
self.reset()
while not self.done():
self.step()
def enumerate_run(self, reset=True):
if reset:
self.reset()
while not self.done():
yield deepcopy(self)
self.step()
yield self
def reset(self):
self.m = 1
self.d = []
self.b = []
self.o = ''
self.h = deepcopy(self.initial_h)
def get_next_instruction(self):
if self.done():
return
return self.instructions[self.m - 1]
def update_heap_pointer(self):
return len(self.h) if 0 in self.h else len(self.h) + 1
def __str__(self):
dStr = ' '.join(map(str, self.d[::-1]))
bStr = ' '.join(map(str, self.b[::-1]))
hStr = '[' + ', '.join(map(lambda k: f'{ k[0] }/{ k[1] }', self.h.items())) + ']'
return self.format.format(m=self.m, d=dStr, b=bStr, h=hStr, o=self.o.encode("unicode_escape").decode("utf-8"))
def __getitem__(self, m: int) -> Instruction:
if m > 0 and m <= len(self.instructions):
return self.instructions[m - 1]
def __parse_memory(self, memory: Optional[str]) -> dict:
if not memory:
return {}
h = {}
tokens = self.memory_scanner.tokenize(memory.strip())
# Instead of doing another recursive descent we manually check the syntax here
if tokens[0].type != 'open' or tokens[-1].type != 'close':
raise Exception('Wrong syntax for memory allocation.')
for i, token in enumerate(tokens):
# New allocation starts after "[" or ","
if token.type == 'open' or token.type == 'comma':
# Check for <num>"/"<num>
if (tokens[i+1].type == 'number' and
tokens[i+2].type == 'assign' and
tokens[i+3].type == 'number'):
addr = tokens[i+1].value
value = tokens[i+3].value
h[int(addr)] = int(value)
else:
raise Exception('Wrong syntax for memory allocation.')
return h
def __parse_instructions(self, tokens, instruction_set) -> None:
# Remove all newlines
tokens = list(filter(lambda t: t.type != 'newline', tokens))
instruction_counter = 1
# Preprocess memory allocation in first line
if (tokens[0].type == 'openingBracket' and
tokens[1].type == 'number' and
tokens[2].type == 'assignment' and
tokens[3].type == 'number'):
            # Don't override explicitly set allocation
if not int(tokens[1].value) in self.h:
self.h[int(tokens[1].value)] = int(tokens[3].value)
i = 4
while (tokens[i].type == 'comma' and
tokens[i+1].type == 'number' and
tokens[i+2].type == 'assignment' and
tokens[i+3].type == 'number'):
                # Don't override explicitly set allocation
if not int(tokens[i+1].value) in self.h:
self.h[int(tokens[i+1].value)] = int(tokens[i+3].value)
i += 4
# Preprocess all jump targets
for i, token in enumerate(tokens):
# Match for <identifier>":"
if token.type == 'identifier' and tokens[i+1].type == 'colon':
if token.value not in self.jumps:
self.jumps[token.value] = instruction_counter
elif token.type == 'semicolon':
instruction_counter += 1
for i, token in enumerate(tokens):
type = token.type
value = token.value
# Command in form <command>
if type == 'command0':
self.instructions.append(Instruction(value, instruction_set[value]))
# Command in form <command> <parameter>
elif type == 'command1':
parameter = int(tokens[i+1].value)
self.instructions.append(Instruction(value, instruction_set[value], parameter))
# Command in form <jmpcommand> <identifier|number>
elif type == 'jmpcommand':
target_token = tokens[i + 1]
target = 0
if target_token.type == 'identifier' and target_token.value in self.jumps:
target = self.jumps[target_token.value]
elif target_token.type == 'number':
target = int(target_token.value)
self.instructions.append(JumpInstruction(value, instruction_set[value], target))
# Command in form <call> "(" <identifier|number> "," <number> "," <number>* ")"
elif type == 'call':
target_token = tokens[i + 2]
target = 0
if target_token.type == 'identifier' and target_token.value in self.jumps:
target = self.jumps[target_token.value]
elif target_token.type == 'number':
target = int(target_token.value)
parameter_count = int(tokens[i + 4].value)
parameter_index = i + 6
parameters = []
while tokens[parameter_index].type != 'closingParenthesis':
parameters.append(int(tokens[parameter_index].value))
parameter_index += 1
                self.instructions.append(CallInstruction(value, instruction_set[value], target, parameter_count, parameters))
| zsim-cli | /zsim-cli-0.6.1.tar.gz/zsim-cli-0.6.1/src/zsim/machine.py | machine.py |
from .parser import RecursiveDescent
from .instructions import Instruction, JumpInstruction, CallInstruction
class ZCodeStatementParser(RecursiveDescent):
    '''Parses and validates a single Z-Code statement.'''
    def __init__(self, tokens, instruction_set, instruction_counter=1, jump_targets=None):
super(ZCodeStatementParser, self).__init__(tokens)
self.instruction_set = instruction_set
self.instruction_counter = instruction_counter
        self.jump_targets = jump_targets if jump_targets is not None else {}
self.instruction = None
def program(self):
self.__statement()
def __statement(self):
self.__labels()
self.__command()
def __labels(self):
while self.accept('identifier'):
self.jump_targets[self.previous.value] = self.instruction_counter
self.expect('colon')
self.accept_many('newline')
def __command(self):
if self.accept('command0'):
self.instruction = Instruction(self.previous.value, self.instruction_set[self.previous.value])
self.expect('semicolon')
elif self.accept('command1'):
self.instruction = Instruction(self.previous.value, self.instruction_set[self.previous.value])
self.expect('number')
self.instruction.parameters = (int(self.previous.value),)
self.expect('semicolon')
elif self.accept('jmpcommand'):
self.instruction = JumpInstruction(self.previous.value, self.instruction_set[self.previous.value])
self.expect('number', 'identifier')
if self.previous.type == 'identifier':
target = 0
if self.previous.value in self.jump_targets:
target = self.jump_targets[self.previous.value]
else:
target = int(self.previous.value)
self.instruction.parameters = (target,)
self.expect('semicolon')
elif self.accept('call'):
self.instruction = CallInstruction(self.previous.value, self.instruction_set[self.previous.value])
self.expect('openingParenthesis')
self.expect('number', 'identifier')
target = self.previous.value
if self.previous.type == 'identifier':
target = 0
if self.previous.value in self.jump_targets:
target = self.jump_targets[self.previous.value]
self.expect('comma')
self.expect('number')
param_count = int(self.previous.value)
self.expect('comma')
params = []
while self.accept('number'):
params.append(int(self.previous.value))
self.instruction.parameters = (target, param_count, params)
self.expect('closingParenthesis')
            self.expect('semicolon')
| zsim-cli | /zsim-cli-0.6.1.tar.gz/zsim-cli-0.6.1/src/zsim/zcode_statement_parser.py | zcode_statement_parser.py |
import click
import json as _json
def print_command_help(instruction_set, command):
command_parts = command.split(' ')
if len(command_parts) > 1:
# help <command>
matching = list(filter(lambda instr: instr.upper().startswith(command_parts[1].strip().upper()), dir(instruction_set)))
if len(matching) > 0:
# Found at least one entry
for instruction in matching:
click.echo(f'{instruction} - { instruction_set[instruction].__doc__ }')
else:
# Not found
            click.echo(f'\'{command_parts[1]}\' does not seem to be part of this instruction set.')
else:
click.echo('Available commands:')
for instruction in dir(instruction_set):
click.echo(' ' + instruction)
click.secho(' help - Show this help message.', fg='yellow')
click.secho(' exit - Exit the interpreter.', fg='yellow')
click.secho(' labels - Prints all defined labels.', fg='yellow')
click.secho(' #noexec - Pushes a frame in which commands are not executed. Useful for function definition.', fg='yellow')
click.secho(' #exec - Pops a previously defined \'#noexec\' frame.', fg='yellow')
click.echo('Try \'help <command>\' for more information on a specific command.')
def interactive_mode(memory, instruction_set, format):
from . import tokenizer, ZCodeStatementParser
from .machine import Machine
machine = Machine(instruction_set, [], memory, format)
original = []
tokens = []
jump_targets = {}
no_exec = 0 # Allows for contexts to be pushed before executed, like functions with labels, etc.
click.secho('Welcome to the Z-Code interactive simulator! Try \'help\' for more information.', fg='yellow')
while True:
line = click.prompt(click.style('>' if no_exec == 0 else '#' * no_exec + '>', fg='yellow', bold=True), prompt_suffix=' ')
# Special interactive mode commands
if line.strip().lower().startswith('help'):
print_command_help(instruction_set, line.strip().lower())
continue
elif line.strip().lower() == 'exit':
click.echo(machine)
break
elif line.strip().lower() == 'labels':
for label in jump_targets:
click.echo(f'{label} -> {jump_targets[label]}')
continue
elif line.strip().lower() == '#noexec':
no_exec += 1
continue
elif line.strip().lower() == '#exec':
no_exec = max(0, no_exec - 1)
continue
# Parse and execute typed command
original += [line]
tokens += tokenizer.tokenize(line)
if any([t.type == 'semicolon' for t in tokens]):
parser = ZCodeStatementParser(tokens, instruction_set, machine.m, jump_targets)
if not parser.valid():
click.echo(f'{ click.style(parser.last_error, fg="red") }\nCannot execute command \'{ " ".join(original) }\'\n')
continue
original = []
tokens = []
machine.instructions.append(parser.instruction)
if no_exec == 0:
generator = machine.enumerate_run(reset=False)
next(generator)
for state in generator:
click.echo(machine)
else:
# To prevent continuing with the #noexec block we have to skip the just added instruction
machine.m += 1
def read_file(file, encoding='utf_8', buffer_size=1024):
if not file:
return
content = ''
chunk = file.read(buffer_size)
while chunk:
content += chunk.decode(encoding)
chunk = file.read(buffer_size)
return content
@click.group()
def entry_point():
pass
@click.command()
@click.option('--json/--no-json', default=False)
@click.option('-i', '--input', type=click.File('rb'))
@click.option('-it', '--interactive/--no-interactive', default=False, help='Run the simulator in an interactive interpreter mode.')
@click.option('-e','--encoding', type=str, default='utf_8', show_default=True, help='Encoding to use when using --input. Refer to "https://docs.python.org/3/library/codecs.html#standard-encodings" for possible values.') # click.Choice makes the help text a mess.
@click.option('-h', '--memory', type=str)
@click.option('--instructionset', type=click.Choice(['z', 'zfp', 'zds', 'zfpds'], case_sensitive=False), default='zfpds', show_default=True)
@click.option('--step/--no-step', default=False, help='Run step by step. Not compatible with --json')
@click.option('--full-output/--no-full-output', default=False, help='Show all states when simulating. Not compatible with --step')
@click.option('--format', default='({m}, {d}, {b}, {h}, {o})', show_default=True, type=str)
@click.argument('zcode', type=str, required=False)
@click.pass_context
def run(ctx, json, input, interactive, encoding, memory, instructionset, step, full_output, format, zcode):
'''Run ZCODE'''
from .instruction_sets import ZInstructions, ZFPInstructions, ZDSInstructions, ZFPDSInstructions
instructionset_map = {
'z': ZInstructions(),
'zfp': ZFPInstructions(),
'zds': ZDSInstructions(),
'zfpds': ZFPDSInstructions(),
}
if interactive:
interactive_mode(memory, instructionset_map[instructionset], format)
        return
if input:
zcode = read_file(input, encoding)
if not zcode:
click.echo(ctx.get_help())
return
from . import tokenizer
tokens = tokenizer.tokenize(zcode)
from . import ZCodeParser
parser = ZCodeParser(tokens)
success = parser.valid()
if not success:
if json:
output = { 'success': success, 'message': str(parser.last_error) }
click.echo(_json.dumps(output))
else:
output = str(parser.last_error)
click.echo(output)
return
from .machine import Machine
machine = Machine(instructionset_map[instructionset], tokens, memory, format)
if step:
while not machine.done():
next_instruction = machine.get_next_instruction()
state_string = f'{machine}{" next: " + next_instruction.string(machine) if next_instruction else "" }'
click.echo(state_string)
click.pause()
machine.step()
click.echo(machine)
return
if json:
states = [(state, machine[state.m]) for state in machine.enumerate_run()]
output = {
'success': True,
'instruction_set': instructionset,
'initial_memory': states[0][0].initial_h,
'code': zcode,
'final_state': {
'm': machine.m,
'd': machine.d,
'b': machine.b,
'h': machine.h,
'o': machine.o,
},
}
if full_output:
output['states'] = [
{
'state': {
'm': state_tuple[0].m,
'd': state_tuple[0].d,
'b': state_tuple[0].b,
'h': state_tuple[0].h,
'o': state_tuple[0].o,
},
'next_instruction': {
'command': state_tuple[1].string(machine),
'mnemonic': state_tuple[1].mnemonic,
'parameters': state_tuple[1].parameters,
} if state_tuple[1] else None
} for state_tuple in states
]
click.echo(_json.dumps(output))
else:
if full_output:
for state_string in map(lambda x: f'{x}{" next: " + x[x.m].string(x) if machine[x.m] else "" }', machine.enumerate_run()):
click.echo(state_string)
else:
machine.run()
click.echo(machine)
@click.command()
@click.option('--json/--no-json', default=False)
@click.option('-i', '--input', type=click.File('rb'))
@click.option('--encoding', type=str, default='utf_8', show_default=True, help='Encoding to use when using --input. Refer to "https://docs.python.org/3/library/codecs.html#standard-encodings" for possible values.') # click.Choice makes the help text a mess.
@click.argument('zcode', type=str, required=False)
@click.pass_context
def validate(ctx, json, input, encoding, zcode):
'''Validate ZCODE'''
if input:
zcode = read_file(input, encoding)
if not zcode:
click.echo(ctx.get_help())
return
from . import tokenizer
tokens = tokenizer.tokenize(zcode)
from . import ZCodeParser
parser = ZCodeParser(tokens)
success = parser.valid()
if json:
output = { 'success': success, 'message': 'Code is syntactically correct.' }
if not success:
output['message'] = str(parser.last_error)
click.echo(_json.dumps(output))
else:
output = str(success)
if not success:
output = str(parser.last_error)
click.echo(output)
@click.command()
@click.option('--json/--no-json', default=False)
@click.option('-i', '--input', type=click.File('rb'))
@click.option('--encoding', type=str, default='utf_8', show_default=True, help='Encoding to use when using --input. Refer to "https://docs.python.org/3/library/codecs.html#standard-encodings" for possible values.') # click.Choice makes the help text a mess.
@click.argument('zcode', type=str, required=False)
@click.pass_context
def tokenize(ctx, json, input, encoding, zcode):
'''Tokenize ZCODE'''
if input:
zcode = read_file(input, encoding)
if not zcode:
click.echo(ctx.get_help())
return
from . import tokenizer
tokens = tokenizer.tokenize(zcode)
if json:
output = _json.dumps(list(map(lambda x: {'type': x.type, 'value': x.value}, tokens)))
click.echo(output)
else:
output = list(map(lambda x: (x.type, x.value), tokens))
click.echo(output)
entry_point.add_command(run)
entry_point.add_command(validate)
entry_point.add_command(tokenize)
| zsim-cli | /zsim-cli-0.6.1.tar.gz/zsim-cli-0.6.1/src/zsim/cli.py | cli.py |
from .token_type import TokenType
from .token import Token
from typing import Iterator, List, Optional, Pattern
import re
class Tokenizer(object):
'''Splits an input string into a list of tokens'''
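    # A usage sketch, mirroring the memory scanner built in machine.py:
    #     scanner = (Tokenizer(eof=None)
    #                .token('number', r'0|-?[1-9][0-9]*')
    #                .skip(r'\s+'))
    #     scanner.tokenize('1 2')  # -> [Token('number', '1'), Token('number', '2')]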
def __init__(self, default: str = 'unknown', skip: Optional[str] = 'skip', eof: Optional[str] = 'eof'):
super(Tokenizer, self).__init__()
self.default = default
self.default_skip = skip
self.eof = eof
self.__types = []
def token(self, token_type: str, *regex_list: List[str]):
for regex in regex_list:
self.__types.append(TokenType(token_type, regex, False))
return self
def skip(self, regex):
self.__types.append(TokenType(self.default_skip, regex, True))
return self
def get_types(self) -> List[TokenType]:
return self.__types
def tokenize(self, text: str) -> List[Token]:
if not text:
return [ Token(self.default, None) ]
# Start with one token which contains the entire text as its value
tokens = [ Token(self.default, text) ]
# Iterate all known types
for token_type, regex, skip in self.__types:
compiled_regex = re.compile(regex)
if skip:
tokens = list(filter(lambda t: t.type != self.default_skip, self.__extract_token_type(tokens, compiled_regex, self.default_skip)))
else:
tokens = list(self.__extract_token_type(tokens, compiled_regex, token_type))
        # Add a separate EOF token if defined
if self.eof:
tokens.append(Token(self.eof, None))
return tokens
    def __extract_token_type(self, tokens: List[Token], regex: Pattern, token_type: str) -> Iterator[Token]:
for token in tokens:
# We only operate on non-default tokens, so yield the already processed ones
if token.type != self.default:
yield token
continue
# This token does not match, so we yield it unchanged
if not regex.search(token.value):
yield token
continue
index = 0
for match in regex.finditer(token.value):
# Yield everything before the match as a default token
if index < match.start():
yield Token(self.default, token.value[index:match.start()])
# Yield the new token with the match
yield Token(token_type, match[0])
index = match.end()
# Yield everything after the match as a default token
if index < len(token.value):
yield Token(self.default, token.value[index:])
def __str__(self) -> str:
return f'Tokenizer types: {", ".join(map(lambda x: x.type, self.__types))}'
def __repr__(self) -> str:
'''Returns representation of the object'''
properties = ('%s=%r' % (k, v) for k, v in self.__dict__.items() if not k.startswith('_'))
        return '%s(%s)' % (self.__class__.__name__, ', '.join(properties))
| zsim-cli | /zsim-cli-0.6.1.tar.gz/zsim-cli-0.6.1/src/zsim/parser/tokenizer.py | tokenizer.py |
from .token import Token
from .parse_exception import ParseException
from typing import List
from abc import ABC, abstractmethod
import inspect
class RecursiveDescent(ABC):
'''Implements the recursive descent pattern for parsing tokens.'''
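    # Subclasses implement program() in terms of accept()/expect(); a hypothetical
    # parser for comma-separated numbers could look like:
    #     class NumberListParser(RecursiveDescent):
    #         def program(self):
    #             self.expect('number')
    #             while self.accept('comma'):
    #                 self.expect('number')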
def __init__(self, tokens: List[Token]):
super(RecursiveDescent, self).__init__()
self.__tokens = tokens
self.__index = 0
self.last_error = None
@property
def token(self):
if self.__index < 0 or self.__index >= len(self.__tokens):
return
return self.__tokens[self.__index]
@property
def previous(self):
if self.__index > 0:
return self.__tokens[self.__index - 1]
@property
def next(self):
if self.__index + 1 < len(self.__tokens):
return self.__tokens[self.__index + 1]
@property
def index(self):
return self.__index
def accept(self, *token_types, advance=True):
if any([self.token and self.token.type == t for t in token_types]):
if advance:
self.advance()
return True
return False
def expect(self, *token_types, advance=True):
        if not self.accept(*token_types, advance=advance):
self.error(*token_types)
def error(self, *expected):
caller = inspect.stack()[1].function
if caller == 'expect':
caller = inspect.stack()[2].function
raise ParseException(list(expected), self.token, caller,
previous=self.__tokens[:self.__index],
next=self.__tokens[self.__index+1:-1])
    def accept_many(self, *token_types, amount=-1):
        found = 0
        while (amount < 0 or found < amount) and self.accept(*token_types):
            found += 1
        return found > 0
def expect_many(self, *token_types, required=-1):
found = 0
while found < required:
self.expect(*token_types)
found += 1
def advance(self):
if self.next:
self.__index += 1
else:
self.__index = -1
def valid(self):
self.__index = 0
self.last_error = None
try:
self.program()
return True
except ParseException as ex:
self.last_error = ex
return False
@abstractmethod
def program(self):
        pass
| zsim-cli | /zsim-cli-0.6.1.tar.gz/zsim-cli-0.6.1/src/zsim/parser/recursive_descent.py | recursive_descent.py |
from . import InstructionSet
class ZInstructions(InstructionSet):
'''Implements the basic Z-Code instructions'''
def __init__(self):
super(ZInstructions, self).__init__()
def LIT(self, machine, *parameters):
'''(LIT z)(m, d, b, h, o) = (m+1, z:d, b, h, o)'''
machine.d.append(parameters[0])
machine.m += 1
def POP(self, machine, *parameters):
'''(POP)(m, z:d, b, h, o) = (m+1, d, b, h, o)'''
machine.d.pop()
machine.m += 1
def NOP(self, machine, *parameters):
'''(NOP)(m, d, b, h, o) = (m+1, d, b, h, o)'''
machine.m += 1
def LOD(self, machine, *parameters):
'''(LOD n)(m, d, b, h, o) = (m+1, h(n):d, b, h, o)'''
addr = parameters[0]
machine.d.append(machine.h[addr])
machine.m += 1
def STO(self, machine, *parameters):
'''(STO n)(m, z:d, b, h, o) = (m+1, d, b, h[n/z], o)'''
addr = parameters[0]
machine.h[addr] = machine.d.pop()
machine.m += 1
def PRN(self, machine, *parameters):
'''(PRN)(m, z:d, b, h, o) = (m+1, d, b, h, oz\n)'''
machine.o += str(machine.d.pop()) + '\n'
machine.m += 1
def PRT(self, machine, *parameters):
'''(PRT)(m, z:d, b, h, o) = (m+1, d, b, h, oz)'''
machine.o += str(machine.d.pop())
machine.m += 1
def PRC(self, machine, *parameters):
        '''(PRC)(m, z:d, b, h, o) = (m+1, d, b, h, ochar(z))'''
machine.o += chr(machine.d.pop())
machine.m += 1
def ADD(self, machine, *parameters):
'''(ADD)(m, z1 z2:d, b, h, o) = (m+1, (z1+z2):d, b, h, o)'''
machine.d.append(machine.d.pop() + machine.d.pop())
machine.m += 1
def SUB(self, machine, *parameters):
'''(SUB)(m, z1 z2:d, b, h, o) = (m+1, (z2-z1):d, b, h, o)'''
z2 = machine.d.pop()
machine.d.append(machine.d.pop() - z2)
machine.m += 1
def MULT(self, machine, *parameters):
'''(MULT)(m, z1 z2:d, b, h, o) = (m+1, (z1*z2):d, b, h, o)'''
machine.d.append(int(machine.d.pop() * machine.d.pop()))
machine.m += 1
def DIV(self, machine, *parameters):
'''(DIV)(m, z1 z2:d, b, h, o) = (m+1, (z2/z1):d, b, h, o)'''
z2 = machine.d.pop()
machine.d.append(machine.d.pop() // z2)
machine.m += 1
def JMP(self, machine, *parameters):
'''(JMP n)(m, d, b, h, o) =
if n > 0 then (n, d, b, h, o)
if n < 0 then (stopaddr + n, d, b, h, o)
if n = 0 then (stopaddr, d, b, h, o)'''
if parameters[0] == 0:
machine.m = len(machine.instructions) + 1
elif parameters[0] < 0:
machine.m = len(machine.instructions) + parameters[0] + 1
else:
machine.m = parameters[0]
def JMC(self, machine, *parameters):
'''(JMC n)(m, b:d, b, h, o) =
if b = 0 then
if n > 0 then (n, d, b, h, o)
if n < 0 then (stopaddr + n, d, b, h, o)
if n = 0 then (stopaddr, d, b, h, o)
if b = 1 then (m+1, d, b, h, o)'''
if parameters[0] == 0:
target = len(machine.instructions) + 1
elif parameters[0] < 0:
target = len(machine.instructions) + parameters[0] + 1
else:
target = parameters[0]
top = machine.d.pop()
machine.m = target if top == 0 else machine.m + 1
def EQ(self, machine, *parameters):
'''(EQ)(m, z1 z2:d, b, h, o) =
if z2 EQ z1 = true then (m+1, 1:d, b, h, o)
if z2 EQ z1 = false then (m+1, 0:d, b, h, o)'''
z2 = machine.d.pop()
machine.d.append(1 if machine.d.pop() == z2 else 0)
machine.m += 1
def NE(self, machine, *parameters):
'''(NE)(m, z1 z2:d, b, h, o) =
if z2 NE z1 = true then (m+1, 1:d, b, h, o)
if z2 NE z1 = false then (m+1, 0:d, b, h, o)'''
z2 = machine.d.pop()
machine.d.append(1 if machine.d.pop() != z2 else 0)
machine.m += 1
def LT(self, machine, *parameters):
'''(LT)(m, z1 z2:d, b, h, o) =
if z2 LT z1 = true then (m+1, 1:d, b, h, o)
if z2 LT z1 = false then (m+1, 0:d, b, h, o)'''
z2 = machine.d.pop()
machine.d.append(1 if machine.d.pop() < z2 else 0)
machine.m += 1
def GT(self, machine, *parameters):
'''(GT)(m, z1 z2:d, b, h, o) =
if z2 GT z1 = true then (m+1, 1:d, b, h, o)
if z2 GT z1 = false then (m+1, 0:d, b, h, o)'''
z2 = machine.d.pop()
machine.d.append(1 if machine.d.pop() > z2 else 0)
machine.m += 1
def LE(self, machine, *parameters):
'''(LE)(m, z1 z2:d, b, h, o) =
if z2 LE z1 = true then (m+1, 1:d, b, h, o)
if z2 LE z1 = false then (m+1, 0:d, b, h, o)'''
z2 = machine.d.pop()
machine.d.append(1 if machine.d.pop() <= z2 else 0)
machine.m += 1
def GE(self, machine, *parameters):
'''(GE)(m, z1 z2:d, b, h, o) =
if z2 GE z1 = true then (m+1, 1:d, b, h, o)
if z2 GE z1 = false then (m+1, 0:d, b, h, o)'''
z2 = machine.d.pop()
machine.d.append(1 if machine.d.pop() >= z2 else 0)
        machine.m += 1
| zsim-cli | /zsim-cli-0.6.1.tar.gz/zsim-cli-0.6.1/src/zsim/instruction_sets/z_instructions.py | z_instructions.py |
[](https://travis-ci.org/AtteqCom/zsl)
[](https://coveralls.io/github/AtteqCom/zsl?branch=master)
# ZSL - z' service layer
ZSL is a Python micro-framework utilizing
[dependency injection](https://en.wikipedia.org/wiki/Dependency_injection)
for creating service applications on top of
[Flask](https://flask.palletsprojects.com/en/1.1.x/) web framework and
[Gearman](http://gearman.org/) job server or
[Celery](http://www.celeryproject.org/) task queue.
## Motivation
We developed ZSL to modernize our workflow for maintaining our clients'
applications, mostly web applications written in various older CMS solutions,
without the need to rewrite them significantly. With ZSL we can write our new
components in Python, with one coherent shared codebase, accessible through
Gearman or JavaScript. The same code can also be called through various
endpoints - web or task queue nowadays.
## Disclaimer
At the current stage this should be taken as a proof of concept. We don't
recommend running it in any production environment except ours. It is too rigid,
with minimal test coverage and lots of bad code practices. We open sourced it as
a way of motivating ourselves to make it better.
## Installation
We recommend installing it through [PyPi](https://pypi.org/) and running it in
a [virtualenv](https://docs.python.org/3/library/venv.html) or
[docker](https://www.docker.com/) container.
```bash
$ pip install zsl
```
## Getting started
For now it is a bit cumbersome to get it running. It has inherited settings
through ENV variables from Flask and has a rigid directory structure like Django
apps. On top of that, it needs a database and Redis.
The minimum application layout has to contain:
```
.
├── app # application sources
│ ├── __init__.py
│ └── tasks # public tasks
│ ├── hello.py
│ └── __init__.py
├── settings # settings
│ ├── app_settings.cfg
│ ├── default_settings.py
│ └── __init__.py
└── tests
```
```bash
$ export ZSL_SETTINGS=`pwd`/settings/app_settings.cfg
```
```python
# settings/app_settings.cfg
TASKS = TaskConfiguration()\
.create_namespace('task')\
.add_packages(['app.tasks'])\
.get_configuration()
RESOURCE_PACKAGE = ()
DATABASE_URI = 'postgresql://postgres:postgres@localhost/postgres'
DATABASE_ENGINE_PROPS = {}
SERVICE_INJECTION = ()
REDIS = {
'host': 'localhost',
'port': 6379,
'db': 0
}
RELOAD = True
```
```python
# hello.py
class HelloWorldTask(object):
def perform(self, data):
return "Hello World"
```
```bash
$ python -m zsl web
* Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)
```
```bash
$ curl http://localhost:5000/task/hello/hello_world_task
Hello world!
```
## Deploying
A deploy happens upon pushing a new tag to GitLab.
### Creating new tag/version
Use [bump2version](https://github.com/c4urself/bump2version) to update the version in the config files. It will also create a commit and a new tag.
```bash
$ bumpversion --new-version ${VERSION} {major|minor|patch} --tag-name ${VERSION}
```
Version names use [semver](https://semver.org/) and start with a number.
### Pipeline
The current pipeline mirrors the previous Travis runs. It runs each tox target separately, and a tag push triggers a deploy.
#### Tox Docker image
The GitLab pipeline runs inside a Docker image which is defined in `docker/Dockerfile.tox`. Currently we configure, build and push it to the GitLab container registry manually. To update the container, follow these steps.
When pushing for the first time, you have to create an access token and log in to the Atteq GitLab container registry.
Go to https://gitlab.atteq.com/atteq/z-service-layer/zsl/-/settings/access_tokens and create a token to read/write to registry. Then run
`docker login registry.gitlab.atteq.com:443`
To build/push the image:
1. Build the image locally.
`docker build -t zsl/tox-env -f docker/Dockerfile.tox .`
2. Tag image.
`docker tag zsl/tox-env registry.gitlab.atteq.com:443/atteq/z-service-layer/zsl/tox-env:latest`
3. Push image.
`docker push registry.gitlab.atteq.com:443/atteq/z-service-layer/zsl/tox-env:latest`
4. Update the image hash in `.gitlab-ci.yml` (copy it from the build output or from `docker images --digests`).
Getting started
===============
Installation
------------
We recommend installing it through `PyPi <https://pypi.org/>`_ and
running it in a `virtualenv <https://docs.python.org/3/library/venv.html>`_ or
`docker <https://www.docker.com/>`_ container.
.. code-block:: console
$ pip install zsl
Hello world app
---------------
For now it is a bit cumbersome to get it running. It has inherited settings
through ENV variables from Flask and has a rigid directory structure like Django
apps. On top of that, it needs a database and Redis.
The minimum application layout has to contain:
.. code-block:: console
.
├── app # application sources
│ ├── __init__.py
│ └── tasks # public tasks
│ ├── hello.py
│ └── __init__.py
├── settings # settings
│ ├── app_settings.cfg
│ ├── default_settings.py
│ └── __init__.py
└── tests
.. code-block:: console
$ export ZSL_SETTINGS=`pwd`/settings/app_settings.cfg
The minimum configuration has to contain these values:
::
# settings/app_settings.cfg
TASKS = TaskConfiguration()\
.create_namespace('task')\
.add_packages(['app.tasks'])\
.get_configuration()
RESOURCE_PACKAGE = ()
DATABASE_URI = 'postgresql://postgres:postgres@localhost/postgres'
DATABASE_ENGINE_PROPS = {}
SERVICE_INJECTION = ()
REDIS={
'host': 'localhost',
'port': 6379,
'db': 0
}
RELOAD=True
::
# hello.py
class HelloWorldTask(object):
def perform(self, data):
return "Hello World"
.. code-block:: console
$ python -m zsl web
* Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)
.. code-block:: console
$ curl http://localhost:5000/task/hello/hello_world_task
Hello world!
| zsl | /zsl-0.28.1.tar.gz/zsl-0.28.1/docs/getting_started.rst | getting_started.rst |
Developing ZSL
##############
Documentation
=============
Creating the documentation is easy. The requirements for generating
documentation are in the `documentation` extra of `zsl`. Just install
`sphinx` and the mentioned dependencies if required and run
the following.
.. code-block:: console
$ tox -e docs
Running ZSL unit tests
======================
To run all of the ZSL unit tests one should start
.. code-block:: console
$ tox
To run only a selected test, e.g. tests in :mod:`tests.resource.guarded_resource_test`:
.. code-block:: console
$ cd zsl
$ python -m unittest tests.resource.guarded_resource_test
| zsl | /zsl-0.28.1.tar.gz/zsl-0.28.1/docs/developing.rst | developing.rst |
.. zsl documentation master file, created by
sphinx-quickstart on Tue Dec 20 16:26:44 2016.
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.
ZSL - z' service layer
======================
ZSL is a Python 2.7 micro-framework utilizing
`dependency injection <https://en.wikipedia.org/wiki/Dependency_injection>`_
for creating service applications on top of `Flask <https://flask.palletsprojects.com/en/1.1.x/>`_
and `Gearman <http://gearman.org/>`_ job server.
Motivation
##########
We developed ZSL to modernize our workflow for maintaining our clients' web
applications written in various older CMS solutions without the need to rewrite
them significantly. With ZSL we can write our new components in Python, with one
coherent shared codebase, accessible through Gearman or JavaScript.
Disclaimer
##########
At the current stage this should be taken as a proof of concept. We don't
recommend running it in any production environment except ours. It is too rigid,
with minimal test coverage and lots of bad code practices. We open sourced it as
a way of motivating ourselves to make it better.
.. toctree::
:glob:
:maxdepth: 1
getting_started
configuration
modules_and_containers
error_handling
unit_testing
extending
message_queues
caching
database
api
developing
Indices and tables
==================
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`
| zsl | /zsl-0.28.1.tar.gz/zsl-0.28.1/docs/index.rst | index.rst |
Modules and Containers
######################
Modules and Containers are the main way how to:
* provide services,
* master dependency injection,
* extend ZSL.
Each module can provide objects via dependency injection. Thus, if a module
is added to the application container, the dependency injection mechanism may
inject the objects provided by that module, as sketched below.
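A minimal sketch of such a module, assuming the ``Module``/``Binder`` API from
the ``injector`` library used in the error handling chapter
(``StorageService`` is a hypothetical service of your application)::

    from injector import Binder, Module, singleton

    class StorageService:
        '''Hypothetical service provided by the module.'''

    class StorageModule(Module):
        def configure(self, binder: Binder) -> None:
            # Bind the service so it can be injected elsewhere.
            binder.bind(StorageService, scope=singleton)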
Services
========
Instead of manually providing services via a module, there is a mechanism to
automatically provide services via injection. If they are placed inside the
configured service package, they are automatically created and injected, as in
the example below.
Error handling
##############
Custom error handlers may be added using the following code.::
class ForbiddenExceptionErrorHandler(ErrorHandler):
def can_handle(self, e):
return isinstance(e, ForbiddenException)
def handle(self, e):
return "Forbidden!"
class FileNotFoundExceptionErrorHandler(ErrorHandler):
def can_handle(self, e):
return isinstance(e, FileNotFoundException)
def handle(self, e):
return "FileResource not found!"
class DefaultErrorHandler(ErrorHandler):
def can_handle(self, e):
return isinstance(e, Exception)
def handle(self, e):
@crossdomain()
def error_handler():
return "error"
return error_handler()
class ErrorHandlerModule(Module):
def configure(self, binder: Binder) -> None:
register(FileNotFoundExceptionErrorHandler())
register(ForbiddenExceptionErrorHandler())
register(DefaultErrorHandler())
The ``register`` function is best called when a module is created. To set the status code of a web response, use
:class:`zsl.task.job_context.StatusCodeResponder`.
| zsl | /zsl-0.28.1.tar.gz/zsl-0.28.1/docs/error_handling.rst | error_handling.rst |
Configuration
#############
A global configuration is required to exists as a package
``settings.default_settings``. Per-installation/environment configuration is possible.
A file with suffix `cfg` and name set in environment variable ``ZSL_SETTINGS`` is needed.
Environment variables
---------------------
* ``ZSL_SETTINGS``
Location of the per-installation configuration file (with \*.cfg suffix).
Required fields
---------------
* ``TASKS``
The task router will search for tasks according to the task configuration.
The object under this variable is an instance of
:class:`zsl.router.task.TaskConfiguration`. The users register the packages
holding task classes under their URLs. See the example app and the following
example for more.
If we have a ``FooApiTask`` in the module ``myapp.tasks.api.v1.foo_api_task``, then
we register it under the URL `/api/v1` as follows::
TASKS = TaskConfiguration()\
.create_namespace('api/v1')\
.add_packages(['myapp.tasks.api.v1'])\
.get_configuration()
Notice that all the tasks in the modules of the package
``myapp.tasks.api.v1`` will be routed under the `/api/v1` URL. If one needs
more detailed routing, the ``add_routes`` method is much more convenient.
* ``RESOURCE_PACKAGES``
List of packages with resources. The resource router will search any resource
in these packages in the given order.
* ``DATABASE_URI``
Database URL for `SQLAlchemy <https://www.sqlalchemy.org/>`_'s
`create_engine <https://docs.sqlalchemy.org/en/14/core/engines.html#sqlalchemy.create_engine>`_.
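For example (host and credentials are placeholders)::

    DATABASE_URI = 'postgresql://postgres:postgres@localhost/postgres'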
* ``DATABASE_ENGINE_PROPS``
A dictionary of optional properties for the DB connection.
* ``SERVICE_INJECTION``
List of services initialized and bound to the injector after start.::
SERVICE_INJECTION = ({
'list': ['AccountService', 'BuildService'],
'package': 'app.services'
})
* ``REDIS``
`Redis <https://redis-py.readthedocs.io/en/latest/index.html>`_ configuration.
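For example::

    REDIS = {
        'host': 'localhost',
        'port': 6379,
        'db': 0
    }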
Optional fields
---------------
* ``RELOAD``
Reload tasks on every call. Especially useful when debugging.
* ``DEBUG``
Set the debug mode - ``True`` or ``False``.
* ``LOGGING``
Logging settings are specified in `LOGGING` variable as a python dictionary.
ZSL uses python logging as the logging infrastructure and the configuration
is done accordingly.
The concrete options are specified in Python Logging library in the part
called dictionary configuration, just check the `logging.config
<https://docs.python.org/3/library/logging.config.html#module-logging.config>`_
module documentation for more. An example is here.::
LOGGING = {
'version': 1,
'formatters': {
'default': {
'format': '%(levelname)s %(name)s %(asctime)-15s %(message)s'
}
},
'handlers': {
'console': {
'class': 'logging.StreamHandler',
'formatter': 'default'
},
},
'loggers': {
'storage': {
'handlers': ['console'],
'level': 'DEBUG',
'propagate': True,
}
},
'root': {
'handlers': ['console'],
'level': 'DEBUG',
'propagate': True,
}
}
Because ZSL builds on Flask, one has to provide the `import_name` parameter to the
:class:`zsl.application.service_application.ServiceApplication` object
constructor. This `import_name` is also the name of the root logger used
in the application. Thus it is convenient to choose the root package
name of the project for it.
* ``EXTERNAL_LIBRARIES``
Add external libraries to the path. This option will be deprecated;
use a proper virtualenv environment instead.::
EXTERNAL_LIBRARIES = {
'vendor_path': './vendor'
'libs': ['my_lib_dir1', 'my_lib_dir2']
}
* ``CORS``
The configuration containing the default CORS/crossdomain settings. Check
:class:`zsl.application.modules.web.cors.CORSConfiguration`. The available
options are:
* `origin`,
* `allow_headers`,
* `expose_headers`,
* `max_age`.
Check CORS explanation on `Wikipedia <https://en.wikipedia.org/wiki/Cross-origin_resource_sharing>`_.
| zsl | /zsl-0.28.1.tar.gz/zsl-0.28.1/docs/configuration.rst | configuration.rst |
from builtins import str
# noinspection PyCompatibility
from builtins import range
# noinspection PyCompatibility
from builtins import object
from abc import abstractmethod
import hashlib
import json
import random
import requests
import gearman
ALLOWED_CHARACTERS = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789abcdefghijklmnopqrstuvwxyz'
def _random_string(length, allowed_characters=ALLOWED_CHARACTERS):
return ''.join(random.choice(allowed_characters) for _ in range(length))
class Task(object):
@abstractmethod
def get_name(self):
pass
name = property(get_name)
@abstractmethod
def get_data(self):
pass
data = property(get_data)
class RawTask(Task):
def __init__(self, name, data):
self._name = name
self._data = data
def get_name(self):
return self._name
name = property(get_name)
def get_data(self):
return self._data
data = property(get_data)
class TaskResult(object):
@abstractmethod
def get_task(self):
pass
task = property(get_task)
@abstractmethod
def get_result(self):
pass
result = property(get_result)
class RawTaskResult(TaskResult):
def __init__(self, task, result):
assert isinstance(task, Task)
self._task = task
self._result = result
def get_task(self):
return self._task
def get_result(self):
return self._result
class TaskDecorator(Task):
def __init__(self, task):
self._task = task
def get_name(self):
return self._task.get_name()
def get_data(self):
return self._task.get_data()
class JsonTask(TaskDecorator):
def get_name(self):
return TaskDecorator.get_name(self)
def get_data(self):
data = self._task.get_data()
return json.dumps(data)
class SecuredTask(TaskDecorator):
def get_name(self):
return TaskDecorator.get_name(self)
def set_asl(self, asl):
self._asl = asl
def get_data(self):
random_token = _random_string(16)
return {
"data": self._task.get_data(),
"security": {
"random_token": random_token,
"hashed_token": hashlib.sha1(random_token + self._asl.get_secure_token()).hexdigest().upper()
}
}
class TaskResultDecorator(object):
def __init__(self, task_result):
assert isinstance(task_result, TaskResult)
self._task_result = task_result
class JsonTaskResult(TaskResult, TaskResultDecorator):
def get_result(self):
result = self._task_result.get_result()
return json.loads(result)
class ErrorTaskResult(TaskResult, TaskResultDecorator):
def get_complete_result(self):
result = self._task_result.get_result()
return result
def get_result(self):
result = self._task_result.get_result()
return json.loads(result['data'])
def is_error(self):
result = self._task_result.get_result()
return True if 'error' in result else False
def get_error(self):
return self._task_result.get_result()['error']
class Service(object):
def __init__(self):
self._security_config = {}
@abstractmethod
def _inner_call(self, name, data):
"""
Make request to service layer and returns response to this request.
:param name: name of the task
:type name: str
:param data: task data
:return response to task request on service layer
"""
pass
def call(self, task, decorators=None):
"""
Call given task on service layer.
:param task: task to be called. task will be decorated with
TaskDecorator's contained in 'decorators' list
:type task: instance of Task class
:param decorators: list of TaskDecorator's / TaskResultDecorator's
inherited classes
:type decorators: list
:return task_result: result of task call decorated with TaskResultDecorator's
contained in 'decorators' list
:rtype TaskResult instance
"""
if decorators is None:
decorators = []
task = self.apply_task_decorators(task, decorators)
data = task.get_data()
name = task.get_name()
result = self._inner_call(name, data)
task_result = RawTaskResult(task, result)
return self.apply_task_result_decorators(task_result, decorators)
def apply_task_decorators(self, task, decorators):
for d in decorators:
if TaskDecorator in d.__bases__:
task = d(task)
if hasattr(task, 'set_asl'):
task.set_asl(self)
return task
@staticmethod
def apply_task_result_decorators(task_result, decorators):
for d in decorators:
if TaskResultDecorator in d.__bases__:
task_result = d(task_result)
return task_result
@property
def security_token(self):
return self._security_config['SECURITY_TOKEN']
@security_token.setter
def security_token(self, value):
self._security_config['SECURITY_TOKEN'] = value
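# Example usage (illustrative sketch; the host list format and the task/path
# names below are assumptions, not taken from this module):
#   service = GearmanService({'HOST': ['localhost:4730'], 'TASK_NAME': 'zsl-task'})
#   task = RawTask('user/login', {'username': 'john'})
#   task_result = service.call(task, [JsonTask, JsonTaskResult])
#   print(task_result.get_result())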
class GearmanService(Service):
def __init__(self, gearman_config, security_config=None):
super(GearmanService, self).__init__()
self._gearman_config = gearman_config
self._security_config = security_config
self._gearman_client = gearman.client.GearmanClient(self._gearman_config['HOST'])
self._blocking_status = True
def set_blocking(self, blocking_status):
self._blocking_status = blocking_status
def _inner_call(self, name, data):
if data is None:
data = "null"
elif not isinstance(data, str):
data = str(data)
completed_job_request = self._gearman_client.submit_job(
self._gearman_config['TASK_NAME'],
json.dumps({
'path': name,
'data': data
}),
background=not self._blocking_status
)
if self._blocking_status:
return completed_job_request.result
class WebService(Service):
def __init__(self, web_config, security_config):
super(WebService, self).__init__()
self._web_config = web_config
self._security_config = security_config
self._service_layer_url = self._web_config['SERVICE_LAYER_URL']
def get_service_layer_url(self):
return self._service_layer_url
def _inner_call(self, name, data):
if data is None:
data = "null"
elif not isinstance(data, str):
data = str(data)
req = requests.post(self._service_layer_url + name, json=data)
        return req.text
| zsl_client | /zsl_client-0.9.tar.gz/zsl_client-0.9/zsl_client.py | zsl_client.py
========
Overview
========
.. start-badges
.. list-table::
:stub-columns: 1
* - docs
- |docs|
* - tests
- | |travis|
| |coveralls| |codecov|
* - package
- | |version| |wheel| |supported-versions| |supported-implementations|
| |commits-since|
.. |docs| image:: https://readthedocs.org/projects/zsl_jwt/badge/?style=flat
:target: https://readthedocs.org/projects/zsl_jwt
:alt: Documentation Status
.. |travis| image:: https://travis-ci.org/AtteqCom/zsl_jwt.svg?branch=master
:alt: Travis-CI Build Status
:target: https://travis-ci.org/AtteqCom/zsl_jwt
.. |coveralls| image:: https://coveralls.io/repos/AtteqCom/zsl_jwt/badge.svg?branch=master&service=github
:alt: Coverage Status
:target: https://coveralls.io/r/AtteqCom/zsl_jwt
.. |codecov| image:: https://codecov.io/github/AtteqCom/zsl_jwt/coverage.svg?branch=master
:alt: Coverage Status
:target: https://codecov.io/github/AtteqCom/zsl_jwt
.. |version| image:: https://img.shields.io/pypi/v/zsl-jwt.svg
:alt: PyPI Package latest release
:target: https://pypi.python.org/pypi/zsl-jwt
.. |commits-since| image:: https://img.shields.io/github/commits-since/AtteqCom/zsl_jwt/v0.1.7.svg
:alt: Commits since latest release
:target: https://github.com/AtteqCom/zsl_jwt/compare/v0.1.7...master
.. |wheel| image:: https://img.shields.io/pypi/wheel/zsl-jwt.svg
:alt: PyPI Wheel
:target: https://pypi.python.org/pypi/zsl-jwt
.. |supported-versions| image:: https://img.shields.io/pypi/pyversions/zsl-jwt.svg
:alt: Supported versions
:target: https://pypi.python.org/pypi/zsl-jwt
.. |supported-implementations| image:: https://img.shields.io/pypi/implementation/zsl-jwt.svg
:alt: Supported implementations
:target: https://pypi.python.org/pypi/zsl-jwt
.. end-badges
JWT implementation for the ZSL framework. This module adds security
capabilities to ZSL.
* Free software: BSD license
Installation
============
Just add `zsl_jwt` to your requirements or use
::
pip install zsl-jwt
Usage
=====
Add `zsl_jwt.module.JWTModule` to the modules in your `IoCContainer`
and provide a `zsl_jwt.configuration.JWTConfiguration` in your
configuration under the `JWT` variable.
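For example, a minimal sketch (the container class and attribute name are
illustrative; only `JWTModule` and `JWTConfiguration` come from this
package)::
    from zsl_jwt.module import JWTModule
    from zsl_jwt.configuration import JWTConfiguration
    class MyContainer(IoCContainer):
        jwt = JWTModule
    # and in your configuration
    JWT = JWTConfiguration(...)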
Documentation
=============
See more in https://zsl_jwt.readthedocs.io/
Development
===========
To run all the tests run::
tox
Note, to combine the coverage data from all the tox environments run:
.. list-table::
:widths: 10 90
:stub-columns: 1
- - Windows
- ::
set PYTEST_ADDOPTS=--cov-append
tox
- - Other
- ::
PYTEST_ADDOPTS=--cov-append tox
| zsl_jwt | /zsl_jwt-0.1.7.tar.gz/zsl_jwt-0.1.7/README.rst | README.rst |
============
Contributing
============
Contributions are welcome, and they are greatly appreciated! Every
little bit helps, and credit will always be given.
Bug reports
===========
When `reporting a bug <https://github.com/AtteqCom/zsl_jwt/issues>`_ please include:
* Your operating system name and version.
* Any details about your local setup that might be helpful in troubleshooting.
* Detailed steps to reproduce the bug.
Documentation improvements
==========================
zsl_jwt could always use more documentation, whether as part of the
official zsl_jwt docs, in docstrings, or even on the web in blog posts,
articles, and such.
Feature requests and feedback
=============================
The best way to send feedback is to file an issue at https://github.com/AtteqCom/zsl_jwt/issues.
If you are proposing a feature:
* Explain in detail how it would work.
* Keep the scope as narrow as possible, to make it easier to implement.
* Remember that this is a volunteer-driven project, and that code contributions are welcome :)
Development
===========
To set up `zsl_jwt` for local development:
1. Fork `zsl_jwt <https://github.com/AtteqCom/zsl_jwt>`_
(look for the "Fork" button).
2. Clone your fork locally::
git clone [email protected]:your_name_here/zsl_jwt.git
3. Create a branch for local development::
git checkout -b name-of-your-bugfix-or-feature
Now you can make your changes locally.
4. When you're done making changes, run all the checks, the doc builder and the spell checker with one `tox <http://tox.readthedocs.io/en/latest/install.html>`_ command::
tox
5. Commit your changes and push your branch to GitHub::
git add .
git commit -m "Your detailed description of your changes."
git push origin name-of-your-bugfix-or-feature
6. Submit a pull request through the GitHub website.
Pull Request Guidelines
-----------------------
If you need some code review or feedback while you're developing the code, just open the pull request.
For merging, you should:
1. Include passing tests (run ``tox``) [1]_.
2. Update documentation when there's new API, functionality etc.
3. Add a note to ``CHANGELOG.rst`` about the changes.
4. Add yourself to ``AUTHORS.rst``.
.. [1] If you don't have all the necessary python versions available locally you can rely on Travis - it will
`run the tests <https://travis-ci.org/AtteqCom/zsl_jwt/pull_requests>`_ for each change you add in the pull request.
It will be slower though ...
Tips
----
To run a subset of tests::
tox -e envname -- py.test -k test_myfeature
To run all the test environments in *parallel* (you need to ``pip install detox``)::
detox
| zsl_jwt | /zsl_jwt-0.1.7.tar.gz/zsl_jwt-0.1.7/CONTRIBUTING.rst | CONTRIBUTING.rst |
API reference of :mod:`zsl_jwt`
===============================
.. testsetup::
from zsl_jwt import *
.. automodule:: zsl_jwt
:members:
:undoc-members:
.. automodule:: zsl_jwt.codec
:members:
:undoc-members:
.. automodule:: zsl_jwt.configuration
:members:
:undoc-members:
.. automodule:: zsl_jwt.module
:members:
:undoc-members:
.. automodule:: zsl_jwt.auth.configuration
:members:
:undoc-members:
.. automodule:: zsl_jwt.auth.module
:members:
:undoc-members:
.. automodule:: zsl_jwt.auth.service
:members:
:undoc-members:
.. automodule:: zsl_jwt.auth.controller
:members:
:undoc-members:
| zsl_jwt | /zsl_jwt-0.1.7.tar.gz/zsl_jwt-0.1.7/docs/reference/zsl_jwt.rst | zsl_jwt.rst |
from __future__ import absolute_import, print_function, unicode_literals
import os
import sys
from os.path import abspath
from os.path import dirname
from os.path import exists
from os.path import join
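# Regenerates the top-level config files from the templates in ci/templates,
# driven by the tox environment matrix declared in setup.cfg.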
if __name__ == "__main__":
base_path = dirname(dirname(abspath(__file__)))
print("Project path: {0}".format(base_path))
env_path = join(base_path, ".tox", "bootstrap")
if sys.platform == "win32":
bin_path = join(env_path, "Scripts")
else:
bin_path = join(env_path, "bin")
if not exists(env_path):
import subprocess
print("Making bootstrap env in: {0} ...".format(env_path))
try:
subprocess.check_call(["virtualenv", env_path])
except subprocess.CalledProcessError:
subprocess.check_call([sys.executable, "-m", "virtualenv", env_path])
print("Installing `jinja2` and `matrix` into bootstrap environment...")
subprocess.check_call([join(bin_path, "pip"), "install", "jinja2", "matrix"])
    activate = join(bin_path, "activate_this.py")
    # noinspection PyCompatibility
    with open(activate, "rb") as fh:
        exec(compile(fh.read(), activate, "exec"), dict(__file__=activate))
import jinja2
import matrix
jinja = jinja2.Environment(
loader=jinja2.FileSystemLoader(join(base_path, "ci", "templates")),
trim_blocks=True,
lstrip_blocks=True,
keep_trailing_newline=True
)
tox_environments = {}
for (alias, conf) in matrix.from_file(join(base_path, "setup.cfg")).items():
python = conf["python_versions"]
deps = conf["dependencies"]
tox_environments[alias] = {
"python": "python" + python if "py" not in python else python,
"deps": deps.split(),
}
if "coverage_flags" in conf:
cover = {"false": False, "true": True}[conf["coverage_flags"].lower()]
tox_environments[alias].update(cover=cover)
if "environment_variables" in conf:
env_vars = conf["environment_variables"]
tox_environments[alias].update(env_vars=env_vars.split())
for name in os.listdir(join("ci", "templates")):
with open(join(base_path, name), "w") as fh:
fh.write(jinja.get_template(name).render(tox_environments=tox_environments))
print("Wrote {}".format(name))
print("DONE.") | zsl_jwt | /zsl_jwt-0.1.7.tar.gz/zsl_jwt-0.1.7/ci/bootstrap.py | bootstrap.py |
========
Overview
========
.. start-badges
.. list-table::
:stub-columns: 1
* - docs
- |docs|
* - tests
- | |travis|
| |coveralls| |codecov|
* - package
- | |version| |wheel| |supported-versions| |supported-implementations|
| |commits-since|
.. |docs| image:: https://readthedocs.org/projects/zsl_openapi/badge/?style=flat
:target: https://readthedocs.org/projects/zsl_openapi
:alt: Documentation Status
.. |travis| image:: https://travis-ci.org/AtteqCom/zsl_openapi.svg?branch=master
:alt: Travis-CI Build Status
:target: https://travis-ci.org/AtteqCom/zsl_openapi
.. |coveralls| image:: https://coveralls.io/repos/AtteqCom/zsl_openapi/badge.svg?branch=master&service=github
:alt: Coverage Status
:target: https://coveralls.io/r/AtteqCom/zsl_openapi
.. |codecov| image:: https://codecov.io/github/AtteqCom/zsl_openapi/coverage.svg?branch=master
:alt: Coverage Status
:target: https://codecov.io/github/AtteqCom/zsl_openapi
.. |version| image:: https://img.shields.io/pypi/v/zsl-openapi.svg
:alt: PyPI Package latest release
:target: https://pypi.python.org/pypi/zsl-openapi
.. |commits-since| image:: https://img.shields.io/github/commits-since/AtteqCom/zsl_openapi/v0.2.0.svg
:alt: Commits since latest release
:target: https://github.com/AtteqCom/zsl_openapi/compare/v0.2.0...master
.. |wheel| image:: https://img.shields.io/pypi/wheel/zsl-openapi.svg
:alt: PyPI Wheel
:target: https://pypi.python.org/pypi/zsl-openapi
.. |supported-versions| image:: https://img.shields.io/pypi/pyversions/zsl-openapi.svg
:alt: Supported versions
:target: https://pypi.python.org/pypi/zsl-openapi
.. |supported-implementations| image:: https://img.shields.io/pypi/implementation/zsl-openapi.svg
:alt: Supported implementations
:target: https://pypi.python.org/pypi/zsl-openapi
.. end-badges
Generate an OpenAPI specification for your models and API from your ZSL service. This module scans the given packages
for persistent models and generates model definitions from them.
The full API (paths) may be declared manually.
* Free software: BSD license
Installation
============
::
pip install zsl-openapi
How to use
==========
Define container with the `zsl_openapi.module.OpenAPIModule`.
::
class MyContainer(WebContainer):
open_api = OpenAPIModule
Then you may use CLI `open_api` command.
::
python app.py \
open_api generate \
--package storage.models.persistent \
--output api/openapi_spec_full.yml \
--description api/openapi_spec.yml
See more in the documentation mentioned below.
Documentation
=============
https://zsl_openapi.readthedocs.io/
Development
===========
Set up a virtualenv using Python 2.7 and activate it. To install all the development requirements run::
pip install -r requirements.txt
To run all the tests run::
tox
Note, to combine the coverage data from all the tox environments run:
.. list-table::
:widths: 10 90
:stub-columns: 1
- - Windows
- ::
set PYTEST_ADDOPTS=--cov-append
tox
- - Other
- ::
PYTEST_ADDOPTS=--cov-append tox
| zsl_openapi | /zsl_openapi-0.2.0.tar.gz/zsl_openapi-0.2.0/README.rst | README.rst |
============
Contributing
============
Contributions are welcome, and they are greatly appreciated! Every
little bit helps, and credit will always be given.
Bug reports
===========
When `reporting a bug <https://github.com/AtteqCom/zsl_openapi/issues>`_
please include:
* Your operating system name and version.
* Any details about your local setup that might be helpful in
troubleshooting.
* Detailed steps to reproduce the bug.
Documentation improvements
==========================
zsl_openapi could always use more documentation, whether as part of the
official zsl_openapi docs, in docstrings, or even on the web in blog posts,
articles, and such.
Feature requests and feedback
=============================
The best way to send feedback is to file an issue at
https://github.com/AtteqCom/zsl_openapi/issues.
If you are proposing a feature:
* Explain in detail how it would work.
* Keep the scope as narrow as possible, to make it easier to implement.
* Remember that this is a volunteer-driven project, and that code contributions
are welcome :)
Development
===========
To set up `zsl_openapi` for local development:
1. Fork `zsl_openapi <https://github.com/AtteqCom/zsl_openapi>`_
(look for the "Fork" button).
2. Clone your fork locally::
git clone [email protected]:your_name_here/zsl_openapi.git
3. Create a branch for local development::
git checkout -b name-of-your-bugfix-or-feature
Now you can make your changes locally.
4. When you're done making changes, run all the checks, the doc builder and the
   spell checker with one `tox <http://tox.readthedocs.io/en/latest/install.html>`_
   command::
tox
5. Commit your changes and push your branch to GitHub::
git add .
git commit -m "Your detailed description of your changes."
git push origin name-of-your-bugfix-or-feature
6. Submit a pull request through the GitHub website.
Pull Request Guidelines
-----------------------
If you need some code review or feedback while you're developing the code just
make the pull request.
For merging, you should:
1. Include passing tests (run ``tox``) [1]_.
2. Update documentation when there's new API, functionality etc.
3. Add a note to ``CHANGELOG.rst`` about the changes.
4. Add yourself to ``AUTHORS.rst``.
.. [1] If you don't have all the necessary python versions available locally
you can rely on Travis - it will
`run the tests <https://travis-ci.org/AtteqCom/zsl_openapi/pull_requests>`_
for each change you add in the pull request.
It will be slower though ...
Tips
----
To run a subset of tests::
tox -e envname -- py.test -k test_myfeature
To run all the test environments in *parallel* (you need to
``pip install detox``)::
detox
Remarks for Windows users
-------------------------
If you have a MinGW/MSYS2 environment, then when invoking tox either remove
MinGW's python from the ``PATH`` variable or put your Python environment first.
Afterwards things get initialized correctly. If there are any errors or
warnings, follow the solution printed in the console.
If ``vcruntime140.dll`` is missing, put it into a folder on ``PATH``, such as
the main virtualenv ``Scripts`` folder.
| zsl_openapi | /zsl_openapi-0.2.0.tar.gz/zsl_openapi-0.2.0/CONTRIBUTING.rst | CONTRIBUTING.rst |
=====
Usage
=====
At a high level, it is enough to define a container with the :class:`zsl_openapi.module.OpenAPIModule`.
::
class StorageContainer(WebContainer):
open_api = OpenAPIModule
Then you may use CLI `open_api` command with your application.
::
python app.py \
open_api generate \
--package storage.models.persistent \
--output api/openapi_spec_full.yml \
--description api/openapi_spec.yml
The complete sample `app.py` may look like this. Change the container name from `StorageContainer` to whatever has
meaning for you.
::
class StorageContainer(WebContainer):
... # The other modules.
open_api = OpenAPIModule
def main() -> None:
initialize()
run()
def initialize() -> None:
if os.environ.get('PROFILE'):
set_profile(os.environ['PROFILE'])
else:
logging.getLogger().warning("Running without a setting profile. This is usually unwanted, set up the profile "
"using PROFILE environment variable and corresponding file <PROFILE>.cfg in "
"settings directory.")
app = Zsl(app_name, version=__version__, modules=StorageContainer.modules())
logging.getLogger(app_name).debug("ZSL app created {0}.".format(app))
@inject(zsl_cli=ZslCli)
def run(zsl_cli: ZslCli) -> None:
zsl_cli.cli()
if __name__ == "__main__":
main()
And a complete API description (`openapi_spec.yml`) may look like this:
::
swagger: "2.0"
info:
description: "Music Storage API version 1 specification. The storage API provides a convenient way to browse the catalogues and perform maintenance tasks"
version: "1"
title: "MusicStorage API v1"
termsOfService: "Feel free to use!"
contact:
email: "[email protected]"
license:
name: "BSD"
url: "https://opensource.org/licenses/BSD-3-Clause"
Then you can use the `python app.py open_api generate` CLI command to generate an OpenAPI specification from your Python
persistent models.
The possible options of the `generate` command:
--package <package_name>        the package/module in which the persistent models are sought. If there are more, you can use multiple `--package` options.
--output <filename>             file in which the spec is generated.
--description <input_filename>  file from which the description of the API is taken - name, license, etc.
| zsl_openapi | /zsl_openapi-0.2.0.tar.gz/zsl_openapi-0.2.0/docs/usage.rst | usage.rst |
from __future__ import absolute_import, print_function, unicode_literals
import os
import sys
from os.path import abspath
from os.path import dirname
from os.path import exists
from os.path import join
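# Regenerates the top-level config files from the templates in ci/templates,
# driven by the tox environment matrix declared in setup.cfg.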
if __name__ == "__main__":
base_path = dirname(dirname(abspath(__file__)))
print("Project path: {0}".format(base_path))
env_path = join(base_path, ".tox", "bootstrap")
if sys.platform == "win32":
bin_path = join(env_path, "Scripts")
else:
bin_path = join(env_path, "bin")
if not exists(env_path):
import subprocess
print("Making bootstrap env in: {0} ...".format(env_path))
try:
subprocess.check_call(["virtualenv", env_path])
except subprocess.CalledProcessError:
subprocess.check_call([sys.executable, "-m", "virtualenv", env_path])
print("Installing `jinja2` and `matrix` into bootstrap environment...")
subprocess.check_call([join(bin_path, "pip"), "install", "jinja2", "matrix"])
    activate = join(bin_path, "activate_this.py")
    # noinspection PyCompatibility
    with open(activate, "rb") as fh:
        exec(compile(fh.read(), activate, "exec"), dict(__file__=activate))
import jinja2
import matrix
jinja = jinja2.Environment(
loader=jinja2.FileSystemLoader(join(base_path, "ci", "templates")),
trim_blocks=True,
lstrip_blocks=True,
keep_trailing_newline=True
)
tox_environments = {}
for (alias, conf) in matrix.from_file(join(base_path, "setup.cfg")).items():
python = conf["python_versions"]
deps = conf["dependencies"]
tox_environments[alias] = {
"python": "python" + python if "py" not in python else python,
"deps": deps.split(),
}
if "coverage_flags" in conf:
cover = {"false": False, "true": True}[conf["coverage_flags"].lower()]
tox_environments[alias].update(cover=cover)
if "environment_variables" in conf:
env_vars = conf["environment_variables"]
tox_environments[alias].update(env_vars=env_vars.split())
for name in os.listdir(join("ci", "templates")):
with open(join(base_path, name), "w") as fh:
fh.write(jinja.get_template(name).render(tox_environments=tox_environments))
print("Wrote {}".format(name))
print("DONE.") | zsl_openapi | /zsl_openapi-0.2.0.tar.gz/zsl_openapi-0.2.0/ci/bootstrap.py | bootstrap.py |
from pathlib import Path
import re
import datetime
import platform
import yaml
from marshmallow import Schema, fields
import marshmallow
from . import zfs
class ValidationError(Exception):
pass
class DatasetField(fields.String):
def _deserialize(self, value, attr, obj, **kwargs):
return zfs.Dataset(name=value)
def _validate(self, value):
datasets = zfs.get_datasets()
for dataset in datasets:
if dataset == value:
return dataset
raise marshmallow.ValidationError("Dataset does not exist")
class LabelField(fields.String):
pattern = r"[^a-zA-Z0-9-]"
def _validate(self, value):
matches = re.findall(self.pattern, value)
if matches:
formatted_matches = ", ".join([f"'{m}'" for m in matches])
raise marshmallow.ValidationError(
f"Contains invalid characters: {formatted_matches}"
)
return value
class FrequencyField(fields.String):
pattern = r"(?P<value>\d+)(?P<unit>[a-zA-Z]+)"
units = {
"seconds": ["s", "sec", "second", "secs", "seconds"],
"minutes": ["m", "min", "minute", "minutes"],
"hours": ["h", "hour", "hours"],
"days": ["d", "day", "days"],
"weeks": ["w", "week", "weeks"],
}
def _deserialize(self, value, attr, obj, **kwargs):
result = {}
pattern = re.compile(self.pattern)
matches = pattern.finditer(value)
for match in matches:
group = match.groupdict()
found_unit = None
for unit, aliases in self.units.items():
if group["unit"].lower() in aliases:
found_unit = unit
if found_unit is not None:
result[found_unit] = int(group["value"])
else:
raise marshmallow.ValidationError(
f"Unit not supported: {group['unit']}"
)
        if not result:
            # Reject values with no <value><unit> pairs at all, which would
            # otherwise silently deserialize to a zero timedelta.
            raise marshmallow.ValidationError(f"Invalid frequency: {value}")
        return datetime.timedelta(**result)
class SnapshotSchema(Schema):
dataset = DatasetField(required=True)
label = LabelField(required=True)
frequency = FrequencyField(required=True)
retention = fields.Integer(required=True)
class ConfigSchema(Schema):
snapshots = fields.Nested(SnapshotSchema, many=True, required=True)
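# Example of a config document accepted by ConfigSchema (illustrative; the
# dataset name depends on what exists on the system):
#
#     snapshots:
#       - dataset: tank/home
#         label: hourly
#         frequency: 1h
#         retention: 24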
def get_platform_path() -> Path:
paths = {"FreeBSD": "/usr/local/etc/", "Linux": "/etc/"}
system = platform.system()
return Path(paths[system])
def get_path() -> Path:
return get_platform_path() / "zsm.yaml"
def load(data: str) -> dict:
try:
data = yaml.safe_load(data)
except yaml.YAMLError as e:
msg = "Invalid YAML"
problem = getattr(e, "problem", None)
if problem is not None:
msg += f": {e.problem}"
problem_mark = getattr(e, "problem_mark", None)
if problem_mark is not None:
msg += (
f": line={e.problem_mark.line + 1} column={e.problem_mark.column + 1}"
)
raise ValidationError(msg)
try:
data = ConfigSchema().load(data)
except marshmallow.ValidationError as e:
raise ValidationError(e.messages)
    return data
| zsm-lib | /zsm_lib-0.2.1-py36-none-any.whl/zsm_lib/config.py | config.py
from typing import Dict, Iterable, Tuple
import logging
import datetime
from . import zfs
log = logging.getLogger(__name__)
SNAPSHOT_DELIMITER = "_"
SNAPSHOT_PREFIX = "zsm"
SNAPSHOT_TIMESTAMP_FORMAT = "%Y-%m-%dT%H:%M:%S"
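# Snapshot names produced by this module look like, for example:
#   zsm_hourly_2020-01-02T03:04:05
# i.e. prefix, label and timestamp joined by SNAPSHOT_DELIMITER.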
def get_snapshots(dataset: zfs.Dataset, label: str) -> Iterable[zfs.Snapshot]:
snapshots = zfs.get_snapshots(dataset=dataset)
result = []
for snapshot in snapshots:
parts = snapshot.name.split(SNAPSHOT_DELIMITER)
if len(parts) != 3:
continue
if parts[0] == SNAPSHOT_PREFIX and parts[1] == label:
result.append(snapshot)
return result
def create_snapshot(dataset: zfs.Dataset, label: str) -> None:
timestamp = datetime.datetime.now().strftime(SNAPSHOT_TIMESTAMP_FORMAT)
zfs.create_snapshot(
dataset=dataset,
name=SNAPSHOT_DELIMITER.join([SNAPSHOT_PREFIX, label, timestamp]),
)
def parse_snapshot(snapshot: zfs.Snapshot) -> Tuple[str, datetime.datetime]:
_, label, timestamp = snapshot.name.split(SNAPSHOT_DELIMITER)
timestamp = datetime.datetime.strptime(timestamp, SNAPSHOT_TIMESTAMP_FORMAT)
return label, timestamp
def manage_snapshots(config: Dict, now: datetime.datetime, dry_run: bool) -> None:
for snapshot_config in config["snapshots"]:
create_snapshots(
now=now,
dataset=snapshot_config["dataset"],
label=snapshot_config["label"],
frequency=snapshot_config["frequency"],
dry_run=dry_run,
)
for snapshot_config in config["snapshots"]:
destroy_snapshots(
dataset=snapshot_config["dataset"],
label=snapshot_config["label"],
retention=snapshot_config["retention"],
dry_run=dry_run,
)
def create_snapshots(
now: datetime.datetime,
dataset: zfs.Dataset,
label: str,
frequency: datetime.timedelta,
dry_run: bool,
) -> None:
snapshots = get_snapshots(dataset=dataset, label=label)
if len(snapshots) == 0:
log.info(f"[{dataset.name}:{label}] No snapshots yet, creating the first one.")
if not dry_run:
create_snapshot(dataset=dataset, label=label)
else:
latest_snapshot = snapshots[0]
_, latest_timestamp = parse_snapshot(snapshot=latest_snapshot)
latest_age = now - latest_timestamp
if latest_age > frequency:
log.info(
f"[{dataset.name}:{label}] "
f"Latest snapshot ({latest_snapshot.name}) is {latest_age} old, "
"creating new."
)
if not dry_run:
create_snapshot(dataset=dataset, label=label)
else:
log.info(
f"[{dataset.name}:{label}] "
f"Latest snapshot ({latest_snapshot.name}) is only {latest_age} old, "
"skipping."
)
def destroy_snapshots(
dataset: zfs.Dataset, label: str, retention: int, dry_run: bool
) -> None:
any_old_found = False
while True:
snapshots = get_snapshots(dataset=dataset, label=label)
if len(snapshots) > retention:
oldest_snapshot = snapshots[-1]
log.info(
f"[{dataset.name}:{label}] "
f"Found old snapshot ({oldest_snapshot.name}), destroying it."
)
if not dry_run:
zfs.destroy_snapshot(snapshot=oldest_snapshot)
any_old_found = True
else:
break
if not any_old_found:
log.info(f"[{dataset.name}:{label}] There are no old snapshots to destroy.") | zsm-lib | /zsm_lib-0.2.1-py36-none-any.whl/zsm_lib/manage.py | manage.py |
zsm
===
ZFS Snapshot manager.
|GitHub tests| |Documentation Status|
|PyPI version| |PyPI pyversions| |PyPI license|
Installing
----------
Install package from pkg in FreeBSD.
.. code-block:: text
pkg install zsm
Install package from PyPI.
.. code-block:: text
pip install zsm
Links
-----
- Documentation_
- PyPI_
- Source_
.. _Documentation: https://zsm.readthedocs.io/
.. _PyPI: https://pypi.org/project/zsm/
.. _Source: https://github.com/thnee/zsm
.. |GitHub tests| image:: https://github.com/thnee/zsm/actions/workflows/tests.yml/badge.svg
:target: https://github.com/thnee/zsm/actions
.. |Documentation Status| image:: https://readthedocs.org/projects/zsm/badge/?version=latest
:target: https://zsm.readthedocs.io/en/latest/
.. |PyPI version| image:: https://img.shields.io/pypi/v/zsm?color=1283c4&label=version
:target: https://pypi.org/project/zsm/
.. |PyPI pyversions| image:: https://img.shields.io/pypi/pyversions/zsm.svg
:target: https://pypi.org/project/zsm/
.. |PyPI license| image:: https://img.shields.io/pypi/l/zsm.svg
:target: https://pypi.org/project/zsm/
| zsm | /zsm-0.4.0.tar.gz/zsm-0.4.0/README.rst | README.rst |
## Emergenet Non-commercial License
© 2019 2020 2021 2022 The University of Chicago.
Redistribution and use for noncommercial purposes in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
1. The software is used solely for noncommercial purposes. It may not be used indirectly for commercial use, such as on a website that accepts advertising money for content. Noncommercial use does include use by a for-profit company in its research. For commercial use rights, contact The University of Chicago, Polsky Center for Entrepreneurship, and Innovation, at [email protected] or call 773-702-1692 and inquire about Tech ID XX-XX-XXX project.
2. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
3. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
4. Neither the name of The University of Chicago nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE UNIVERSITY OF CHICAGO AND CONTRIBUTORS “AS IS” AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE UNIVERSITY OF CHICAGO OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
| zsmash | /zsmash-0.0.2-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.whl/zsmash-0.0.2.dist-info/LICENSE.md | LICENSE.md |
__author__ = 'Zhang Fan'
try:
    import pymysql as DB
except ImportError:
    raise Exception('pymysql module not found, please run: pip install pymysql')
def connect(host='localhost', user='root', password='root', db_name=None, charset="utf8"):
    '''Connect to a database'''
    return zsql(host=host, user=user, password=password, db_name=db_name, charset=charset)
# Table structure as returned by DESC
DescTable = {
    0: 'Field',    # field name
    1: 'Type',     # data type
    2: 'Null',     # nullable
    3: 'Key',      # key attribute
    4: 'Default',  # default value
    5: 'Extra'     # extra
}
class zsql:
def __init__(self, host='localhost', user='root', password='root', db_name=None, charset="utf8"):
        '''Connect to a database (host, user, password, optional db name, charset)'''
self._base = DB.connect(host=host, user=user, password=password, database=db_name, charset=charset)
self._cursor = self._base.cursor()
self._db_name = db_name
self._charset = charset
self.__is_close = False
self.show_command = True
def close(self):
        '''Close the connection'''
if not self.__is_close:
self.__is_close = True
self._cursor.close()
self._base.close()
def __del__(self):
self.close()
@property
def is_close(self):
return self.__is_close
@property
def cursor(self):
return self._cursor
@property
def db_name(self):
return self._db_name
    # region Database operations
    def create_db(self, db_name, charset='utf8', use=True):
        '''Create a database (db name, optional charset)'''
        s = 'CREATE DATABASE IF NOT EXISTS {} DEFAULT CHARSET={}'.format(db_name, charset)
        self.sql_command(s, True)
        if use:
            self.use_db(db_name)
    def drop_db(self, db_name):
        '''Drop a database (db name)'''
        s = 'DROP DATABASE IF EXISTS {}'.format(db_name)
        if self._db_name == db_name:
            self._db_name = None
        self.sql_command(s, True)
    def use_db(self, db_name):
        '''Use a database (db name)'''
        s = 'USE {}'.format(db_name)
        self._db_name = db_name
        self.sql_command(s, True)
    # endregion
    # region Table operations
    def create_table(self, table_name, *fields, comment=None, charset='utf8'):
        '''
        Create a table

        create_table('tablename', 'ID int', 'Name char(20)')
        '''
        comment = self._comment_to_text(comment)
        s = 'CREATE TABLE {} ({}) DEFAULT CHARSET={}{}'.format(table_name, ', '.join(fields), charset, comment)
        self.sql_command(s, True)
    def create_table_ex(self, table_name, fields: dict, comment=None, charset='utf8'):
        '''
        Create a table

        create_table_ex('tablename', {'ID': 'int', 'Name': 'char(20)'})
        '''
        # Keep the table comment in its own variable so the per-field comments
        # below do not clobber it.
        table_comment = self._comment_to_text(comment)
        field_fmt = '{} {}{}'
        field_list = []
        for key, value in fields.items():
            field_comment = ''
            if isinstance(value, tuple):
                value, field_comment = value
                field_comment = self._comment_to_text(field_comment)
            field_list.append(field_fmt.format(key, value, field_comment))
        field_text = ', '.join(field_list)
        s = 'CREATE TABLE {} ({}) DEFAULT CHARSET={}{}'.format(table_name, field_text, charset, table_comment)
        self.sql_command(s, True)
    def change_table_name(self, table_name, new_name):
        '''Rename a table (table name, new name)'''
        s = 'ALTER TABLE {} RENAME {}'.format(table_name, new_name)
        self.sql_command(s, True)
    def drop_table(self, table_name):
        '''Drop a table (table name)'''
        s = 'DROP TABLE IF EXISTS {}'.format(table_name)
        self.sql_command(s, True)
    def set_primary(self, table_name, field):
        '''Set the primary key (table name, field name)'''
        s = 'ALTER TABLE {} ADD PRIMARY KEY ({})'.format(table_name, field)
        self.sql_command(s, True)
    def drop_primary(self, table_name):
        '''Drop the primary key (table name)'''
        s = 'ALTER TABLE {} DROP PRIMARY KEY'.format(table_name)
        self.sql_command(s, True)
    def set_unique(self, table_name, *fields):
        '''Add a UNIQUE constraint on fields (table name, field names)'''
        s = 'ALTER TABLE {} ADD UNIQUE ({})'.format(table_name, ', '.join(fields))
        self.sql_command(s, True)
    def set_index(self, table_name, field, index_name=None):
        '''Create an index on a field'''
        if index_name is None:
            index_name = field
        s = 'CREATE INDEX {} ON {}({})'.format(index_name, table_name, field)
        self.sql_command(s, True)
    # endregion
    # region Schema inspection
    def show_dbs(self):
        '''List all database names'''
        s = 'SHOW DATABASES'
        datas = self.select_command(s)
        return [x[0] for x in datas]
    def show_tables(self, db_name=None):
        '''List all table names of a database (optional db name)'''
        s = 'SHOW TABLES' if db_name is None else 'SHOW TABLES FROM {}'.format(db_name)
        datas = self.select_command(s)
        return [x[0] for x in datas]
    def show_desc(self, table_name):
        '''Show the structure of a table in the current database (table name)'''
        s = 'DESC {}'.format(table_name)
        return self.select_command(s)
    # endregion
    # region Query operations
def select(self, tname, *, fields: list or tuple = None, where: None or str or dict = None, groupby=None,
having=None, orderby=None,
limit=None):
        '''Select records (table name, *, fields=(field1, field2, field3), where=condition, limit=count)'''
field = ', '.join(fields) if fields else '*'
where = self._where_to_text(where)
groupby = '' if groupby == None else ' GROUP BY {}'.format(groupby)
having = '' if having == None else ' HAVING {}'.format(having)
orderby = '' if orderby == None else ' ORDER BY {}'.format(orderby)
limit = '' if limit == None else ' LIMIT {}'.format(limit)
s = 'SELECT {} FROM {}{}{}{}{}{}'.format(field, tname, where, groupby, having, orderby, limit)
return self.select_command(s)
def select_all(self, table_name):
        '''Select all records of a table'''
s = 'SELECT * FROM {}'.format(table_name)
return self.select_command(s)
# endregion
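    # Example (illustrative):
    #   sql.select('users', fields=['name', 'pwd'], where={'ID': 1}, limit=10)
    #   executes: SELECT name, pwd FROM users WHERE ID=1 LIMIT 10;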
    # region Data operations
    def save_values(self, table_name, *save_item: list or tuple, commit=True):
        '''Insert rows (table name, ['id', 'username', 'password'])'''
        value_text = ', '.join([self._base.escape(item) for item in save_item])
        s = "INSERT INTO {} VALUES {}".format(table_name, value_text)
        self.sql_command(s, commit)
    def save_dict(self, table_name, save_item: dict, commit=True):
        '''Insert a row from a dict (keys are column names, values are the data)'''
        fields, values = [], []
        for key, value in save_item.items():
            fields.append(key)
            values.append(value)
        values_escape = self._base.escape(values)
        field = '({})'.format(', '.join(fields)) if len(fields) > 0 else ''
        s = "INSERT INTO {}{} VALUES {}".format(table_name, field, values_escape)
        self.sql_command(s, commit)
    def del_data(self, table_name, *, where: None or str or dict = None, commit=True):
        '''Delete rows (table name, where=condition, whether to commit)'''
        where = self._where_to_text(where)
        s = 'DELETE FROM {}{}'.format(table_name, where)
        self.sql_command(s, commit)
    def update(self, table_name, new_item: dict, *, where: None or str or dict = None, commit=True):
        '''Update rows (table name, new data, where=condition)'''
        set_values = [
            '{} = {}'.format(key, self._base.escape(value))
            for key, value in new_item.items()
        ]
        where = self._where_to_text(where)
        set_text = ', '.join(set_values)
        s = 'UPDATE {} SET {}{}'.format(table_name, set_text, where)
        self.sql_command(s, commit)
    # endregion
    # region Column operations
    def add_field(self, table_name, field_name, data_type, after: bool or str = True):
        '''Add a column (table name, field name, data type, position); position may be a field name'''
        if isinstance(after, str):  # after the given field
            after = ' AFTER {}'.format(after)
        elif after:  # after all fields
            after = ''
        else:  # before all fields
            after = ' FIRST'
        s = 'ALTER TABLE {} ADD {} {}{}'.format(table_name, field_name, data_type, after)
        self.sql_command(s, True)
    def drop_field(self, table_name, field_name):
        '''Drop a column (table name, field name)'''
        s = 'ALTER TABLE {} DROP {}'.format(table_name, field_name)
        self.sql_command(s, True)
    def change_field_type(self, table_name, field_name, data_type):
        '''Change the data type of a column (table name, field name, data type)'''
        s = 'ALTER TABLE {} MODIFY {} {}'.format(table_name, field_name, data_type)
        self.sql_command(s, True)
    def change_field_name(self, table_name, field_name, new_name, data_type):
        '''Rename a column (table name, field name, new field name, data type)'''
        s = 'ALTER TABLE {} CHANGE {} {} {}'.format(table_name, field_name, new_name, data_type)
        self.sql_command(s, True)
    # endregion
    # region Profiling
    def on_property(self):
        '''Enable profiling'''
        s = 'SET PROFILING = 1'
        self.sql_command(s)
    def off_property(self):
        '''Disable profiling'''
        s = 'SET PROFILING = 0'
        self.sql_command(s)
    def show_profiles(self):
        '''Show profiling results'''
        s = 'SHOW PROFILES'
        return self.select_command(s)
    # endregion
    # region Raw SQL commands
    def sql_command(self, command, commit=True):
        '''Execute a command (command, whether to commit); generally used for
        statements that modify data and return nothing'''
        if commit:
            self._print_command('\033[1;31m{};\033[0m'.format(command))
            self._cursor.execute(command)
            self._base.commit()
        else:
            self._print_command('\033[1;34m{};\033[0m'.format(command))
            self._cursor.execute(command)
    def select_command(self, command):
        '''Execute a query command (command)'''
        self._print_command('\033[1;36m{};\033[0m'.format(command))
        self._cursor.execute(command)
        datas = self._cursor.fetchall()
        return list(datas)
    def sql_commit(self):
        '''Commit the current transaction'''
        self._print_command('\033[1;31m db.commit()\033[0m')
        self._base.commit()
    def rollback(self):
        '''Roll back the last operation'''
        self._print_command('\033[1;35m ROLLBACK();\033[0m')
        self._base.rollback()
    # endregion
# region 工具
def _where_to_text(self, where):
if where is None:
return ''
if isinstance(where, str):
return ' WHERE {}'.format(where)
if isinstance(where, dict):
where_text = ' AND '.join([
'{}={}'.format(key, self._base.escape(value))
for key, value in where.items()
])
return ' WHERE {}'.format(where_text)
        raise Exception('No handler defined for where of type {}'.format(type(where)))
def _comment_to_text(self, comment):
return '' if comment is None else " COMMENT {}".format(self._base.escape(comment))
def _print_command(self, text):
if self.show_command:
print(text)
# endregion
if __name__ == '__main__':
    # Create an instance for working with the database
    sql = zsql()
    # Create a database
    sql.create_db('db_name')
    # Use the database
    sql.use_db('db_name')
    # Create a table
    sql.create_table_ex('table_name', {'ID': 'int', 'name': 'char(16)', 'pwd': 'char(32)'})
    # Insert data
    sql.save_values('table_name', (0, 'user0', 'password0'), (1, 'user1', 'password1'))
    # Update data
    sql.update('table_name', new_item=dict(name='new_name', pwd='new_password'), where=dict(name='user1', pwd='password1'))
    # Query data
    data = sql.select_all('table_name')
    # Drop the table
    sql.drop_table('table_name')
    # Drop the database
    sql.drop_db('db_name')
    # Show the data
    for v in data:
        print(v)
    # Close
    sql.close()
'''
The following is printed:
CREATE DATABASE IF NOT EXISTS db_name DEFAULT CHARSET=utf8;
USE db_name;
USE db_name;
CREATE TABLE table_name (ID int, name char(16), pwd char(32)) DEFAULT CHARSET=utf8;
INSERT INTO table_name VALUES (0,'user0','password0'), (1,'user1','password1');
UPDATE table_name SET name = 'new_name', pwd = 'new_password' WHERE name='user1' AND pwd='password1';
SELECT * FROM table_name;
DROP TABLE IF EXISTS table_name;
DROP DATABASE IF EXISTS db_name;
(0, 'user0', 'password0')
(1, 'new_name', 'new_password')
'''
| zsql | /zsql-1.0.2-py3-none-any.whl/zsql.py | zsql.py
Zhang-Shasha: Tree edit distance in Python
------------------------------------------
The ``zss`` module provides a function (``zss.distance``) that
computes the edit distance between the two given trees, as well as a small set
of utilities to make its use convenient.
If you'd like to learn more about how it works, see References below.
Brought to you by Tim Henderson ([email protected])
and Steve Johnson ([email protected]).
`Read the full documentation for more information.
<http://zhang-shasha.readthedocs.org/en/latest/>`_
Installation
------------
You can get ``zss`` and its soft requirements
(``editdist`` and ``numpy`` >= 1.7) from PyPI::
pip install zss
Both modules are optional. ``editdist`` uses string edit distance to
compare node labels rather than a simple equal/not-equal check, and
``numpy`` significantly speeds up the library. The only reason version
1.7 of ``numpy`` is required is that earlier versions have trouble
installing on current versions of Mac OS X.
You can install ``zss`` from the source code without dependencies in the
usual way::
python setup.py install
If you want to build the docs, you'll need to install Sphinx >= 1.0.
Usage
-----
To compare the distance between two trees, you need:
1. A tree.
2. Another tree.
3. A node-node distance function. By default, ``zss`` compares the edit
distance between the nodes' labels. ``zss`` currently only knows how
to handle nodes with string labels.
4. Functions to let ``zss.distance`` traverse your tree.
Here is an example using the library's built-in default node structure and edit
distance function
.. code-block:: python
from zss import simple_distance, Node
A = (
Node("f")
.addkid(Node("a")
.addkid(Node("h"))
.addkid(Node("c")
.addkid(Node("l"))))
.addkid(Node("e"))
)
B = (
Node("f")
.addkid(Node("a")
.addkid(Node("d"))
.addkid(Node("c")
.addkid(Node("b"))))
.addkid(Node("e"))
)
assert simple_distance(A, B) == 2
Specifying Custom Tree Formats
------------------------------
Specifying custom tree formats and distance metrics is easy. The
``zss.simple_distance`` function takes 3 extra parameters besides the two tree
to compare:
1. ``get_children`` - a function to retrieve a list of children from a node.
2. ``get_label`` - a function to retrieve the label object from a node.
3. ``label_dist`` - a function to compute the non-negative integer distance
between two node labels.
Example
^^^^^^^
.. code-block:: python
#!/usr/bin/env python
import zss
try:
from editdist import distance as strdist
except ImportError:
def strdist(a, b):
if a == b:
return 0
else:
return 1
def weird_dist(A, B):
return 10*strdist(A, B)
class WeirdNode(object):
def __init__(self, label):
self.my_label = label
self.my_children = list()
@staticmethod
def get_children(node):
return node.my_children
@staticmethod
def get_label(node):
return node.my_label
def addkid(self, node, before=False):
if before: self.my_children.insert(0, node)
else: self.my_children.append(node)
return self
A = (
WeirdNode("f")
.addkid(WeirdNode("d")
.addkid(WeirdNode("a"))
.addkid(WeirdNode("c")
.addkid(WeirdNode("b"))
)
)
.addkid(WeirdNode("e"))
)
B = (
WeirdNode("f")
.addkid(WeirdNode("c")
.addkid(WeirdNode("d")
.addkid(WeirdNode("a"))
.addkid(WeirdNode("b"))
)
)
.addkid(WeirdNode("e"))
)
dist = zss.simple_distance(
A, B, WeirdNode.get_children, WeirdNode.get_label, weird_dist)
    print(dist)
assert dist == 20
References
----------
The algorithm used by ``zss`` is taken directly from the original paper by
Zhang and Shasha. If you would like to discuss the paper, or the the tree edit
distance problem (we have implemented a few other algorithms as well) please
email the authors.
`approxlib <http://www.inf.unibz.it/~augsten/src/>`_ by Dr. Nikolaus Augsten
contains a good Java implementation of Zhang-Shasha as well as a number of
other useful tree distance algorithms.
`Kaizhong Zhang and Dennis Shasha. Simple fast algorithms for the editing distance between trees and related problems. SIAM Journal of Computing, 18:1245–1262, 1989.`__ (the original paper)
__ http://www.grantjenks.com/wiki/_media/ideas:simple_fast_algorithms_for_the_editing_distance_between_tree_and_related_problems.pdf
`Slide deck overview of Zhang-Shasha <http://www.inf.unibz.it/dis/teaching/ATA/ata7-handout-1x1.pdf>`_
`Another paper describing Zhang-Shasha <http://research.cs.queensu.ca/TechReports/Reports/1995-372.pdf>`_
| zss | /zss-1.2.0.tar.gz/zss-1.2.0/README.rst | README.rst |
__author__ = 'Zhang Fan'
import pyssdb
from zretry import retry
_retry_func_list = []
def _except_retry(func):
_retry_func_list.append(func.__name__)
return func
class ssdb_inst():
def __init__(self, host: str or list, port=8888, cluster=False, collname='test', password=None,
decode_responses=True,
retry_interval=1, max_attempt_count=5,
**kw):
'''
创建一个ssdb客户端
:param host: 服务器ip
:param port: 服务器端口
:param cluster: 是否为集群
:param collname: 文档名
:param password: 密码
:param decode_responses: 是否自动解码, 默认为utf8
:param retry_interval: 尝试等待时间
:param max_attempt_count: 最大尝试次数
:param kw: 其他参数
'''
if cluster:
raise Exception('暂不支持集群连接')
else:
self._conn = pyssdb.Client(host=host, port=port, **kw)
if password:
self._conn.auth(password)
self.decode_responses = decode_responses
self.collname = collname
for retry_func_name in _retry_func_list:
func = getattr(self, retry_func_name)
decorator = retry(interval=retry_interval, max_attempt_count=max_attempt_count)(func)
setattr(self, retry_func_name, decorator)
def change_coll(self, collname):
self.collname = collname
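    # Example usage (illustrative; assumes an SSDB server on localhost:8888):
    #   db = ssdb_inst('127.0.0.1', 8888, collname='tasks')
    #   db.list_push('job-1')
    #   print(db.list_pop())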
    # region Misc operations
@_except_retry
def _get_names(self, func, limit=65535):
return self._result_list_handler(func('', '', limit))
@_except_retry
def _get_names_iter(self, func, batch_size=100):
start = ''
while True:
datas = func(start, '', batch_size)
if not datas:
return
for name in datas:
yield self._result_handler(name)
start = datas[-1]
# endregion
    # region Result handling
def _result_handler(self, result):
return result.decode('utf8') if self.decode_responses and isinstance(result, bytes) else result
def _result_list_handler(self, result):
return [self._result_handler(value) for value in result]
def _list_to_mapping(self, result, value_to_int=False):
result_dict = dict()
for index in range(0, len(result), 2):
k = self._result_handler(result[index])
v = self._result_handler(result[index + 1])
if value_to_int:
v = int(v)
result_dict[k] = v
return result_dict
def _mapping_to_list(self, mapping: dict):
values = list()
for k, v in mapping.items():
values.append(k)
values.append(v)
return values
# endregion
    # region List operations
    @_except_retry
    def list_push(self, *values, front=True):
        # Push values; returns the total queue size
        if front:
            return int(self._conn.qpush_front(self.collname, *values))
        else:
            return int(self._conn.qpush_back(self.collname, *values))
    @_except_retry
    def list_pop(self, front=True):
        # Returns None when there is no data
        if front:
            return self._result_handler(self._conn.qpop_front(self.collname))
        else:
            return self._result_handler(self._conn.qpop_back(self.collname))
    @_except_retry
    def list_count(self, collname=None):
        return int(self._conn.qsize(collname or self.collname))
    @_except_retry
    def list_get_datas(self, collname=None, limit=65535):
        # Return a list containing all data of this list in ssdb
        return self._result_list_handler(self._conn.qrange(collname or self.collname, 0, limit))
    @_except_retry
    def list_iter(self, collname=None, batch_size=100):
        # Iterate over all data in a list
        collname = collname or self.collname
        start = 0
        while True:
            datas = self._conn.qrange(collname, start, batch_size)
            if not datas:
                return
            for data in datas:
                yield self._result_handler(data)
            start += len(datas)
    @_except_retry
    def list_names(self, limit=65535):
        # Return all list collection names
        return self._get_names(self._conn.qlist, limit)
    @_except_retry
    def list_names_iter(self, batch_size=100):
        # Iterate over all list collection names
        yield from self._get_names_iter(self._conn.qlist, batch_size)
    @_except_retry
    def list_clear(self, collname=None):
        # Delete this list; returns whether it succeeded
        return int(self._conn.qclear(collname or self.collname)) > 0
    # endregion
    # region Set operations
    @_except_retry
    def set_add(self, key, score: int = 1):
        # Returns 1 on insert, 0 on update
        return int(self._conn.zset(self.collname, key, score))
    @_except_retry
    def set_add_values(self, mapping: dict or list or tuple or set, score=1):
        # Set multiple entries; keys are members, values are scores; returns the number inserted
        if isinstance(mapping, dict):
            value = self._mapping_to_list(mapping)
        elif isinstance(mapping, (list, tuple, set)):
            value = list()
            for key in mapping:
                value.append(key)
                value.append(score)
        else:
            raise TypeError('mapping must be of type (dict, list, tuple, set)')
        return int(self._conn.multi_zset(self.collname, *value))
    @_except_retry
    def set_remove(self, *keys):
        # Remove multiple members; returns the number removed
        return int(self._conn.multi_zdel(self.collname, *keys))
    @_except_retry
    def set_count(self, collname=None):
        return int(self._conn.zsize(collname or self.collname))
    @_except_retry
    def set_has(self, value):
        # Whether a member exists
        return int(self._conn.zexists(self.collname, value)) > 0
    @_except_retry
    def set_get(self, key, default=None):
        # Return the score of a member; scores are integers
        result = self._conn.zget(self.collname, key)
        if result is None:
            return default
        return int(result)
    @_except_retry
    def set_keys(self, collname=None, limit=65535):
        # Return a list containing all keys of the set
        return self._result_list_handler(self._conn.zkeys(collname or self.collname, '', '', '', limit))
    @_except_retry
    def set_get_values(self, *keys):
        # Get the scores of multiple members; returns a dict
        return self._list_to_mapping(self._conn.multi_zget(self.collname, *keys), value_to_int=True)
    @_except_retry
    def set_get_datas(self, collname=None, limit=65535):
        # Return a dict containing all data of this set in ssdb
        return self._list_to_mapping(self._conn.zrange(collname or self.collname, 0, limit), value_to_int=True)
    @_except_retry
    def set_iter(self, collname=None, batch_size=100):
        # Iterate over all data in a set, yielding (key, score) tuples
        collname = collname or self.collname
        start = 0
        while True:
            datas = self._conn.zrange(collname, start, batch_size)
            if not datas:
                return
            for index in range(0, len(datas), 2):
                k = self._result_handler(datas[index])
                v = int(self._result_handler(datas[index + 1]))
                yield k, v
            start += len(datas) // 2
    @_except_retry
    def set_names(self, limit=65535):
        # Return all set collection names
        return self._get_names(self._conn.zlist, limit)
    @_except_retry
    def set_names_iter(self, batch_size=100):
        # Iterate over all set collection names
        yield from self._get_names_iter(self._conn.zlist, batch_size)
    @_except_retry
    def set_clear(self, collname=None):
        # Delete this set; returns whether it succeeded
        return int(self._conn.zclear(collname or self.collname)) > 0
    # endregion
    # region Hash operations
    @_except_retry
    def hash_set(self, key, value):
        # Set a value; returns 0 on update, 1 on insert
        return int(self._conn.hset(self.collname, key, value))
    @_except_retry
    def hash_set_values(self, mapping: dict):
        # Set multiple entries; returns the number successfully set
        values = self._mapping_to_list(mapping)
        return int(self._conn.multi_hset(self.collname, *values))
    @_except_retry
    def hash_get(self, key):
        # Get the value of a key; returns None on failure
        return self._result_handler(self._conn.hget(self.collname, key))
    @_except_retry
    def hash_get_values(self, *keys):
        # Get the values of multiple keys; returns a dict
        return self._list_to_mapping(self._conn.multi_hget(self.collname, *keys))
    @_except_retry
    def hash_remove(self, *keys):
        # Remove multiple keys; returns the number actually removed
        return int(self._conn.multi_hdel(self.collname, *keys))
    @_except_retry
    def hash_incrby(self, key, amount=1):
        # Increment; returns the integer value after the increment
        return int(self._conn.hincr(self.collname, key, amount))
    @_except_retry
    def hash_count(self, collname=None):
        # Return the number of elements
        return int(self._conn.hsize(collname or self.collname))
    @_except_retry
    def hash_has(self, key):
        # Whether a key exists
        return int(self._conn.hexists(self.collname, key)) > 0
    @_except_retry
    def hash_keys(self, collname=None, limit=65535):
        # Return a list containing all keys of the hash
        return self._result_list_handler(self._conn.hkeys(collname or self.collname, '', '', limit))
    @_except_retry
    def hash_get_datas(self, collname=None):
        # Return a dict with all data of the hash, fetched at once
        return self._list_to_mapping(self._conn.hgetall(collname or self.collname))
    @_except_retry
    def hash_iter(self, collname=None, batch_size=100):
        # Iterate over all data in a hash, yielding (key, value) tuples
        collname = collname or self.collname
        start = ''
        while True:
            datas = self._conn.hscan(collname, start, '', batch_size)
            if not datas:
                return
            for index in range(0, len(datas), 2):
                k = self._result_handler(datas[index])
                v = self._result_handler(datas[index + 1])
                yield k, v
            start = datas[-2]
    @_except_retry
    def hash_names(self, limit=65535):
        # Return all hash collection names
        return self._get_names(self._conn.hlist, limit)
    @_except_retry
    def hash_names_iter(self, batch_size=100):
        # Iterate over all hash collection names
        yield from self._get_names_iter(self._conn.hlist, batch_size)
    @_except_retry
    def hash_clear(self, collname=None):
        # Delete this hash; returns whether it succeeded
        return int(self._conn.hclear(collname or self.collname)) > 0
    # endregion
| zssdb | /zssdb-0.1.2-py3-none-any.whl/zssdb.py | zssdb.py
================
python-zstandard
================
| |ci-test| |ci-wheel| |ci-typing| |ci-sdist| |ci-anaconda| |ci-sphinx|
This project provides Python bindings for interfacing with the
`Zstandard <http://www.zstd.net>`_ compression library. A C extension
and CFFI interface are provided.
The primary goal of the project is to provide a rich interface to the
underlying C API through a Pythonic interface while not sacrificing
performance. This means exposing most of the features and flexibility
of the C API while not sacrificing usability or safety that Python provides.
The canonical home for this project is
https://github.com/indygreg/python-zstandard.
For usage documentation, see https://python-zstandard.readthedocs.org/.
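To install the released package from PyPI::
   $ pip install zstandard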
.. |ci-test| image:: https://github.com/indygreg/python-zstandard/workflows/.github/workflows/test.yml/badge.svg
:target: .github/workflows/test.yml
.. |ci-wheel| image:: https://github.com/indygreg/python-zstandard/workflows/.github/workflows/wheel.yml/badge.svg
:target: .github/workflows/wheel.yml
.. |ci-typing| image:: https://github.com/indygreg/python-zstandard/workflows/.github/workflows/typing.yml/badge.svg
:target: .github/workflows/typing.yml
.. |ci-sdist| image:: https://github.com/indygreg/python-zstandard/workflows/.github/workflows/sdist.yml/badge.svg
:target: .github/workflows/sdist.yml
.. |ci-anaconda| image:: https://github.com/indygreg/python-zstandard/workflows/.github/workflows/anaconda.yml/badge.svg
:target: .github/workflows/anaconda.yml
.. |ci-sphinx| image:: https://github.com/indygreg/python-zstandard/workflows/.github/workflows/sphinx.yml/badge.svg
:target: .github/workflows/sphinx.yml
| zstandard | /zstandard-0.18.0.tar.gz/zstandard-0.18.0/README.rst | README.rst |
import distutils.ccompiler
import distutils.command.build_ext
import distutils.extension
import distutils.util
import glob
import os
import shutil
import subprocess
import sys
ext_includes = [
"c-ext",
]
ext_sources = [
"c-ext/backend_c.c",
]
def get_c_extension(
support_legacy=False,
system_zstd=False,
name="zstandard.backend_c",
warnings_as_errors=False,
root=None,
):
"""Obtain a distutils.extension.Extension for the C extension.
``support_legacy`` controls whether to compile in legacy zstd format support.
``system_zstd`` controls whether to compile against the system zstd library.
For this to work, the system zstd library and headers must match what
python-zstandard is coded against exactly.
``name`` is the module name of the C extension to produce.
``warnings_as_errors`` controls whether compiler warnings are turned into
compiler errors.
``root`` defines a root path that source should be computed as relative
to. This should be the directory with the main ``setup.py`` that is
being invoked. If not defined, paths will be relative to this file.
"""
actual_root = os.path.abspath(os.path.dirname(__file__))
root = root or actual_root
sources = sorted(set([os.path.join(actual_root, p) for p in ext_sources]))
local_include_dirs = [os.path.join(actual_root, d) for d in ext_includes]
if not system_zstd:
local_include_dirs.append(os.path.join(actual_root, "zstd"))
depends = sorted(glob.glob(os.path.join(actual_root, "c-ext", "*")))
compiler = distutils.ccompiler.new_compiler()
# Needed for MSVC.
if hasattr(compiler, "initialize"):
compiler.initialize()
if compiler.compiler_type == "unix":
compiler_type = "unix"
elif compiler.compiler_type == "msvc":
compiler_type = "msvc"
elif compiler.compiler_type == "mingw32":
compiler_type = "mingw32"
else:
raise Exception("unhandled compiler type: %s" % compiler.compiler_type)
extra_args = []
if system_zstd:
extra_args.append("-DZSTD_MULTITHREAD")
else:
extra_args.append("-DZSTD_SINGLE_FILE")
extra_args.append("-DZSTDLIB_VISIBILITY=")
extra_args.append("-DZDICTLIB_VISIBILITY=")
extra_args.append("-DZSTDERRORLIB_VISIBILITY=")
if compiler_type == "unix":
extra_args.append("-fvisibility=hidden")
if not system_zstd and support_legacy:
extra_args.append("-DZSTD_LEGACY_SUPPORT=1")
if warnings_as_errors:
if compiler_type in ("unix", "mingw32"):
extra_args.append("-Werror")
elif compiler_type == "msvc":
extra_args.append("/WX")
else:
assert False
libraries = ["zstd"] if system_zstd else []
# Python 3.7 doesn't like absolute paths. So normalize to relative.
sources = [os.path.relpath(p, root) for p in sources]
local_include_dirs = [os.path.relpath(p, root) for p in local_include_dirs]
depends = [os.path.relpath(p, root) for p in depends]
if "ZSTD_EXTRA_COMPILER_ARGS" in os.environ:
extra_args.extend(
distutils.util.split_quoted(os.environ["ZSTD_EXTRA_COMPILER_ARGS"])
)
# TODO compile with optimizations.
return distutils.extension.Extension(
name,
sources,
include_dirs=local_include_dirs,
depends=depends,
extra_compile_args=extra_args,
libraries=libraries,
)
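# Example setup.py usage (an illustrative sketch, not taken verbatim from any
# particular setup.py):
#
#   from setup_zstd import get_c_extension
#   setup(
#       ...,
#       ext_modules=[get_c_extension(name="zstandard.backend_c")],
#   )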
class RustExtension(distutils.extension.Extension):
def __init__(self, name, root):
super().__init__(name, [])
self.root = root
self.depends.extend(
[
os.path.join(root, "Cargo.toml"),
os.path.join(root, "rust-ext", "src", "lib.rs"),
]
)
def build(self, build_dir, get_ext_path_fn):
env = os.environ.copy()
env["PYTHON_SYS_EXECUTABLE"] = sys.executable
# Needed for try_reserve()
env["RUSTC_BOOTSTRAP"] = "1"
args = [
"cargo",
"build",
"--release",
"--target-dir",
str(build_dir),
]
subprocess.run(args, env=env, cwd=self.root, check=True)
dest_path = get_ext_path_fn(self.name)
libname = self.name.split(".")[-1]
if os.name == "nt":
rust_lib_filename = "%s.dll" % libname
elif sys.platform == "darwin":
rust_lib_filename = "lib%s.dylib" % libname
else:
rust_lib_filename = "lib%s.so" % libname
rust_lib = os.path.join(build_dir, "release", rust_lib_filename)
os.makedirs(os.path.dirname(rust_lib), exist_ok=True)
shutil.copy2(rust_lib, dest_path)
class RustBuildExt(distutils.command.build_ext.build_ext):
def build_extension(self, ext):
if isinstance(ext, RustExtension):
ext.build(
build_dir=os.path.abspath(self.build_temp),
get_ext_path_fn=self.get_ext_fullpath,
)
else:
super().build_extension(ext)
def get_rust_extension(
root=None,
):
actual_root = os.path.abspath(os.path.dirname(__file__))
root = root or actual_root
return RustExtension("zstandard.backend_rust", root) | zstandard | /zstandard-0.18.0.tar.gz/zstandard-0.18.0/setup_zstd.py | setup_zstd.py |
from __future__ import absolute_import
import cffi
import distutils.ccompiler
import distutils.sysconfig
import os
import re
import subprocess
import tempfile
HERE = os.path.abspath(os.path.dirname(__file__))
SOURCES = [
"zstd/zstdlib.c",
]
# Headers whose preprocessed output will be fed into cdef().
HEADERS = [os.path.join(HERE, "zstd", p) for p in ("zstd.h", "zdict.h")]
INCLUDE_DIRS = [
os.path.join(HERE, "zstd"),
]
# cffi can't parse some of the primitives in zstd.h. So we invoke the
# preprocessor and feed its output into cffi.
compiler = distutils.ccompiler.new_compiler()
# Needed for MSVC.
if hasattr(compiler, "initialize"):
compiler.initialize()
# This performs platform specific customizations, including honoring
# environment variables like CC.
distutils.sysconfig.customize_compiler(compiler)
# Distutils doesn't set compiler.preprocessor, so invoke the preprocessor
# manually.
if compiler.compiler_type == "unix":
# Using .compiler respects the CC environment variable.
args = [compiler.compiler[0]]
args.extend(
[
"-E",
"-DZSTD_STATIC_LINKING_ONLY",
"-DZDICT_STATIC_LINKING_ONLY",
]
)
elif compiler.compiler_type == "msvc":
args = [compiler.cc]
args.extend(
[
"/EP",
"/DZSTD_STATIC_LINKING_ONLY",
"/DZDICT_STATIC_LINKING_ONLY",
]
)
else:
raise Exception("unsupported compiler type: %s" % compiler.compiler_type)
def preprocess(path):
with open(path, "rb") as fh:
lines = []
it = iter(fh)
for l in it:
# zstd.h includes <stddef.h>, which is also included by cffi's
# boilerplate. This can lead to duplicate declarations. So we strip
# this include from the preprocessor invocation.
#
# The same thing happens when including zstd.h, so give it the same
# treatment.
#
# We define ZSTD_STATIC_LINKING_ONLY, which is redundant with the inline
# #define in zstdmt_compress.h and results in a compiler warning. So drop
# the inline #define.
if l.startswith(
(
b"#include <stddef.h>",
b'#include "zstd.h"',
b"#define ZSTD_STATIC_LINKING_ONLY",
)
):
continue
# The preprocessor environment on Windows doesn't define include
# paths, so the #include of limits.h fails. We work around this
# by removing that import and defining INT_MAX ourselves. This is
# a bit hacky. But it gets the job done.
# TODO make limits.h work on Windows so we ensure INT_MAX is
# correct.
if l.startswith(b"#include <limits.h>"):
l = b"#define INT_MAX 2147483647\n"
# ZSTDLIB_API may not be defined if we dropped zstd.h. It isn't
# important so just filter it out.
if l.startswith(b"ZSTDLIB_API"):
l = l[len(b"ZSTDLIB_API ") :]
lines.append(l)
fd, input_file = tempfile.mkstemp(suffix=".h")
os.write(fd, b"".join(lines))
os.close(fd)
try:
env = dict(os.environ)
# cffi attempts to decode source as ascii. And the preprocessor
# may insert non-ascii for some annotations. So try to force
# ascii output via LC_ALL.
env["LC_ALL"] = "C"
if getattr(compiler, "_paths", None):
env["PATH"] = compiler._paths
process = subprocess.Popen(
args + [input_file], stdout=subprocess.PIPE, env=env
)
output = process.communicate()[0]
ret = process.poll()
if ret:
raise Exception("preprocessor exited with error")
return output
finally:
os.unlink(input_file)
def normalize_output(output):
lines = []
for line in output.splitlines():
# CFFI's parser doesn't like __attribute__ on UNIX compilers.
if line.startswith(b'__attribute__ ((visibility ("default"))) '):
line = line[len(b'__attribute__ ((visibility ("default"))) ') :]
if line.startswith(b"__attribute__((deprecated"):
continue
elif b"__declspec(deprecated(" in line:
continue
elif line.startswith(b"__attribute__((__unused__))"):
continue
lines.append(line)
return b"\n".join(lines)
ffi = cffi.FFI()
# zstd.h uses a possible undefined MIN(). Define it until
# https://github.com/facebook/zstd/issues/976 is fixed.
# *_DISABLE_DEPRECATE_WARNINGS prevents the compiler from emitting a warning
# when cffi uses the function. Since we statically link against zstd, even
# if we use the deprecated functions it shouldn't be a huge problem.
ffi.set_source(
"zstandard._cffi",
"""
#define MIN(a,b) ((a)<(b) ? (a) : (b))
#define ZSTD_STATIC_LINKING_ONLY
#define ZSTD_DISABLE_DEPRECATE_WARNINGS
#include <zstd.h>
#define ZDICT_STATIC_LINKING_ONLY
#define ZDICT_DISABLE_DEPRECATE_WARNINGS
#include <zdict.h>
""",
sources=SOURCES,
include_dirs=INCLUDE_DIRS,
)
DEFINE = re.compile(b"^\\#define ([a-zA-Z0-9_]+) ")
sources = []
# Feed normalized preprocessor output for headers into the cdef parser.
for header in HEADERS:
preprocessed = preprocess(header)
sources.append(normalize_output(preprocessed))
# #define's are effectively erased as part of going through preprocessor.
# So perform a manual pass to re-add those to the cdef source.
with open(header, "rb") as fh:
for line in fh:
line = line.strip()
m = DEFINE.match(line)
if not m:
continue
if m.group(1) == b"ZSTD_STATIC_LINKING_ONLY":
continue
# The parser doesn't like some constants with complex values.
if m.group(1) in (b"ZSTD_LIB_VERSION", b"ZSTD_VERSION_STRING"):
continue
# The ... is magic syntax by the cdef parser to resolve the
# value at compile time.
sources.append(m.group(0) + b" ...")
cdeflines = b"\n".join(sources).splitlines()
cdeflines = [l for l in cdeflines if l.strip()]
ffi.cdef(b"\n".join(cdeflines).decode("latin1"))
if __name__ == "__main__":
ffi.compile() | zstandard | /zstandard-0.18.0.tar.gz/zstandard-0.18.0/make_cffi.py | make_cffi.py |
zstat
========================
A bone-simple stats tool for displaying percentiles from stdin to stdout.
Example:
::
$ cat nums.txt
456
366
695
773
617
826
56
78
338
326
::
$ cat nums.txt | zstat
p0 = 56
p50 = 366
p90 = 773
p95 = 773
p99 = 826
p99.9 = 826
p100 = 826
Installation
=========================================================
Clone this repository locally, then:
``pip3 install -e .``
Building
=========================================================
``make``
Testing
=========================================================
``make test``
Deploy New Version
=========================================================
``make deploy``
| zstat-cli | /zstat-cli-0.1.4.tar.gz/zstat-cli-0.1.4/README.rst | README.rst |
The MIT License (MIT)
Copyright (c) 2020 Diogo B Freitas
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE
OR OTHER DEALINGS IN THE SOFTWARE.
| zstd-asgi | /zstd-asgi-0.1.tar.gz/zstd-asgi-0.1/LICENSE.md | LICENSE.md |
# zstd-asgi
[](https://pypi.org/project/zstd-asgi)
[](https://github.com/tuffnatty/zstd-asgi/actions?query=workflow%3ATests)
`ZstdMiddleware` adds [Zstd](https://github.com/facebook/zstd) response compression to ASGI applications (Starlette, FastAPI, Quart, etc.). It provides faster and denser compression than GZip, and can be used as a drop-in replacement for the `GZipMiddleware` shipped with Starlette.
**Installation**
```bash
pip install zstd-asgi
```
## Examples
### Starlette
```python
from starlette.applications import Starlette
from starlette.responses import JSONResponse
from starlette.routing import Route
from starlette.middleware import Middleware
from zstd_asgi import ZstdMiddleware
async def homepage(request):
return JSONResponse({"data": "a" * 4000})
app = Starlette(
routes=[Route("/", homepage)],
middleware=[Middleware(ZstdMiddleware)],
)
```
### FastAPI
```python
from fastapi import FastAPI
from zstd_asgi import ZstdMiddleware
app = FastAPI()
app.add_middleware(ZstdMiddleware)
@app.get("/")
def home() -> dict:
return {"data": "a" * 4000}
```
## API Reference
**Overview**
```python
app.add_middleware(
ZstdMiddleware,
level=3,
minimum_size=500,
threads=0,
write_checksum=True,
write_content_size=False,
gzip_fallback=True
)
```
**Parameters**:
- `level`: Compression level. Valid values are -2¹⁷ to 22.
- `minimum_size`: Only compress responses that are bigger than this value in bytes.
- `threads`: Number of threads to use to compress data concurrently. When set, compression operations are performed on multiple threads. The default value (0) disables multi-threaded compression. A value of -1 means to set the number of threads to the number of detected logical CPUs.
- `write_checksum`: If True, a 4 byte content checksum will be written with the compressed data, allowing the decompressor to perform content verification.
- `write_content_size`: If True (the default), the decompressed content size will be included in the header of the compressed data. This data will only be written if the compressor knows the size of the input data.
- `gzip_fallback`: If `True`, uses gzip encoding if `zstd` is not in the Accept-Encoding header.
## Performance
A simple comparative example using Python `sys.getsizeof()` and `timeit`:
```python
# ipython console
import gzip
import sys
import brotli
import requests
import zstandard
page = requests.get("https://github.com/fullonic/brotli-asgi").content
%timeit zstandard.ZstdCompressor(level=3).compress(page)
# 788 µs ± 9.99 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
sys.getsizeof(zstandard.ZstdCompressor(level=3).compress(page))
# 36381
%timeit brotli.compress(page, quality=4)
# 2.55 ms ± 142 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
sys.getsizeof(brotli.compress(page, quality=4))
# 34361
%timeit gzip.compress(page, compresslevel=6)
# 4.05 ms ± 95 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
sys.getsizeof(gzip.compress(page, compresslevel=6))
# 36760
```
## Compatibility
- [RFC 8478](https://tools.ietf.org/search/rfc8478)
- [Zstd nginx module](https://github.com/tokers/zstd-nginx-module)
- [wget2](https://gitlab.com/gnuwget/wget2)
- Browser support is not known.
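
A quick way to confirm the negotiated encoding end-to-end (a minimal sketch; assumes one of the example apps above is running locally on port 8000, which is hypothetical):

```python
import requests

# Ask for zstd explicitly; the middleware should honor it for responses
# larger than `minimum_size`.
r = requests.get(
    "http://localhost:8000/",
    headers={"Accept-Encoding": "zstd"},
)
print(r.headers.get("Content-Encoding"))  # expected: "zstd"
```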
| zstd-asgi | /zstd-asgi-0.1.tar.gz/zstd-asgi-0.1/README.md | README.md |
import io
from starlette.datastructures import Headers, MutableHeaders
from starlette.middleware.gzip import GZipResponder
from starlette.types import ASGIApp, Message, Receive, Scope, Send
import zstandard
class ZstdMiddleware:
def __init__(
self,
app: ASGIApp,
level: int = 3,
minimum_size: int = 500,
threads: int = 0,
write_checksum: bool = False,
write_content_size: bool = True,
gzip_fallback: bool = True,
) -> None:
self.app = app
self.level = level
self.minimum_size = minimum_size
self.threads = threads
self.write_checksum = write_checksum
self.write_content_size = write_content_size
self.gzip_fallback = gzip_fallback
async def __call__(self,
scope: Scope,
receive: Receive,
send: Send) -> None:
if scope["type"] == "http":
accept_encoding = Headers(scope=scope).get("Accept-Encoding", "")
if "zstd" in accept_encoding:
responder = ZstdResponder(
self.app,
self.level,
self.threads,
self.write_checksum,
self.write_content_size,
self.minimum_size,
)
await responder(scope, receive, send)
return
if self.gzip_fallback and "gzip" in accept_encoding:
responder = GZipResponder(self.app, self.minimum_size)
await responder(scope, receive, send)
return
await self.app(scope, receive, send)
class ZstdResponder:
def __init__(
self,
app: ASGIApp,
level: int,
threads: int,
write_checksum: bool,
write_content_size: bool,
minimum_size: int,
) -> None:
self.app = app
self.level = level
self.minimum_size = minimum_size
self.send = unattached_send # type: Send
self.initial_message = {} # type: Message
self.started = False
self.zstd_buffer = io.BytesIO()
self.zstd_file = zstandard.ZstdCompressor(
level=level,
threads=threads,
write_checksum=write_checksum,
write_content_size=write_content_size,
).stream_writer(self.zstd_buffer)
async def __call__(self,
scope: Scope,
receive: Receive,
send: Send) -> None:
self.send = send
await self.app(scope, receive, self.send_with_zstd)
async def send_with_zstd(self, message: Message) -> None:
message_type = message["type"]
if message_type == "http.response.start":
# Don't send the initial message until we've determined how to
# modify the outgoing headers correctly.
self.initial_message = message
elif message_type == "http.response.body" and not self.started:
self.started = True
body = message.get("body", b"")
more_body = message.get("more_body", False)
if len(body) < self.minimum_size and not more_body:
# Don't apply Zstd to small outgoing responses.
await self.send(self.initial_message)
await self.send(message)
elif not more_body:
# Standard Zstd response.
self.zstd_file.write(body)
self.zstd_file.flush(zstandard.FLUSH_FRAME)
body = self.zstd_buffer.getvalue()
self.zstd_file.close()
headers = MutableHeaders(raw=self.initial_message["headers"])
headers["Content-Encoding"] = "zstd"
headers["Content-Length"] = str(len(body))
headers.add_vary_header("Accept-Encoding")
message["body"] = body
await self.send(self.initial_message)
await self.send(message)
else:
# Initial body in streaming Zstd response.
headers = MutableHeaders(raw=self.initial_message["headers"])
headers["Content-Encoding"] = "zstd"
headers.add_vary_header("Accept-Encoding")
del headers["Content-Length"]
self.zstd_file.write(body)
self.zstd_file.flush()
message["body"] = self.zstd_buffer.getvalue()
self.zstd_buffer.seek(0)
self.zstd_buffer.truncate()
await self.send(self.initial_message)
await self.send(message)
elif message_type == "http.response.body":
# Remaining body in streaming Zstd response.
body = message.get("body", b"")
more_body = message.get("more_body", False)
self.zstd_file.write(body)
if not more_body:
self.zstd_file.flush(zstandard.FLUSH_FRAME)
message["body"] = self.zstd_buffer.getvalue()
self.zstd_file.close()
else:
message["body"] = self.zstd_buffer.getvalue()
self.zstd_buffer.seek(0)
self.zstd_buffer.truncate()
await self.send(message)
async def unattached_send(message: Message) -> None:
raise RuntimeError("send awaitable not set") # pragma: no cover | zstd-asgi | /zstd-asgi-0.1.tar.gz/zstd-asgi-0.1/zstd_asgi/__init__.py | __init__.py |
=============
python-zstd
=============
.. image:: https://travis-ci.org/sergey-dryabzhinsky/python-zstd.svg?branch=master
:target: https://travis-ci.org/sergey-dryabzhinsky/python-zstd
Simple python bindings to Yann Collet ZSTD compression library.
**Zstd**, short for Zstandard, is a new lossless compression algorithm,
which provides both good compression ratio *and* speed for your standard compression needs.
"Standard" translates into everyday situations which neither look for highest possible ratio
(which LZMA and ZPAQ cover) nor extreme speeds (which LZ4 covers).
It is provided as a BSD-license package, hosted on GitHub_.
.. _GitHub: https://github.com/facebook/zstd
WARNING!!!
----------
If you have version 1.0.0.99.1 installed - remove it manually to be able to update.
PIP matches version strings, not tuples of numbers.
Results generated by versions prior to 1.0.0.99.1 are not compatible with the original Zstd
format in any way. Those versions generate a custom header which can be read only by this zstd python module.
As of version 1.0.0.99.1 the standard Zstd output format is used, unmodified.
To prevent data loss there are now two functions: ```compress_old``` and ```decompress_old```.
They work just like in the old versions prior to 1.0.0.99.1.
As of version 1.1.4 the module is built without them by default.
As of 1.3.4 version these functions are deprecated and will be removed in future releases.
As of 1.5.0 version these functions are removed.
DISCLAIMER
__________
These python bindings are kept simple and blunt.
Support of dictionaries and streaming is not planned.
LINKS
-----
* Zstandard: https://github.com/facebook/zstd
* More full-featured and compatible with Zstandard python bindings by Gregory Szorc: https://github.com/indygreg/python-zstandard
Build from source
-----------------
>>> $ git clone https://github.com/sergey-dryabzhinsky/python-zstd
>>> $ git submodule update --init
>>> $ apt-get install python-dev python3-dev python-setuptools python3-setuptools
>>> $ python setup.py build_ext clean
>>> $ python3 setup.py build_ext clean
Note: Zstd legacy format support is disabled by default.
To build with support for legacy Zstd versions - pass the ``--legacy`` option to the setup.py script:
>>> $ python setup.py build_ext --legacy clean
Note: Python-Zstd legacy format support has been removed since 1.5.0.
If you need to convert old data - check out module version 1.4.9.1. Its support is disabled by default.
To build with python-zstd legacy format support (pre 1.1.2) - pass the ``--pyzstd-legacy`` option to the setup.py script:
>>> $ python setup.py build_ext --pyzstd-legacy clean
If you want to build against an existing distribution of libzstd just add the ``--external`` option.
But beware! The state of legacy format support is unknown in this case.
And if your libzstd version does not match python-zstd's - tests may not pass.
>>> $ python setup.py build_ext --external clean
If the paths to the ``zstd.h`` header file and the libraries are uncommon - use the common ``build`` params:
--libraries --include-dirs --library-dirs.
>>> $ python setup.py build_ext --external --include-dirs /opt/zstd/usr/include --libraries zstd --library-dirs /opt/zstd/lib clean
Install from pypi
-----------------
>>> # for Python 2.7+
>>> $ pip install zstd
>>> # or for Python 3.4+
>>> $ pip3 install zstd
API
___
Error
Standard python Exception for zstd module
ZSTD_compress (data[, level, threads]): string|bytes
Function, compress input data block via multiple threads, return compressed block, or raises Error.
Params:
* **data**: string|bytes - input data block, length limited by 2Gb by Python API
* **level**: int - compression level, ultra-fast levels from -100 (ultra) to -1 (fast) available since zstd-1.3.4, and from 1 (fast) to 22 (slowest), 0 or unset - means default (3). Default - 3.
* **threads**: int - how many threads to use, from 0 to 200, 0 or unset - auto-tune by cpu cores count. Default - 0. Since: 1.4.4.1
Aliases: *compress(...)*, *dumps(...)*
Since: 0.1
ZSTD_uncompress (data): string|bytes
Function, decompress input compressed data block, return decompressed block, or raises Error.
Params:
* **data**: string|bytes - input compressed data block, length limited by 2Gb by Python API
Aliases: *decompress(...)*, *uncompress(...)*, *loads(...)*
Since: 0.1
version (): string|bytes
Returns this module's dotted version string.
The first three digits follow the libzstd version.
The fourth digit is the module release number for that version.
Since: 1.3.4.3
ZSTD_version (): string|bytes
Returns the ZSTD library's dotted version string.
Since: 1.3.4.3
ZSTD_version_number (): int
Returns ZSTD library version in format: MAJOR*100*100 + MINOR*100 + RELEASE.
Since: 1.3.4.3
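For example, libzstd 1.5.1 yields ``1*100*100 + 5*100 + 1 = 10501`` (matching the output shown in the "Use" section below).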
ZSTD_external (): int
Returns 0 or 1 depending on whether the ZSTD library was built as external.
Since: 1.5.0.2
Removed
_______
ZSTD_compress_old (data[, level]): string|bytes
Function, compress input data block, return compressed block, or raises Error.
**DEPRECATED**: Returns output not compatible with the ZSTD block header
**REMOVED**: since 1.5.0
Params:
* **data**: string|bytes - input data block, length limited by 2Gb by Python API
* **level**: int - compression level, ultra-fast levels from -5 (ultra) to -1 (fast) available since zstd-1.3.4, and from 1 (fast) to 22 (slowest), 0 or unset - means default (3). Default - 3.
Since: 1.0.0.99.1
ZSTD_uncompress_old (data): string|bytes
Function, decompress input compressed data block, return decompressed block, or raises Error.
**DEPRECATED**: Accepts data whose header is not compatible with the ZSTD block header
**REMOVED**: since 1.5.0
Params:
* **data**: string|bytes - input compressed data block, length limited by 2Gb by Python API
Since: 1.0.0.99.1
Use
___
Module has simple API:
>>> import zstd
>>> dir(zstd)
['Error', 'ZSTD_compress', 'ZSTD_external', 'ZSTD_uncompress', 'ZSTD_version', 'ZSTD_version_number', '__doc__', '__file__', '__loader__', '__name__', '__package__', '__spec__', 'compress', 'decompress', 'dumps', 'loads', 'uncompress', 'version']
>>> zstd.version()
'1.5.1.0'
>>> zstd.ZSTD_version()
'1.5.1'
>>> zstd.ZSTD_version_number()
10501
>>> zstd.ZSTD_external()
0
In python2
>>> data = "123456qwert"
In python3 use bytes
>>> data = b"123456qwert"
>>> cdata = zstd.compress(data, 1)
>>> data == zstd.decompress(cdata)
True
>>> cdata_mt = zstd.compress(data, 1, 4)
>>> cdata == cdata_mt
True
>>> data == zstd.decompress(cdata_mt)
True
| zstd | /zstd-1.5.1.0.tar.gz/zstd-1.5.1.0/README.rst | README.rst |
# ZStreams
**Zeek + Kafka + Spark + KSQL = ZStreams**
ZStreams is the bridge between Zeek and the latest streaming toolkits. With ZStreams you can quickly and easily start processing your Zeek output with the world's best analytic tools. Our examples will lead you through the process.
## Install ZStreams
- **Step 1:**
Install the Zeek Kafka plugin/package - [Kafka_Setup](docs/Kafka_Setup.md)
- **Step 2:** ```pip install zstreams```
- **Step 3:** Follow our set of simple examples to get started
## Examples
| zstreams | /zstreams-0.0.2.tar.gz/zstreams-0.0.2/README.md | README.md |
zsv.ticker
~~~~~~~~~~
``zsv.ticker`` enables flexible and idiomatic regular execution of
tasks::
from zsv.ticker import Ticker
ticker = Ticker()
ticker.start(5)
while ticker.tick():
execute_task()
``Ticker`` aims to be more idiomatic and easy to use than a time calculation and
sleep call, and further enables the instantaneous termination of a waiting
task::
import signal
from time import sleep
from zsv.ticker import Ticker
ticker = Ticker()
ticker.start(5)
def abort(signum, frame):
ticker.stop()
signal.signal(signal.SIGINT, abort)
while ticker.tick():
print("tick")
sleep(2)
print("tock")
The above script wraps a `stop` call in a signal handler registered to SIGINT:
hitting Ctrl+C after the script prints "tick" but before it prints "tock"
will yield a final "tock" before it terminates.
| zsv.ticker | /zsv.ticker-1.1.0.tar.gz/zsv.ticker-1.1.0/README.rst | README.rst |
"""This module defines the `Ticker` class."""
import queue
import threading
import time
from typing import Optional
class Ticker:
"""Delivery of ticks at intervals.
Once started the `Ticker` will periodically make a boolean "tick" of True
available through the `tick` method. Uncollected ticks will not stack or
queue up, and the Ticker will continue to tick regardless. When stopped
`tick` will return False, and any uncollected tick will be lost.
Example::
ticker = Ticker()
ticker.start(5)
while ticker.tick():
execute_task()
"""
def __init__(self) -> None:
self._lock = threading.Lock()
self._tick: Optional[queue.LifoQueue] = None # pylint: disable=E1136
def _schedule(self, interval: int) -> None:
time.sleep(interval)
while True:
self._lock.acquire()
# There is a non-zero risk of _tick being set to None between entering the loop and acquiring the lock
# thus it's better to perform the check here rather than as the while expression.
if not self._tick:
break
# Only use one queue spot for ticking, the second spot is reserved for stopping.
if self._tick.qsize() == 0:
self._tick.put(True)
self._lock.release()
time.sleep(interval)
def start(self, interval: int, immediate: bool = False) -> None:
"""Start the ticker.
Args:
interval: Time between ticks.
immediate: Start the ticker with a tick delivery.
Raises:
Exception if already running.
"""
self._lock.acquire()
if self._tick:
raise RuntimeError("Ticker already started")
self._tick = queue.LifoQueue(2)
if immediate:
self._tick.put(True)
self._lock.release()
thread = threading.Thread(target=self._schedule, args=(interval,), daemon=True)
thread.start()
def tick(self) -> bool:
"""Wait for a tick to be delivered.
Will return immediately if ticker is stopped.
Returns:
True on tick, False if stopped.
"""
if not self._tick:
return False
tick = self._tick.get()
if not tick:
self._lock.acquire()
self._tick = None
self._lock.release()
return tick
def stop(self) -> None:
"""Stop the ticker."""
self._lock.acquire()
if self._tick and self._tick.qsize() != 2:
self._tick.put(False)
self._lock.release() | zsv.ticker | /zsv.ticker-1.1.0.tar.gz/zsv.ticker-1.1.0/src/zsv/ticker/ticker.py | ticker.py |
import requests
import zserio
import base64
from .spec import ZserioSwaggerSpec, HttpMethod, ParamFormat, ParamLocation, ParamSpec
from urllib.parse import urlparse
import os
class HttpClient(zserio.ServiceInterface):
"""
Implementation of HTTP client as Zserio generic service interface.
"""
def __init__(self, *, proto=None, host=None, port=None, spec):
"""
Brief
Constructor to instantiate a client based on an OpenApi specification.
The specification must be located at `spec`, which can be
a URL or a local path to a valid JSON/YAML OpenApi3 spec.
Note: The default URL for the spec with a ZserioSwaggerApp-based server is
{http|https}://{host}:{port}{/path}/openapi.json
Example
from my.package import Service
import zswag
client = Service.Client(zswag.HttpClient(spec=f"http://localhost:5000/openapi.json"))
Arguments
`spec`: URL or local path to a JSON or YAML file which holds the valid
OpenApi3 description of the service. The following information is
extracted from the specification:
- A path to the service is extracted from the first entry in the servers-list.
If such an entry is not available, the client will fall back to the
path-part of the spec URL (without the trailing openapi.json).
- For each operation:
* HTTP method (GET or POST)
* Argument passing scheme (Base64-URL-Param or Binary Body)
`proto`: (Optional) HTTP protocol type, such as "http" or "https".
If this argument is not given, the protocol will be extracted
from the `spec` string, assuming that it is a URL.
`host`: (Optional) Hostname of the target server, such as an IP or
a DNS name. If this argument is not given, the hostname will be extracted
from the `spec` string, assuming that it is a URL.
`port`: (Optional) Port to use for connection. MUST be issued together with
`host`. If the argument is set and `host` is not, it will be ignored.
"""
spec_url_parts = urlparse(spec)
netloc = \
host if host and not port else \
f"{host}:{port}" if host and port else \
spec_url_parts.netloc
self.spec = ZserioSwaggerSpec(spec)
path = self.spec.path() or os.path.split(spec_url_parts.path)[0]
self.path: str = f"{proto or spec_url_parts.scheme}://{netloc}{path}"
if not self.path.endswith("/"):
self.path += "/"
def callMethod(self, method_name, request_data, context=None):
"""
Implementation of ServiceInterface.callMethod.
"""
try:
method_spec = self.spec.method_spec(method_name)
kwargs = {}
for param in method_spec.params:
if param.location == ParamLocation.QUERY:
kwargs["params"] = {"requestData": base64.urlsafe_b64encode(request_data)}
elif param.location == ParamLocation.BODY:
kwargs["data"] = request_data
if method_spec.http_method == HttpMethod.GET:
response = requests.get(self.path + method_name, **kwargs)
elif method_spec.http_method == HttpMethod.POST:
response = requests.post(self.path + method_name, **kwargs)
elif method_spec.http_method == HttpMethod.DELETE:
response = requests.delete(self.path + method_name, **kwargs)
elif method_spec.http_method == HttpMethod.PUT:
response = requests.put(self.path + method_name, **kwargs)
elif method_spec.http_method == HttpMethod.PATCH:
response = requests.patch(self.path + method_name, **kwargs)
else:
raise zserio.ServiceException("Unsupported HTTP method!")
if response.status_code != requests.codes.ok:
raise zserio.ServiceException(str(response.status_code))
return response.content
except Exception as e:
raise zserio.ServiceException("HTTP call failed: " + str(e)) | zswag | /zswag-0.6.0rc1-py3-none-any.whl/zswag_client/client.py | client.py |
from typing import Dict, List, Set, Optional
from enum import Enum
import yaml
import json
import os
import requests
from urllib.parse import urlparse
ZSERIO_OBJECT_CONTENT_TYPE = "application/x-zserio-object"
ZSERIO_REQUEST_PART = "x-zserio-request-part"
ZSERIO_REQUEST_PART_WHOLE = "*"
class HttpMethod(Enum):
GET = 0
POST = 1
PUT = 2
DELETE = 3
PATCH = 4
class ParamLocation(Enum):
QUERY = 0
BODY = 1
PATH = 2
class ParamFormat(Enum):
STRING = 0
BYTE = 1
HEX = 2
BINARY = 3
class ParamSpec:
def __init__(self, *, name: str = "", format: ParamFormat, location: ParamLocation, zserio_request_part: str):
# https://github.com/Klebert-Engineering/zswag/issues/15
assert location in (ParamLocation.QUERY, ParamLocation.BODY)
# https://github.com/Klebert-Engineering/zswag/issues/19
assert zserio_request_part == ZSERIO_REQUEST_PART_WHOLE
# https://github.com/Klebert-Engineering/zswag/issues/20
assert \
(format == ParamFormat.BINARY and location == ParamLocation.BODY) or \
(format == ParamFormat.BYTE and location == ParamLocation.QUERY)
self.name = name
self.format = format
self.location = location
self.zserio_request_part = zserio_request_part
class MethodSpec:
def __init__(self, name: str, path: str, http_method: HttpMethod, params: List[ParamSpec]):
# https://github.com/Klebert-Engineering/zswag/issues/19
assert len(params) == 1
self.name = name
self.path = path
self.http_method = http_method
self.params = params
class ZserioSwaggerSpec:
def __init__(self, spec_url_or_path: str):
spec_url_parts = urlparse(spec_url_or_path)
if spec_url_parts.scheme in {"http", "https"}:
spec_str = requests.get(spec_url_or_path).text
else:
with open(spec_url_or_path, "r") as spec_file:
spec_str = spec_file.read()
self.spec = yaml.safe_load(spec_str)
self.methods: Dict[str, MethodSpec] = {}
for path, path_spec in self.spec["paths"].items():
for method, method_spec in path_spec.items():
http_method = HttpMethod[method.upper()]
name = method_spec["operationId"]
params: List[ParamSpec] = []
if "requestBody" in method_spec:
assert "content" in method_spec["requestBody"]
for content_type in method_spec["requestBody"]["content"]:
if content_type == ZSERIO_OBJECT_CONTENT_TYPE:
params.append(ParamSpec(
format=ParamFormat.BINARY,
location=ParamLocation.BODY,
zserio_request_part=ZSERIO_REQUEST_PART_WHOLE))
if "parameters" in method_spec:
for param in method_spec["parameters"]:
if ZSERIO_REQUEST_PART not in param:
continue
assert "schema" in param and "format" in param["schema"]
params.append(ParamSpec(
name=param["name"],
format=ParamFormat[param["schema"]["format"].upper()],
location=ParamLocation[param["in"].upper()],
zserio_request_part=param[ZSERIO_REQUEST_PART]))
method_spec_object = MethodSpec(
name=name,
path=path,
http_method=http_method,
params=params)
assert name not in self.methods
self.methods[name] = method_spec_object
def method_spec(self, method: str) -> Optional[MethodSpec]:
if method not in self.methods:
return None
return self.methods[method]
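# Minimal usage sketch (hypothetical spec path and method name):
#
#   spec = ZserioSwaggerSpec("openapi.json")
#   method = spec.method_spec("myMethod")
#   if method:
#       print(method.http_method, [p.location for p in method.params])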
def path(self) -> Optional[str]:
if "servers" in self.spec:
servers = self.spec["servers"]
if len(servers):
server = servers[0]
if "url" in server and server["url"]:
server = urlparse(server["url"])
return server.path
return None | zswag | /zswag-0.6.0rc1-py3-none-any.whl/zswag_client/spec.py | spec.py |
import numpy as np
from collections import namedtuple
from .basic_tools import mark_pval, simple_test
# Set fonts
# TODO: rewrite as a class later
def set_plt_backend():
"""
The ctex library has something wrong now.
:return: None
"""
import matplotlib
import matplotlib.pyplot as plt
from matplotlib import rcParams
matplotlib.use("pgf")
pgf_config = {
"font.family": 'serif',
"font.size": 20,
"pgf.rcfonts": False,
"text.usetex": True,
"pgf.preamble": "\n".join([
r"\usepackage{unicode-math}",
r"\setmathfont{XITS Math}",
r"\setmainfont{Arial}",
r"\usepackage{xeCJK}",
# r"\xeCJKsetup{CJKmath=true}",
r"\setCJKmainfont{SimSun}",
]),
}
rcParams.update(pgf_config)
matplotlib.use('module://ipykernel.pylab.backend_inline')
# Automatic line wrapping
def line_feed(txt, max_len, lines=False):
"""
Add '\n' (line breaks) automatically; supports Chinese and English.
:param txt: str, text
:param max_len: the max number of characters in one line
:param lines: to control the max lines and auto-change max_len
:return: text with '\n' insertions
"""
new_txt = ''
if not lines:
txt_s = txt.split(' ')
length = [len(t) for t in txt_s]
step_len = 0
step_txt = ''
for i in range(len(txt_s)):
step_len += length[i]
step_txt += txt_s[i] + ' '
if step_len > max_len:
new_txt += step_txt.strip(' ') + '\n'
step_len = 0
step_txt = ''
elif i == len(txt_s) - 1:
new_txt += step_txt.strip(' ')
elif type(lines) == int:
max_len = len(txt) // lines
length = [max_len] * lines
rest = len(txt) % lines
for i in range(rest):
length[i] += 1
length = [0] + [sum(length[:(i + 1)]) for i in range(len(length))]
for i in range(lines):
new_txt += txt[length[i]:length[i + 1]] + '\n'
new_txt = new_txt.strip('\n')
return new_txt
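# Example: line_feed("a quick brown fox jumps", 10) accumulates word lengths
# and inserts a break once they exceed max_len, giving 'a quick brown\nfox jumps'.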
def mark_significance(plt_, data1_, data2_, x1_, x2_, height='max', rounding=4, y_axis_ran=False,
color_font='black', block_ratio=30, block_ratio_p=30, one_tail=True, norm_fix=False,
vertical_ratio=1, alpha_p=0.05, size_txt_=15, test_pair=False, **kwargs):
"""
Automatically do test and plot significance mark.
:param plt_: plot canvas or subplot canvas obj
:param data1_: sample1
:param data2_: sample2
:param x1_: x coordinate for sample1
:param x2_: x coordinate for sample2
:param height: y coordinate of the mark line, default is 'max', which uses the max of both samples as the height,
while 'mean' and int or float objects are also supported.
:param rounding: numpy.round(obj, rounding) will be performed for the P-value
:param y_axis_ran: determines the max range of the y coordinate, which is related to the block info.
Default is False, which uses the current max range of the y coordinate.
:param color_font: text color
:param block_ratio: the block ratio between the mark and data present
:param block_ratio_p: the block ratio between the mark and p-value text
:param one_tail: do one-tail or two-tail test
:param norm_fix: whether the test assumes a normal distribution.
Default is None, which performs a normal distribution test to determine it.
If True, a normal distribution is assumed directly, and vice versa.
:param vertical_ratio: changes the vertical length of the mark
:param alpha_p: significance level
:param size_txt_: text fontsize
:param test_pair: if paired-test will be performed.
:param kwargs: plt.plot()'s keyword arguments
:return: x list for the plot mark, y list for the plot mark, test summary namedtuple, y_axis_ran
"""
import numpy as np_
if not y_axis_ran:
yaxis_ = plt_.yticks()
y_axis_ran = max(yaxis_[0]) - min(yaxis_[0])
# x coordinates for the mark line
if height == 'max':
y_max = max([max(data1_), max(data2_)])
elif height == 'mean':
y_max = max([np_.mean(data1_), np_.mean(data2_)])
elif type(height) == float or type(height) == int:
y_max = height
if x1_ > x2_:
x1_, x2_ = x2_, x1_
block_y = y_axis_ran / block_ratio
block_p = y_axis_ran / block_ratio_p
dia_y = block_y / vertical_ratio
y1_ = y_max + block_y
y2_ = y_max + block_y + dia_y
x_ = [x1_, x1_, x2_, x2_]
y_ = [y1_, y2_, y2_, y1_]
plt_.plot(x_, y_, **kwargs)
# Mark significance
test_ = simple_test(np_.array(data1_), np_.array(data2_), summary=True, rounding=rounding, norm_fix=norm_fix,
is_pair=test_pair, one_tail=one_tail)
p_value_ = test_.p_value
symbol_ = test_.symbol
if p_value_ <= alpha_p:
txt_ = symbol_
else:
txt_ = str(np_.round(p_value_, 3))
plt_.text((x1_ + x2_) / 2, y2_ + block_p, txt_, rotation=0, rotation_mode='anchor', color=color_font,
fontsize=size_txt_, verticalalignment="center", horizontalalignment="center")
# Used to extend the figure's auto-scaling range
plt__ = plt_.scatter((x1_ + x2_) / 2, y2_ + dia_y)
plt__.set_visible(False)
return x_, y_, test_, y_axis_ran
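# Minimal usage sketch (hypothetical data):
#
#   import matplotlib.pyplot as plt
#   a, b = [1.0, 2.0, 3.0, 2.5], [4.0, 5.0, 6.0, 5.5]
#   plt.boxplot([a, b], positions=[0, 1])
#   mark_significance(plt, a, b, 0, 1, color='black')
#   plt.show()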
# Hex / RGB / RGBA color conversion
def hex_2_rgb(value_):
"""
hex to rgb color
:param value_: hex color
:return: numpy.array, rgb color
"""
value_ = value_.lstrip('#')
lv_ = len(value_)
step_ = lv_ // 3
# Use integer steps (float slicing would raise) and a list so np.array builds a proper vector
return np.array([int(value_[i: i + step_], 16) for i in range(0, lv_, step_)])
def rgb_2_hex(rgb_):
"""
rgb to hex color
:param rgb_: rgb color
:return: str, hex color
"""
return "#"+"".join([i[2:] if len(i[2:])>1 else '0'+i[2:] for i in [hex(rgb_[0]), hex(rgb_[1]), hex(rgb_[2])]])
def rgba_2_rgb(rgba_, background_color_=None):
"""
rgba to rgb color
:param rgba_: rgba color
:param background_color_: only rgb color is allowed
:return: numpy.array, rgb color
"""
if background_color_ is None:
background_color_ = [1, 1, 1]
rgba__ = rgba_.copy()
if len(rgba__) == 1:
rgba__ = rgba__[0]
rgb = np.array(rgba__[:3]) * rgba__[3] + np.array(background_color_) * (1 - rgba__[3])
return rgb
def set_base_color(plt_, base_color_):
"""
Set the axis color. Note that line.set_color did not take effect, so the tick lines are simply hidden.
:param plt_: plot canvas or subplot canvas obj
:param base_color_: axis color
:return: plot canvas or subplot canvas obj
"""
ax1_ = plt_.gca()
ax1_.spines['top'].set_color(base_color_)
ax1_.spines['right'].set_color(base_color_)
ax1_.spines['bottom'].set_color(base_color_)
ax1_.spines['left'].set_color(base_color_)
# line.set_color did not take effect, so just hide the tick lines
for line in ax1_.yaxis.get_ticklines():
line.set_visible(False)
for line in ax1_.xaxis.get_ticklines():
line.set_visible(False)
return ax1_
def set_spines(plt_, lw=2):
"""
Set top and right spines invisible, and set the linewidth of bottom and left spines.
:param plt_: plot canvas obj
:param lw: linewidth of bottom and left spines
:return: plot canvas obj
"""
ax1_ = plt_.gca()
ax1_.spines['top'].set_visible(False)
ax1_.spines['right'].set_visible(False)
ax1_.spines['bottom'].set_linewidth(lw) # 设置底部坐标轴的粗细
ax1_.spines['left'].set_linewidth(lw) # 设置左边坐标轴的粗细
return ax1_ | zsx-some-tools | /zsx_some_tools-2.0.1-py3-none-any.whl/zsx_some_tools/plot_tools.py | plot_tools.py |
import numpy as np
import pandas as pd
from collections import defaultdict, namedtuple
from functools import reduce
# Full permutation (Cartesian product join) function
fn = lambda x, code='': reduce(lambda y, z: [str(i) + code + str(j) for i in y for j in z], x)
def count_list(list_like_, sort=False, ascending=True, dict_out=False):
"""
Perform a fast and decent calculation of the element frequency number.
:param list_like_: iterable object (Any)
:param sort: sort the result or not
:param ascending: sort by ascending or not
:param dict_out: output as dictionary or a pandas.DataFrame form.
:return: decent summary of the given list.
"""
if dict_out:
countlist = defaultdict(int)
for i in list_like_:
countlist[i] += 1
return countlist
else:
s_list = list(set(list_like_))
if len(s_list) > 12 or type(list_like_) != list:
countlist = defaultdict(int)
for i in list_like_:
countlist[i] += 1
countlist = pd.DataFrame.from_dict(countlist, orient='index')
countlist = countlist.reset_index()
else:
countlist = [[i, list_like_.count(i)] for i in s_list]
countlist = pd.DataFrame(countlist)
countlist.columns = [0, 1]
if sort:
countlist = countlist.sort_values(by=countlist.columns[1], ascending=ascending)
return countlist
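# Example: count_list(['a', 'b', 'a']) returns a two-column DataFrame with
# rows ('a', 2) and ('b', 1); dict_out=True returns {'a': 2, 'b': 1} instead.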
# keep the same order
def sort_set_list(list_data__):
"""
Export a sorted result after performing the set() function.
:param list_data__: data to be deduplicated and sorted
:return: list that remove duplicates and sorted as the original order.
"""
list_data_ = list(list_data__)
set_list_ = list(set(list_data_))
set_list_.sort(key=list_data_.index)
return set_list_
# Normalize to normal distribution
def normalize_df(data_, axis=0):
"""
Normalize the df to normal distribution.
:param data_: pandas.DataFrame object
:param axis: perform normalize under columns/rows independently or not, use axis=0, 1, or 'all'.
Default is 0.
:return: Normalized df
"""
data__ = data_.copy()
if type(axis) == int:
data__ = (data__ - np.mean(data__, axis=axis)) / np.std(data__, axis=axis)
elif axis == 'all':
data__ = (data__ - np.mean(np.mean(data__)) / np.std(np.std(data__)))
else:
raise ValueError('parameter \'axis\' should be set among 0, 1 and \'all\'.')
return data__
# Normalize to separately [0, 1]
def double_norm(data_, axis='all'):
"""
Normalize the df separately: positive values map to (0.5, 1], negative values to [0, 0.5),
and zero to 0.5. Usually used for heatmap plotting.
:param data_: pandas.DataFrame object
:param axis: perform double normalize under columns/rows independently or not, use axis=0, 1, or 'all'.
Default is 'all'.
:return: Double normalized df
"""
data__ = data_.copy()
if type(axis) == int:
max_value_ = np.max(data__, axis=axis)
min_value_ = -np.min(data__, axis=axis)
data__[data__ >= 0] = data__[data__ >= 0] / max_value_ / 2 + 0.5
data__[data__ < 0] = data__[data__ < 0] / min_value_ / 2 + 0.5
elif axis == 'all':
max_value_ = np.max(np.max(data__))
min_value_ = - np.min(np.min(data__))
data__[data__ >= 0] = data__[data__ >= 0] / max_value_ / 2 + 0.5
data__[data__ < 0] = data__[data__ < 0] / min_value_ / 2 + 0.5
else:
raise ValueError('parameter \'axis\' should be set among 0, 1 and \'all\'.')
return data__
# test data set follows normal distribution or not
def normal_test(p_data_):
"""
Perform a normality test (Shapiro-Wilk) on each column.
:param p_data_: data to be tested
:return: bool, whether the data is treated as normally distributed
"""
from scipy import stats
stat_ = []
for i in range(p_data_.shape[1]):
data_ = p_data_.loc[p_data_.iloc[:, i] != -1].iloc[:, i]
data_ = data_.astype(float)
stat_ += [stats.shapiro(data_)[1]]
if np.max(stat_) > 0.1:
norm_ = False
else:
norm_ = True
return norm_
# trans p value
def mark_pval(p_v_):
"""
Trans p-value to '*' marks
:param p_v_: p-value
:return: 'Ns', '.', or '*' * n
"""
if p_v_ < 0.0001:
return '*' * 4
elif p_v_ < 0.001:
return '*' * 3
elif p_v_ < 0.01:
return '*' * 2
elif p_v_ < 0.05:
return '*' * 1
elif p_v_ < 0.1:
return '·' * 1
else:
return 'Ns' * 1
def effect_size_hedges_g(test_data_, test_data2_, summary=True, rounding=False, rounding_int=False):
n1_ = len(test_data_)
n2_ = len(test_data2_)
m1_ = np.mean(test_data_)
m2_ = np.mean(test_data2_)
me1_ = np.quantile(test_data_, 0.5)
me2_ = np.quantile(test_data2_, 0.5)
sd1_ = np.std(test_data_)
sd2_ = np.std(test_data2_)
sd_star_ = ((sd1_**2 * (n1_ - 1) + sd2_**2 * (n2_ - 1)) / (n1_ + n2_ - 2)) ** 0.5
g_size_ = (m1_ - m2_) / sd_star_
if summary:
info_ = [g_size_, n1_, n2_, m1_, m2_, me1_, me2_, sd1_, sd2_, sd_star_]
if rounding_int:
info_ = [np.round(i_, rounding) for i_ in info_]
return info_
else:
if rounding_int:
g_size_ = np.round(g_size_, rounding)
return g_size_
def simple_test(test_data, test_data2=False, is_pair=False, summary=False, one_tail=True,
norm_fix=None, equal_var=None, rounding=4):
"""
Perform auto-test on two given samples.
:param test_data: sample1
:param test_data2: sample2
:param is_pair: if a paired test will be performed.
:param summary: output detailed test information or only the P-value
:param one_tail: do a one-tail or two-tail test
:param norm_fix: whether the test assumes a normal distribution.
Default is None, which performs a normal distribution test to determine it.
If True, a normal distribution is assumed directly, and vice versa.
:param equal_var: whether the test assumes equal variance, only used when a t-test is performed.
:param rounding: numpy.round(obj, rounding) will be performed for P-value
:return: Test result.
"""
from scipy import stats
# if len(test_data.shape) == 1:
# dim = 1
if len(test_data.shape) == 2:
if test_data.shape[1] != 1:
print('Error! Dim of data input is larger than 1!')
return
if type(test_data2) == bool:
print('Sorry Error! Single sample test is not supported now!')
return
if one_tail:
if np.mean(test_data) > np.mean(test_data2):
larger = 1
else:
larger = 0
if not rounding and type(rounding) == int:
rounding_int = 1
else:
rounding_int = rounding
if norm_fix is None:
norm1 = stats.shapiro(test_data)[1]
norm2 = stats.shapiro(test_data2)[1]
if min([norm1, norm2]) <= 0.05:
norm = False
else:
norm = True
elif norm_fix:
norm1 = 1
norm2 = 1
norm = True
else:
norm1 = 0
norm2 = 0
norm = False
is_equal_var = 'None'
if is_pair:
if len(test_data) != len(test_data2):
print('Error! Unpaired lengths of input data cannot be used for a paired test!')
return
if norm:
stat = stats.ttest_rel(test_data, test_data2)[1]
name = 'Paired t-test'
if one_tail:
stat = stat / 2
else:
name = 'Wilcoxon signed rank test'
if one_tail:
if larger:
stat = stats.wilcoxon(test_data, test_data2, alternative='greater')[1]
else:
stat = stats.wilcoxon(test_data2, test_data, alternative='greater')[1]
else:
stat = stats.wilcoxon(test_data, test_data2, alternative='two-sided')[1]
else:
if norm:
if equal_var is None:
var_p_ = stats.levene(test_data, test_data2)[1]
elif equal_var:
var_p_ = 0.1
else:
var_p_ = 0.01
if var_p_ > 0.05:
is_equal_var = True
name = 'Homogeneity of variance unpaired t-test'
else:
is_equal_var = False
name = 'Heterogeneity of variance unpaired t-test'
stat = stats.ttest_ind(test_data, test_data2, equal_var=is_equal_var)[1]
if one_tail:
stat = stat / 2
else:
name = 'Mann-Whitney U test'
if one_tail:
if larger:
stat = stats.mannwhitneyu(test_data, test_data2, alternative='greater')[1]
else:
stat = stats.mannwhitneyu(test_data2, test_data, alternative='greater')[1]
else:
stat = stats.mannwhitneyu(test_data, test_data2, alternative='two-sided')[1]
if not summary:
if rounding_int:
return np.round(stat, rounding)
else:
return stat
else:
effect_size_info = effect_size_hedges_g(test_data, test_data2, summary=True,
rounding=rounding, rounding_int=rounding_int)
if stat >= 0.1:
larger_res = ' = '
elif one_tail:
if larger:
larger_res = ' > '
else:
larger_res = ' < '
else:
larger_res = ' != '
symbol = mark_pval(stat)
summary = namedtuple('test_summary', ['p_value', 'relationship', 'symbol', 'normal', 'pair',
'equal_var', 'name', 'effect_size', 'n1', 'n2', 'm1',
'm2', 'me1', 'me2', 'sd1', 'sd2', 'sd_star'])
if rounding_int:
summary_info = [np.round(stat, rounding), larger_res, symbol,
{'normal': str(norm), 'data1_p': np.round(norm1, rounding),
'data2_p': np.round(norm2, rounding)},
str(is_pair), str(is_equal_var), name] + effect_size_info
else:
summary_info = [stat, larger_res, symbol, {'normal': str(norm), 'data1_p': norm1, 'data2_p': norm2},
str(is_pair), str(is_equal_var), name] + effect_size_info
return summary._make(summary_info)
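# Minimal usage sketch (hypothetical data; requires scipy):
#
#   import numpy as np
#   a = np.random.normal(0.0, 1.0, 30)
#   b = np.random.normal(0.5, 1.0, 30)
#   res = simple_test(a, b, summary=True)
#   print(res.p_value, res.name, res.symbol)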
def permutation_test(sample1_, sample2_, iter_num=10000, summary=False, one_tail=False, rounding=4):
"""
Perform permutation test on two given samples.
:param sample1_: sample1
:param sample2_: sample2
:param iter_num: number of random re-splits of the pooled sample, which is the core step of the test.
:param summary: output detailed test information or only the P-value
:param one_tail: do one-tail or two-tail test
:param rounding: numpy.round(obj, rounding) will be performed for P-value
:return: Test result.
"""
import math
import random
from collections import namedtuple
n1 = len(sample1_)
n2 = len(sample2_)
if not rounding and type(rounding) == int:
rounding_int = 1
else:
rounding_int = rounding
if math.comb(n1 + n2, n1) < iter_num:
return 'so easy!'
d0_ = np.mean(sample1_) - np.mean(sample2_)
if not one_tail:
d0_ = abs(d0_)
all_sample_ = list(sample1_) + list(sample2_)
new_distribution = []
for _ in range(iter_num):
random.shuffle(all_sample_)
d1_ = sum(all_sample_[: n1]) / n1 - sum(all_sample_[n1:]) / n2
if not one_tail:
d1_ = abs(d1_)
new_distribution += [d1_]
p_ = np.count_nonzero(d0_ >= new_distribution) / iter_num
larger = 0
if p_ > 0.5:
p_ = 1 - p_
larger = 1
if not one_tail:
p_ = p_ * 2
if not summary:
if rounding_int:
return np.round(p_, rounding)
else:
return p_
else:
if p_ >= 0.1:
larger_res = ' = '
elif one_tail:
if larger:
larger_res = ' > '
else:
larger_res = ' < '
else:
larger_res = ' != '
symbol = mark_pval(p_)
summary = namedtuple('test_summary', ['p_value', 'relationship', 'symbol', 'name'])
if rounding_int:
return summary(np.round(p_, rounding), larger_res, symbol, {'two-tail': not one_tail})
else:
return summary(p_, larger_res, symbol, {'two-tail': not one_tail})
# Pass in a function to measure its computing time
def comput_time(func, *args, iter_num=100, round_num=4, **kwargs):
"""
Test function time used.
:param func: function name
:param args: function args if necessary
:param iter_num: cycle number
:param round_num: numpy.round(obj, round_num) will be performed before output time info
:param kwargs: function keyword args if necessary
:return: mean time used per cycle
"""
import time
t1 = time.time()
for _ in range(iter_num):
func(*args, **kwargs)
t2 = time.time()
return np.round(t2 - t1, round_num)
def exactly_round(float_like_, round_num_):
"""
Round a number to a string form with a given number of decimal places.
:param float_like_: float or int object
:param round_num_: round the given number
:return: string number
"""
float_like_ = str(np.round(float(float_like_), round_num_))
length_ = len(float_like_.split('.')[1])
if length_ == round_num_:
return float_like_
else:
diff = round_num_ - length_
return float_like_ + '0' * diff
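# Example: exactly_round(3.1, 3) -> '3.100' (pads the zeros that np.round drops).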
def time_trans(seconds_, second_round_num=2):
"""
Transform time into a decent human-readable form.
:param seconds_: int or float object indicating pure seconds
:param second_round_num: np.round will be applied to the output seconds
:return: time in hour (if necessary), minute (if necessary) and seconds
"""
if seconds_ == 0:
return '0s'
out_ = []
m_, s_ = divmod(seconds_, 60)
h_, m_ = divmod(m_, 60)
s_ = np.round(s_, second_round_num)
if h_:
out_ += [str(h_) + 'h']
if m_:
out_ += [str(m_) + 'm']
if s_:
if 10 < seconds_ < 30:
s_ = np.round(s_, 1)
elif seconds_ > 30:
s_ = int(s_)
out_ += [str(s_) + 's']
if not out_:
return '0s'
return ' '.join(out_)
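# Example: time_trans(3725) -> '1h 2m 5s'; time_trans(0.5) -> '0.5s'.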
def list_selection(list_like_, target='', exception=False, sep=False, logic_and=True):
"""
Used for list with only str element.
:param list_like_: 1d iterable object with str element
:param target: element with such string will be selected (cooperated with param 'sep' and 'logic_and')
:param exception: element with such string will not be selected (cooperated with param 'sep' and 'logic_and')
:param sep: param 'target' and 'exception' will be split by sep
:param logic_and: splited param 'target' and 'exception' will be used in the logic 'and', 'or'
:return: list after selection
"""
if sep:
target = target.split(sep)
else:
target = [target]
if logic_and:
list_like_ = [i for i in list_like_ if (not sum([0 if tar in i else 1 for tar in target]))]
else:
list_like_ = [i for i in list_like_ if sum([1 if tar in i else 0 for tar in target])]
if exception:
if sep:
exception = exception.split(sep)
else:
exception = [exception]
if logic_and:
list_like_ = [i for i in list_like_ if not sum([1 if exc in i else 0 for exc in exception])]
else:
list_like_ = [i for i in list_like_ if sum([0 if exc in i else 1 for exc in exception])]
return list_like_
# md5 check
def get_md5(file_path_, hash_type='md5', ram=4000000):
"""
Perform hash algorithm for file, default is md5.
:param file_path_: file path
:param hash_type: type of hash algorithm, default is md5
:param ram: chunk size in bytes read per iteration
:return: hash value
"""
import hashlib
hash_type_list = ['md5', 'sha1', 'sha224', 'sha256', 'sha384', 'sha512', 'blake2b', 'blake2s']
if hash_type not in hash_type_list:
raise ValueError('hash_type only support md5, sha1, sha224, sha256, sha384, sha512, blake2b, and blake2s')
m_ = getattr(hashlib, hash_type)()
with open(file_path_, 'rb') as fobj:
while True:
data_ = fobj.read(ram)
if not data_:
break
m_.update(data_)
return m_.hexdigest()
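# Example (hypothetical file): get_md5('data.bin') returns the MD5 hex digest;
# get_md5('data.bin', hash_type='sha256') switches to SHA-256.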
def get_order_ind(list_like_, ascending=True):
list_like_ind_ = list(range(len(list_like_)))
process_obj = pd.DataFrame([list_like_ind_, list_like_]).T
process_obj = process_obj.sort_values(by=1, ascending=ascending)
process_obj = process_obj.reset_index(drop=True)
order_dict_ = {}
for order_, id_ in zip(process_obj.index, process_obj.iloc[:, 0]):
order_dict_[order_] = int(id_)
return order_dict_
def get_first_x_ind(list_like_, order=0, ascending=True):
order_dict_ = get_order_ind(list_like_, ascending=ascending)
try:
return [order_dict_[i_] for i_ in order]
except TypeError:
return order_dict_[order] | zsx-some-tools | /zsx_some_tools-2.0.1-py3-none-any.whl/zsx_some_tools/basic_tools.py | basic_tools.py |
import os
from collections.abc import Iterable
import pandas as pd
import numpy as np
import json
try:
from openpyxl import Workbook
from openpyxl import load_workbook
openpyxl_exist = True
except ModuleNotFoundError:
openpyxl_exist = False
print('\'openpyxl\' Module Not Found. Excel related function cannot to call.')
# Script path
import sys
import inspect
_code_path = sys.path[0].replace('\\', '/') + '/'
# _code_name = inspect.getsourcefile(sys._getframe(1))
# Used to append logging info to the output; a later step can overwrite this line's output
def my_logging(file, string, adding=False):
"""
Support writing logging info in a form like end='\r' (overwritable lines).
:param file: the open (binary) file object
:param string: string object to write
:param adding: if True, append at the end of the file; if False, overwrite the current (last) line.
Either way the cursor is left at the start of the written line so it can be overwritten later.
:return: None
"""
if adding:
file.seek(0, 2)
string1 = (string + '\n').encode()
file.write(string1)
pos = file.tell()
file.seek(pos - len(string1), 0)
def mkdir(path, file=False, silence=False):
"""
Function to make a directory (like mkdir -p).
:param path: the directory path
:param file: file object used to perform my_logging(file, path + ' - Successfully create', adding=True)
:param silence: whether to suppress printing
:return: None
"""
# Strip leading/trailing spaces
path = path.strip()
# Strip the trailing / character
path = path.rstrip("/")
# Check whether the path exists
isExists = os.path.exists(path)
if not isExists:
# Create the directory if it does not exist
os.makedirs(path)
string = path + ' - Successfully create'
if file:
my_logging(file, string, adding=True)
if not silence:
print(string)
else:
# If the directory exists, do not create it and report that it already exists
string = path + ' - Directory already exists'
if file:
my_logging(file, string, adding=True)
if not silence:
print(string)
# wc
def wc_py(path, time_print=False):
"""
To get file line number as linux wc -l
:param path: file path
:param time_print: if print time used
:return: line number
"""
if time_print:
import time
start = time.time()
with open(path, 'rb') as f:
count = 0
last_data = '\n'
while True:
data = f.read(0x400000)
if not data:
break
count += data.count(b'\n')
last_data = data
if last_data[-1:] != b'\n':
count += 1 # Remove this if a wc-like count is needed
if time_print:
end = time.time()
print(round((end - start) * 1000, 2), 'ms')
return count
def listdir(path, target='', exception=False, sep=False, logic_and=True):
"""
An expanded function for os.listdir.
:param path: get files and folders under this given path (need not end with '/')
:param target: element with such string will be selected (cooperated with param 'sep' and 'logic_and')
:param exception: element with such string will not be selected (cooperated with param 'sep' and 'logic_and')
:param sep: param 'target' and 'exception' will be split by sep
:param logic_and: splited param 'target' and 'exception' will be used in the logic 'and', 'or'
:return: file list after selcted
"""
file_list = os.listdir(path)
if sep:
target = target.split(sep)
else:
target = [target]
if logic_and:
file_list = [i for i in file_list if (not sum([0 if tar in i else 1 for tar in target])) and ('.' != i[0])]
else:
file_list = [i for i in file_list if sum([1 if tar in i else 0 for tar in target]) and ('.' != i[0])]
if exception:
if sep:
exception = exception.split(sep)
else:
exception = [exception]
if logic_and:
file_list = [i for i in file_list if not sum([1 if exc in i else 0 for exc in exception])]
else:
file_list = [i for i in file_list if sum([0 if exc in i else 1 for exc in exception])]
return file_list
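# Example (hypothetical directory): listdir('/data', target='csv,2023', sep=',')
# keeps only non-hidden entries whose names contain both 'csv' and '2023'.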
def reform_size(size_, detail=False, remain=3, rounding=2):
if size_ < 0:
raise ValueError('size can not be a negative number.')
if size_ < 1024:
return str(size_) + 'B'
import math
magnitude_name = ['B', 'KB', 'MB', 'GB', 'TB', 'PB', 'EB', 'ZB', 'YB']
magnitude = int(math.log(size_, 1024))
if not detail:
size__ = np.round(size_ / (1024 ** (magnitude)), rounding)
return str(size__) + magnitude_name[magnitude]
else:
size__ = size_
infomation = []
for i_ in range(magnitude):
size__, size__0 = divmod(size__, 1024)
infomation += [[size__, size__0]]
out_num = infomation[-1]
for info in infomation[:-1][::-1]:
out_num += [info[1]]
out_string_ = []
for num_, name_ in zip(out_num, magnitude_name[: magnitude + 1][::-1]):
if num_ != 0:
out_string_ += [str(num_) + name_]
return ' '.join(out_string_[: remain])
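# Example: reform_size(1536) -> '1.5KB'; reform_size(1536, detail=True) -> '1KB 512B'.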
def getdirsize(path_, **kwargs):
"""
Get the whole size of the directory
:param path_: directory path
:param kwargs: keyword arguments for reform_size()
:return: size
"""
size_ = 0
for root_, dirs_, files_ in os.walk(path_):
size_ += sum([os.path.getsize(os.path.join(root_, file_)) for file_ in files_])
return reform_size(size_, **kwargs)
# List all file paths under a directory, fast version
def get_all_file_fast(path_):
"""
    Get all file paths under the directory in a fast way
    :param path_: directory path
:return: all file paths
"""
all_path_ = []
for root_, dirs_, files_ in os.walk(path_):
all_path_ += [[os.path.join(root_, '').replace('\\', '/'), os.path.join(root_, file_).replace('\\', '/')] for
file_ in files_]
return all_path_
# List all file paths under a directory, with st.listdir-style selection
def get_all_file(path_, thres=99999, **kwargs):
"""
    Get all file paths under the directory in a detailed way, filtered by kwargs used in list_selection().
    :param path_: directory path
    :param thres: file-number limit to bail out of an unaffordably large task. Default is 99999.
:param kwargs: to select paths with keyword arguments used in list_selection().
:return: all selected file paths
"""
from .basic_tools import list_selection
path_use = [path_]
all_file_path = []
while True:
new_path_use = []
for path_u in path_use:
targets = listdir(path_u)
for target in targets:
if os.path.isfile(path_u + target):
all_file_path += [path_u + target]
else:
new_path_use += [path_u + target + '/']
if len(new_path_use) == 0:
print('All files finished! NICE!')
break
elif len(all_file_path) > thres:
            print('File count exceeds thres! Use the \'thres\' parameter to raise the limit.')
break
path_use = new_path_use
all_file_path = list_selection(all_file_path, **kwargs)
return all_file_path
def path_diagnosis(path_):
    if path_[-1] != '/' and path_[-1] != '\\':
path_ = path_ + '/'
return path_
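# e.g. path_diagnosis('/data') -> '/data/'  (hypothetical path; a path already ending
# in '/' or '\\' is returned unchanged)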
# pickle file read/write
def read_pickle(path_):
"""
Read a pickle file.
:param path_: file path
:return: obj
"""
import pickle
with open(path_, 'rb') as f:
data_ = pickle.load(f)
return data_
def write_pickle(path_, data_):
"""
Write the object to a pickle file.
:param path_: file save path
:param data_: obj
:return: None
"""
import pickle
with open(path_, 'wb+') as f:
pickle.dump(data_, f)
def read_json(path_):
"""
Read json file as a dictionary
:param path_: json file path
:return: dictionary obj
"""
with open(path_, 'r') as file_:
config_ = json.load(file_)
return config_
def write_json(path_, json_file_):
"""
Save the dictionary obj as a json file
:param path_: json file save path
:param json_file_: dictionary obj
:return: None
"""
with open(path_, 'w+') as file_:
json.dump(json_file_, file_)
# Excel-related functions
if openpyxl_exist:
def _read_sheet(sheets_, sheet_identify):
if type(sheet_identify) == int:
sheet1_ = sheets_[sheet_identify]
elif type(sheet_identify) == str:
sheet_use = np.argmax([i_.title == sheet_identify for i_ in sheets_])
sheet1_ = sheets_[sheet_use]
        # Iterate over all rows
rows = sheet1_.rows
data_ = []
for i_, row in enumerate(rows):
row_val = [col.value for col in row]
if i_ == 0:
columns_use_ = row_val
else:
data_ += [row_val]
data_ = pd.DataFrame(data_, columns=columns_use_)
return data_
def read_xlsx(path_, sheet_identify=0, dict_out=False):
"""
        Read an Excel file with flexible sheet selection
        :param path_: Excel file path
        :param sheet_identify: int, str, list, or None; determines the sheet(s) to be read
        :param dict_out: if True, return a dict of dfs instead of a list
        :return: a single df, or a list/dict of dfs
"""
wb_ = load_workbook(path_)
sheets = wb_.worksheets
is_iter = isinstance(sheet_identify, Iterable)
if not is_iter:
return _read_sheet(sheets, sheet_identify)
else:
if not dict_out:
return [_read_sheet(sheets, identify) for identify in sheet_identify]
else:
out_dict_ = {}
for identify in sheet_identify:
out_dict_[identify] = _read_sheet(sheets, identify)
return out_dict_
def _input_sheet(ws_, df_):
index_value = df_.index.names
info = list(index_value) + list(df_.columns)
ws_.append(info)
for i_ in range(df_.shape[0]):
index_value = df_.index[i_]
is_iter = isinstance(index_value, Iterable)
if not is_iter:
index_value = [index_value]
info = list(index_value) + list(df_.iloc[i_])
ws_.append(info)
return ws_
def save_xlsx(save_path_, df_list_, titles=None):
"""
Save df, list of dfs, or dict of dfs to excel file
:param save_path_: excel file path
:param df_list_: df, list of dfs, or dict of dfs
        :param titles: sheet names for the dfs when df_list_ is a list
        :return: None
"""
is_df = type(df_list_) == pd.core.frame.DataFrame
if is_df:
df_list_ = [df_list_]
        is_dict = type(df_list_) == dict
        if not is_dict:
            if not titles:
                titles = ['sheet' + str(i_) for i_ in range(1, len(df_list_) + 1)]
            out_dict_ = {}
            for title, df_ in zip(titles, df_list_):
                out_dict_[title] = df_
        else:
            out_dict_ = df_list_
wb_ = Workbook()
for i_, zip_info in enumerate(zip(out_dict_.keys(), out_dict_.values())):
ws_ = wb_.create_sheet(zip_info[0], i_)
ws_ = _input_sheet(ws_, zip_info[1])
wb_.save(save_path_)
# universal read framework
def read_file(path_, sep=False, extension=None, sheet_identify=0, split=None, num_limit=0, decode='utf-8', **kwargs):
"""
A universal read framework
    :param path_: file path
    :param sep: delimiter to use
    :param extension: override the file extension inferred from the path
    :param sheet_identify: str, int, list, or None
    :param split: for fa and fq files, the id will be split by the given separator, keeping the front part
    :param num_limit: for fa and fq files, stop reading after this many records
    :param decode: for fa and fq files, decode parameter
    :param kwargs: keyword arguments for table-like files
:return: file
"""
file_name = path_.rsplit('/', 1)[-1]
if extension is not None:
houzhui_ = extension
else:
houzhui_ = file_name.rsplit('.', 1)[-1]
if houzhui_ in ['txt', 'tsv', 'bed', 'tlx']:
if not sep:
sep = '\t'
return pd.read_csv(path_, sep=sep, **kwargs)
elif houzhui_ in ['csv']:
if not sep:
sep = ','
return pd.read_csv(path_, sep=sep, **kwargs)
elif houzhui_ in ['xls', 'xlsx', 'xlsm', 'xlsb', 'odf', 'ods', 'odt']:
return pd.read_excel(path_, sheet_name=sheet_identify, **kwargs)
elif houzhui_ in ['pkl']:
return read_pickle(path_)
elif houzhui_ in ['fa', 'fasta']:
from .bio_tools import read_fasta
return read_fasta(path_, split)
elif '.fastq' in file_name or '.fq' in file_name:
from .bio_tools import read_fastq, read_fastq_gz
if houzhui_ in ['gz']:
return read_fastq_gz(path_, split, num_limit, decode)
else:
return read_fastq(path_, split)
elif houzhui_ in ['pdb']:
from .bio_tools import read_pdb_file
return read_pdb_file(path_)
elif houzhui_ in ['json']:
return read_json(path_)
else:
raise ValueError('File type .' + houzhui_ + ' has not been added to this function yet. ')
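# e.g. df = read_file('results.tsv')  # hypothetical path; the reader is chosen from the extension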
def _mark_code_path(path_, name_, replace=False):
_path = path_ + 'result_from_code.txt'
isExist = os.path.exists(_path)
if replace:
_info_dict = {}
if isExist:
with open(_path, 'r') as _file:
for _line in _file:
_line_info = _line.strip('\n').split('\t')
_info_dict[_line_info[0]] = _line_info[1]
_info_dict[name_] = _code_path
with open(_path, 'w+') as _file:
for _key, _value in zip(_info_dict.keys(), _info_dict.values()):
_string = _key + '\t' + _value + '\n'
_file.write(_string)
else:
if isExist:
with open(_path, 'a+') as _file:
_string = name_ + '\t' + _code_path + '\n'
_file.write(_string)
else:
with open(_path, 'w+') as _file:
_string = name_ + '\t' + _code_path + '\n'
_file.write(_string)
# universal write framework
def write_file(path_, data_, sep=False, extension=None, sheet_identify='sheet1', mark_code_path=True, replace=False, **kwargs):
"""
A universal write framework
    :param path_: file path
    :param data_: data to save
    :param sep: delimiter to use
    :param extension: override the file extension inferred from the path
    :param sheet_identify: str, int, list, or None
    :param mark_code_path: record where the generating code lives in a config-like file
    :param replace: if True, replace an existing entry for the same result path in the config-like file
    :param kwargs: keyword arguments for table-like files
:return: None
"""
file_name = path_.rsplit('/', 1)[-1]
if extension is not None:
houzhui_ = extension
else:
houzhui_ = file_name.rsplit('.', 1)[-1]
if houzhui_ in ['txt', 'tsv', 'bed', 'tlx']:
if not sep:
sep = '\t'
data_.to_csv(path_, sep=sep, **kwargs)
elif houzhui_ in ['csv']:
if not sep:
sep = ','
data_.to_csv(path_, sep=sep, **kwargs)
elif houzhui_ in ['xls', 'xlsx', 'xlsm', 'xlsb', 'odf', 'ods', 'odt']:
data_.to_excel(path_, sheet_name=sheet_identify, **kwargs)
elif houzhui_ in ['pkl']:
write_pickle(path_, data_)
elif houzhui_ in ['fa', 'fasta']:
from .bio_tools import write_fasta
write_fasta(path_, data_)
elif 'fastq' in file_name or 'fq' in file_name:
from .bio_tools import write_fastq, write_fastq_gz
if houzhui_ in ['gz']:
write_fastq_gz(path_, data_)
else:
write_fastq(path_, data_)
elif houzhui_ in ['pdb']:
from .bio_tools import write_pdb_file
write_pdb_file(path_, data_)
elif houzhui_ in ['json']:
write_json(path_, data_)
else:
raise ValueError('File type .' + houzhui_ + ' has not been added to this function yet. ')
if mark_code_path:
        _mark_code_path(path_.rsplit('/', 1)[0] + '/', path_.rsplit('/', 1)[-1], replace=replace)
from collections import defaultdict
from functools import reduce
import gzip
import os
import pandas as pd
import numpy as np
# Full-permutation (Cartesian product) helper
"""
Permutations list.
:x: two-layer nested list with str as fundamental element.
:code: joiner string, default is ''
:return: str list
"""
fn = lambda x, code='': reduce(lambda y, z: [str(i) + code + str(j) for i in y for j in z], x)
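# e.g. fn([['A', 'T'], ['C', 'G']]) -> ['AC', 'AG', 'TC', 'TG']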
# PDB file related information
class PDBfilereader(object):
from collections import defaultdict as __defaultdict
from collections import namedtuple as __namedtuple
__columns = ['ATOM', 'SPACE2', 'serial', 'SPACE1', 'name', 'altLoc', 'resName', 'SPACE1', 'chainID', 'resSeq',
'iCode', 'SPACE3', 'x', 'y', 'z', 'occupancy', 'tempFactor', 'SPACE6', 'segID', 'element', 'charge']
__split = [0, 4, 6, 11, 12, 16, 17, 20, 21, 22, 26, 27,
30, 38, 46, 54, 60, 66, 72, 76, 78, 80]
__type = [str, str, int, str, str, str, str, str, str, int,
str, str, float, float, float, float, float, str, str, str, str]
__round = [False, False, False, False, False, False, False, False, False, False,
False, False, 3, 3, 3, 2, 2, False, False, False, False]
__direction = ['left', 'left', 'right', 'left', 'middle', 'left', 'left', 'left', 'left', 'right',
'left', 'left', 'right', 'right', 'right', 'right', 'right', 'left', 'left', 'right', 'left']
__info = __namedtuple('pdb_info', ['split_info', 'type_info', 'round_info', 'direction_info'])
__info_dict = __defaultdict(tuple)
for __i, __col in enumerate(__columns):
__info_use = __info(__split[__i: __i + 2], __type[__i], __round[__i], __direction[__i])
__info_dict[__col] = __info_use
def __init__(self, silence=False):
if not silence:
print('pdb_columns = pdb_file_reader.columns\n'
'pdb_split = pdb_file_reader.split\n'
'pdb_type = pdb_file_reader.typing\n'
'pdb_round = pdb_file_reader.rounding\n'
'pdb_direction = pdb_file_reader.direction\n'
'pdb_info_dict = pdb_file_reader.info_dict')
@property
def columns(self):
return self.__columns
@property
def split(self):
return self.__split
@property
def typing(self):
return self.__type
@property
def rounding(self):
return self.__round
@property
def direction(self):
return self.__direction
@property
def info_dict(self):
return self.__info_dict
# amino acid related information
class Aminoacid(object):
from collections import defaultdict as __defaultdict
__seqnum = ['A', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'K', 'L', 'M', 'N', 'P', 'Q', 'R', 'S', 'T', 'V', 'W', 'Y', '*']
__aa_dict = {
'TTT': 'F', 'TTC': 'F', 'TTA': 'L', 'TTG': 'L',
'TCT': 'S', 'TCC': 'S', 'TCA': 'S', 'TCG': 'S',
'TAT': 'Y', 'TAC': 'Y', 'TAA': '*', 'TAG': '*',
'TGT': 'C', 'TGC': 'C', 'TGA': '*', 'TGG': 'W',
'CTT': 'L', 'CTC': 'L', 'CTA': 'L', 'CTG': 'L',
'CCT': 'P', 'CCC': 'P', 'CCA': 'P', 'CCG': 'P',
'CAT': 'H', 'CAC': 'H', 'CAA': 'Q', 'CAG': 'Q',
'CGT': 'R', 'CGC': 'R', 'CGA': 'R', 'CGG': 'R',
'ATT': 'I', 'ATC': 'I', 'ATA': 'I', 'ATG': 'M',
'ACT': 'T', 'ACC': 'T', 'ACA': 'T', 'ACG': 'T',
"AAT": "N", "AAC": "N", "AAA": "K", "AAG": "K",
"AGT": "S", "AGC": "S", "AGA": "R", "AGG": "R",
"GTT": "V", "GTC": "V", "GTA": "V", "GTG": "V",
"GCT": "A", "GCC": "A", "GCA": "A", "GCG": "A",
"GAT": "D", "GAC": "D", "GAA": "E", "GAG": "E",
"GGT": "G", "GGC": "G", "GGA": "G", "GGG": "G"}
__nt_dict = __defaultdict(list)
for __nt in __aa_dict.keys():
__nt_dict[__aa_dict[__nt]] += [__nt]
    # Degenerate base codes (each named after the letter following the excluded base)
__degeneracy = {'W': ['A', 'T'], 'S': ['G', 'C'], 'K': ['G', 'T'], 'M': ['A', 'C'], 'R': ['A', 'G'],
'Y': ['C', 'T'],
'B': ['T', 'C', 'G'], 'D': ['T', 'A', 'G'], 'H': ['T', 'C', 'A'], 'V': ['A', 'C', 'G'],
'N': ['A', 'C', 'T', 'G']}
def __init__(self, silence=False):
if not silence:
print('aa_dict = amino_acid_info.aa_dict\n'
'nt_dict = amino_acid_info.nt_dict\n'
'degeneracy = amino_acid_info.degeneracy\n'
'seqnum = amino_acid_info.seqnum')
@property
def seqnum(self):
return self.__seqnum
@property
def aa_dict(self):
return self.__aa_dict
@property
def nt_dict(self):
return self.__nt_dict
@property
def degeneracy(self):
return self.__degeneracy
# PDB file read write
def read_pdb_file(path_):
"""
Read pdb file.
:param path_: pdb file path
:return: pdb file
"""
def read_pdb_line(line__, pdb_columns_, pdb_info_dict_):
line_info_ = []
for col__ in pdb_columns_:
split_info_ = pdb_info_dict_[col__].split_info
type_info_ = pdb_info_dict_[col__].type_info
try:
info_ = type_info_(line__[split_info_[0]: split_info_[1]].strip(' '))
except ValueError:
info_ = ''
line_info_ += [info_]
return line_info_
pdb_file_reader = PDBfilereader(silence=True)
pdb_columns = pdb_file_reader.columns
pdb_info_dict = pdb_file_reader.info_dict
data_ = []
with open(path_, 'r+') as pdb_file_:
for line_ in pdb_file_:
line_ = line_.strip('\n')
data_ += [read_pdb_line(line_, pdb_columns, pdb_info_dict)]
data_ = pd.DataFrame(data_, columns=pdb_columns)
return data_
def write_pdb_file(path_, data_):
"""
Save df to pdb file (Format requirements are strict)
:param path_: save path
:param data_: pdb df
:return: None
"""
from .basic_tools import exactly_round
def write_pdb_block(string__, value__, col__, i__, pdb_info_dict_):
split_info = pdb_info_dict_[col__].split_info
round_info = pdb_info_dict_[col__].round_info
direction_info = pdb_info_dict_[col__].direction_info
if round_info:
try:
value__ = exactly_round(value__, round_info)
except ValueError:
value__ = ''
value__ = str(value__)
length_exp = split_info[1] - split_info[0]
length_true = len(value__)
if length_true == length_exp:
string__ += value__
elif length_true > length_exp:
raise ValueError('Value in row \'' + str(i__) + '\' and in col \'' +
col__ + '\' (\'' + value__ + '\') is too long to be set in a PDB file.')
else:
diff = length_exp - length_true
if direction_info == 'right':
value__ = ' ' * diff + value__
elif direction_info == 'left':
value__ = value__ + ' ' * diff
elif direction_info == 'middle':
value__ = ' ' + value__ + ' ' * (diff - 1)
string__ += value__
return string__
pdb_file_reader = PDBfilereader(silence=True)
pdb_info_dict = pdb_file_reader.info_dict
with open(path_, 'w+') as pdb_file_:
for i_ in range(data_.shape[0]):
string_ = ''
for j_, col_ in enumerate(data_.columns):
value_ = data_.iloc[i_, j_]
string_ = write_pdb_block(string_, value_, col_, i_, pdb_info_dict)
pdb_file_.write(string_ + '\n')
# kept for backwards compatibility with old code
def amino_acid():
"""
    Replaced by class Aminoacid()
    :return: amino acid related information
"""
seqnum = ['A', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'K', 'L', 'M', 'N', 'P', 'Q', 'R', 'S', 'T', 'V', 'W', 'Y', '*']
aa_dict = {
'TTT': 'F', 'TTC': 'F', 'TTA': 'L', 'TTG': 'L',
'TCT': 'S', 'TCC': 'S', 'TCA': 'S', 'TCG': 'S',
'TAT': 'Y', 'TAC': 'Y', 'TAA': '*', 'TAG': '*',
'TGT': 'C', 'TGC': 'C', 'TGA': '*', 'TGG': 'W',
'CTT': 'L', 'CTC': 'L', 'CTA': 'L', 'CTG': 'L',
'CCT': 'P', 'CCC': 'P', 'CCA': 'P', 'CCG': 'P',
'CAT': 'H', 'CAC': 'H', 'CAA': 'Q', 'CAG': 'Q',
'CGT': 'R', 'CGC': 'R', 'CGA': 'R', 'CGG': 'R',
'ATT': 'I', 'ATC': 'I', 'ATA': 'I', 'ATG': 'M',
'ACT': 'T', 'ACC': 'T', 'ACA': 'T', 'ACG': 'T',
"AAT": "N", "AAC": "N", "AAA": "K", "AAG": "K",
"AGT": "S", "AGC": "S", "AGA": "R", "AGG": "R",
"GTT": "V", "GTC": "V", "GTA": "V", "GTG": "V",
"GCT": "A", "GCC": "A", "GCA": "A", "GCG": "A",
"GAT": "D", "GAC": "D", "GAA": "E", "GAG": "E",
"GGT": "G", "GGC": "G", "GGA": "G", "GGG": "G"}
nt_dict = defaultdict(list)
for nt in aa_dict.keys():
nt_dict[aa_dict[nt]] += [nt]
    # Degenerate base codes (each named after the letter following the excluded base)
other_name = {'W': ['A', 'T'], 'S': ['G', 'C'], 'K': ['G', 'T'], 'M': ['A', 'C'], 'R': ['A', 'G'], 'Y': ['C', 'T'],
'B': ['T', 'C', 'G'], 'D': ['T', 'A', 'G'], 'H': ['T', 'C', 'A'], 'V': ['A', 'C', 'G']}
return seqnum, aa_dict, nt_dict, other_name
def design_motif(motif_name_):
"""
    Use nucleotide degeneracy names to design motifs. For example, the 'WRC' motif equals ['AAC', 'AGC', 'TAC', 'TGC']
:param motif_name_: str, degeneracy name
:return: list of nucleotide motif
"""
amino_acid_info_ = Aminoacid(silence=True)
degeneracy_ = amino_acid_info_.degeneracy
motif_list_ = []
for alphabet_ in motif_name_:
if alphabet_ in degeneracy_.keys():
motif_list_ += [degeneracy_[alphabet_]]
else:
motif_list_ += [[alphabet_]]
return fn(motif_list_)
# dna to amino acid
def translate(dna_reference, aa_dictionary, silence=False):
"""
    Translate a nucleotide (DNA) sequence into an amino acid (protein) sequence
    :param dna_reference: DNA sequence
    :param aa_dictionary: dict, keys are DNA codons, values are amino acid names
    :param silence: if True, fail silently (return None) instead of printing a warning
:return: amino acid sequence
"""
length = len(dna_reference)
if length % 3 != 0:
if silence:
return None
else:
return print('DNA length can not be divided by 3.')
aa_seq = ''
for i in range(length // 3):
aa_seq += aa_dictionary[dna_reference[3 * i: 3 * (i + 1)]]
return aa_seq
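# e.g. translate('ATGTAA', Aminoacid(silence=True).aa_dict) -> 'M*'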
def find_motif(sequence_, motif_, mot_p=2):
"""
    Find the motif positions in the given sequence
    :param sequence_: str, sequence to search
    :param motif_: list, motif designed by design_motif()
    :param mot_p: offset added to each match start, anchoring positions on the mot_p-th nucleotide of the motif
    :return: list of positions where the motif occurs
"""
result_ = []
mot_l = len(motif_[0])
for i_ in range(len(sequence_) - len(motif_[0]) + 1):
mot_ = sequence_[i_: i_ + mot_l]
if mot_ in motif_:
result_ += [i_ + mot_p]
return result_
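# e.g. find_motif('AACAAC', design_motif('WRC')) -> [2, 5]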
def count_motif(sequence_, motif_):
"""
    Count the occurrences of a motif in the given sequence
    :param sequence_: str, sequence to search
    :param motif_: list, motif designed by design_motif()
    :return: int, motif count
"""
num_ = 0
mot_l = len(motif_[0])
for i_ in range(len(sequence_) - len(motif_[0]) + 1):
mot_ = sequence_[i_: i_ + mot_l]
if mot_ in motif_:
num_ += 1
return num_
def extract_motif(sequence_, motif_, extract_1=6, extract_2=False):
    """
    Extract motif occurrences together with their flanking context
    :param sequence_: str, sequence to search
    :param motif_: list, motif designed by design_motif()
    :param extract_1: number of bases to keep on the left of the motif
    :param extract_2: number of bases to keep on the right of the motif
    :return: list of [left_flank, motif, right_flank] triples
    """
    n_ = len(motif_[0])  # motif length must be known before computing the scan range
    start = extract_1
    end = int(len(sequence_)) - (extract_2 + n_ - 1)
    result_ = []
    for i_ in range(start, end):
mot = sequence_[i_: i_ + n_]
if mot in motif_:
seq_out = sequence_[i_ - extract_1: i_ + extract_2 + n_]
seq_left = seq_out[: extract_1]
seq_mid = seq_out[extract_1: extract_1 + n_]
seq_end = seq_out[extract_1 + n_:]
result_ += [[seq_left, seq_mid, seq_end]]
return result_
def get_unique_clonotype(clonotype_data_):
"""
:param clonotype_data_:
:return:
"""
iloc_ind = []
ind_use = set()
for ind in clonotype_data_.index:
if ind not in ind_use:
iloc_ind += [True]
else:
iloc_ind += [False]
ind_use.add(ind)
return clonotype_data_.iloc[iloc_ind]
def read_fasta(path_, split=None):
"""
Read fasta file as a df
:param path_: fasta file path
:param split: id info will split by the given separation and keep the front part
:return: df, index is id, and only one column is sequence
"""
i_ = 0
result_ = []
fasta2 = ''
for line_ in open(path_):
line_ = line_.strip("\n").strip("\r")
if '>' in line_:
if i_:
result_ += [[fasta1, fasta2]]
fasta2 = ''
fasta1 = line_.split('>', 1)[-1].split(split)[0] # need more parameter
i_ = 1
else:
fasta2 += line_
result_ += [[fasta1, fasta2]]
result_ = pd.DataFrame(result_, columns=['ID', 'seq'])
result_ = result_.set_index('ID')
return result_
def write_fasta(path_, data__):
"""
Save fasta file
:param path_: save path
    :param data__: df
:return: None
"""
if data__.shape[1] == 1:
data_ = data__.reset_index()
elif data__.shape[1] > 2:
data_ = data__.iloc[:, :2]
else:
data_ = data__
with open(path_, 'w+') as file_:
for i in range(data_.shape[0]):
file_.write('>' + str(data_.iloc[i, 0]) + '\n')
file_.write(str(data_.iloc[i, 1]) + '\n')
def read_fastq(path_, split=None):
"""
Read fastq file as a df
:param path_: fastq file path
:param split: id info will split by the given separation and keep the front part
    :return: df, index is id, with three columns for sequence, the '+' line, and sequencing quality
"""
fastq1 = []
fastq2 = []
fastq3 = []
fastq4 = []
for i_, line_ in enumerate(open(path_)):
line_ = line_.strip("\n").strip("\r")
if i_ % 4 == 0:
line_ = line_.split(sep='@')[-1].split(split)[0] # more parameter
fastq1 += [line_]
if i_ % 4 == 1:
fastq2 += [line_]
if i_ % 4 == 2:
fastq3 += [line_]
if i_ % 4 == 3:
fastq4 += [line_]
fastq1 = pd.DataFrame(fastq1)
fastq2 = pd.DataFrame(fastq2)
fastq3 = pd.DataFrame(fastq3)
fastq4 = pd.DataFrame(fastq4)
fastq = pd.concat([fastq1, fastq2, fastq3, fastq4], axis=1)
fastq.columns = ['ID', 'seq', '+', 'qua']
fastq = fastq.set_index('ID')
return fastq
def write_fastq(path_, data__):
"""
Save fastq file
:param path_: save path
    :param data__: df
:return: None
"""
if data__.shape[1] == 3:
data_ = data__.reset_index()
elif data__.shape[1] > 4:
data_ = data__.iloc[:, :4]
else:
data_ = data__
with open(path_, 'w+') as file_:
for i in range(data_.shape[0]):
file_.write('@' + str(data_.iloc[i, 0]) + '\n')
file_.write(str(data_.iloc[i, 1]) + '\n')
file_.write(str(data_.iloc[i, 2]) + '\n')
file_.write(str(data_.iloc[i, 3]) + '\n')
def read_fastq_gz(path, split='.', num_limit=0, decode='utf-8'):
"""
Read fastq.gz file as a df
    :param path: fastq file path
    :param split: id info will be split by the given separator, keeping the front part
    :param num_limit: stop reading after this many records (0 means no practical limit)
    :param decode: decode parameter
    :return: df, index is id, with three columns for sequence, the '+' line, and sequencing quality
"""
fastq1 = []
fastq2 = []
fastq3 = []
fastq4 = []
i = 0
if num_limit == 0:
num_limit = 99999999
with gzip.open(path, 'r') as file_:
for line_ in file_:
line_ = line_.decode(decode).strip('\n').strip("\r")
if ('\x00' in line_) and i > 0:
continue
if i % 4 == 0:
line_ = line_.split(sep='@')[-1].split(sep=split)[0] # more parameter
fastq1 += [line_]
elif i % 4 == 1:
fastq2 += [line_]
elif i % 4 == 2:
fastq3 += [line_]
else:
fastq4 += [line_]
i += 1
if i == num_limit * 4:
break
fastq1 = pd.DataFrame(fastq1)
fastq2 = pd.DataFrame(fastq2)
fastq3 = pd.DataFrame(fastq3)
fastq4 = pd.DataFrame(fastq4)
fastq = pd.concat([fastq1, fastq2, fastq3, fastq4], axis=1)
fastq.columns = ['ID', 'seq', '+', 'qua']
fastq = fastq.set_index('ID')
return fastq
def write_fastq_gz(path_, data_):
    """
    Save fastq.gz file
    :param path_: save path
    :param data_: df
    :return: None
    """
    with gzip.open(path_, 'w+') as file_:
        for i in range(data_.shape[0]):
            # records are newline-delimited so that read_fastq_gz can parse them back
            file_.write(('@' + str(data_.iloc[i, 0]) + '\n').encode())
            file_.write((str(data_.iloc[i, 1]) + '\n').encode())
            file_.write((str(data_.iloc[i, 2]) + '\n').encode())
            file_.write((str(data_.iloc[i, 3]) + '\n').encode())
def read_fastq_gz_onlyseq(path_, split='.', num_limit=False, decode='utf-8'):
    """
    Read fastq.gz file as a df containing only id and sequence
    :param path_: fastq file path
    :param split: id info will be split by the given separator, keeping the front part
    :param num_limit: stop reading after this many records (False for no limit)
    :param decode: decode parameter
    :return: df, index is id, one column for the sequence
    """
fastq1 = []
fastq2 = []
i = 0
with gzip.open(path_, 'r') as f1:
# con = f1.readlines()
for line in f1:
line = line.decode(decode).strip('\n').strip("\r")
if ('\x00' in line) and i > 0:
continue
if i % 4 == 0:
line = line.split(sep='@')[-1].split(sep=split)[0] # more parameter
fastq1 += [line]
elif i % 4 == 1:
fastq2 += [line]
i += 1
if i == num_limit * 4:
break
fastq1 = pd.DataFrame(fastq1)
fastq2 = pd.DataFrame(fastq2)
fastq = pd.concat([fastq1, fastq2], axis=1)
fastq.columns = ['ID', 'seq']
fastq = fastq.set_index('ID')
return fastq
def complementary(sequence, reverse=True):
"""
Reverse and complementary the sequence
:param sequence: sequence
:param reverse: reverse or not
:return: processed sequence
"""
if reverse:
sequence = sequence[::-1]
cha = ""
for c in range(len(sequence)):
if sequence[c] == "A":
cha += "T"
elif sequence[c] == "T":
cha += "A"
elif sequence[c] == "C":
cha += "G"
else:
cha += "C"
    return cha
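# e.g. complementary('ATGC') -> 'GCAT' (the reverse complement)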
from prettytable import PrettyTable, ORGMODE
from dataclasses import dataclass, asdict
from configparser import ConfigParser
from datetime import datetime
from pathlib import Path
from enum import Enum
import parsedatetime as pdt
import pandas as pd
import requests
import json
import sys
import re
import os
# needs ztask.ini in ~/.ztask
class TablePrinter:
def __init__(self):
# ;background
self.R = "\033[0;31;10m" # RED
self.G = "\033[0;32;10m" # GREEN
self.Y = "\033[0;33;10m" # Yellow
self.B = "\033[0;34;10m" # Blue
self.N = "\033[0m" # Reset
def colorize(self, df, key):
df[key] = self.R + df[key] + self.N
return df
def colorize_header(self, df):
df.columns = self.G + df.columns + self.N
return df
def print_table(self, df):
df = self.colorize_header(df)
x = PrettyTable()
x.set_style(ORGMODE)
x.field_names = list(df.keys())
x.add_rows(df.values.tolist())
print(x)
class ZohoClient:
def __init__(self, client_id, client_secret, refresh_token):
self.client_id = client_id
self.client_secret = client_secret
self.refresh_token = refresh_token
self.access_token = None
self.headers = None
self.get_access_token()
self.counter = 0
def get_access_token(self):
"""
gets/refresh access token
"""
account_url = 'https://accounts.zoho.eu'
url = f'{account_url}/oauth/v2/token?refresh_token={self.refresh_token}&client_id={self.client_id}' \
f'&client_secret={self.client_secret}&grant_type=refresh_token'
try:
self.access_token = self.post_request(url)["access_token"]
self.headers = {'Authorization': f'Zoho-oauthtoken {self.access_token}'}
except (Exception,):
print('\nGet access token failed')
print('ztask get_refresh_token grant_token')
def get_refresh_token(self, grant_token):
"""
Not really working but kind of useful for first set up, need to change the grant_token
make this from https://api-console.zoho.eu/client/{self.client_id}
scope
ZohoProjects.tasks.ALL,ZohoProjects.timesheets.ALL,ZohoProjects.projects.ALL,ZohoProjects.portals.READ,
ZohoProjects.bugs.ALL
copy the grant_token and run this function with it to get the refresh token
"""
url = f'https://accounts.zoho.eu/oauth/v2/token?code={grant_token}&client_id={self.client_id}' \
f'&client_secret={self.client_secret}&grant_type=authorization_code'
r = requests.post(url)
refresh_token = json.loads(r.text)['refresh_token']
print("refresh_token:", refresh_token)
return refresh_token
def request(self, fun, url, params=''):
"""
        Perform a request, refreshing the access token once if it has expired
"""
r = fun(url, params=params, headers=self.headers)
try:
r.raise_for_status()
except requests.exceptions.HTTPError as e:
print('HTTPError maybe oauth expired, will be renovated...')
print(r)
self.get_access_token()
if self.counter < 2:
self.counter += 1
r = fun(url, params=params, headers=self.headers)
else:
print("limit trials exceeded")
print('generate new refresh token using get_refresh_token grant_token')
return json.loads(r.text)
def post_request(self, url, params=''):
"""
post request even if the token is expired, maybe a decorator would be better...
or making a new token all the time... don't know
"""
return self.request(requests.post, url, params=params)
def get_request(self, url, params=''):
"""
Get request even if the token is expired
"""
return self.request(requests.get, url, params=params)
class DateParser:
def __init__(self):
pass
@staticmethod
def natural_date(string_date):
"""
from natural language to date
"""
cal, now = pdt.Calendar(), datetime.now()
return cal.parseDT(string_date, now)[0].strftime("%m-%d-%Y")
def date(self, string_date):
""" for printing dates from terminal """
print('mm-dd-yyyy:', self.natural_date(string_date))
@staticmethod
def parse_hours(string):
"""
This function parses different input hours,
such as 7-20 in 07:20 or 4 into 04:00
"""
reg = re.match("([0-9]{0,2})[:-]?([0-9]{0,2})", string)
hours, minutes = int(reg[1].strip() or 0), int(reg[2].strip() or 0)
return f'{hours:02d}:{minutes:02d}'
def time(self, string):
print(self.parse_hours(string))
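# e.g. DateParser().parse_hours('7-20') -> '07:20'; parse_hours('4') -> '04:00';
# natural_date('tomorrow') returns tomorrow's date as an 'mm-dd-yyyy' string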
class NoValue(Enum):
def __repr__(self):
return '<%s.%s>' % (self.__class__.__name__, self.name)
class TaskBugType(NoValue):
BUG = 'bugs'
TASK = 'tasks'
@dataclass
class TaskBug:
_id: str
task_name: str
project_id: int
project_name: str
completed: str
status: str
key: str
class ZohoManager(ZohoClient, DateParser):
def __init__(self, user_id, client_id, client_secret, refresh_token):
"""
would be good to use env variables here
"""
super().__init__(client_id, client_secret, refresh_token)
self.url_portal = f"https://projectsapi.zoho.eu/restapi/portal"
self.url_portals = self.url_portal + 's/'
self.user_id = user_id
self.portal_id = None
# self.get_user_data()
self.get_my_portal()
self.printer = TablePrinter()
def get_user_data(self):
"""
Can be obtained from zoho projects easily but would be better scripted
"""
pass
def get_my_portal(self):
"""
        There is only one portal in this case... could be hard-coded
"""
try:
portals = self.get_request(self.url_portals)
self.portal_id = portals['portals'][0]['id']
except (Exception,):
print('get_my_portal_failed')
def my_task_bugs_raw(self, tasks_bugs: TaskBugType): # tasks_bugs "tasks"|"bugs"
url = f"{self.url_portal}/{self.portal_id}/my{tasks_bugs.value}/"
return self.get_request(url)[tasks_bugs.value]
@property
def my_tasks_raw(self):
return self.my_task_bugs_raw(TaskBugType.TASK)
@property
def my_bugs_raw(self):
return self.my_task_bugs_raw(TaskBugType.BUG)
def make_task_bugs_list(self, tasks_bugs: TaskBugType):
"""
Fetches my task into a df
"""
tasks = []
for task in self.my_task_bugs_raw(tasks_bugs):
if tasks_bugs == TaskBugType.TASK:
info = (task['name'], task['project']['id'], task['project']['name'],
task['completed'], task['status']['name'])
elif tasks_bugs == TaskBugType.BUG:
info = (task['title'], task['project_id'], task['project_name'], task['closed'], task['status']['type'])
else:
                raise ValueError("tasks_bugs should be either task or bug")
task_dict = TaskBug(task['id'], *info, task['key'])
tasks.append(asdict(task_dict))
return tasks
def make_dataframe(self, task_bug_list):
"""Makes a df from task bug list"""
tasks = task_bug_list
tasks = pd.DataFrame(tasks).sort_values(by=['project_name'])
tasks = self.parse_task_type(tasks)
tasks = tasks[~tasks.status.isin(['Task Completed', 'Closed', 'Cancelled'])]
return tasks.reset_index(drop=True)
@staticmethod
def parse_task_type(df):
"""converts task key identifier in task or issue str"""
df['key'] = df['key'].str.extract(r'-([IT])[0-9]*$')
df['key'] = df['key'].replace("T", "Task")
df['key'] = df['key'].str.replace("I", "Bug")
return df
@property
def my_task(self):
return self.make_dataframe(self.make_task_bugs_list(TaskBugType.TASK))
@property
def my_bugs(self):
return self.make_dataframe(self.make_task_bugs_list(TaskBugType.BUG))
@property
def my_task_bugs(self):
task_bugs_list = self.make_task_bugs_list(TaskBugType.TASK) + self.make_task_bugs_list(TaskBugType.BUG)
return self.make_dataframe(task_bugs_list)
def get_task_from_simple_id(self, task_simple_id):
return self.my_task_bugs.loc[int(task_simple_id)]
def log(self, task_simple_id, date, hours):
"""
Main function to log task
"""
task = self.get_task_from_simple_id(task_simple_id)
task_id, project_id = task['_id'], task['project_id']
date = self.natural_date(date)
hours = self.parse_hours(hours)
tasks_bugs = task['key'].lower()+'s'
url = f"{self.url_portal}/{self.portal_id}/projects/{project_id}/{tasks_bugs}/{task_id}/logs/"
params = {"date": date, "bill_status": "Billable", "hours": hours, "owner": self.user_id}
r = self.post_request(url, params=params)
print(f"Task {task_simple_id}: {task['task_name']}, logged :)")
return r
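    # e.g. ztask.log('3', 'yesterday', '7-30') logs 07:30 billable hours on the task
    # listed at index 3; from the CLI this would be `ztask log 3 yesterday 7-30`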
def update(self, task_simple_id, params):
"""
Update task from params and task id
"""
task = self.get_task_from_simple_id(task_simple_id)
        url = f"{self.url_portal}/{self.portal_id}/projects/{task['project_id']}/tasks/{task['_id']}/"
print(url)
r = self.post_request(url, params=str(params))
if "error" not in dict(r):
print(f"Task {task_simple_id} {task['task_name']} Updated :)")
else:
print(r)
return r
def list(self):
"""
tasks list
"""
df = self.my_task_bugs[['project_name', 'task_name', 'status', 'key']].copy()
df['type'] = df['key'].str.replace("Bug", "Issue")
df['task name'] = df['task_name'].str.slice(0, 50)
df['project name'] = df['project_name'].str.slice(0, 18)
df['status'] = df['status'].str.slice(0, 15)
df['index'] = df.index.values
self.printer.print_table(df[['index', 'project name', 'task name', 'status', 'type']])
def example(self):
df = pd.DataFrame({'index': [1, 2, 3, 4],
'project name': ['Project - ONE', 'Project - ONE', 'Project - ONE', 'Another - project'],
'task name': ['Cool task ', 'Boring task 2', 'Another task with a long name',
'One more task'],
'status': ['Not Started', 'BackLog', 'To be tested', 'In Dev'],
'type': ['Bug', 'Task', 'Task', 'Task'],
})
self.printer.print_table(df[['index', 'project name', 'task name', 'status', 'type']])
def long(self):
"""gets all my task, even closed ones"""
tasks = self.my_task_bugs
tasks['index'] = tasks.index.values
tasks['type'] = tasks['key'].str.replace("Bug", "Issue")
self.printer.print_table(tasks[['index', 'project_name', 'task_name', 'status', 'type']])
return tasks
def __str__(self):
""" prints task"""
self.list()
return ""
class ConfigCreator:
def __init__(self):
self.var_directory = os.path.join(str(Path.home()), '.ztask')
self.config_path = os.path.join(self.var_directory, 'ztask.ini')
if not Path(self.config_path).exists():
self.create_config_file_if_not_exist()
else:
parser = ConfigParser()
parser.read(self.config_path)
var = parser['variables']
self.client_id, self.client_secret = var['client_id'], var['client_secret']
self.refresh_token, self.user_id = var['refresh_token'], var['user_id']
def create_config_file_if_not_exist(self):
print('As it is your first time here we need to create your api connection credentials.')
Path(self.var_directory).mkdir(parents=True, exist_ok=True)
config = ConfigParser()
self.client_id = input('Introduce here your client id from https://api-console.zoho.eu create self client')
self.client_secret = input('Introduce here your client_secret from the same place')
grant_token = input('Introduce here your grant token, you will need to copy and paste the scope\n'
'ZohoProjects.tasks.ALL,ZohoProjects.timesheets.ALL,ZohoProjects.projects.ALL,'
'ZohoProjects.portals.READ,ZohoProjects.bugs.ALL')
self.refresh_token = ZohoClient(self.client_id, self.client_secret, '').get_refresh_token(grant_token)
self.user_id = input('Introduce here your user_id from zoho projects right corner clicking on user')
config['variables'] = {'client_id': self.client_id, 'client_secret': self.client_secret,
'refresh_token': self.refresh_token, 'user_id': self.user_id}
with open(Path(self.config_path), 'w') as configfile:
config.write(configfile)
print(f'Configuration file created at {self.config_path}')
def main():
cc = ConfigCreator()
ztask = ZohoManager(cc.user_id, cc.client_id, cc.client_secret, cc.refresh_token)
if len(sys.argv) == 1:
ztask.list()
else:
getattr(ztask, sys.argv[1])(*sys.argv[2:])
if __name__ == '__main__':
    main()
# My Custom Debugger
## Features
- [ ] Setup email address once
- [x] Job fail email alert
- [x] Easy-to-use functions
## Install
```bash
pip install ztdebugger
```
## Simple Usage
```python
from ztdebugger import ic
def foo(i):
return i + 333
ic(foo(123))
```
Prints
```bash
ic| foo(123): 456
```
```python
d = {'key': {1: 'one'}}
ic(d['key'][1])
class klass():
attr = 'yep'
ic(klass.attr)
```
Prints
```bash
ic| d['key'][1]: 'one'
ic| klass.attr: 'yep'
```
## Find where you are
```python
from ztdebugger import ic
def hello():
ic()
if 1:
ic()
else:
ic()
hello()
```
Prints
```bash
ic| tmp.py:5 in hello() at 20:59:30.457
ic| tmp.py:7 in hello() at 20:59:30.458
```
## Checkpoint
Add this inside your code anywhere you like
```python
ic.d()
```
## Decorator
```python
@ic.snoop()
def hello():
s = "123"
return s
print(hello())
```
Prints
```bash
Source path:... /Users/admin/Downloads/tmp.py
20:56:24.294518 call 5 def hello():
20:56:24.294597 line 6 s = "123"
New var:....... s = '123'
20:56:24.294608 line 7 return s
20:56:24.294623 return 7 return s
Return value:.. '123'
Elapsed time: 00:00:00.000135
123
```
## Colorful Debugger
Works the same as [rich.traceback.install()](https://rich.readthedocs.io/en/stable/traceback.html#automatic-traceback-handler)
```python
from ztdebugger import ic
# By default
ic.init()
# Add which email you want to send your exceptions to, only support qq sender now
# Send to yourself by default
ic.init(sender='[email protected]', key='abcdefg')
# Or you can assign receiver to any email address you like
ic.init(sender='[email protected]', receiver='[email protected]', key='abcdefg')
if __name__ == '__main__':
main()
```
import io
import re
from datetime import datetime
from urllib.parse import urlparse
import boto3
import pandas as pd
from retrying import retry
import pythena.Utils as Utils
import pythena.Exceptions as Exceptions
from botocore.errorfactory import ClientError
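# Example usage (a minimal sketch; the database and query names are hypothetical):
#   athena = Athena(database="mydb", region="us-east-1")
#   df, execution_id = athena.execute("SELECT * FROM mytable LIMIT 10")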
class Athena:
__database = ''
__region = ''
    __session = None
__athena = None
__s3 = None
__glue = None
    __s3_path_regex = r'^s3://[a-zA-Z0-9.\-_/]*$'
def __init__(self, database, region='us-east-1', session=None):
self.__database = database
self.__region = region
if region is None:
region = boto3.session.Session().region_name
if region is None:
raise Exceptions.NoRegionFoundError("No default aws region configuration found. Must specify a region.")
self.__session = session
if session:
self.__athena = session.client('athena', region_name=region)
self.__s3 = session.client('s3', region_name=region)
self.__glue = session.client('glue', region_name=region)
else:
self.__athena = boto3.client('athena', region_name=region)
self.__s3 = boto3.client('s3', region_name=region)
self.__glue = boto3.client('glue', region_name=region)
if database not in Utils.get_databases(region):
raise Exceptions.DatabaseNotFound("Database " + database + " not found.")
def get_tables(self):
result = self.__glue.get_tables(DatabaseName=self.__database)
tables = []
for item in result["TableList"]:
tables.append(item["Name"])
return tables
def print_tables(self):
Utils.print_list(self.get_tables())
def execute(self, query, s3_output_url=None, save_results=False, run_async=False, dtype=None, return_results=True):
'''
Execute a query on Athena
        -- If run_async is false and return_results is true, returns a dataframe and the query id; otherwise returns just the query id
        -- Data is deleted unless save_results is true, to keep the s3 bucket clean
        -- Uses the default s3 output url unless otherwise specified
'''
if s3_output_url is None:
s3_output_url = self.__get_default_s3_url()
else:
save_results = True
return self.__execute_query(database=self.__database,
query=query,
s3_output_url=s3_output_url,
save_results=save_results,
run_async=run_async,
return_results=return_results,
dtype=dtype)
def __execute_query(self, database, query, s3_output_url,
return_results=True, save_results=True, run_async=False, dtype=None):
s3_bucket, s3_path = self.__parse_s3_path(s3_output_url)
response = self.__athena.start_query_execution(
QueryString=query,
QueryExecutionContext={
'Database': database
},
ResultConfiguration={
'OutputLocation': 's3://' + s3_bucket + "/" + s3_path,
})
query_execution_id = response['QueryExecutionId']
# If executing asynchronously, just return the id so results can be fetched later. Else, return dataframe (or error message)
if run_async or not return_results:
return query_execution_id
else:
status = self.__poll_status(query_execution_id)
            df = self.get_result(query_execution_id, save_results=save_results, dtype=dtype)
return df, query_execution_id
def get_result(self, query_execution_id, save_results=False, dtype=None):
'''
        Given an execution id, returns the result as a pandas df if successful; raises otherwise.
        -- Data is deleted unless save_results is true
'''
# Get execution status and save path, which we can then split into bucket and key. Automatically handles csv/txt
res = self.__athena.get_query_execution(QueryExecutionId = query_execution_id)
s3_bucket, s3_key = self.__parse_s3_path(res['QueryExecution']['ResultConfiguration']['OutputLocation'])
# If succeed, return df
if res['QueryExecution']['Status']['State'] == 'SUCCEEDED':
obj = self.__s3.get_object(Bucket=s3_bucket, Key=s3_key)
df = pd.read_csv(io.BytesIO(obj['Body'].read()), dtype=dtype)
# Remove results from s3
if not save_results:
self.__s3.delete_object(Bucket=s3_bucket, Key=s3_key)
self.__s3.delete_object(Bucket=s3_bucket, Key=s3_key + '.metadata')
return df
# If failed, return error message
elif res['QueryExecution']['Status']['State'] == 'FAILED':
raise Exceptions.QueryExecutionFailedException("Query failed with response: %s" % (self.get_query_error(query_execution_id)))
elif res['QueryExecution']['Status']['State'] == 'RUNNING':
raise Exceptions.QueryStillRunningException("Query has not finished executing.")
else:
raise Exceptions.QueryUnknownStatusException("Query is in an unknown status. Check athena logs for more info.")
@retry(stop_max_attempt_number=10,
wait_exponential_multiplier=300,
wait_exponential_max=60 * 1000)
def __poll_status(self, query_execution_id):
status = self.get_query_status(query_execution_id)
if status in ['SUCCEEDED', 'FAILED']:
return status
else:
raise Exceptions.QueryExecutionTimeoutException("Query to athena has timed out. Try running the query in the athena or asynchronously")
# This returns the same bucket and key the AWS Athena console would use for its queries
def __get_default_s3_url(self):
if self.__session:
account_id = self.__session.client('sts').get_caller_identity().get('Account')
else:
account_id = boto3.client('sts').get_caller_identity().get('Account')
return 's3://aws-athena-query-results-' + account_id + '-' + self.__region + "/Unsaved/" + datetime.now().strftime("%Y/%m/%d")
def __parse_s3_path(self, s3_path):
if not re.compile(self.__s3_path_regex).match(s3_path):
raise Exceptions.InvalidS3PathException("s3 Path must follow format: " + self.__s3_path_regex)
url = urlparse(s3_path)
bucket = url.netloc
path = url.path.lstrip('/')
return bucket, path
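    # e.g. __parse_s3_path('s3://my-bucket/results/') -> ('my-bucket', 'results/')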
# A few functions to surface boto client functions to pythena: get status, get query error, and cancel a query
def get_query_status(self, query_execution_id):
res = self.__athena.get_query_execution(QueryExecutionId=query_execution_id)
return res['QueryExecution']['Status']['State']
def get_query_error(self, query_execution_id):
res = self.__athena.get_query_execution(QueryExecutionId=query_execution_id)
if res['QueryExecution']['Status']['State']=='FAILED':
return res['QueryExecution']['Status']['StateChangeReason']
else:
return "Query has not failed: check status or see Athena log for more details"
def cancel_query(self, query_execution_id):
self.__athena.stop_query_execution(QueryExecutionId=query_execution_id)
        return
# ztext
This project is designed to make NLP analysis easy; even if you don't have any NLP background, you can still use it for text insights.
Functions:
1. Text clean.
2. Topic analysis
3. SVO (Subject Verb and Object extraction)
4. NER (Entity extraction)
5. Topic and SVO visualization (for now, visualization is only supported in Jupyter notebooks and Colab)

## install
In python3.6 or later environment
`pip install ztext`
In IPython, Jupyter notebook or Colab
`!pip install ztext`
from source:
`pip3 install git+https://github.com/ZackAnalysis/ztext.git`
## Quick Start
Start a Jupyter notebook locally or a Colab notebook ([https://colab.research.google.com/](https://colab.research.google.com/))
### find a demo at
[https://colab.research.google.com/drive/1W2mD6QHOGdVEfGShOR_tBnYHxz_D5ore?usp=sharing](https://colab.research.google.com/drive/1W2mD6QHOGdVEfGShOR_tBnYHxz_D5ore?usp=sharing)
install package:
`!pip install ztext`
`import ztext`
### Load data
From sample data:
`df = ztext.sampledata()`
`zt = ztext.Ztext(df=df, textCol='content', nTopics=5, custom_stopwords=['sell','home'], samplesize=200)`
From a file:
`!wget https://github.com/ZackAnalysis/ztext/blob/master/ztext/sampleData.xlsx?raw=true`
`filename = "sampleData.xlsx"`
`zt = ztext.Ztext()`
`zt.loadfile(filename, textCol='content')`
`zt.nTopics = 6`
`zt.custom_stopwords = ['text','not','emotion']`
From a pandas dataframe:
`zt.loaddf(df)`
### Functions
#### Sentiment analysis:
`zt.sentiment()`
#### Topic analysis:
`zt.get_topics()`
#### SVO and NER
`zt.getSVO('topic2')`
#### Visualization
`zt.getldaVis()`
`zt.getSVOvis('topic2',options="any")`
#### Save output
`zt.df.to_excel('filename.xlsx')`
# ZTF_Auth
## Setup for handling ZTF authentication
In order to use this, we keep a json file with the credentials in a directory on the client computer in a `ztfdir` directory. `ztfdir` is `${HOME}/.ztf`, but this can be changed.
## Installation
```
python setup.py install
```
### Example code
```
# After setting up a JSON file eg. at `${HOME}/.ztf/ztffps_auth.json`:
# {"username": "myusername", "password": "mypasswd!", "email": "my_email@myprovider"}
from ztf_auth import get_ztf_auth
auth_dict = get_ztf_auth()
```
import time, os, warnings, typing
import astropy
from astropy.time import Time
from astropy import units as u
from astropy.coordinates import SkyCoord, AltAz
import re
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import astroplan as ap
from astroplan import Observer, is_observable
from astroplan.plots import plot_finder_image
from datetime import datetime
from astroplan.plots import plot_airmass, plot_altitude
from ztfquery import fields, query
from shapely.geometry import Polygon
from ztf_plan_obs import gcn_parser, utils
icecube = ["IceCube", "IC", "icecube", "ICECUBE", "Icecube"]
ztf = ["ZTF", "ztf"]
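# Example usage (a minimal sketch; the alert name below is illustrative):
#   plan = PlanObservation(name="IC200620A", alertsource="IceCube")
#   plan.plot_target()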
class PlanObservation:
"""
Class for planning observations
"""
def __init__(
self,
name: str,
ra: float = None,
dec: float = None,
arrivaltime: str = None,
date: str = None,
max_airmass=2.0,
observationlength: float = 300,
bands: list = ["g", "r"],
multiday: bool = False,
alertsource: str = None,
site: str = "Palomar",
verbose: bool = True,
**kwargs,
) -> None:
self.name = name
self.arrivaltime = arrivaltime
self.alertsource = alertsource
self.site = site
self.max_airmass = max_airmass
self.observationlength = observationlength
self.bands = bands
self.multiday = multiday
self.ra_err = None
self.dec_err = None
self.warning = None
self.observable = True
self.rejection_reason = None
self.datasource = None
self.found_in_archive = False
self.search_full_archive = False
self.coverage = None
self.recommended_field = None
self.summary = {}
if ra is None and self.alertsource in icecube:
if verbose:
print("Parsing an IceCube alert")
# Check if request is archival:
archive, latest_archive_no = gcn_parser.get_gcn_circulars_archive()
# Check if the alert is younger than latest archive entry
archival_names = [entry[0] for entry in archive]
archival_dates = [int(entry[2:-1]) for entry in archival_names]
latest_archival = max(archival_dates)
this_alert_date = int(self.name[2:-1])
if this_alert_date > latest_archival:
if verbose:
print(
"Alert too new, no GCN circular available yet. Using latest GCN notice"
)
else:
if verbose:
print("Alert info should be in GCN circular archive")
self.search_full_archive = True
if self.search_full_archive:
self.search_match_in_archive(archive)
# Well, if it's not in the latest archive, use the full
# backwards search
while self.found_in_archive is False:
archive, _ = gcn_parser.get_gcn_circulars_archive(latest_archive_no)
self.search_match_in_archive(archive)
latest_archive_no -= 1
if self.found_in_archive:
gcn_info = gcn_parser.parse_gcn_circular(self.gcn_nr)
self.ra = gcn_info["ra"]
self.ra_err = gcn_info["ra_err"]
self.dec = gcn_info["dec"]
self.dec_err = gcn_info["dec_err"]
self.arrivaltime = gcn_info["time"]
else:
if verbose:
print("No archival GCN circular found. Using newest notice!")
(
ra_notice,
dec_notice,
self.arrivaltime,
revision,
) = gcn_parser.parse_latest_gcn_notice()
gcn_nr_latest = archive[0][1]
gcn_info = gcn_parser.parse_gcn_circular(gcn_nr_latest)
ra_circ = gcn_info["ra"]
ra_err_circ = gcn_info["ra_err"]
dec_circ = gcn_info["dec"]
dec_err_circ = gcn_info["dec_err"]
coords_notice = SkyCoord(
ra_notice * u.deg, dec_notice * u.deg, frame="icrs"
)
coords_circular = SkyCoord(
ra_circ * u.deg, dec_circ * u.deg, frame="icrs"
)
separation = coords_notice.separation(coords_circular).deg
if separation < 1:
self.ra = ra_circ
self.dec = dec_circ
self.ra_err = ra_err_circ
self.dec_err = dec_err_circ
self.datasource = f"GCN Circular {gcn_nr_latest}\n"
else:
self.ra = ra_notice
self.dec = dec_notice
self.datasource = f"GCN Notice (Rev. {revision})\n"
elif ra is None and self.alertsource in ztf:
if utils.is_ztf_name(name):
print(f"{name} is a ZTF name. Looking in Fritz database for ra/dec")
from ztf_plan_obs.fritzconnector import FritzInfo
fritz = FritzInfo([name])
self.ra = fritz.queryresult["ra"]
self.dec = fritz.queryresult["dec"]
self.datasource = "Fritz\n"
if np.isnan(self.ra):
raise ValueError("Object apparently not found on Fritz")
print("\nFound ZTF object information on Fritz")
elif ra is None:
raise ValueError("Please enter ra and dec")
else:
self.ra = ra
self.dec = dec
self.coordinates = SkyCoord(self.ra * u.deg, self.dec * u.deg, frame="icrs")
self.coordinates_galactic = self.coordinates.galactic
self.target = ap.FixedTarget(name=self.name, coord=self.coordinates)
self.site = Observer.at_site(self.site, timezone="US/Pacific")
self.now = Time(datetime.utcnow())
self.date = date
if self.date is not None:
self.start_obswindow = Time(self.date + " 00:00:00.000000")
else:
self.start_obswindow = Time(self.now, format="iso")
self.end_obswindow = Time(self.start_obswindow.mjd + 1, format="mjd").iso
constraints = [
ap.AltitudeConstraint(20 * u.deg, 90 * u.deg),
ap.AirmassConstraint(max_airmass),
ap.AtNightConstraint.twilight_astronomical(),
]
# Obtain moon coordinates at Palomar for the full time window (default: 24 hours from running the script)
times = Time(self.start_obswindow + np.linspace(0, 24, 1000) * u.hour)
moon_times = Time(self.start_obswindow + np.linspace(0, 24, 50) * u.hour)
moon_coords = []
for time in moon_times:
moon_coord = astropy.coordinates.get_moon(
time=time, location=self.site.location
)
moon_coords.append(moon_coord)
self.moon = moon_coords
airmass = self.site.altaz(times, self.target).secz
airmass = np.ma.array(airmass, mask=airmass < 1)
airmass = airmass.filled(fill_value=99)
airmass = [x.value for x in airmass]
self.twilight_evening = self.site.twilight_evening_astronomical(
Time(self.start_obswindow), which="next"
)
self.twilight_morning = self.site.twilight_morning_astronomical(
Time(self.start_obswindow), which="next"
)
"""
Check if if we are before morning or before evening
in_night = True means it's currently dark at the site
and morning comes before evening.
"""
if self.twilight_evening - self.twilight_morning > 0:
self.in_night = True
else:
self.in_night = False
indices_included = []
airmasses_included = []
times_included = []
for index, t_mjd in enumerate(times.mjd):
if self.in_night:
if (
(t_mjd < self.twilight_morning.mjd - 0.03)
or (t_mjd > self.twilight_evening.mjd + 0.03)
) and airmass[index] < 2.0:
indices_included.append(index)
airmasses_included.append(airmass[index])
times_included.append(times[index])
else:
if (
(t_mjd > self.twilight_evening.mjd + 0.01)
and (t_mjd < self.twilight_morning.mjd - 0.01)
) and airmass[index] < 2.0:
indices_included.append(index)
airmasses_included.append(airmass[index])
times_included.append(times[index])
if len(airmasses_included) == 0:
self.observable = False
self.rejection_reason = "airmass"
if np.abs(self.coordinates_galactic.b.deg) < 10:
self.observable = False
self.rejection_reason = "proximity to gal. plane"
self.g_band_recommended_time_start = None
self.g_band_recommended_time_end = None
self.r_band_recommended_time_start = None
self.r_band_recommended_time_end = None
if self.observable:
min_airmass = np.min(airmasses_included)
min_airmass_index = np.argmin(airmasses_included)
min_airmass_time = times_included[min_airmass_index]
distance_to_evening = min_airmass_time.mjd - self.twilight_evening.mjd
distance_to_morning = self.twilight_morning.mjd - min_airmass_time.mjd
if distance_to_morning < distance_to_evening:
if "g" in self.bands:
self.g_band_recommended_time_start = utils.round_time(
min_airmass_time - self.observationlength * u.s - 0.5 * u.hour
)
self.g_band_recommended_time_end = (
self.g_band_recommended_time_start
+ self.observationlength * u.s
)
if "r" in self.bands:
self.r_band_recommended_time_start = utils.round_time(
min_airmass_time - self.observationlength * u.s
)
self.r_band_recommended_time_end = (
self.r_band_recommended_time_start
+ self.observationlength * u.s
)
else:
if "g" in self.bands:
self.g_band_recommended_time_start = utils.round_time(
min_airmass_time + self.observationlength * u.s + 0.5 * u.hour
)
self.g_band_recommended_time_end = (
self.g_band_recommended_time_start
+ self.observationlength * u.s
)
if "r" in self.bands:
self.r_band_recommended_time_start = utils.round_time(
min_airmass_time + self.observationlength * u.s
)
self.r_band_recommended_time_end = (
self.r_band_recommended_time_start
+ self.observationlength * u.s
)
if self.alertsource in icecube:
summarytext = f"Name = IceCube-{self.name[2:]}\n"
else:
summarytext = f"Name = {self.name}\n"
if self.ra_err:
if self.ra_err[0]:
summarytext += f"RA = {self.coordinates.ra.deg} + {self.ra_err[0]} - {self.ra_err[1]*-1}\nDec = {self.coordinates.dec.deg} + {self.dec_err[0]} - {self.dec_err[1]*-1}\n"
else:
summarytext += f"RADEC = {self.coordinates.ra.deg:.8f} {self.coordinates.dec.deg:.8f}\n"
if self.datasource is not None:
summarytext += f"Data source: {self.datasource}"
if self.observable:
summarytext += (
f"Minimal airmass ({min_airmass:.2f}) at {min_airmass_time}\n"
)
summarytext += f"Separation from galactic plane: {self.coordinates_galactic.b.deg:.2f} deg\n"
if self.site.name != "Palomar":
summarytext += f"Site: {self.site.name}"
if self.site.name == "Palomar":
if self.observable and not self.multiday:
summarytext += "Recommended observation times:\n"
if "g" in self.bands:
gbandtext = f"g-band: {utils.short_time(self.g_band_recommended_time_start)} - {utils.short_time(self.g_band_recommended_time_end)} [UTC]"
if "r" in self.bands:
rbandtext = f"r-band: {utils.short_time(self.r_band_recommended_time_start)} - {utils.short_time(self.r_band_recommended_time_end)} [UTC]"
if (
"g" in bands
and "r" in bands
and self.g_band_recommended_time_start
< self.r_band_recommended_time_start
):
bandtexts = [gbandtext + "\n", rbandtext]
elif (
"g" in bands
and "r" in bands
and self.g_band_recommended_time_start
> self.r_band_recommended_time_start
):
bandtexts = [rbandtext + "\n", gbandtext]
elif "g" in bands and "r" not in bands:
bandtexts = [gbandtext]
else:
bandtexts = [rbandtext]
for item in bandtexts:
summarytext += item
if verbose:
print(summarytext)
if not os.path.exists(self.name):
os.makedirs(self.name)
self.summarytext = summarytext
def plot_target(self):
"""
Plot the observation window, including moon, altitude
constraint and target on sky
"""
now_mjd = Time(self.now, format="iso").mjd
if self.date is not None:
_date = self.date + " 12:00:00.000000"
time_center = _date
else:
time_center = Time(now_mjd + 0.45, format="mjd").iso
ax = plot_altitude(
self.target,
self.site,
time_center,
min_altitude=10,
)
if self.in_night:
ax.axvspan(
(self.now - 0.05).plot_date,
self.twilight_morning.plot_date,
alpha=0.2,
color="gray",
)
ax.axvspan(
self.twilight_evening.plot_date,
(self.now + 0.95).plot_date,
alpha=0.2,
color="gray",
)
duration1 = (self.twilight_morning - (self.now - 0.05)) / 2
duration2 = (self.twilight_evening - (self.now + 0.95)) / 2
nightmarker1 = (self.twilight_morning - duration1).plot_date
nightmarker2 = (self.twilight_evening - duration2).plot_date
ax.annotate(
"Night",
xy=[nightmarker1, 85],
color="dimgray",
ha="center",
fontsize=12,
)
ax.annotate(
"Night",
xy=[nightmarker2, 85],
color="dimgray",
ha="center",
fontsize=12,
)
else:
ax.axvspan(
self.twilight_evening.plot_date,
self.twilight_morning.plot_date,
alpha=0.2,
color="gray",
)
midnight = min(self.twilight_evening, self.twilight_morning) + 0.5 * (
max(self.twilight_evening, self.twilight_morning)
- min(self.twilight_evening, self.twilight_morning)
)
ax.annotate(
"Night",
xy=[midnight.plot_date, 85],
color="dimgray",
ha="center",
fontsize=12,
)
# Plot a vertical line for the current time
ax.axvline(Time(self.now).plot_date, color="black", label="now", ls="dotted")
# Plot a vertical line for the neutrino arrival time if available
if self.arrivaltime is not None:
ax.axvline(
Time(self.arrivaltime).plot_date,
color="indigo",
label="neutrino arrival",
ls="dashed",
)
start, end = ax.get_xlim()
plt.text(
start,
100,
self.summarytext,
fontsize=8,
)
# if self.date is not None:
# ax.set_xlabel(f"{self.date} [UTC]")
# else:
# ax.set_xlabel(f"{self.now.datetime.date()} [UTC]")
plt.grid(True, color="gray", linestyle="dotted", which="both", alpha=0.5)
if self.site.name == "Palomar":
if self.observable:
if "g" in self.bands:
ax.axvspan(
self.g_band_recommended_time_start.plot_date,
self.g_band_recommended_time_end.plot_date,
alpha=0.5,
color="green",
)
if "r" in self.bands:
ax.axvspan(
self.r_band_recommended_time_start.plot_date,
self.r_band_recommended_time_end.plot_date,
alpha=0.5,
color="red",
)
# Now we plot the moon altitudes and separation
moon_altitudes = []
moon_times = []
moon_separations = []
for moon in self.moon:
moonalt = moon.transform_to(
AltAz(obstime=moon.obstime, location=self.site.location)
).alt.deg
moon_altitudes.append(moonalt)
moon_times.append(moon.obstime.plot_date)
separation = moon.separation(self.coordinates).deg
moon_separations.append(separation)
ax.plot(
moon_times,
moon_altitudes,
color="orange",
linestyle=(0, (1, 2)),
label="moon",
)
# And we annotate the separations
for i, moonalt in enumerate(moon_altitudes):
if moonalt > 20 and i % 3 == 0:
if moon_separations[i] < 20:
color = "red"
else:
color = "green"
ax.annotate(
f"{moon_separations[i]:.0f}",
xy=(moon_times[i], moonalt),
textcoords="data",
fontsize=6,
color=color,
)
x = np.linspace(start + 0.03, end + 0.03, 9)
# Add recommended upper limit for airmass
        y = np.ones(len(x)) * 30
ax.errorbar(x, y, 2, color="red", lolims=True, fmt=" ")
# Plot an airmass scale
ax2 = ax.secondary_yaxis(
"right", functions=(self.altitude_to_airmass, self.airmass_to_altitude)
)
altitude_ticks = np.linspace(10, 90, 9)
airmass_ticks = np.round(self.altitude_to_airmass(altitude_ticks), 2)
ax2.set_yticks(airmass_ticks)
ax2.set_ylabel("Airmass")
if self.observable:
plt.legend()
if self.observable is False:
plt.text(
0.5,
0.5,
f"NOT OBSERVABLE\ndue to {self.rejection_reason}",
size=20,
rotation=30.0,
ha="center",
va="center",
bbox=dict(
boxstyle="round",
ec=(1.0, 0.5, 0.5),
fc=(1.0, 0.8, 0.8),
),
transform=ax.transAxes,
)
plt.tight_layout()
if self.site.name == "Palomar":
outpath_png = os.path.join(self.name, f"{self.name}_airmass.png")
outpath_pdf = os.path.join(self.name, f"{self.name}_airmass.pdf")
else:
outpath_png = os.path.join(
self.name, f"{self.name}_airmass_{self.site.name}.png"
)
outpath_pdf = os.path.join(
self.name, f"{self.name}_airmass_{self.site.name}.pdf"
)
plt.savefig(outpath_png, dpi=300, bbox_inches="tight")
plt.savefig(outpath_pdf, bbox_inches="tight")
return ax
def search_match_in_archive(self, archive) -> None:
""" """
for archival_name, archival_number in archive:
if self.name == archival_name:
self.gcn_nr = archival_number
self.found_in_archive = True
self.datasource = f"GCN Circular {self.gcn_nr}\n"
print("Archival data found, using these.")
def request_ztf_fields(self, plot=True) -> list:
"""
This looks at yupana.caltech.edu for the fields matching
your location and downloads the camera grid plots for these
"""
# URL = "http://yupana.caltech.edu/cgi-bin/ptf/tb//zoc"
# image_url = "http://yupana.caltech.edu/marshals/tb//igmo_0_"
# image_urls = [image_url + f"{x}.png" for x in [0, 1, 2, 3]]
objra = self.ra
objdec = self.dec
radius = 0
fieldids = list(fields.get_fields_containing_target(ra=self.ra, dec=self.dec))
fieldids_ref = []
zq = query.ZTFQuery()
querystring = f"field={fieldids[0]}"
if len(fieldids) > 1:
for f in fieldids[1:]:
querystring += f" OR field={f}"
print(
f"Checking IPAC if references are available in g- and r-band for fields {fieldids}"
)
zq.load_metadata(kind="ref", sql_query=querystring)
mt = zq.metatable
for f in mt.field.unique():
d = {k: k in mt["filtercode"].values for k in ["zg", "zr", "zi"]}
if d["zg"] == True and d["zr"] == True:
fieldids_ref.append(int(f))
print(f"Fields that contain target: {fieldids}")
print(f"Of these have a reference: {fieldids_ref}")
self.fieldids_ref = fieldids_ref
if plot:
self.plot_fields()
return fieldids_ref
def plot_fields(self):
"""
Plot the ZTF field(s) with the target
"""
ccds = fields._CCD_COORDS
coverage = {}
for f in self.fieldids_ref:
centroid = fields.get_field_centroid(f)
fig, ax = plt.subplots(dpi=300)
ax.set_aspect("equal")
ccd_polygons = []
covered_area = 0
for c in ccds.CCD.unique():
ccd = ccds[ccds.CCD == c][["EW", "NS"]].values
ccd_draw = Polygon(ccd + centroid)
ccd_polygons.append(ccd_draw)
x, y = ccd_draw.exterior.xy
ax.plot(x, y, color="black")
if self.ra_err:
# Create errorbox
ul = [self.ra + self.ra_err[1], self.dec + self.dec_err[0]]
ur = [self.ra + self.ra_err[0], self.dec + self.dec_err[1]]
ll = [self.ra + self.ra_err[1], self.dec + self.dec_err[1]]
lr = [self.ra + self.ra_err[0], self.dec + self.dec_err[0]]
errorbox = Polygon([ul, ll, ur, lr])
x, y = errorbox.exterior.xy
ax.plot(x, y, color="red")
for ccd in ccd_polygons:
covered_area += errorbox.intersection(ccd).area
cov = covered_area / errorbox.area * 100
coverage.update({f: cov})
ax.scatter([self.ra], [self.dec], color="red")
ax.set_xlabel("RA")
ax.set_ylabel("Dec")
if self.ra_err:
ax.set_title(f"Field {f} (Coverage: {cov:.2f}%)")
else:
ax.set_title(f"Field {f}")
plt.tight_layout()
outpath_png = os.path.join(self.name, f"{self.name}_grid_{f}.png")
fig.savefig(outpath_png, dpi=300)
plt.close()
self.coverage = coverage
if len(self.coverage) > 0:
max_coverage_field = max(coverage, key=coverage.get)
self.recommended_field = max_coverage_field
else:
self.recommended_field = None
def plot_finding_chart(self):
""" """
ax, hdu = plot_finder_image(
self.target,
fov_radius=2 * u.arcmin,
survey="DSS2 Blue",
grid=True,
reticle=False,
)
outpath_png = os.path.join(self.name, f"{self.name}_finding_chart.png")
plt.savefig(outpath_png, dpi=300)
plt.close()
def get_summary(self):
return self.summarytext
    @staticmethod
    def airmass_to_altitude(airmass):
        """Convert airmass to altitude (deg) via the plane-parallel sec(z) relation"""
        with warnings.catch_warnings():
            warnings.simplefilter("ignore")
            altitude = 90 - np.degrees(np.arccos(1 / airmass))
        return altitude

    @staticmethod
    def altitude_to_airmass(altitude):
        """Convert altitude (deg) to airmass via the plane-parallel sec(z) relation"""
        with warnings.catch_warnings():
            warnings.simplefilter("ignore")
            airmass = 1.0 / np.cos(np.radians(90 - altitude))
        return airmass
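
    # Illustrative check (a sketch, not part of the original module): the two
    # static helpers above are inverses under the plane-parallel sec(z)
    # relation, e.g. altitude_to_airmass(30.0) gives ~2.0 (= 1/cos(60 deg))
    # and airmass_to_altitude(2.0) returns ~30.0 degrees.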
class ParsingError(Exception):
"""Base class for parsing error"""
pass
class AirmassError(Exception):
"""Base class for parsing error"""
    pass

--- end of file: /ztf_plan_obs-0.34-py3-none-any.whl/ztf_plan_obs/plan.py (package: ztf-plan-obs) ---
import os, time
from typing import Union
from penquins import Kowalski
class APIError(Exception):
pass
class Queue:
"""
Submit observation triggers to Kowalski, query the queue and delete observation triggers
"""
def __init__(
self,
user: str,
) -> None:
self.user = user
self.protocol: str = "https"
self.host: str = os.environ.get("KOWALSKI_HOST", default="localhost")
self.port: int = 443
self.api_token: str = os.environ.get("KOWALSKI_API_TOKEN")
self.queue: dict = {}
if self.api_token is None:
err = (
"No kowalski API token found. Set the environment variable with \n"
"export KOWALSKI_API_TOKEN=api_token"
)
raise APIError(err)
self.kowalski = Kowalski(
token=self.api_token, protocol=self.protocol, host=self.host, port=self.port
)
if not self.kowalski.ping():
err = f"Ping of Kowalski with specified token failed. Are you sure this token is correct? Provided token: {self.api_token}"
raise APIError(err)
def get_all_queues(self, names_only: bool = False) -> Union[list, dict]:
"""
Get all the queues
"""
res = self.kowalski.api("get", "/api/triggers/ztf")
if res["status"] != "success":
err = f"API call failed with status '{res['status']}'' and message '{res['message']}''"
raise APIError(err)
if names_only:
res = [x["queue_name"] for x in res["data"]]
return res
def get_too_queues(self, names_only: bool = False) -> Union[list, dict]:
"""
Get all the queues and return ToO triggers only
"""
res = self.get_all_queues()
res["data"] = [x for x in res["data"] if x["is_TOO"]]
if names_only:
res = [x["queue_name"] for x in res["data"]]
return res
def add_trigger_to_queue(
self,
trigger_name: str,
validity_window_start_mjd: float,
field_id: list,
filter_id: list,
request_id: int = 1,
subprogram_name: str = "ToO_Neutrino",
        exposure_time: list = [30],
validity_window_end_mjd: float = None,
program_id: int = 2,
program_pi: str = "Kulkarni",
) -> None:
"""
Add one trigger (requesting a single observation)
to the queue (containing all the triggers that will be
        submitted)
"""
if trigger_name[:4] != "ToO_":
raise ValueError(
f"Trigger names must begin with 'ToO_', but you entered '{trigger_name}'"
)
        if validity_window_end_mjd is None:
            # exposure_time is a list of exposures in seconds; infer the end of
            # the validity window from their total duration
            validity_window_end_mjd = (
                validity_window_start_mjd + sum(exposure_time) / 86400
            )
targets = [
{
"request_id": request_id,
"field_id": field_id,
"filter_id": filter_id,
"subprogram_name": subprogram_name,
"program_pi": program_pi,
"program_id": program_id,
"exposure_time": exposure_time,
}
]
trigger_id = len(self.queue)
trigger = {
trigger_id: {
"user": self.user,
"queue_name": f"{trigger_name}_{trigger_id}",
"queue_type": "list",
"validity_window_mjd": [
validity_window_start_mjd,
validity_window_end_mjd,
],
"targets": targets,
}
}
self.queue.update(trigger)
    def submit_queue(self) -> list:
"""
Submit the queue of triggers via the Kowalski API
"""
results = []
for i, trigger in self.queue.items():
res = self.kowalski.api(
method="put", endpoint="/api/triggers/ztf", data=trigger
)
results.append(res)
if res["status"] != "success":
err = "something went wrong with submitting."
raise APIError(err)
print(f"Submitted {len(self.queue)} triggers to Kowalski.")
return results
def delete_queue(self) -> None:
"""
Delete all triggers of the queue that have been submitted to Kowalski
"""
for i, trigger in self.queue.items():
req = {"user": self.user, "queue_name": trigger["queue_name"]}
self.kowalski.api(method="delete", endpoint="/api/triggers/ztf", data=req)
    def delete_trigger(self, trigger_name: str) -> dict:
"""
Delete a trigger that has been submitted
"""
req = {"user": self.user, "queue_name": trigger_name}
res = self.kowalski.api(method="delete", endpoint="/api/triggers/ztf", data=req)
if res["status"] != "success":
err = "something went wrong with deleting the trigger."
raise APIError(err)
return res
def print(self) -> None:
"""
Print the content of the queue
"""
for i, trigger in self.queue.items():
print(trigger)
def get_triggers(self) -> list:
"""
        Return the triggers of the queue as a list
"""
return [t for t in self.queue.items()]
def __del__(self):
"""
Close the connection
"""
        self.kowalski.close()

--- end of file: /ztf_plan_obs-0.34-py3-none-any.whl/ztf_plan_obs/api.py (package: ztf-plan-obs) ---
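
# Illustrative usage of the `Queue` class from ztf_plan_obs/api.py above
# (a sketch; it assumes KOWALSKI_HOST and KOWALSKI_API_TOKEN are set in the
# environment, and the field/filter/exposure values are placeholders):
#
#     q = Queue(user="your_username")
#     q.add_trigger_to_queue(
#         trigger_name="ToO_Neutrino_test",
#         validity_window_start_mjd=59000.25,
#         field_id=[550],
#         filter_id=[1],
#         exposure_time=[300],
#     )
#     q.print()
#     q.submit_queue()  # and later, if needed: q.delete_queue()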
import os, time, re
import numpy as np
import pandas as pd
from astropy.time import Time
import requests
def get_gcn_circulars_archive(archive_no=None):
if archive_no is None:
response = requests.get("https://gcn.gsfc.nasa.gov/gcn3_archive.html")
else:
response = requests.get(
f"https://gcn.gsfc.nasa.gov/gcn3_arch_old{archive_no}.html"
)
gcns = []
_archive_numbers = []
for line in response.text.splitlines():
if "IceCube observation of a high-energy neutrino" in line:
res = line.split(">")
gcn_no = "".join([x for x in res[2] if x.isdigit()])
long_name = re.findall(
r"(IceCube-[12][0-9][0-9][0-9][0-3][0-9][A-Z])", line
)[0]
short_name = "IC" + long_name[8:]
gcns.append((short_name, gcn_no))
elif "gcn3_arch_old" in line:
url = line.split('"')[1]
_archive_no = int(url[13:].split(".")[0])
_archive_numbers.append(_archive_no)
if archive_no is not None:
print(f"Processed archive number {archive_no}")
return gcns, max(_archive_numbers)
def parse_gcn_circular(gcn_number):
url = f"https://gcn.gsfc.nasa.gov/gcn3/{gcn_number}.gcn3"
response = requests.get(url)
returndict = {}
mainbody_starts_here = 999
splittext = response.text.splitlines()
splittext = list(filter(None, splittext))
for i, line in enumerate(splittext):
if "SUBJECT" in line:
name = line.split(" - ")[0].split(": ")[1]
returndict.update({"name": name})
elif "FROM" in line:
base = line.split("at")[0].split(": ")[1].split(" ")
author = [x for x in base if x != ""][1]
returndict.update({"author": author})
elif (
("RA" in line or "Ra" in line)
and ("DEC" in splittext[i + 1] or "Dec" in splittext[i + 1])
and i < mainbody_starts_here
):
ra, ra_upper, ra_lower = parse_radec(line)
dec, dec_upper, dec_lower = parse_radec(splittext[i + 1])
if ra_upper and ra_lower:
ra_err = [ra_upper, -ra_lower]
else:
ra_err = [None, None]
if dec_upper and dec_lower:
dec_err = [dec_upper, -dec_lower]
else:
dec_err = [None, None]
returndict.update(
{"ra": ra, "ra_err": ra_err, "dec": dec, "dec_err": dec_err}
)
mainbody_starts_here = i + 2
elif ("Time" in line or "TIME" in line) and i < mainbody_starts_here:
raw_time = [
x for x in line.split(" ") if x not in ["Time", "", "UT", "UTC"]
][1]
raw_time = "".join(
[x for x in raw_time if np.logical_or(x.isdigit(), x in [":", "."])]
)
raw_date = name.split("-")[1][:6]
ut_time = f"20{raw_date[0:2]}-{raw_date[2:4]}-{raw_date[4:6]}T{raw_time}"
time = Time(ut_time, format="isot", scale="utc")
returndict.update({"time": time})
return returndict
def parse_radec(line: str):
    """Extract a position (and, if quoted, its uncertainties) from a GCN line"""
    regex_findall = re.findall(r"[-+]?\d*\.\d+|\d+", line)
if len(regex_findall) == 2:
pos = float(regex_findall[0])
pos_upper = None
pos_lower = None
elif len(regex_findall) == 4:
pos = float(regex_findall[0])
        pos_upper = float(regex_findall[1])
        pos_lower = float(regex_findall[1])  # symmetric uncertainty, quoted once
elif len(regex_findall) == 5:
pos, pos_upper, pos_lower = regex_findall[0:3]
pos = float(pos)
pos_upper = float(pos_upper.replace("+", ""))
pos_lower = float(pos_lower.replace("-", ""))
else:
        raise ParsingError("Could not parse GCN ra and dec")
return pos, pos_upper, pos_lower
def parse_latest_gcn_notice():
""" """
url = "https://gcn.gsfc.nasa.gov/amon_icecube_gold_bronze_events.html"
response = requests.get(url)
table = pd.read_html(response.text)[0]
latest = table.head(1)
revision = latest["EVENT"]["Rev"][0]
date = latest["EVENT"]["Date"][0].replace("/", "-")
obstime = latest["EVENT"]["Time UT"][0]
ra = latest["OBSERVATION"]["RA [deg]"][0]
dec = latest["OBSERVATION"]["Dec [deg]"][0]
arrivaltime = Time(f"20{date} {obstime}")
return ra, dec, arrivaltime, revision
class ParsingError(Exception):
    pass

--- end of file: /ztf_plan_obs-0.34-py3-none-any.whl/ztf_plan_obs/gcn_parser.py (package: ztf-plan-obs) ---
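
# Illustrative usage of the parser functions from ztf_plan_obs/gcn_parser.py
# above (a sketch; the GCN circular number is a placeholder):
#
#     gcns, latest_archive_no = get_gcn_circulars_archive()
#     # gcns is a list of (short_name, gcn_number) tuples for IceCube circulars
#     info = parse_gcn_circular(27865)  # placeholder circular number
#     # info holds "name", "author", "ra"/"dec" (with errors) and "time"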
import multiprocessing
import numpy as np
from ztfquery import io
from astropy.utils.console import ProgressBar
MARSHAL_BASEURL = "http://skipper.caltech.edu:8080/cgi-bin/growth/view_avro.cgi?name="
class MarshalInfo:
""" """
def __init__(self, ztf_names, nprocess=16, logger=None):
import requests
import pandas as pd
auth = io._load_id_("marshal")
urls = []
for ztf_name in ztf_names:
url = MARSHAL_BASEURL + ztf_name
urls.append(url)
object_count = len(ztf_names)
auth_ = [auth] * object_count
from astropy.utils.console import ProgressBar
bar = ProgressBar(object_count)
results = []
with multiprocessing.Pool(nprocess) as p:
for index, result in enumerate(
p.map(self.get_info_multiprocessor, zip(ztf_names, urls, auth_))
):
bar.update(index)
results.append(result)
bar.update(object_count)
self.queryresult = results
@staticmethod
def get_info_multiprocessor(args):
""" """
import requests
import pandas as pd
ztf_name, url, auth = args
request = requests.get(url, auth=auth)
tables = pd.read_html(request.content)
        mtb = tables[-1]
ndet = len(mtb)
if ndet == 0:
ra = 999
dec = 999
jd = 999
else:
            ra = np.zeros(ndet)
            dec = np.zeros(ndet)
            jd = np.zeros(ndet)
            mag = np.full(ndet, 99.0)
            magerr = np.zeros(ndet)
            maglim = np.zeros(ndet)
            fid = np.full(ndet, 99)
            magzp = np.zeros(ndet)
            magzp_err = np.zeros(ndet)
for i in range(ndet):
isdiffpos = True
                try:
                    line = mtb.values[i][0].split(",")
                except Exception:
                    print(mtb.values[i][0])
                    continue
for j in range(len(line)):
if line[j][:14] == ' "isdiffpos":':
isdiffpos = str(line[j].split(":")[1])
if isdiffpos[2:-1] == "f":
isdiffpos = False
if line[j][:7] == ' "ra":':
ra[i] = float(line[j].split(":")[1])
elif line[j][:8] == ' "dec":':
dec[i] = float(line[j].split(":")[1])
# Throw away all alert datapoints
# with negative diff images
                if not isdiffpos:
ra[i] = 0
ras = ra[ra != 0]
decs = dec[ra != 0]
jds = jd[ra != 0]
ind = np.argsort(jds)
ra_median = np.median(ras[ind])
dec_median = np.median(decs[ind])
return ra_median, dec_median
class FritzInfo:
""" Testing only """
def __init__(self, ztf_names):
self.ztf_names = ztf_names
self.queryresult = self.get_info()
def get_info(self):
from ztfquery import fritz
returndict = {}
object_count = len(self.ztf_names)
bar = ProgressBar(object_count)
queryresult = []
for i, name in enumerate(self.ztf_names):
query_res = fritz.download_alerts(name)
queryresult.append(query_res)
bar.update(i)
bar.update(object_count)
ras = []
decs = []
for entry in queryresult[0]:
ras.append(entry["candidate"]["ra"])
decs.append(entry["candidate"]["dec"])
ra = np.median(ras)
dec = np.median(decs)
returndict.update({"ra": ra, "dec": dec})
        return returndict

--- end of file: /ztf_plan_obs-0.34-py3-none-any.whl/ztf_plan_obs/fritzconnector.py (package: ztf-plan-obs) ---
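
# Illustrative usage of FritzInfo from fritzconnector.py above (a sketch; the
# ZTF name is a placeholder and Fritz credentials are handled via ztfquery):
#
#     info = FritzInfo(["ZTF19aampqcq"])
#     ra, dec = info.queryresult["ra"], info.queryresult["dec"]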
import matplotlib.pyplot as plt
import os
from datetime import datetime, date
from matplotlib.backends.backend_pdf import PdfPages
from tqdm import tqdm
from astropy.time import Time
from astropy import units as u
from ztf_plan_obs.plan import PlanObservation
from ztf_plan_obs.utils import (
round_time,
short_time,
isotime_delta_to_seconds,
isotime_to_mjd,
mjd_to_isotime,
)
NIGHTS = [1, 2, 3, 5, 7, 9]
SHORT_NIGHTS = NIGHTS[1:]
ONE_FILTER_NIGHTS = NIGHTS[1:-1]
class MultiDayObservation:
""" """
def __init__(
self,
name: str,
ra: float = None,
dec: float = None,
startdate=None,
verbose: bool = True,
**kwargs,
):
self.name = name
self.ra = ra
self.dec = dec
self.triggers: list = []
today = date.today()
now = datetime.now()
if self.ra is None:
plan_initial = PlanObservation(name=name, alertsource="icecube")
else:
plan_initial = PlanObservation(name=name, ra=self.ra, dec=self.dec)
if startdate is None:
first_obs = plan_initial.g_band_recommended_time_start
first_obs_day = Time(
first_obs, format="iso", scale="utc", out_subfmt="date"
)
next_days = [(first_obs_day + i - 1).value for i in NIGHTS]
else:
startdate_astropy = Time(
str(startdate), format="iso", scale="utc", out_subfmt="date"
)
next_days = [(startdate_astropy + i - 1).value for i in NIGHTS]
ra = plan_initial.ra
dec = plan_initial.dec
observable = []
g_band_start = []
g_band_end = []
r_band_start = []
r_band_end = []
plan_initial.request_ztf_fields()
        if plan_initial.ra_err:
            recommended_field = plan_initial.recommended_field
        else:
            # no error box available: fall back to whatever field recommendation
            # exists (may be None)
            recommended_field = getattr(plan_initial, "recommended_field", None)
pdf_outfile = os.path.join(name, f"{name}_multiday.pdf")
with PdfPages(pdf_outfile) as pdf:
for i, day in enumerate(tqdm(next_days)):
if NIGHTS[i] not in SHORT_NIGHTS:
plan = PlanObservation(
name=name, date=day, ra=ra, dec=dec, verbose=False
)
else:
if NIGHTS[i] in ONE_FILTER_NIGHTS:
bands = ["g"]
else:
bands = ["g", "r"]
plan = PlanObservation(
name=name,
date=day,
ra=ra,
dec=dec,
observationlength=30,
bands=bands,
verbose=False,
)
observable.append(plan.observable)
                # check this night's plan (the `observable` list is always
                # truthy once it has one entry)
                if plan.observable:
                    g_band_start.append(plan.g_band_recommended_time_start)
                    g_band_end.append(plan.g_band_recommended_time_end)
                    # r-band times only exist when "r" is among the requested bands
                    r_band_start.append(getattr(plan, "r_band_recommended_time_start", None))
                    r_band_end.append(getattr(plan, "r_band_recommended_time_end", None))
else:
g_band_start.append(None)
g_band_end.append(None)
r_band_start.append(None)
r_band_end.append(None)
ax = plan.plot_target()
plt.tight_layout()
pdf.savefig()
plt.close()
self.summarytext = f"\nYour multi-day observation plan for {name}\n"
self.summarytext += "-------------------------------------------------\n"
self.summarytext += "g-band observations\n"
for i, item in enumerate(g_band_start):
if item is not None:
if observable[i]:
self.summarytext += f"Night {NIGHTS[i]} {short_time(item.value)} - {short_time(g_band_end[i].value)}\n"
exposure_time = isotime_delta_to_seconds(
isotime_start=item.value, isotime_end=g_band_end[i].value
)
self.triggers.append(
{
"field_id": recommended_field,
"filter_id": 1,
"mjd_start": isotime_to_mjd(item.value),
"exposure_time": exposure_time,
}
)
else:
self.summarytext += f"Night {NIGHTS[i]} NOT OBSERVABLE\n"
self.summarytext += "-------------------------------------------------\n"
self.summarytext += "\n-------------------------------------------------\n"
self.summarytext += "r-band observations\n"
for i, item in enumerate(r_band_start):
if NIGHTS[i] not in ONE_FILTER_NIGHTS:
if item is not None:
if observable[i]:
self.summarytext += f"Night {NIGHTS[i]} {short_time(item.value)} - {short_time(r_band_end[i].value)}\n"
exposure_time = isotime_delta_to_seconds(
isotime_start=item.value, isotime_end=r_band_end[i].value
)
self.triggers.append(
{
"field_id": recommended_field,
"filter_id": 2,
"mjd_start": isotime_to_mjd(item.value),
"exposure_time": exposure_time,
}
)
else:
self.summarytext += f"Night {NIGHTS[i]} NOT OBSERVABLE\n"
self.summarytext += "-------------------------------------------------\n\n"
def print_plan(self):
print(self.summarytext)
def print_triggers(self):
bands = {1: "g", 2: "r", 3: "i"}
message = ""
for i, trigger in enumerate(self.triggers):
t_start = short_time(mjd_to_isotime(trigger["mjd_start"]))
message += f"{t_start} // {trigger['exposure_time']} s exposure // filter={bands[trigger['filter_id']]} // field={trigger['field_id']}\n"
message = message[:-1]
print(message)
        return message

--- end of file: /ztf_plan_obs-0.34-py3-none-any.whl/ztf_plan_obs/multiday_plan.py (package: ztf-plan-obs) ---
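
# Illustrative usage of MultiDayObservation above (a sketch; the name and
# coordinates are placeholders):
#
#     obs = MultiDayObservation(name="IC200530A", ra=255.37, dec=26.61)
#     obs.print_plan()
#     obs.print_triggers()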
import argparse
from astropy import units as u
from astropy.coordinates import SkyCoord
from astropy.time import Time
from datetime import datetime
import email
import imaplib
import numpy as np
import os
import pandas as pd
import random
import re
import shutil
import string
import subprocess
import time
import warnings
# Disable warnings from log10 when there are non-detections
warnings.filterwarnings("ignore")
import matplotlib
matplotlib.use('AGG') # Run faster, comment out for interactive plotting
import matplotlib.pyplot as plt
#
# Generic ZTF webdav login
#
_ztfuser = "ztffps"
_ztfinfo = "dontgocrazy!"
#
# Import ZTF email user information
#
def import_credentials() -> None:
'''Load ZTF credentials from environmental variables.'''
try:
global _ztffp_email_address
_ztffp_email_address = os.getenv("ztf_email_address", None)
global _ztffp_email_password
_ztffp_email_password = os.getenv("ztf_email_password", None)
global _ztffp_email_server
_ztffp_email_server = os.getenv("ztf_email_imapserver", None)
# The email address associated with the ZTF FP server may be an alias,
        # so allow for that possibility
global _ztffp_user_address
if 'ztf_user_address' in os.environ:
_ztffp_user_address = os.getenv("ztf_user_address", None)
# Assume ZTF and login email address are the same
else:
_ztffp_user_address = _ztffp_email_address
global _ztffp_user_password
_ztffp_user_password = os.getenv("ztf_user_password", None)
# Success!
return True
except:
print("ZTF credentials are not found in the environmental variables.")
print("Please check the README file on github for setup instructions.")
# Unsuccessful
return False
def wget_check() -> bool:
'''Check if wget is installed on the system.'''
wget_installed: bool = False
if shutil.which("wget") is None:
wget_text = (f"wget is not installed on your system "
"(not the Python library). "
f"Please install wget before continuing.\n")
print(wget_text)
else:
wget_installed = True
return wget_installed
def random_log_file_name() -> str:
'''Generate a random log file name.'''
log_file_name: str | None = None
while log_file_name is None or os.path.exists(log_file_name):
random_chars = ''.join(random.choices(string.ascii_uppercase + string.digits,
k=10))
log_file_name = f"ztffp_{random_chars}.txt"
return log_file_name
def download_ztf_url(url: str, verbose: bool = True) -> str | None:
'''Download a ZTF files using wget.'''
# Wget is required to download the ZTF forced photometry request submission
wget_installed = wget_check()
if wget_installed==False:
return None
wget_command = (f"wget --http-user={_ztfuser} "
f"--http-password={_ztfinfo} "
f"-O {url.split('/')[-1]} {url}")
if verbose:
print("Downloading file...")
print(f'\t{wget_command}')
subprocess.run(wget_command.split(), capture_output=True)
return url.split('/')[-1]
def match_ztf_message(job_info, message_body, message_time_epoch, time_delta=10, new_email_matching=False, angular_separation=2):
    '''
    Check if the given email matches the information passed from the log file via job_info.
    With new_email_matching=True, the request parameters must match the email
    (positions exactly; the JD range exactly or within time_delta).
    Otherwise a match only requires a close position (within angular_separation
    arcseconds), the relevant body text, and that the email was sent after the
    request was submitted.
    '''
match = False
#
# Only continue if the message was received AFTER the job was submitted
#
if message_time_epoch < job_info['cdatetime'].to_list()[0]:
return match
message_lines = message_body.splitlines()
for line in message_lines:
#
# Incomplete data product
#
if re.search("A request similar to yours is waiting to be processed", line):
match = False
break # Stop early if this isn't a finished data product
if re.search("reqid", line):
inputs = line.split('(')[-1]
# Two ways
# Processing has completed for reqid=XXXX ()
test_ra = inputs.split('ra=')[-1].split(',')[0]
test_decl = inputs.split('dec=')[-1].split(')')[0]
if re.search('minJD', line) and re.search('maxJD', line):
test_minjd = inputs.split('minJD=')[-1].split(',')[0]
test_maxjd = inputs.split('maxJD=')[-1].split(',')[0]
else:
test_minjd = inputs.split('startJD=')[-1].split(',')[0]
test_maxjd = inputs.split('endJD=')[-1].split(',')[0]
        if new_email_matching:
            # Call this a match only if the submitted parameters match the email:
            # positions exactly, and the JD range exactly or within time_delta
            ra_match = np.format_float_positional(float(test_ra), precision=6, pad_right=6).replace(' ', '0') == job_info['ra'].to_list()[0]
            dec_match = np.format_float_positional(float(test_decl), precision=6, pad_right=6).replace(' ', '0') == job_info['dec'].to_list()[0]
            jd_exact = (np.format_float_positional(float(test_minjd), precision=6, pad_right=6).replace(' ', '0') == job_info['jdstart'].to_list()[0]
                        and np.format_float_positional(float(test_maxjd), precision=6, pad_right=6).replace(' ', '0') == job_info['jdend'].to_list()[0])
            jd_close = (abs(float(test_minjd) - float(job_info['jdstart'].to_list()[0])) < time_delta
                        and abs(float(test_maxjd) - float(job_info['jdend'].to_list()[0])) < time_delta)
            if ra_match and dec_match and (jd_exact or jd_close):
                match = True
else:
# Check if new and positions are similar
submitted_skycoord = SkyCoord(job_info["ra"], job_info["dec"], frame='icrs', unit='deg')
email_skycoord = SkyCoord(test_ra, test_decl, frame='icrs', unit='deg')
if submitted_skycoord.separation(email_skycoord).arcsecond < angular_separation and \
message_time_epoch > job_info['cdatetime'].to_list()[0]:
match = True
return match
def read_job_log(file_name: str) -> pd.DataFrame:
job_info = pd.read_html(file_name)[0]
job_info['ra'] = np.format_float_positional(float(job_info['ra'].to_list()[0]),
precision=6, pad_right=6).replace(' ','0')
job_info['dec'] = np.format_float_positional(float(job_info['dec'].to_list()[0]),
precision=6, pad_right=6).replace(' ','0')
job_info['jdstart'] = np.format_float_positional(float(job_info['jdstart'].to_list()[0]),
precision=6, pad_right=6).replace(' ','0')
job_info['jdend'] = np.format_float_positional(float(job_info['jdend'].to_list()[0]),
precision=6, pad_right=6).replace(' ','0')
job_info['isostart'] = Time(float(job_info['jdstart'].to_list()[0]),
format='jd', scale='utc').iso
job_info['isoend'] = Time(float(job_info['jdend'].to_list()[0]),
format='jd', scale='utc').iso
job_info['ctime'] = os.path.getctime(file_name) - time.localtime().tm_gmtoff
job_info['cdatetime'] = datetime.fromtimestamp(os.path.getctime(file_name))
return job_info
def test_email_connection(n_attempts = 5):
'''
Checks the given email address and password
to see if a connection can be made.
'''
# Try a few times to be certain.
for attempt in range(n_attempts):
try:
imap = imaplib.IMAP4_SSL(_ztffp_email_server)
imap.login(_ztffp_email_address, _ztffp_email_password)
status, messages = imap.select("INBOX")
if status=='OK':
found_message = ("Your email inbox was found and contains "
f"{int(messages[0])} messages.\n"
"If this is not correct, please check your settings.")
print(found_message)
else:
                print("Your inbox was not located. Please check your settings.")
imap.close()
imap.logout()
# A successful connection was made
return True
# Connection could be broken
except Exception:
print("Encountered an exception when connecting to your email address. Trying again.")
# Give a small timeout in the case of an intermittent connection issue.
time.sleep(10)
# No successful connection was made
return False
def query_ztf_email(log_file_name: str,
source_name: str ='temp',
new_email_matching: bool = False,
verbose: bool = True):
'''
Checks the given email address for a message from ZTF.
Parameters
----------
log_file_name : str
The name of the log file to check for a match.
source_name : str, optional
The name of the source that will be used for output files.
new_email_matching : bool, optional
If True, the email must be new.
verbose : bool, optional
If True, print out more information for logging.
'''
downloaded_file_names = None
if not os.path.exists(log_file_name):
print(f"{log_file_name} does not exist.")
return -1
# Interpret the request sent to the ZTF forced photometry server
job_info = read_job_log(log_file_name)
try:
imap = imaplib.IMAP4_SSL(_ztffp_email_server)
imap.login(_ztffp_email_address, _ztffp_email_password)
status, messages = imap.select("INBOX")
processing_match = False
for i in range(int(messages[0]), 0, -1):
if processing_match:
break
# Fetch the email message by ID
res, msg = imap.fetch(str(i), "(RFC822)")
for response in msg:
if isinstance(response, tuple):
# Parse a bytes email into a message object
msg = email.message_from_bytes(response[1])
# decode the email subject
sender, encoding = email.header.decode_header(msg.get("From"))[0]
                    if not isinstance(sender, bytes) and re.search(r"ztfpo@ipac\.caltech\.edu", sender):
#
# Get message body
#
content_type = msg.get_content_type()
body = msg.get_payload(decode=True).decode()
this_date = msg['Date']
this_date_tuple = email.utils.parsedate_tz(msg['Date'])
local_date = datetime.fromtimestamp(email.utils.mktime_tz(this_date_tuple))
#
# Check if this is the correct one
#
if content_type=="text/plain":
processing_match = match_ztf_message(job_info, body, local_date, new_email_matching)
subject, encoding = email.header.decode_header(msg.get("Subject"))[0]
if processing_match:
# Grab the appropriate URLs
lc_url = 'https' + (body.split('_lc.txt')[0] + '_lc.txt').split('https')[-1]
log_url = 'https' + (body.split('_log.txt')[0] + '_log.txt').split('https')[-1]
# Download each file
lc_initial_file_name = download_ztf_url(lc_url, verbose=verbose)
log_initial_file_name = download_ztf_url(log_url, verbose=verbose)
# Rename files
lc_final_name = f"{source_name.replace(' ','')}_{lc_initial_file_name.split('_')[-1]}"
log_final_name = f"{source_name.replace(' ','')}_{log_initial_file_name.split('_')[-1]}"
os.rename(lc_initial_file_name, lc_final_name)
os.rename(log_initial_file_name, log_final_name)
downloaded_file_names = [lc_final_name, log_final_name]
imap.close()
imap.logout()
# Connection could be broken
except Exception:
pass
if downloaded_file_names is not None:
for file_name in downloaded_file_names:
if verbose:
print(f"Downloaded: {file_name}")
return downloaded_file_names
def ztf_forced_photometry(ra: int | float | str | None,
decl: int | float | str | None,
jdstart: float | None = None,
jdend: float | None = None,
days: int | float = 60,
send: bool = True,
verbose: bool = True) -> str | None:
'''
Submits a request to the ZTF Forced Photometry service.
Parameters
----------
ra : int, float, str, or None
The right ascension of the source in decimal degrees or sexagesimal.
decl : int, float, str, or None
The declination of the source in decimal degrees or sexagesimal.
jdstart : float, optional
The start Julian date for the query.
If None, the current date minus 60 days will be used.
jdend : float, optional
The end Julian date for the query.
If None, the current date will be used.
days : int or float, optional
The number of days to query.
This is only used if jdstart and jdend are None.
send : bool, optional
If True, the request will be sent to the ZTF Forced Photometry service.
'''
# Wget is required for the ZTF forced photometry request submission
wget_installed = wget_check()
if wget_installed==False:
return None
#
# Set dates
#
if jdend is None:
jdend = Time(datetime.utcnow(), scale='utc').jd
if jdstart is None:
jdstart = jdend - days
if ra is not None and decl is not None:
# Check if ra is a decimal
try:
# These will trigger the exception if they aren't float
float(ra)
float(decl)
skycoord = SkyCoord(ra, decl, frame='icrs', unit='deg')
# Else assume sexagesimal
except Exception:
skycoord = SkyCoord(ra, decl, frame='icrs', unit=(u.hourangle, u.deg))
# Convert to string to keep same precision.
# This will make matching easier in the case of submitting multiple jobs.
jdend_str = np.format_float_positional(float(jdend), precision=6)
jdstart_str = np.format_float_positional(float(jdstart), precision=6)
ra_str = np.format_float_positional(float(skycoord.ra.deg), precision=6)
decl_str = np.format_float_positional(float(skycoord.dec.deg), precision=6)
log_file_name = random_log_file_name() # Unique file name
if verbose:
print(f"Sending ZTF request for (R.A.,Decl)=({ra},{decl})")
wget_command = (f"wget --http-user={_ztfuser} "
f"--http-passwd={_ztfinfo} "
f"-O {log_file_name} "
"https://ztfweb.ipac.caltech.edu/cgi-bin/requestForcedPhotometry.cgi?"
f"ra={ra_str}&"
f"dec={decl_str}&"
f"jdstart={jdstart_str}&"
f"jdend={jdend_str}&"
f"email={_ztffp_user_address}&userpass={_ztffp_user_password}")
# Replace .& with .0& to avoid wget error
wget_command = wget_command.replace('.&', '.0&')
if verbose:
print(wget_command)
if send:
subprocess.run(wget_command.split(), capture_output=True)
return log_file_name
else:
if verbose:
print("Missing necessary R.A. or declination.")
return None
def plot_ztf_fp(lc_file_name: str,
file_format: str = '.png',
threshold: int | float = 3.0,
upperlimit: int | float = 5.0,
verbose: bool = False):
'''
Create a simple ZTF forced photometry light curve.
'''
# Color mapping for figures
filter_colors: dict = {'ZTF_g': 'g',
'ZTF_r': 'r',
'ZTF_i': 'darkorange'}
try:
ztf_fp_df = pd.read_csv(lc_file_name, delimiter=' ', comment='#')
    except Exception:
if verbose:
print(f"Empty ZTF light curve file ({lc_file_name}). Check the log file.")
return
# Rename columns due to mix of , and ' ' separations in the files
new_cols = {}
for col in ztf_fp_df.columns:
new_cols[col] = col.replace(',','')
# Make a cleaned-up version
ztf_fp_df.rename(columns=new_cols, inplace=True)
ztf_fp_df.drop(columns=['Unnamed: 0'], inplace=True)
    # Create additional columns with useful calculations: the exposure midpoint
    # in MJD, the difference-image magnitude, its uncertainty
    # (1.0857 = 2.5/ln(10)) and the corresponding upper limit
    ztf_fp_df['mjd_midpoint'] = ztf_fp_df['jd'] - 2400000.5 - ztf_fp_df['exptime']/2./86400.
    ztf_fp_df['fp_mag'] = ztf_fp_df['zpdiff'] - 2.5*np.log10(ztf_fp_df['forcediffimflux'])
    ztf_fp_df['fp_mag_unc'] = 1.0857 * ztf_fp_df['forcediffimfluxunc']/ztf_fp_df['forcediffimflux']
    ztf_fp_df['fp_ul'] = ztf_fp_df['zpdiff'] - 2.5*np.log10(upperlimit * ztf_fp_df['forcediffimfluxunc'])
fig = plt.figure(figsize=(12,6))
# Iterate over filters
for ztf_filter in set(ztf_fp_df['filter']):
filter_df = ztf_fp_df[ztf_fp_df['filter']==ztf_filter]
# Upper limit df
ul_filter_df = filter_df[filter_df.forcediffimflux/filter_df.forcediffimfluxunc < threshold]
# Detections df
detection_filter_df = filter_df[filter_df.forcediffimflux/filter_df.forcediffimfluxunc >= threshold]
if verbose:
print(f"{ztf_filter}: {detection_filter_df.shape[0]} detections and {ul_filter_df.shape[0]} upper limits.")
# Plot detections
plt.plot(detection_filter_df.mjd_midpoint, detection_filter_df.fp_mag, color=filter_colors[ztf_filter], marker='o', linestyle='', zorder=3)
plt.errorbar(detection_filter_df.mjd_midpoint, detection_filter_df.fp_mag, yerr=detection_filter_df.fp_mag_unc, color=filter_colors[ztf_filter], linestyle='', zorder=1)
# Plot non-detections
plt.plot(ul_filter_df.mjd_midpoint, ul_filter_df.fp_mag, color=filter_colors[ztf_filter], marker='v', linestyle='', zorder=2)
# Final touches
plt.gca().invert_yaxis()
plt.grid(True)
plt.tick_params(bottom=True, top=True, left=True, right=True, direction='in', labelsize='18', grid_linestyle=':')
plt.ylabel('ZTF FP (Mag.)', fontsize='20')
plt.xlabel('Time (MJD)', fontsize='20')
plt.tight_layout()
output_file_name = lc_file_name.rsplit('.', 1)[0] + file_format
fig.savefig(output_file_name)
plt.close(fig)
return output_file_name
#
# Wrapper function so that other python code can call this
#
def run_ztf_fp(all_jd: bool = False,
days: int | float = 60,
decl: int | float | None = None,
directory_path: str = '.',
do_plot: bool = True,
emailcheck: int = 20,
fivemindelay: int =60,
jdend: float | int | None = None,
jdstart: float | int | None = None,
logfile: str | None = None,
mjdend: float | int | None = None,
mjdstart: float | int | None = None,
plotfile: str | None = None,
ra: int | float | None = None,
skip_clean: bool = False,
source_name: str = 'temp',
test_email: bool = False,
new_email_matching: bool = False,
verbose: bool = False):
'''
Wrapper function to run the ZTF Forced Photometry code.
Parameters
----------
all_jd : bool, optional
If True, will run the code for all JDs in the given range. If False, will only run for the first JD in the range. The default is False.
days : int or float, optional
Number of days to run the code for. The default is 60.
decl : int or float, optional
Declination of the source in degrees. The default is None.
directory_path : str, optional
Path to the directory where the code will be run. The default is '.'.
do_plot : bool, optional
If True, will create a plot of the light curve. The default is True.
emailcheck : int, optional
Number of minutes between email checks. The default is 20.
fivemindelay : int, optional
Number of minutes to wait before checking for new data. The default is 60.
jdend : float or int, optional
Last JD to run the code for. The default is None.
jdstart : float or int, optional
First JD to run the code for. The default is None.
logfile : str, optional
Name of the log file. The default is None.
mjdend : float or int, optional
Last MJD to run the code for. The default is None.
mjdstart : float or int, optional
First MJD to run the code for. The default is None.
plotfile : str, optional
Name of the plot file. The default is None.
ra : int or float, optional
Right ascension of the source in degrees. The default is None.
skip_clean : bool, optional
If True, will skip the cleaning step. The default is False.
source_name : str, optional
Name of the source. The default is 'temp'.
test_email : bool, optional
If True, will test the email connection. The default is False.
new_email_matching : bool, optional
If True, will require the email is new. The default is False.
verbose : bool, optional
If True, will print out more information. The default is False.
'''
# Stop early if credentials were not found
credentials_imported = import_credentials()
if credentials_imported == False:
return -1
# Exit early if no sufficient conditions given to run
run = False
if (ra is not None and decl is not None) or (logfile is not None) or \
(plotfile is not None) or (test_email==True):
run = True
# Go home early
if run==False:
print("Insufficient parameters given to run.")
return -1
# Perform an email test if necessary
if test_email==True:
# Use comments from the function for now
email_connection_status = test_email_connection()
return
#
# Change necessary variables based on what was provided
#
# Override jd values if mjd arguments are supplied
if mjdstart is not None:
jdstart = mjdstart + 2400000.5
if mjdend is not None:
jdend = mjdend + 2400000.5
# Set to full ZTF range
if all_jd:
jdstart = 2458194.5
jdend = Time(datetime.utcnow(), scale='utc').jd
log_file_name = None
if logfile is None and plotfile is None:
log_file_name = ztf_forced_photometry(ra=ra,
decl=decl,
jdstart=jdstart,
jdend=jdend,
days=days)
else:
log_file_name = logfile
plot_file_name = plotfile
if log_file_name is not None:
# Download via email
downloaded_file_names = None
time_start_seconds = time.time()
while downloaded_file_names is None:
if time.time() - time_start_seconds < emailcheck:
if verbose:
print(f"Waiting for the email (rechecking every {emailcheck} seconds).")
downloaded_file_names = query_ztf_email(log_file_name,
source_name=source_name,
new_email_matching=new_email_matching,
verbose=verbose)
if downloaded_file_names == -1:
if verbose:
print(f"{log_file_name} was not found.")
elif downloaded_file_names is None:
                    if emailcheck < fivemindelay and time.time() - time_start_seconds > 600:  # after 10 minutes, drop to re-checking every fivemindelay seconds
emailcheck = fivemindelay
if verbose:
print(f"Changing to re-checking every {emailcheck} seconds.")
time.sleep(emailcheck)
else:
downloaded_file_names = [plot_file_name]
if downloaded_file_names[0] is not None:
# Open LC file and plot it
if do_plot:
figure_file_name = plot_ztf_fp(downloaded_file_names[0], verbose=verbose)
else:
figure_file_name = None
#
# Clean-up
#
if skip_clean==False:
output_directory = f"{directory_path}/{source_name}".replace('//','/')
# Trim potential extra '/'
if output_directory[-1:]=='/':
output_directory = output_directory[:-1]
# Create directory
if not os.path.exists(output_directory):
if verbose:
print(f"Creating {output_directory}")
os.makedirs(output_directory)
# Move all files to this location
# Wget log file
output_files = list()
if log_file_name is not None and os.path.exists(log_file_name):
shutil.move(log_file_name, f"{output_directory}/{log_file_name.split('/')[-1]}")
if verbose:
print(f"{' '*5}ZTF wget log: {output_directory}/{log_file_name.split('/')[-1]}")
output_files.append(f"{output_directory}/{log_file_name.split('/')[-1]}")
# Downloaded files
if isinstance(downloaded_file_names, list):
for downloaded_file_name in downloaded_file_names:
if os.path.exists(downloaded_file_name):
shutil.move(downloaded_file_name, f"{output_directory}/{downloaded_file_name.split('/')[-1]}")
if verbose:
print(f"{' '*5}ZTF downloaded file: {output_directory}/{downloaded_file_name.split('/')[-1]}")
output_files.append(f"{output_directory}/{downloaded_file_name.split('/')[-1]}")
# Figure
if figure_file_name is not None and os.path.exists(figure_file_name):
shutil.move(figure_file_name, f"{output_directory}/{figure_file_name.split('/')[-1]}")
if verbose:
print(f"{' '*5}ZTF figure: {output_directory}/{figure_file_name.split('/')[-1]}")
output_files.append(f"{output_directory}/{figure_file_name.split('/')[-1]}")
if len(output_files)==0 or isinstance(output_files, list)==False:
output_files = None
# Useful for automation
    return output_files

--- end of file: /ztffp-2.0.1-py3-none-any.whl/ztffp.py (package: ztffp) ---
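
# Illustrative call of the run_ztf_fp wrapper above (a sketch; the coordinates
# and source name are placeholders, and the ztf_email_* / ztf_user_*
# environment variables must be set, see import_credentials above):
#
#     output_files = run_ztf_fp(
#         ra=239.2343,
#         decl=33.4523,
#         days=30,
#         source_name="my_transient",
#         verbose=True,
#     )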
[](https://pypi.python.org/pypi/ztfidr)
# ztfidr
package to read and parse the ZTF SN IDR dataset
# Install
`pip install ztfidr` [](https://pypi.python.org/pypi/ztfidr)
or (for the latest implementations)
```bash
git clone https://github.com/MickaelRigault/ztfidr.git
cd ztfidr
python setup.py install
```
**AND**
you need to have cloned the [ZTF Internal datarelease](https://github.com/ZwickyTransientFacility/ztfcosmoidr) (password protected).
**You need to provide its full path as a global environment variable: `$ZTFIDRPATH`**
# Usage
Assuming your `ztfcosmoidr/dr2` repository is stored at a location provided by `$ZTFIDRPATH`:
```python
import ztfidr
sample = ztfidr.get_sample() # UPDATE FOR YOUR CASE
```
The dataframe containing all the relevant information is accessible as `sample.data`:
```python
sample.data
```
and with `sample.get_data()` you have many options, such as `x1_range`, `t0_range`, `redshift_range` or `goodcoverage=True` (see the sketch below the table preview)...
<p align="left">
<img src="images/example_data.png" width="900" title="data">
</p>
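
For instance, a minimal selection sketch (the option values below are arbitrary):
```python
data = sample.get_data(x1_range=[-3, 3], redshift_range=[0, 0.1], goodcoverage=True)
```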
To visualise an individual target lightcurve (e.g. `ZTF19aampqcq`), do:
```python
lc = sample.get_target_lightcurve("ZTF19aampqcq")
lc.show()
```
<p align="left">
<img src="images/example_show_target.png" width="500" title="show_target">
</p>
***
# Host data
Host data are accessible using `io.get_host_data()`, which returns a dataframe with multi-index columns.
`host_data = io.get_host_data()`
the top-level columns are "2kpc", "4kpc" and "global".
If you want the dataframe only for the "global" data, simply do:
`global_host_data = host_data.xs("global", axis=1)`
same for "2kpc" local
`local_host_data = host_data.xs("2kpc", axis=1)`
The "Direct Light Ratio" (dlr) corresponding to effective distance from the host center is inside the "global" columns as `host_dlr`.
You can either access it as:
`dlr = global_host_data["host_dlr"]`
or directly from host_data as:
`dlr = host_data.xs("host_dlr", axis=1, level=1)`
*More functionalities are implemented, check ztfidr.get_hosts_data() etc.*
--- end of file: /ztfidr-0.9.7.tar.gz/ztfidr-0.9.7/README.md (package: ztfidr) ---
# ztfimg
ZTF Images tools
# Installation
[](https://pypi.python.org/pypi/ztfimg)
[](https://ztfimg.readthedocs.io/en/latest/?badge=latest)
or git:
```
git clone https://github.com/MickaelRigault/ztfimg.git
cd ztfim
pyton setup.py install
```
#### Dependencies
- This package uses `sep` for source extraction, background estimation and aperture photometry. [sep](https://sep.readthedocs.io/en/v1.0.x/api/sep.extract.html) is a python version of sextractor (a minimal `sep` sketch follows this list).
- This package uses ztfquery for accessing ztf image products. [ztfquery](https://github.com/MickaelRigault/ztfquery)
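
For context, here is a minimal `sep` sketch of the background estimation, source extraction and aperture photometry steps this package builds on (the FITS file name is a placeholder):
```python
import numpy as np
import sep
from astropy.io import fits

data = fits.getdata("sciimg.fits").astype(np.float32)  # placeholder image
bkg = sep.Background(data)  # background estimation
objects = sep.extract(data - bkg.back(), 1.5, err=bkg.globalrms)  # sources
flux, fluxerr, flag = sep.sum_circle(  # aperture photometry
    data - bkg.back(), objects["x"], objects["y"], r=3.0
)
```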
***
# Read the doc:
It is here: https://ztfimg.readthedocs.io/en/latest/
--- end of file: /ztfimg-0.19.0.tar.gz/ztfimg-0.19.0/README.md (package: ztfimg) ---
# ztflc
LightCurve Estimation from ZTF data
[](https://pypi.python.org/pypi/ztflc)
### Credit
M. Rigault (corresponding author, [email protected], CNRS/IN2P3).
A similar code exists [here](https://github.com/yaoyuhan/ForcePhotZTF), used for the ZTF high cadence SNeIa paper [Yao et al. 2019](http://cdsads.u-strasbg.fr/abs/2019arXiv191002967Y).
`ztflc` is however solely based on [ztfquery](https://github.com/MickaelRigault/ztfquery).
### Acknowledgment
If you have used `ztflc` for a research you are publishing, please **include the following in your acknowledgments**:
_"The ztflc code was funded by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement n°759194 - USNAC, PI: Rigault)."_
# Usage
### First time using `ztflc`
1) You may want to make sure `$ZTFDATA` is defined (see [ztfquery](https://github.com/MickaelRigault/ztfquery).)
2) You should first run *Storing or updating marshal data.* for your marshal's science program.
### Example 1, force photometry from coordinates:
For a target with coordinates `ra`, `dec`, you want to do force photometry on ZTF data from julian data `jdmin` up to julian date `jdmax`:
```python
from ztflc import forcephotometry
# Setup the target
fp = forcephotometry.ForcePhotometry.from_coords(ra, dec, jdmin=jdmin,jdmax=jdmax)
# Load the ztf metadata
fp.load_metadata()
# Load the ztf file you are going to need (diffimg and psfimg)
fp.load_filepathes()
# and run the forcephotometry
fp.run_forcefit(verbose=True)
```
Once the force photometry has run, data are stored as:
```python
fp.data_forcefit
```
and use `fp.show_lc()` to plot the lightcurve.
To store the `date_forcefit`, simply do:
```python
fp.store()
```
the dataframe will be stored in `$ZTFDATA/forcephotometry/name.csv` (c.f. `ztfquery` for `$ZTFDATA`). You can also directly provide the filename if you want the dataframe stored elsewhere: `fp.store(filename)`.
### Example 2, from ztf target name:
Say you want to get the force photometry from a ztf target named `ZTF18aaaaaaa`. Use the same code as example 1, except that, this time, you load `fp` using:
```python
from ztflc import forcephotometry
# Setup the target
fp = forcephotometry.ForcePhotometry.from_name("ZTF18aaaaaaa")
```
the code will use `ztfquery.marshal` to recover the information you need (ra, dec jdmin, jdmax).
# Storing or updating marshal data.
You can store ztf mashal target information locally using `ztfquery`. For instance, if you want to store the "Cosmology" program, simply do:
```python
from ztfquery import marshal
m = marshal.MarshalAccess()
m.load_target_sources("Cosmology")
m.store()
```
Once you have done that, the code will use the locally stored data when you call `forcephotometry.ForcePhotometry.from_name("ZTF18aaaaaaa")`.
# Downloading the files you need.
The first time you are going to run the forcephotometry for a given target, you most likely need to download the associated data. For instance for `ZTF18aaaaaaa`
```python
from ztflc import forcephotometry
# Setup the target
fp = forcephotometry.ForcePhotometry.from_name("ZTF18aaaaaaa")
# Download the data:
fp.io.download_data()
```
--- end of file: /ztflc-0.3.1.tar.gz/ztflc-0.3.1/README.md (package: ztflc) ---
# ztfparsnip
[](https://badge.fury.io/py/ztfparsnip)
[](https://github.com/simeonreusch/ztfparsnip/actions/workflows/ci.yaml)
[](https://coveralls.io/github/simeonreusch/ztfparsnip?branch=main)
Retrain [Parsnip](https://github.com/LSSTDESC/parsnip) for ZTF. This is achieved by using [fpbot](https://github.com/simeonreusch/fpbot) forced photometry lightcurves of the [Bright Transient Survey](https://sites.astro.caltech.edu/ztf/bts/bts.php). These are augmented (redshifted, noisified and - when possible - K-corrected).
The package is maintained by [A. Townsend](https://github.com/aotownsend) (HU Berlin) and [S. Reusch](https://github.com/simeonreusch) (DESY).
The following augmentation steps are taken (a toy sketch of the subsampling and date-scatter steps follows the list):
- draw uniformly from a redshift distribution with maximum redshift increase `delta_z`
- only accept lightcurves with at least one datapoint making the signal-to-noise threshold `SN_threshold`
- only accept lightcurves with at least `n_det_threshold` datapoints
- for those lightcurves that have an existing SNCosmo template, apply a K-correction at that magnitude (if `k_corr=True`)
- randomly drop datapoints until `subsampling_rate` is reached
- add some scatter to the observed dates (`jd_scatter_sigma` in days)
- if `phase_lim=True`, only keep datapoints during a typical duration (depends on the type of source)
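
A toy `numpy` sketch of the subsampling and date-scatter steps (the names and values are illustrative, not the package internals):
```python
import numpy as np

rng = np.random.default_rng()
subsampling_rate = 0.9  # illustrative value
jd_scatter_sigma = 0.02  # days, illustrative value

jd = np.array([2459000.1, 2459001.2, 2459003.4, 2459005.0])  # toy observation dates
keep = rng.random(jd.size) < subsampling_rate  # randomly drop datapoints
jd = jd[keep] + rng.normal(0.0, jd_scatter_sigma, keep.sum())  # scatter the dates
```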
## Usage
### Create augmented training sample
```python
from pathlib import Path
from ztfparsnip.create import CreateLightcurves
weights = {"sn_ia": 9400, "tde": 9400, "sn_other": 9400, "agn": 9400, "star": 9400}
if __name__ == "__main__":
sample = CreateLightcurves(
output_format="parsnip",
classkey="simpleclasses",
weights=weights,
train_dir=Path("train"),
plot_dir=Path("plot"),
seed=None,
phase_lim=True,
k_corr=True,
)
sample.select()
sample.create(plot_debug=False)
```
### Train Parsnip with the augmented sample
```python
from ztfparsnip.train import Train
if __name__ == "__main__":
train = Train(classkey="simpleclasses", seed=None)
train.run()
```
### Evaluate
Coming soon.

--- end of file: /ztfparsnip-0.2.2.tar.gz/ztfparsnip-0.2.2/README.md (package: ztfparsnip) ---
# ztfquery
[](https://pypi.python.org/pypi/ztfquery)
[](https://doi.org/10.5281/zenodo.1345222)
[](https://github.com/mickaelrigault/ztfquery/actions/workflows/ci.yaml)
This package is made to ease access to Zwicky Transient Facility data and data-products. It is maintained by M. Rigault (CNRS/IN2P3) and S. Reusch (DESY).
[cite ztfquery](https://ui.adsabs.harvard.edu/abs/2018zndo...1345222R/abstract)
# ztfquery: a python tool to access ztf (and SEDM) data
`ztfquery` contains a list of tools:
- **ZTF products:** a wrapper of the [IRSA web API](https://irsa.ipac.caltech.edu/docs/program_interface/ztf_api.html) that enables access to ztf data _(requires access for full data, but not public data)_:
- Images and pipeline products, e.g. catalog ; See the [`ztfquery.query.py` documentation](doc/query.md)
- LightCurves (not from image subtraction): See the [`ztfquery.lightcurve.py` documentation](doc/lightcurve.md)
- ZTF observing logs: See the [`ztfquery.skyvision.py` documentation](doc/skyvision.md)
- **Marshal/Fritz:**
Download the source information and data, such as lightcurves, spectra, coordinates and redshift:
- from the [ZTF-I Marshal](http://skipper.caltech.edu:8080/cgi-bin/growth/marshal.cgi): See the [`ztfquery.marshal.py` documentation](doc/marshal.md)
- from the [ZTF-II Fritz](https://fritz.science/): See the [`ztfquery.fritz.py` documentation](doc/fritz.md)
- **SEDM Data:** tools to download SEDM data, including IFU cubes and target spectra, from [pharos](http://pharos.caltech.edu)
See the [`ztfquery.sedm.py` documentation](doc/sedm.md)
- **ZTF alert:** Currently only a simple alert reader. See the [`ztfquery.alert.py` documentation](doc/alert.md)
***
# Credits
## Citation
Mickael Rigault. (2018, August 14). ztfquery, a python tool to access ZTF data (Version doi). Zenodo. http://doi.org/10.5281/zenodo.1345222
## Acknowledgments
If you have used `ztfquery` for a research you are publishing, please **include the following in your acknowledgments**:
_"The ztfquery code was funded by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement n°759194 - USNAC, PI: Rigault)."_
## Corresponding Authors:
- M. Rigault: [email protected], CNRS/IN2P3
- S. Reusch: [email protected], DESY
***
# Installation
ztfquery requires `python >= 3.8`
## Install the code
using pip: `pip install ztfquery` (favored)
or for the latest version:
go wherever you want to save the folder and then
```bash
git clone https://github.com/MickaelRigault/ztfquery.git
cd ztfquery
poetry install
```
## Set your environment
You should also create the global variable `$ZTFDATA` (usually in your `~/.bash_profile` or `~/.cshrc`). Data you will download from IRSA will be saved in the directory indicated by `$ZTFDATA` following the IRSA data structure.
## Login and Password storage
Your credentials will be requested the first time you need to access a service (IRSA, Marshal, etc.). They will then be stored, encrypted, under `~/.ztfquery`.
Use `ztfquery.io.set_account(servicename)` to reset it.
You can also directly provide account settings when running `load_metadata` and `download_data` using the `auth=[your_username, your_password]` parameter. Similarly, directly provide the username and password to the ztf ops page when loading `NightSummary` using the `ztfops_auth` parameter.
***
# Quick Examples
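
A minimal sketch of an IRSA image query (the coordinates are placeholders; see the documentation linked above for details):

```python
from ztfquery import query

zquery = query.ZTFQuery()
# metadata of the science exposures within 0.01 deg of the given position
zquery.load_metadata(radec=[276.107, 44.130], size=0.01)
zquery.download_data("sciimg.fits", show_progress=True)
```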
--- end of file: /ztfquery-1.26.1.tar.gz/ztfquery-1.26.1/README.md (package: ztfquery) ---
====================
ztfy.alchemy package
====================
You may find documentation in:
- Global README.txt: ztfy/alchemy/docs/README.txt
- General: ztfy/alchemy/docs
- Technical: ztfy/alchemy/doctests
More information can be found on the ZTFY.org_ web site; a dedicated Trac environment is available
on ZTFY.alchemy_.
This package is created and maintained by Thierry Florac_.
.. _Thierry Florac: mailto:[email protected]
.. _ZTFY.org: http://www.ztfy.org
.. _ZTFY.alchemy: http://trac.ztfy.org/ztfy.alchemy
--- end of file: /ztfy.alchemy-0.3.6.tar.gz/ztfy.alchemy-0.3.6/README.txt (package: ztfy.alchemy) ---
import transaction as zope_transaction
from transaction.interfaces import ISavepointDataManager, IDataManagerSavepoint
from transaction._transaction import Status as ZopeStatus
from sqlalchemy.exc import DBAPIError
from sqlalchemy.orm.exc import ConcurrentModificationError
from sqlalchemy.orm.session import SessionExtension
from sqlalchemy.engine.base import Engine
from zope.interface import implements
_retryable_errors = []
try:
import psycopg2.extensions
except ImportError:
pass
else:
_retryable_errors.append((psycopg2.extensions.TransactionRollbackError, None))
# ORA-08177: can't serialize access for this transaction
try:
import cx_Oracle
except ImportError:
pass
else:
_retryable_errors.append((cx_Oracle.DatabaseError, lambda e: e.args[0].code == 8177))
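
# Other drivers can be registered the same way. An illustrative sketch for
# MySQL via pymysql, where a deadlock surfaces as OperationalError with
# error code 1213 (ER_LOCK_DEADLOCK):
#
#     try:
#         import pymysql
#     except ImportError:
#         pass
#     else:
#         _retryable_errors.append(
#             (pymysql.err.OperationalError, lambda e: e.args[0] == 1213)
#         )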
# The status of the session is stored on the connection info
STATUS_ACTIVE = 'active' # session joined to transaction, writes allowed.
STATUS_CHANGED = 'changed' # data has been written
STATUS_READONLY = 'readonly' # session joined to transaction, no writes allowed.
STATUS_INVALIDATED = STATUS_CHANGED # BBB
NO_SAVEPOINT_SUPPORT = set(['sqlite'])
_SESSION_STATE = {} # a mapping of id(session) -> status
# This is thread safe because you are using scoped sessions
#
# The two variants of the DataManager.
#
class SessionDataManager(object):
"""Integrate a top level sqlalchemy session transaction into a zope transaction
One phase variant.
"""
implements(ISavepointDataManager)
def __init__(self, session, status, transaction_manager):
self.transaction_manager = transaction_manager
self.tx = session.transaction._iterate_parents()[-1]
self.session = session
_SESSION_STATE[id(session)] = status
self.state = 'init'
def _finish(self, final_state):
assert self.tx is not None
session = self.session
del _SESSION_STATE[id(self.session)]
self.tx = self.session = None
self.state = final_state
session.close()
def abort(self, trans):
if self.tx is not None: # this could happen after a tpc_abort
self._finish('aborted')
def tpc_begin(self, trans):
self.session.flush()
def commit(self, trans):
status = _SESSION_STATE[id(self.session)]
if status is not STATUS_INVALIDATED:
self._finish('no work')
def tpc_vote(self, trans):
# for a one phase data manager commit last in tpc_vote
if self.tx is not None: # there may have been no work to do
self.tx.commit()
self._finish('committed')
def tpc_finish(self, trans):
pass
def tpc_abort(self, trans):
assert self.state != 'committed'
def sortKey(self):
# Try to sort last, so that we vote last - we may commit in tpc_vote(),
# which allows Zope to roll back its transaction if the RDBMS
# threw a conflict error.
return "~sqlalchemy:%d" % id(self.tx)
@property
def savepoint(self):
"""Savepoints are only supported when all connections support subtransactions
"""
if set(engine.url.drivername
for engine in self.session.transaction._connections.keys()
if isinstance(engine, Engine)
).intersection(NO_SAVEPOINT_SUPPORT):
raise AttributeError('savepoint')
return self._savepoint
def _savepoint(self):
return SessionSavepoint(self.session)
def should_retry(self, error):
if isinstance(error, ConcurrentModificationError):
return True
if isinstance(error, DBAPIError):
orig = error.orig
for error_type, test in _retryable_errors:
if isinstance(orig, error_type):
if test is None:
return True
if test(orig):
return True
class TwoPhaseSessionDataManager(SessionDataManager):
"""Two phase variant.
"""
def tpc_vote(self, trans):
if self.tx is not None: # there may have been no work to do
self.tx.prepare()
self.state = 'voted'
def tpc_finish(self, trans):
if self.tx is not None:
self.tx.commit()
self._finish('committed')
def tpc_abort(self, trans):
if self.tx is not None: # we may not have voted, and been aborted already
self.tx.rollback()
self._finish('aborted commit')
def sortKey(self):
# Sort normally
return "sqlalchemy.twophase:%d" % id(self.tx)
class SessionSavepoint:
implements(IDataManagerSavepoint)
def __init__(self, session):
self.session = session
self.transaction = session.begin_nested()
def rollback(self):
# no need to check validity, sqlalchemy should raise an exception. I think.
self.transaction.rollback()
def join_transaction(session, initial_state=STATUS_ACTIVE, transaction_manager=zope_transaction.manager):
"""Join a session to a transaction using the appropriate datamanager.
It is safe to call this multiple times, if the session is already joined
then it just returns.
`initial_state` is either STATUS_ACTIVE, STATUS_INVALIDATED or STATUS_READONLY
If using the default initial status of STATUS_ACTIVE, you must ensure that
mark_changed(session) is called when data is written to the database.
The ZopeTransactionExtension SessionExtension can be used to ensure that this is
called automatically after session write operations.
"""
if _SESSION_STATE.get(id(session), None) is None:
if session.twophase:
DataManager = TwoPhaseSessionDataManager
else:
DataManager = SessionDataManager
transaction_manager.get().join(DataManager(session, initial_state, transaction_manager))
def mark_changed(session, transaction_manager=zope_transaction.manager):
"""Mark a session as needing to be committed
"""
session_id = id(session)
assert _SESSION_STATE.get(session_id, None) is not STATUS_READONLY, "Session already registered as read only"
join_transaction(session, STATUS_CHANGED, transaction_manager)
_SESSION_STATE[session_id] = STATUS_CHANGED
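def _usage_example():  # pragma: no cover
    """Illustrative sketch, not part of the original module: join a plain
    SQLAlchemy session to the Zope transaction machinery. The engine URL and
    statement are placeholders.
    """
    import transaction
    from sqlalchemy import create_engine
    from sqlalchemy.orm import sessionmaker
    session = sessionmaker(bind=create_engine('sqlite://'))()
    join_transaction(session)  # registers a SessionDataManager with the transaction
    session.execute("SELECT 1")
    mark_changed(session)  # flag the session as written so the vote phase commits
    transaction.commit()  # the data manager drives the SQLAlchemy commit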
class ZopeTransactionExtension(SessionExtension):
"""Record that a flush has occurred on a session's connection. This allows
the DataManager to rollback rather than commit on read only transactions.
"""
def __init__(self, initial_state=STATUS_ACTIVE, transaction_manager=zope_transaction.manager):
if initial_state == 'invalidated':
initial_state = STATUS_CHANGED #BBB
SessionExtension.__init__(self)
self.initial_state = initial_state
self.transaction_manager = transaction_manager
def after_begin(self, session, transaction, connection):
join_transaction(session, self.initial_state, self.transaction_manager)
def after_attach(self, session, instance):
join_transaction(session, self.initial_state, self.transaction_manager)
def after_flush(self, session, flush_context):
mark_changed(session, self.transaction_manager)
def after_bulk_update(self, session, query, query_context, result):
mark_changed(session, self.transaction_manager)
def after_bulk_delete(self, session, query, query_context, result):
mark_changed(session, self.transaction_manager)
def before_commit(self, session):
assert self.transaction_manager.get().status == ZopeStatus.COMMITTING, "Transaction must be committed using the transaction manager"
| ztfy.alchemy | /ztfy.alchemy-0.3.6.tar.gz/ztfy.alchemy-0.3.6/src/ztfy/alchemy/datamanager.py | datamanager.py |
import logging
logger = logging.getLogger('ZTFY (SQLAlchemy)')
from datetime import datetime
from persistent import Persistent
from persistent.dict import PersistentDict
import sqlalchemy
from sqlalchemy.event import listen
from sqlalchemy.orm import scoped_session, sessionmaker, class_mapper
from sqlalchemy.pool import Pool
from threading import Thread, Lock
import time
# import Zope3 interfaces
from zope.security.interfaces import NoInteraction
# import local interfaces
from ztfy.alchemy.interfaces import IAlchemyEngineUtility, REQUEST_SESSION_KEY
# import Zope3 packages
from zope.container.contained import Contained
from zope.interface import implements
from zope.component import getUtility
from zope.schema.fieldproperty import FieldProperty
# import local packages
from ztfy.alchemy.datamanager import ZopeTransactionExtension, join_transaction, \
_SESSION_STATE, STATUS_ACTIVE, STATUS_READONLY
from ztfy.utils.request import getRequest, getRequestData, setRequestData
CONNECTIONS_TIMESTAMP = {}
CONNECTIONS_LOCK = Lock()
def handle_pool_checkout(connection, record, proxy):
"""Pool connection checkout
Called when a connection is retrieved from the pool.
If the connection record is already marked, we remove it from the mapping.
"""
with CONNECTIONS_LOCK:
if record in CONNECTIONS_TIMESTAMP:
logger.debug("Removing timestamp for checked-out connection {0!r} ({1!r})".format(connection, record))
del CONNECTIONS_TIMESTAMP[record]
listen(Pool, 'checkout', handle_pool_checkout)
def handle_pool_checkin(connection, record):
"""Pool connection checkin
Called when a connection returns to the pool.
We apply a timestamp to the connection record so that the connection can be closed
automatically after 5 minutes of inactivity.
"""
with CONNECTIONS_LOCK:
logger.debug("Setting inactivity timestamp for checked-in connection {0!r} ({1!r})".format(connection, record))
CONNECTIONS_TIMESTAMP[record] = datetime.utcnow()
listen(Pool, 'checkin', handle_pool_checkin)
def close_all_connections():
"""SQLALchemy connections cleaner"""
for connection, value in list(CONNECTIONS_TIMESTAMP.items()):
logger.info("Invalidating connection {0!r} from pool".format(connection))
with CONNECTIONS_LOCK:
connection.invalidate()
del CONNECTIONS_TIMESTAMP[connection]
class ConnectionCleanerThread(Thread):
"""Background thread used to clean unused database connections
Each connection is referenced in CONNECTIONS_TIMESTAMP on checkin and is invalidated
if it has not been used for 5 minutes
"""
timeout = 300
def run(self):
while True:
now = datetime.utcnow()
for connection, value in list(CONNECTIONS_TIMESTAMP.items()):
delta = now - value
if delta.total_seconds() > self.timeout:
logger.info("Invalidating unused connection {0!r} from pool".format(connection))
with CONNECTIONS_LOCK:
connection.invalidate()
del CONNECTIONS_TIMESTAMP[connection]
time.sleep(60)
logger.info("Starting pool connections management thread")
cleaner_thread = ConnectionCleanerThread()
cleaner_thread.daemon = True
cleaner_thread.start()
class AlchemyEngineUtility(Persistent):
"""A persistent utility providing a database engine"""
implements(IAlchemyEngineUtility)
name = FieldProperty(IAlchemyEngineUtility['name'])
dsn = FieldProperty(IAlchemyEngineUtility['dsn'])
echo = FieldProperty(IAlchemyEngineUtility['echo'])
pool_size = FieldProperty(IAlchemyEngineUtility['pool_size'])
pool_recycle = FieldProperty(IAlchemyEngineUtility['pool_recycle'])
register_geotypes = FieldProperty(IAlchemyEngineUtility['register_geotypes'])
register_opengis = FieldProperty(IAlchemyEngineUtility['register_opengis'])
encoding = FieldProperty(IAlchemyEngineUtility['encoding'])
convert_unicode = FieldProperty(IAlchemyEngineUtility['convert_unicode'])
def __init__(self, name=u'', dsn=u'', echo=False, pool_size=25, pool_recycle=-1,
register_geotypes=False, register_opengis=False, encoding='utf-8', convert_unicode=False, **kw):
self.name = name
self.dsn = dsn
self.encoding = encoding
self.convert_unicode = convert_unicode
self.echo = echo
self.pool_size = pool_size
self.pool_recycle = pool_recycle
self.register_geotypes = register_geotypes
self.register_opengis = register_opengis
self.kw = PersistentDict()
self.kw.update(kw)
def __setattr__(self, name, value):
super(AlchemyEngineUtility, self).__setattr__(name, value)
if (name != '_v_engine') and hasattr(self, '_v_engine'):
delattr(self, '_v_engine')
def getEngine(self):
engine = getattr(self, '_v_engine', None)
if engine is not None:
return engine
kw = {}
kw.update(self.kw)
self._v_engine = sqlalchemy.create_engine(str(self.dsn),
echo=self.echo,
pool_size=self.pool_size,
pool_recycle=self.pool_recycle,
encoding=self.encoding,
convert_unicode=self.convert_unicode,
strategy='threadlocal', **kw)
if self.register_geotypes:
try:
import psycopg2
import psycopg2.extensions as psycoext
from GeoTypes import initialisePsycopgTypes
url = self._v_engine.url
initialisePsycopgTypes(psycopg_module=psycopg2,
psycopg_extensions_module=psycoext,
connect_string='host=%(host)s port=%(port)s dbname=%(dbname)s user=%(user)s password=%(password)s' % \
{'host': url.host,
'port': url.port,
'dbname': url.database,
'user': url.username,
'password': url.password},
register_opengis_types=self.register_opengis)
except:
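# GeoTypes/psycopg2 import or registration failed; geometry types are simply not registered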
pass
return self._v_engine
def _resetEngine(self):
engine = getattr(self, '_v_engine', None)
if engine is not None:
engine.dispose()
self._v_engine = None
class PersistentAlchemyEngineUtility(AlchemyEngineUtility, Contained):
"""A persistent implementation of AlchemyEngineUtility stored into ZODB"""
def getEngine(engine):
if isinstance(engine, (str, unicode)):
engine = getUtility(IAlchemyEngineUtility, engine).getEngine()
return engine
def getSession(engine, join=True, status=STATUS_ACTIVE, request=None, alias=None,
twophase=True, use_zope_extension=True):
"""Get a new SQLAlchemy session
Session is stored in request and in a sessions storage"""
session = None
if request is None:
try:
request = getRequest()
except NoInteraction:
pass
if not alias:
alias = engine
if request is not None:
session_data = getRequestData(REQUEST_SESSION_KEY, request, {})
session = session_data.get(alias)
if session is None:
_engine = getEngine(engine)
if use_zope_extension:
factory = scoped_session(sessionmaker(bind=_engine, twophase=twophase,
extension=ZopeTransactionExtension()))
else:
factory = sessionmaker(bind=_engine, twophase=twophase)
session = factory()
if join:
join_transaction(session, initial_state=status)
if status != STATUS_READONLY:
_SESSION_STATE[id(session)] = status
if (request is not None) and (session is not None):
session_data[alias] = session
setRequestData(REQUEST_SESSION_KEY, session_data, request)
return session
def getUserSession(engine, join=True, status=STATUS_ACTIVE, request=None, alias=None,
twophase=True, use_zope_extension=True):
"""Shortcut function to get user session"""
if isinstance(engine, basestring):
session = getSession(engine, join=join, status=status, request=request, alias=alias,
twophase=twophase, use_zope_extension=use_zope_extension)
else:
session = engine
return session
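def _session_usage_example():  # pragma: no cover
    """Illustrative sketch, not part of the original module: open a read-only
    session on a named engine utility. The utility name and statement are
    placeholders.
    """
    session = getUserSession('demo_engine', status=STATUS_READONLY,
                             use_zope_extension=False)
    return session.execute('SELECT 1').fetchall()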
class MetadataManager(object):
"""A manager for metadata management, to be able to use the same table name
in different databases
"""
def __init__(self):
self.metadata = {}
def getTable(self, engine, table, fallback):
md = self.metadata.get(engine)
if md and table in md.tables:
return md.tables[table]
if fallback and engine:
md = self.metadata.get('')
if md and table in md.tables:
return md.tables[table]
return None
def __call__(self, engine=''):
md = self.metadata.get(engine)
if md is None:
md = self.metadata[engine] = sqlalchemy.MetaData()
return md
metadata = MetadataManager()
_tableToEngine = {}
_classToEngine = {}
def _assignTable(table, engine, session=None):
_table = metadata.getTable(engine, table, True)
util = getUtility(IAlchemyEngineUtility, name=engine)
if session is None:
session = getSession(engine)
session.bind_table(_table, util.getEngine())
def assignTable(table, engine, immediate=True):
_tableToEngine[table] = engine
if immediate:
_assignTable(table, engine)
def _assignClass(class_, engine, session=None):
_mapper = class_mapper(class_)
util = getUtility(IAlchemyEngineUtility, name=engine)
if session is None:
session = getSession(engine)
session.bind_mapper(_mapper, util.getEngine())
def assignClass(class_, engine, immediate=True):
_classToEngine[class_] = engine
if immediate:
_assignClass(class_, engine)
| ztfy.alchemy | /ztfy.alchemy-0.3.6.tar.gz/ztfy.alchemy-0.3.6/src/ztfy/alchemy/engine.py | engine.py |
# import Zope3 interfaces
# import local interfaces
# import Zope3 packages
from zope.interface import Interface
from zope.schema import TextLine, Bool, Int
from zope.configuration.fields import GlobalObject
# import local packages
from ztfy.alchemy import _
class IEngineDirective(Interface):
"""Define a new SQLAlchemy engine"""
dsn = TextLine(title=_('Database URL'),
description=_('RFC-1738 compliant URL for the database connection'),
required=True)
name = TextLine(title=_('Engine name'),
description=_('Empty if this engine is the default engine.'),
required=False,
default=u'')
echo = Bool(title=_('Echo SQL statements'),
required=False,
default=False)
pool_size = Int(title=_("Pool size"),
description=_("SQLAlchemy connections pool size"),
required=False,
default=25)
pool_recycle = Int(title=_("Pool recycle time"),
description=_("SQLAlchemy connection recycle time (-1 for none)"),
required=False,
default= -1)
register_geotypes = Bool(title=_("Register GeoTypes"),
description=_("Should engine register PostGis GeoTypes"),
default=False)
register_opengis = Bool(title=_("Register OpenGIS"),
description=_("Should engine register OpenGIS types"),
default=False)
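# Hypothetical ZCML usage sketch: the directive and attribute names below are
# assumptions for illustration (the real bindings live in this package's
# meta.zcml) and the DSN is a placeholder.
#
#   <engine name="demo_engine"
#           dsn="postgresql://user:password@localhost/demo"
#           pool_size="10" />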
class ITableAssignDirective(Interface):
"""Assign a table to a given engine"""
table = TextLine(title=_("Table name"),
description=_("Name of the table to assign"),
required=True)
engine = TextLine(title=_("SQLAlchemy engine"),
description=_("Name of the engine to connect the table to"),
required=True)
class IClassAssignDirective(Interface):
"""Assign a table to a given engine"""
class_ = GlobalObject(title=_("Class name"),
description=_("Name of the class to assign"),
required=True)
engine = TextLine(title=_("SQLAlchemy engine"),
description=_("Name of the engine to connect the table to"),
required=True)
| ztfy.alchemy | /ztfy.alchemy-0.3.6.tar.gz/ztfy.alchemy-0.3.6/src/ztfy/alchemy/metadirectives.py | metadirectives.py |
====================
ztfy.alchemy package
====================
.. contents::
What is ztfy.alchemy?
======================
ZTFY.alchemy is a Zope3 package which can be used to connect Zope3 applications with SQLAlchemy.
Main features include:
- integration of SQLAlchemy transactions with the Zope3 transaction manager
- registration of PostgreSQL geometric data types (PostGIS) through the GeoTypes package.
Most code fragments are based on zope.sqlalchemy, z3c.sqlalchemy and z3c.zalchemy elements and
source code, except for the elements handling PostGIS data types.
How to use ztfy.alchemy?
=========================
#TODO: To be written...
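Pending proper documentation, here is a minimal, illustrative sketch; the
engine name, DSN and statement are placeholders::

    from zope.component import provideUtility
    from ztfy.alchemy.interfaces import IAlchemyEngineUtility
    from ztfy.alchemy.engine import AlchemyEngineUtility, getSession

    # register an engine utility (normally done through ZCML)
    util = AlchemyEngineUtility(name=u'demo', dsn=u'sqlite://')
    provideUtility(util, IAlchemyEngineUtility, name=u'demo')

    # get a session bound to the current Zope transaction
    session = getSession(u'demo')
    session.execute('SELECT 1')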
| ztfy.alchemy | /ztfy.alchemy-0.3.6.tar.gz/ztfy.alchemy-0.3.6/docs/README.txt | README.txt |
# import standard packages
from httplib import UNAUTHORIZED
# import Zope3 interfaces
from z3c.form.interfaces import HIDDEN_MODE, IErrorViewSnippet
from zope.authentication.interfaces import IAuthentication
from zope.component.interfaces import ISite
from zope.publisher.interfaces.browser import IBrowserSkinType
from zope.security.interfaces import IUnauthorized
from zope.session.interfaces import ISession
from ztfy.appskin.interfaces import IApplicationBase, IApplicationResources, IAnonymousPage, ILoginView, ILoginViewHelp
from ztfy.skin.interfaces import IDefaultView
# import local interfaces
# import Zope3 packages
from z3c.form import field, button
from zope.component import queryMultiAdapter, getMultiAdapter, queryUtility, getUtilitiesFor
from zope.interface import implements, Interface, Invalid
from zope.publisher.skinnable import applySkin
from zope.schema import TextLine, Password
from zope.site import hooks
from zope.traversing.browser.absoluteurl import absoluteURL
from ztfy.skin.form import BaseAddForm
from ztfy.utils.traversing import getParent
# import local packages
from ztfy.appskin import _
class ILoginFormFields(Interface):
"""Login form fields interface"""
username = TextLine(title=_("login-field", "Login"),
description=_("Principal ID"),
required=True)
password = Password(title=_("password-field", "Password"),
required=True)
came_from = TextLine(title=_("camefrom-field", "Login origin"),
required=False)
def isUnauthorized(form):
return IUnauthorized.providedBy(form.context)
class LoginView(BaseAddForm):
"""Main login view"""
implements(ILoginView, IAnonymousPage)
legend = _("Please enter valid credentials to login")
css_class = 'login_view'
icon_class = 'icon-lock'
fields = field.Fields(ILoginFormFields)
def __call__(self):
if isUnauthorized(self):
context, _action, _permission = self.context.args
self.request.response.setStatus(UNAUTHORIZED)
else:
context = self.context
self.app = getParent(context, IApplicationBase)
return super(LoginView, self).__call__()
@property
def action(self):
return '%s/@@login.html' % absoluteURL(self.app, self.request)
@property
def help(self):
helper = queryMultiAdapter((self.context, self.request, self), ILoginViewHelp)
if helper is not None:
return helper.help
def update(self):
super(LoginView, self).update()
adapter = queryMultiAdapter((self.context, self.request, self), IApplicationResources)
if adapter is None:
adapter = queryMultiAdapter((self.context, self.request), IApplicationResources)
if adapter is not None:
for resource in adapter.resources:
resource.need()
def updateWidgets(self):
super(LoginView, self).updateWidgets()
self.widgets['came_from'].mode = HIDDEN_MODE
origin = self.request.get('came_from') or self.request.get(self.prefix + self.widgets.prefix + 'came_from')
if not origin:
origin = self.request.getURL()
stack = self.request.getTraversalStack()
if stack:
origin += '/' + '/'.join(stack[::-1])
self.widgets['came_from'].value = origin
def updateActions(self):
super(LoginView, self).updateActions()
self.actions['login'].addClass('btn')
def extractData(self, setErrors=True):
data, errors = super(LoginView, self).extractData(setErrors=setErrors)
if errors:
self.logout()
return data, errors
self.request.form['login'] = data['username']
self.request.form['password'] = data['password']
self.principal = None
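# Walk up the site hierarchy, trying every registered IAuthentication utility at each level until one recognizes the submitted credentials.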
context = getParent(self.context, ISite)
while context is not None:
old_site = hooks.getSite()
try:
hooks.setSite(context)
for _name, auth in getUtilitiesFor(IAuthentication):
try:
self.principal = auth.authenticate(self.request)
if self.principal is not None:
return data, errors
except:
continue
finally:
hooks.setSite(old_site)
context = getParent(context, ISite, allow_context=False)
if self.principal is None:
error = Invalid(_("Invalid credentials"))
view = getMultiAdapter((error, self.request, None, None, self, self.context),
IErrorViewSnippet)
view.update()
errors += (view,)
if setErrors:
self.widgets.errors = errors
self.logout()
return data, errors
@button.buttonAndHandler(_("login-button", "Login"), name="login")
def handleLogin(self, action):
data, errors = self.extractData()
if errors:
self.status = self.formErrorsMessage
return
if self.principal is not None:
if isUnauthorized(self):
context, _action, _permission = self.context.args
self.request.response.redirect(absoluteURL(context, self.request))
else:
came_from = data.get('came_from')
if came_from:
self.request.response.redirect(came_from)
else:
target = queryMultiAdapter((self.context, self.request, Interface), IDefaultView)
self.request.response.redirect('%s/%s' %
(absoluteURL(self.context, self.request),
target.viewname if target is not None else '@@index.html'))
return ''
else:
self.request.response.redirect('%s/@@login.html?came_from=%s' % (absoluteURL(self.context, self.request),
data.get('came_from')))
def logout(self):
sessionData = ISession(self.request)['zope.pluggableauth.browserplugins']
sessionData['credentials'] = None
class LogoutView(BaseAddForm):
"""Main logout view"""
def __call__(self):
skin = queryUtility(IBrowserSkinType, self.context.getSkin())
applySkin(self.request, skin)
context = getParent(self.context, ISite)
while context is not None:
old_site = hooks.getSite()
try:
hooks.setSite(context)
for _name, auth in getUtilitiesFor(IAuthentication):
auth.logout(self.request)
finally:
hooks.setSite(old_site)
context = getParent(context, ISite, allow_context=False)
target = queryMultiAdapter((self.context, self.request, Interface), IDefaultView)
self.request.response.redirect('%s/%s' % (absoluteURL(self.context, self.request),
target.viewname if target is not None
else '@@SelectedManagementView.html'))
return ''
| ztfy.appskin | /ztfy.appskin-0.3.7.tar.gz/ztfy.appskin-0.3.7/src/ztfy/appskin/login.py | login.py |
__docformat__ = "restructuredtext"
# import standard packages
# import Zope3 interfaces
# import local interfaces
from ztfy.i18n.interfaces import II18nAttributesAware
# import Zope3 packages
from zope.interface import Interface, Attribute
# import local packages
from ztfy.file.schema import ImageField
from ztfy.i18n.schema import I18nTextLine
from ztfy.appskin import _
#
# Application base marker interface
#
class IApplicationBase(Interface):
"""Base application marker interface"""
class IApplicationResources(Interface):
"""Application resources interface"""
resources = Attribute(_("Tuple of Fanstatic resources needed by application's layout"))
#
# Marker interface for anonymous pages
#
class IAnonymousPage(Interface):
"""Marker interface for anonymous pages"""
class ILoginView(Interface):
"""Login page marker interface"""
class ILoginViewHelp(Interface):
"""Login view help interface"""
help = Attribute(_("Help text"))
#
# Base application presentation info
#
class IApplicationPresentationInfo(II18nAttributesAware):
"""Base application presentation info"""
site_icon = ImageField(title=_("Application icon"),
description=_("Site 'favicon' image"),
required=False)
logo = ImageField(title=_("Logo image"),
required=False)
footer_text = I18nTextLine(title=_("Footer text"),
required=False)
#
# Table interfaces
#
class IInnerListViewContainer(Interface):
"""Inner list marker interface"""
legend = Attribute(_("Container legend header"))
empty_message = Attribute(_("Empty container message text"))
class IInnerDialogListViewContainer(IInnerListViewContainer):
"""Dialog inner list marker interface"""
class ITableActionsViewletManager(Interface):
"""Table actions viewlet manager"""
class ITableActionViewlet(Interface):
"""Table action viewlet button"""
target = Attribute(_("Target URL relative to viewlet context"))
label = Attribute(_("Button label"))
| ztfy.appskin | /ztfy.appskin-0.3.7.tar.gz/ztfy.appskin-0.3.7/src/ztfy/appskin/interfaces.py | interfaces.py |
$(document).ready(function(){
// === Sidebar navigation === //
$('.submenu > a').click(function(e) {
e.preventDefault();
var submenu = $(this).siblings('ul');
var li = $(this).parents('li');
var submenus = $('#sidebar li.submenu ul');
var submenus_parents = $('#sidebar li.submenu');
if(li.hasClass('open')) {
if(($(window).width() > 768) || ($(window).width() < 479)) {
submenu.slideUp();
} else {
submenu.fadeOut(250);
}
li.removeClass('open');
} else {
if(($(window).width() > 768) || ($(window).width() < 479)) {
submenus.slideUp();
submenu.slideDown();
} else {
submenus.fadeOut(250);
submenu.fadeIn(250);
}
submenus_parents.removeClass('open');
li.addClass('open');
}
});
var ul = $('#sidebar > ul');
$('#sidebar > a').click(function(e) {
e.preventDefault();
var sidebar = $('#sidebar');
if(sidebar.hasClass('open')) {
sidebar.removeClass('open');
ul.slideUp(250);
} else {
sidebar.addClass('open');
ul.slideDown(250);
}
});
// === Resize window related === //
$(window).resize(function() {
if($(window).width() > 479) {
ul.css({'display':'block'});
$('#content-header .btn-group').css({width:'auto'});
}
if($(window).width() < 479) {
ul.css({'display':'none'});
fix_position();
}
if($(window).width() > 768) {
$('#user-nav > ul').css({width:'auto',margin:'0'});
$('#content-header .btn-group').css({width:'auto'});
}
});
if($(window).width() < 468) {
ul.css({'display':'none'});
fix_position();
}
if($(window).width() > 479) {
$('#content-header .btn-group').css({width:'auto'});
ul.css({'display':'block'});
}
// === Tooltips === //
$('.tip').tooltip();
$('.tip-left').tooltip({ placement: 'left' });
$('.tip-right').tooltip({ placement: 'right' });
$('.tip-top').tooltip({ placement: 'top' });
$('.tip-bottom').tooltip({ placement: 'bottom' });
// === Fixes the position of buttons group in content header and top user navigation === //
function fix_position() {
var uwidth = $('#user-nav > ul').width();
$('#user-nav > ul').css({width:uwidth,'margin-left':'-' + uwidth / 2 + 'px'});
var cwidth = $('#content-header .btn-group').width();
$('#content-header .btn-group').css({width:cwidth,'margin-left':'-' + uwidth / 2 + 'px'});
}
// === Style switcher === //
$('#style-switcher i').click(function() {
if($(this).hasClass('open'))
{
$(this).parent().animate({marginRight:'-=220'});
$(this).removeClass('open');
} else
{
$(this).parent().animate({marginRight:'+=220'});
$(this).addClass('open');
}
$(this).toggleClass('icon-arrow-left');
$(this).toggleClass('icon-arrow-right');
});
$('#style-switcher a').click(function() {
var style = $(this).attr('href').replace('#','');
$('.skin-color').attr('href','css/unicorn.'+style+'.css');
$(this).siblings('a').css({'border-color':'transparent'});
$(this).css({'border-color':'#aaaaaa'});
});
});
| ztfy.appskin | /ztfy.appskin-0.3.7.tar.gz/ztfy.appskin-0.3.7/src/ztfy/appskin/resources/js/ztfy.appskin.js | ztfy.appskin.js |
$(document).ready(function(){$(".submenu > a").click(function(h){h.preventDefault();var f=$(this).siblings("ul");var c=$(this).parents("li");var d=$("#sidebar li.submenu ul");var g=$("#sidebar li.submenu");if(c.hasClass("open")){if(($(window).width()>768)||($(window).width()<479)){f.slideUp()}else{f.fadeOut(250)}c.removeClass("open")}else{if(($(window).width()>768)||($(window).width()<479)){d.slideUp();f.slideDown()}else{d.fadeOut(250);f.fadeIn(250)}g.removeClass("open");c.addClass("open")}});var a=$("#sidebar > ul");$("#sidebar > a").click(function(d){d.preventDefault();var c=$("#sidebar");if(c.hasClass("open")){c.removeClass("open");a.slideUp(250)}else{c.addClass("open");a.slideDown(250)}});$(window).resize(function(){if($(window).width()>479){a.css({display:"block"});$("#content-header .btn-group").css({width:"auto"})}if($(window).width()<479){a.css({display:"none"});b()}if($(window).width()>768){$("#user-nav > ul").css({width:"auto",margin:"0"});$("#content-header .btn-group").css({width:"auto"})}});if($(window).width()<468){a.css({display:"none"});b()}if($(window).width()>479){$("#content-header .btn-group").css({width:"auto"});a.css({display:"block"})}$(".tip").tooltip();$(".tip-left").tooltip({placement:"left"});$(".tip-right").tooltip({placement:"right"});$(".tip-top").tooltip({placement:"top"});$(".tip-bottom").tooltip({placement:"bottom"});function b(){var c=$("#user-nav > ul").width();$("#user-nav > ul").css({width:c,"margin-left":"-"+c/2+"px"});var d=$("#content-header .btn-group").width();$("#content-header .btn-group").css({width:d,"margin-left":"-"+c/2+"px"})}$("#style-switcher i").click(function(){if($(this).hasClass("open")){$(this).parent().animate({marginRight:"-=220"});$(this).removeClass("open")}else{$(this).parent().animate({marginRight:"+=220"});$(this).addClass("open")}$(this).toggleClass("icon-arrow-left");$(this).toggleClass("icon-arrow-right")});$("#style-switcher a").click(function(){var c=$(this).attr("href").replace("#","");$(".skin-color").attr("href","css/unicorn."+c+".css");$(this).siblings("a").css({"border-color":"transparent"});$(this).css({"border-color":"#aaaaaa"})})}); | ztfy.appskin | /ztfy.appskin-0.3.7.tar.gz/ztfy.appskin-0.3.7/src/ztfy/appskin/resources/js/ztfy.appskin.min.js | ztfy.appskin.min.js |
# import standard packages
from urllib import quote
# import Zope3 interfaces
from zope.authentication.interfaces import IUnauthenticatedPrincipal
# import local interfaces
from ztfy.appskin.viewlets.usernav.interfaces import IUserNavigationViewletManager, IUserNavigationMenu
# import Zope3 packages
from zope.interface import implements
from zope.traversing.browser.absoluteurl import absoluteURL
# import local packages
from ztfy.skin.viewlet import ViewletBase, WeightViewletManagerBase
from ztfy.appskin import _
class UserNavigationViewletManager(WeightViewletManagerBase):
"""User navigation viewlet manager"""
implements(IUserNavigationViewletManager)
class UserNavigationMenu(ViewletBase):
"""User navigation menu"""
implements(IUserNavigationMenu)
cssClass = None
label = None
#
# Standard user actions
#
class UserLoginAction(UserNavigationMenu):
"""User login action"""
title = _("Login user name")
@property
def label(self):
if IUnauthenticatedPrincipal.providedBy(self.request.principal):
return _("Login...")
else:
return self.request.principal.title
@property
def viewURL(self):
if IUnauthenticatedPrincipal.providedBy(self.request.principal):
return '%(base)s/@@login.html?came_from=%(source)s' % {'base': absoluteURL(self.context, self.request),
'source': quote(absoluteURL(self.__parent__, self.request), ':/')}
else:
return u'#'
class UserManagementAction(UserNavigationMenu):
"""User access to management interface"""
title = _("Access management interface")
label = _("Admin.")
viewURL = "@@properties.html"
class UserLogoutAction(UserNavigationMenu):
"""User logout action"""
title = _("Quit application")
label = _("Logout")
viewURL = "@@logout.html"
def render(self):
if IUnauthenticatedPrincipal.providedBy(self.request.principal):
return u''
return super(UserLogoutAction, self).render()
| ztfy.appskin | /ztfy.appskin-0.3.7.tar.gz/ztfy.appskin-0.3.7/src/ztfy/appskin/viewlets/usernav/__init__.py | __init__.py |
__docformat__ = "restructuredtext"
# import standard packages
# import Zope3 interfaces
# import local interfaces
# import Zope3 packages
from zope.interface import Interface, Attribute
from zope.schema import TextLine, Text, List
# import local packages
from ztfy.file.schema import ImageField
from ztfy.base import _
#
# Generic interface
#
class IUniqueID(Interface):
"""Interface for objects with unique ID"""
oid = TextLine(title=_("Unique ID"),
description=_("Globally unique identifier of this content can be used to create internal links"),
readonly=True)
class IPathElements(Interface):
"""Interface used to index object's path"""
paths = List(title=_("Path elements"),
description=_("List of path elements matching adapted object"),
value_type=TextLine())
#
# Content base interface
#
class IBaseContentType(Interface):
"""Base content type interface"""
content_type = Attribute(_("Content type"))
class IBaseContent(IBaseContentType):
"""Base content interface"""
title = TextLine(title=_("Title"),
description=_("Content title"),
required=True)
shortname = TextLine(title=_("Short name"),
description=_("Short name of the content can be displayed by several templates"),
required=True)
description = Text(title=_("Description"),
description=_("Internal description included in HTML 'meta' headers"),
required=False)
keywords = TextLine(title=_("Keywords"),
description=_("A list of keywords matching content, separated by commas"),
required=False)
header = ImageField(title=_("Header image"),
description=_("This banner can be displayed by skins on page headers"),
required=False)
heading = Text(title=_("Heading"),
description=_("Short header description of the content"),
required=False)
illustration = ImageField(title=_("Illustration"),
description=_("This illustration can be displayed by several presentation templates"),
required=False)
illustration_title = TextLine(title=_("Illustration alternate title"),
description=_("This text will be used as an alternate title for the illustration"),
required=False)
class IMainContent(Interface):
"""Marker element for first level site contents""" | ztfy.base | /ztfy.base-0.1.3.tar.gz/ztfy.base-0.1.3/src/ztfy/base/interfaces/__init__.py | __init__.py |
from z3c.template.interfaces import IPageTemplate
from z3c.template.template import getPageTemplate
from zope.component import adapts, getAdapters, getMultiAdapter
from zope.interface import Interface, implements
from ztfy.base.interfaces import IBaseContent
from ztfy.baseskin.interfaces.metas import IContentMetaHeader, IContentMetasHeaders, \
IHTTPEquivMetaHeader, ILinkMetaHeader, IPageMetasHeaders, IPropertyMetaHeader, IScriptMetaHeader
class ContentMeta(object):
"""Base content meta header"""
implements(IContentMetaHeader)
def __init__(self, name, value):
self.name = name
self.value = value
def render(self):
return """<meta name="%(name)s" content="%(content)s" />""" % {'name': self.name,
'content': self.value}
class HTTPEquivMeta(object):
"""HTTP-Equiv meta header, mainly used for content-type"""
implements(IHTTPEquivMetaHeader)
def __init__(self, http_equiv, value):
self.http_equiv = http_equiv
self.value = value
def render(self):
return """<meta http-equiv="%(http_equiv)s" content="%(content)s" />""" % {'http_equiv': self.http_equiv,
'content': self.value}
class PropertyMeta(object):
"""Property meta header, mainly used for Facebook app_id"""
implements(IPropertyMetaHeader)
def __init__(self, property, value):
self.property = property
self.value = value
def render(self):
return """<meta property="%(property)s" content="%(content)s" />""" % {'property': self.property,
'content': self.value}
class LinkMeta(object):
"""Link meta header, mainly used for CSS or RSS links"""
implements(ILinkMetaHeader)
def __init__(self, rel, type, href):
self.rel = rel
self.type = type
self.href = href
def render(self):
return """<link rel="%(rel)s" type="%(type)s" href="%(href)s" />""" % {'rel': self.rel,
'type': self.type,
'href': self.href}
class ScriptMeta(object):
"""Script meta header, based on a template"""
implements(IScriptMetaHeader)
template = getPageTemplate()
def __init__(self, context, request):
self.context = context
self.request = request
def render(self):
if self.template is None:
template = getMultiAdapter((self, self.request), IPageTemplate)
return template(self)
return self.template()
class ContentMetasAdapter(object):
"""Generic content metas adapter"""
adapts(Interface, Interface)
implements(IPageMetasHeaders)
def __init__(self, context, request):
self.context = context
self.request = request
@property
def metas(self):
"""Extract headers from all available metas adapters"""
result = []
for _name, adapter in getAdapters((self.context, self.request), IContentMetasHeaders):
result.extend(adapter.metas)
return result
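def _metas_usage_example(context, request):  # pragma: no cover
    """Illustrative sketch, not part of the original module: render every
    registered meta header for a page's <head> section."""
    return u'\n'.join(meta.render()
                      for meta in ContentMetasAdapter(context, request).metas)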
class BaseContentMetasHeadersAdapter(object):
"""Base content metas adapter"""
adapts(IBaseContent, Interface)
implements(IContentMetasHeaders)
def __init__(self, context, request):
self.context = context
self.request = request
@property
def metas(self):
result = []
result.append(HTTPEquivMeta('Content-Type', 'text/html; charset=UTF-8'))
description = self.context.description
if description:
result.append(ContentMeta('description', description.replace('\n', ' ')))
keywords = self.context.keywords
if keywords:
result.append(ContentMeta('keywords', keywords))
return result
| ztfy.baseskin | /ztfy.baseskin-0.2.0.tar.gz/ztfy.baseskin-0.2.0/src/ztfy/baseskin/metas.py | metas.py |
__docformat__ = "restructuredtext"
# import standard packages
# import Zope3 interfaces
from zope.contentprovider.interfaces import IContentProvider
# import local interfaces
from ztfy.baseskin.interfaces.form import IFormViewletsManager, IFormPrefixViewletsManager, \
IWidgetsPrefixViewletsManager, IWidgetsSuffixViewletsManager, IFormSuffixViewletsManager
# import Zope3 packages
from z3c.template.template import getViewTemplate
from zope.component import adapts
from zope.interface import implements, Interface
from zope.viewlet.viewlet import ViewletBase as Viewlet
from zope.viewlet.manager import ViewletManagerBase as ViewletManager, WeightOrderedViewletManager
# import local packages
from ztfy.baseskin.layer import IBaseSkinLayer
class ViewletManagerBase(ViewletManager):
"""Template based viewlet manager class"""
template = getViewTemplate()
class WeightViewletManagerBase(WeightOrderedViewletManager):
"""Template based weighted viewlet manager class"""
template = getViewTemplate()
class ViewletBase(Viewlet):
"""Template based viewlet"""
render = getViewTemplate()
class ContentProviderBase(object):
"""Generic template based content provider"""
adapts(Interface, IBaseSkinLayer, Interface)
implements(IContentProvider)
def __init__(self, context, request, view):
self.context = context
self.request = request
self.__parent__ = view
def update(self):
pass
render = getViewTemplate()
class FormViewletManager(WeightOrderedViewletManager):
"""Base form viewlet manager"""
implements(IFormViewletsManager)
class FormPrefixViewletManager(FormViewletManager):
"""Form prefix viewlet manager, displayed before form"""
implements(IFormPrefixViewletsManager)
class WidgetsPrefixViewletManager(FormViewletManager):
"""Form widgets prefix display manager, displayed before widgets"""
implements(IWidgetsPrefixViewletsManager)
class WidgetsSuffixViewletManager(FormViewletManager):
"""Form widgets suffix viewlet manager, displayed after widgets"""
implements(IWidgetsSuffixViewletsManager)
class FormSuffixViewletManager(FormViewletManager):
"""Form suffix viewlet manager, displayed after form"""
implements(IFormSuffixViewletsManager)
| ztfy.baseskin | /ztfy.baseskin-0.2.0.tar.gz/ztfy.baseskin-0.2.0/src/ztfy/baseskin/viewlet.py | viewlet.py |
__docformat__ = "restructuredtext"
# import standard packages
# import Zope3 interfaces
from z3c.form.interfaces import INPUT_MODE, IWidget, ISubForm, ISubmitWidget
from zope.component.interfaces import IObjectEvent
from zope.lifecycleevent.interfaces import IObjectCreatedEvent, IObjectModifiedEvent
from zope.viewlet.interfaces import IViewletManager, IViewlet
# import local interfaces
# import Zope3 packages
from zope.interface import Interface, Attribute
from zope.schema import Bool, TextLine, Choice, List, Dict, Object
# import local packages
from ztfy.baseskin import _
#
# Custom widgets interfaces
#
class IResetWidget(ISubmitWidget):
"""Reset button widget interface"""
class ICloseWidget(ISubmitWidget):
"""Close button widget interface"""
#
# Custom forms interfaces
#
def checkSubmitButton(form):
"""Check form and widgets mode before displaying submit button"""
if form.mode != INPUT_MODE:
return False
for widget in form.widgets.values():
if widget.mode == INPUT_MODE:
return True
if IForm.providedBy(form):
for subform in form.subforms:
for widget in subform.widgets.values():
if widget.mode == INPUT_MODE:
return True
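# NB: falling through returns None, which callers treat as False (no submit button displayed)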
class IWidgetsGroup(Interface):
"""Form widgets group interface"""
id = TextLine(title=_("Group ID"),
required=False)
css_class = TextLine(title=_("CSS class"),
required=False)
legend = TextLine(title=_("Group legend"),
required=False)
help = TextLine(title=_("Group help"),
required=False)
widgets = List(title=_("Group's widgets list"),
value_type=Object(schema=IWidget))
switch = Bool(title=_("Switchable group?"),
required=True,
default=False)
checkbox_switch = Bool(title=_("Group switched via checkbox?"),
required=True,
default=False)
checkbox_field = TextLine(title=_("Field name matching switch checkbox?"),
required=False)
checkbox_widget = Object(schema=IWidget,
required=False)
checkbox_on = Attribute(_("Checkbox on?"))
hide_if_empty = Bool(title=_("Hide group if empty?"),
description=_("""If 'Yes', a switchable group containing only """
"""widgets with default values is hidden"""),
required=True,
default=False)
visible = Attribute(_("Visible group?"))
switchable = Attribute(_("Switchable group?"))
class IBaseForm(Interface):
"""Marker interface for any form"""
class IGroupsBasedForm(IBaseForm):
"""Groups based form"""
groups = Attribute(_("Form groups"))
def addGroup(self, group):
"""Add given group to form"""
class IForm(IBaseForm):
"""Base form interface"""
title = TextLine(title=_("Form title"))
legend = TextLine(title=_("Form legend"),
required=False)
subforms = List(title=_("Sub-forms"),
value_type=Object(schema=ISubForm),
required=False)
subforms_legend = TextLine(title=_("Subforms legend"),
required=False)
tabforms = List(title=_("Tab-forms"),
value_type=Object(schema=ISubForm),
required=False)
autocomplete = Choice(title=_("Auto-complete"),
values=('on', 'off'),
default='on')
label_css_class = TextLine(title=_("Labels CSS class"),
required=False,
default=u'control-label col-md-3')
input_css_class = TextLine(title=_("Inputs CSS class"),
required=False,
default=u'col-md-9')
display_hints_on_widgets = Bool(title=_("Display hints on input widgets?"),
required=True,
default=False)
handle_upload = Bool(title=_("Handle uploads in form?"),
description=_("Set to true when form handle uploads to get progress bar"),
required=True,
default=False)
callbacks = Dict(title=_("Widgets validation callbacks"),
key_type=TextLine(),
value_type=TextLine(),
required=False)
def isDialog(self):
"""Check to know if current form is in a modal dialog"""
def getForms(self):
"""Get full list of main form and subforms"""
def createSubForms(self):
"""Initialize sub-forms"""
def createTabForms(self):
"""Initialize tab-forms"""
def getWidgetCallback(self, widget):
"""Get submit callback associated with a given widget"""
def updateContent(self, object, data):
"""Update given object with form data"""
def getSubmitOutput(self, writer, changes):
"""Get submit output"""
class IAJAXForm(IForm):
"""AJAX form interface"""
handler = TextLine(title=_("Form AJAX handler"),
description=_("Relative URL of AJAX handler"),
required=False)
data_type = Choice(title=_("Form AJAX data type"),
description=_(""),
required=False,
values=('json', 'jsonp', 'text', 'html', 'xml', 'script'))
form_options = Dict(title=_("Form AJAX data options"),
required=False)
callback = TextLine(title=_("Submit callback"),
description=_("Name of a custom form submit callback"),
required=False)
def getFormOptions(self):
"""Get custom AJAX POST data"""
def getAjaxErrors(self):
"""Get errors associated with their respective widgets in a JSON dictionary"""
class IInnerSubForm(ISubForm):
"""Inner subform marker interface"""
class IInnerTabForm(ISubForm):
"""Inner tabform marker interface"""
tabLabel = TextLine(title=_("Tab label"),
required=True)
class IViewletsBasedForm(IForm):
"""Viewlets based form interface"""
managers = List(title=_("Names list of viewlets managers included in this form"),
value_type=TextLine(),
required=True)
class ISubFormViewlet(IViewlet):
"""Sub-form viewlet interface"""
legend = Attribute(_("Sub-form legend"))
switchable = Attribute(_("Can the subform be hidden ?"))
visible = Attribute(_("Is the subform initially visible ?"))
callbacks = Dict(title=_("Widgets callbacks"),
key_type=TextLine(),
value_type=TextLine())
def getWidgetCallback(self, widget):
"""Get submit callback associated with a given widget"""
class ICustomExtractSubForm(ISubForm):
"""SubForm interface with custom extract method"""
def extract(self):
"""Extract data and errors from input request"""
class ICustomUpdateSubForm(ISubForm):
"""SubForm interface with custom update method"""
def updateContent(self, object, data):
"""Update custom content with given data"""
#
# Default form content providers
#
class IFormViewletsManager(IViewletManager):
"""Base forms viewlets manager interface"""
class IFormPrefixViewletsManager(IFormViewletsManager):
"""Form prefix viewlets manager interface"""
class IWidgetsPrefixViewletsManager(IFormViewletsManager):
"""Form widgets prefix viewlets manager interface"""
class IWidgetsSuffixViewletsManager(IFormViewletsManager):
"""Form widgets suffix viewlets manager interface"""
class IFormSuffixViewletsManager(IFormViewletsManager):
"""Form suffix viewlets manager interface"""
#
# Custom events interfaces
#
class IViewObjectEvent(IObjectEvent):
"""View object event interface"""
view = Attribute(_("View in which event was fired"))
class IFormObjectCreatedEvent(IObjectCreatedEvent, IViewObjectEvent):
"""Object added event notify by form after final object creation"""
class IFormObjectModifiedEvent(IObjectModifiedEvent, IViewObjectEvent):
"""Form object modified event interface""" | ztfy.baseskin | /ztfy.baseskin-0.2.0.tar.gz/ztfy.baseskin-0.2.0/src/ztfy/baseskin/interfaces/form.py | form.py |
__docformat__ = "restructuredtext"
# import standard packages
# import Zope3 interfaces
# import local interfaces
# import Zope3 packages
from zope.interface import Interface, Attribute
from zope.schema import TextLine, Password
# import local packages
from ztfy.baseskin import _
#
# Skinning interfaces
#
class ISkinnable(Interface):
"""Base skinnable content interface
This interface is used for any object managing a skin.
An adapter is used during traversal to automatically apply the selected skin.
"""
def getSkin(self):
"""Get skin name matching current context"""
#
# Default view interfaces
#
class IDefaultView(Interface):
"""Interface used to get object's default view"""
viewname = TextLine(title=_("View name"),
description=_("Name of the default view matching object, request and (optionally) current view"),
required=True,
default=u'@@index.html')
def getAbsoluteURL(self):
"""Get full absolute URL of the default view"""
class IContainedDefaultView(IDefaultView):
"""Interface used to get object's default view while displayed inside a container"""
#
# Dialogs interfaces
#
class IDialog(Interface):
"""Base interface for AJAX dialogs"""
dialog_class = Attribute(_("Default dialog CSS class"))
resources = Attribute(_("List of resources needed by this dialog"))
class IDialogTitle(Interface):
"""Dialog title getter interface"""
def getTitle(self):
"""Get dialog title"""
#
# Base front-office views
#
class IBaseViewlet(Interface):
"""Marker interface for base viewlets"""
class IBaseIndexView(Interface):
"""Marker interface for base index view"""
#
# Presentation management interfaces
#
class IBasePresentationInfo(Interface):
"""Base interface for presentation infos"""
class IPresentationForm(Interface):
"""Marker interface for default presentation edit form"""
class IPresentationTarget(Interface):
"""Interface used inside skin-related edit forms"""
target_interface = Attribute(_("Presentation form target interface"))
#
# Login form attributes
#
class ILoginFormFields(Interface):
"""Login form fields interface"""
username = TextLine(title=_("login-field", "Login"),
required=True)
password = Password(title=_("password-field", "Password"),
required=True)
came_from = TextLine(title=_("came-from", "Original address"),
required=False)
| ztfy.baseskin | /ztfy.baseskin-0.2.0.tar.gz/ztfy.baseskin-0.2.0/src/ztfy/baseskin/interfaces/__init__.py | __init__.py |
# import standard packages
from persistent import Persistent
# import Zope3 interfaces
from zope.componentvocabulary.vocabulary import UtilityVocabulary
from zope.configuration.exceptions import ConfigurationError
from zope.schema.interfaces import IVocabularyFactory
from zope.session.interfaces import ISessionDataContainer, ISessionData, ISessionPkgData
# import local interfaces
from ztfy.beaker.interfaces import IBeakerSessionUtility
from ztfy.beaker.metadirectives import IBeakerFileSessionConfiguration, IBeakerSessionConfiguration, \
IBeakerMemcachedSessionConfiguration, IBeakerSessionConfigurationInfo, \
IBeakerMemorySessionConfiguration, IBeakerDBMSessionConfiguration, IBeakerAlchemySessionConfiguration
# import Zope3 packages
from zope.component import queryUtility
from zope.container.contained import Contained
from zope.interface import implements, classProvides
from zope.minmax import Maximum
from zope.schema import getFieldNames
from zope.schema.fieldproperty import FieldProperty
# import local packages
from ztfy.utils.request import getRequest
from ztfy.beaker import _
class BeakerPkgData(dict):
"""Beaker package data
See zope.session.interfaces.ISessionPkgData
>>> session = BeakerPkgData()
>>> ISessionPkgData.providedBy(session)
True
"""
implements(ISessionPkgData)
class BeakerSessionData(dict):
"""Beaker session data"""
implements(ISessionData)
_lastAccessTime = None
def __init__(self):
self._lastAccessTime = Maximum(0)
def __getitem__(self, key):
try:
return super(BeakerSessionData, self).__getitem__(key)
except KeyError:
data = self[key] = BeakerPkgData()
return data
# lastAccessTime is exposed as a property, with a matching setter below
@property
def lastAccessTime(self):
# this conditional is for legacy sessions; this comment and
# the next two lines will be removed in a later release
if self._lastAccessTime is None:
return self.__dict__.get('lastAccessTime', 0)
return self._lastAccessTime.value
# we need to set this value with setters in order to get optimal conflict
# resolution behavior
@lastAccessTime.setter
def lastAccessTime(self, value):
# this conditional is for legacy sessions; this comment and
# the next two lines will be removed in a later release
if self._lastAccessTime is None:
self._lastAccessTime = Maximum(0)
self._lastAccessTime.value = value
class BeakerSessionUtility(Persistent, Contained):
"""Beaker base session utility"""
implements(ISessionDataContainer, IBeakerSessionUtility)
timeout = FieldProperty(ISessionDataContainer['timeout'])
resolution = FieldProperty(ISessionDataContainer['resolution'])
configuration_name = FieldProperty(IBeakerSessionUtility['configuration_name'])
def get(self, key, default=None):
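# NB: __getitem__ auto-creates missing packages, so the 'default' argument is never actually returned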
return self[key]
def _getSession(self):
request = getRequest()
config = queryUtility(IBeakerSessionConfiguration, name=self.configuration_name or u'')
if config is None:
raise ConfigurationError(_("Can't find beaker configuration"))
session = request.get(config.environ_key)
if session is None:
raise ConfigurationError(_("No Beaker session defined in current request"))
return session
def __getitem__(self, pkg_id):
session = self._getSession()
result = session.get(pkg_id)
if result is None:
result = session[pkg_id] = BeakerSessionData()
return result
def __setitem__(self, pkg_id, session_data):
session = self._getSession()
session[pkg_id] = session_data
class BeakerSessionConfiguration(Persistent, Contained):
"""Beaker base session configuration"""
implements(IBeakerSessionConfiguration, IBeakerSessionConfigurationInfo)
auto = FieldProperty(IBeakerSessionConfiguration['auto'])
environ_key = FieldProperty(IBeakerSessionConfiguration['environ_key'])
invalidate_corrupt = FieldProperty(IBeakerSessionConfiguration['invalidate_corrupt'])
key = FieldProperty(IBeakerSessionConfiguration['key'])
cookie_expires = FieldProperty(IBeakerSessionConfiguration['cookie_expires'])
cookie_domain = FieldProperty(IBeakerSessionConfiguration['cookie_domain'])
cookie_path = FieldProperty(IBeakerSessionConfiguration['cookie_path'])
secure = FieldProperty(IBeakerSessionConfiguration['secure'])
httponly = FieldProperty(IBeakerSessionConfiguration['httponly'])
encrypt_key = FieldProperty(IBeakerSessionConfiguration['encrypt_key'])
validate_key = FieldProperty(IBeakerSessionConfiguration['validate_key'])
lock_dir = FieldProperty(IBeakerSessionConfiguration['lock_dir'])
configuration = None
def getConfigurationDict(self):
result = {'session.auto': True}
if self.configuration:
for fieldname in getFieldNames(self.configuration):
value = getattr(self, fieldname, None)
if value is not None:
result['session.' + fieldname] = value
return result
class BeakerMemorySessionConfiguration(BeakerSessionConfiguration):
"""Beaker memory session configuration"""
implements(IBeakerMemorySessionConfiguration)
configuration = IBeakerMemorySessionConfiguration
type = 'memory'
class BeakerDBMSessionConfiguration(BeakerSessionConfiguration):
"""Beaker DBM session configuration"""
implements(IBeakerDBMSessionConfiguration)
configuration = IBeakerDBMSessionConfiguration
type = 'dbm'
data_dir = FieldProperty(IBeakerFileSessionConfiguration['data_dir'])
class BeakerFileSessionConfiguration(BeakerSessionConfiguration):
"""Beaker file session configuration"""
implements(IBeakerFileSessionConfiguration)
configuration = IBeakerFileSessionConfiguration
type = 'file'
data_dir = FieldProperty(IBeakerFileSessionConfiguration['data_dir'])
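def _middleware_usage_example(wsgi_app):  # pragma: no cover
    """Illustrative sketch, not part of the original module: wrap a WSGI
    application with beaker's SessionMiddleware using a file-based
    configuration utility. The directories are placeholders.
    """
    from beaker.middleware import SessionMiddleware
    config = BeakerFileSessionConfiguration()
    config.data_dir = u'/var/tmp/sessions'
    config.lock_dir = u'/var/tmp/sessions.lock'
    return SessionMiddleware(wsgi_app, config.getConfigurationDict())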
class BeakerMemcachedSessionConfiguration(BeakerSessionConfiguration):
"""Beaker memcached session configuration"""
implements(IBeakerMemcachedSessionConfiguration)
configuration = IBeakerMemcachedSessionConfiguration
type = 'ext:memcached'
url = FieldProperty(IBeakerMemcachedSessionConfiguration['url'])
memcached_module = FieldProperty(IBeakerMemcachedSessionConfiguration['memcached_module'])
class BeakerAlchemySessionConfiguration(BeakerSessionConfiguration):
"""Beaker SQLalchemy session configuration"""
implements(IBeakerAlchemySessionConfiguration)
configuration = IBeakerAlchemySessionConfiguration
type = 'ext:sqla'
url = FieldProperty(IBeakerAlchemySessionConfiguration['url'])
schema_name = FieldProperty(IBeakerAlchemySessionConfiguration['schema_name'])
table_name = FieldProperty(IBeakerAlchemySessionConfiguration['table_name'])
class BeakerSessionConfigurationVocabulary(UtilityVocabulary):
"""Beaker session configuration utilities vocabulary"""
classProvides(IVocabularyFactory)
interface = IBeakerSessionConfiguration
nameOnly = True
| ztfy.beaker | /ztfy.beaker-0.1.0.tar.gz/ztfy.beaker-0.1.0/src/ztfy/beaker/session.py | session.py |
# import standard packages
# import Zope3 interfaces
# import local interfaces
# import Zope3 packages
from zope.interface import Interface
from zope.schema import Bool, TextLine, Int, Choice, InterfaceField
# import local packages
from ztfy.beaker import _
class IBeakerSessionConfiguration(Interface):
"""Beaker session base configuration"""
type = TextLine(title=_("Storage type"),
required=False,
readonly=True)
auto = Bool(title=_("Auto save session data?"),
required=True,
default=True)
environ_key = TextLine(title=_("Request environment key"),
description=_("Name of the WSGI environment key holding session data"),
default=u'beaker.session')
invalidate_corrupt = Bool(title=_("Invalidate corrupt"),
description=_("How to handle corrupt data when loading. When set to True, corrupt "
"data will be silently invalidated and a new session created; otherwise "
"invalid data will cause an exception."),
required=False,
default=False)
key = TextLine(title=_("Cookie name"),
description=_("The name the cookie should be set to"),
required=False,
default=u'beaker.session.id')
timeout = Int(title=_("Session timeout"),
description=_("How long session data is considered valid. This is used regardless of the cookie "
"being present or not to determine whether session data is still valid"),
required=False,
default=3600)
cookie_expires = Bool(title=_("Expiring cookie?"),
description=_("Does cookie have an expiration date?"),
required=False,
default=False)
cookie_domain = TextLine(title=_("Cookie domain"),
description=_("Domain to use for the cookie"),
required=False)
cookie_path = TextLine(title=_("Cookie path"),
description=_("Path to use for the cookie"),
required=False,
default=u'/')
secure = Bool(title=_("Secure cookie?"),
description=_("Whether or not the cookie should only be sent over SSL"),
required=False,
default=False)
httponly = Bool(title=_("HTTP only?"),
description=_("Whether or not the cookie should only be accessible by the browser not by "
"JavaScript"),
required=False,
default=False)
encrypt_key = TextLine(title=_("Encryption key"),
description=_("The key to use for the local session encryption, if not provided the "
"session will not be encrypted"),
required=False)
validate_key = TextLine(title=_("Validation key"),
description=_("The key used to sign the local encrypted session"),
required=False)
lock_dir = TextLine(title=_("Lock directory"),
description=_("Used to coordinate locking and to ensure that multiple processes/threads "
"aren't attempting to re-create the same value at the same time"),
required=True)
class IBeakerSessionConfigurationInfo(Interface):
"""Beaker session configuration info"""
configuration = InterfaceField(title=_("Configuration interface"),
required=False)
def getConfigurationDict():
"""Get configuration options as dict"""
class IBeakerMemorySessionConfiguration(IBeakerSessionConfiguration):
"""Beaker memory session configuration"""
class IBeakerBaseFileSessionConfiguration(IBeakerSessionConfiguration):
"""Beaker file storage session configuration"""
data_dir = TextLine(title=_("Data directory"),
description=_("Absolute path to the directory that stores the files"),
required=True)
class IBeakerDBMSessionConfiguration(IBeakerBaseFileSessionConfiguration):
"""Beaker DBM file storage session configuration"""
class IBeakerFileSessionConfiguration(IBeakerBaseFileSessionConfiguration):
"""Beaker file session configuration"""
class IBeakerDatabaseSessionConfiguration(IBeakerSessionConfiguration):
"""Beaker database storage session configuration"""
url = TextLine(title=_("Database URL"),
required=True)
class IBeakerMemcachedSessionConfiguration(IBeakerDatabaseSessionConfiguration):
"""Beaker memached storage session configuration"""
url = TextLine(title=_("Memcached servers URL"),
description=_("Semi-colon separated list of memcached servers"),
required=True)
memcached_module = Choice(title=_("Memcached module to use"),
description=_("Specifies which memcached client library should be imported"),
required=True,
values=(u'auto', u'memcache', u'cmemcache', u'pylibmc'),
default=u'auto')
class IBeakerAlchemySessionConfiguration(IBeakerDatabaseSessionConfiguration):
"""Beaker SQLAlchemy storage session configuration"""
url = TextLine(title=_("SQLAlchemy database URL"),
description=_("Valid SQLAlchemy database connection string"),
required=True)
schema_name = TextLine(title=_("Database schema name"),
description=_("The schema name to use in the database"),
required=False)
table_name = TextLine(title=_("Database table name"),
description=_("The table name to use in the database"),
required=True,
default=u'beaker_session') | ztfy.beaker | /ztfy.beaker-0.1.0.tar.gz/ztfy.beaker-0.1.0/src/ztfy/beaker/metadirectives.py | metadirectives.py |
# import standard packages
# import Zope3 interfaces
# import local interfaces
from ztfy.beaker.metadirectives import IBeakerSessionConfiguration
# import Zope3 packages
from zope.component.security import PublicPermission
from zope.component.zcml import utility
# import local packages
from ztfy.beaker.session import BeakerMemorySessionConfiguration, BeakerDBMSessionConfiguration, \
BeakerFileSessionConfiguration, BeakerMemcachedSessionConfiguration, BeakerAlchemySessionConfiguration
def memorySession(context, name='', **kwargs):
"""Beaker memory session configuration declaration"""
config = BeakerMemorySessionConfiguration()
for key, value in kwargs.iteritems():
setattr(config, key, value)
utility(context, IBeakerSessionConfiguration, config, permission=PublicPermission, name=name)
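# Illustrative ZCML usage for the directive handled above (the directive name
# is assumed to match this handler's name, following the package's README):
#
# <configure xmlns:beaker="http://namespaces.ztfy.org/beaker">
#     <beaker:memorySession lock_dir="/var/lock/sessions" />
# </configure>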
def dbmSession(context, name='', **kwargs):
"""Beaker DBM session configuration declaration"""
config = BeakerDBMSessionConfiguration()
for key, value in kwargs.iteritems():
setattr(config, key, value)
utility(context, IBeakerSessionConfiguration, config, permission=PublicPermission, name=name)
def fileSession(context, name='', **kwargs):
"""Beaker file session configuration declaration"""
config = BeakerFileSessionConfiguration()
for key, value in kwargs.iteritems():
setattr(config, key, value)
utility(context, IBeakerSessionConfiguration, config, permission=PublicPermission, name=name)
def memcachedSession(context, name='', **kwargs):
"""Beaker memcached session configuration declaration"""
config = BeakerMemcachedSessionConfiguration()
for key, value in kwargs.iteritems():
setattr(config, key, value)
utility(context, IBeakerSessionConfiguration, config, permission=PublicPermission, name=name)
def alchemySession(context, name='', **kwargs):
"""Beaker SQLAlchemy session configuration declaration"""
config = BeakerAlchemySessionConfiguration()
for key, value in kwargs.iteritems():
setattr(config, key, value)
utility(context, IBeakerSessionConfiguration, config, permission=PublicPermission, name=name) | ztfy.beaker | /ztfy.beaker-0.1.0.tar.gz/ztfy.beaker-0.1.0/src/ztfy/beaker/metaconfigure.py | metaconfigure.py |
.. contents::
Introduction
============
ZTFY.beaker is a small wrapper around the Beaker (http://pypi.python.org/pypi/Beaker)
session and caching library.
It allows you to define a Beaker session through a ZCML configuration directive, so that
it can be included in a WSGI application based on ZTFY packages.
A BeakerSessionUtility can then be created and registered to act as a session data container.
Beaker session configurations
=============================
All Beaker session options can be defined through ZCML directives. See `metadirectives.py` to get
the complete list of configuration options.
For example, to define a Memcached session configuration::
<configure
xmlns:beaker="http://namespaces.ztfy.org/beaker">
<beaker:memcachedSession
url="127.0.0.1:11211"
cookie_expires="False"
lock_dir="/var/lock/sessions" />
</configure>
Directives are available for memory, DBM, file, Memcached and SQLAlchemy session storages.
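Once a session configuration utility is registered, its options can be retrieved
as a Beaker-compatible dict through its `getConfigurationDict()` method, whose
keys are prefixed with 'session.' as expected by Beaker's SessionMiddleware.
A minimal sketch (illustrative values; actual contents depend on the options you set)::

>>> from ztfy.beaker.session import BeakerMemcachedSessionConfiguration
>>> config = BeakerMemcachedSessionConfiguration()
>>> config.url = u'127.0.0.1:11211'
>>> config.getConfigurationDict()['session.type']
'ext:memcached'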
| ztfy.beaker | /ztfy.beaker-0.1.0.tar.gz/ztfy.beaker-0.1.0/docs/README.txt | README.txt |
__docformat__ = "restructuredtext"
# import standard packages
# import Zope3 interfaces
from z3c.language.switch.interfaces import II18n
# import local interfaces
from ztfy.blog.interfaces.section import ISection, ISectionContainer
from ztfy.blog.interfaces.site import ITreeViewContents
from ztfy.blog.interfaces.topic import ITopic
# import Zope3 packages
from zope.app.content import queryContentType
from zope.component import adapts
from zope.interface import implements
from zope.schema.fieldproperty import FieldProperty
# import local packages
from ztfy.base.ordered import OrderedContainer
from ztfy.blog.skin import InheritedSkin
from ztfy.extfile.blob import BlobImage
from ztfy.i18n.property import I18nTextProperty, I18nImageProperty
from ztfy.security.property import RolePrincipalsProperty
from ztfy.utils.security import unproxied
from ztfy.utils.unicode import translateString
from ztfy.workflow.interfaces import IWorkflowContent
class Section(OrderedContainer, InheritedSkin):
implements(ISection, ISectionContainer)
__roles__ = ('ztfy.BlogManager', 'ztfy.BlogContributor')
title = I18nTextProperty(ISection['title'])
shortname = I18nTextProperty(ISection['shortname'])
description = I18nTextProperty(ISection['description'])
keywords = I18nTextProperty(ISection['keywords'])
heading = I18nTextProperty(ISection['heading'])
header = I18nImageProperty(ISection['header'], klass=BlobImage, img_klass=BlobImage)
illustration = I18nImageProperty(ISection['illustration'], klass=BlobImage, img_klass=BlobImage)
illustration_title = I18nTextProperty(ISection['illustration_title'])
visible = FieldProperty(ISection['visible'])
administrators = RolePrincipalsProperty(ISection['administrators'], role='ztfy.BlogManager')
contributors = RolePrincipalsProperty(ISection['contributors'], role='ztfy.BlogContributor')
@property
def content_type(self):
return queryContentType(self).__name__
@property
def sections(self):
"""See `ISectionContainer` interface"""
return [v for v in self.values() if ISection.providedBy(v)]
def getVisibleSections(self):
"""See `ISectionContainer` interface"""
return [v for v in self.sections if v.visible]
@property
def topics(self):
"""See `ITopicContainer` interface"""
return [v for v in self.values() if ITopic.providedBy(v)]
def getVisibleTopics(self):
"""See `ITopicContainer` interface"""
return [t for t in self.topics if IWorkflowContent(t).isVisible()]
def addTopic(self, topic):
"""See `ITopicContainer` interface"""
language = II18n(self).getDefaultLanguage()
# build a URL-friendly name from the topic's short name
title = translateString(topic.shortname.get(language), forceLower=True, spaces='-')
if len(title) > 40:
# truncate to 40 characters, then cut at the last dash to avoid ending
# the name in the middle of a word
title = title[:40]
title = title[:title.rfind('-')]
self[title + '.html'] = unproxied(topic)
class SectionTreeViewContentsAdapter(object):
adapts(ISection)
implements(ITreeViewContents)
def __init__(self, context):
self.context = context
@property
def values(self):
return self.context.values() | ztfy.blog | /ztfy.blog-0.6.2.tar.gz/ztfy.blog-0.6.2/src/ztfy/blog/section.py | section.py |
__docformat__ = "restructuredtext"
# import standard packages
from persistent import Persistent
# import Zope3 interfaces
from zope.annotation.interfaces import IAnnotations
# import local interfaces
from ztfy.blog.interfaces.google import IGoogleAnalytics, IGoogleAdSense, TOP, TOP_TOPICS, BOTTOM, BOTTOM_TOPICS
from ztfy.blog.interfaces.site import ISiteManager
from ztfy.blog.interfaces.topic import ITopic
# import Zope3 packages
from zope.component import adapter
from zope.interface import implementer, implements
from zope.schema.fieldproperty import FieldProperty
# import local packages
class GoogleAnalytics(Persistent):
"""Google Analytics persistent class"""
implements(IGoogleAnalytics)
enabled = FieldProperty(IGoogleAnalytics['enabled'])
website_id = FieldProperty(IGoogleAnalytics['website_id'])
verification_code = FieldProperty(IGoogleAnalytics['verification_code'])
ANALYTICS_ANNOTATIONS_KEY = 'ztfy.blog.google.analytics'
@adapter(ISiteManager)
@implementer(IGoogleAnalytics)
def GoogleAnalyticsFactory(context):
"""Google Analytics adapter factory"""
annotations = IAnnotations(context)
adapter = annotations.get(ANALYTICS_ANNOTATIONS_KEY)
if adapter is None:
adapter = annotations[ANALYTICS_ANNOTATIONS_KEY] = GoogleAnalytics()
return adapter
class GoogleAdSense(Persistent):
"""Google AdSense persistent class"""
implements(IGoogleAdSense)
enabled = FieldProperty(IGoogleAdSense['enabled'])
client_id = FieldProperty(IGoogleAdSense['client_id'])
slot_id = FieldProperty(IGoogleAdSense['slot_id'])
slot_width = FieldProperty(IGoogleAdSense['slot_width'])
slot_height = FieldProperty(IGoogleAdSense['slot_height'])
slot_position = FieldProperty(IGoogleAdSense['slot_position'])
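# Display logic summary: a slot configured for a top position is never
# rendered at the 'bottom' position (and vice versa); the *_TOPICS positions
# additionally restrict rendering to topic pages only.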
def display(self, context, position):
if not self.enabled:
return False
if ((position == 'top') and (self.slot_position in (BOTTOM, BOTTOM_TOPICS))) or \
((position == 'bottom') and (self.slot_position in (TOP, TOP_TOPICS))):
return False
return ITopic.providedBy(context) or (self.slot_position in (TOP, BOTTOM))
ADSENSE_ANNOTATIONS_KEY = 'ztfy.blog.google.adsense'
@adapter(ISiteManager)
@implementer(IGoogleAdSense)
def GoogleAdSenseFactory(context):
"""Google AdSense adapter"""
annotations = IAnnotations(context)
adapter = annotations.get(ADSENSE_ANNOTATIONS_KEY)
if adapter is None:
adapter = annotations[ADSENSE_ANNOTATIONS_KEY] = GoogleAdSense()
return adapter | ztfy.blog | /ztfy.blog-0.6.2.tar.gz/ztfy.blog-0.6.2/src/ztfy/blog/google.py | google.py |
__docformat__ = "restructuredtext"
# import standard packages
from persistent import Persistent
# import Zope3 interfaces
from z3c.language.switch.interfaces import II18n
from zope.annotation.interfaces import IAnnotations
from zope.schema.interfaces import IVocabularyFactory
# import local interfaces
from ztfy.blog.interfaces.resource import IResource, IResourceContainer, IResourceContainerTarget
# import Zope3 packages
from zope.component import adapter
from zope.container.contained import Contained
from zope.interface import implementer, implements, classProvides
from zope.location import locate
from zope.schema.fieldproperty import FieldProperty
from zope.schema.vocabulary import SimpleVocabulary, SimpleTerm
from zope.security.proxy import removeSecurityProxy
from zope.traversing.api import getName
# import local packages
from ztfy.base.ordered import OrderedContainer
from ztfy.extfile.blob import BlobFile, BlobImage
from ztfy.file.property import FileProperty
from ztfy.i18n.property import I18nTextProperty
from ztfy.utils.traversing import getParent
class Resource(Persistent, Contained):
implements(IResource)
title = I18nTextProperty(IResource['title'])
description = I18nTextProperty(IResource['description'])
content = FileProperty(IResource['content'], klass=BlobFile, img_klass=BlobImage)
filename = FieldProperty(IResource['filename'])
language = FieldProperty(IResource['language'])
class ResourceContainer(OrderedContainer):
implements(IResourceContainer)
RESOURCES_ANNOTATIONS_KEY = 'ztfy.blog.resource.container'
@adapter(IResourceContainerTarget)
@implementer(IResourceContainer)
def ResourceContainerFactory(context):
"""Resources container adapter"""
annotations = IAnnotations(context)
container = annotations.get(RESOURCES_ANNOTATIONS_KEY)
if container is None:
container = annotations[RESOURCES_ANNOTATIONS_KEY] = ResourceContainer()
locate(container, context, '++static++')
return container
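# Note: this annotations-based adapter is the recurring ZTFY pattern for
# attaching a persistent container to a target object: created lazily on
# first adaptation, stored in the target's annotations and located under a
# dedicated namespace (here '++static++'). The same pattern is used below for
# categories, links and Google settings.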
class ResourceContainerResourcesVocabulary(SimpleVocabulary):
"""List of resources of a given content"""
classProvides(IVocabularyFactory)
def __init__(self, context):
container = getParent(context, IResourceContainerTarget)
terms = [SimpleTerm(removeSecurityProxy(r), getName(r), II18n(r).queryAttribute('title') or '{{ %s }}' % r.filename) for r in IResourceContainer(container).values()]
super(ResourceContainerResourcesVocabulary, self).__init__(terms) | ztfy.blog | /ztfy.blog-0.6.2.tar.gz/ztfy.blog-0.6.2/src/ztfy/blog/resource.py | resource.py |
__docformat__ = "restructuredtext"
# import standard packages
# import Zope3 interfaces
from zope.dublincore.interfaces import IZopeDublinCore
from zope.lifecycleevent.interfaces import IObjectCreatedEvent
# import local interfaces
from hurry.workflow.interfaces import IWorkflowInfo, IWorkflowState
from ztfy.blog.interfaces.topic import ITopic
# import Zope3 packages
from zope.app.content import queryContentType
from zope.component import adapter
from zope.i18n import translate
from zope.interface import implements
from zope.schema.fieldproperty import FieldProperty
# import local packages
from ztfy.base.ordered import OrderedContainer
from ztfy.extfile.blob import BlobImage
from ztfy.i18n.property import I18nTextProperty, I18nImageProperty
from ztfy.utils.request import getRequest
from ztfy.blog import _
class Topic(OrderedContainer):
implements(ITopic)
title = I18nTextProperty(ITopic['title'])
shortname = I18nTextProperty(ITopic['shortname'])
description = I18nTextProperty(ITopic['description'])
keywords = I18nTextProperty(ITopic['keywords'])
heading = I18nTextProperty(ITopic['heading'])
header = I18nImageProperty(ITopic['header'], klass=BlobImage, img_klass=BlobImage)
illustration = I18nImageProperty(ITopic['illustration'], klass=BlobImage, img_klass=BlobImage)
illustration_title = I18nTextProperty(ITopic['illustration_title'])
commentable = FieldProperty(ITopic['commentable'])
workflow_name = FieldProperty(ITopic['workflow_name'])
@property
def content_type(self):
return queryContentType(self).__name__
@property
def paragraphs(self):
return self.values()
def getVisibleParagraphs(self, request=None):
return [v for v in self.paragraphs if v.visible]
@property
def publication_year(self):
return IZopeDublinCore(self).created.year
@property
def publication_month(self):
return IZopeDublinCore(self).created.month
@adapter(ITopic, IObjectCreatedEvent)
def handleNewTopic(object, event):
"""Init workflow status of a new topic"""
IWorkflowState(object).setState(None)
IWorkflowInfo(object).fireTransition('init', translate(_("Create new topic"), context=getRequest())) | ztfy.blog | /ztfy.blog-0.6.2.tar.gz/ztfy.blog-0.6.2/src/ztfy/blog/topic.py | topic.py |
__docformat__ = "restructuredtext"
# import standard packages
import pytz
from datetime import datetime
# import Zope3 interfaces
from zope.dublincore.interfaces import IZopeDublinCore
# import local interfaces
from ztfy.blog.interfaces import STATUS_DRAFT, STATUS_PUBLISHED, STATUS_RETIRED, STATUS_ARCHIVED, STATUS_DELETED
from ztfy.blog.interfaces import STATUS_VOCABULARY
from ztfy.workflow.interfaces import IWorkflowContent
# import Zope3 packages
from zope.traversing.api import getName, getParent
# import local packages
from hurry.workflow.workflow import Transition
from ztfy.security.security import getSecurityManager
from ztfy.utils.request import getRequest
from ztfy.workflow.workflow import Workflow
from ztfy.blog import _
def canPublish(wf, context):
sm = getSecurityManager(context)
if sm is None:
request = getRequest()
dc = IZopeDublinCore(context)
return request.principal.id == dc.creators[0]
return sm.canUsePermission('ztfy.ManageContent') or sm.canUsePermission('ztfy.ManageBlog')
def publishAction(wf, context):
"""Publihs draft content"""
now = datetime.now(pytz.UTC)
wf_content = IWorkflowContent(context)
if wf_content.first_publication_date is None:
wf_content.first_publication_date = now
wf_content.publication_date = now
def canRetire(wf, context):
sm = getSecurityManager(context)
if sm is None:
request = getRequest()
dc = IZopeDublinCore(context)
return request.principal.id == dc.creators[0]
return sm.canUsePermission('ztfy.ManageContent') or sm.canUsePermission('ztfy.ManageBlog')
def retireAction(wf, context):
"""Archive published content"""
now = datetime.now(pytz.UTC)
IWorkflowContent(context).publication_expiration_date = now
def canArchive(wf, context):
sm = getSecurityManager(context)
if sm is None:
request = getRequest()
dc = IZopeDublinCore(context)
return request.principal.id == dc.creators[0]
return sm.canUsePermission('ztfy.ManageContent') or sm.canUsePermission('ztfy.ManageBlog')
def archiveAction(wf, context):
"""Archive published content"""
now = datetime.now(pytz.UTC)
content = IWorkflowContent(context)
content.publication_expiration_date = min(content.publication_expiration_date or now, now)
def canDelete(wf, context):
sm = getSecurityManager(context)
if sm is None:
request = getRequest()
dc = IZopeDublinCore(context)
return request.principal.id == dc.creators[0]
return sm.canUsePermission('ztfy.ManageContent') or sm.canUsePermission('ztfy.ManageBlog')
def deleteAction(wf, context):
"""Delete draft version"""
parent = getParent(context)
name = getName(context)
del parent[name]
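# Illustrative note: the transitions defined below are fired through
# hurry.workflow's IWorkflowInfo adapter, in the same way ztfy.blog.topic
# fires the 'init' transition, e.g.:
#
# IWorkflowInfo(topic).fireTransition('draft_to_published',
#                                     translate(_("Publish"), context=getRequest()))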
init = Transition('init',
title=_("Initialize"),
source=None,
destination=STATUS_DRAFT,
order=0)
draft_to_published = Transition('draft_to_published',
title=_("Publish"),
source=STATUS_DRAFT,
destination=STATUS_PUBLISHED,
condition=canPublish,
action=publishAction,
order=1,
view='wf_publish.html',
html_help=_('''This content is currently in DRAFT mode.
Publishing it will make it publicly visible.'''))
published_to_retired = Transition('published_to_retired',
title=_("Retire"),
source=STATUS_PUBLISHED,
destination=STATUS_RETIRED,
condition=canRetire,
action=retireAction,
order=2,
view='wf_default.html',
html_help=_('''This content is currently published.
You can retire it to make it invisible.'''))
retired_to_published = Transition('retired_to_published',
title=_("Re-publish"),
source=STATUS_RETIRED,
destination=STATUS_PUBLISHED,
condition=canPublish,
action=publishAction,
order=3,
view='wf_publish.html',
html_help=_('''This content was published and then retired.
You can re-publish it to make it visible again.'''))
published_to_archived = Transition('published_to_archived',
title=_("Archive"),
source=STATUS_PUBLISHED,
destination=STATUS_ARCHIVED,
condition=canArchive,
action=archiveAction,
order=4,
view='wf_default.html',
html_help=_('''This content is currently published.
If it is archived, it will not be possible to make it visible again!'''))
retired_to_archived = Transition('retired_to_archived',
title=_("Archive"),
source=STATUS_RETIRED,
destination=STATUS_ARCHIVED,
condition=canArchive,
action=archiveAction,
order=5,
view='wf_default.html',
html_help=_('''This content has been published but is currently retired.
If it is archived, it will not be possible to make it visible again!'''))
deleted = Transition('delete',
title=_("Delete"),
source=STATUS_DRAFT,
destination=STATUS_DELETED,
condition=canDelete,
action=deleteAction,
order=6,
view='wf_delete.html',
html_help=_('''This content has never been published.
It can be removed and permanently deleted.'''))
wf_transitions = [init,
draft_to_published,
published_to_retired,
retired_to_published,
published_to_archived,
retired_to_archived,
deleted]
wf = Workflow(wf_transitions,
states=STATUS_VOCABULARY,
published_states=(STATUS_PUBLISHED,)) | ztfy.blog | /ztfy.blog-0.6.2.tar.gz/ztfy.blog-0.6.2/src/ztfy/blog/workflow.py | workflow.py |
__docformat__ = "restructuredtext"
# import standard packages
from BTrees.OOBTree import OOBTree
from persistent import Persistent
from persistent.list import PersistentList
# import Zope3 interfaces
from zope.annotation.interfaces import IAnnotations
# import local interfaces
from ztfy.blog.interfaces.blog import IBlog
from ztfy.blog.interfaces.section import ISection
from ztfy.blog.interfaces.site import ISiteManager, ISiteManagerBackInfo, ITreeViewContents
# import Zope3 packages
from zope.app.content import queryContentType
from zope.component import adapter, adapts
from zope.event import notify
from zope.interface import implementer, implements
from zope.location.location import Location, locate
from zope.site import SiteManagerContainer
# import local packages
from ztfy.base.ordered import OrderedContainer
from ztfy.blog.skin import InheritedSkin
from ztfy.extfile.blob import BlobFile, BlobImage
from ztfy.file.property import ImageProperty, FileProperty
from ztfy.i18n.property import I18nTextProperty, I18nImageProperty
from ztfy.security.property import RolePrincipalsProperty
from ztfy.utils.site import NewSiteManagerEvent
class SiteManager(OrderedContainer, SiteManagerContainer, InheritedSkin):
"""Main site manager class"""
implements(ISiteManager)
__roles__ = ('zope.Manager', 'ztfy.BlogManager', 'ztfy.BlogContributor', 'ztfy.BlogOperator')
title = I18nTextProperty(ISiteManager['title'])
shortname = I18nTextProperty(ISiteManager['shortname'])
description = I18nTextProperty(ISiteManager['description'])
keywords = I18nTextProperty(ISiteManager['keywords'])
heading = I18nTextProperty(ISiteManager['heading'])
header = I18nImageProperty(ISiteManager['header'], klass=BlobImage, img_klass=BlobImage)
illustration = I18nImageProperty(ISiteManager['illustration'], klass=BlobImage, img_klass=BlobImage)
illustration_title = I18nTextProperty(ISiteManager['illustration_title'])
administrators = RolePrincipalsProperty(ISiteManager['administrators'], role='ztfy.BlogManager')
contributors = RolePrincipalsProperty(ISiteManager['contributors'], role='ztfy.BlogContributor')
back_interface = ISiteManagerBackInfo
def __init__(self, *args, **kw):
self._data = OOBTree()
self._order = PersistentList()
@property
def content_type(self):
return queryContentType(self).__name__
def setSiteManager(self, sm):
SiteManagerContainer.setSiteManager(self, sm)
notify(NewSiteManagerEvent(self))
def getVisibleContents(self):
return [v for v in self.values() if getattr(v, 'visible', True)]
def getVisibleSections(self):
"""See `ISectionContainer` interface"""
return [v for v in self.getVisibleContents() if ISection.providedBy(v)]
@property
def blogs(self):
return [v for v in self.values() if IBlog.providedBy(v)]
class SiteManagerBackInfo(Persistent, Location):
"""Main site back-office presentation options"""
implements(ISiteManagerBackInfo)
custom_css = FileProperty(ISiteManagerBackInfo['custom_css'], klass=BlobFile)
custom_banner = ImageProperty(ISiteManagerBackInfo['custom_banner'], klass=BlobImage, img_klass=BlobImage)
custom_logo = ImageProperty(ISiteManagerBackInfo['custom_logo'], klass=BlobImage, img_klass=BlobImage)
custom_icon = ImageProperty(ISiteManagerBackInfo['custom_icon'])
SITE_MANAGER_BACK_INFO_KEY = 'ztfy.blog.backoffice.presentation'
@adapter(ISiteManager)
@implementer(ISiteManagerBackInfo)
def SiteManagerBackInfoFactory(context):
annotations = IAnnotations(context)
info = annotations.get(SITE_MANAGER_BACK_INFO_KEY)
if info is None:
info = annotations[SITE_MANAGER_BACK_INFO_KEY] = SiteManagerBackInfo()
locate(info, context, '++back++')
return info
class SiteManagerTreeViewContentsAdapter(object):
adapts(ISiteManager)
implements(ITreeViewContents)
def __init__(self, context):
self.context = context
@property
def values(self):
return self.context.values() | ztfy.blog | /ztfy.blog-0.6.2.tar.gz/ztfy.blog-0.6.2/src/ztfy/blog/site.py | site.py |
__docformat__ = "restructuredtext"
# import standard packages
from persistent import Persistent
# import Zope3 interfaces
from zope.annotation.interfaces import IAnnotations
from zope.intid.interfaces import IIntIds
# import local interfaces
from ztfy.blog.interfaces.link import IBaseLinkInfo, IInternalLink, IExternalLink, ILinkContainer, ILinkContainerTarget, ILinkFormatter, ILinkChecker
from ztfy.workflow.interfaces import IWorkflowTarget, IWorkflowContent
# import Zope3 packages
from zope.component import adapter, getUtility, queryMultiAdapter
from zope.container.contained import Contained
from zope.interface import implementer, implements, Interface
from zope.location import locate
from zope.schema.fieldproperty import FieldProperty
# import local packages
from ztfy.base.ordered import OrderedContainer
from ztfy.i18n.property import I18nTextProperty
from ztfy.utils.request import getRequest
from ztfy.utils.traversing import getParent
class BaseLink(Persistent, Contained):
title = I18nTextProperty(IBaseLinkInfo['title'])
description = I18nTextProperty(IBaseLinkInfo['description'])
language = FieldProperty(IBaseLinkInfo['language'])
def getLink(self, request=None, view=None):
if request is None:
request = getRequest()
if view is None:
view = Interface
adapter = queryMultiAdapter((self, request, view), ILinkFormatter)
if adapter is not None:
return adapter.render()
return u''
class InternalLink(BaseLink):
implements(IInternalLink, ILinkChecker)
target_oid = FieldProperty(IInternalLink['target_oid'])
@property
def target(self):
if not self.target_oid:
return None
intids = getUtility(IIntIds)
return intids.queryObject(self.target_oid)
def canView(self):
"""See `ILinkChecker` interface"""
target = self.target
if target is None:
return False
wf_parent = getParent(target, IWorkflowTarget)
return (wf_parent is None) or IWorkflowContent(wf_parent).isVisible()
def getLink(self, request=None, view=None):
if not self.canView():
return u''
return super(InternalLink, self).getLink(request, view)
class ExternalLink(BaseLink):
implements(IExternalLink, ILinkChecker)
url = I18nTextProperty(IExternalLink['url'])
def canView(self):
"""See `ILinkChecker` interface"""
return True
class LinkContainer(OrderedContainer):
implements(ILinkContainer)
def getVisibleLinks(self):
return [link for link in self.values() if ILinkChecker(link).canView()]
LINKS_ANNOTATION_KEY = 'ztfy.blog.link.container'
@adapter(ILinkContainerTarget)
@implementer(ILinkContainer)
def LinkContainerFactory(context):
"""Links container adapter"""
annotations = IAnnotations(context)
container = annotations.get(LINKS_ANNOTATION_KEY)
if container is None:
container = annotations[LINKS_ANNOTATION_KEY] = LinkContainer()
locate(container, context, '++links++')
return container | ztfy.blog | /ztfy.blog-0.6.2.tar.gz/ztfy.blog-0.6.2/src/ztfy/blog/link.py | link.py |
__docformat__ = "restructuredtext"
# import standard packages
from persistent import Persistent
# import Zope3 interfaces
from zope.annotation.interfaces import IAnnotations
# import local interfaces
from hurry.query.interfaces import IQuery
from ztfy.blog.interfaces.category import ICategory, ICategoryManager, ICategoryManagerTarget
from ztfy.blog.interfaces.category import ICategorizedContent, ICategoriesTarget
# import Zope3 packages
from zope.component import adapter, getUtility
from zope.container.folder import Folder
from zope.interface import implementer, implements, alsoProvides
from zope.intid.interfaces import IIntIds
from zope.location import locate
from zope.schema.fieldproperty import FieldProperty
# import local packages
from hurry.query.set import AnyOf
from ztfy.i18n.property import I18nTextProperty
from ztfy.workflow.interfaces import IWorkflowContent
class Category(Folder):
"""Category persistence class"""
implements(ICategory)
title = I18nTextProperty(ICategory['title'])
shortname = I18nTextProperty(ICategory['shortname'])
heading = I18nTextProperty(ICategory['heading'])
def getCategoryIds(self):
"""See `ICategory` interface"""
intids = getUtility(IIntIds)
result = [intids.queryId(self), ]
for category in self.values():
result.extend(category.getCategoryIds())
return result
def getVisibleTopics(self):
"""See `ICategory` interface"""
query = getUtility(IQuery)
results = query.searchResults(AnyOf(('Catalog', 'categories'), self.getCategoryIds()))
return sorted([v for v in results if IWorkflowContent(v).isVisible()],
key=lambda x: IWorkflowContent(x).publication_effective_date,
reverse=True)
CATEGORY_MANAGER_ANNOTATIONS_KEY = 'ztfy.blog.category.manager'
@adapter(ICategoryManagerTarget)
@implementer(ICategoryManager)
def CategoryManagerFactory(context):
annotations = IAnnotations(context)
manager = annotations.get(CATEGORY_MANAGER_ANNOTATIONS_KEY)
if manager is None:
manager = annotations[CATEGORY_MANAGER_ANNOTATIONS_KEY] = Category()
alsoProvides(manager, ICategoryManager)
locate(manager, context, '++category++')
return manager
class CategoriesList(Persistent):
"""Content categories container"""
implements(ICategorizedContent)
categories = FieldProperty(ICategorizedContent['categories'])
@property
def categories_ids(self):
intids = getUtility(IIntIds)
return [intids.register(cat) for cat in self.categories]
CATEGORIES_ANNOTATIONS_KEY = 'ztfy.blog.category.content'
@adapter(ICategoriesTarget)
@implementer(ICategorizedContent)
def CategorizedContentFactory(context):
"""Content categories adapter"""
annotations = IAnnotations(context)
container = annotations.get(CATEGORIES_ANNOTATIONS_KEY)
if container is None:
container = annotations[CATEGORIES_ANNOTATIONS_KEY] = CategoriesList()
return container | ztfy.blog | /ztfy.blog-0.6.2.tar.gz/ztfy.blog-0.6.2/src/ztfy/blog/category.py | category.py |
__docformat__ = "restructuredtext"
# import standard packages
import transaction
# import Zope3 interfaces
from z3c.language.negotiator.interfaces import INegotiatorManager
from zope.authentication.interfaces import IAuthentication
from zope.catalog.interfaces import ICatalog
from zope.component.interfaces import IComponentRegistry, ISite
from zope.i18n.interfaces import INegotiator
from zope.intid.interfaces import IIntIds
from zope.processlifetime import IDatabaseOpenedWithRoot
# import local interfaces
from ztfy.base.interfaces import IPathElements, IBaseContentType
from ztfy.blog.interfaces.category import ICategorizedContent
from ztfy.i18n.interfaces.content import II18nBaseContent
from ztfy.security.interfaces import ISecurityManager, ILocalRoleManager, ILocalRoleIndexer
from ztfy.utils.interfaces import INewSiteManagerEvent
from ztfy.utils.timezone.interfaces import IServerTimezone
# import Zope3 packages
from z3c.language.negotiator.app import Negotiator
from zc.catalog.catalogindex import SetIndex, ValueIndex
from zope.app.publication.zopepublication import ZopePublication
from zope.catalog.catalog import Catalog
from zope.component import adapter, queryUtility
from zope.intid import IntIds
from zope.location import locate
from zope.pluggableauth.authentication import PluggableAuthentication
from zope.pluggableauth.plugins.groupfolder import GroupFolder, GroupInformation
from zope.pluggableauth.plugins.principalfolder import PrincipalFolder
from zope.site import hooks
# import local packages
from ztfy.utils.catalog.index import TextIndexNG
from ztfy.utils.site import locateAndRegister
from ztfy.utils.timezone.utility import ServerTimezoneUtility
def updateDatabaseIfNeeded(context):
"""Check for missing utilities at application startup"""
try:
sm = context.getSiteManager()
except:
# context doesn't hold a site manager yet: nothing to check
return
default = sm['default']
# Check for required IIntIds utility
intids = queryUtility(IIntIds)
if intids is None:
intids = default.get('IntIds')
if intids is None:
intids = IntIds()
locate(intids, default)
IComponentRegistry(sm).registerUtility(intids, IIntIds, '')
default['IntIds'] = intids
# Check authentication utility
auth = default.get('Authentication')
if auth is None:
auth = PluggableAuthentication()
locateAndRegister(auth, default, 'Authentication', intids)
auth.credentialsPlugins = [ u'No Challenge if Authenticated',
u'Session Credentials',
u'Zope Realm Basic-Auth' ]
IComponentRegistry(sm).registerUtility(auth, IAuthentication)
if 'users' not in auth:
folder = PrincipalFolder('usr.')
locateAndRegister(folder, auth, 'users', intids)
auth.authenticatorPlugins += ('users',)
groups = auth.get('groups', None)
if groups is None:
groups = GroupFolder('grp.')
locateAndRegister(groups, auth, 'groups', intids)
auth.authenticatorPlugins += ('groups',)
roles_manager = ILocalRoleManager(context, None)
if 'administrators' not in groups:
group = GroupInformation('Administrators (group)', "Group of site services and utilities managers")
locateAndRegister(group, groups, 'administrators', intids)
if (roles_manager is None) or ('zope.Manager' in roles_manager.__roles__):
ISecurityManager(context).grantRole('zope.Manager', 'grp.administrators', False)
if 'managers' not in groups:
group = GroupInformation('Managers (group)', "Group of site managers, who handle the site's structure")
locateAndRegister(group, groups, 'managers', intids)
if (roles_manager is None) or ('ztfy.BlogManager' in roles_manager.__roles__):
ISecurityManager(context).grantRole('ztfy.BlogManager', 'grp.managers', False)
if 'contributors' not in groups:
group = GroupInformation('Contributors (group)', "Group of site contributors, who handle the site's contents")
locateAndRegister(group, groups, 'contributors', intids)
if (roles_manager is None) or ('ztfy.BlogContributor' in roles_manager.__roles__):
ISecurityManager(context).grantRole('ztfy.BlogContributor', 'grp.contributors', False)
if 'operators' not in groups:
group = GroupInformation('Operators (group)', "Group of site operators, who can get access to the management interface")
group.principals = [ 'grp.managers', 'grp.administrators', 'grp.contributors' ]
locateAndRegister(group, groups, 'operators', intids)
if (roles_manager is None) or ('ztfy.BlogOperator' in roles_manager.__roles__):
ISecurityManager(context).grantRole('ztfy.BlogOperator', 'grp.operators', False)
# Check server timezone
tz = queryUtility(IServerTimezone)
if tz is None:
tz = default.get('Timezone')
if tz is None:
tz = ServerTimezoneUtility()
locateAndRegister(tz, default, 'Timezone', intids)
IComponentRegistry(sm).registerUtility(tz, IServerTimezone)
# Check I18n negotiator
i18n = queryUtility(INegotiatorManager)
if i18n is None:
i18n = default.get('I18n')
if i18n is None:
i18n = Negotiator()
locateAndRegister(i18n, default, 'I18n', intids)
i18n.serverLanguage = u'en'
i18n.offeredLanguages = [u'en']
IComponentRegistry(sm).registerUtility(i18n, INegotiator)
IComponentRegistry(sm).registerUtility(i18n, INegotiatorManager)
# Check for required catalog and index
catalog = default.get('Catalog')
if catalog is None:
catalog = Catalog()
locateAndRegister(catalog, default, 'Catalog', intids)
IComponentRegistry(sm).registerUtility(catalog, ICatalog, 'Catalog')
if catalog is not None:
if 'paths' not in catalog:
index = SetIndex('paths', IPathElements, False)
locateAndRegister(index, catalog, 'paths', intids)
if 'categories' not in catalog:
index = SetIndex('categories_ids', ICategorizedContent, False)
locateAndRegister(index, catalog, 'categories', intids)
if 'content_type' not in catalog:
index = ValueIndex('content_type', IBaseContentType, False)
locateAndRegister(index, catalog, 'content_type', intids)
if 'title' not in catalog:
index = TextIndexNG('title shortname description heading', II18nBaseContent, False,
languages=('fr', 'en'),
storage='txng.storages.term_frequencies',
dedicated_storage=False,
use_stopwords=True,
use_normalizer=True,
ranking=True)
locateAndRegister(index, catalog, 'title', intids)
# Check for security catalog and indexes
catalog = default.get('SecurityCatalog')
if catalog is None:
catalog = Catalog()
locateAndRegister(catalog, default, 'SecurityCatalog', intids)
IComponentRegistry(sm).registerUtility(catalog, ICatalog, 'SecurityCatalog')
if catalog is not None:
if 'ztfy.BlogManager' not in catalog:
index = SetIndex('ztfy.BlogManager', ILocalRoleIndexer, False)
locateAndRegister(index, catalog, 'ztfy.BlogManager', intids)
if 'ztfy.BlogContributor' not in catalog:
index = SetIndex('ztfy.BlogContributor', ILocalRoleIndexer, False)
locateAndRegister(index, catalog, 'ztfy.BlogContributor', intids)
if 'ztfy.BlogOperator' not in catalog:
index = SetIndex('ztfy.BlogOperator', ILocalRoleIndexer, False)
locateAndRegister(index, catalog, 'ztfy.BlogOperator', intids)
@adapter(IDatabaseOpenedWithRoot)
def handleOpenedDatabase(event):
db = event.database
connection = db.open()
root = connection.root()
root_folder = root.get(ZopePublication.root_name, None)
for site in root_folder.values():
if ISite(site, None) is not None:
hooks.setSite(site)
updateDatabaseIfNeeded(site)
transaction.commit()
@adapter(INewSiteManagerEvent)
def handleNewSiteManager(event):
updateDatabaseIfNeeded(event.object) | ztfy.blog | /ztfy.blog-0.6.2.tar.gz/ztfy.blog-0.6.2/src/ztfy/blog/database.py | database.py |
__docformat__ = "restructuredtext"
# import standard packages
import pytz
from datetime import datetime
# import Zope3 interfaces
from z3c.language.negotiator.interfaces import INegotiatorManager
from z3c.language.switch.interfaces import II18n
from zope.dublincore.interfaces import IZopeDublinCore
from zope.schema.interfaces import IVocabularyFactory
# import local interfaces
from hurry.query.interfaces import IQuery
from ztfy.blog.interfaces.blog import IBlog, IBlogFolder, IBlogContainer
from ztfy.blog.interfaces.topic import ITopic
# import Zope3 packages
from zope.app.content import queryContentType
from zope.component import getUtility, queryUtility
from zope.site.folder import Folder
from zope.event import notify
from zope.interface import implements, classProvides
from zope.schema.fieldproperty import FieldProperty
from zope.schema.vocabulary import SimpleTerm, SimpleVocabulary
from zope.security.proxy import removeSecurityProxy
from zope.traversing.api import getName, getPath
# import local packages
from hurry.query.set import AnyOf
from ztfy.blog.skin import InheritedSkin
from ztfy.extfile.blob import BlobImage
from ztfy.i18n.property import I18nTextProperty, I18nImageProperty
from ztfy.security.property import RolePrincipalsProperty
from ztfy.utils.security import unproxied
from ztfy.utils.site import NewSiteManagerEvent
from ztfy.utils.traversing import getParent
from ztfy.utils.unicode import translateString
from ztfy.workflow.interfaces import IWorkflowContent
class BlogFolder(Folder):
"""Custom class used to store topics"""
implements(IBlogFolder)
@property
def topics(self):
return [v for v in self.values() if ITopic.providedBy(v)]
class Blog(Folder, InheritedSkin):
implements(IBlog)
__roles__ = ('zope.Manager', 'ztfy.BlogManager', 'ztfy.BlogContributor', 'ztfy.BlogOperator')
title = I18nTextProperty(IBlog['title'])
shortname = I18nTextProperty(IBlog['shortname'])
description = I18nTextProperty(IBlog['description'])
keywords = I18nTextProperty(IBlog['keywords'])
heading = I18nTextProperty(IBlog['heading'])
header = I18nImageProperty(IBlog['header'], klass=BlobImage, img_klass=BlobImage)
illustration = I18nImageProperty(IBlog['illustration'], klass=BlobImage, img_klass=BlobImage)
illustration_title = I18nTextProperty(IBlog['illustration_title'])
visible = FieldProperty(IBlog['visible'])
administrators = RolePrincipalsProperty(IBlog['administrators'], role='ztfy.BlogManager')
contributors = RolePrincipalsProperty(IBlog['contributors'], role='ztfy.BlogContributor')
@property
def content_type(self):
return queryContentType(self).__name__
def setSiteManager(self, sm):
Folder.setSiteManager(self, sm)
notify(NewSiteManagerEvent(self))
@property
def topics(self):
"""See `ITopicContainer` interface"""
query = getUtility(IQuery)
items = query.searchResults(AnyOf(('Catalog', 'paths'), (getPath(self),)))
return sorted([item for item in items if ITopic.providedBy(item)],
key=lambda x: IZopeDublinCore(x).modified,
reverse=True)
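# Note: the query above relies on the 'paths' SetIndex registered by
# ztfy.blog.database; indexing each content's path elements makes a search on
# the blog's own path return all of its inner contents, from which topics are
# then filtered.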
def getVisibleTopics(self):
"""See `ITopicContainer` interface"""
return sorted([t for t in self.topics if IWorkflowContent(t).isVisible()],
key=lambda x: IWorkflowContent(x).publication_effective_date,
reverse=True)
def addTopic(self, topic):
"""See `ITopicContainer` interface"""
# initialize sub-folders
now = datetime.now(pytz.UTC)
year, month = str(now.year), '%02d' % now.month
y_folder = self.get(year)
if y_folder is None:
self[year] = y_folder = BlogFolder()
m_folder = y_folder.get(month)
if m_folder is None:
y_folder[month] = m_folder = BlogFolder()
# lookup server default language, falling back to English
manager = queryUtility(INegotiatorManager)
if manager is not None:
lang = manager.serverLanguage
else:
lang = 'en'
# get topic name
title = translateString(topic.shortname.get(lang), forceLower=True, spaces='-')
if len(title) > 40:
# truncate to 40 characters, then cut at the last dash to avoid ending
# the name in the middle of a word
title = title[:40]
title = title[:title.rfind('-')]
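# ensure a unique name inside the monthly folder, e.g. 'my-topic.html', then
# 'my-topic-01.html', 'my-topic-02.html'... (illustrative names)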
index = 0
base_title = title + '.html'
while base_title in m_folder:
index += 1
base_title = '%s-%02d.html' % (title, index)
m_folder[base_title] = unproxied(topic)
class BlogsVocabulary(SimpleVocabulary):
classProvides(IVocabularyFactory)
def __init__(self, context):
container = getParent(context, IBlogContainer)
terms = [SimpleTerm(removeSecurityProxy(b), getName(b), II18n(b).queryAttribute('title')) for b in container.blogs]
super(BlogsVocabulary, self).__init__(terms) | ztfy.blog | /ztfy.blog-0.6.2.tar.gz/ztfy.blog-0.6.2/src/ztfy/blog/blog.py | blog.py |
__docformat__ = "restructuredtext"
# import standard packages
# import Zope3 interfaces
# import local interfaces
# import Zope3 packages
from zope.interface import Interface
from zope.schema import Bool, Int, TextLine, Choice
from zope.schema.vocabulary import SimpleTerm, SimpleVocabulary
# import local packages
from ztfy.blog import _
class IGoogleAnalytics(Interface):
"""Google analytics interface"""
enabled = Bool(title=_("Activate Google analytics ?"),
description=_("Are Google analytics statistics activated ?"),
required=True,
default=False)
website_id = TextLine(title=_("Web site ID"),
description=_("Google analytics web site ID"),
required=False)
verification_code = TextLine(title=_("Web site verification code"),
description=_("Google site verification code"),
required=False)
BOTTOM = 0
BOTTOM_TOPICS = 1
TOP = 2
TOP_TOPICS = 3
SLOT_POSITIONS_LABELS = (_("Bottom (all pages)"),
_("Bottom (topics only)"),
_("Top (all pages)"),
_("Top (topics only"))
SLOT_POSITIONS = SimpleVocabulary([SimpleTerm(i, i, t) for i, t in enumerate(SLOT_POSITIONS_LABELS)])
class IGoogleAdSense(Interface):
"""GoogleAds interface"""
enabled = Bool(title=_("Activate Google AdSense ?"),
description=_("Integrate GoogleAdSense into your web site ?"),
required=True,
default=False)
client_id = TextLine(title=_("Client ID"),
description=_("Google AdSense client ID"),
required=False)
slot_id = TextLine(title=_("Slot ID"),
description=_("ID of the selected slot"),
required=False)
slot_width = Int(title=_("Slot width"),
description=_("Width of the selected slot, in pixels"),
required=False)
slot_height = Int(title=_("Slot height"),
description=_("Height of the selected slot, in pixels"),
required=False)
slot_position = Choice(title=_("Slot position"),
description=_("Position of the selected slot in the generated pages"),
vocabulary=SLOT_POSITIONS,
default=BOTTOM,
required=True)
def display(context, position):
"""Return boolean value to say if content provider should be displayed""" | ztfy.blog | /ztfy.blog-0.6.2.tar.gz/ztfy.blog-0.6.2/src/ztfy/blog/interfaces/google.py | google.py |
__docformat__ = "restructuredtext"
# import standard packages
# import Zope3 interfaces
from zope.container.interfaces import IContained
# import local interfaces
from ztfy.blog.interfaces.container import IOrderedContainer
from ztfy.i18n.interfaces import II18nAttributesAware
# import Zope3 packages
from zope.container.constraints import containers, contains
from zope.interface import Interface
from zope.schema import TextLine
# import local packages
from ztfy.file.schema import FileField
from ztfy.i18n.schema import I18nText, I18nTextLine, Language
from ztfy.blog import _
#
# Resources management
#
class IResourceNamespaceTarget(Interface):
"""Marker interface for targets handling '++static++' namespace traverser"""
class IResourceInfo(II18nAttributesAware):
"""Static resources base interface"""
title = I18nTextLine(title=_("Title"),
description=_("Name of the resource"),
required=False)
description = I18nText(title=_("Description"),
description=_("Short description of resource content"),
required=False)
content = FileField(title=_("Resource data"),
description=_("Current content of the given external resource"),
required=True)
filename = TextLine(title=_("Filename"),
description=_("Resource's public filename"),
required=False)
language = Language(title=_("Resource language"),
description=_("Actual language of the resource"),
required=False)
class IResource(IResourceInfo, IContained):
"""Static resource interface"""
containers('ztfy.blog.interfaces.resource.IResourceContainer')
class IResourceContainer(IOrderedContainer, IResourceNamespaceTarget):
"""Static resources container interface"""
contains(IResource)
class IResourceContainerTarget(IResourceNamespaceTarget):
"""Marker interface for static resources container target""" | ztfy.blog | /ztfy.blog-0.6.2.tar.gz/ztfy.blog-0.6.2/src/ztfy/blog/interfaces/resource.py | resource.py |
__docformat__ = "restructuredtext"
# import standard packages
# import Zope3 interfaces
from zope.container.interfaces import IContainer
# import local interfaces
from ztfy.blog.interfaces.category import ICategoriesTarget
from ztfy.blog.interfaces.link import ILinkContainerTarget
from ztfy.blog.interfaces.paragraph import IParagraphContainer
from ztfy.blog.interfaces.resource import IResourceContainerTarget
from ztfy.i18n.interfaces.content import II18nBaseContent
from ztfy.workflow.interfaces import IWorkflowTarget
# import Zope3 packages
from zope.container.constraints import containers, contains
from zope.interface import Interface
from zope.schema import Int, Bool, List, Object
# import local packages
from ztfy.blog import _
#
# Topics management
#
class ITopicInfo(II18nBaseContent):
"""Base topic interface"""
publication_year = Int(title=_("Publication year"),
description=_("Topic publication year, used for indexing"),
readonly=True)
publication_month = Int(title=_("Publication month"),
description=_("Topic publication month, used for indexing"),
readonly=True)
commentable = Bool(title=_("Allow comments ?"),
description=_("Are free comments allowed on this topic ?"),
required=True,
default=True)
class ITopicWriter(Interface):
"""Topic writer interface"""
class ITopic(ITopicInfo, ITopicWriter, IParagraphContainer, IWorkflowTarget,
ICategoriesTarget, IResourceContainerTarget, ILinkContainerTarget):
"""Topic full interface"""
containers('ztfy.blog.interfaces.topic.ITopicContainer')
contains('ztfy.blog.interfaces.ITopicElement')
class ITopicContainerInfo(Interface):
"""Topic container interface"""
topics = List(title=_("Topics list"),
value_type=Object(schema=ITopic),
readonly=True)
def getVisibleTopics(request):
"""Get list of topics visible from given request"""
class ITopicContainerWriter(Interface):
"""Topic container writer interface"""
def addTopic(topic):
"""Add new topic to container"""
class ITopicContainer(IContainer, ITopicContainerInfo, ITopicContainerWriter):
"""Topic container interface""" | ztfy.blog | /ztfy.blog-0.6.2.tar.gz/ztfy.blog-0.6.2/src/ztfy/blog/interfaces/topic.py | topic.py |
__docformat__ = "restructuredtext"
# import standard packages
# import Zope3 interfaces
from zope.location.interfaces import ILocation, IPossibleSite
# import local interfaces
from ztfy.blog.interfaces import IMainContent, IBaseContentRoles
from ztfy.blog.interfaces.blog import IBlogContainer
from ztfy.blog.interfaces.category import ICategoryManagerTarget
from ztfy.blog.interfaces.section import ISectionContainer
from ztfy.i18n.interfaces.content import II18nBaseContent
from ztfy.security.interfaces import ILocalRoleManager
from ztfy.skin.interfaces import ICustomBackOfficeInfoTarget
# import Zope3 packages
from zope.container.constraints import contains
from zope.interface import Interface
from zope.schema import List, Object
# import local packages
from ztfy.file.schema import FileField, ImageField
from ztfy.blog import _
#
# Site management
#
class ISiteManagerInfo(II18nBaseContent):
"""Base site interface"""
def getVisibleContents(request):
"""Get list of contents visible from given request"""
class ISiteManagerWriter(Interface):
"""Site writer interface"""
class ISiteManager(ISiteManagerInfo, ISiteManagerWriter, IBaseContentRoles,
ICategoryManagerTarget, ISectionContainer, IBlogContainer,
ICustomBackOfficeInfoTarget, ILocation, IPossibleSite, ILocalRoleManager):
"""Site full interface"""
contains(IMainContent)
class ISiteManagerBackInfo(Interface):
"""Site manager back-office presentation options"""
custom_css = FileField(title=_("Custom CSS"),
description=_("You can provide a custom CSS for your back-office"),
required=False)
custom_banner = ImageField(title=_("Custom banner"),
description=_("You can provide a custom image file which will be displayed on pages top"),
required=False)
custom_logo = ImageField(title=_("Custom logo"),
description=_("You can provide a custom logo which will be displayed on top of left column"),
required=False)
custom_icon = ImageField(title=_("Custom icon"),
description=_("You can provide a custom image file to be used as favorite icon"),
required=False)
class ITreeViewContents(Interface):
"""Marker interface for contents which should be displayed inside tree views"""
values = List(title=_("Container values"),
value_type=Object(schema=Interface),
readonly=True) | ztfy.blog | /ztfy.blog-0.6.2.tar.gz/ztfy.blog-0.6.2/src/ztfy/blog/interfaces/site.py | site.py |
__docformat__ = "restructuredtext"
# import standard packages
# import Zope3 interfaces
from zope.container.interfaces import IContained
# import local interfaces
from ztfy.blog.interfaces import IBaseContent
from ztfy.blog.interfaces.container import IOrderedContainer
from ztfy.i18n.interfaces import II18nAttributesAware
# import Zope3 packages
from zope.container.constraints import containers, contains
from zope.interface import Interface
from zope.schema import Object
# import local packages
from ztfy.base.schema import InternalReference
from ztfy.i18n.schema import I18nText, I18nTextLine, Language
from ztfy.blog import _
#
# Links management
#
class ILinkNamespaceTarget(Interface):
"""Marker interface for targets handling '++links++' namespace traverser"""
class ILinkFormatter(Interface):
"""Link renderer interface"""
def render():
"""Render link's HTML"""
class ILinkChecker(Interface):
"""Link visibility checker interface"""
def canView():
"""Check link visibility"""
class IBaseLinkInfo(II18nAttributesAware):
"""Links base interface"""
title = I18nTextLine(title=_("Title"),
description=_("Displayed link's title"),
required=False)
description = I18nText(title=_("Description"),
description=_("Short description of provided link"),
required=False)
language = Language(title=_("Link's target language"),
description=_("Actual language of link's target content"),
required=False)
def getLink(request=None, view=None):
"""Get full link for given link"""
class IInternalLinkInfo(IBaseLinkInfo):
"""Internal link base interface"""
target_oid = InternalReference(title=_("Link's target"),
description=_("Internal ID of link's target"),
required=True)
target = Object(title=_("Link's target"),
schema=IBaseContent,
readonly=True)
class IInternalLink(IInternalLinkInfo, IContained):
"""Internal link interface"""
containers('ztfy.blog.interfaces.link.ILinkContainer')
class IExternalLinkInfo(IBaseLinkInfo):
"""External link base interface"""
url = I18nTextLine(title=_("URL"),
description=_("External URL ; maybe http:, https:, mailto:..."),
required=True)
class IExternalLink(IExternalLinkInfo, IContained):
"""External link interface"""
containers('ztfy.blog.interfaces.link.ILinkContainer')
class ILinkContainer(IOrderedContainer, ILinkNamespaceTarget):
"""Links container interface"""
contains(IBaseLinkInfo)
def getVisibleLinks():
"""Get list of visible links"""
class ILinkContainerTarget(ILinkNamespaceTarget):
"""Marker interface for links container target""" | ztfy.blog | /ztfy.blog-0.6.2.tar.gz/ztfy.blog-0.6.2/src/ztfy/blog/interfaces/link.py | link.py |
__docformat__ = "restructuredtext"
# import standard packages
# import Zope3 interfaces
from zope.container.interfaces import IContainer, IContained
# import local interfaces
from ztfy.i18n.interfaces import II18nAttributesAware
# import Zope3 packages
from zope.container.constraints import containers, contains
from zope.interface import Interface
from zope.schema import List, Object, Int
# import local packages
from ztfy.i18n.schema import I18nText, I18nTextLine
from ztfy.blog import _
#
# Categories management interfaces
#
class ICategoryInfo(II18nAttributesAware):
"""Marker interface used to handle circular references"""
title = I18nTextLine(title=_("Title"),
description=_("Title of the category"),
required=True)
shortname = I18nTextLine(title=_("Short name"),
description=_("Short name of the category"),
required=True)
heading = I18nText(title=_("Heading"),
description=_("Short description of the category"),
required=False)
def getCategoryIds():
"""Get IDs of category and sub-categories"""
def getVisibleTopics():
"""Get list of visible topics matching this category"""
class ICategoryWriter(Interface):
"""Category writer interface"""
class ICategory(ICategoryInfo, ICategoryWriter, IContainer, IContained):
"""Category full interface"""
contains('ztfy.blog.interfaces.category.ICategory')
containers('ztfy.blog.interfaces.category.ICategory',
'ztfy.blog.interfaces.category.ICategoryManager')
class ICategoryManager(ICategory):
"""Categories management interface"""
class ICategoryManagerTarget(Interface):
"""Marker interface for categories management"""
class ICategorizedContent(Interface):
"""Content catagory target interface"""
categories = List(title=_("Categories"),
description=_("List of categories associated with this content"),
required=False,
default=[],
value_type=Object(schema=ICategory))
categories_ids = List(title=_("Categories IDs"),
description=_("Internal IDs of content's categories, used for indexing"),
required=False,
readonly=True,
value_type=Int())
class ICategoriesTarget(Interface):
"""Marker interface for contents handling categories""" | ztfy.blog | /ztfy.blog-0.6.2.tar.gz/ztfy.blog-0.6.2/src/ztfy/blog/interfaces/category.py | category.py |
__docformat__ = "restructuredtext"

# import standard packages

# import Zope3 interfaces
from z3c.language.switch.interfaces import II18n
from zope.intid.interfaces import IIntIds
from zope.publisher.interfaces import NotFound

# import local interfaces
from ztfy.blog.browser.interfaces import ISectionAddFormMenuTarget, ITopicAddFormMenuTarget, \
                                         ISiteManagerTreeView
from ztfy.blog.browser.interfaces.skin import ISectionIndexView
from ztfy.blog.interfaces import ISkinnable, IBaseContentRoles
from ztfy.blog.interfaces.section import ISection, ISectionInfo, ISectionContainer
from ztfy.skin.interfaces.container import IContainerBaseView, IActionsColumn, IContainerTableViewActionsCell
from ztfy.skin.layer import IZTFYBrowserLayer, IZTFYBackLayer

# import Zope3 packages
from z3c.form import field
from z3c.template.template import getLayoutTemplate
from zope.component import adapts, getUtility
from zope.i18n import translate
from zope.interface import implements
from zope.traversing.browser import absoluteURL

# import local packages
from ztfy.blog.browser.skin import SkinSelectWidgetFactory
from ztfy.blog.section import Section
from ztfy.security.browser.roles import RolesEditForm
from ztfy.skin.container import OrderedContainerBaseView
from ztfy.skin.content import BaseContentDefaultBackViewAdapter
from ztfy.skin.form import AddForm, EditForm
from ztfy.skin.menu import MenuItem, DialogMenuItem
from ztfy.skin.presentation import BasePresentationEditForm, BaseIndexView
from ztfy.utils.unicode import translateString

from ztfy.blog import _


class SectionTreeViewDefaultViewAdapter(BaseContentDefaultBackViewAdapter):

    adapts(ISection, IZTFYBackLayer, ISiteManagerTreeView)

    viewname = '@@contents.html'

    def getAbsoluteURL(self):
        intids = getUtility(IIntIds)
        return '++oid++%d/%s' % (intids.register(self.context), self.viewname)
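
# Illustration only: assuming the IIntIds utility registers the section under
# the integer id 42, the adapter above yields the relative URL
# '++oid++42/@@contents.html'; the '++oid++' traversal namespace is then
# expected to resolve that id back to the section object.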

class SectionContainerContentsViewMenu(MenuItem):
    """Sections container contents menu"""

    title = _("Section contents")


class SectionContainerContentsView(OrderedContainerBaseView):
    """Sections container contents view"""

    implements(ISectionAddFormMenuTarget, ITopicAddFormMenuTarget)

    legend = _("Container's sections")
    cssClasses = {'table': 'orderable'}


class SectionContainerContentsViewCellActions(object):

    adapts(ISectionContainer, IZTFYBrowserLayer, IContainerBaseView, IActionsColumn)
    implements(IContainerTableViewActionsCell)

    def __init__(self, context, request, view, column):
        self.context = context
        self.request = request
        self.view = view
        self.column = column

    @property
    def content(self):
        # only offer the "delete" action for sections without sub-sections or topics
        container = ISectionContainer(self.context)
        if not (container.sections or container.topics):
            klass = "workflow icon icon-trash"
            intids = getUtility(IIntIds)
            return '''<span class="%s" title="%s" onclick="$.ZTFY.container.remove(%s,this);"></span>''' % (klass,
                       translate(_("Delete section"), context=self.request),
                       intids.register(self.context))
        return ''

class SectionAddFormMenu(MenuItem):
    """Sections container add form menu"""

    title = _(" :: Add section...")


class SectionAddForm(AddForm):

    implements(ISectionAddFormMenuTarget)

    @property
    def title(self):
        return II18n(self.context).queryAttribute('title', request=self.request)

    legend = _("Adding new section")

    fields = field.Fields(ISectionInfo, ISkinnable)
    fields['skin'].widgetFactory = SkinSelectWidgetFactory

    def updateWidgets(self):
        super(SectionAddForm, self).updateWidgets()
        self.widgets['heading'].cols = 80
        self.widgets['heading'].rows = 10
        self.widgets['description'].cols = 80
        self.widgets['description'].rows = 3

    def create(self, data):
        section = Section()
        section.shortname = data.get('shortname', {})
        return section

    def add(self, section):
        # derive the section's container key from its default-language short name
        language = II18n(self.context).getDefaultLanguage()
        name = translateString(section.shortname.get(language), forceLower=True, spaces='-')
        ids = list(self.context.keys()) + [name, ]
        self.context[name] = section
        self.context.updateOrder(ids)

    def nextURL(self):
        return '%s/@@contents.html' % absoluteURL(self.context, self.request)
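
# Illustration only (assumed behaviour of ztfy.utils.unicode.translateString
# with the arguments used in add() above): a default-language short name such
# as u'My First Section' would become the lowercase, dash-separated container
# key 'my-first-section' before the section is stored and ordered.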

class SectionEditForm(EditForm):

    legend = _("Section properties")

    fields = field.Fields(ISectionInfo, ISkinnable)
    fields['skin'].widgetFactory = SkinSelectWidgetFactory

    def updateWidgets(self):
        super(SectionEditForm, self).updateWidgets()
        self.widgets['heading'].cols = 80
        self.widgets['heading'].rows = 10
        self.widgets['description'].cols = 80
        self.widgets['description'].rows = 3


class SectionRolesEditForm(RolesEditForm):

    interfaces = (IBaseContentRoles,)
    layout = getLayoutTemplate()
    parent_interface = ISection


class SectionRolesMenuItem(DialogMenuItem):
    """Section roles menu item"""

    title = _(":: Roles...")
    target = SectionRolesEditForm


class SectionPresentationEditForm(BasePresentationEditForm):
    """Section presentation edit form"""

    legend = _("Edit section presentation properties")

    parent_interface = ISection


class BaseSectionIndexView(BaseIndexView):
    """Base section index view"""

    implements(ISectionIndexView)

    def update(self):
        # sections flagged as not visible are hidden from the public skin
        if not self.context.visible:
            raise NotFound(self.context, 'index.html', self.request)
        super(BaseSectionIndexView, self).update()
        self.topics = self.context.getVisibleTopics()

# === end of src/ztfy/blog/browser/section.py (ztfy.blog 0.6.2) ===

__docformat__ = "restructuredtext"

# import standard packages
from cStringIO import StringIO
import tarfile
import zipfile

# import Zope3 interfaces
from z3c.language.switch.interfaces import II18n
from zope.app.file.interfaces import IFile, IImage
from zope.dublincore.interfaces import IZopeDublinCore
from zope.intid.interfaces import IIntIds
from zope.traversing.interfaces import TraversalError

# import local interfaces
from ztfy.blog.interfaces.resource import IResource, IResourceInfo, IResourceContainer, IResourceContainerTarget
from ztfy.file.interfaces import IImageDisplay
from ztfy.skin.interfaces.container import IActionsColumn, IContainerTableViewActionsCell
from ztfy.skin.layer import IZTFYBrowserLayer, IZTFYBackLayer

# import Zope3 packages
from z3c.form import field
from z3c.formjs import ajax
from z3c.table.column import Column
from z3c.template.template import getLayoutTemplate
from zope.component import adapts, getUtility, queryMultiAdapter
from zope.event import notify
from zope.i18n import translate
from zope.interface import implements, Interface
from zope.lifecycleevent import ObjectCreatedEvent
from zope.publisher.browser import BrowserPage, BrowserView
from zope.traversing import namespace
from zope.traversing.api import getParent as getParentAPI, getName
from zope.traversing.browser import absoluteURL

# import local packages
from ztfy.blog.resource import Resource
from ztfy.file.schema import FileField
from ztfy.i18n.browser import ztfy_i18n
from ztfy.skin.container import OrderedContainerBaseView
from ztfy.skin.content import BaseContentDefaultBackViewAdapter
from ztfy.skin.form import DialogAddForm, DialogEditForm
from ztfy.skin.menu import MenuItem, DialogMenuItem
from ztfy.utils.container import getContentName
from ztfy.utils.traversing import getParent

from ztfy.blog import _


class ResourceDefaultViewAdapter(BaseContentDefaultBackViewAdapter):

    adapts(IResource, IZTFYBackLayer, Interface)

    def getAbsoluteURL(self):
        return '''javascript:$.ZTFY.dialog.open('%s/%s')''' % (absoluteURL(self.context, self.request), self.viewname)

class ResourceContainerNamespaceTraverser(namespace.view):
    """++static++ namespace"""

    def traverse(self, name, ignored):
        result = getParent(self.context, IResourceContainerTarget)
        if result is not None:
            return IResourceContainer(result)
        raise TraversalError('++static++')


class IResourceAddFormMenuTarget(Interface):
    """Marker interface for resource add menu"""


class ResourceContainerContentsViewMenuItem(MenuItem):
    """Resources container contents menu"""

    title = _("Resources")


class IResourceContainerContentsView(Interface):
    """Marker interface for resource container contents view"""


class ResourceContainerContentsView(OrderedContainerBaseView):
    """Resources container contents view"""

    implements(IResourceAddFormMenuTarget, IResourceContainerContentsView)

    legend = _("Topic resources")
    cssClasses = {'table': 'orderable'}

    @property
    def values(self):
        return IResourceContainer(self.context).values()

    @ajax.handler
    def ajaxRemove(self):
        oid = self.request.form.get('id')
        if oid:
            intids = getUtility(IIntIds)
            target = intids.getObject(int(oid))
            parent = getParentAPI(target)
            del parent[getName(target)]
            return "OK"
        return "NOK"

    @ajax.handler
    def ajaxUpdateOrder(self):
        self.updateOrder(IResourceContainer(self.context))
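
# Note (assumption about the companion JavaScript): the '$.ZTFY.form.remove()'
# call emitted by the actions column adapter below is expected to invoke
# ajaxRemove with a form variable 'id' holding the resource's integer id, as
# registered with the IIntIds utility; the handler answers the bare strings
# "OK" or "NOK" so the client can decide whether to drop the table row.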

class IResourceContainerPreviewColumn(Interface):
    """Marker interface for resource container preview column"""


class ResourceContainerPreviewColumn(Column):
    """Resource container preview column"""

    implements(IResourceContainerPreviewColumn)

    header = u''
    weight = 5
    cssClasses = {'th': 'preview',
                  'td': 'preview'}

    def renderCell(self, item):
        image = IImage(item.content, None)
        if image is None:
            return u''
        i18n = II18n(image, None)
        if i18n is None:
            alt = IZopeDublinCore(image).title
        else:
            alt = II18n(image).queryAttribute('title', request=self.request)
        display = IImageDisplay(image).getDisplay('64x64')
        return '''<img src="%s" alt="%s" />''' % (absoluteURL(display, self.request), alt)


class IResourceContainerSizeColumn(Interface):
    """Marker interface for resource container size column"""


class ResourceContainerSizeColumn(Column):
    """Resource container size column"""

    implements(IResourceContainerSizeColumn)

    header = _("Size")
    weight = 15
    cssClasses = {'td': 'size'}

    def __init__(self, context, request, table):
        super(ResourceContainerSizeColumn, self).__init__(context, request, table)
        self.formatter = self.request.locale.numbers.getFormatter('decimal')

    def renderCell(self, item):
        file = IFile(item.content, None)
        if file is None:
            return u''
        size = file.getSize()
        if size < 1024:
            return translate(_("%d bytes"), context=self.request) % size
        size = size / 1024.0
        if size < 1024:
            return translate(_("%s Kb"), context=self.request) % self.formatter.format(size, '0.00')
        size = size / 1024.0
        return translate(_("%s Mb"), context=self.request) % self.formatter.format(size, '0.00')

class ResourceContainerContentsViewActionsColumnCellAdapter(object):

    adapts(IResource, IZTFYBrowserLayer, IResourceContainerContentsView, IActionsColumn)
    implements(IContainerTableViewActionsCell)

    def __init__(self, context, request, view, column):
        self.context = context
        self.request = request
        self.view = view
        self.column = column
        self.intids = getUtility(IIntIds)

    @property
    def content(self):
        klass = "workflow icon icon-trash"
        result = '''<span class="%s" title="%s" onclick="$.ZTFY.form.remove(%d, this);"></span>''' % (klass,
                     translate(_("Delete resource"), context=self.request),
                     self.intids.register(self.context))
        return result


class ResourceContainerResourcesList(BrowserView):
    """Get list of images resources used by HTML editor"""

    def getImagesList(self):
        self.request.response.setHeader('Content-Type', 'text/javascript')
        resources = IResourceContainer(self.context).values()
        images = [img for img in resources if img.content.contentType.startswith('image/')]
        return '''var tinyMCEImageList = new Array(
%s
);''' % ',\n'.join(['["%s","%s"]' % (II18n(img).queryAttribute('title', request=self.request) or ('{{ %s }}' % getName(img)),
                    absoluteURL(img, self.request)) for img in images])

    def getLinksList(self):
        self.request.response.setHeader('Content-Type', 'text/javascript')
        resources = IResourceContainer(self.context).values()
        return '''var tinyMCELinkList = new Array(
%s
);''' % ',\n'.join(['["%s","%s"]' % (II18n(res).queryAttribute('title', request=self.request) or ('{{ %s }}' % getName(res)),
                    absoluteURL(res, self.request)) for res in resources])
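
# Illustration only: for a container holding a single image titled 'Logo',
# getImagesList() would answer JavaScript shaped like (URL hypothetical):
#
#     var tinyMCEImageList = new Array(
#         ["Logo","http://example.org/blog/++static++/logo.png"]
#     );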

class ResourceAddForm(DialogAddForm):
    """Resource add form"""

    legend = _("Adding new resource")

    fields = field.Fields(IResourceInfo)
    layout = getLayoutTemplate()
    parent_interface = IResourceContainerTarget
    parent_view = ResourceContainerContentsView
    handle_upload = True
    resources = (ztfy_i18n,)

    def create(self, data):
        return Resource()

    def add(self, resource):
        # use the filename given in the form, falling back to the upload's own name
        prefix = self.prefix + self.widgets.prefix
        filename = self.request.form.get(prefix + 'filename')
        if not filename:
            filename = self.request.form.get(prefix + 'content').filename
        name = getContentName(self.context, filename)
        self.context[name] = resource


class ResourceContainerAddResourceMenuItem(DialogMenuItem):
    """Resource add menu"""

    title = _(":: Add resource...")
    target = ResourceAddForm

class IZipResourceAddInfo(Interface):
    """ZipResourceAddForm schema"""

    content = FileField(title=_("Archive data"),
                        description=_("Archive contents will be extracted as resources; format can be any ZIP, tar.gz or tar.bz2 file"),
                        required=True)

class ZipArchiveExtractor(object):
    """Read members of an in-memory ZIP archive"""

    def __init__(self, data):
        self.data = zipfile.ZipFile(StringIO(data), 'r')

    def getMembers(self):
        return self.data.infolist()

    def getFilename(self, member):
        return member.filename

    def extract(self, member):
        return self.data.read(member.filename)


class TarArchiveExtractor(object):
    """Read members of an in-memory tar archive (compression is auto-detected)"""

    def __init__(self, data):
        self.data = tarfile.open(fileobj=StringIO(data), mode='r')

    def getMembers(self):
        return self.data.getmembers()

    def getFilename(self, member):
        return member.name

    def extract(self, member):
        output = self.data.extractfile(member)
        if output is not None:
            return output.read()
        return None
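
# Minimal usage sketch (illustration only, not part of the original module):
# both extractor classes expose the same informal interface -- getMembers(),
# getFilename() and extract() -- so callers can pick one and use it blindly,
# exactly as ResourcesFromZipAddForm.createAndAdd() does below.
#
#     data = open('/tmp/pictures.zip', 'rb').read()   # hypothetical archive
#     extractor = ZipArchiveExtractor(data)
#     for member in extractor.getMembers():
#         print extractor.getFilename(member), len(extractor.extract(member) or '')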

class ResourcesFromZipAddForm(DialogAddForm):
    """Add a set of resources included in a ZIP archive file"""

    legend = _("Adding new resources from ZIP file")

    fields = field.Fields(IZipResourceAddInfo)
    layout = getLayoutTemplate()
    parent_interface = IResourceContainerTarget
    parent_view = ResourceContainerContentsView
    handle_upload = True
    resources = (ztfy_i18n,)

    def createAndAdd(self, data):
        prefix = self.prefix + self.widgets.prefix
        filename = self.request.form.get(prefix + 'content').filename
        if filename.lower().endswith('.zip'):
            extractor = ZipArchiveExtractor
        else:
            # anything else is assumed to be a (possibly compressed) tar archive
            extractor = TarArchiveExtractor
        content = data.get('content')
        if isinstance(content, tuple):
            content = content[0]
        extractor = extractor(content)
        for info in extractor.getMembers():
            content = extractor.extract(info)
            if content:
                resource = Resource()
                notify(ObjectCreatedEvent(resource))
                name = getContentName(self.context, extractor.getFilename(info))
                self.context[name] = resource
                resource.filename = name
                resource.content = content

class ResourceContainerAddResourcesFromZipMenuItem(DialogMenuItem):
    """Resources from ZIP add menu"""

    title = _(":: Add resources from archive...")
    target = ResourcesFromZipAddForm


class ResourceEditForm(DialogEditForm):
    """Resource edit form"""

    legend = _("Edit resource properties")

    fields = field.Fields(IResourceInfo)
    layout = getLayoutTemplate()
    parent_interface = IResourceContainerTarget
    parent_view = ResourceContainerContentsView
    handle_upload = True


class ResourceIndexView(BrowserPage):
    """Resource default view"""

    def __call__(self):
        view = queryMultiAdapter((self.context.content, self.request), Interface, 'index.html')
        return view()

# === end of src/ztfy/blog/browser/resource.py (ztfy.blog 0.6.2) ===