# Adifpy
## Table of Contents
- [Introduction](#Introduction)
- [Background](#Background)
- [How to Use](#How-to-Use)
- [Software Organization](#Software-Organization)
- [Directory Structure](#Directory-Structure)
- [Subpackages](#Subpackages)
- [Implementation](#Implementation)
- [Libraries](#Libraries)
- [Modules and Classes](#Modules-and-Classes)
- [Elementary Functions](#Elementary-Functions)
- [Extension](#Extension)
- [Reverse Mode](#Reverse-Mode)
- [Visualization](#Visualization)
- [Impact](#Impact)
- [Future](#Future)
## Introduction
This software allows users to evaluate and differentiate their functions. It is a powerful tool for finding derivatives of complex functions and visualizing the results, and it is more efficient than alternatives such as symbolic differentiation.
Applications are widespread, ranging from graphing simple functions to taking derivatives of complex, high-dimensional functions, with uses in optimization problems, machine learning, and data analysis.
## Background
Traditional methods for differentiation include symbolic differentiation and numerical differentiation. Each of these techniques brings its own challenges when used for computational science - symbolic differentiation requires converting complex computer programs into simple components and often results in complex and cryptic expressions, while numerical differentiation is susceptible to floating point and rounding errors.
Automatic differentiation (AD) solves these problems: any mathematical function (for which a derivative is needed) can be broken down into a series of constituent elementary (binary and unary) operations, executed in a specific order on a predetermined set of inputs. A technique for visualizing the sequence of operations corresponding to the function is the computational graph, with nodes representing intermediate variables and lines leaving from nodes representing operations used on intermediate variables. AD combines the known derivatives of the constituent elementary operations (e.g. arithmetic and transcendental functions) via the chain rule to find the derivative of the overall composition.
For example, for the hypothetical function `y = h(g(f(x)))`, where `f`, `g`, and `h` all represent elementary operations, we can pose `v_0 = x`, `v_1 = f(v_0)`, `v_2 = g(v_1)`, `y = v_3 = h(v_2)`. The desired output is `dy/dx`, and by the chain rule and simple derivatives, we obtain:
<p align="center">
<img src="https://latex.codecogs.com/gif.image?%5Cbg_white%20%5Cdpi%7B110%7D%5Cfrac%7Bdy%7D%7Bdx%7D%20=%20%5Cfrac%7Bdv_3%7D%7Bdv_2%7D%20%5Ccdot%20%5Cfrac%7Bdv_2%7D%7Bdv_1%7D%20%5Ccdot%20%5Cfrac%7Bdv_1%7D%7Bdv_0%7D">
</p>
Our implementation of AD uses dual numbers to calculate derivatives of individual components. Dual numbers have real and dual components, taking the form `a + bε`, where `ε² = 0` and `a` and `b` are real. By the Taylor series expansion of a function around a point, notice that evaluating a function at `a + ε` yields:
<p align="center">
<img src="https://latex.codecogs.com/gif.image?%5Cbg_white%20%5Cdpi%7B110%7Df(a%20+%20%5Cepsilon)%20=%20f(a)%20+%20%5Cfrac%7Bf'(a)%7D%7B1!%7D%20%5Cepsilon%20+%20%5Cfrac%7Bf''(a)%7D%7B2!%7D%20%5Cepsilon%5E2%20+%20...%20=%20f(a)%20+%20f'(a)%20%5Cepsilon">
</p>
Hence, by evaluating the function at the desired point `a + ε`, the outputted real and dual components are the function evaluated at `a` and the derivative of the function evaluated at `a`, respectively. This is an efficient way of calculating the requisite derivatives.
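To make the dual-number mechanics concrete, here is a minimal, self-contained sketch of the idea (illustrative only, not Adifpy's actual `DualNumber` class), propagating a derivative through `f(x) = x^2 + 3x` at `a = 2`:
```
class Dual:
    """A toy dual number a + b*eps, where eps**2 == 0."""
    def __init__(self, real, dual):
        self.real, self.dual = real, dual

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other, 0)
        return Dual(self.real + other.real, self.dual + other.dual)

    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other, 0)
        # (a + b*eps) * (c + d*eps) = a*c + (a*d + b*c)*eps, since eps**2 = 0
        return Dual(self.real * other.real,
                    self.real * other.dual + self.dual * other.real)

    __rmul__ = __mul__

x = Dual(2, 1)         # evaluate at a = 2; seed the dual part with 1
y = x * x + 3 * x      # f(x) = x**2 + 3x
print(y.real, y.dual)  # prints "10 7": f(2) = 10 and f'(2) = 7
```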
## How to Use
First, ensure that you are using Python 3.10 or newer. All future steps can/should be completed in a virtual environment so as not to pollute your base Python installation. To create and activate a new virtual environment, use the following:
```
python3 -m venv [desired/path/to/venv]
source [desired/path/to/venv]/bin/activate
```
Next, clone the package from this GitHub repository and install the needed dependencies and the package:
```
git clone https://code.harvard.edu/CS107/team33.git
python3 -m pip install -r requirements.txt
python3 -m pip install .
```
Now, you're ready to use the package. Continue to the [Example](#Example) to test out the package!
### Example
First, import the package in your Python code:
```
import Adifpy
```
and create an `Evaluator` object, which takes a callable function as an argument:
```
evaluator = Adifpy.Evaluator(lambda x : x**2)
```
Next, we want to find the value and derivative of the function at a point (currently, only scalar functions with 1 input and 1 output are supported). We can use the `Evaluator`'s `eval` function, passing in the point at which you want to evaluate (and optionally, a scalar seed vector):
```
output = evaluator.eval(3)
```
This function returns a tuple, in the form `(value, derivative)`, where the value is the evaluation of the function at that point (in this case, 9) and the derivative is the derivative of the function at that point (in this case, 6).
Additionally, a seed vector (for now, only scalars of type `int` or `float` are supported) can be passed to take the derivative with respect to a different seed. For example, to take the derivative with respect to a seed vector of `2`, you could call the following:
```
output2 = evaluator.eval(3, seed_vector=2)
```
which would return `(9, 12)` (since the directional derivative is in the same direction, with twice the magnitude).
## Software Organization
The following section outlines our plans for organizing the package directory, sub-packages, modules, classes, and deployment.
### Directory Structure
<pre>
adifpy/
├── docs
│   ├── milestone1
│   ├── milestone2
│   ├── milestone2_progress
│   └── documentation
├── LICENSE
├── README.md
├── requirements.txt
├── pyproject.toml
├── Adifpy
│   ├── differentiate
│   │   ├── <a href="#dual_numberpy">dual_number.py</a>
│   │   ├── elementary_functions.py
│   │   ├── <a href="#evaluatorpy">evaluator.py</a>
│   │   ├── <a href="#forward_modepy">forward_mode.py</a>
│   │   ├── <a href="#function_treepy">function_tree.py</a>
│   │   └── <a href="#reverse_modepy">reverse_mode.py</a>
│   ├── visualize
│   │   └── <a href="#graph_functionpy">graph_function.py</a>
│   ├── test
│   │   ├── README.md
│   │   ├── run_tests.sh
│   │   ├── test_dual_number.py
│   │   └── ... (unit and integration tests)
│   ├── __init__.py
│   └── config.py
└── .github
    └── workflows
        ├── coverage.yaml
        └── test.yaml
</pre>
### Subpackages
The `Adifpy` directory contains the source code of our package, which contains 3 subpackages: `differentiate`, `visualize`, and `test`, described below.
#### Differentiate
The differentiate subpackage currently contains modules required to perform forward mode AD on functions from R to R. Contained in this subpackage are the modules `dual_number.py`, `elementary_functions.py`, `evaluator.py`, `forward_mode.py`, `function_tree.py`, and `reverse_mode.py`. For more information on each module, see [Modules and Classes](#Modules-and-Classes).
#### Visualize
This subpackage has not been implemented yet. Check out our implementation plan [below](#Visualization).
#### Test
The test suite is contained in the test sub-package, as shown above in the [Directory Structure](#Directory-Structure). The test directory contains a `run_tests.sh`, which installs the package and runs the relevant `pytest` commands to display data on the testing suite (similar to the CI workflows).
The individual test files, each named in the `test_*.py` format, test different aspects of the package. Within each file, each function (also named `test_*`) tests a smaller detail of that aspect. For example, the `test_dual_number.py` module tests the implementation of the `DualNumber` class, with each function exercising one of the overloaded operators. Thus, error messaging will be comprehensive should one of these operators be changed and fail to work.
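As a hedged illustration of that convention (the constructor signature `DualNumber(real, dual)`, the `real`/`dual` attributes, and the import path are assumed here; the suite's actual tests may differ):
```
# test_dual_number.py (illustrative sketch of the test_* convention)
from Adifpy.differentiate.dual_number import DualNumber  # assumed import path

def test_add():
    result = DualNumber(1.0, 2.0) + DualNumber(3.0, 4.0)
    assert result.real == 4.0
    assert result.dual == 6.0

def test_mul_scalar():
    result = DualNumber(1.0, 2.0) * 3
    assert result.real == 3.0
    assert result.dual == 6.0
```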
The easiest way to run the test suite is to go to the `test` directory and run `./run_tests.sh`.
## Implementation
Major data structures, including descriptions on how dual numbers are implemented, are described in the [Modules and Classes](#Modules-and-Classes) section below.
### Libraries
The `differentiate` sub-package requires the `NumPy` library. Additionally, the `visualization` sub-package will require `MatPlotLib` for displaying graphs. Additional libraries may be required later for additional ease of computation or visualization.
These requirements are specified in the `requirements.txt` for easy installation.
### Modules and Classes
#### `dual_number.py`
The `DualNumber` class, stored in this module, contains the functionality for dual numbers for automatic differentiation. When a forward pass (in forward mode) is performed on a user function, a `DualNumber` object is passed in to mimic the function's numeric or vector input. All of `DualNumber`'s major mathematical dunder methods are overloaded so that the `DualNumber` is updated by each of the function's elementary operations.
Each of the binary dunder methods (addition, division, etc.) works with both other numeric types (integers and floats) and other `DualNumber`s.
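For example, a division overload in that style might look like the following sketch (the `real`/`dual` attribute names are assumed for illustration, and the real class overloads many more dunders):
```
class DualNumber:
    def __init__(self, real, dual):
        self.real, self.dual = real, dual

    def __truediv__(self, other):
        if not isinstance(other, DualNumber):
            other = DualNumber(other, 0.0)  # promote plain ints and floats
        # quotient rule: (u/v)' = (u'v - u v') / v**2
        return DualNumber(self.real / other.real,
                          (self.dual * other.real - self.real * other.dual)
                          / other.real ** 2)
```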
#### `evaluator.py`
The `Evaluator` class, stored in this module, is the user's main communication with the package. An `Evaluator` object is defined by its function, which is provided by the user on creation. A derivative can be calculated at any point, with any seed vector, by calling an `Evaluator`'s `eval` function. The `Evaluator` class ensures that a user's function is valid, decides whether to use forward or reverse mode (based on performance), and returns the derivative on `eval` calls.
*When reverse mode is implemented, the `Evaluator` class may also contain optimizations for making future `eval` calls faster by storing a computational graph.*
#### `forward_mode.py`
This module contains only the `forward_mode` method, which takes a user function, evaluation point, and seed vector. Its implementation is incredibly simple: a `DualNumber` is created with the real part as the evaluation point and the dual part as the seed vector. This `DualNumber` is then passed through the user's function, and the resulting real and dual components of the output `DualNumber` are the function output and derivative.
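That description corresponds to roughly the following sketch (a paraphrase of the logic, not the module's verbatim source):
```
from Adifpy.differentiate.dual_number import DualNumber  # assumed import path

def forward_mode(func, point, seed_vector):
    # real part = evaluation point, dual part = seed vector
    dual_input = DualNumber(point, seed_vector)
    output = func(dual_input)
    return output.real, output.dual  # (value, directional derivative)
```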
#### `function_tree.py`
The `FunctionTree` class, stored in this module, is a representation of a computational graph in the form of a tree, where intermediate variables are stored as nodes. The parent-child relationship between these nodes represents the elementary operations for these intermediate variables. This class contains optimizations like ensuring duplicate nodes are avoided.
*This module is currently unused (and un-implemented). When reverse mode is implemented, a given `Evaluator` object will build up and store a `FunctionTree` for optimization.*
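Since the module is unimplemented, the following is purely speculative, but a tree with the duplicate-node check described above could take a shape like this:
```
class Node:
    def __init__(self, operation, children):
        self.operation, self.children = operation, children

class FunctionTree:
    def __init__(self):
        self._nodes = {}  # (operation, operand ids) -> Node

    def add_node(self, operation, *children):
        # reuse the existing node when the same intermediate variable recurs
        key = (operation, tuple(id(c) for c in children))
        if key not in self._nodes:
            self._nodes[key] = Node(operation, children)
        return self._nodes[key]
```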
#### `reverse_mode.py`
This module contains only the `reverse_mode` method, which takes the same arguments as `forward_mode`. This function is not yet implemented.
#### `graph_tree.py`
This module will contain functionality for displaying a presentable representation of a computation graph in an image. Using a `FunctionTree` object, the resulting image will be a tree-like structure with nodes and connections representing intermediate variables and elementary operations. This functionality is not yet implemented.
#### `graph_function.py`
This module will contain functionality for graphing a function and its derivative. It will create an `Evaluator` object and make the necessary `eval` calls to fill a graph for display. This functionality is not yet implemented.
### Elementary Functions
Many elementary functions (trigonometric, inverse trigonometric, exponential, etc.) cannot be overloaded via Python's dunder methods the way addition and subtraction can. However, a user must still be able to use these operations in their functions, and cannot use the standard `math` or `np` versions, since a `DualNumber` object is passed to the function during forward passes.
Thus, we define a module `elementary_functions.py` containing methods that take a `DualNumber` and return a `DualNumber`, with the real part equal to the elementary operation applied to the real part, and the derivative of the operation applied to the dual part. These functions are essentially our package's **storage** for the common derivatives (e.g., cosine is the derivative of sine): storing a derivative amounts to assigning the dual part of each operation's output.
These operations will be automatically imported in the package's `__init__.py` so that users can simply call `Adifpy.sin()` or `Adifpy.cos()`, as they would with `np.sin()` and `np.cos()`. (For this milestone, our implementation requires users to call `ef.sin()` and `ef.cos()` instead.)
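For instance, a `sin` written in this style stores sine's derivative (cosine) by construction of the output's dual part (a sketch, assuming a `DualNumber(real, dual)` constructor and import path):
```
import numpy as np

from Adifpy.differentiate.dual_number import DualNumber  # assumed import path

def sin(x):
    if isinstance(x, DualNumber):
        # chain rule: d/dx sin(u) = cos(u) * u'
        return DualNumber(np.sin(x.real), np.cos(x.real) * x.dual)
    return np.sin(x)  # plain numbers fall through to NumPy
```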
## Extension
Now that our forward mode implementation is complete, we will move on to implement additional features and conveniences for the user.
### Reverse Mode
We will implement reverse mode AD in the `differentiate` subpackage. Given that we have already been quizzed on the background math, encoding this process should not be too onerous. One of the biggest challenges we foresee is determining when it is best to use reverse mode and when it is best to use forward mode. It is clearly better to use forward mode when there are far more outputs than inputs (and vice versa for reverse mode), but when the numbers of inputs and outputs are similar, the choice is not so simple. To address this, we will run a series of practical tests on functions of different dimensions and manually encode the most efficient choices into `evaluator.py`, as sketched below.
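Until those benchmarks exist, the dispatch could reduce to a heuristic like the following sketch (the tie-breaking rule here is a placeholder, not a measured result):
```
# assumes the two dispatch targets exist, per the modules described above
from Adifpy.differentiate.forward_mode import forward_mode
from Adifpy.differentiate.reverse_mode import reverse_mode

def choose_mode(num_inputs, num_outputs):
    # forward mode costs roughly one pass per input;
    # reverse mode costs roughly one pass per output
    return forward_mode if num_inputs <= num_outputs else reverse_mode
```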
### Visualization
We are planning to create a visualization tool with `MatPlotLib` that can plot the computational graph (calculated in reverse mode) of simple functions being differentiated. The computational graph of very complex functions with many inputs and outputs can be impractical to represent on a screen, so one of the biggest challenges we will face is having our program determine when it can produce a visualization that renders easily, and when it cannot.
## Impact
## Future
| Adifpy | /adifpy-0.0.3.tar.gz/adifpy-0.0.3/docs/documentation.md | documentation.md |
## Distribution of Tasks
- [User Interaction](#User-Interaction)
- [Differentiation](#Differentiation)
- [Visualization](#Visualization)
- [Testing Suite](#Testing-Suite)
### User Interaction
**Aaron** will handle the interaction with the user, which includes the `main.py` file and the `construct` sub-package, which includes the `function_tree.py` and `node.py` files.
### Differentiation
**Ream** and **Alex** will handle the `differentiate` sub-package, which includes implementing Dual Numbers (in `dual_number.py`) and forward and reverse pass (in `forward_pass.py` and `reverse_pass.py`).
### Visualization
**Jack** will handle the `visualize` sub-package, which includes the `graph_tree.py` file and any other visualizations we find are practical and useful.
### Testing Suite
**Eli** will lead the testing suite (the `test` sub-package) and all of its unit and other tests. **Jack** will also assist with the testing suite, since the workload may be very high (especially in the beginning for creating black box tests).
## Progress
Before the deadline for Milestone 2B, each group member will have completed the outlines for their part of the package. This outline includes creating the Python files, classes, and functions. No implementation for these will be completed yet, so all functions will just `pass` for now.
This will allow us to have a better idea of our own workloads and re-distribute tasks if needed.

| Adifpy | /adifpy-0.0.3.tar.gz/adifpy-0.0.3/docs/milestone2_progress.md | milestone2_progress.md |
import math
import matplotlib.pyplot as plt
from .Generaldistribution import Distribution
class Gaussian(Distribution):
""" Gaussian distribution class for calculating and
visualizing a Gaussian distribution.
Attributes:
mean (float) representing the mean value of the distribution
stdev (float) representing the standard deviation of the distribution
data_list (list of floats) a list of floats extracted from the data file
"""
def __init__(self, mu=0, sigma=1):
Distribution.__init__(self, mu, sigma)
def calculate_mean(self):
"""Function to calculate the mean of the data set.
Args:
None
Returns:
float: mean of the data set
"""
avg = 1.0 * sum(self.data) / len(self.data)
self.mean = avg
return self.mean
def calculate_stdev(self, sample=True):
"""Function to calculate the standard deviation of the data set.
Args:
sample (bool): whether the data represents a sample or population
Returns:
float: standard deviation of the data set
"""
if sample:
n = len(self.data) - 1
else:
n = len(self.data)
mean = self.calculate_mean()
sigma = 0
for d in self.data:
sigma += (d - mean) ** 2
sigma = math.sqrt(sigma / n)
self.stdev = sigma
return self.stdev
def plot_histogram(self):
"""Function to output a histogram of the instance variable data using
matplotlib pyplot library.
Args:
None
Returns:
None
"""
plt.hist(self.data)
plt.title('Histogram of Data')
plt.xlabel('data')
plt.ylabel('count')
def pdf(self, x):
"""Probability density function calculator for the gaussian distribution.
Args:
x (float): point for calculating the probability density function
Returns:
float: probability density function output
"""
return (1.0 / (self.stdev * math.sqrt(2*math.pi))) * math.exp(-0.5*((x - self.mean) / self.stdev) ** 2)
def plot_histogram_pdf(self, n_spaces = 50):
"""Function to plot the normalized histogram of the data and a plot of the
probability density function along the same range
Args:
n_spaces (int): number of data points
Returns:
list: x values for the pdf plot
list: y values for the pdf plot
"""
mu = self.mean
sigma = self.stdev
min_range = min(self.data)
max_range = max(self.data)
# calculates the interval between x values
interval = 1.0 * (max_range - min_range) / n_spaces
x = []
y = []
# calculate the x values to visualize
for i in range(n_spaces):
tmp = min_range + interval*i
x.append(tmp)
y.append(self.pdf(tmp))
# make the plots
fig, axes = plt.subplots(2,sharex=True)
fig.subplots_adjust(hspace=.5)
axes[0].hist(self.data, density=True)
axes[0].set_title('Normed Histogram of Data')
axes[0].set_ylabel('Density')
axes[1].plot(x, y)
axes[1].set_title('Normal Distribution for \n Sample Mean and Sample Standard Deviation')
        axes[1].set_ylabel('Density')
plt.show()
return x, y
def __add__(self, other):
"""Function to add together two Gaussian distributions
Args:
other (Gaussian): Gaussian instance
Returns:
Gaussian: Gaussian distribution
"""
result = Gaussian()
result.mean = self.mean + other.mean
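        # variances of independent Gaussians add, so standard deviations combine in quadrature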
result.stdev = math.sqrt(self.stdev ** 2 + other.stdev ** 2)
return result
def __repr__(self):
"""Function to output the characteristics of the Gaussian instance
Args:
None
Returns:
string: characteristics of the Gaussian
"""
return "mean {}, standard deviation {}".format(self.mean, self.stdev) | Adithya-Gaussian-Distribution | /Adithya%20Gaussian%20Distribution-0.1.tar.gz/Adithya Gaussian Distribution-0.1/Adithya Gaussian Distribution/Gaussiandistribution.py | Gaussiandistribution.py |
import math
import matplotlib.pyplot as plt
from .Generaldistribution import Distribution
class Binomial(Distribution):
""" Binomial distribution class for calculating and
visualizing a Binomial distribution.
Attributes:
mean (float) representing the mean value of the distribution
stdev (float) representing the standard deviation of the distribution
data_list (list of floats) a list of floats to be extracted from the data file
p (float) representing the probability of an event occurring
n (int) number of trials
"""
def __init__(self, prob=.5, size=20):
self.n = size
self.p = prob
Distribution.__init__(self, self.calculate_mean(), self.calculate_stdev())
def calculate_mean(self):
"""Function to calculate the mean from p and n
Args:
None
Returns:
float: mean of the data set
"""
self.mean = self.p * self.n
return self.mean
def calculate_stdev(self):
"""Function to calculate the standard deviation from p and n.
Args:
None
Returns:
float: standard deviation of the data set
"""
self.stdev = math.sqrt(self.n * self.p * (1 - self.p))
return self.stdev
def replace_stats_with_data(self):
"""Function to calculate p and n from the data set
Args:
None
Returns:
float: the p value
float: the n value
"""
self.n = len(self.data)
self.p = 1.0 * sum(self.data) / len(self.data)
self.mean = self.calculate_mean()
self.stdev = self.calculate_stdev()
def plot_bar(self):
"""Function to output a histogram of the instance variable data using
matplotlib pyplot library.
Args:
None
Returns:
None
"""
plt.bar(x = ['0', '1'], height = [(1 - self.p) * self.n, self.p * self.n])
plt.title('Bar Chart of Data')
plt.xlabel('outcome')
plt.ylabel('count')
def pdf(self, k):
"""Probability density function calculator for the gaussian distribution.
Args:
x (float): point for calculating the probability density function
Returns:
float: probability density function output
"""
a = math.factorial(self.n) / (math.factorial(k) * (math.factorial(self.n - k)))
b = (self.p ** k) * (1 - self.p) ** (self.n - k)
return a * b
def plot_bar_pdf(self):
"""Function to plot the pdf of the binomial distribution
Args:
None
Returns:
list: x values for the pdf plot
list: y values for the pdf plot
"""
x = []
y = []
# calculate the x values to visualize
for i in range(self.n + 1):
x.append(i)
y.append(self.pdf(i))
# make the plots
plt.bar(x, y)
plt.title('Distribution of Outcomes')
plt.ylabel('Probability')
plt.xlabel('Outcome')
plt.show()
return x, y
def __add__(self, other):
"""Function to add together two Binomial distributions with equal p
Args:
other (Binomial): Binomial instance
Returns:
Binomial: Binomial distribution
"""
try:
assert self.p == other.p, 'p values are not equal'
except AssertionError as error:
raise
result = Binomial()
result.n = self.n + other.n
result.p = self.p
result.calculate_mean()
result.calculate_stdev()
return result
def __repr__(self):
"""Function to output the characteristics of the Binomial instance
Args:
None
Returns:
string: characteristics of the Gaussian
"""
return "mean {}, standard deviation {}, p {}, n {}".\
            format(self.mean, self.stdev, self.p, self.n)

| Adithya-Gaussian-Distribution | /Adithya%20Gaussian%20Distribution-0.1.tar.gz/Adithya Gaussian Distribution-0.1/Adithya Gaussian Distribution/Binomialdistribution.py | Binomialdistribution.py |
__project__ = "Draugr"
__author__ = "Christian Heider Nielsen"
__version__ = "1.0.1"
__doc__ = r"""
Created on 27/04/2019
@author: cnheider
"""
import datetime
import os
from logging import warning
from pathlib import Path
from typing import Any
import pkg_resources
from apppath import AppPath
with open(Path(__file__).parent / "README.md", "r") as this_init_file:
__doc__ += this_init_file.read()
# with open(Path(__file__).parent.parent / "README.md", "r") as this_init_file:
# __doc__ += this_init_file.read()
# __all__ = ["PROJECT_APP_PATH", "PROJECT_NAME", "PROJECT_VERSION", "get_version"]
def dist_is_editable(dist: Any) -> bool:
"""
Return True if given Distribution is an editable installation."""
import sys
for path_item in sys.path:
egg_link = Path(path_item) / f"{dist.project_name}.egg-link"
if egg_link.is_file():
return True
return False
PROJECT_ORGANISATION = "pything"
PROJECT_NAME = __project__.lower().strip().replace(" ", "_")
PROJECT_VERSION = __version__
PROJECT_YEAR = 2018
PROJECT_AUTHOR = __author__.lower().strip().replace(" ", "_")
PROJECT_APP_PATH = AppPath(app_name=PROJECT_NAME, app_author=PROJECT_AUTHOR)
PACKAGE_DATA_PATH = Path(pkg_resources.resource_filename(PROJECT_NAME, "data"))
INCLUDE_PROJECT_READMES = False
distributions = {v.key: v for v in pkg_resources.working_set}
if PROJECT_NAME in distributions:
distribution = distributions[PROJECT_NAME]
DEVELOP = dist_is_editable(distribution)
else:
DEVELOP = True
def get_version(append_time: Any = DEVELOP) -> str:
"""description"""
version = __version__
if not version:
version = os.getenv("VERSION", "0.0.0")
if append_time:
now = datetime.datetime.utcnow()
date_version = now.strftime("%Y%m%d%H%M%S")
# date_version = time.time()
if version:
# Most git tags are prefixed with 'v' (example: v1.2.3) this is
# never desirable for artifact repositories, so we strip the
# leading 'v' if it's present.
version = (
version[1:]
if isinstance(version, str) and version.startswith("v")
else version
)
else:
# Default version is an ISO8601 compliant datetime. PyPI doesn't allow
# the colon ':' character in its versions, and time is required to allow
# for multiple publications to master in one day. This datetime string
# uses the 'basic' ISO8601 format for both its date and time components
# to avoid issues with the colon character (ISO requires that date and
# time components of a date-time string must be uniformly basic or
# extended, which is why the date component does not have dashes.
#
# Publications using datetime versions should only be made from master
# to represent the HEAD moving forward.
warning(
f"Environment variable VERSION is not set, only using datetime: {date_version}"
)
# warn(f'Environment variable VERSION is not set, only using timestamp: {version}')
version = f"{version}.{date_version}"
return version
if __version__ is None:
__version__ = get_version(append_time=True)
__version_info__ = tuple(int(segment) for segment in __version__.split("."))

| Adjacency | /Adjacency-1.0.1-py3-none-any.whl/adjacency/__init__.py | __init__.py |
__all__ = ('adjax_response',)
from django.http import HttpResponse
from django.shortcuts import render_to_response, redirect
from django.template import RequestContext
from django.conf import settings
from django.core.serializers import json, serialize
from django.db.models.query import QuerySet
from django.utils import simplejson
from django.utils.functional import Promise
from django.utils.encoding import force_unicode

from adjax.base import get_store
class LazyEncoder(json.DjangoJSONEncoder):
def default(self, obj):
if isinstance(obj, Promise):
return force_unicode(obj)
return super(LazyEncoder, self).default(obj)
class JsonResponse(HttpResponse):
    def __init__(self, obj):
        if isinstance(obj, QuerySet):
            content = serialize('json', obj)
        else:
            content = simplejson.dumps(
                obj, indent=2, cls=LazyEncoder,
                ensure_ascii=False)
super(JsonResponse, self).__init__(
content, content_type='application/json')
# Where to redirect to when view is called without an ajax request.
DEFAULT_REDIRECT = getattr(settings, 'ADJAX_DEFAULT_REDIRECT', None)
ADJAX_CONTEXT_KEY = 'adjax'
def adjax_response(func):
""" Renders the response using JSON, if appropriate.
"""
# TODO allow a template to be given for non-ajax requests
template_name = None
def wrapper(request, *args, **kw):
output = func(request, *args, **kw)
store = get_store(request)
# If a dict is given, add that to the output
if output is None:
output = {}
elif isinstance(output, dict):
output = output.copy()
output.pop('request', None)
for key, val in output.items():
store.extra(key, val)
# Intercept redirects
elif isinstance(output, HttpResponse) and output.status_code in (301, 302):
store.redirect(output['Location'])
if request.is_ajax():
return store.json_response
if isinstance(output, dict):
# If we have a template, render that
if template_name:
output.setdefault(ADJAX_CONTEXT_KEY, store)
return render_to_response(template_name, output, context_instance=RequestContext(request))
# Try and redirect somewhere useful
if 'HTTP_REFERER' in request.META:
return redirect(request.META['HTTP_REFERER'])
elif DEFAULT_REDIRECT:
return redirect(DEFAULT_REDIRECT)
else:
return HttpResponse()
return output
    return wrapper

| Adjax | /Adjax-1.0.1.tar.gz/Adjax-1.0.1/adjax/decorators.py | decorators.py |
from utils import get_key, JsonResponse, get_template_include_key
from django.contrib import messages
from django.core import urlresolvers
from django.template.context import RequestContext
from django.template.loader import render_to_string
def get_store(request):
""" Gets a relevant store object from the given request. """
if not hasattr(request, '_adjax_store'):
request._adjax_store = AdjaxStore(request)
return request._adjax_store
class AdjaxStore(object):
""" This class will help store ajax data collected in views. """
def __init__(self, request):
self.request = request
self.update_data = {}
self.form_data = {}
self.replace_data = {}
self.hide_data = []
self.extra_data = {}
self.redirect_data = None
@property
def messages_data(self):
return [{'tags': m.tags, 'content': unicode(m), 'level': m.level} for m in messages.get_messages(self.request)]
def update(self, obj, attributes=None):
""" Make values from a given object available. """
for attr in attributes:
value = getattr(obj, attr)
if callable(value):
value = value()
self.update_data[get_key(obj, attr)] = value
def form(self, form_obj):
""" Validate the given form and send errors to browser. """
if not form_obj.is_valid():
for name, errors in form_obj.errors.items():
if form_obj.prefix:
key = 'id_%s-%s' % (form_obj.prefix, name)
else:
key = 'id_%s' % name
self.form_data[key] = errors
def replace(self, element, html):
""" Replace the given DOM element with the given html.
The DOM element is specified using css identifiers.
Some javascript libraries may have an extended syntax,
which can be used if you don't value portability.
"""
self.replace_data[element] = html
def hide(self, element):
""" Hides the given DOM element.
The DOM element is specified using css identifiers.
Some javascript libraries may have an extended syntax,
which can be used if you don't value portability.
"""
self.hide_data.append(element)
def redirect(self, to, *args, **kwargs):
""" Redirect the browser dynamically to another page. """
if hasattr(to, 'get_absolute_url'):
self.redirect_data = to.get_absolute_url()
return
try:
self.redirect_data = urlresolvers.reverse(to, args=args, kwargs=kwargs)
return
except urlresolvers.NoReverseMatch:
# If this is a callable, re-raise.
if callable(to):
raise
# If this doesn't "feel" like a URL, re-raise.
if '/' not in to and '.' not in to:
raise
# Finally, fall back and assume it's a URL
self.redirect_data = to
def extra(self, key, value):
""" Send additional information to the browser. """
self.extra_data[key] = value
def render_to_response(self, template_name, dictionary=None, prefix=None, context_instance=None):
""" Update any included templates. """
# Because we have access to the request object, we can use request context
# This is not analogous to render_to_strings interface
if context_instance is None:
context_instance = RequestContext(self.request)
rendered_content = render_to_string(template_name, dictionary, context_instance=context_instance)
dom_element = ".%s" % get_template_include_key(template_name, prefix)
self.replace(dom_element, rendered_content)
@property
def json_response(self):
""" Return a json response with our ajax data """
elements = (
('extra', self.extra_data),
('messages', self.messages_data),
('forms', self.form_data),
('replace', self.replace_data),
('hide', self.hide_data),
('update', self.update_data),
('redirect', self.redirect_data),
)
        return JsonResponse(dict((a, b) for a, b in elements if b))

| Adjax | /Adjax-1.0.1.tar.gz/Adjax-1.0.1/adjax/base.py | base.py |
from django.core.serializers import json, serialize
from django.http import HttpResponse
from django.utils import simplejson
from django.db.models.query import QuerySet
try:
import hashlib
hash_function = hashlib.sha1
except ImportError:
import sha
hash_function = sha.new
def get_key(instance, field_name):
""" Returns the key that will be used to identify dynamic fields in the DOM. """
# TODO: Avoid any characters that may not appear in class names
m = instance._meta
return '-'.join(('data', m.app_label, m.object_name, str(instance.pk), field_name))
def get_template_include_key(template_name, prefix=None):
""" Get a valid element class name, we'll stick to ascii letters, numbers and hyphens.
NB class names cannot start with a hyphen
"""
digest = int(hash_function(template_name).hexdigest(),16)
hash = base36.from_decimal(digest)
if prefix:
return 'tpl-%s-%s' % (prefix, hash)
else:
return 'tpl-%s' % (hash)
class JsonResponse(HttpResponse):
def __init__(self, obj):
if isinstance(obj, QuerySet):
content = serialize('json', obj)
else:
content = simplejson.dumps(obj, indent=2, cls=json.DjangoJSONEncoder, ensure_ascii=False)
super(JsonResponse, self).__init__(content, content_type='application/json')
"""
Convert numbers from base 10 integers to base X strings and back again.
Sample usage:
>>> base20 = BaseConverter('0123456789abcdefghij')
>>> base20.from_decimal(1234)
'31e'
>>> base20.to_decimal('31e')
1234
From http://www.djangosnippets.org/snippets/1431/
"""
class BaseConverter(object):
decimal_digits = "0123456789"
def __init__(self, digits):
self.digits = digits
def from_decimal(self, i):
return self.convert(i, self.decimal_digits, self.digits)
def to_decimal(self, s):
return int(self.convert(s, self.digits, self.decimal_digits))
def convert(number, fromdigits, todigits):
# Based on http://code.activestate.com/recipes/111286/
if str(number)[0] == '-':
number = str(number)[1:]
neg = 1
else:
neg = 0
# make an integer out of the number
x = 0
for digit in str(number):
x = x * len(fromdigits) + fromdigits.index(digit)
# create the result in base 'len(todigits)'
if x == 0:
res = todigits[0]
else:
res = ""
while x > 0:
digit = x % len(todigits)
res = todigits[digit] + res
x = int(x / len(todigits))
if neg:
res = '-' + res
return res
convert = staticmethod(convert)
base36 = BaseConverter('0123456789abcdefghijklmnopqrstuvwxyz')

| Adjax | /Adjax-1.0.1.tar.gz/Adjax-1.0.1/adjax/utils.py | utils.py |
__all__ = ('adjax_response', 'success', 'info', 'warning', 'error', 'debug',
'redirect', 'update', 'form', 'replace', 'hide', 'extra',
'render_to_response')
__version_info__ = ('1', '0', '1')
__version__ = '.'.join(__version_info__)
__authors__ = ["Will Hardy <[email protected]>"]
from adjax.decorators import adjax_response
from adjax.base import get_store
from django.contrib.messages import success, info, warning, error, debug
def update(request, obj, attributes=None):
""" Sends the updated version of the given attributes on the given object.
If no attributes are given, all attributes are sent (be careful if you
don't want all data to be public).
If a minus sign is in front of an attribute, it is omitted.
A mix of attribtue names with and without minus signs is just silly.
No other attributes will be included.
"""
    store = get_store(request)
    if not attributes or all(a.startswith("-") for a in attributes):
        # fall back to all attributes, excluding any explicitly omitted ones
        omitted = set(a[1:] for a in attributes or ())
        attributes = [a for a in obj.__dict__.keys() if a not in omitted]
    store.update(obj, (a for a in attributes if not a.startswith("-")))
def form(request, form_obj):
""" Validate the given form and send errors to browser. """
get_store(request).form(form_obj)
def replace(request, element, html):
""" Replace the given DOM element with the given html.
The DOM element is specified using css identifiers.
Some javascript libraries may have an extended syntax,
which can be used if you don't value portability.
"""
get_store(request).replace(element, html)
def redirect(request, path):
""" Redirect the browser dynamically to another page. """
get_store(request).redirect(path)
def hide(request, element):
""" Hides the given DOM element.
The DOM element is specified using css identifiers.
Some javascript libraries may have an extended syntax,
which can be used if you don't value portability.
"""
get_store(request).hide(element)
def extra(request, key, value):
""" Send additional information to the browser. """
get_store(request).extra(key, value)
def render_to_response(request, template_name, context=None, prefix=None):
""" Update any included templates. """
    get_store(request).render_to_response(template_name, context, prefix)

| Adjax | /Adjax-1.0.1.tar.gz/Adjax-1.0.1/adjax/__init__.py | __init__.py |
from django import template
from django.template.loader import get_template
from adjax.utils import get_key, get_template_include_key
from django.conf import settings
register = template.Library()
def adjax(parser, token):
try:
tag_name, object_name = token.split_contents()
except ValueError:
raise template.TemplateSyntaxError, "%r tag requires a single argument" % token.contents.split()[0]
return DynamicValueNode(object_name)
class DynamicValueNode(template.Node):
def __init__(self, object_name):
self.object_name, self.field_name = object_name.rsplit(".", 1)
self.instance = template.Variable(self.object_name)
self.value = template.Variable(object_name)
def render(self, context):
instance = self.instance.resolve(context)
if hasattr(instance, '_meta'):
return '<span class="%s">%s</span>' % (get_key(instance, self.field_name), self.value.resolve(context))
def adjax_include(parser, token):
bits = token.split_contents()
try:
tag_name, template_name = bits[:2]
except ValueError:
raise template.TemplateSyntaxError, "%r tag requires a template name" % bits[0]
kwargs = {}
for arg in bits[2:]:
key, value = arg.split("=", 1)
if key in ('prefix', 'wrapper'):
kwargs[str(key)] = value
else:
raise template.TemplateSyntaxError, "invalid argument (%s) for %r tag" % (key, tag_name)
return AdjaxIncludeNode(template_name, **kwargs)
class AdjaxIncludeNode(template.Node):
def __init__(self, template_name, prefix=None, wrapper='"div"'):
self.template_name = template.Variable(template_name)
self.prefix = prefix and template.Variable(prefix) or None
self.wrapper = template.Variable(wrapper)
def render(self, context):
template_name = self.template_name.resolve(context)
wrapper = self.wrapper.resolve(context)
prefix = self.prefix and self.prefix.resolve(context) or None
key = get_template_include_key(template_name, prefix)
try:
content = get_template(template_name).render(context)
return '<%s class="%s">%s</%s>' % (wrapper, key, content, wrapper)
except template.TemplateSyntaxError, e:
if settings.TEMPLATE_DEBUG:
raise
return ''
except:
return '' # Like Django, fail silently for invalid included templates.
# Register our tags
register.tag('adjax', adjax)
register.tag('adjax_include', adjax_include)

| Adjax | /Adjax-1.0.1.tar.gz/Adjax-1.0.1/adjax/templatetags/ajax.py | ajax.py |
import sys
DEFAULT_VERSION = "0.6c9"
DEFAULT_URL = "http://pypi.python.org/packages/%s/s/setuptools/" % sys.version[:3]
md5_data = {
'setuptools-0.6b1-py2.3.egg': '8822caf901250d848b996b7f25c6e6ca',
'setuptools-0.6b1-py2.4.egg': 'b79a8a403e4502fbb85ee3f1941735cb',
'setuptools-0.6b2-py2.3.egg': '5657759d8a6d8fc44070a9d07272d99b',
'setuptools-0.6b2-py2.4.egg': '4996a8d169d2be661fa32a6e52e4f82a',
'setuptools-0.6b3-py2.3.egg': 'bb31c0fc7399a63579975cad9f5a0618',
'setuptools-0.6b3-py2.4.egg': '38a8c6b3d6ecd22247f179f7da669fac',
'setuptools-0.6b4-py2.3.egg': '62045a24ed4e1ebc77fe039aa4e6f7e5',
'setuptools-0.6b4-py2.4.egg': '4cb2a185d228dacffb2d17f103b3b1c4',
'setuptools-0.6c1-py2.3.egg': 'b3f2b5539d65cb7f74ad79127f1a908c',
'setuptools-0.6c1-py2.4.egg': 'b45adeda0667d2d2ffe14009364f2a4b',
'setuptools-0.6c2-py2.3.egg': 'f0064bf6aa2b7d0f3ba0b43f20817c27',
'setuptools-0.6c2-py2.4.egg': '616192eec35f47e8ea16cd6a122b7277',
'setuptools-0.6c3-py2.3.egg': 'f181fa125dfe85a259c9cd6f1d7b78fa',
'setuptools-0.6c3-py2.4.egg': 'e0ed74682c998bfb73bf803a50e7b71e',
'setuptools-0.6c3-py2.5.egg': 'abef16fdd61955514841c7c6bd98965e',
'setuptools-0.6c4-py2.3.egg': 'b0b9131acab32022bfac7f44c5d7971f',
'setuptools-0.6c4-py2.4.egg': '2a1f9656d4fbf3c97bf946c0a124e6e2',
'setuptools-0.6c4-py2.5.egg': '8f5a052e32cdb9c72bcf4b5526f28afc',
'setuptools-0.6c5-py2.3.egg': 'ee9fd80965da04f2f3e6b3576e9d8167',
'setuptools-0.6c5-py2.4.egg': 'afe2adf1c01701ee841761f5bcd8aa64',
'setuptools-0.6c5-py2.5.egg': 'a8d3f61494ccaa8714dfed37bccd3d5d',
'setuptools-0.6c6-py2.3.egg': '35686b78116a668847237b69d549ec20',
'setuptools-0.6c6-py2.4.egg': '3c56af57be3225019260a644430065ab',
'setuptools-0.6c6-py2.5.egg': 'b2f8a7520709a5b34f80946de5f02f53',
'setuptools-0.6c7-py2.3.egg': '209fdf9adc3a615e5115b725658e13e2',
'setuptools-0.6c7-py2.4.egg': '5a8f954807d46a0fb67cf1f26c55a82e',
'setuptools-0.6c7-py2.5.egg': '45d2ad28f9750e7434111fde831e8372',
'setuptools-0.6c8-py2.3.egg': '50759d29b349db8cfd807ba8303f1902',
'setuptools-0.6c8-py2.4.egg': 'cba38d74f7d483c06e9daa6070cce6de',
'setuptools-0.6c8-py2.5.egg': '1721747ee329dc150590a58b3e1ac95b',
'setuptools-0.6c9-py2.3.egg': 'a83c4020414807b496e4cfbe08507c03',
'setuptools-0.6c9-py2.4.egg': '260a2be2e5388d66bdaee06abec6342a',
'setuptools-0.6c9-py2.5.egg': 'fe67c3e5a17b12c0e7c541b7ea43a8e6',
'setuptools-0.6c9-py2.6.egg': 'ca37b1ff16fa2ede6e19383e7b59245a',
}
import sys, os
try: from hashlib import md5
except ImportError: from md5 import md5
def _validate_md5(egg_name, data):
if egg_name in md5_data:
digest = md5(data).hexdigest()
if digest != md5_data[egg_name]:
print >>sys.stderr, (
"md5 validation of %s failed! (Possible download problem?)"
% egg_name
)
sys.exit(2)
return data
def use_setuptools(
version=DEFAULT_VERSION, download_base=DEFAULT_URL, to_dir=os.curdir,
download_delay=15
):
"""Automatically find/download setuptools and make it available on sys.path
`version` should be a valid setuptools version number that is available
as an egg for download under the `download_base` URL (which should end with
a '/'). `to_dir` is the directory where setuptools will be downloaded, if
it is not already available. If `download_delay` is specified, it should
be the number of seconds that will be paused before initiating a download,
should one be required. If an older version of setuptools is installed,
this routine will print a message to ``sys.stderr`` and raise SystemExit in
an attempt to abort the calling script.
"""
was_imported = 'pkg_resources' in sys.modules or 'setuptools' in sys.modules
def do_download():
egg = download_setuptools(version, download_base, to_dir, download_delay)
sys.path.insert(0, egg)
import setuptools; setuptools.bootstrap_install_from = egg
try:
import pkg_resources
except ImportError:
return do_download()
try:
pkg_resources.require("setuptools>="+version); return
except pkg_resources.VersionConflict, e:
if was_imported:
print >>sys.stderr, (
"The required version of setuptools (>=%s) is not available, and\n"
"can't be installed while this script is running. Please install\n"
" a more recent version first, using 'easy_install -U setuptools'."
"\n\n(Currently using %r)"
) % (version, e.args[0])
sys.exit(2)
else:
del pkg_resources, sys.modules['pkg_resources'] # reload ok
return do_download()
except pkg_resources.DistributionNotFound:
return do_download()
def download_setuptools(
version=DEFAULT_VERSION, download_base=DEFAULT_URL, to_dir=os.curdir,
delay = 15
):
"""Download setuptools from a specified location and return its filename
`version` should be a valid setuptools version number that is available
as an egg for download under the `download_base` URL (which should end
with a '/'). `to_dir` is the directory where the egg will be downloaded.
`delay` is the number of seconds to pause before an actual download attempt.
"""
import urllib2, shutil
egg_name = "setuptools-%s-py%s.egg" % (version,sys.version[:3])
url = download_base + egg_name
saveto = os.path.join(to_dir, egg_name)
src = dst = None
if not os.path.exists(saveto): # Avoid repeated downloads
try:
from distutils import log
if delay:
log.warn("""
---------------------------------------------------------------------------
This script requires setuptools version %s to run (even to display
help). I will attempt to download it for you (from
%s), but
you may need to enable firewall access for this script first.
I will start the download in %d seconds.
(Note: if this machine does not have network access, please obtain the file
%s
and place it in this directory before rerunning this script.)
---------------------------------------------------------------------------""",
version, download_base, delay, url
); from time import sleep; sleep(delay)
log.warn("Downloading %s", url)
src = urllib2.urlopen(url)
# Read/write all in one block, so we don't create a corrupt file
# if the download is interrupted.
data = _validate_md5(egg_name, src.read())
dst = open(saveto,"wb"); dst.write(data)
finally:
if src: src.close()
if dst: dst.close()
return os.path.realpath(saveto)
def main(argv, version=DEFAULT_VERSION):
"""Install or upgrade setuptools and EasyInstall"""
try:
import setuptools
except ImportError:
egg = None
try:
egg = download_setuptools(version, delay=0)
sys.path.insert(0,egg)
from setuptools.command.easy_install import main
return main(list(argv)+[egg]) # we're done here
finally:
if egg and os.path.exists(egg):
os.unlink(egg)
else:
if setuptools.__version__ == '0.0.1':
print >>sys.stderr, (
"You have an obsolete version of setuptools installed. Please\n"
"remove it from your system entirely before rerunning this script."
)
sys.exit(2)
req = "setuptools>="+version
import pkg_resources
try:
pkg_resources.require(req)
except pkg_resources.VersionConflict:
try:
from setuptools.command.easy_install import main
except ImportError:
from easy_install import main
main(list(argv)+[download_setuptools(delay=0)])
sys.exit(0) # try to force an exit
else:
if argv:
from setuptools.command.easy_install import main
main(argv)
else:
print "Setuptools version",version,"or greater has been installed."
print '(Run "ez_setup.py -U setuptools" to reinstall or upgrade.)'
def update_md5(filenames):
"""Update our built-in md5 registry"""
import re
for name in filenames:
base = os.path.basename(name)
f = open(name,'rb')
md5_data[base] = md5(f.read()).hexdigest()
f.close()
data = [" %r: %r,\n" % it for it in md5_data.items()]
data.sort()
repl = "".join(data)
import inspect
srcfile = inspect.getsourcefile(sys.modules[__name__])
f = open(srcfile, 'rb'); src = f.read(); f.close()
match = re.search("\nmd5_data = {\n([^}]+)}", src)
if not match:
print >>sys.stderr, "Internal error!"
sys.exit(2)
src = src[:match.start(1)] + repl + src[match.end(1):]
f = open(srcfile,'w')
f.write(src)
f.close()
if __name__=='__main__':
if len(sys.argv)>2 and sys.argv[1]=='--md5update':
update_md5(sys.argv[2:])
else:
        main(sys.argv[1:])

| Adjector | /Adjector-1.0b1.tar.gz/Adjector-1.0b1/ez_setup.py | ez_setup.py |
Adjector 1.0b
*************
Hi there. Thanks for using Adjector, a lightweight, flexible, open-source
ad server written in Python.
Adjector is licensed under the GPL, version 2 or 3, at your option.
For more information, see LICENSE.txt.
This Distribution
-----------------
This is the main Adjector distribution. A client-only version and a Trac
plugin are also available. They can be downloaded at
http://projects.icapsid.net/adjector/wiki/Download
Documentation
-------------
All of our documentation is online at
http://projects.icapsid.net/adjector
You may wish to get started with 'Installing Adjector' at
http://projects.icapsid.net/adjector/wiki/Install
For questions, comments, help, or any other information, visit us online
or email [email protected]. | Adjector | /Adjector-1.0b1.tar.gz/Adjector-1.0b1/README.txt | README.txt |
"""Setup the adjector application"""
import elixir
import logging
from pylons import config
from adjector.config.environment import load_environment
from adjector.core.conf import conf as adjector_conf
from adjector.lib.util import import_module
from adjector.lib.precache import precache_zone
log = logging.getLogger(__name__)
def setup_app(command, conf, vars):
"""Place any commands to setup adjector here"""
load_environment(conf.global_conf, conf.local_conf)
# This has to be *after* the environment is loaded, otherwise our options don't make it to the model
import adjector.model as model
from adjector.model import meta
# Create the tables if they don't already exist
meta.metadata.create_all(bind=meta.engine)
elixir.create_all(meta.engine)
# Import initial data, if it exists
if adjector_conf.initial_data:
try:
module = import_module(adjector_conf.initial_data)
print 'Importing initial data...'
if hasattr(module, 'sets'):
print ' Importing %i sets' % len(module.sets)
for set in module.sets:
model.Set(set)
model.session.commit()
if hasattr(module, 'creatives'):
print ' Importing %i creatives' % len(module.creatives)
for creative in module.creatives:
model.Creative(creative)
if hasattr(module, 'locations'):
print ' Importing %i locations' % len(module.locations)
for location in module.locations:
model.Location(location)
model.session.commit()
if hasattr(module, 'zones'):
print ' Importing %i zones' % len(module.zones)
for zone in module.zones:
model.Zone(zone)
model.session.commit()
print ' Done'
print 'Precaching...'
for zone in model.Zone.query():
precache_zone(zone)
print ' Done'
except ImportError:
            log.warn('Could not find example data.')

| Adjector | /Adjector-1.0b1.tar.gz/Adjector-1.0b1/adjector/websetup.py | websetup.py |
import os
import tw.api as twa
from beaker.middleware import CacheMiddleware, SessionMiddleware
from paste.cascade import Cascade
from paste.recursive import RecursiveMiddleware
from paste.registry import RegistryManager
from paste.urlparser import StaticURLParser
from paste.deploy.converters import asbool
from pylons import config
from pylons.middleware import ErrorHandler, StatusCodeRedirect
from pylons.wsgiapp import PylonsApp
from routes.middleware import RoutesMiddleware
from adjector.config.environment import load_environment
from adjector.core.conf import conf
from adjector.lib.middleware import FilteredApp
def make_app(global_conf, full_stack=True, static_files=True, **app_conf):
"""Create a Pylons WSGI application and return it
``global_conf``
The inherited configuration for this application. Normally from
the [DEFAULT] section of the Paste ini file.
``full_stack``
Whether this application provides a full WSGI stack (by default,
meaning it handles its own exceptions and errors). Disable
full_stack when this application is "managed" by another WSGI
middleware.
``static_files``
Whether this application serves its own static files; disable
when another web server is responsible for serving them.
``app_conf``
The application's local configuration. Normally specified in
the [app:<name>] section of the Paste ini file (where <name>
defaults to main).
"""
# Configure the Pylons environment
load_environment(global_conf, app_conf)
# The Pylons WSGI app
app = PylonsApp()
# Routing/Session/Cache Middleware
app = RoutesMiddleware(app, config['routes.map'])
app = SessionMiddleware(app, config)
app = CacheMiddleware(app, config)
# CUSTOM MIDDLEWARE HERE (filtered by error handling middlewares)
# Catch internal redirects
app = RecursiveMiddleware(app)
# Toscawidgets
app = twa.make_middleware(app, {
'toscawidgets.framework' : 'pylons',
'toscawidgets.framework.default_view' : 'genshi',
'toscawidgets.middleware.inject_resources' : True
})
if asbool(full_stack):
# Handle Python exceptions
app = ErrorHandler(app, global_conf, **config['pylons.errorware'])
# Display error documents for 401, 403, 404 status codes (and
# 500 when debug is disabled)
if asbool(config['debug']):
app = StatusCodeRedirect(app)
else:
app = StatusCodeRedirect(app, [400, 401, 403, 404, 500])
# Establish the Registry for this application
app = RegistryManager(app)
    # Serve static files
    static_app = StaticURLParser(config['pylons.paths']['static_files'])
if conf.base_url:
static_app = FilteredApp(static_app, conf.base_url)
app = Cascade([static_app, app])
    return app

| Adjector | /Adjector-1.0b1.tar.gz/Adjector-1.0b1/adjector/config/middleware.py | middleware.py |
import logging
from pylons import config
from routes import Mapper
from adjector.core.conf import conf
log = logging.getLogger(__name__)
def intify(*keys):
'''
Make vars into integers
'''
def container(environ, result):
for key in keys:
if result.get(key) is None:
continue
if result[key].isdigit():
result[key] = int(result[key])
else:
log.error('%s was sent to intify method but is not in digit form' % result[key])
result[key] = None
return True
return dict(function=container)
def make_map():
"""Create, configure and return the routes Mapper"""
map = Mapper(directory=config['pylons.paths']['controllers'], always_scan=config['debug'])
map.minimization = False
# The ErrorController route (handles 404/500 error pages); it should
# likely stay at the top, ensuring it can always be resolved
map.connect('/error/{action}', controller='error')
map.connect('/error/{action}/{id}', controller='error')
base = conf.admin_base_url
# CUSTOM ROUTES HERE
map.redirect(base, base + '/')
map.connect(base + '/', controller='main', action='index')
map.connect(base + '/import/cj', controller='cj', action='start')
map.connect(base + '/import/cj/{action}', controller='cj')
map.connect(base + '/import/cj/{site_id}/{id}/{action}', controller='cj', requirements=dict(site_id='\d+', id='\d+')) #note that i am not intifying this on purpose
map.connect(base + '/new/{controller}', action='new')
map.connect(base + '/stats', controller='stats', action='index')
map.connect(conf.render_base_url + '/zone/{ident}/render', controller='zone', action='render')
map.connect(conf.render_base_url + '/zone/{ident}/render.js', controller='zone', action='render_js')
map.connect(conf.tracking_base_url + '/track/{action}', controller='track')
map.connect(base + '/{controller}', action='list')
map.connect(base + '/{controller}/{id}', action='view', requirements=dict(id='\d+'), conditions=intify('id'))
map.connect(base + '/{controller}/{action}', requirements=dict(controller='(?!tracking)'))
map.connect(base + '/{controller}/{id}/{action}', requirements=dict(id='\d+'), conditions=intify('id'))
    return map

| Adjector | /Adjector-1.0b1.tar.gz/Adjector-1.0b1/adjector/config/routing.py | routing.py |
"""Pylons environment configuration"""
import os
#from genshi.template import TemplateLoader
from pylons import config
from sqlalchemy import engine_from_config
import adjector.lib.app_globals as app_globals
import adjector.lib.helpers
from adjector.config.routing import make_map
from adjector.core.conf import conf
def load_environment(global_conf, app_conf):
"""Configure the Pylons environment via the ``pylons.config``
object
"""
# Pylons paths
root = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
paths = dict(root=root,
controllers=os.path.join(root, 'controllers'),
static_files=os.path.join(root, 'public'),
templates=[os.path.join(root, 'templates')])
# Initialize config with the basic options
config.init_app(global_conf, app_conf, package='adjector', template_engine=None, paths=paths)
# Load config - has to be done before routing
config['pylons.app_globals'] = app_globals.Globals()
config['pylons.h'] = adjector.lib.helpers
# Create the Genshi TemplateLoader
genshi_options = {'genshi.default_doctype': 'xhtml-strict',
'genshi.default_format': 'xhtml',
'genshi.default_encoding': 'UTF-8',
'genshi.max_cache_size': 250,
}
config.add_template_engine('genshi', 'adjector.templates', genshi_options)
#config['pylons.app_globals'].genshi_loader = TemplateLoader(
# paths['templates'], auto_reload=True)
# CONFIGURATION OPTIONS HERE (note: all config options will override
# any Pylons config options)
conf.load(config)
config['pylons.app_globals'].conf = conf
# Setup the SQLAlchemy database engine
# If we put this here, we can load our config *first*
from adjector.model import init_model
engine = engine_from_config(config, 'sqlalchemy.')
init_model(engine)
# Setup routing *after* config options parsed
config['routes.map'] = make_map() | Adjector | /Adjector-1.0b1.tar.gz/Adjector-1.0b1/adjector/config/environment.py | environment.py |
import os.path
import random
import re
from adjector.core.conf import conf
from adjector.core.cj_util import remove_tracking_cj
def add_tracking(html):
if re.search('google_ad_client', html):
return add_tracking_adsense(html)
else:
return add_tracking_generic(html)
def add_tracking_generic(html):
def repl(match):
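        # route the anchor's href through the click tracker, passing the original URL along as a parameter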
groups = match.groups()
return groups[0] + 'ADJECTOR_TRACKING_BASE_URL/track/click_with_redirect?creative_id=ADJECTOR_CREATIVE_ID&zone_id=ADJECTOR_ZONE_ID&cache_bust=' + cache_bust() + '&url=' + groups[1] + groups[2]
html_tracked = re.sub(r'''(.*<a[^>]+href\s*=\s*['"])([^"']+)(['"][^>]*>.*)''', repl, html)
if html == html_tracked: # if no change, don't touch.
return
else:
return html_tracked
def add_tracking_adsense(html):
adsense_tracking_code = open(os.path.join(conf.root, 'public', 'js', 'adsense_tracker.js')).read()
click_track = 'ADJECTOR_TRACKING_BASE_URL/track/click_with_image?creative_id=ADJECTOR_CREATIVE_ID&zone_id=ADJECTOR_ZONE_ID&cache_bust=' # cache_bust added in js
html_tracked = '''
<span>
%(html)s
<script type="text/javascript"><!--// <![CDATA[
/* adjector_click_track=%(click_track)s */
%(adsense_tracking_code)s
// ]]> --></script>
</span>
''' % dict(html=html, adsense_tracking_code=adsense_tracking_code, click_track=click_track)
return html_tracked
def cache_bust():
return str(random.random())[2:]
def remove_tracking(html, cj_site_id = None):
if cj_site_id:
return remove_tracking_cj(html, cj_site_id)
elif re.search('google_ad_client', html):
return remove_tracking_adsense(html)
else:
return html # we can't do anything
def remove_tracking_adsense(html):
html_notrack = '''
<script type='text/javascript'>
var adjector_google_adtest_backup = google_adtest;
var google_adtest='on';
</script>
%(html)s
<script type='text/javascript'>
var google_adtest=adjector_google_adtest_backup;
</script>
''' % dict(html=html)
return html_notrack
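# A minimal illustrative sketch of add_tracking_generic (hypothetical input; assumes
# the adjector package and its config are importable):
if __name__ == '__main__':
    sample = '<p><a href="http://example.com/landing">Buy now</a></p>'
    print add_tracking_generic(sample)
    # The href is rewritten to ADJECTOR_TRACKING_BASE_URL/track/click_with_redirect?...
    # with the original URL passed along; the ADJECTOR_* placeholders are substituted
    # with real ids and the configured base URL at render time (see render.py).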
from __future__ import division
import logging
import random
import re
from sqlalchemy import and_, func, or_
from sqlalchemy.sql import case, join, select, subquery
import adjector.model as model
from adjector.core.conf import conf
from adjector.core.tracking import remove_tracking
log = logging.getLogger(__name__)
def old_render_zone(ident, track=None, admin=False):
'''
Render A Random Creative for this Zone. Access by id or name.
Respect all zone requirements. Use creative weights and their containing set weights to weight randomness.
If zone.normalize_by_container, normalize creatives by the total weight of the set they are in,
so the total weight of the creatives directly in any set is always 1.
If block and text ads can be shown, a decision will be made to show one or the other based on the total probability of each type of creative.
Note that this function is called by the API function render_zone.
'''
# Note that this is my first time seriously using SA, feel free to clean this up
if isinstance(ident, int) or ident.isdigit():
zone = model.Zone.get(int(ident))
else:
zone = model.Zone.query.filter_by(name=ident).first()
if zone is None: # Fail gracefully, don't commit suicide because someone deleted a zone from the ad server
log.error('Tried to render zone %s. Zone Not Found' % ident)
return ''
# Find zone site_id, if applicable. Default to global site_id, or else None.
cj_site_id = zone.parent_cj_site_id or conf.cj_site_id
# Figure out what kind of creative we need
# Size filtering
whereclause_zone = and_(or_(and_(model.Creative.width >= zone.min_width,
model.Creative.width <= zone.max_width,
model.Creative.height >= zone.min_height,
model.Creative.height <= zone.max_height),
model.Creative.is_text == True),
# Date filtering
or_(model.Creative.start_date == None, model.Creative.start_date <= func.now()),
or_(model.Creative.end_date == None, model.Creative.end_date >= func.now()),
# Site Id filtering
or_(model.Creative.cj_site_id == None, model.Creative.cj_site_id == cj_site_id,
and_(conf.enable_cj_site_replacements, cj_site_id != None, model.Creative.cj_site_id != None)),
# Disabled?
model.Creative.disabled == False)
creative_types = zone.creative_types # This might change later.
doing_text = None # just so it can't be undefined later
# Sanity check - this shouldn't ever happen
if zone.num_texts == 0:
creative_types = 2
# Filter by text or block if needed. If you want both we do some magic later. But first we need to find out how much of each we have, weight wise.
if creative_types == 1:
whereclause_zone.append(model.Creative.is_text==True)
number_needed = zone.num_texts
doing_text = True
elif creative_types == 2:
whereclause_zone.append(model.Creative.is_text==False)
number_needed = 1
doing_text = False
creatives = model.Creative.table
all_results = []
# Find random creatives; Loop until we have as many as we need
while True:
# First let's figure how to normalize by how many items will be displayed. This ensures all items are displayed equally.
# We want this to be 1 for blocks and num_texts for texts. Also throw in the zone.weight_texts
#items_displayed = cast(creatives.c.is_text, Integer) * (zone.num_texts - 1) + 1
text_weight_adjust = case([(True, zone.weight_texts / zone.num_texts), (False, 1)], creatives.c.is_text)
if zone.normalize_by_container:
# Find the total weight of each parent in order to normalize
parent_weights = subquery('parent_weight',
[creatives.c.parent_id, func.sum(creatives.c.parent_weight * creatives.c.weight).label('pw_total')],
group_by=creatives.c.parent_id)
# Join creatives table and normalized weight table - I'm renaming a lot of fields here to make life easier down the line
# SA was insisting on doing a full subquery anyways (I just wanted a join)
c1 = subquery('c1',
[creatives.c.id.label('id'), creatives.c.title.label('title'), creatives.c.html.label('html'),
creatives.c.html_tracked.label('html_tracked'), creatives.c.is_text.label('is_text'),
creatives.c.cj_site_id.label('cj_site_id'),
(creatives.c.weight * creatives.c.parent_weight * text_weight_adjust /
case([(parent_weights.c.pw_total > 0, parent_weights.c.pw_total)], else_ = None)).label('normalized_weight')], # Make sure we can't divide by 0
whereclause_zone, # here go our filters
from_obj=join(creatives, parent_weights, or_(creatives.c.parent_id == parent_weights.c.parent_id,
and_(creatives.c.parent_id == None, parent_weights.c.parent_id == None)))).alias('c1')
else:
# We don't normalize weight by parent weight, so we don't need fancy joins
c1 = subquery('c1',
[creatives.c.id, creatives.c.title, creatives.c.html, creatives.c.html_tracked, creatives.c.is_text, creatives.c.cj_site_id,
(creatives.c.weight * creatives.c.parent_weight * text_weight_adjust).label('normalized_weight')],
whereclause_zone)
#for a in model.session.execute(c1).fetchall(): print a
if creative_types == 0: # (Either type)
# Now that we have our weights in order, let's figure out how many of each thing (text/block) we have, weightwise.
texts_weight = select([func.sum(c1.c.normalized_weight)], c1.c.is_text == True).scalar() or 0
blocks_weight = select([func.sum(c1.c.normalized_weight)], c1.c.is_text == False).scalar() or 0
# Create weighted bins, text first (0-whatever). We are going to decide what kind of thing to make right here, right now,
# based on the weights of each. Because we can't have both (yet).
rand = random.random()
if texts_weight + blocks_weight == 0:
break
if rand < texts_weight / (texts_weight + blocks_weight):
c1 = c1.select().where(c1.c.is_text == True).alias('text')
total_weight = texts_weight
number_needed = zone.num_texts
doing_text = True
else:
c1 = c1.select().where(c1.c.is_text == False).alias('nottext')
total_weight = blocks_weight
number_needed = 1
doing_text = False
else:
# Find total normalized weight of all creatives in order to normalize *that*
total_weight = select([func.sum(c1.c.normalized_weight)])#.scalar() or 0
#if not total_weight:
# break
c2 = c1.alias('c2')
# Find the total weight above a creative in the table in order to form weighted bins for the random number generator
# Note that this is the upper bound, not the lower (if it was the lower it could be NULL)
incremental_weight = select([func.sum(c1.c.normalized_weight) / total_weight], c1.c.id <= c2.c.id, from_obj=c1)
# Get everything into one thing - for debugging this is a good place to select and print out stuff
shmush = select([c2.c.id, c2.c.title, c2.c.html, c2.c.html_tracked, c2.c.cj_site_id,
incremental_weight.label('inc_weight'), (c2.c.normalized_weight / total_weight).label('final_weight')],
from_obj=c2).alias('shmush')
#for a in model.session.execute(shmush).fetchall(): print a
# Generate some random numbers and comparisons - sorry about the magic; it saves about 10 lines
# The crazy 0.9999 is to make sure we don't get a number so close to one we run into float precision errors (all the weights might not quite sum to 1,
# and so we might end up falling outside the bin!)
# Experimentally the error never seems to be worse than that, and that number is imprecise enough to be displayed exactly by python.
rand = [random.random() * 0.9999999999 for i in xrange(number_needed)]
whereclause_rand = or_(*[and_(shmush.c.inc_weight - shmush.c.final_weight <= rand[i], rand[i] < shmush.c.inc_weight) for i in xrange(number_needed)])
# Select only creatives where the random number falls between its cutoff and the next
results = model.session.execute(select([shmush.c.id, shmush.c.title, shmush.c.html, shmush.c.html_tracked, shmush.c.cj_site_id], whereclause_rand)).fetchall()
# Deal with number of results
if len(results) == 0:
if not doing_text or not all_results:
return ''
# Otherwise, we are probably just out of results.
break
if len(results) > number_needed:
log.error('Too many results while rendering zone %i. I got %i results and wanted %i' % (zone.id, len(results), number_needed))
results = results[:number_needed]
all_results.extend(results)
break
elif len(results) < number_needed:
if not doing_text:
raise Exception('Somehow we managed to get past several checks, and we have 0 < results < needed_results for block creatives. ' + \
'Since needed_results should be 1, this seems fairly difficult.')
all_results.extend(results)
# It looks like we need more results, this should only happen when we are doing text. Try again.
number_needed -= len(results)
# Exclude ones we've already got
whereclause_zone.append(and_(*[model.Creative.id != result.id for result in results]))
# Set to only render text this time around
if creative_types == 0:
creative_types = 1
whereclause_zone.append(model.Creative.is_text == True)
# Continue loop...
else: # we have the right number?
all_results.extend(results)
break
if doing_text and len(all_results) < zone.num_texts:
log.warn('Could only retrieve %i of %i desired creatives for zone %i. This (hopefully) means you are requesting more creatives than exist.' \
% (len(all_results), zone.num_texts, zone.id))
# Ok, that's done, we have our results.
# Let's render some html
html = ''
if doing_text:
html += zone.before_all_text or ''
for creative in all_results:
if track or (track is None and conf.enable_adjector_view_tracking):
# Create a view thingy
model.View(creative['id'], zone.id)
model.session.commit()
# Figure out the html value...
# Use either click tracked or regular html
if (track or (track is None and conf.enable_adjector_click_tracking)) and creative['html_tracked'] is not None:
creative_html = creative['html_tracked'].replace('ADJECTOR_TRACKING_BASE_URL', conf.tracking_base_url)\
.replace('ADJECTOR_CREATIVE_ID', str(creative['id'])).replace('ADJECTOR_ZONE_ID', str(zone.id))
else:
creative_html = creative['html']
# Remove or modify third party click tracking
if (track is False or (track is None and not conf.enable_third_party_tracking)) and creative['cj_site_id'] is not None:
creative_html = remove_tracking(creative_html, creative['cj_site_id'])
elif conf.enable_cj_site_replacements:
creative_html = re.sub(str(creative['cj_site_id']), str(cj_site_id), creative_html)
########### Now we can do some text assembly ###########
# If text, add pre-text
if doing_text:
html += zone.before_each_text or ''
html += creative_html
# Are we in admin mode?
if admin:
html += '''
<div class='adjector_admin' style='color: red; background-color: silver'>
Creative: <a href='%(admin_base_url)s%(creative_url)s'>%(creative_title)s</a>
Zone: <a href='%(admin_base_url)s%(zone_url)s'>%(zone_title)s</a>
</div>
''' % dict(admin_base_url = conf.admin_base_url, creative_url = '/creative/%i' % creative['id'], zone_url = zone.view(),
creative_title = creative.title, zone_title = zone.title)
if doing_text:
html += zone.after_each_text or ''
if doing_text:
html += zone.after_all_text or ''
# Wrap in javascript if asked
if html and '<script' not in html and conf.require_javascript:
wrapper = '''<script type='text/javascript'>document.write('%s')</script>'''
# Do some quick substitutions to inject... #TODO there must be an existing function that does this
html = re.sub(r"'", r"\'", html) # escape quotes
html = re.sub(r"[\r\n]", r"", html) # remove line breaks
return wrapper % html
return html
def render_zone(ident, track=None, admin=False):
'''
Render A Random Creative for this Zone, using precached data. Access by id or name.
Respect all zone requirements. Use creative weights and their containing set weights to weight randomness.
If zone.normalize_by_container, normalize creatives by the total weight of the set they are in,
so the total weight of the creatives directly in any set is always 1.
If block and text ads can be shown, a decision will be made to show one or the other based on the total probability of each type of creative.
Note that this function is called by the API function render_zone.
'''
# Note that this is my first time seriously using SA, feel free to clean this up
if isinstance(ident, int) or ident.isdigit():
zone = model.Zone.get(int(ident))
else:
zone = model.Zone.query.filter_by(name=ident).first()
if zone is None: # Fail gracefully, don't commit suicide because someone deleted a zone from the ad server
log.error('Tried to render zone %s. Zone Not Found' % ident)
return ''
# Find zone site_id, if applicable. Default to global site_id, or else None.
cj_site_id = zone.parent_cj_site_id or conf.cj_site_id
# Texts or blocks?
rand = random.random()
if rand < zone.total_text_weight:
# texts!
number_needed = zone.num_texts
doing_text = True
else:
# blocks!
number_needed = 1
doing_text = False
query = model.CreativeZonePair.query.filter_by(zone_id = zone.id, is_text = doing_text)
num_pairs = query.count()
if num_pairs == number_needed:
pairs = query.all()
else:
pairs = [] # keep going until we get as many as we need
still_needed = number_needed
banned_ranges = []
while still_needed:
# Generate some random numbers and comparisons - sorry about the magic; it saves about 10 lines
# The crazy 0.9999 is to make sure we don't get a number so close to one we run into float precision errors (all the weights might not quite sum to 1,
# and so we might end up falling outside the bin!)
# Experimentally the error never seems to be worse than that, and that number is imprecise enough to be displayed exactly by python.
# Assemble random numbers
rands = []
while len(rands) < still_needed:
rand = random.random() * 0.9999999999
bad_rand = False
for range in banned_ranges:
if range[0] <= rand < range[1]:
bad_rand = True
break
if not bad_rand:
rands.append(rand)
# Select only creatives where the random number falls between its cutoff and the next
results = query.filter(or_(*[and_(model.CreativeZonePair.lower_bound <= rands[i],
rands[i] < model.CreativeZonePair.upper_bound) for i in xrange(still_needed)])).all()
# What if there are no results?
if len(results) == 0:
if not pairs: # I guess there are no results
return ''
break # or else we are just out of results
still_needed -= len(results)
pairs += results
# Exclude ones we've already got, if we need to loop again
banned_ranges.extend([pair.lower_bound, pair.upper_bound] for pair in results)
# Just in case: sanity-check how many pairs we ended up with
if len(pairs) > number_needed:
# This shouldn't be able to happen
log.error('Too many results while rendering zone %i. I got %i results and wanted %i' % (zone.id, len(pairs), number_needed))
pairs = pairs[:number_needed]
elif len(pairs) < number_needed:
log.warn('Could only retrieve %i of %i desired creatives for zone %i. This (hopefully) means you are requesting more creatives than exist.' \
% (len(pairs), number_needed, zone.id))
# Ok, that's done, we have our results.
# Let's render some html
html = ''
if doing_text:
html += zone.before_all_text or ''
for pair in pairs:
creative = pair.creative
if track or (track is None and conf.enable_adjector_view_tracking):
# Record a view with raw SQL - this is much faster than going through SA (almost instant)
model.session.execute('INSERT INTO views (creative_id, zone_id, time) VALUES (%i, %i, now())' % (creative.id, zone.id))
#model.View(creative.id, zone.id)
# Figure out the html value...
# Use either click tracked or regular html
if (track or (track is None and conf.enable_adjector_click_tracking)) and creative.html_tracked is not None:
creative_html = creative.html_tracked.replace('ADJECTOR_TRACKING_BASE_URL', conf.tracking_base_url)\
.replace('ADJECTOR_CREATIVE_ID', str(creative.id)).replace('ADJECTOR_ZONE_ID', str(zone.id))
else:
creative_html = creative.html
# Remove or modify third party click tracking
if (track is False or (track is None and not conf.enable_third_party_tracking)) and creative.cj_site_id is not None:
creative_html = remove_tracking(creative_html, creative.cj_site_id)
elif cj_site_id and creative.cj_site_id and conf.enable_cj_site_replacements:
creative_html = re.sub(str(creative.cj_site_id), str(cj_site_id), creative_html)
########### Now we can do some text assembly ###########
# If text, add pre-text
if doing_text:
html += zone.before_each_text or ''
html += creative_html
# Are we in admin mode?
if admin:
html += '''
<div class='adjector_admin' style='color: red; background-color: silver'>
Creative: <a href='%(admin_base_url)s%(creative_url)s'>%(creative_title)s</a>
Zone: <a href='%(admin_base_url)s%(zone_url)s'>%(zone_title)s</a>
</div>
''' % dict(admin_base_url = conf.admin_base_url, creative_url = '/creative/%i' % creative.id, zone_url = zone.view(),
creative_title = creative.title, zone_title = zone.title)
if doing_text:
html += zone.after_each_text or ''
if doing_text:
html += zone.after_all_text or ''
model.session.commit() #having this down here saves us quite a bit of time
# Wrap in javascript if asked
if html and '<script' not in html and conf.require_javascript:
wrapper = '''<script type='text/javascript'>document.write('%s')</script>'''
# Do some quick substitutions to inject... #TODO there must be an existing function that does this
html = re.sub(r"'", r"\'", html) # escape quotes
html = re.sub(r"[\r\n]", r"", html) # remove line breaks
return wrapper % html
return html
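# A self-contained illustrative sketch (hypothetical weights, no database needed) of
# the weighted-bin selection used above: each creative owns a slice [lower, upper)
# of the unit interval sized by its normalized weight, and a uniform random draw
# picks whichever creative's slice it lands in.
if __name__ == '__main__':
    weights = {'creative_a': 0.5, 'creative_b': 0.3, 'creative_c': 0.2}
    bounds = {}
    lower = 0.0
    for ident, weight in weights.iteritems():
        bounds[ident] = (lower, lower + weight)
        lower += weight
    rand = random.random() * 0.9999999999  # same float-precision guard as above
    for ident, (lo, hi) in bounds.iteritems():
        if lo <= rand < hi:
            print 'selected:', ident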
var adjector_adSenseDeliveryDone;
var adjector_adSensePx;
var adjector_adSensePy;
function adjector_adSenseClick(path)
{
// Add cache buster here to ensure multiple clicks are recorded
var cb = new String (Math.random());
cb = cb.substring(2,11);
var i = new Image();
i.src = path + cb;
}
function adjector_adSenseLog(obj)
{
if (typeof obj.parentNode != 'undefined')
{
var parent = obj.parentNode;
while (parent.tagName == 'INS') { // escape from google's <ins> nodes
parent = parent.parentNode;
}
var t = parent.innerHTML;
var params = t.match(/\/\*\s*adjector_click_track=([^ ]+)\s*\*\//)
if (params)
{
adjector_adSenseClick(params[1]);
}
}
}
function adjector_adSenseGetMouse(e)
{
// Adapted from http://www.howtocreate.co.uk/tutorials/javascript/eventinfo
if (typeof e.pageX == 'number')
{
//most browsers
adjector_adSensePx = e.pageX;
adjector_adSensePy = e.pageY;
}
else if (typeof e.clientX == 'number')
{
//Internet Explorer and older browsers
//other browsers provide this, but follow the pageX/Y branch
adjector_adSensePx = e.clientX;
adjector_adSensePy = e.clientY;
if (document.body && (document.body.scrollLeft || document.body.scrollTop))
{
//IE 4, 5 & 6 (in non-standards compliant mode)
adjector_adSensePx += document.body.scrollLeft;
adjector_adSensePy += document.body.scrollTop;
}
else if (document.documentElement && (document.documentElement.scrollLeft || document.documentElement.scrollTop ))
{
//IE 6 (in standards compliant mode)
adjector_adSensePx += document.documentElement.scrollLeft;
adjector_adSensePy += document.documentElement.scrollTop;
}
}
}
function adjector_adSenseFindX(obj)
{
var x = 0;
while (obj)
{
x += obj.offsetLeft;
obj = obj.offsetParent;
}
return x;
}
function adjector_adSenseFindY(obj)
{
var y = 0;
while (obj)
{
y += obj.offsetTop;
obj = obj.offsetParent;
}
return y;
}
function adjector_adSensePageExit(e)
{
var ad = document.getElementsByTagName("iframe");
if (typeof adjector_adSensePx == 'undefined')
return;
for (var i = 0; i < ad.length; i++)
{
var adLeft = adjector_adSenseFindX(ad[i]);
var adTop = adjector_adSenseFindY(ad[i]);
var adRight = parseInt(adLeft) + parseInt(ad[i].width) + 15;
var adBottom = parseInt(adTop) + parseInt(ad[i].height) + 10;
var inFrameX = (adjector_adSensePx > (adLeft - 10) && adjector_adSensePx < adRight);
var inFrameY = (adjector_adSensePy > (adTop - 10) && adjector_adSensePy < adBottom);
//alert(adjector_adSensePx + ',' + adjector_adSensePy + ' ' + adLeft + ':' + adRight + 'x' + adTop + ':' + adBottom);
if (inFrameY && inFrameX)
{
if (ad[i].src.match(/googlesyndication\.com|ypn-js\.overture\.com|googleads\.g\.doubleclick\.net/))
adjector_adSenseLog(ad[i]);
}
}
}
function adjector_adSenseInit()
{
if (document.all && typeof window.opera == 'undefined')
{
//ie
var el = document.getElementsByTagName("iframe");
for (var i = 0; i < el.length; i++)
{
if (el[i].src.match(/googlesyndication\.com|ypn-js\.overture\.com|googleads\.g\.doubleclick\.net/))
{
el[i].onfocus = function()
{
adjector_adSenseLog(this);
}
}
}
}
else if (typeof window.addEventListener != 'undefined')
{
// other browsers
window.addEventListener('unload', adjector_adSensePageExit, false);
window.addEventListener('mousemove', adjector_adSenseGetMouse, true);
}
}
function adjector_adSenseDelivery()
{
if (typeof adjector_adSenseDeliveryDone != 'undefined' && adjector_adSenseDeliveryDone)
return;
adjector_adSenseDeliveryDone = true;
if(typeof window.addEventListener != 'undefined')
{
//.. gecko, safari, konqueror and standard
window.addEventListener('load', adjector_adSenseInit, false);
}
else if(typeof document.addEventListener != 'undefined')
{
//.. opera 7
document.addEventListener('load', adjector_adSenseInit, false);
}
else if(typeof window.attachEvent != 'undefined')
{
//.. win/ie
window.attachEvent('onload', adjector_adSenseInit);
}
else
{
//.. mac/ie5 and anything else that gets this far
//if there's an existing onload function
if(typeof window.onload == 'function')
{
//store it
var existing = window.onload;
//add new onload handler
window.onload = function()
{
//call existing onload function
existing();
//call adsense_init onload function
adjector_adSenseInit();
};
}
else
{
//setup onload function
window.onload = adjector_adSenseInit;
}
}
}
adjector_adSenseDelivery();
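// Illustrative note (hypothetical URL): adjector_adSenseLog() above looks for a
// marker comment that the server injects next to the ad markup (see
// add_tracking_adsense in tracking.py), e.g.
//   /* adjector_click_track=http://example.com/track/click_with_image?creative_id=1&zone_id=2&cache_bust= */
// adjector_adSenseClick() then appends the cache-busting digits and fires the
// tracking request via an Image object.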
import logging
#from suds import WebFault
from urllib import unquote_plus
from urllib2 import HTTPError
#from suds.sudsobject import asdict
#from adjector.core.cj_util import from_cj_date
from adjector.lib.cj_interface import get_cj_links
from adjector.lib.base import *
log = logging.getLogger(__name__)
class CjController(BaseController):
errors = {}
def start(self):
if not conf.cj_api_key:
return 'You must enter an api key in order to connect to Commission Junction.'
# Fill websiteId field
cj_sites = model.Location.query.filter(model.Location.cj_site_id != None)
site_ids = [[loc.cj_site_id, loc.title] for loc in cj_sites]
if conf.cj_site_id and conf.cj_site_id not in [str(id) for id, title in site_ids]:
# if your global site id is yet another thing
site_ids.insert(0, [conf.cj_site_id, 'Global'])
# Only show the box if necessary
if len(site_ids) == 0:
return 'You must enter at least one global or location-specific site id in order to connect to Commission Junction.'
elif len(site_ids) == 1:
c.form = forms.CJLinkSearchOneSite(action='/import/cj/search', value={'website_id': site_ids[0][0]})
else:
c.form = forms.CJLinkSearch(action='/import/cj/search', child_args={'website_id': dict(options = site_ids)})
c.title = 'Import from Commission Junction'
return render('common.form')
@rest.dispatch_on(POST='do_search')
def search(self):
''' redo last search'''
if not session.has_key('last_search'):
return redirect_to('/import/cj')
last = session['last_search']
if request.params.has_key('page') and request.params['page'] != last['form_result']['page_number']:
# render new one, I guess
self.form_result = last['form_result']
self.form_result['page_number'] = request.params['page']
return self._actually_do_search()
c.links = last['links']
c.total = last['total']
c.page = last['page']
c.count = last['count']
c.per_page = last['per_page']
self._process(c.links, show_all=True)
c.title = 'Import from Commission Junction'
return render('cj.links')
@rest.restrict('POST')
@validate(form=forms.CJLinkSearch, error_handler='list')
def do_search(self):
return self._actually_do_search()
def _actually_do_search(self):
if not conf.cj_api_key:
return 'You must enter the necessary credentials in order to connect to Commission Junction.'
result = self.form_result.copy()
c.show_imported = self.form_result.pop('show_imported')
c.show_ignored = self.form_result.pop('show_ignored')
try:
links, counts = get_cj_links(**self.form_result)
except HTTPError, error:
return 'Could not connect to Commission Junction.<br />Code: %s<br />Error: %s' % (error.code, error.msg)
c.total = counts['total']
c.page = counts['page']
c.count = counts['count']
c.per_page = self.form_result['records_per_page']
if c.total == 0:
return 'No Links Found'
self._process(links)
session['last_search'] = dict(form_result=result, links=c.links, total=c.total, page=c.page, count=c.count, per_page = c.per_page)
session.save()
c.title = 'Import from Commission Junction'
return render('cj.links')
def _process(self, links, show_all=False):
session['cj_links'] = session.get('cj_links', {})
c.links = []
for link in links:
# Add to session
session['cj_links']['%s:%s' % (link['cj_site_id'], link['cj_link_id'])] = link
# Filter more for what to display...
# Check if ignored. If so, skip unless the show_ignored parameter was sent.
link['ignored'] = model.CJIgnoredLink.query.filter_by(cj_link_id = link['cj_link_id']).first()
if link['ignored'] is not None and not (show_all or c.show_ignored):
continue
# Check to see if we already have this imported. If so, skip unless the show_imported parameter was sent.
link['creative'] = model.Creative.query.filter_by(cj_link_id = link['cj_link_id']).first()
if link['creative'] is not None and not (show_all or c.show_imported):
continue
c.links.append(link)
session.save()
def process(self):
''' Process multiple links at once '''
# See if we still have the links somewhere
try:
links = session['cj_links']
except KeyError:
session['message'] = 'Link storage error; try searching again before importing any links.'
session.save()
return redirect_to('/import/cj')
idents = [unquote_plus(param) for param in request.params.keys() if ':' in unquote_plus(param)]
### ADD LINKS ###
if request.params.has_key('import'):
action, verb = self._add, 'imported'
### IGNORE LINKS ###
elif request.params.has_key('ignore'):
action, verb = self._ignore, 'ignored'
## UNIGNORE LINKS ###
elif request.params.has_key('unignore'):
action, verb = self._unignore, 'unignored'
else:
return redirect_to('/import/cj/search')
count = 0
self.updated = []
for ident in idents:
link = links[ident]
count += action(link)
if action == self._add:
self._on_updates(self.updated)
session['message'] = '%i links %s.' % (count, verb)
if count < len(idents):
session['message'] += ' %i links were already %s.' % (len(idents) - count, verb)
session.save()
return redirect_to('/import/cj/search')
def _add(self, link):
# Do we already have this one as a creative?
if model.Creative.query.filter_by(cj_link_id = link['cj_link_id']).first() is not None:
log.warn('Link id %s already added' % link['cj_link_id']) #TODO: output message
return False
# Remove ignored tag if necessary
ignored = model.CJIgnoredLink.query.filter_by(cj_link_id = link['cj_link_id']).first()
if ignored:
model.session.delete(ignored)
# Create set if necessary
theset = model.Set.query.filter_by(cj_advertiser_id = link['cj_advertiser_id']).first()
if theset is None:
theset = model.Set(dict(title = link['advertiser_name'], cj_advertiser_id = link['cj_advertiser_id']))
self.updated.extend(theset._updated)
# Import link to creative
creative = model.Creative(dict([key, value] for key, value in link.iteritems() if key in model.Creative.__dict__))
self.updated.extend(creative._updated)
creative.parent = theset
model.session.commit()
return True
def _ignore(self, link):
ignored = model.CJIgnoredLink.query.filter_by(cj_link_id = link['cj_link_id']).first()
if ignored:
log.warn('Tried to ignore a link that was already ignored. Link id = %s' % link['cj_link_id'])
return False
model.CJIgnoredLink(link['cj_link_id'], link['cj_advertiser_id'])
model.session.commit()
return True
def _unignore(self, link):
ignored = model.CJIgnoredLink.query.filter_by(cj_link_id = link['cj_link_id']).first()
if not ignored:
log.warn('Tried to unignore a link that was not ignored. Link id = %s' % link['cj_link_id'])
return False
else:
model.session.delete(ignored)
model.session.commit()
return True
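# Illustrative note (hypothetical values): _process() caches each link in the session
# under the key 'cj_site_id:cj_link_id', e.g. session['cj_links']['1234567:10987654'],
# and the import/ignore form posts those same keys back - which is why process() keeps
# only the request params containing a ':'.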
import logging
import re
from paste.deploy.converters import asbool
from adjector.core.render import render_zone
from adjector.lib.base import *
log = logging.getLogger(__name__)
class ZoneController(ObjectController):
native = model.Zone
form = forms.Zone
singular = 'zone'
plural = 'zones'
@rest.dispatch_on(POST='do_edit')
def view(self, id):
obj = self._obj(id)
setattr(c, self.singular, obj)
value = obj.value()
value['preview'] = c.render = h.Markup(render_zone(id, track=False))
child_args = dict(parent_id=dict(options=[''] + obj.possible_parents()))
c.form = self.form(action=h.url_for(), value = value, child_args=child_args, edit=True)
c.title = obj.title
return render('view.zone')
def render(self, ident):
options = request.environ.get('adjector.options', {})
if request.params.has_key('track'):
options['track'] = asbool(request.params['track'])
if request.params.has_key('admin'):
options['admin'] = asbool(request.params['admin'])
return render_zone(ident, **options)
def render_js(self, ident):
'''
Render ads through a javascript tag
Usage Example:
<script type='text/javascript' src='http://localhost:5000/RENDER_BASE_URL/zone/NAME/render.js?track=0' />
Where RENDER_BASE_URL is the url you specified in your .ini file and NAME is your ad name.
'''
options = request.environ.get('adjector.options', {})
if request.params.has_key('track'):
options['track'] = asbool(request.params['track'])
if request.params.has_key('admin'):
options['admin'] = asbool(request.params['admin'])
rendered = render_zone(ident, **options)
wrapper = '''document.write('%s')'''
# Do some quick substitutions to inject... #TODO there must be an existing function that does this
rendered = re.sub(r"'", r"\'", rendered) # escape quotes
rendered = re.sub(r"[\r\n]", r"", rendered) # remove line breaks
response.headers['content-type'] = 'text/javascript; charset=utf8'
return wrapper % rendered
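# Illustrative example of render_js output (hypothetical zone markup): a zone that
# renders to <b>Ad</b> is served as
#   document.write('<b>Ad</b>')
# with single quotes escaped and line breaks stripped, so the rendered HTML can sit
# inside a single JavaScript string literal.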
import pylons
from paste.deploy import loadapp
from webob.exc import HTTPNotFound
#from paste.recursive import Includer
from adjector.core.conf import conf
class AdjectorMiddleware(object):
def __init__(self, app, config):
self.app = app
#raw_config = appconfig('config:%s' % config['__file__'], name='adjector')
#self.path = adjector_config_raw.local_conf.get('base_url', '/adjector')
self.adjector_app = loadapp('config:%s' % config['__file__'], name='adjector')
self.path = conf.base_url
# Remove the adjector config from the config stack; otherwise the host app gets *very* confused
# We should be done initializing adjector, so this isn't used again anyways.
# The RegistryMiddleware takes care of this from now on (during requests).
process_configs = pylons.config._process_configs
adjector_dict = [dic for dic in process_configs if dic['pylons.package'] == 'adjector'][0]
process_configs.remove(adjector_dict)
def __call__(self, environ, start_response):
if self.path and environ['PATH_INFO'].startswith(self.path):
#environ['PATH_INFO'] = environ['PATH_INFO'][len(self.path):] or '/'
#environ['SCRIPT_NAME'] = self.path
return self.adjector_app(environ, start_response)
else:
#environ['adjector.app'] = self.adjector_app
#environ['adjector.include'] = Includer(self.adjector_app, environ, start_response)
return self.app(environ, start_response)
def make_middleware(app, global_conf, **app_conf):
return AdjectorMiddleware(app, global_conf)
def null_middleware(global_conf, **app_conf):
return lambda app: app
class FilterWith(object):
def __init__(self, app, filter, path):
self.app = app
self.filter = filter
self.path = path
def __call__(self, environ, start_response):
if self.path and environ['PATH_INFO'].startswith(self.path):
environ['PATH_INFO'] = environ['PATH_INFO'][len(self.path):] or '/'
environ['SCRIPT_NAME'] += self.path
return self.filter(environ, start_response)
else:
return self.app(environ, start_response)
class FilteredApp(object):
'''
Only allow access when path_info starts with 'path', otherwise throw 404
This can't be a subclass of StaticURLParser because that creates new instances of its __class__ '''
def __init__(self, app, path):
self.app = app
self.path = path
def __call__(self, environ, start_response):
if self.path and environ['PATH_INFO'].startswith(self.path):
environ['PATH_INFO'] = environ['PATH_INFO'][len(self.path):] or '/'
environ['SCRIPT_NAME'] += self.path
return self.app(environ, start_response)
else:
raise HTTPNotFound()
class StripTrailingSlash(object):
def __init__(self, app):
self.app = app
def __call__(self, environ, start_response):
environ['PATH_INFO'] = environ.get('PATH_INFO', '').rstrip('/')
return self.app(environ, start_response)
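# A minimal sketch of the path-prefix dispatch above, using dummy WSGI callables
# (hypothetical apps; assumes this module's own imports resolve):
if __name__ == '__main__':
    def host_app(environ, start_response):
        return ['host got %s' % environ['PATH_INFO']]
    def ad_app(environ, start_response):
        return ['adjector got %s (script name %r)' % (environ['PATH_INFO'], environ['SCRIPT_NAME'])]
    dispatch = FilterWith(host_app, ad_app, '/adjector')
    # Requests under the prefix are rewritten and routed to the filter app...
    print dispatch({'PATH_INFO': '/adjector/zone/1', 'SCRIPT_NAME': ''}, None)
    # ...everything else falls through to the host app untouched.
    print dispatch({'PATH_INFO': '/blog/post', 'SCRIPT_NAME': ''}, None)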
import logging
import adjector.forms as forms #IGNORE:W0611
import adjector.model as model #IGNORE:W0611
from datetime import datetime, timedelta
from pylons import g, request, response, session, tmpl_context as c #IGNORE:W0611
from pylons.controllers import WSGIController
from pylons.controllers.util import abort, redirect_to #IGNORE:W0611
from pylons.decorators import rest #pylint: disable-msg=E0611,W0611
from paste.recursive import ForwardRequestException #pylint: disable-msg=E0611,W0611
from pylons.templating import render #IGNORE:W0611
from sqlalchemy import and_, asc, desc, func, or_ #IGNORE:W0611
from tw.api import WidgetBunch #IGNORE:W0611
from tw.mods.pylonshf import validate #IGNORE:W0611
from webob.exc import HTTPNotFound
from adjector.core.conf import conf #IGNORE:W0611
from adjector.model import meta
from adjector.model.entities import CircularDependencyException
from adjector.lib import helpers as h #IGNORE:W0611
from adjector.lib.precache import precache_zone
from adjector.lib.util import FormProxy
log = logging.getLogger(__name__)
class BaseController(WSGIController):
def __call__(self, environ, start_response):
'''Invoke the Controller'''
# WSGIController.__call__ dispatches to the Controller method
# the request is routed to. This routing information is
# available in environ['pylons.routes_dict']
try:
return WSGIController.__call__(self, environ, start_response)
finally:
meta.Session.remove()
def __init__(self):
WSGIController.__init__(self)
if session.has_key('message'):
c.session_message = session['message']
del session['message']
session.save()
def _on_updates(self, updated):
''' Do things to dirty objects '''
# precaching
creatives = [obj for obj in updated if isinstance(obj, model.Creative)]
zones = [obj for obj in updated if isinstance(obj, model.Zone)]
# if no creatives modified, we only have to modify the changed zones
if not creatives:
return [precache_zone(zone) for zone in zones]
# If creatives modified, we need to figure out what zones they belonged to
# and totally redo those
for creative in creatives:
for pair in creative.creative_zone_pairs:
zones.append(pair.zone)
# now that we know what zones we *definitely* need to refresh...
for zone in model.Zone.query():
if zone in zones:
# totally redo all weights for this zone
precache_zone(zone)
else:
# only redo if the creatives NOW will be in that zone
precache_zone(zone, [creative.id for creative in creatives])
class ObjectController(BaseController):
native = None
form_proxy = FormProxy()
singular = None
plural = None
def __init__(self):
BaseController.__init__(self)
if self.form:
self.form_proxy.set(self.form)
def _obj(self, id):
obj = self.native.get(int(id))
if obj is None:
raise HTTPNotFound('%s not found' % self.singular.title())
return obj
def list(self):
query = self.native.query()
setattr(c, self.plural, query)
c.title = self.plural.title()
return render('list.%s' % self.plural)
@rest.dispatch_on(POST='do_edit')
def view(self, id):
obj = self._obj(id)
setattr(c, self.singular, obj)
child_args = dict(parent_id=dict(options=[''] + obj.possible_parents(obj)))
c.form = self.form(action=h.url_for(), value=obj.value(), child_args=child_args, edit=True)
c.title = obj.title
return render('view.%s' % self.singular)
@rest.dispatch_on(POST='do_new')
def new(self):
child_args = dict(parent_id=dict(options=[''] + self.native.possible_parents()))
c.form = self.form(action=h.url_for(), value=dict(request.params), child_args=child_args, edit=False)
c.title = 'New %s' % self.singular.title()
return render('common.form')
@rest.restrict('POST')
@validate(form=form_proxy, error_handler='new') #pylint: disable-msg=E0602
def do_new(self):
try:
obj = self.native(self.form_result)
model.session.commit()
self._on_updates(obj._updated)
session['message'] = 'Changes saved.'
except CircularDependencyException:
model.session.rollback()
session['message'] = 'Assigning that set/location creates a cycle. Don\'t do that!'
session.save()
return redirect_to(obj.view())
@rest.restrict('POST')
@validate(form=form_proxy, error_handler='view') #pylint: disable-msg=E0602
def do_edit(self, id):
obj = self._obj(id)
if request.POST.has_key('delete'):
return self._delete(obj)
try:
updates = obj.set(self.form_result)
model.session.commit()
self._on_updates(updates)
session['message'] = 'Changes saved.'
except CircularDependencyException:
model.session.rollback()
session['message'] = 'Assigning that set/location creates a cycle. Don\'t do that!'
session.save()
return redirect_to(obj.view())
def _delete(self, obj):
obj.delete()
model.session.commit()
session['message'] = '%s deleted.' % self.singular.title()
session.save()
return redirect_to(h.url_for(action='list'))
class ContainerObjectController(ObjectController):
def list(self):
setattr(c, self.plural, self.native.query.filter_by(parent_id=None))
c.title = self.plural.title()
return render('list.%s' % self.plural)
def _delete(self, obj):
updated = obj.delete()
model.session.commit()
self._on_updates(updated)
session['message'] = '%s deleted.' % self.singular.title()
session.save()
return redirect_to(h.url_for(action='list'))
#import os.path
#from suds.client import Client
#from libxml2 import parseDoc
from xml.dom.minidom import parseString
from urllib import urlencode
from urllib2 import urlopen, Request
import adjector.model as model
from adjector.core.conf import conf
from adjector.core.cj_util import from_cj_date
#def get_link_search_client():
# cj_linksearch_wsdl = os.path.join(conf.root, 'external', 'CJ_LinkSearchServiceV2.0.wsdl')
# return Client('file://' + cj_linksearch_wsdl)
#
#def get_link_search_defaults():
# return dict(developerKey = conf.cj_api_key,
# advertiserIds = 'joined',
# language = 'en',
# #linkSize = '300x250 Medium Rectangle',
# serviceableArea = 'US',
# #promotionEndDate = 'Ongoing',
# #sortBy = 'linkType',
# sortOrder = 'desc',
# startAt = 0,
# maxResults = 100) #Note: change this for debugging so you don't hammer CJ
def get_link_property(link, property):
child = link.getElementsByTagName(property)[0].firstChild
if not child:
return ''
return str(child.toxml())
def get_cj_links(**kwargs):
params = {}
for k,v in kwargs.iteritems():
params[k.replace('_','-')] = v
params.update({'advertiser-ids': 'joined'})
req = Request('https://linksearch.api.cj.com/v2/link-search?%s' % urlencode(params),
headers = {'authorization': conf.cj_api_key})
result = urlopen(req).read()
doc = parseString(result)
links_attr = doc.getElementsByTagName('links')[0]
total = int(links_attr.getAttribute('total-matched'))
count = int(links_attr.getAttribute('records-returned'))
page = int(links_attr.getAttribute('page-number'))
links = []
now = model.tz_now()
for link in doc.getElementsByTagName('link'):
if get_link_property(link, 'relationship-status') != 'joined':
# There is no need to show links from advertisers we won't make $$ from
continue
if get_link_property(link, 'promotion-end-date') and from_cj_date(get_link_property(link, 'promotion-end-date')) < now:
# Don't show expired links
continue
links.append(dict(
title = get_link_property(link, 'link-name'),
html = get_link_property(link, 'link-code-html').replace('&lt;', '<').replace('&gt;', '>'),
is_text = get_link_property(link, 'link-type') == 'Text Link',
width = int(get_link_property(link, 'creative-width')),
height = int(get_link_property(link, 'creative-height')),
start_date = from_cj_date(get_link_property(link, 'promotion-start-date')),
end_date = from_cj_date(get_link_property(link, 'promotion-end-date')),
cj_link_id = int(get_link_property(link, 'link-id')),
cj_advertiser_id = int(get_link_property(link, 'advertiser-id')),
cj_site_id = int(params['website-id']),
# Values not stored by adjector
description = get_link_property(link, 'description'),
link_type = get_link_property(link, 'link-type'),
advertiser_name = get_link_property(link, 'advertiser-name'),
promo_type = get_link_property(link, 'promotion-type'),
seven_day_epc = get_link_property(link, 'seven-day-epc'),
three_month_epc = get_link_property(link, 'three-month-epc'),
click_commission = get_link_property(link, 'click-commission'),
lead_commission = get_link_property(link, 'lead-commission'),
sale_commission = get_link_property(link, 'sale-commission'),
))
return links, {'count': count, 'total': total, 'page': page}
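# Illustrative usage sketch (hypothetical values; shown as a comment because the call
# performs a live HTTP request against the CJ link-search API):
#
#   links, counts = get_cj_links(website_id='1234567', keywords='shoes',
#                                records_per_page=100, page_number=1)
#   print '%(count)s of %(total)s links on page %(page)s' % counts
#
# Keyword arguments use underscores and are translated to the dashed names the REST
# API expects ('records_per_page' -> 'records-per-page').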
from __future__ import division
import logging
from sqlalchemy import and_, func, or_
from sqlalchemy.sql import case, join, select, subquery
from adjector.core.conf import conf
import adjector.model as model
log = logging.getLogger(__name__)
def precache_zone(zone, only_if_creative_ids = None):
'''
Precache creative selection data for this Zone (given as a Zone object).
If only_if_creative_ids is given, skip the work unless one of those creatives could appear in this zone.
Respect all zone requirements. Use creative weights and their containing set weights to weight randomness.
If zone.normalize_by_container, normalize creatives by the total weight of the set they are in,
so the total weight of the creatives directly in any set is always 1.
If block and text ads can be shown, a decision will be made to show one or the other based on the total probability of each type of creative.
Note that this function is called each time you update a relevant creative or zone.
'''
# Find zone site_id, if applicable. Default to global site_id, or else None.
cj_site_id = zone.parent_cj_site_id or conf.cj_site_id
#print 'precaching zone %s with oici %s' % (zone.id, only_if_creative_ids)
# FILTERING
# Figure out what kind of creative we need
# Size filtering
whereclause_zone = and_(or_(and_(model.Creative.width >= zone.min_width,
model.Creative.width <= zone.max_width,
model.Creative.height >= zone.min_height,
model.Creative.height <= zone.max_height),
model.Creative.is_text == True),
# Date filtering
or_(model.Creative.start_date == None, model.Creative.start_date <= func.now()),
or_(model.Creative.end_date == None, model.Creative.end_date >= func.now()),
# Site Id filtering
or_(model.Creative.cj_site_id == None, model.Creative.cj_site_id == cj_site_id,
and_(conf.enable_cj_site_replacements, cj_site_id != None, model.Creative.cj_site_id != None)),
# Disabled?
model.Creative.disabled == False)
# Sanity check - this shouldn't ever happen
if zone.num_texts == 0:
zone.creative_types = 2
# Filter by text or block if needed. If you want both we do some magic later.
# But first we will need to find out how much of each we have, weight wise.
# Also we delete all of the ones that we won't need
if zone.creative_types == 1:
zone.total_text_weight = 1.0
whereclause_zone.append(model.Creative.is_text==True)
for pair in model.CreativeZonePair.query.filter_by(zone_id = zone.id, is_text = False):
model.session.delete(pair)
elif zone.creative_types == 2:
zone.total_text_weight = 0.0
whereclause_zone.append(model.Creative.is_text==False)
for pair in model.CreativeZonePair.query.filter_by(zone_id = zone.id, is_text = True):
model.session.delete(pair)
# Bail if the edited creative won't go in here; no sense in redoing everything...
if only_if_creative_ids and not model.Creative.query.filter(and_(whereclause_zone, model.Creative.id.in_(only_if_creative_ids))).first():
return
#print 'continuing'
# WEIGHING
creatives = model.Creative.table
# First let's figure how to normalize by how many items will be displayed. This ensures all items are displayed equally.
# We want this to be 1 for blocks and num_texts for texts. Also throw in the zone.weight_texts
#items_displayed = cast(creatives.c.is_text, Integer) * (zone.num_texts - 1) + 1
text_weight_adjust = case([(True, zone.weight_texts / zone.num_texts), (False, 1)], creatives.c.is_text)
if zone.normalize_by_container:
# Find the total weight of each parent in order to normalize
parent_weights = subquery('parent_weight',
[creatives.c.parent_id, func.sum(creatives.c.parent_weight * creatives.c.weight).label('pw_total')],
group_by=creatives.c.parent_id)
# Join creatives table and normalized weight table - I'm renaming fields here to make life easier down the line
# SA was insisting on doing a full subquery anyways (I just wanted a join)
c1 = subquery('c1',
[creatives.c.id.label('id'), creatives.c.is_text.label('is_text'),
(creatives.c.weight * creatives.c.parent_weight * text_weight_adjust /
case([(parent_weights.c.pw_total > 0, parent_weights.c.pw_total)], else_ = None)).label('normalized_weight')], # Make sure we can't divide by 0
whereclause_zone, # here go our filters
from_obj=join(creatives, parent_weights, or_(creatives.c.parent_id == parent_weights.c.parent_id,
and_(creatives.c.parent_id == None, parent_weights.c.parent_id == None)))).alias('c1')
else:
# We don't normalize weight by parent weight, so we don't need fancy joins
c1 = subquery('c1',
[creatives.c.id.label('id'), creatives.c.is_text.label('is_text'),
(creatives.c.weight * creatives.c.parent_weight * text_weight_adjust).label('normalized_weight')],
whereclause_zone)
#for a in model.session.execute(c1).fetchall(): print a
if zone.creative_types == 0: # (Either type)
# Now that we have our weights in order, let's figure out how many of each thing (text/block) we have, weightwise.
# This will let us choose texts OR blocks later
texts_weight = select([func.sum(c1.c.normalized_weight)], c1.c.is_text == True).scalar() or 0
blocks_weight = select([func.sum(c1.c.normalized_weight)], c1.c.is_text == False).scalar() or 0
if texts_weight + blocks_weight == 0:
return _on_empty_zone(zone)
total_weight = texts_weight + blocks_weight
zone.total_text_weight = texts_weight / total_weight
c1texts = subquery('c1', [c1.c.id, c1.c.normalized_weight], c1.c.is_text == True)
c1blocks = subquery('c1', [c1.c.id, c1.c.normalized_weight], c1.c.is_text == False)
_finish_precache(c1texts, texts_weight, zone, True)
_finish_precache(c1blocks, blocks_weight, zone, False)
else:
# Find total normalized weight of all creatives in order to normalize *that*
total_weight = select([func.sum(c1.c.normalized_weight)]).scalar() or 0
if total_weight == 0:
return _on_empty_zone(zone)
_finish_precache(c1, total_weight, zone, zone.creative_types == 1)
def _finish_precache(c1, total_weight, zone, is_text):
c2 = c1.alias('c2')
# Find the total weight above a creative in the table in order to form weighted bins for the random number generator
# Note that this is the upper bound, not the lower (if it was the lower it could be NULL)
incremental_weight = select([func.sum(c1.c.normalized_weight) / total_weight], c1.c.id <= c2.c.id, from_obj=c1)
# Get everything into one thing
# Lower bound = inc_weight - final weight, upper_bound = inc_weight
shmush = select([c2.c.id,
incremental_weight.label('inc_weight'), (c2.c.normalized_weight / total_weight).label('final_weight')],
from_obj=c2).alias('shmush')
#for a in model.session.execute(shmush).fetchall(): print a
creatives = model.session.execute(shmush).fetchall()
for creative in creatives:
# current pair?
pair = model.CreativeZonePair.query.filter_by(zone_id = zone.id, creative_id = creative['id']).first()
if pair:
pair.set(dict(is_text = is_text,
lower_bound = creative['inc_weight'] - creative['final_weight'],
upper_bound = creative['inc_weight']))
else:
pair = model.CreativeZonePair(dict(zone_id = zone.id,
creative_id = creative['id'],
is_text = is_text,
lower_bound = creative['inc_weight'] - creative['final_weight'],
upper_bound = creative['inc_weight']))
# Delete old cache objects
for pair in model.CreativeZonePair.query.filter(and_(model.CreativeZonePair.zone_id == zone.id,
model.CreativeZonePair.is_text == is_text,
~model.CreativeZonePair.creative_id.in_([creative['id'] for creative in creatives]))):
model.session.delete(pair)
model.session.commit()
def _on_empty_zone(zone):
for pair in model.CreativeZonePair.query.filter(model.Zone.id == zone.id):
model.session.delete(pair)
model.session.commit()
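# A self-contained illustrative sketch (hypothetical weights) of how _finish_precache
# turns normalized weights into the (lower_bound, upper_bound) pairs stored on
# CreativeZonePair rows: each bound is a cumulative share of the total weight.
if __name__ == '__main__':
    normalized = [('creative_1', 2.0), ('creative_2', 1.0), ('creative_3', 1.0)]
    total = sum(weight for _, weight in normalized)
    inc = 0.0
    for ident, weight in normalized:
        final = weight / total
        inc += final
        print ident, 'bounds:', (inc - final, inc)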
import logging, re, time
from datetime import datetime, timedelta
from dateutil.relativedelta import relativedelta
from paste.deploy.converters import asbool
from tw.forms.validators import FancyValidator, FormValidator, Invalid, UnicodeString, Wrapper
from adjector.core.conf import conf
AsBool = Wrapper(to_python=asbool)
log = logging.getLogger(__name__)
class DateTime(FancyValidator):
strip = True
end_interval = False
messages = {
'invalidDate': 'Enter a valid date of the form YYYY-MM-DD HH:MM:SS. You may leave off anything but the year.'
}
def _to_python(self, value, state):
formats = ['%Y-%m-%d %H:%M:%S', '%Y-%m-%d %H:%M', '%Y-%m-%d', '%Y-%m', '%Y']
add_if_end = [None, relativedelta(seconds=59), relativedelta(days=1, seconds=-1),
relativedelta(months=1, seconds=-1), relativedelta(years=1, seconds=-1)]
for format, aie in zip(formats, add_if_end):
try:
dt = datetime(*(time.strptime(value, format)[0:6]))
if self.end_interval and aie:
dt += aie
return conf.timezone.localize(dt)
except ValueError, e:
log.debug('Validation error %s' % e)
raise Invalid(self.message('invalidDate', state), value, state)
def _from_python(self, value, state):
return value.strftime('%Y-%m-%d %H:%M:%S')
class SimpleString(UnicodeString):
messages = {
'invalidString': 'May only contain alphanumerics, underscores, periods, and dashes.'
}
def validate_python(self, value, state):
UnicodeString.validate_python(self, value, state)
if re.search(r'[^\w\-.]', value):
raise Invalid(self.message('invalidString', state), value, state)
# From Siafoo
class UniqueValue(FormValidator):
validate_partial_form = True
value_field = ''
previous_value_field = ''
unique_test = None # A function that gets passed the new value to test for uniqueness; should return a truthy or falsy value
not_empty = True
__unpackargs__ = ('unique_test', 'value_field', 'previous_value_field')
messages = {
'notUnique': 'You must enter a unique value'
}
def validate_partial(self, field_dict, state):
for name in [self.value_field, self.previous_value_field]:
if name and not field_dict.has_key(name):
return
self.validate_python(field_dict, state)
def validate_python(self, field_dict, state):
FormValidator.validate_python(self, field_dict, state)
value = field_dict.get(self.value_field)
previous_value = field_dict.get(self.previous_value_field)
if (not self.not_empty or value == '') and value != previous_value and not self.unique_test(value):
errors = {self.value_field: self.message('notUnique', state)}
error_list = errors.items()
error_list.sort()
error_message = '<br>\n'.join(
['%s: %s' % (name, value) for name, value in error_list])
raise Invalid(error_message, field_dict, state, error_dict=errors)
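# Illustrative examples of the DateTime validator (assuming conf.timezone is set,
# e.g. to UTC; to_python() is the public formencode entry point for _to_python):
#
#   DateTime().to_python('2009-05-01 12:30')         -> 2009-05-01 12:30:00
#   DateTime(end_interval=True).to_python('2009-05') -> 2009-05-31 23:59:59
#
# end_interval pushes a partial date to the end of the span it names, which is how
# an end-date field can accept '2009-05' and mean 'through the end of May 2009'.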
import tw.api as twa
import tw.forms as twf
from formencode.schema import Schema
from tw.forms.validators import Int, Number, UnicodeString
from adjector.core.conf import conf
import adjector.model as model
from adjector.forms.validators import *
class FilteringSchema(Schema):
allow_extra_fields = True
filter_extra_fields = True
class GenericField(twf.FormField):
template = 'genshi:adjector.templates.form.widgets.generic_field'
# Some shortcuts
UnicodeEmptyString = UnicodeString(strip=True, not_empty=False, if_missing=None)
UnicodeNonEmptyString = UnicodeString(strip=True, not_empty=True, max=80)
IntMissing = Int(if_missing=None)
PositiveInt = Int(min=0, if_missing=None)
class CreativeForm(twf.ListForm):
'''Creative creation form'''
class fields(twa.WidgetsList):
preview = GenericField(label_text='Preview', validator=None, edit_only=True)
title = twf.TextField(label_text='Title', validator=UnicodeNonEmptyString, size=50)
parent_id = twf.SingleSelectField(label_text='Set', validator=Int)
is_text = twf.SingleSelectField(label_text='Type', options=[[0, 'Block'], [1, 'Text']],
validator=AsBool(not_empty=True))
weight = twf.TextField(label_text='Weight', default=1.0, validator=Number(not_empty=True, min=0.0))
total_weight = twf.TextField(label_text='Total Weight', validator=None, disabled=True, edit_only=True)
html = twf.TextArea(label_text='HTML', cols=100, rows=10,
validator=UnicodeString(strip=True, not_empty=False, if_missing=''))
html_tracked = twf.TextArea(label_text='HTML with Tracking Code', cols=100, rows=10, validator=None, disabled=True, edit_only=True)
add_tracking = twf.CheckBox(label_text='Add Tracking',
help_text='Won\'t be used unless enable_adjector_click_tracking is true.',
default=True)
width = twf.TextField(label_text='Width', validator=PositiveInt)
height = twf.TextField(label_text='Height', validator=PositiveInt)
start_date = twf.TextField(label_text='Start Date', validator=DateTime(if_missing=None))
end_date = twf.TextField(label_text='End Date', validator=DateTime(end_interval=True, if_missing=None))
disabled = twf.CheckBox(label_text='Disabled', default=False)
delete = twf.SubmitButton(default='Delete', named_button=True, validator=None, edit_only=True)
validator = FilteringSchema()
template = 'genshi:adjector.templates.form.basic'
Creative = CreativeForm('new_creative')
class LocationForm(twf.ListForm):
'''Location creation form'''
class fields(twa.WidgetsList):
title = twf.TextField(label_text='Title', validator=UnicodeNonEmptyString, size=50)
parent_id = twf.SingleSelectField(label_text='Parent Location', validator=Int)
description = twf.TextArea(label_text='Description', cols=100, rows=5, validator=UnicodeEmptyString)
cj_site_id = twf.TextField(label_text='CJ Site ID', validator=Int(min=0))
delete = twf.SubmitButton(default='Delete', named_button=True, validator=None, edit_only=True)
validator = FilteringSchema()
template = 'genshi:adjector.templates.form.basic'
Location = LocationForm()
class SetForm(twf.ListForm):
'''Set creation form'''
class fields(twa.WidgetsList):
title = twf.TextField(label_text='Title', validator=UnicodeNonEmptyString, size=50)
parent_id = twf.SingleSelectField(label_text='Parent Set', validator=Int)
weight = twf.TextField(label_text='Weight', default=1.0, validator=Number(not_empty=True, min=0.0))
total_weight = twf.TextField(label_text='Total Weight', validator=None, disabled=True, edit_only=True)
description = twf.TextArea(label_text='Description', cols=100, rows=5, validator=UnicodeEmptyString)
delete = twf.SubmitButton(default='Delete', named_button=True, validator=None, edit_only=True)
validator = FilteringSchema()
template = 'genshi:adjector.templates.form.basic'
Set = SetForm()
class ZoneForm(twf.ListForm):
'''Zone creation form'''
class fields(twa.WidgetsList):
preview = GenericField(label_text='Preview', validator=None, edit_only=True)
title = twf.TextField(label_text='Title', validator=UnicodeNonEmptyString, size=50)
name = twf.TextField(label_text='Unique Name', help_text='optional; an alternate way to access the zone',
validator = SimpleString(strip=True, not_empty=False, max=80))
parent_id = twf.SingleSelectField(label_text='Location', validator=Int, help_text='necessary for imported creatives to display here')
creative_types = twf.SingleSelectField(label_text='Show Creative Types', options=[[0, 'Blocks and Text'], [1, 'Text Only'], [2, 'Blocks Only']],
validator=Int(not_empty=True, min=0, max=2))
description = twf.TextArea(label_text='Description', cols=100, rows=5, validator=UnicodeEmptyString)
min_width = twf.TextField(label_text='Min Width', validator=PositiveInt)
max_width = twf.TextField(label_text='Max Width', validator=PositiveInt)
min_height = twf.TextField(label_text='Min Height', validator=PositiveInt)
max_height = twf.TextField(label_text='Max Height', validator=PositiveInt)
num_texts = twf.TextField(label_text='Number of Text Creatives to Show', default=1, validator=Int(not_empty=False, min=1, if_missing=1))
before_all_text = twf.TextArea(label_text='Before All Text Creatives', validator=UnicodeEmptyString, cols=100)
after_all_text = twf.TextArea(label_text='After All Text Creatives', validator=UnicodeEmptyString, cols=100)
before_each_text = twf.TextArea(label_text='Before Each Text Creative', validator=UnicodeEmptyString, cols=100)
after_each_text = twf.TextArea(label_text='After Each Text Creative', validator=UnicodeEmptyString, cols=100)
weight_texts = twf.TextField(label_text='Adjust Weight for Text Creatives (Blocks = 1.0)',
default=1.0, validator=Number(not_empty=True, min=0.0, if_missing=1.0))
normalize_by_container = twf.CheckBox(label_text='Normalize By Container')
previous_name = twf.HiddenField(validator=UnicodeString(strip=True, not_empty=False, max=80, if_missing=None))
delete = twf.SubmitButton(default='Delete', named_button=True, validator=None, edit_only=True)
validator = FilteringSchema(chained_validators=[
UniqueValue(lambda name: model.Zone.query.filter_by(name=name).count() == 0, 'name', 'previous_name', not_empty=False)
])
template = 'genshi:adjector.templates.form.basic'
Zone = ZoneForm('new_zone')
class CJLinkSearchFields(twa.WidgetsList):
# Some of these search parameters don't appear to be supported by CJ's REST interface; the unsupported ones are left commented out below.
keywords = twf.TextField(label_text='Keywords', help_text = 'space separated, +keyword requires a keyword, -is a not, default is or operation',
validator = UnicodeString(strip=True, not_empty=False))
# link_size = twf.SingleSelectField(label_text='Size', validator = UnicodeString(strip=True, not_empty=False),
# options=['', '88x31 Micro Bar', '120x60 Button 2', '120x90 Button 1',
# '150x50 Banner', '234x60 Half Banner', '468x60 Full Banner', '125x125 Square Button', '180x150 Rectangle',
# '250x250 Square Pop-Up', '300x250 Medium Rectangle', '336x280 Large Rectangle', '240x400 Vertical Rectangle',
# '120x240 Vertical Banner', '120x600 Skyscraper', '160x600 Wide Skyscraper', 'Other'])
link_type = twf.SingleSelectField(label_text='Type', validator = UnicodeString(strip=True, not_empty=False),
options=['', 'Banner', 'Advanced Link', 'Text Link', 'Content Link', 'SmartLink',
'Product Catalog', 'Advertiser SmartZone', 'Keyword Link'])
promotion_start_date = twf.TextField(label_text='Start Date', help_text = 'Format: MM/DD/YYYY',
validator = UnicodeString(strip=True, not_empty=False))
promotion_end_date = twf.TextField(label_text='End Date', help_text = 'Format: MM/DD/YYYY or "Ongoing" for only links with no end date',
validator = UnicodeString(strip=True, not_empty=False))
promotion_type = twf.SingleSelectField(label_text='Promotion Type', help_text = 'Required if Start or End date given',
validator = UnicodeString(strip=True, not_empty=False),
options = ['',
['coupon', 'Coupon'],
['sweepstakes', 'Sweepstakes'],
['product', 'Product'],
['sale', 'Sale'],
['free shipping', 'Free Shipping'],
['seasonal link', 'Seasonal Link']])
# language = twf.TextField(label_text='Language', default='en', validator = UnicodeString(strip=True, not_empty=False))
# serviceable_area = twf.TextField(label_text='Serviceable Area', default='US', validator = UnicodeString(strip=True, not_empty=False))
records_per_page = twf.TextField(label_text='Records Per Page', default=100, validator=Int(min=0, not_empty=False))
page_number = twf.TextField(label_text='Page Number', default=1, validator=Int(min=0, not_empty=False))
# sort_by = twf.SingleSelectField(label_text='Sort By',
# validator = UnicodeString(strip=True, not_empty=False),
# options=[['', 'Relevance'],
# ['link-id', 'Link ID'],
# ['link-destination', 'Link Destination'],
# ['link-type', 'Link Type'],
# ['advertiser-id', 'Advertiser ID'],
# ['advertiser-name', 'Advertiser Name'],
# ['creative-width', 'Width'],
# ['creative-height', 'Height'],
# ['promotion-start-date', 'Start Date'],
# ['promotion-end-date', 'End Date'],
# ['category', 'Category']])
# sort_order = twf.SingleSelectField(label_text='Sort Order', options=[['dec', 'Descending'], ['asc', 'Ascending']],
# validator=UnicodeString(not_empty=True))
show_ignored = twf.CheckBox(label_text='Show Ignored Links')
show_imported = twf.CheckBox(label_text='Show Imported Links')
class CJLinkSearchForm(twf.ListForm):
class extra_field(twa.WidgetsList):
website_id = twf.SingleSelectField(label_text='Website', help_text='Doesn\'t matter much if enable_cj_site_replacements is true.',
validator=UnicodeString(not_empty=True))
fields = CJLinkSearchFields + extra_field
template = 'genshi:adjector.templates.form.basic'
CJLinkSearch = CJLinkSearchForm()
class CJLinkSearchOneSiteForm(twf.ListForm):
class extra_field(twa.WidgetsList):
website_id = twf.HiddenField(validator=UnicodeString(not_empty=True))
fields = CJLinkSearchFields + extra_field
template = 'genshi:adjector.templates.form.basic'
CJLinkSearchOneSite = CJLinkSearchOneSiteForm() | Adjector | /Adjector-1.0b1.tar.gz/Adjector-1.0b1/adjector/forms/forms.py | forms.py |
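# Usage sketch (hypothetical controller code; the render/validate calls follow
# the ToscaWidgets 0.9 widget API, so treat them as assumptions rather than a
# documented Adjector interface):
#
#     parents = model.Set.possible_parents()
#     html = Creative.render(value={}, child_args={'parent_id': {'options': parents}})
#     clean = Creative.validate(request.params)  # field validators + FilteringSchema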
import logging
from datetime import datetime
from elixir import using_options, using_table_options, BLOB, Boolean, ColumnProperty, \
DateTime, Entity, EntityMeta, Field, Float, Integer, ManyToMany, ManyToOne, \
OneToMany, OneToOne, SmallInteger, String, UnicodeText
from genshi import Markup
from sqlalchemy import func, UniqueConstraint
from adjector.core.conf import conf
from adjector.core.tracking import add_tracking, remove_tracking
log = logging.getLogger(__name__)
max_int = 2147483647
tz_now = lambda : datetime.now(conf.timezone)
UnicodeText = UnicodeText(assert_unicode=False) # shared text type with unicode assertions disabled
class CircularDependencyException(Exception):
pass
class GenericEntity(object):
def __init__(self, data):
self._updated = self.set(data)
def set(self, data):
for field in data.keys():
if hasattr(self, field):
if field == 'title':
data[field] = data[field][:80]
self.__setattr__(field, data[field])
else:
log.warning('No field: %s' % field)
def value(self):
return self.__dict__
class GenericListEntity(GenericEntity):
def set(self, data):
GenericEntity.set(self, data)
# Detect cycles in parenting - Brent's algorithm http://www.siafoo.net/algorithm/11
turtle = self
rabbit = self
steps_taken = 0
step_limit = 2
while True:
if not rabbit.parent_id:
break #no loop
rabbit = rabbit.query.get(rabbit.parent_id)
steps_taken += 1
if rabbit == turtle:
# loop!
raise CircularDependencyException
if steps_taken == step_limit:
steps_taken = 0
step_limit *=2
turtle = rabbit
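# Standalone illustration of the cycle check above (not used by the model):
# Brent's algorithm walks 'rabbit' one parent at a time and teleports 'turtle'
# to rabbit's position whenever the step count reaches a power of two.
def _has_parent_cycle(parent_of, start):
    # parent_of: plain dict mapping node -> parent (or None); hypothetical helper
    turtle = rabbit = start
    steps_taken, step_limit = 0, 2
    while parent_of.get(rabbit) is not None:
        rabbit = parent_of[rabbit]
        steps_taken += 1
        if rabbit == turtle:
            return True # loop found
        if steps_taken == step_limit:
            steps_taken, step_limit, turtle = 0, step_limit * 2, rabbit
    return False
# e.g. _has_parent_cycle({'a': 'b', 'b': 'a'}, 'a') -> True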
class CJIgnoredLink(Entity):
cj_advertiser_id = Field(Integer, required=True)
cj_link_id = Field(Integer, required=True)
using_options(tablename=conf.table_prefix + 'cj_ignored_links')
using_table_options(UniqueConstraint('cj_link_id'))
def __init__(self, link_id, advertiser_id):
self.cj_advertiser_id = advertiser_id
self.cj_link_id = link_id
class Click(Entity):
time = Field(DateTime(timezone=True), required=True, default=tz_now)
creative = ManyToOne('Creative', ondelete='set null')
zone = ManyToOne('Zone', ondelete='set null')
using_options(tablename=conf.table_prefix + 'clicks')
def __init__(self, creative_id, zone_id):
self.creative_id = creative_id
self.zone_id = zone_id
class Creative(GenericEntity, Entity):
parent = ManyToOne('Set', required=False, ondelete='set null')
#zones = ManyToMany('Zone', tablename='creatives_to_zones')
creative_zone_pairs = OneToMany('CreativeZonePair', cascade='delete')
title = Field(String(80, convert_unicode=True), required=True)
html = Field(UnicodeText, required=True, default='')
is_text = Field(Boolean, required=True, default=False)
width = Field(Integer, required=True, default=0)
height = Field(Integer, required=True, default=0)
start_date = Field(DateTime(timezone=True))
end_date = Field(DateTime(timezone=True))
weight = Field(Float, required=True, default=1.0)
add_tracking = Field(Boolean, required=True, default=True)
disabled = Field(Boolean, required=True, default=False)
create_date = Field(DateTime(timezone=True), required=True, default=tz_now)
cj_link_id = Field(Integer)
cj_advertiser_id = Field(Integer)
cj_site_id = Field(Integer)
views = OneToMany('View')
clicks = OneToMany('Click')
# Cached Values
html_tracked = Field(UnicodeText) #will be overwritten on set
parent_weight = Field(Float, required=True, default=1.0) # overwritten on any parent weight change
using_options(tablename=conf.table_prefix + 'creatives', order_by='title')
using_table_options(UniqueConstraint('cj_link_id'))
def __init__(self, data):
GenericEntity.__init__(self, data)
if self.parent_id:
self.parent_weight = Set.get(self.parent_id).weight
def get_clicks(self, start=None, end=None):
query = Click.query.filter_by(creative_id = self.id)
if start:
query = query.filter(Click.time > start)
if end:
query = query.filter(Click.time < end)
return query.count()
def get_views(self, start=None, end=None):
query = View.query.filter_by(creative_id = self.id)
if start:
query = query.filter(View.time > start)
if end:
query = query.filter(View.time < end)
return query.count()
@staticmethod
def possible_parents(this=None):
return [[set.id, set.title] for set in Set.query()]
def set(self, data):
old_parent_id = self.parent_id
old_html = self.html
old_add_tracking = self.add_tracking
GenericEntity.set(self, data)
if self.parent_id != old_parent_id:
self.parent_weight = Set.get(self.parent_id).weight
# TODO: handle block/text type changes properly
# Parse html
if self.html != old_html or self.add_tracking != old_add_tracking:
if self.add_tracking is not False:
self.html_tracked = add_tracking(self.html)
else:
self.html_tracked = None
return [self]
def value(self):
value = GenericEntity.value(self)
value['preview'] = Markup(remove_tracking(self.html, self.cj_site_id))
value['total_weight'] = self.weight * self.parent_weight
value['html_tracked'] = value['html_tracked'] or value['html']
return value
def view(self):
return '%s/creative/%i' % (conf.admin_base_url, self.id)
class CreativeZonePair(GenericEntity, Entity):
creative = ManyToOne('Creative', ondelete='cascade', use_alter=True)
zone = ManyToOne('Zone', ondelete='cascade', use_alter=True)
is_text = Field(Boolean, required=True)
lower_bound = Field(Float, required=True)
upper_bound = Field(Float, required=True)
using_options(tablename=conf.table_prefix + 'creative_zone_pairs')
using_table_options(UniqueConstraint('creative_id', 'zone_id'))
class Location(GenericListEntity, Entity):
''' A container for locations or zones '''
parent = ManyToOne('Location', required=False, ondelete='set null')
sublocations = OneToMany('Location')
zones = OneToMany('Zone')
title = Field(String(80, convert_unicode=True), required=True)
description = Field(UnicodeText)
create_date = Field(DateTime(timezone=True), required=True, default=tz_now)
cj_site_id = Field(Integer)
parent_cj_site_id = Field(Integer)
using_options(tablename=conf.table_prefix + 'locations', order_by='title')
def __init__(self, data):
GenericEntity.__init__(self, data)
if self.parent_id:
self.parent_cj_site_id = Location.get(self.parent_id).cj_site_id
def delete(self, data):
updated = []
for subloc in self.sublocations:
updated.extend(subloc.set(dict(parent_cj_site_id = None)))
for zone in self.zones:
updated.extend(zone.set(dict(parent_cj_site_id = None)))
Entity.delete(self)
return updated
@staticmethod
def possible_parents(this = None):
filter = None
if this:
filter = Location.id != this.id
return [[location.id, location.title] for location in Location.query.filter(filter)]
def set(self, data):
updated = [self]
old_parent_id = self.parent_id
old_cj_site_id = self.cj_site_id
old_parent_cj_site_id = self.parent_cj_site_id
GenericEntity.set(self, data)
if self.parent_id != old_parent_id:
self.parent_cj_site_id = Location.get(self.parent_id).cj_site_id
if self.cj_site_id != old_cj_site_id or self.parent_cj_site_id != old_parent_cj_site_id:
# Only pass parent- down if we don't have our own
for subloc in self.sublocations:
updated.extend(subloc.set(dict(parent_cj_site_id = self.cj_site_id or self.parent_cj_site_id)))
for zone in self.zones:
updated.extend(zone.set(dict(parent_cj_site_id = self.cj_site_id or self.parent_cj_site_id)))
return updated
def view(self):
return '%s/location/%i' % (conf.admin_base_url, self.id)
class Set(GenericListEntity, Entity):
parent = ManyToOne('Set', required=False, ondelete='set null')
subsets = OneToMany('Set')
creatives = OneToMany('Creative')
title = Field(String(80, convert_unicode=True), required=True)
description = Field(UnicodeText)
weight = Field(Float, required=True, default=1.0)
parent_weight = Field(Float, required=True, default=1.0) # overwritten on any parent weight change
create_date = Field(DateTime(timezone=True), required=True, default=tz_now)
cj_advertiser_id = Field(Integer)
using_options(tablename=conf.table_prefix + 'sets', order_by='title')
using_table_options(UniqueConstraint('cj_advertiser_id'))
def __init__(self, data):
GenericEntity.__init__(self, data)
if self.parent_id:
self.parent_weight = Set.get(self.parent_id).weight
def delete(self, data):
updated = []
for subset in self.subsets:
updated.extend(subset.set(dict(parent_weight = 1.0)))
for creative in self.creatives:
updated.extend(creative.set(dict(parent_weight = 1.0)))
Entity.delete(self)
return updated
@staticmethod
def possible_parents(this = None):
filter = None
if this:
filter = Set.id != this.id
return [[set.id, set.title] for set in Set.query.filter(filter)]
def set(self, data):
updated = [self]
old_parent_id = self.parent_id
old_weight = self.weight
old_parent_weight = self.parent_weight
GenericEntity.set(self, data)
if self.parent_id != old_parent_id:
self.parent_weight = Set.get(self.parent_id).weight
if self.weight != old_weight or self.parent_weight != old_parent_weight:
for subset in self.subsets:
updated.extend(subset.set(dict(parent_weight = self.parent_weight * self.weight)))
for creative in self.creatives:
updated.extend(creative.set(dict(parent_weight = self.parent_weight * self.weight)))
return updated
def value(self):
value = GenericEntity.value(self)
value['total_weight'] = self.weight * self.parent_weight
return value
def view(self):
return '%s/set/%i' % (conf.admin_base_url, self.id)
class View(GenericEntity, Entity):
time = Field(DateTime(timezone=True), required=True, default=tz_now)
creative = ManyToOne('Creative', ondelete='set null')
zone = ManyToOne('Zone', ondelete='set null')
using_options(tablename=conf.table_prefix + 'views')
def __init__(self, creative_id, zone_id):
self.creative_id = creative_id
self.zone_id = zone_id
class Zone(GenericEntity, Entity):
parent = ManyToOne('Location', required=False, ondelete='set null')
creative_zone_pairs = OneToMany('CreativeZonePair', cascade='delete')
name = Field(String(80, convert_unicode=True), required=False)
title = Field(String(80, convert_unicode=True), required=True)
description = Field(UnicodeText)
#creatives = ManyToMany('Creative', tablename='creatives_to_zones')
normalize_by_container = Field(Boolean, required=True, default=False)
creative_types = Field(SmallInteger, required=True, default=0) #0: Both, 1: Text, 2: Blocks
# These only matter if blocks allowed
min_width = Field(Integer, required=True, default=0)
max_width = Field(Integer, required=True, default=max_int)
min_height = Field(Integer, required=True, default=0)
max_height = Field(Integer, required=True, default=max_int)
# These only matter if text allowed
num_texts = Field(SmallInteger, required=True, default=1)
weight_texts = Field(Float, required=True, default=1.0)
before_all_text = Field(UnicodeText)
after_all_text = Field(UnicodeText)
before_each_text = Field(UnicodeText)
after_each_text = Field(UnicodeText)
create_date = Field(DateTime(timezone=True), required=True, default=tz_now)
# Cached from parent
parent_cj_site_id = Field(Integer)
# Cached from creatives
total_text_weight = Field(Float) # no default needed; recomputed quickly when the creative/zone caches are rebuilt
views = OneToMany('View')
clicks = OneToMany('Click')
using_options(tablename=conf.table_prefix + 'zones', order_by='title')
def __init__(self, data):
GenericEntity.__init__(self, data)
if self.parent_id:
self.parent_cj_site_id = Location.get(self.parent_id).cj_site_id
def get_clicks(self, start=None, end=None):
query = Click.query.filter_by(zone_id = self.id)
if start:
query = query.filter(Click.time > start)
if end:
query = query.filter(Click.time < end)
return query.count()
def get_views(self, start=None, end=None):
query = View.query.filter_by(zone_id = self.id)
if start:
query = query.filter(View.time > start)
if end:
query = query.filter(View.time < end)
return query.count()
@staticmethod
def possible_parents(this=None):
return [[location.id, location.title] for location in Location.query()]
def set(self, data):
if data.has_key('previous_name'):
del data['previous_name']
old_parent_id = self.parent_id
GenericEntity.set(self, data)
if self.parent_id != old_parent_id:
self.parent_cj_site_id = Location.get(self.parent_id).cj_site_id
return [self]
def value(self):
val = self.__dict__.copy()
val['previous_name'] = self.name
return val
def view(self):
return '%s/zone/%i' % (conf.admin_base_url, self.id) | Adjector | /Adjector-1.0b1.tar.gz/Adjector-1.0b1/adjector/model/entities.py | entities.py |
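# Illustration (not used by the model): how a cached parent_weight combines
# with an item's own weight into the 'total_weight' exposed by value() above.
def _example_total_weight(weight, parent_weight):
    return weight * parent_weight
# e.g. _example_total_weight(2.0, 0.5) == 1.0 - a half-weight parent set halves its creatives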
import sys
DEFAULT_VERSION = "0.6c9"
DEFAULT_URL = "http://pypi.python.org/packages/%s/s/setuptools/" % sys.version[:3]
md5_data = {
'setuptools-0.6b1-py2.3.egg': '8822caf901250d848b996b7f25c6e6ca',
'setuptools-0.6b1-py2.4.egg': 'b79a8a403e4502fbb85ee3f1941735cb',
'setuptools-0.6b2-py2.3.egg': '5657759d8a6d8fc44070a9d07272d99b',
'setuptools-0.6b2-py2.4.egg': '4996a8d169d2be661fa32a6e52e4f82a',
'setuptools-0.6b3-py2.3.egg': 'bb31c0fc7399a63579975cad9f5a0618',
'setuptools-0.6b3-py2.4.egg': '38a8c6b3d6ecd22247f179f7da669fac',
'setuptools-0.6b4-py2.3.egg': '62045a24ed4e1ebc77fe039aa4e6f7e5',
'setuptools-0.6b4-py2.4.egg': '4cb2a185d228dacffb2d17f103b3b1c4',
'setuptools-0.6c1-py2.3.egg': 'b3f2b5539d65cb7f74ad79127f1a908c',
'setuptools-0.6c1-py2.4.egg': 'b45adeda0667d2d2ffe14009364f2a4b',
'setuptools-0.6c2-py2.3.egg': 'f0064bf6aa2b7d0f3ba0b43f20817c27',
'setuptools-0.6c2-py2.4.egg': '616192eec35f47e8ea16cd6a122b7277',
'setuptools-0.6c3-py2.3.egg': 'f181fa125dfe85a259c9cd6f1d7b78fa',
'setuptools-0.6c3-py2.4.egg': 'e0ed74682c998bfb73bf803a50e7b71e',
'setuptools-0.6c3-py2.5.egg': 'abef16fdd61955514841c7c6bd98965e',
'setuptools-0.6c4-py2.3.egg': 'b0b9131acab32022bfac7f44c5d7971f',
'setuptools-0.6c4-py2.4.egg': '2a1f9656d4fbf3c97bf946c0a124e6e2',
'setuptools-0.6c4-py2.5.egg': '8f5a052e32cdb9c72bcf4b5526f28afc',
'setuptools-0.6c5-py2.3.egg': 'ee9fd80965da04f2f3e6b3576e9d8167',
'setuptools-0.6c5-py2.4.egg': 'afe2adf1c01701ee841761f5bcd8aa64',
'setuptools-0.6c5-py2.5.egg': 'a8d3f61494ccaa8714dfed37bccd3d5d',
'setuptools-0.6c6-py2.3.egg': '35686b78116a668847237b69d549ec20',
'setuptools-0.6c6-py2.4.egg': '3c56af57be3225019260a644430065ab',
'setuptools-0.6c6-py2.5.egg': 'b2f8a7520709a5b34f80946de5f02f53',
'setuptools-0.6c7-py2.3.egg': '209fdf9adc3a615e5115b725658e13e2',
'setuptools-0.6c7-py2.4.egg': '5a8f954807d46a0fb67cf1f26c55a82e',
'setuptools-0.6c7-py2.5.egg': '45d2ad28f9750e7434111fde831e8372',
'setuptools-0.6c8-py2.3.egg': '50759d29b349db8cfd807ba8303f1902',
'setuptools-0.6c8-py2.4.egg': 'cba38d74f7d483c06e9daa6070cce6de',
'setuptools-0.6c8-py2.5.egg': '1721747ee329dc150590a58b3e1ac95b',
'setuptools-0.6c9-py2.3.egg': 'a83c4020414807b496e4cfbe08507c03',
'setuptools-0.6c9-py2.4.egg': '260a2be2e5388d66bdaee06abec6342a',
'setuptools-0.6c9-py2.5.egg': 'fe67c3e5a17b12c0e7c541b7ea43a8e6',
'setuptools-0.6c9-py2.6.egg': 'ca37b1ff16fa2ede6e19383e7b59245a',
}
import sys, os
try: from hashlib import md5
except ImportError: from md5 import md5
def _validate_md5(egg_name, data):
if egg_name in md5_data:
digest = md5(data).hexdigest()
if digest != md5_data[egg_name]:
print >>sys.stderr, (
"md5 validation of %s failed! (Possible download problem?)"
% egg_name
)
sys.exit(2)
return data
def use_setuptools(
version=DEFAULT_VERSION, download_base=DEFAULT_URL, to_dir=os.curdir,
download_delay=15
):
"""Automatically find/download setuptools and make it available on sys.path
`version` should be a valid setuptools version number that is available
as an egg for download under the `download_base` URL (which should end with
a '/'). `to_dir` is the directory where setuptools will be downloaded, if
it is not already available. If `download_delay` is specified, it should
be the number of seconds that will be paused before initiating a download,
should one be required. If an older version of setuptools is installed,
this routine will print a message to ``sys.stderr`` and raise SystemExit in
an attempt to abort the calling script.
"""
was_imported = 'pkg_resources' in sys.modules or 'setuptools' in sys.modules
def do_download():
egg = download_setuptools(version, download_base, to_dir, download_delay)
sys.path.insert(0, egg)
import setuptools; setuptools.bootstrap_install_from = egg
try:
import pkg_resources
except ImportError:
return do_download()
try:
pkg_resources.require("setuptools>="+version); return
except pkg_resources.VersionConflict, e:
if was_imported:
print >>sys.stderr, (
"The required version of setuptools (>=%s) is not available, and\n"
"can't be installed while this script is running. Please install\n"
" a more recent version first, using 'easy_install -U setuptools'."
"\n\n(Currently using %r)"
) % (version, e.args[0])
sys.exit(2)
else:
del pkg_resources, sys.modules['pkg_resources'] # reload ok
return do_download()
except pkg_resources.DistributionNotFound:
return do_download()
def download_setuptools(
version=DEFAULT_VERSION, download_base=DEFAULT_URL, to_dir=os.curdir,
delay = 15
):
"""Download setuptools from a specified location and return its filename
`version` should be a valid setuptools version number that is available
as an egg for download under the `download_base` URL (which should end
with a '/'). `to_dir` is the directory where the egg will be downloaded.
`delay` is the number of seconds to pause before an actual download attempt.
"""
import urllib2, shutil
egg_name = "setuptools-%s-py%s.egg" % (version,sys.version[:3])
url = download_base + egg_name
saveto = os.path.join(to_dir, egg_name)
src = dst = None
if not os.path.exists(saveto): # Avoid repeated downloads
try:
from distutils import log
if delay:
log.warn("""
---------------------------------------------------------------------------
This script requires setuptools version %s to run (even to display
help). I will attempt to download it for you (from
%s), but
you may need to enable firewall access for this script first.
I will start the download in %d seconds.
(Note: if this machine does not have network access, please obtain the file
%s
and place it in this directory before rerunning this script.)
---------------------------------------------------------------------------""",
version, download_base, delay, url
); from time import sleep; sleep(delay)
log.warn("Downloading %s", url)
src = urllib2.urlopen(url)
# Read/write all in one block, so we don't create a corrupt file
# if the download is interrupted.
data = _validate_md5(egg_name, src.read())
dst = open(saveto,"wb"); dst.write(data)
finally:
if src: src.close()
if dst: dst.close()
return os.path.realpath(saveto)
def main(argv, version=DEFAULT_VERSION):
"""Install or upgrade setuptools and EasyInstall"""
try:
import setuptools
except ImportError:
egg = None
try:
egg = download_setuptools(version, delay=0)
sys.path.insert(0,egg)
from setuptools.command.easy_install import main
return main(list(argv)+[egg]) # we're done here
finally:
if egg and os.path.exists(egg):
os.unlink(egg)
else:
if setuptools.__version__ == '0.0.1':
print >>sys.stderr, (
"You have an obsolete version of setuptools installed. Please\n"
"remove it from your system entirely before rerunning this script."
)
sys.exit(2)
req = "setuptools>="+version
import pkg_resources
try:
pkg_resources.require(req)
except pkg_resources.VersionConflict:
try:
from setuptools.command.easy_install import main
except ImportError:
from easy_install import main
main(list(argv)+[download_setuptools(delay=0)])
sys.exit(0) # try to force an exit
else:
if argv:
from setuptools.command.easy_install import main
main(argv)
else:
print "Setuptools version",version,"or greater has been installed."
print '(Run "ez_setup.py -U setuptools" to reinstall or upgrade.)'
def update_md5(filenames):
"""Update our built-in md5 registry"""
import re
for name in filenames:
base = os.path.basename(name)
f = open(name,'rb')
md5_data[base] = md5(f.read()).hexdigest()
f.close()
data = [" %r: %r,\n" % it for it in md5_data.items()]
data.sort()
repl = "".join(data)
import inspect
srcfile = inspect.getsourcefile(sys.modules[__name__])
f = open(srcfile, 'rb'); src = f.read(); f.close()
match = re.search("\nmd5_data = {\n([^}]+)}", src)
if not match:
print >>sys.stderr, "Internal error!"
sys.exit(2)
src = src[:match.start(1)] + repl + src[match.end(1):]
f = open(srcfile,'w')
f.write(src)
f.close()
if __name__=='__main__':
if len(sys.argv)>2 and sys.argv[1]=='--md5update':
update_md5(sys.argv[2:])
else:
main(sys.argv[1:]) | AdjectorClient | /AdjectorClient-1.0b1.tar.gz/AdjectorClient-1.0b1/ez_setup.py | ez_setup.py |
Adjector 1.0b
*************
Hi there. Thanks for using Adjector, a lightweight, flexible, open-source
ad server written in Python.
Adjector is licensed under the GPL, version 2 or 3, at your option.
For more information, see LICENSE.txt.
This Distribution
-----------------
This is the client-only Adjector distribution.
This distribution has fewer dependencies, but it can only read a database
created by the full version of Adjector. It is intended for systems that
need to serve ads while a separate installation is used to configure them.
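
For example, once the client is configured, ads can be rendered directly
from Python (a sketch; the configuration keys depend on your deployment):

    from adjector.client import initialize_adjector, render_zone
    initialize_adjector({'sqlalchemy.url': 'sqlite:////path/to/adjector.db'})
    html = render_zone('sidebar')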
A Trac plugin is also available. The full Adjector version and the Trac
plugin can be downloaded at
http://projects.icapsid.net/adjector/wiki/Download
Documentation
-------------
All of our documentation is online at
http://projects.icapsid.net/adjector
You may wish to get started with 'Installing the Adjector Client' at
http://projects.icapsid.net/adjector/wiki/ClientInstall
For questions, comments, help, or any other information, visit us online
or email [email protected].
| AdjectorClient | /AdjectorClient-1.0b1.tar.gz/AdjectorClient-1.0b1/README.txt | README.txt |
import os.path
import random
import re
from adjector.core.conf import conf
from adjector.core.cj_util import remove_tracking_cj
def add_tracking(html):
if re.search('google_ad_client', html):
return add_tracking_adsense(html)
else:
return add_tracking_generic(html)
def add_tracking_generic(html):
def repl(match):
groups = match.groups()
return groups[0] + 'ADJECTOR_TRACKING_BASE_URL/track/click_with_redirect?creative_id=ADJECTOR_CREATIVE_ID&zone_id=ADJECTOR_ZONE_ID&cache_bust=' + cache_bust() + '&url=' + groups[1] + groups[2]
# Note: the greedy leading '.*' means only the last anchor on each line is rewritten.
html_tracked = re.sub(r'''(.*<a[^>]+href\s*=\s*['"])([^"']+)(['"][^>]*>.*)''', repl, html)
if html == html_tracked: # if no change, don't touch.
return
else:
return html_tracked
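# Illustration: for a simple anchor, add_tracking_generic rewrites the href to
# the redirect tracker; the ADJECTOR_* placeholders are substituted later by
# render_zone, and <random> stands for the cache_bust() value.
#
# add_tracking_generic('<a href="http://example.com/">Buy</a>')
# -> '<a href="ADJECTOR_TRACKING_BASE_URL/track/click_with_redirect?creative_id=ADJECTOR_CREATIVE_ID&zone_id=ADJECTOR_ZONE_ID&cache_bust=<random>&url=http://example.com/">Buy</a>'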
def add_tracking_adsense(html):
adsense_tracking_code = open(os.path.join(conf.root, 'public', 'js', 'adsense_tracker.js')).read()
click_track = 'ADJECTOR_TRACKING_BASE_URL/track/click_with_image?creative_id=ADJECTOR_CREATIVE_ID&zone_id=ADJECTOR_ZONE_ID&cache_bust=' # cache_bust added in js
html_tracked = '''
<span>
%(html)s
<script type="text/javascript"><!--// <![CDATA[
/* adjector_click_track=%(click_track)s */
%(adsense_tracking_code)s
// ]]> --></script>
</span>
''' % dict(html=html, adsense_tracking_code=adsense_tracking_code, click_track=click_track)
return html_tracked
def cache_bust():
return str(random.random())[2:]
def remove_tracking(html, cj_site_id = None):
if cj_site_id:
return remove_tracking_cj(html, cj_site_id)
elif re.search('google_ad_client', html):
return remove_tracking_adsense(html)
else:
return html # we can't do anything
def remove_tracking_adsense(html):
html_notrack = '''
<script type='text/javascript'>
var adjector_google_adtest_backup = google_adtest;
var google_adtest='on';
</script>
%(html)s
<script type='text/javascript'>
var google_adtest=adjector_google_adtest_backup;
</script>
''' % dict(html=html)
return html_notrack | AdjectorClient | /AdjectorClient-1.0b1.tar.gz/AdjectorClient-1.0b1/adjector/core/tracking.py | tracking.py |
from __future__ import division
import logging
import random
import re
from sqlalchemy import and_, func, or_
from sqlalchemy.sql import case, join, select, subquery
import adjector.model as model
from adjector.core.conf import conf
from adjector.core.tracking import remove_tracking
log = logging.getLogger(__name__)
def old_render_zone(ident, track=None, admin=False):
'''
Render A Random Creative for this Zone. Access by id or name.
Respect all zone requirements. Use creative weights and their containing set weights to weight randomness.
If zone.normalize_by_container, normalize creatives by the total weight of the set they are in,
so the total weight of the creatives directly in any set is always 1.
If block and text ads can be shown, a decision will be made to show one or the other based on the total probability of each type of creative.
Superseded by render_zone below, which serves from precached creative/zone pairs; kept for reference.
'''
# Note that this is my first time seriously using SA, feel free to clean this up
if isinstance(ident, int) or ident.isdigit():
zone = model.Zone.get(int(ident))
else:
zone = model.Zone.query.filter_by(name=ident).first()
if zone is None: # Fail gracefully, don't commit suicide because someone deleted a zone from the ad server
log.error('Tried to render zone %s. Zone Not Found' % ident)
return ''
# Find zone site_id, if applicable. Default to global site_id, or else None.
cj_site_id = zone.parent_cj_site_id or conf.cj_site_id
# Figure out what kind of creative we need
# Size filtering
whereclause_zone = and_(or_(and_(model.Creative.width >= zone.min_width,
model.Creative.width <= zone.max_width,
model.Creative.height >= zone.min_height,
model.Creative.height <= zone.max_height),
model.Creative.is_text == True),
# Date filtering
or_(model.Creative.start_date == None, model.Creative.start_date <= func.now()),
or_(model.Creative.end_date == None, model.Creative.end_date >= func.now()),
# Site Id filtering
or_(model.Creative.cj_site_id == None, model.Creative.cj_site_id == cj_site_id,
and_(conf.enable_cj_site_replacements, cj_site_id != None, model.Creative.cj_site_id != None)),
# Disabled?
model.Creative.disabled == False)
creative_types = zone.creative_types # This might change later.
doing_text = None # just so it can't be undefined later
# Sanity check - this shouldn't ever happen
if zone.num_texts == 0:
creative_types = 2
# Filter by text or block if needed. If you want both we do some magic later. But first we need to find out how much of each we have, weight wise.
if creative_types == 1:
whereclause_zone.append(model.Creative.is_text==True)
number_needed = zone.num_texts
doing_text = True
elif creative_types == 2:
whereclause_zone.append(model.Creative.is_text==False)
number_needed = 1
doing_text = False
creatives = model.Creative.table
all_results = []
# Find random creatives; Loop until we have as many as we need
while True:
# First let's figure how to normalize by how many items will be displayed. This ensures all items are displayed equally.
# We want this to be 1 for blocks and num_texts for texts. Also throw in the zone.weight_texts
#items_displayed = cast(creatives.c.is_text, Integer) * (zone.num_texts - 1) + 1
text_weight_adjust = case([(True, zone.weight_texts / zone.num_texts), (False, 1)], creatives.c.is_text)
if zone.normalize_by_container:
# Find the total weight of each parent in order to normalize
parent_weights = subquery('parent_weight',
[creatives.c.parent_id, func.sum(creatives.c.parent_weight * creatives.c.weight).label('pw_total')],
group_by=creatives.c.parent_id)
# Join creatives table and normalized weight table - I'm renaming a lot of fields here to make life easier down the line
# SA was insisting on doing a full subquery anyways (I just wanted a join)
c1 = subquery('c1',
[creatives.c.id.label('id'), creatives.c.title.label('title'), creatives.c.html.label('html'),
creatives.c.html_tracked.label('html_tracked'), creatives.c.is_text.label('is_text'),
creatives.c.cj_site_id.label('cj_site_id'),
(creatives.c.weight * creatives.c.parent_weight * text_weight_adjust /
case([(parent_weights.c.pw_total > 0, parent_weights.c.pw_total)], else_ = None)).label('normalized_weight')], # Make sure we can't divide by 0
whereclause_zone, # here go our filters
from_obj=join(creatives, parent_weights, or_(creatives.c.parent_id == parent_weights.c.parent_id,
and_(creatives.c.parent_id == None, parent_weights.c.parent_id == None)))).alias('c1')
else:
# We don't normalize weight by parent weight, so we don't need fancy joins
c1 = subquery('c1',
[creatives.c.id, creatives.c.title, creatives.c.html, creatives.c.html_tracked, creatives.c.is_text, creatives.c.cj_site_id,
(creatives.c.weight * creatives.c.parent_weight * text_weight_adjust).label('normalized_weight')],
whereclause_zone)
#for a in model.session.execute(c1).fetchall(): print a
if creative_types == 0: # (Either type)
# Now that we have our weights in order, let's figure out how many of each thing (text/block) we have, weightwise.
texts_weight = select([func.sum(c1.c.normalized_weight)], c1.c.is_text == True).scalar() or 0
blocks_weight = select([func.sum(c1.c.normalized_weight)], c1.c.is_text == False).scalar() or 0
# Create weighted bins, text first (0-whatever). Decide right now whether to serve text or
# block creatives, in proportion to the total weight of each type, since a single render can't mix both (yet).
rand = random.random()
if texts_weight + blocks_weight == 0:
break
if rand < texts_weight / (texts_weight + blocks_weight):
c1 = c1.select().where(c1.c.is_text == True).alias('text')
total_weight = texts_weight
number_needed = zone.num_texts
doing_text = True
else:
c1 = c1.select().where(c1.c.is_text == False).alias('nottext')
total_weight = blocks_weight
number_needed = 1
doing_text = False
else:
# Find total normalized weight of all creatives in order to normalize *that*
total_weight = select([func.sum(c1.c.normalized_weight)])#.scalar() or 0
#if not total_weight:
# break
c2 = c1.alias('c2')
# Find the total weight above a creative in the table in order to form weighted bins for the random number generator
# Note that this is the upper bound, not the lower (if it was the lower it could be NULL)
incremental_weight = select([func.sum(c1.c.normalized_weight) / total_weight], c1.c.id <= c2.c.id, from_obj=c1)
# Get everything into one thing - for debugging this is a good place to select and print out stuff
shmush = select([c2.c.id, c2.c.title, c2.c.html, c2.c.html_tracked, c2.c.cj_site_id,
incremental_weight.label('inc_weight'), (c2.c.normalized_weight / total_weight).label('final_weight')],
from_obj=c2).alias('shmush')
#for a in model.session.execute(shmush).fetchall(): print a
# Generate the random numbers and bin comparisons in one shot (dense, but it saves about 10 lines).
# The 0.9999999999 factor guards against float precision: the weights might not quite sum to 1,
# so a draw extremely close to 1.0 could fall outside the last bin.
# Experimentally the error never seems worse than that, and the factor is imprecise enough to be displayed exactly by Python.
rand = [random.random() * 0.9999999999 for i in xrange(number_needed)]
whereclause_rand = or_(*[and_(shmush.c.inc_weight - shmush.c.final_weight <= rand[i], rand[i] < shmush.c.inc_weight) for i in xrange(number_needed)])
# Select only creatives where the random number falls between its cutoff and the next
results = model.session.execute(select([shmush.c.id, shmush.c.title, shmush.c.html, shmush.c.html_tracked, shmush.c.cj_site_id], whereclause_rand)).fetchall()
# Deal with number of results
if len(results) == 0:
if not doing_text or not all_results:
return ''
# Otherwise, we are probably just out of results.
break
if len(results) > number_needed:
log.error('Too many results while rendering zone %i. I got %i results and wanted %i' % (zone.id, len(results), number_needed))
results = results[:number_needed]
all_results.extend(results)
break
elif len(results) < number_needed:
if not doing_text:
raise Exception('Somehow we managed to get past several checks, and we have 0 < results < needed_results for block creatives.' + \
' Since needed_results should be 1, this seems fairly difficult.')
all_results.extend(results)
# It looks like we need more results, this should only happen when we are doing text. Try again.
number_needed -= len(results)
# Exclude ones we've already got
whereclause_zone.append(and_(*[model.Creative.id != result.id for result in results]))
# Set to only render text this time around
if creative_types == 0:
creative_types = 1
whereclause_zone.append(model.Creative.is_text == True)
# Continue loop...
else: # we have the right number?
all_results.extend(results)
break
if doing_text and len(all_results) < zone.num_texts:
log.warn('Could only retrieve %i of %i desired creatives for zone %i. This (hopefully) means you are requesting more creatives than exist.' \
% (len(all_results), zone.num_texts, zone.id))
# Ok, that's done, we have our results.
# Let's render some html
html = ''
if doing_text:
html += zone.before_all_text or ''
for creative in all_results:
if track or (track is None and conf.enable_adjector_view_tracking):
# Create a view thingy
model.View(creative['id'], zone.id)
model.session.commit()
# Figure out the html value...
# Use either click tracked or regular html
if (track or (track is None and conf.enable_adjector_click_tracking)) and creative['html_tracked'] is not None:
creative_html = creative['html_tracked'].replace('ADJECTOR_TRACKING_BASE_URL', conf.tracking_base_url)\
.replace('ADJECTOR_CREATIVE_ID', str(creative['id'])).replace('ADJECTOR_ZONE_ID', str(zone.id))
else:
creative_html = creative['html']
# Remove or modify third party click tracking
if (track is False or (track is None and not conf.enable_third_party_tracking)) and creative['cj_site_id'] is not None:
creative_html = remove_tracking(creative_html, creative['cj_site_id'])
elif conf.enable_cj_site_replacements:
creative_html = re.sub(str(creative['cj_site_id']), str(cj_site_id), creative_html)
########### Now we can do some text assembly ###########
# If text, add pre-text
if doing_text:
html += zone.before_each_text or ''
html += creative_html
# Are we in admin mode?
if admin:
html += '''
<div class='adjector_admin' style='color: red; background-color: silver'>
Creative: <a href='%(admin_base_url)s%(creative_url)s'>%(creative_title)s</a>
Zone: <a href='%(admin_base_url)s%(zone_url)s'>%(zone_title)s</a>
</div>
''' % dict(admin_base_url = conf.admin_base_url, creative_url = '/creative/%i' % creative['id'], zone_url = zone.view(),
creative_title = creative.title, zone_title = zone.title)
if doing_text:
html += zone.after_each_text or ''
if doing_text:
html += zone.after_all_text or ''
# Wrap in javascript if asked
if html and '<script' not in html and conf.require_javascript:
wrapper = '''<script type='text/javascript'>document.write('%s')</script>'''
# Do some quick substitutions to inject... #TODO there must be an existing function that does this
html = re.sub(r"'", r"\'", html) # escape quotes
html = re.sub(r"[\r\n]", r"", html) # remove line breaks
return wrapper % html
return html
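# Standalone sketch of the weighted-bin selection used in both renderers: each
# item owns the slice [upper - weight/total, upper) of [0, 1), so a uniform
# draw lands on an item with probability proportional to its weight.
def _pick_weighted(items):
    # items: list of (name, weight) pairs with positive total weight; illustrative only
    import random
    total = float(sum(weight for _, weight in items))
    draw = random.random() * 0.9999999999 # same float-precision guard as above
    upper = 0.0
    for name, weight in items:
        upper += weight / total
        if draw < upper:
            return name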
def render_zone(ident, track=None, admin=False):
'''
Render A Random Creative for this Zone, using precached data. Access by id or name.
Respect all zone requirements. Use creative weights and their containing set weights to weight randomness.
If zone.normalize_by_container, normalize creatives by the total weight of the set they are in,
so the total weight of the creatives directly in any set is always 1.
If block and text ads can be shown, a decision will be made to show one or the other based on the total probability of each type of creative.
Note that this function is called by the API function render_zone.
'''
# Note that this is my first time seriously using SA, feel free to clean this up
if isinstance(ident, int) or ident.isdigit():
zone = model.Zone.get(int(ident))
else:
zone = model.Zone.query.filter_by(name=ident).first()
if zone is None: # Fail gracefully, don't commit suicide because someone deleted a zone from the ad server
log.error('Tried to render zone %s. Zone Not Found' % ident)
return ''
# Find zone site_id, if applicable. Default to global site_id, or else None.
cj_site_id = zone.parent_cj_site_id or conf.cj_site_id
# Texts or blocks?
rand = random.random()
if rand < zone.total_text_weight:
# texts!
number_needed = zone.num_texts
doing_text = True
else:
# blocks!
number_needed = 1
doing_text = False
query = model.CreativeZonePair.query.filter_by(zone_id = zone.id, is_text = doing_text)
num_pairs = query.count()
if num_pairs == number_needed:
pairs = query.all()
else:
pairs = [] # keep going until we get as many as we need
still_needed = number_needed
banned_ranges = []
while still_needed:
# Generate the random numbers and bin comparisons in one shot (dense, but it saves about 10 lines).
# The 0.9999999999 factor guards against float precision: the weights might not quite sum to 1,
# so a draw extremely close to 1.0 could fall outside the last bin.
# Experimentally the error never seems worse than that, and the factor is imprecise enough to be displayed exactly by Python.
# Assemble random numbers
rands = []
while len(rands) < still_needed:
rand = random.random() * 0.9999999999
bad_rand = False
for lo, hi in banned_ranges:
if lo <= rand < hi:
bad_rand = True
break
if not bad_rand:
rands.append(rand)
# Select only creatives where the random number falls between its cutoff and the next
results = query.filter(or_(*[and_(model.CreativeZonePair.lower_bound <= rands[i],
rands[i] < model.CreativeZonePair.upper_bound) for i in xrange(still_needed)])).all()
# What if there are no results?
if len(results) == 0:
if not pairs: # I guess there are no results
return ''
break # or else we are just out of results
still_needed -= len(results)
pairs += results
# Exclude ones we've already got, if we need to loop again
banned_ranges.extend([pair.lower_bound, pair.upper_bound] for pair in results)
#JIC
if len(pairs) > number_needed:
# This shouldn't be able to happen
log.error('Too many results while rendering zone %i. I got %i results and wanted %i' % (zone.id, len(results), number_needed))
pairs = pairs[:number_needed]
elif len(pairs) < number_needed:
log.warn('Could only retrieve %i of %i desired creatives for zone %i. This (hopefully) means you are requesting more creatives than exist.' \
% (len(pairs), number_needed, zone.id))
# Ok, that's done, we have our results.
# Let's render some html
html = ''
if doing_text:
html += zone.before_all_text or ''
for pair in pairs:
creative = pair.creative
if track or (track is None and conf.enable_adjector_view_tracking):
# Record a view event with raw SQL - much faster than going through the ORM (almost instant)
model.session.execute('INSERT INTO views (creative_id, zone_id, time) VALUES (%i, %i, now())' % (creative.id, zone.id))
#model.View(creative.id, zone.id)
# Figure out the html value...
# Use either click tracked or regular html
if (track or (track is None and conf.enable_adjector_click_tracking)) and creative.html_tracked is not None:
creative_html = creative.html_tracked.replace('ADJECTOR_TRACKING_BASE_URL', conf.tracking_base_url)\
.replace('ADJECTOR_CREATIVE_ID', str(creative.id)).replace('ADJECTOR_ZONE_ID', str(zone.id))
else:
creative_html = creative.html
# Remove or modify third party click tracking
if (track is False or (track is None and not conf.enable_third_party_tracking)) and creative.cj_site_id is not None:
creative_html = remove_tracking(creative_html, creative.cj_site_id)
elif cj_site_id and creative.cj_site_id and conf.enable_cj_site_replacements:
creative_html = re.sub(str(creative.cj_site_id), str(cj_site_id), creative_html)
########### Now we can do some text assembly ###########
# If text, add pre-text
if doing_text:
html += zone.before_each_text or ''
html += creative_html
# Are we in admin mode?
if admin:
html += '''
<div class='adjector_admin' style='color: red; background-color: silver'>
Creative: <a href='%(admin_base_url)s%(creative_url)s'>%(creative_title)s</a>
Zone: <a href='%(admin_base_url)s%(zone_url)s'>%(zone_title)s</a>
</div>
''' % dict(admin_base_url = conf.admin_base_url, creative_url = '/creative/%i' % creative.id, zone_url = zone.view(),
creative_title = creative.title, zone_title = zone.title)
if doing_text:
html += zone.after_each_text or ''
if doing_text:
html += zone.after_all_text or ''
model.session.commit() # committing once, after the loop, saves a round trip per recorded view
# Wrap in javascript if asked
if html and '<script' not in html and conf.require_javascript:
wrapper = '''<script type='text/javascript'>document.write('%s')</script>'''
# Do some quick substitutions to inject... #TODO there must be an existing function that does this
html = re.sub(r"'", r"\'", html) # escape quotes
html = re.sub(r"[\r\n]", r"", html) # remove line breaks
return wrapper % html
return html | AdjectorClient | /AdjectorClient-1.0b1.tar.gz/AdjectorClient-1.0b1/adjector/core/render.py | render.py |
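# Usage sketch (assumes the model has been initialized, e.g. via
# adjector.client.initialize_adjector):
#
#     html = render_zone('sidebar')                  # by unique zone name
#     html = render_zone(3, track=False, admin=True) # by id; untracked, with admin overlay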
import logging
from datetime import datetime
from elixir import using_options, using_table_options, BLOB, Boolean, ColumnProperty, \
DateTime, Entity, EntityMeta, Field, Float, Integer, ManyToMany, ManyToOne, \
OneToMany, OneToOne, SmallInteger, String, UnicodeText
from genshi import Markup
from sqlalchemy import func, UniqueConstraint
from adjector.core.conf import conf
from adjector.core.tracking import add_tracking, remove_tracking
log = logging.getLogger(__name__)
max_int = 2147483647
tz_now = lambda : datetime.now(conf.timezone)
UnicodeText = UnicodeText(assert_unicode=False)
class CircularDependencyException(Exception):
pass
class GenericEntity(object):
def __init__(self, data):
self._updated = self.set(data)
def set(self, data):
for field in data.keys():
if hasattr(self, field):
if field == 'title':
data[field] = data[field][:80]
self.__setattr__(field, data[field])
else:
log.warning('No field: %s' % field)
def value(self):
return self.__dict__
class GenericListEntity(GenericEntity):
def set(self, data):
GenericEntity.set(self, data)
# Detect cycles in parenting - Brent's algorithm http://www.siafoo.net/algorithm/11
turtle = self
rabbit = self
steps_taken = 0
step_limit = 2
while True:
if not rabbit.parent_id:
break #no loop
rabbit = rabbit.query.get(rabbit.parent_id)
steps_taken += 1
if rabbit == turtle:
# loop!
raise CircularDependencyException
if steps_taken == step_limit:
steps_taken = 0
step_limit *=2
turtle = rabbit
class CJIgnoredLink(Entity):
cj_advertiser_id = Field(Integer, required=True)
cj_link_id = Field(Integer, required=True)
using_options(tablename=conf.table_prefix + 'cj_ignored_links')
using_table_options(UniqueConstraint('cj_link_id'))
def __init__(self, link_id, advertiser_id):
self.cj_advertiser_id = advertiser_id
self.cj_link_id = link_id
class Click(Entity):
time = Field(DateTime(timezone=True), required=True, default=tz_now)
creative = ManyToOne('Creative', ondelete='set null')
zone = ManyToOne('Zone', ondelete='set null')
using_options(tablename=conf.table_prefix + 'clicks')
def __init__(self, creative_id, zone_id):
self.creative_id = creative_id
self.zone_id = zone_id
class Creative(GenericEntity, Entity):
parent = ManyToOne('Set', required=False, ondelete='set null')
#zones = ManyToMany('Zone', tablename='creatives_to_zones')
creative_zone_pairs = OneToMany('CreativeZonePair', cascade='delete')
title = Field(String(80, convert_unicode=True), required=True)
html = Field(UnicodeText, required=True, default='')
is_text = Field(Boolean, required=True, default=False)
width = Field(Integer, required=True, default=0)
height = Field(Integer, required=True, default=0)
start_date = Field(DateTime(timezone=True))
end_date = Field(DateTime(timezone=True))
weight = Field(Float, required=True, default=1.0)
add_tracking = Field(Boolean, required=True, default=True)
disabled = Field(Boolean, required=True, default=False)
create_date = Field(DateTime(timezone=True), required=True, default=tz_now)
cj_link_id = Field(Integer)
cj_advertiser_id = Field(Integer)
cj_site_id = Field(Integer)
views = OneToMany('View')
clicks = OneToMany('Click')
# Cached Values
html_tracked = Field(UnicodeText) #will be overwritten on set
parent_weight = Field(Float, required=True, default=1.0) # overwritten on any parent weight change
using_options(tablename=conf.table_prefix + 'creatives', order_by='title')
using_table_options(UniqueConstraint('cj_link_id'))
def __init__(self, data):
GenericEntity.__init__(self, data)
if self.parent_id:
self.parent_weight = Set.get(self.parent_id).weight
def get_clicks(self, start=None, end=None):
query = Click.query.filter_by(creative_id = self.id)
if start:
query = query.filter(Click.time > start)
if end:
query = query.filter(Click.time < end)
return query.count()
def get_views(self, start=None, end=None):
query = View.query.filter_by(creative_id = self.id)
if start:
query = query.filter(View.time > start)
if end:
query = query.filter(View.time < end)
return query.count()
@staticmethod
def possible_parents(this=None):
return [[set.id, set.title] for set in Set.query()]
def set(self, data):
old_parent_id = self.parent_id
old_html = self.html
old_add_tracking = self.add_tracking
GenericEntity.set(self, data)
if self.parent_id != old_parent_id:
self.parent_weight = Set.get(self.parent_id).weight
# TODO: Handle Block / Text bullshit
# Parse html
if self.html != old_html or self.add_tracking != old_add_tracking:
if self.add_tracking is not False:
self.html_tracked = add_tracking(self.html)
else:
self.html_tracked = None
return [self]
def value(self):
value = GenericEntity.value(self)
value['preview'] = Markup(remove_tracking(self.html, self.cj_site_id))
value['total_weight'] = self.weight * self.parent_weight
value['html_tracked'] = value['html_tracked'] or value['html']
return value
def view(self):
return '%s/creative/%i' % (conf.admin_base_url, self.id)
class CreativeZonePair(GenericEntity, Entity):
creative = ManyToOne('Creative', ondelete='cascade', use_alter=True)
zone = ManyToOne('Zone', ondelete='cascade', use_alter=True)
is_text = Field(Boolean, required=True)
lower_bound = Field(Float, required=True)
upper_bound = Field(Float, required=True)
using_options(tablename=conf.table_prefix + 'creative_zone_pairs')
using_table_options(UniqueConstraint('creative_id', 'zone_id'))
class Location(GenericListEntity, Entity):
''' A container for locations or zones '''
parent = ManyToOne('Location', required=False, ondelete='set null')
sublocations = OneToMany('Location')
zones = OneToMany('Zone')
title = Field(String(80, convert_unicode=True), required=True)
description = Field(UnicodeText)
create_date = Field(DateTime(timezone=True), required=True, default=tz_now)
cj_site_id = Field(Integer)
parent_cj_site_id = Field(Integer)
using_options(tablename=conf.table_prefix + 'locations', order_by='title')
def __init__(self, data):
GenericEntity.__init__(self, data)
if self.parent_id:
self.parent_cj_site_id = Location.get(self.parent_id).cj_site_id
def delete(self, data):
updated = []
for subloc in self.sublocations:
updated.extend(subloc.set(dict(parent_cj_site_id = None)))
for zone in self.zones:
updated.extend(zone.set(dict(parent_cj_site_id = None)))
Entity.delete(self)
return updated
@staticmethod
def possible_parents(this = None):
filter = None
if this:
filter = Location.id != this.id
return [[location.id, location.title] for location in Location.query.filter(filter)]
def set(self, data):
updated = [self]
old_parent_id = self.parent_id
old_cj_site_id = self.cj_site_id
old_parent_cj_site_id = self.parent_cj_site_id
GenericEntity.set(self, data)
if self.parent_id != old_parent_id:
self.parent_cj_site_id = Location.get(self.parent_id).cj_site_id
if self.cj_site_id != old_cj_site_id or self.parent_cj_site_id != old_parent_cj_site_id:
# Only pass parent- down if we don't have our own
for subloc in self.sublocations:
updated.extend(subloc.set(dict(parent_cj_site_id = self.cj_site_id or self.parent_cj_site_id)))
for zone in self.zones:
updated.extend(zone.set(dict(parent_cj_site_id = self.cj_site_id or self.parent_cj_site_id)))
return updated
def view(self):
return '%s/location/%i' % (conf.admin_base_url, self.id)
class Set(GenericListEntity, Entity):
parent = ManyToOne('Set', required=False, ondelete='set null')
subsets = OneToMany('Set')
creatives = OneToMany('Creative')
title = Field(String(80, convert_unicode=True), required=True)
description = Field(UnicodeText)
weight = Field(Float, required=True, default=1.0)
parent_weight = Field(Float, required=True, default=1.0) # overwritten on any parent weight change
create_date = Field(DateTime(timezone=True), required=True, default=tz_now)
cj_advertiser_id = Field(Integer)
using_options(tablename=conf.table_prefix + 'sets', order_by='title')
using_table_options(UniqueConstraint('cj_advertiser_id'))
def __init__(self, data):
GenericEntity.__init__(self, data)
if self.parent_id:
self.parent_weight = Set.get(self.parent_id).weight
def delete(self, data):
updated = []
for subset in self.subsets:
updated.extend(subset.set(dict(parent_weight = 1.0)))
for creative in self.creatives:
updated.extend(creative.set(dict(parent_weight = 1.0)))
Entity.delete(self)
return updated
@staticmethod
def possible_parents(this = None):
filter = None
if this:
filter = Set.id != this.id
return [[set.id, set.title] for set in Set.query.filter(filter)]
def set(self, data):
updated = [self]
old_parent_id = self.parent_id
old_weight = self.weight
old_parent_weight = self.parent_weight
GenericEntity.set(self, data)
if self.parent_id != old_parent_id:
self.parent_weight = Set.get(self.parent_id).weight
if self.weight != old_weight or self.parent_weight != old_parent_weight:
for subset in self.subsets:
updated.extend(subset.set(dict(parent_weight = self.parent_weight * self.weight)))
for creative in self.creatives:
updated.extend(creative.set(dict(parent_weight = self.parent_weight * self.weight)))
return updated
def value(self):
value = GenericEntity.value(self)
value['total_weight'] = self.weight * self.parent_weight
return value
def view(self):
return '%s/set/%i' % (conf.admin_base_url, self.id)
class View(GenericEntity, Entity):
time = Field(DateTime(timezone=True), required=True, default=tz_now)
creative = ManyToOne('Creative', ondelete='set null')
zone = ManyToOne('Zone', ondelete='set null')
using_options(tablename=conf.table_prefix + 'views')
def __init__(self, creative_id, zone_id):
self.creative_id = creative_id
self.zone_id = zone_id
class Zone(GenericEntity, Entity):
parent = ManyToOne('Location', required=False, ondelete='set null')
creative_zone_pairs = OneToMany('CreativeZonePair', cascade='delete')
name = Field(String(80, convert_unicode=True), required=False)
title = Field(String(80, convert_unicode=True), required=True)
description = Field(UnicodeText)
#creatives = ManyToMany('Creative', tablename='creatives_to_zones')
normalize_by_container = Field(Boolean, required=True, default=False)
creative_types = Field(SmallInteger, required=True, default=0) #0: Both, 1: Text, 2: Blocks
# These only matter if blocks allowed
min_width = Field(Integer, required=True, default=0)
max_width = Field(Integer, required=True, default=max_int)
min_height = Field(Integer, required=True, default=0)
max_height = Field(Integer, required=True, default=max_int)
# These only matter if text allowed
num_texts = Field(SmallInteger, required=True, default=1)
weight_texts = Field(Float, required=True, default=1.0)
before_all_text = Field(UnicodeText)
after_all_text = Field(UnicodeText)
before_each_text = Field(UnicodeText)
after_each_text = Field(UnicodeText)
create_date = Field(DateTime(timezone=True), required=True, default=tz_now)
# Cached from parent
parent_cj_site_id = Field(Integer)
# Cached from creatives
total_text_weight = Field(Float) # i dunno, some default? should be updated quick.
views = OneToMany('View')
clicks = OneToMany('Click')
using_options(tablename=conf.table_prefix + 'zones', order_by='title')
def __init__(self, data):
GenericEntity.__init__(self, data)
if self.parent_id:
self.parent_cj_site_id = Location.get(self.parent_id).cj_site_id
def get_clicks(self, start=None, end=None):
query = Click.query.filter_by(zone_id = self.id)
if start:
query = query.filter(Click.time > start)
if end:
query = query.filter(Click.time < end)
return query.count()
def get_views(self, start=None, end=None):
query = View.query.filter_by(zone_id = self.id)
if start:
query = query.filter(View.time > start)
if end:
query = query.filter(View.time < end)
return query.count()
@staticmethod
def possible_parents(this=None):
return [[location.id, location.title] for location in Location.query]
def set(self, data):
if 'previous_name' in data:
del data['previous_name']
old_parent_id = self.parent_id
GenericEntity.set(self, data)
if self.parent_id != old_parent_id:
self.parent_cj_site_id = Location.get(self.parent_id).cj_site_id
return [self]
def value(self):
val = self.__dict__.copy()
val['previous_name'] = self.name
return val
def view(self):
return '%s/zone/%i' % (conf.admin_base_url, self.id) | AdjectorClient | /AdjectorClient-1.0b1.tar.gz/AdjectorClient-1.0b1/adjector/model/entities.py | entities.py |
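# Example (sketch, not part of the original module): counting recent clicks for
# a zone, assuming an Elixir/SQLAlchemy session is already configured and the
# zone id below exists.
#
#   from datetime import datetime, timedelta
#   zone = Zone.get(1)
#   week_ago = datetime.now() - timedelta(days=7)
#   print(zone.get_clicks(start=week_ago))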
Adjector 1.0b
*************
Hi there. Thanks for using Adjector, a lightweight, flexible, open-source
ad server written in Python.
Adjector is licensed under the GPL, version 2 or 3, at your option.
For more information, see LICENSE.txt.
This Distribution
-----------------
This is the Trac plugin for Adjector.
Either the full version or the client-only version of Adjector is also
required. If neither is installed on your system, the client-only version
will be installed automatically when you install this plugin.
Both versions can be downloaded at
http://projects.icapsid.net/adjector/wiki/Download
Documentation
-------------
All of our documentation is online at
http://projects.icapsid.net/adjector
You may wish to get started with 'The Trac Plugin' at
http://projects.icapsid.net/adjector/wiki/TracPlugin
For questions, comments, help, or any other information, visit us online
or email [email protected]. | AdjectorTracPlugin | /AdjectorTracPlugin-1.0b1.tar.gz/AdjectorTracPlugin-1.0b1/README.txt | README.txt |
from adjector.client import initialize_adjector, render_zone
from trac.core import Component, implements
from trac.web.api import IRequestFilter
class AdjectorTracPlugin(Component):
implements(IRequestFilter)
#magic self variables: env, config, log
def __init__(self):
self.log.info('Initializing Adjector Trac plugin')
config = dict(self.config.options('adjector'))
initialize_adjector(config)
#IRequestFilter
"""Extension point interface for components that want to filter HTTP
requests, before and/or after they are processed by the main handler."""
def pre_process_request(self, req, handler):
"""Called after initial handler selection, and can be used to change
the selected handler or redirect request.
Always returns the request handler, even if unchanged.
"""
return handler
# for ClearSilver templates (note: the Genshi variant below redefines this method,
# so only the 4-argument version is active; this one is kept for 0.10 reference)
def post_process_request(self, req, template, content_type):
"""Do any post-processing the request might need; typically adding
values to req.hdf, or changing template or mime type.
Always returns a tuple of (template, content_type), even if
unchanged.
Note that `template`, `content_type` will be `None` if:
- called when processing an error page
- the default request handler did not return any result
(for 0.10 compatibility; only used together with ClearSilver templates)
"""
return (template, content_type)
# for Genshi templates
def post_process_request(self, req, template, data, content_type):
"""Do any post-processing the request might need; typically adding
values to the template `data` dictionary, or changing template or
mime type.
`data` may be update in place.
Always returns a tuple of (template, data, content_type), even if
unchanged.
Note that `template`, `data`, `content_type` will be `None` if:
- called when processing an error page
- the default request handler did not return any result
(Since 0.11)
"""
data['render_zone'] = render_zone
return (template, data, content_type) | AdjectorTracPlugin | /AdjectorTracPlugin-1.0b1.tar.gz/AdjectorTracPlugin-1.0b1/adjector/plugins/trac/adjector_trac.py | adjector_trac.py |
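# With the filter above in place, Genshi templates can call the helper directly
# (sketch; the zone name 'sidebar' is hypothetical):
#
#   ${render_zone('sidebar')}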
## AdminToolsDjango

Create Django projects automatically from a template.

## Introduction:

Compatible with Linux and Windows systems.
Creates a virtual environment and installs django==3.2.11, then creates the project from a template.
You can use a custom template; an absolute path is recommended for the template path.
You can set a custom parent folder for the project (project_parent_dir); an absolute path is recommended.

## Usage example:

```python
import AdminToolsDjango

# Create a manager object
django_project = AdminToolsDjango.ProjectManager(project_parent_dir='/obj_test/new_obj_parent',
                                                 project_name='new_obj',
                                                 )
print(django_project.cmd_activate_venv)

# Create the project
django_project.create_project()

# Configure the production environment; this automatically sets up an nginx
# reverse proxy, so make sure nginx is installed on the Linux system.
django_project.configure_production_environment()
```

## Installation:

Install AdminToolsDjango with the following command:

```shell
pip install AdminToolsDjango
```
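A minimal sketch of pointing the manager at a custom template; the keyword name
`template_path` below is an assumption for illustration, not a documented
parameter; check the package source for the exact name:

```python
import AdminToolsDjango

django_project = AdminToolsDjango.ProjectManager(
    project_parent_dir='/obj_test/new_obj_parent',
    project_name='new_obj',
    # template_path='/abs/path/to/template',  # assumed keyword; verify in the source
)
```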
| AdminToolsDjango | /AdminToolsDjango-1.0.6.tar.gz/AdminToolsDjango-1.0.6/README.md | README.md |
from copy import deepcopy
import datetime
import json
from time import time
class RequestCreator:
"""
A class to help build a request for Adobe Analytics API 2.0 getReport
"""
template = {
"globalFilters": [],
"metricContainer": {
"metrics": [],
"metricFilters": [],
},
"settings": {
"countRepeatInstances": True,
"limit": 20000,
"page": 0,
"nonesBehavior": "exclude-nones",
},
"statistics": {"functions": ["col-max", "col-min"]},
"rsid": "",
}
def __init__(self, request: dict = None) -> None:
"""
Instantiate the constructor.
Arguments:
request : OPTIONAL : overwrite the template with the definition provided.
"""
if request is not None:
if isinstance(request, str) and '.json' in request:
with open(request,'r') as f:
request = json.load(f)
self.__request = deepcopy(request) or deepcopy(self.template)
self.__metricCount = len(self.__request["metricContainer"]["metrics"])
self.__metricFilterCount = len(
self.__request["metricContainer"].get("metricFilters", [])
)
self.__globalFiltersCount = len(self.__request["globalFilters"])
### Preparing some common date expressions.
today = datetime.datetime.now()
today_date_iso = today.isoformat().split("T")[0]
## should give '20XX-XX-XX'
tomorrow_date_iso = (
(today + datetime.timedelta(days=1)).isoformat().split("T")[0]
)
time_start = "T00:00:00.000"
time_end = "T23:59:59.999"
startToday_iso = today_date_iso + time_start
endToday_iso = today_date_iso + time_end
startMonth_iso = f"{today_date_iso[:-2]}01{time_start}"
tomorrow_iso = tomorrow_date_iso + time_start
next_month = today.replace(day=28) + datetime.timedelta(days=4)
last_day_month = next_month - datetime.timedelta(days=next_month.day)
last_day_month_date_iso = last_day_month.isoformat().split("T")[0]
last_day_month_iso = last_day_month_date_iso + time_end
thirty_days_prior_date_iso = (
(today - datetime.timedelta(days=30)).isoformat().split("T")[0]
)
thirty_days_prior_iso = thirty_days_prior_date_iso + time_start
seven_days_prior_iso_date = (
(today - datetime.timedelta(days=7)).isoformat().split("T")[0]
)
seven_days_prior_iso = seven_days_prior_iso_date + time_start
### assigning predefined dates:
self.dates = {
"thisMonth": f"{startMonth_iso}/{last_day_month_iso}",
"untilToday": f"{startMonth_iso}/{startToday_iso}",
"todayIncluded": f"{startMonth_iso}/{endToday_iso}",
"last30daysTillToday": f"{thirty_days_prior_iso}/{startToday_iso}",
"last30daysTodayIncluded": f"{thirty_days_prior_iso}/{tomorrow_iso}",
"last7daysTillToday": f"{seven_days_prior_iso}/{startToday_iso}",
"last7daysTodayIncluded": f"{seven_days_prior_iso}/{endToday_iso}",
}
self.today = today
def __repr__(self):
return json.dumps(self.__request, indent=4)
def __str__(self):
return json.dumps(self.__request, indent=4)
def addMetric(self, metricId: str = None) -> None:
"""
Add a metric to the template.
Arguments:
metricId : REQUIRED : The metric to add
"""
if metricId is None:
raise ValueError("Require a metric ID")
columnId = self.__metricCount
addMetric = {"columnId": str(columnId), "id": metricId}
if columnId == 0:
addMetric["sort"] = "desc"
self.__request["metricContainer"]["metrics"].append(addMetric)
self.__metricCount += 1
def removeMetrics(self) -> None:
"""
Remove all metrics.
"""
self.__request["metricContainer"]["metrics"] = []
self.__metricCount = 0
def getMetrics(self) -> list:
"""
return a list of the metrics used
"""
return [metric["id"] for metric in self.__request["metricContainer"]["metrics"]]
def setSearch(self,clause:str=None)->None:
"""
Add a search clause in the Analytics request.
Arguments:
clause : REQUIRED : String to tell what search clause to add.
Examples:
"( CONTAINS 'unspecified' ) OR ( CONTAINS 'none' ) OR ( CONTAINS '' )"
"( MATCH 'undefined' )"
"( NOT CONTAINS 'undefined' )"
"( BEGINS-WITH 'undefined' )"
"( BEGINS-WITH 'undefined' ) AND ( BEGINS-WITH 'none' )"
"""
if clause is None:
raise ValueError("Require a clause to add to the request")
self.__request["search"] = {
"clause" : clause
}
def removeSearch(self)->None:
"""
Remove the search associated with the request.
"""
del self.__request["search"]
def addMetricFilter(
self, metricId: str = None, filterId: str = None, metricIndex: int = None
) -> None:
"""
Add a filter to a metric.
Arguments:
metricId : REQUIRED : metric where the filter is added
filterId : REQUIRED : The filter to add.
when breakdown, use the following format for the value "dimension:::itemId"
metricIndex : OPTIONAL : If used, set the filter to the metric located on that index.
"""
if metricId is None:
raise ValueError("Require a metric ID")
if filterId is None:
raise ValueError("Require a filter ID")
filterIdCount = self.__metricFilterCount
if filterId.startswith("s") and "@AdobeOrg" in filterId:
filterType = "segment"
filter = {
"id": str(filterIdCount),
"type": filterType,
"segmentId": filterId,
}
elif filterId.startswith("20") and "/20" in filterId:
filterType = "dateRange"
filter = {
"id": str(filterIdCount),
"type": filterType,
"dateRange": filterId,
}
elif ":::" in filterId:
filterType = "breakdown"
dimension, itemId = filterId.split(":::")
filter = {
"id": str(filterIdCount),
"type": filterType,
"dimension": dimension,
"itemId": itemId,
}
else: ### case when it is predefined segments like "All_Visits"
filterType = "segment"
filter = {
"id": str(filterIdCount),
"type": filterType,
"segmentId": filterId,
}
if filterIdCount == 0:
self.__request["metricContainer"]["metricFilters"] = [filter]
else:
self.__request["metricContainer"]["metricFilters"].append(filter)
### adding filter to the metric
if metricIndex is None:
for metric in self.__request["metricContainer"]["metrics"]:
if metric["id"] == metricId:
if "filters" in metric.keys():
metric["filters"].append(str(filterIdCount))
else:
metric["filters"] = [str(filterIdCount)]
else:
metric = self.__request["metricContainer"]["metrics"][metricIndex]
if "filters" in metric.keys():
metric["filters"].append(str(filterIdCount))
else:
metric["filters"] = [str(filterIdCount)]
### incrementing the filter counter
self.__metricFilterCount += 1
def removeMetricFilter(self, filterId: str = None) -> None:
"""
remove a filter from a metric
Arguments:
filterId : REQUIRED : The filter to add.
when breakdown, use the following format for the value "dimension:::itemId"
"""
found = False ## flag
if filterId is None:
raise ValueError("Require a filter ID")
if ":::" in filterId:
filterId = filterId.split(":::")[1]
list_index = []
for metricFilter in self.__request["metricContainer"]["metricFilters"]:
if filterId in str(metricFilter):
list_index.append(metricFilter["id"])
found = True
## decrementing the filter counter
if found:
for metricFilterId in reversed(list_index):
del self.__request["metricContainer"]["metricFilters"][
int(metricFilterId)
]
for metric in self.__request["metricContainer"]["metrics"]:
if metricFilterId in metric.get("filters", []):
metric["filters"].remove(metricFilterId)
self.__metricFilterCount -= 1
def setLimit(self, limit: int = 100) -> None:
"""
Specify the number of elements to retrieve. Default is 100.
Arguments:
limit : OPTIONAL : number of elements to return
"""
self.__request["settings"]["limit"] = limit
def setRepeatInstance(self, repeat: bool = True) -> None:
"""
Specify if repeated instances should be counted.
Arguments:
repeat : OPTIONAL : True or False (True by default)
"""
self.__request["settings"]["countRepeatInstances"] = repeat
def setNoneBehavior(self, returnNones: bool = True) -> None:
"""
Set the behavior of the None values in that request.
Arguments:
returnNones : OPTIONAL : True or False (True by default)
"""
if returnNones:
self.__request["settings"]["nonesBehavior"] = "return-nones"
else:
self.__request["settings"]["nonesBehavior"] = "exclude-nones"
def setDimension(self, dimension: str = None) -> None:
"""
Set the dimension to be used for reporting.
Arguments:
dimension : REQUIRED : the dimension to build your report on
"""
if dimension is None:
raise ValueError("A dimension must be passed")
self.__request["dimension"] = dimension
def setRSID(self, rsid: str = None) -> None:
"""
Set the reportSuite ID to be used for the reporting.
Arguments:
rsid : REQUIRED : The reportSuite ID to be passed.
"""
if rsid is None:
raise ValueError("A reportSuite ID must be passed")
self.__request["rsid"] = rsid
def addGlobalFilter(self, filterId: str = None) -> None:
"""
Add a global filter to the report.
NOTE : You need to have a dateRange filter at least in the global report.
Arguments:
filterId : REQUIRED : The filter to add to the global filter.
example :
"s2120430124uf03102jd8021" -> segment
"2020-01-01T00:00:00.000/2020-02-01T00:00:00.000" -> dateRange
"""
if filterId.startswith("s") and "@AdobeOrg" in filterId:
filterType = "segment"
filter = {
"type": filterType,
"segmentId": filterId,
}
elif filterId.startswith("20") and "/20" in filterId:
filterType = "dateRange"
filter = {
"type": filterType,
"dateRange": filterId,
}
elif ":::" in filterId:
filterType = "breakdown"
dimension, itemId = filterId.split(":::")
filter = {
"type": filterType,
"dimension": dimension,
"itemId": itemId,
}
else: ### case when it is predefined segments like "All_Visits"
filterType = "segment"
filter = {
"type": filterType,
"segmentId": filterId,
}
### incrementing the count for globalFilter
self.__globalFiltersCount += 1
### adding to the globalFilter list
self.__request["globalFilters"].append(filter)
def updateDateRange(
self,
dateRange: str = None,
shiftingDays: int = None,
shiftingDaysEnd: int = None,
shiftingDaysStart: int = None,
) -> None:
"""
Update the dateRange filter on the globalFilter list
One of the 3 elements specified below is required.
Arguments:
dateRange : OPTIONAL : string representing the new dateRange string, such as: 2020-01-01T00:00:00.000/2020-02-01T00:00:00.000
shiftingDays : OPTIONAL : An integer, if you want to add or remove days from the current dateRange provided. Apply to end and beginning of dateRange.
So 2020-01-01T00:00:00.000/2020-02-01T00:00:00.000 with +2 will give 2020-01-03T00:00:00.000/2020-02-03T00:00:00.000
shiftingDaysEnd : OPTIONAL : An integer, if you want to add or remove days from the last part of the current dateRange. Apply only to end of the dateRange.
So 2020-01-01T00:00:00.000/2020-02-01T00:00:00.000 with +2 will give 2020-01-01T00:00:00.000/2020-02-03T00:00:00.000
shiftingDaysStart : OPTIONAL : An integer, if you want to add or remove days from the last first part of the current dateRange. Apply only to beginning of the dateRange.
So 2020-01-01T00:00:00.000/2020-02-01T00:00:00.000 with +2 will give 2020-01-03T00:00:00.000/2020-02-01T00:00:00.000
"""
pos = -1
for index, filter in enumerate(self.__request["globalFilters"]):
if filter["type"] == "dateRange":
pos = index
curDateRange = filter["dateRange"]
start, end = curDateRange.split("/")
start = datetime.datetime.fromisoformat(start)
end = datetime.datetime.fromisoformat(end)
if dateRange is not None and type(dateRange) == str:
for index, filter in enumerate(self.__request["globalFilters"]):
if filter["type"] == "dateRange":
pos = index
curDateRange = filter["dateRange"]
newDef = {
"type": "dateRange",
"dateRange": dateRange,
}
if shiftingDays is not None and type(shiftingDays) == int:
newStart = (start + datetime.timedelta(shiftingDays)).isoformat(
timespec="milliseconds"
)
newEnd = (end + datetime.timedelta(shiftingDays)).isoformat(
timespec="milliseconds"
)
newDef = {
"type": "dateRange",
"dateRange": f"{newStart}/{newEnd}",
}
elif shiftingDaysEnd is not None and type(shiftingDaysEnd) == int:
newEnd = (end + datetime.timedelta(shiftingDaysEnd)).isoformat(
timespec="milliseconds"
)
newDef = {
"type": "dateRange",
"dateRange": f"{start}/{newEnd}",
}
elif shiftingDaysStart is not None and type(shiftingDaysStart) == int:
newStart = (start + datetime.timedelta(shiftingDaysStart)).isoformat(
timespec="milliseconds"
)
newDef = {
"type": "dateRange",
"dateRange": f"{newStart}/{end}",
}
if pos > -1:
self.__request["globalFilters"][pos] = newDef
else: ## in case there is no dateRange in the globalFilters yet
self.__request["globalFilters"].append(newDef)
def removeGlobalFilter(self, index: int = None, filterId: str = None) -> None:
"""
Remove a specific filter from the globalFilter list.
You can use either the index of the list or the specific Id of the filter used.
Arguments:
index : OPTIONAL : index of the filter in the globalFilters list (one of index or filterId is required)
filterId : OPTIONAL : the id of the filter to be removed (ex: segmentId, dateRange) (one of index or filterId is required)
"""
pos = -1
if index is not None:
del self.__request["globalFilters"][index]
elif filterId is not None:
for index, filter in enumerate(self.__request["globalFilters"]):
if filterId in str(filter):
pos = index
if pos > -1:
del self.__request["globalFilters"][pos]
### decrementing the count for globalFilter
self.__globalFiltersCount -= 1
def to_dict(self) -> None:
"""
Return the request definition
"""
return deepcopy(self.__request)
def save(self, fileName: str = None) -> None:
"""
save the request definition in a JSON file.
Argument:
filename : OPTIONAL : Name of the file. (default aa_request_<timestamp>.json)
"""
fileName = fileName or f"aa_request_{int(time())}.json"
with open(fileName, "w") as f:
f.write(json.dumps(self.to_dict(), indent=4)) | Adobe-Lib-Manual | /Adobe_Lib_Manual-4.2.tar.gz/Adobe_Lib_Manual-4.2/aanalytics2/requestCreator.py | requestCreator.py |
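# Example usage (sketch, not part of the original module). The rsid, dimension,
# metric and dateRange values below are placeholders.
#
#   myRequest = RequestCreator()
#   myRequest.setRSID("myrsid")
#   myRequest.setDimension("variables/page")
#   myRequest.addMetric("metrics/visits")
#   myRequest.addGlobalFilter("2020-01-01T00:00:00.000/2020-02-01T00:00:00.000")
#   definition = myRequest.to_dict()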
import pandas as pd
import json
from typing import Union, IO
import time
from .requestCreator import RequestCreator
from copy import deepcopy
class Workspace:
"""
A class to return data from the getReport method.
"""
startDate = None
endDate = None
settings = None
def __init__(
self,
responseData: dict,
dataRequest: dict = None,
columns: dict = None,
summaryData: dict = None,
analyticsConnector: object = None,
reportType: str = "normal",
metrics: Union[dict, list] = None, ## for normal type, static report
metricFilters: dict = None,
resolveColumns: bool = True,
) -> None:
"""
Setup the different values from the response of the getReport
Argument:
responseData : REQUIRED : data returned & predigested by the getReport method.
dataRequest : REQUIRED : dataRequest containing the request
columns : REQUIRED : the columns element of the response.
summaryData : REQUIRED : summary data containing the totals calculated by Adobe Analytics
analyticsConnector : REQUIRED : analytics object connector.
reportType : OPTIONAL : the type of report retrieved (normal, static, multi)
metrics : OPTIONAL : dictionary of the column Ids for a normal report, or list of column names for a static report
metricFilters : OPTIONAL : mapping of the filter ids to filter names
resolveColumns : OPTIONAL : If True (default), resolve column IDs into readable names in the output
"""
for filter in dataRequest["globalFilters"]:
if filter["type"] == "dateRange":
self.startDate = filter["dateRange"].split("/")[0]
self.endDate = filter["dateRange"].split("/")[1]
self.dataRequest = RequestCreator(dataRequest)
self.requestSize = dataRequest["settings"]["limit"]
self.settings = dataRequest["settings"]
self.pageRequested = dataRequest["settings"]["page"] + 1
self.summaryData = summaryData
self.reportType = reportType
self.analyticsObject = analyticsConnector
## global filters resolution
filters = []
for filter in dataRequest["globalFilters"]:
if filter["type"] == "segment":
segmentId = filter.get("segmentId",None)
if segmentId is not None:
seg = self.analyticsObject.getSegment(filter["segmentId"])
filter["segmentName"] = seg["name"]
else:
context = filter.get('segmentDefinition',{}).get('container',{}).get('context')
description = filter.get('segmentDefinition',{}).get('container',{}).get('pred',{}).get('description')
listName = ','.join(filter.get('segmentDefinition',{}).get('container',{}).get('pred',{}).get('list',[]))
function = filter.get('segmentDefinition',{}).get('container',{}).get('pred',{}).get('func')
filter["segmentId"] = f"Dynamic: {context} {description} {function} {listName}"
filter["segmentName"] = f"{context} {description} {listName}"
filters.append(filter)
self.globalFilters = filters
self.metricFilters = metricFilters
if reportType == "normal" or reportType == "static":
df_init = pd.DataFrame(responseData).T
df_init = df_init.reset_index()
elif reportType == "multi":
df_init = responseData
if reportType == "normal":
columns_data = ["itemId"]
elif reportType == "static":
columns_data = ["SegmentName"]
### adding dimensions & metrics in columns names when reportType is "normal"
if "dimension" in dataRequest.keys() and reportType == "normal":
columns_data.append(dataRequest["dimension"])
### adding metrics in columns names
columnIds = columns["columnIds"]
# To get readable names of template metrics and Success Events, we need to get the full list of metrics for the Report Suite first.
# But we won't do this if there are no such metrics in the report.
if (resolveColumns is True) & (
len([metric for metric in metrics.values() if metric.startswith("metrics/")]) > 0):
rsMetricsList = self.analyticsObject.getMetrics(rsid=dataRequest["rsid"])
for col in columnIds:
metrics: dict = metrics ## case when dict is used
metricListName: list = metrics[col].split(":::")
if resolveColumns:
metricResolvedName = []
for metric in metricListName:
if metric.startswith("cm"):
cm = self.analyticsObject.getCalculatedMetric(metric)
metricName = cm.get("name",metric)
metricResolvedName.append(metricName)
elif metric.startswith("s"):
seg = self.analyticsObject.getSegment(metric)
segName = seg.get("name",metric)
metricResolvedName.append(segName)
elif metric.startswith("metrics/"):
metricName = rsMetricsList[rsMetricsList["id"] == metric]["name"].iloc[0]
metricResolvedName.append(metricName)
else:
metricResolvedName.append(metric)
colName = ":::".join(metricResolvedName)
columns_data.append(colName)
else:
columns_data.append(metrics[col])
elif reportType == "static":
metrics: list = metrics ## case when a list is used
columns_data.append("SegmentId")
columns_data += metrics
if df_init.empty == False and (
reportType == "static" or reportType == "normal"
):
df_init.columns = columns_data
self.columns = list(df_init.columns)
elif reportType == "multi":
self.columns = list(df_init.columns)
else:
self.columns = list(df_init.columns)
self.row_numbers = len(df_init)
self.dataframe = df_init
def __str__(self):
return json.dumps(
{
"startDate": self.startDate,
"endDate": self.endDate,
"globalFilters": self.globalFilters,
"totalRows": self.row_numbers,
"columns": self.columns,
},
indent=4,
)
def __repr__(self):
return json.dumps(
{
"startDate": self.startDate,
"endDate": self.endDate,
"globalFilters": self.globalFilters,
"totalRows": self.row_numbers,
"columns": self.columns,
},
indent=4,
)
def to_csv(
self,
filename: str = None,
delimiter: str = ",",
index: bool = False,
) -> IO:
"""
Save the result in a CSV
Arguments:
filename : OPTIONAL : name of the file
delimiter : OPTIONAL : delimiter of the CSV
index : OPTIONAL : should the index be included in the CSV (default False)
"""
if filename is None:
filename = f"cjapy_{int(time.time())}.csv"
self.dataframe.to_csv(filename, sep=delimiter, index=index)
def to_json(self, filename: str = None, orient: str = "index") -> IO:
"""
Save the result to JSON
Arguments:
filename : OPTIONAL : name of the file
orient : OPTIONAL : orientation of the JSON
"""
if filename is None:
filename = f"cjapy_{int(time.time())}.json"
self.dataframe.to_json(filename, orient=orient)
def breakdown(
self,
index: Union[int, str] = None,
dimension: str = None,
n_results: Union[int, str] = 10,
) -> object:
"""
Breakdown a specific index or value of the dataframe, by another dimension.
NOTE: breakdowns are possible only from normal reportType.
Return a workspace instance.
Arguments:
index : REQUIRED : Value to use as filter for the breakdown or index of the dataframe to use for the breakdown.
dimension : REQUIRED : dimension to report.
n_results : OPTIONAL : number of results you want to have on your breakdown. Default 10, can use "inf"
"""
if index is None or dimension is None:
raise ValueError(
"Require a value to use as breakdown and dimension to request"
)
breakdown_dimension = list(self.dataframe.columns)[1]
if type(index) == str:
row: pd.Series = self.dataframe[self.dataframe.iloc[:, 1] == index]
itemValue: str = row["itemId"].values[0]
elif type(index) == int:
itemValue = self.dataframe.loc[index, "itemId"]
breakdown = f"{breakdown_dimension}:::{itemValue}"
new_request = RequestCreator(self.dataRequest.to_dict())
new_request.setDimension(dimension)
metrics = new_request.getMetrics()
for metric in metrics:
new_request.addMetricFilter(metricId=metric, filterId=breakdown)
if n_results != "inf" and n_results < 20000:
new_request.setLimit(n_results)
report = self.analyticsObject.getReport2(
new_request.to_dict(), n_results=n_results
)
return report | Adobe-Lib-Manual | /Adobe_Lib_Manual-4.2.tar.gz/Adobe_Lib_Manual-4.2/aanalytics2/workspace.py | workspace.py |
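# Example usage (sketch): a Workspace instance is normally returned by
# Analytics.getReport2 rather than instantiated directly. "mycompany" is a
# hypothetical Analytics instance and the dimension below is a placeholder.
#
#   ws = mycompany.getReport2(definition)
#   ws.to_csv("report.csv")
#   sub = ws.breakdown(0, "variables/evar1", n_results=10)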
import json
import os
from pathlib import Path
from typing import Optional
import time
# Non standard libraries
from .config import config_object, header
def find_path(path: str) -> Optional[Path]:
"""Checks if the file denoted by the specified `path` exists and returns the Path object
for the file.
If the file under the `path` does not exist and the path denotes an absolute path, tries
to find the file by converting the absolute path to a relative path.
If the file does not exist with either the absolute and the relative path, returns `None`.
"""
if Path(path).exists():
return Path(path)
elif path.startswith('/') and Path('.' + path).exists():
return Path('.' + path)
elif path.startswith('\\') and Path('.' + path).exists():
return Path('.' + path)
else:
return None
def createConfigFile(destination: str = 'config_analytics_template.json',auth_type: str = "oauthV2",verbose: bool = False) -> None:
"""Creates a `config_admin.json` file with the pre-defined configuration format
to store the access data in under the specified `destination`.
Arguments:
destination : OPTIONAL : the name of the file + path if you want
auth_type : OPTIONAL : The type of authentication to use in your config file. Possible values: "jwt" or "oauthV2" (default)
"""
json_data = {
'org_id': '<orgID>',
'client_id': "<APIkey>",
'secret': "<YourSecret>",
}
if auth_type == 'oauthV2':
json_data['scopes'] = "<scopes>"
elif auth_type == 'jwt':
json_data["tech_id"] = "<something>@techacct.adobe.com"
json_data["pathToKey"] = "<path/to/your/privatekey.key>"
if '.json' not in destination:
destination += '.json'
with open(destination, 'w') as cf:
cf.write(json.dumps(json_data, indent=4))
if verbose:
print(f" file created at this location : {os.getcwd()}{os.sep}{destination}.json")
def importConfigFile(path: str = None,auth_type:str=None) -> None:
"""Reads the file denoted by the supplied `path` and retrieves the configuration information
from it.
Arguments:
path: REQUIRED : path to the configuration file. Can be either a fully-qualified or relative.
auth_type : OPTIONAL : The type of Auth to be used by default. Detected if none is passed, OauthV2 takes precedence.
Possible values: "jwt" or "oauthV2"
Example of path value.
"config.json"
"./config.json"
"/my-folder/config.json"
"""
config_file_path: Optional[Path] = find_path(path)
if config_file_path is None:
raise FileNotFoundError(
f"Unable to find the configuration file under path `{path}`."
)
with open(config_file_path, 'r') as file:
provided_config = json.load(file)
provided_keys = provided_config.keys()
if 'api_key' in provided_keys:
## old naming for client_id
client_id = provided_config['api_key']
elif 'client_id' in provided_keys:
client_id = provided_config['client_id']
else:
raise RuntimeError(f"Either an `api_key` or a `client_id` should be provided.")
if auth_type is None:
if 'scopes' in provided_keys:
auth_type = 'oauthV2'
elif 'tech_id' in provided_keys and "pathToKey" in provided_keys:
auth_type = 'jwt'
args = {
"org_id" : provided_config['org_id'],
"secret" : provided_config['secret'],
"client_id" : client_id
}
if auth_type == 'oauthV2':
args["scopes"] = provided_config["scopes"].replace(' ','')
if auth_type == 'jwt':
args["tech_id"] = provided_config["tech_id"]
args["path_to_key"] = provided_config["pathToKey"]
configure(**args)
def configure(org_id: str = None,
tech_id: str = None,
secret: str = None,
client_id: str = None,
path_to_key: str = None,
private_key: str = None,
oauth: bool = False,
token: str = None,
scopes: str = None
):
"""Performs programmatic configuration of the API using provided values.
Arguments:
org_id : REQUIRED : Organization ID
tech_id : REQUIRED : Technical Account ID
secret : REQUIRED : secret generated for your connection
client_id : REQUIRED : The client_id (old api_key) provided by the JWT connection.
path_to_key : REQUIRED : If you have a file containing your private key value.
private_key : REQUIRED : If you do not use a file but pass a variable directly.
oauth : OPTIONAL : If you wish to pass the token generated by oauth
token : OPTIONAL : If oauth set to True, you need to pass the token
scopes : OPTIONAL : If you use Oauth, you need to pass the scopes
"""
if not org_id:
raise ValueError("`org_id` must be specified in the configuration.")
if not client_id:
raise ValueError("`client_id` must be specified in the configuration.")
if not tech_id and oauth == False and not scopes:
raise ValueError("`tech_id` must be specified in the configuration.")
if not secret and oauth == False:
raise ValueError("`secret` must be specified in the configuration.")
if (not path_to_key and not private_key and oauth == False) and not scopes:
raise ValueError("scopes must be specified if Oauth setup.\n `pathToKey` or `private_key` must be specified in the configuration if JWT setup.")
config_object["org_id"] = org_id
config_object["client_id"] = client_id
header["x-api-key"] = client_id
config_object["tech_id"] = tech_id
config_object["secret"] = secret
config_object["pathToKey"] = path_to_key
config_object["private_key"] = private_key
config_object["scopes"] = scopes
# ensure the reset of the state by overwriting possible values from previous import.
config_object["date_limit"] = 0
config_object["token"] = ""
if oauth:
date_limit = int(time.time()) + (22 * 60 * 60)
config_object["date_limit"] = date_limit
config_object["token"] = token
header["Authorization"] = f"Bearer {token}"
def get_private_key_from_config(config: dict) -> str:
"""
Returns the private key directly or read a file to return the private key.
"""
private_key = config.get('private_key')
if private_key is not None:
return private_key
private_key_path = find_path(config['pathToKey'])
if private_key_path is None:
raise FileNotFoundError(f'Unable to find the private key under path `{config["pathToKey"]}`.')
with open(Path(private_key_path), 'r') as f:
private_key = f.read()
return private_key
def generateLoggingObject(
level:str="WARNING",
stream:bool=True,
file:bool=False,
filename:str="aanalytics2.log",
format:str="%(asctime)s::%(name)s::%(funcName)s::%(levelname)s::%(message)s::%(lineno)d"
)->dict:
"""
Generates a dictionary for the logging object with basic configuration.
You can find the information for the different possible values on the logging documentation.
https://docs.python.org/3/library/logging.html
Arguments:
level : Level of the logger to display information (NOTSET, DEBUG, INFO, WARNING, ERROR, CRITICAL)
stream : If the logger should display print statements
file : If the logger should write the messages to a file
filename : name of the file where log are written
format : format of the log to be written.
"""
myObject = {
"level" : level,
"stream" : stream,
"file" : file,
"format" : format,
"filename":filename
}
return myObject | Adobe-Lib-Manual | /Adobe_Lib_Manual-4.2.tar.gz/Adobe_Lib_Manual-4.2/aanalytics2/configs.py | configs.py |
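# Typical setup (sketch): generate a template, fill in the credentials, then
# import it. The filename is a placeholder.
#
#   import aanalytics2 as api2
#   api2.createConfigFile(destination="myconfig.json", auth_type="oauthV2")
#   # ...edit myconfig.json with your credentials, then:
#   api2.importConfigFile("myconfig.json")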
import json, os, re
import time, datetime
from concurrent import futures
from copy import deepcopy
from pathlib import Path
from typing import IO, Union, List
from collections import defaultdict
from itertools import tee
import logging
# Non standard libraries
import pandas as pd
from urllib import parse
from aanalytics2 import config, connector, token_provider
from .projects import *
from .requestCreator import RequestCreator
from .workspace import Workspace
JsonOrDataFrameType = Union[pd.DataFrame, dict]
JsonListOrDataFrameType = Union[pd.DataFrame, List[dict]]
def retrieveToken(verbose: bool = False, save: bool = False, **kwargs)->str:
"""
LEGACY: retrieve a token directly following the importConfigFile or configure method.
"""
token_with_expiry = token_provider.get_jwt_token_and_expiry_for_config(config.config_object,**kwargs)
token = token_with_expiry['token']
config.config_object['token'] = token
config.config_object['date_limit'] = time.time() + token_with_expiry['expiry'] / 1000 - 500
config.header.update({'Authorization': f'Bearer {token}'})
if verbose:
print(f"token valid till : {time.ctime(time.time() + token_with_expiry['expiry'] / 1000)}")
return token
class Login:
"""
Class to connect to the login company.
"""
loggingEnabled = False
logger = None
def __init__(self, config: dict = config.config_object, header: dict = config.header, retry: int = 0,loggingObject:dict=None) -> None:
"""
Instantiate the Login class.
Arguments:
config : REQUIRED : dictionary with your configuration information.
header : REQUIRED : dictionary of your header.
retry : OPTIONAL : if you want to retry, the number of time to retry
loggingObject : OPTIONAL : If you want to set logging capability for your actions.
"""
if loggingObject is not None and sorted(["level","stream","format","filename","file"]) == sorted(list(loggingObject.keys())):
self.loggingEnabled = True
self.logger = logging.getLogger(f"{__name__}.login")
self.logger.setLevel(loggingObject["level"])
if type(loggingObject["format"]) == str:
formatter = logging.Formatter(loggingObject["format"])
elif type(loggingObject["format"]) == logging.Formatter:
formatter = loggingObject["format"]
if loggingObject["file"]:
fileHandler = logging.FileHandler(loggingObject["filename"])
fileHandler.setFormatter(formatter)
self.logger.addHandler(fileHandler)
if loggingObject["stream"]:
streamHandler = logging.StreamHandler()
streamHandler.setFormatter(formatter)
self.logger.addHandler(streamHandler)
self.connector = connector.AdobeRequest(
config_object=config, header=header, retry=retry,loggingEnabled=self.loggingEnabled,logger=self.logger)
self.header = self.connector.header
self.COMPANY_IDS = {}
self.retry = retry
def getCompanyId(self,verbose:bool=False) -> dict:
"""
Retrieve the company ids for use in later calls.
"""
if self.loggingEnabled:
self.logger.debug("getCompanyId start")
res = self.connector.getData(
"https://analytics.adobe.io/discovery/me", headers=self.header)
json_res = res
if self.loggingEnabled:
self.logger.debug(f"getCompanyId reponse: {json_res}")
try:
companies = json_res['imsOrgs'][0]['companies']
self.COMPANY_IDS = json_res['imsOrgs'][0]['companies']
return companies
except:
if verbose:
print("exception when trying to get companies with parameter 'all'")
print(json_res)
if self.loggingEnabled:
self.logger.error(f"Error trying to get companyId: {json_res}")
return None
def createAnalyticsConnection(self, companyId: str = None,loggingObject:dict=None) -> object:
"""
Returns an instance of the Analytics class so you can query the different elements from that instance.
Arguments:
companyId: REQUIRED : The globalCompanyId that you want to use in your connection
loggingObject : OPTIONAL : If you want to set logging capability for your actions.
the retry parameter set in the previous class instantiation will be used here.
"""
analytics = Analytics(company_id=companyId,
config_object=self.connector.config, header=self.header, retry=self.retry,loggingObject=loggingObject)
return analytics
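# Typical login flow (sketch): retrieve the companies attached to the IMS org,
# then open an Analytics connection for one of them. The 'globalCompanyId' key
# is the expected field of the discovery response.
#
#   login = Login()
#   companies = login.getCompanyId()
#   mycompany = login.createAnalyticsConnection(companies[0]['globalCompanyId'])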
class Analytics:
"""
Class that instantiate a connection to a single login company.
"""
# Endpoints
header = {"Accept": "application/json",
"Content-Type": "application/json",
"Authorization": "Bearer ",
"X-Api-Key": ""
}
_endpoint = 'https://analytics.adobe.io/api'
_getRS = '/collections/suites'
_getDimensions = '/dimensions'
_getMetrics = '/metrics'
_getSegments = '/segments'
_getCalcMetrics = '/calculatedmetrics'
_getDateRanges = '/dateranges'
_getReport = '/reports'
loggingEnabled = False
logger = None
def __init__(self, company_id: str = None, config_object: dict = config.config_object, header: dict = config.header,
retry: int = 0,loggingObject:dict=None):
"""
Instantiate the Analytics class.
The Analytics class will be automatically connected to the API 2.0.
You have possibility to review the connection detail by looking into the connector instance.
"header", "company_id" and "endpoint_company" are attribute accessible for debugging.
Arguments:
company_id : REQUIRED : company ID retrieved by the getCompanyId
retry : OPTIONAL : Number of times you want to retry failed calls
loggingObject : OPTIONAL : logging object to log actions during runtime.
config_object : OPTIONAL : config object to be used for setting token (do not update if you do not know)
header : OPTIONAL : template header used for all requests (do not update if you do not know!)
"""
if company_id is None:
raise AttributeError(
'Expected "company_id" to be referenced.\nPlease ensure you pass the globalCompanyId when instantiating this class.')
if loggingObject is not None and sorted(["level","stream","format","filename","file"]) == sorted(list(loggingObject.keys())):
self.loggingEnabled = True
self.logger = logging.getLogger(f"{__name__}.analytics")
self.logger.setLevel(loggingObject["level"])
if type(loggingObject["format"]) == str:
formatter = logging.Formatter(loggingObject["format"])
elif type(loggingObject["format"]) == logging.Formatter:
formatter = loggingObject["format"]
if loggingObject["file"]:
fileHandler = logging.FileHandler(loggingObject["filename"])
fileHandler.setFormatter(formatter)
self.logger.addHandler(fileHandler)
if loggingObject["stream"]:
streamHandler = logging.StreamHandler()
streamHandler.setFormatter(formatter)
self.logger.addHandler(streamHandler)
self.connector = connector.AdobeRequest(
config_object=config_object, header=header, retry=retry,loggingEnabled=self.loggingEnabled,logger=self.logger)
self.header = self.connector.header
self.connector.header['x-proxy-global-company-id'] = company_id
self.header['x-proxy-global-company-id'] = company_id
self.endpoint_company = f"{self._endpoint}/{company_id}"
self.company_id = company_id
self.listProjectIds = []
self.projectsDetails = {}
self.segments = []
self.calculatedMetrics = []
try:
import importlib.resources as pkg_resources
pathLOGS = pkg_resources.path(
"aanalytics2", "eventType_usageLogs.pickle")
except ImportError:
try:
# Try backported to PY<37 `importlib_resources`.
import pkg_resources
pathLOGS = pkg_resources.resource_filename(
"aanalytics2", "eventType_usageLogs.pickle")
except:
print('Empty LOGS_EVENT_TYPE attribute')
try:
with pathLOGS as f:
self.LOGS_EVENT_TYPE = pd.read_pickle(f)
except:
self.LOGS_EVENT_TYPE = "no data"
def __str__(self)->str:
obj = {
"endpoint" : self.endpoint_company,
"companyId" : self.company_id,
"header" : self.header,
"token" : self.connector.config['token']
}
return json.dumps(obj,indent=4)
def __repr__(self)->str:
obj = {
"endpoint" : self.endpoint_company,
"companyId" : self.company_id,
"header" : self.header,
"token" : self.connector.config['token']
}
return json.dumps(obj,indent=4)
def refreshToken(self, token: str = None):
if token is None:
raise AttributeError(
'Expected "token" to be referenced.\nPlease ensure you pass the token.')
self.header['Authorization'] = "Bearer " + token
def decodeAArequests(self,file:IO=None,urls:Union[list,str]=None,save:bool=False,**kwargs)->pd.DataFrame:
"""
Takes any of the parameter to load adobe url and decompose the requests into a dataframe, that you can save if you want.
Arguments:
file : OPTIONAL : file referencing the different requests saved (excel, or txt)
urls : OPTIONAL : list of requests (or a single request) that you want to decode.
save : OPTIONAL : parameter to save your decode list into a csv file.
Returns a dataframe.
possible kwargs:
encoding : the type of encoding to decode the file
"""
if self.loggingEnabled:
self.logger.debug(f"Starting decodeAArequests")
if file is None and urls is None:
raise ValueError("Require at least file or urls to contains data")
if file is not None:
if '.txt' in file:
with open(file,'r',encoding=kwargs.get('encoding','utf-8')) as f:
urls = f.readlines() ## passing decoding to urls
elif '.xlsx' in file:
temp_df = pd.read_excel(file,header=None)
urls = list(temp_df[0]) ## passing decoding to urls
if urls is not None:
if type(urls) == str:
data = parse.parse_qsl(urls)
df = pd.DataFrame(data)
df.columns = ['index','request']
df.set_index('index',inplace=True)
if save:
df.to_csv(f'request_{int(time.time())}.csv')
return df
elif type(urls) == list: ## decoding list of strings
tmp_list = [parse.parse_qsl(data) for data in urls]
tmp_dfs = [pd.DataFrame(data) for data in tmp_list]
tmp_dfs2 = []
for df, index in zip(tmp_dfs,range(len(tmp_dfs))):
df.columns = ['index',f"request {index+1}"]
## cleanup timestamp from request url
string = df.iloc[0,0]
df.iloc[0,0] = re.search('http.*://(.+?)/s[0-9]+.*',string).group(1) # tracking server
df.set_index('index',inplace=True)
new_df = df
tmp_dfs2.append(new_df)
df_full = pd.concat(tmp_dfs2,axis=1)
if save:
df_full.to_csv(f'requests_{int(time.time())}.csv')
return df_full
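# Example (sketch): decoding a tracking request URL into a DataFrame. The URL
# below is a placeholder.
#
#   df = mycompany.decodeAArequests(
#       urls="https://example.sc.omtrdc.net/b/ss/myrsid/1/JS-2.22.0/s123?pageName=home&v1=test"
#   )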
def getReportSuites(self, txt: str = None, rsid_list: str = None, limit: int = 100, extended_info: bool = False,
save: bool = False) -> list:
"""
Get the reportSuite IDs data. Returns a dataframe of reportSuite name and report suite id.
Arguments:
txt : OPTIONAL : returns the reportSuites that match a specific text field
rsid_list : OPTIONAL : returns the reportSuites that match the list of rsids set
limit : OPTIONAL : How many reportSuites to retrieve per server call
extended_info : OPTIONAL : if set to True, returns additional columns (bool : default False)
save : OPTIONAL : if set to True, it will save the list in a file. (Default False)
"""
if self.loggingEnabled:
self.logger.debug(f"Starting getReportSuite")
nb_error, nb_empty = 0, 0 # use for multi-thread loop
params = {}
params.update({'limit': str(limit)})
params.update({'page': '0'})
if txt is not None:
params.update({'rsidContains': str(txt)})
if rsid_list is not None:
params.update({'rsids': str(rsid_list)})
params.update(
{"expansion": "name,parentRsid,currency,calendarType,timezoneZoneinfo"})
if self.loggingEnabled:
self.logger.debug(f"parameters : {params}")
rsids = self.connector.getData(self.endpoint_company + self._getRS,
params=params, headers=self.header)
content = rsids['content']
if not extended_info:
list_content = [{'name': item['name'], 'rsid': item['rsid']}
for item in content]
df_rsids = pd.DataFrame(list_content)
else:
df_rsids = pd.DataFrame(content)
total_page = rsids['totalPages']
last_page = rsids['lastPage']
if not last_page: # if last_page =False
callsToMake = total_page
list_params = [{**params, 'page': page}
for page in range(1, callsToMake)]
list_urls = [self.endpoint_company +
self._getRS for x in range(1, callsToMake)]
listheaders = [self.header for x in range(1, callsToMake)]
workers = min(10, total_page)
with futures.ThreadPoolExecutor(workers) as executor:
res = executor.map(lambda x, y, z: self.connector.getData(
x, y, headers=z), list_urls, list_params, listheaders)
res = list(res)
list_data = [val for sublist in [r['content']
for r in res if 'content' in r.keys()] for val in sublist]
nb_error = sum(1 for elem in res if 'error_code' in elem.keys())
nb_empty = sum(1 for elem in res if 'content' in elem.keys() and len(
elem['content']) == 0)
if not extended_info:
list_append = [{'name': item['name'], 'rsid': item['rsid']}
for item in list_data]
df_append = pd.DataFrame(list_append)
else:
df_append = pd.DataFrame(list_data)
df_rsids = pd.concat([df_rsids, df_append], ignore_index=True)
if save:
if self.loggingEnabled:
self.logger.debug(f"saving rsids : {params}")
df_rsids.to_csv('RSIDS.csv', sep='\t')
if nb_error > 0 or nb_empty > 0:
message = f'WARNING : Retrieved data are partial.\n{nb_error}/{len(list_urls) + 1} requests returned an error.\n{nb_empty}/{len(list_urls)} requests returned an empty response. \nTry to use filter to retrieve reportSuite or increase limit per request'
print(message)
if self.loggingEnabled:
self.logger.warning(message)
return df_rsids
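# Example (sketch): listing the report suites whose rsid contains "prod",
# with extended information.
#
#   rsids = mycompany.getReportSuites(txt="prod", extended_info=True)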
def getVirtualReportSuites(self, extended_info: bool = False, limit: int = 100, filterIds: str = None,
idContains: str = None, segmentIds: str = None, save: bool = False) -> list:
"""
return a list of virtual reportSuites and their ids. It can contain more information if expansion is selected.
Arguments:
extended_info : OPTIONAL : boolean to retrieve the maximum of information.
limit : OPTIONAL : How many reportSuite retrieves per serverCall
filterIds : OPTIONAL : comma delimited list of virtual reportSuite ID to be retrieved.
idContains : OPTIONAL : element that should be contained in the Virtual ReportSuite Id
segmentIds : OPTIONAL : comma delimited list of segmentId contained in the VRSID
save : OPTIONAL : if set to True, it will save the list in a file. (Default False)
"""
if self.loggingEnabled:
self.logger.debug(f"Starting getVirtualReportSuites")
expansion_values = "globalCompanyKey,parentRsid,parentRsidName,timezone,timezoneZoneinfo,currentTimezoneOffset,segmentList,description,modified,isDeleted,dataCurrentAsOf,compatibility,dataSchema,sessionDefinition,curatedComponents,type"
params = {"limit": limit}
nb_error = 0
nb_empty = 0
list_urls = []
if extended_info:
params['expansion'] = expansion_values
if filterIds is not None:
params['filterByIds'] = filterIds
if idContains is not None:
params['idContains'] = idContains
if segmentIds is not None:
params['segmentIds'] = segmentIds
path = f"{self.endpoint_company}/reportsuites/virtualreportsuites"
if self.loggingEnabled:
self.logger.debug(f"params: {params}")
vrsid = self.connector.getData(
path, params=params, headers=self.header)
content = vrsid['content']
if not extended_info:
list_content = [{'name': item['name'], 'vrsid': item['id']}
for item in content]
df_vrsids = pd.DataFrame(list_content)
else:
df_vrsids = pd.DataFrame(content)
total_page = vrsid['totalPages']
last_page = vrsid['lastPage']
if not last_page: # if last_page =False
callsToMake = total_page
list_params = [{**params, 'page': page}
for page in range(1, callsToMake)]
list_urls = [path for x in range(1, callsToMake)]
listheaders = [self.header for x in range(1, callsToMake)]
workers = min(10, total_page)
with futures.ThreadPoolExecutor(workers) as executor:
res = executor.map(lambda x, y, z: self.connector.getData(
x, y, headers=z), list_urls, list_params, listheaders)
res = list(res)
list_data = [val for sublist in [r['content']
for r in res if 'content' in r.keys()] for val in sublist]
nb_error = sum(1 for elem in res if 'error_code' in elem.keys())
nb_empty = sum(1 for elem in res if 'content' in elem.keys() and len(
elem['content']) == 0)
if not extended_info:
list_append = [{'name': item['name'], 'vrsid': item['id']}
for item in list_data]
df_append = pd.DataFrame(list_append)
else:
df_append = pd.DataFrame(list_data)
df_vrsids = pd.concat([df_vrsids, df_append], ignore_index=True)
if save:
df_vrsids.to_csv('VRSIDS.csv', sep='\t')
if nb_error > 0 or nb_empty > 0:
message = f'WARNING : Retrieved data are partial.\n{nb_error}/{len(list_urls) + 1} requests returned an error.\n{nb_empty}/{len(list_urls)} requests returned an empty response. \nTry to use filter to retrieve reportSuite or increase limit per request'
print(message)
if self.loggingEnabled:
self.logger.warning(message)
return df_vrsids
def getVirtualReportSuite(self, vrsid: str = None, extended_info: bool = False,
format: str = 'df') -> JsonOrDataFrameType:
"""
return a single virtual report suite ID information as dataframe.
Arguments:
vrsid : REQUIRED : The virtual reportSuite to be retrieved
extended_info : OPTIONAL : boolean to add more information
format : OPTIONAL : format of the output. 2 values "df" for dataframe and "raw" for raw json.
"""
if vrsid is None:
raise Exception("require a Virtual ReportSuite ID")
if self.loggingEnabled:
self.logger.debug(f"Starting getVirtualReportSuite for {vrsid}")
expansion_values = "globalCompanyKey,parentRsid,parentRsidName,timezone,timezoneZoneinfo,currentTimezoneOffset,segmentList,description,modified,isDeleted,dataCurrentAsOf,compatibility,dataSchema,sessionDefinition,curatedComponents,type"
params = {}
if extended_info:
params['expansion'] = expansion_values
path = f"{self.endpoint_company}/reportsuites/virtualreportsuites/{vrsid}"
data = self.connector.getData(path, params=params, headers=self.header)
if format == "df":
data = pd.DataFrame({vrsid: data})
return data
def getVirtualReportSuiteComponents(self, vrsid: str = None, nan_value=""):
"""
Uses the getVirtualReportSuite function to get a VRS and returns
its components as a dataframe. The VRS must have Component Curation enabled.
Arguments:
vrsid : REQUIRED : Virtual Report Suite ID
nan_value : OPTIONAL : how to handle empty cells, default = ""
"""
if self.loggingEnabled:
self.logger.debug(f"Starting getVirtualReportSuiteComponents")
vrs_data = self.getVirtualReportSuite(extended_info=True, vrsid=vrsid)
if "curatedComponents" not in vrs_data.index:
return pd.DataFrame()
components_cell = vrs_data[vrs_data.index ==
"curatedComponents"].iloc[0, 0]
return pd.DataFrame(components_cell).fillna(value=nan_value)
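# Example (sketch): inspecting the curated components of a virtual report
# suite. The vrsid below is a placeholder.
#
#   components = mycompany.getVirtualReportSuiteComponents(vrsid="vrs_mycompany_example")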
def createVirtualReportSuite(self, name: str = None, parentRsid: str = None, segmentList: list = None,
dataSchema: str = "Cache", data_dict: dict = None, **kwargs) -> dict:
"""
Create a new virtual report suite based on the information provided.
Arguments:
name : REQUIRED : name of the virtual reportSuite
parentRsid : REQUIRED : Parent reportSuite ID for the VRS
segmentList : REQUIRED : list of segment ids to be applied on the ReportSuite.
dataSchema : REQUIRED : Type of schema used for the VRSID. (default "Cache")
data_dict : OPTIONAL : you can pass directly the dictionary.
"""
if self.loggingEnabled:
self.logger.debug(f"Starting createVirtualReportSuite")
path = f"{self.endpoint_company}/reportsuites/virtualreportsuites"
expansion_values = "globalCompanyKey,parentRsid,parentRsidName,timezone,timezoneZoneinfo,currentTimezoneOffset,segmentList,description,modified,isDeleted,dataCurrentAsOf,compatibility,dataSchema,sessionDefinition,curatedComponents,type"
params = {'expansion': expansion_values}
if data_dict is None:
body = {
"name": name,
"parentRsid": parentRsid,
"segmentList": segmentList,
"dataSchema": dataSchema,
"description": kwargs.get('description', '')
}
else:
if 'name' not in data_dict.keys() or 'parentRsid' not in data_dict.keys() or 'segmentList' not in data_dict.keys() or 'dataSchema' not in data_dict.keys():
if self.loggingEnabled:
self.logger.error(f"Missing one or more fundamental keys : name, parentRsid, segmentList, dataSchema")
raise Exception("Missing one or more fundamental keys : name, parentRsid, segmentList, dataSchema")
body = data_dict
res = self.connector.postData(
path, params=params, data=body, headers=self.header)
return res
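# Example (sketch): creating a virtual report suite from a parent suite with
# one segment applied. The ids below are placeholders.
#
#   res = mycompany.createVirtualReportSuite(
#       name="My VRS",
#       parentRsid="myrsid",
#       segmentList=["s1234_abcdef0123456789"],
#   )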
def updateVirtualReportSuite(self, vrsid: str = None, data_dict: dict = None, **kwargs) -> dict:
"""
Updates a Virtual Report Suite based on a JSON-like dictionary (same structure as createVirtualReportSuite)
Note that to update components, you need to supply ALL components currently associated with this suite.
Supplying only the components you want to change will remove all others from the VR Suite!
Arguments:
vrsid : REQUIRED : The id of the virtual report suite to update
data_dict : a json-like dictionary of the vrs data to update
"""
if vrsid is None:
raise Exception("require a virtual reportSuite ID")
if self.loggingEnabled:
self.logger.debug(f"Starting updateVirtualReportSuite for {vrsid}")
path = f"{self.endpoint_company}/reportsuites/virtualreportsuites/{vrsid}"
body = data_dict
res = self.connector.putData(path, data=body, headers=self.header)
if self.loggingEnabled:
self.logger.debug(f"updateVirtualReportSuite response : {res}")
return res
def deleteVirtualReportSuite(self, vrsid: str = None) -> str:
"""
Delete a Virtual Report Suite based on the id passed.
Arguments:
vrsid : REQUIRED : The id of the virtual reportSuite to delete.
"""
if vrsid is None:
raise Exception("require a Virtual ReportSuite ID")
if self.loggingEnabled:
self.logger.debug(f"Starting deleteVirtualReportSuite for {vrsid}")
path = f"{self.endpoint_company}/reportsuites/virtualreportsuites/{vrsid}"
res = self.connector.deleteData(path, headers=self.header)
if self.loggingEnabled:
self.logger.debug(f"deleteVirtualReportSuite {vrsid} response : {res}")
return res
def validateVirtualReportSuite(self, name: str = None, parentRsid: str = None, segmentList: list = None,
dataSchema: str = "Cache", data_dict: dict = None, **kwargs) -> dict:
"""
Validate the object to create a new virtual report suite based on the information provided.
Arguments:
name : REQUIRED : name of the virtual reportSuite
parentRsid : REQUIRED : Parent reportSuite ID for the VRS
segmentList : REQUIRED : list of segment ids to be applied on the ReportSuite.
dataSchema : REQUIRED : Type of schema used for the VRSID (default : Cache).
data_dict : OPTIONAL : you can pass directly the dictionary.
"""
if self.loggingEnabled:
self.logger.debug(f"Starting validateVirtualReportSuite")
path = f"{self.endpoint_company}/reportsuites/virtualreportsuites/validate"
expansion_values = "globalCompanyKey, parentRsid, parentRsidName, timezone, timezoneZoneinfo, currentTimezoneOffset, segmentList, description, modified, isDeleted, dataCurrentAsOf, compatibility, dataSchema, sessionDefinition, curatedComponents, type"
if data_dict is None:
body = {
"name": name,
"parentRsid": parentRsid,
"segmentList": segmentList,
"dataSchema": dataSchema,
"description": kwargs.get('description', '')
}
else:
if 'name' not in data_dict.keys() or 'parentRsid' not in data_dict.keys() or 'segmentList' not in data_dict.keys() or 'dataSchema' not in data_dict.keys():
raise Exception(
"Missing one or more fundamental keys : name, parentRsid, segmentList, dataSchema")
body = data_dict
res = self.connector.postData(path, params={'expansion': expansion_values}, data=body, headers=self.header)
if self.loggingEnabled:
self.logger.debug(f"validateVirtualReportSuite response : {res}")
return res
def getDimensions(self, rsid: str, tags: bool = False, description:bool=False, save=False, **kwargs) -> pd.DataFrame:
"""
Retrieve the list of dimensions from a specific reportSuite. Shrink columns to simplify output.
Returns the data frame of available dimensions.
Arguments:
rsid : REQUIRED : Report Suite ID from which you want the dimensions
tags : OPTIONAL : If you would like to have additional information, such as tags. (bool : default False)
description : OPTIONAL : if set to True, adds the description column. Note: it may break the method for some report suites.
save : OPTIONAL : If set to True, it will save the info in a csv file (bool : default False)
Possible kwargs:
full : Boolean : Doesn't shrink the number of columns if set to true
example : getDimensions(rsid,full=True)
"""
if self.loggingEnabled:
self.logger.debug(f"Starting getDimensions")
params = {}
if tags:
params.update({'expansion': 'tags'})
params.update({'rsid': rsid})
dims = self.connector.getData(self.endpoint_company +
self._getDimensions, params=params, headers=self.header)
df_dims = pd.DataFrame(dims)
columns = ['id', 'name', 'category', 'type',
'parent', 'pathable']
if description:
columns.append('description')
if kwargs.get('full', False):
new_cols = pd.DataFrame(df_dims.support.values.tolist(),
columns=['support_oberon', 'support_dw']) # extract list in column
new_df = df_dims.merge(new_cols, right_index=True, left_index=True)
new_df.drop(['reportable', 'support'], axis=1, inplace=True)
df_dims = new_df
else:
df_dims = df_dims[columns]
if save:
df_dims.to_csv(f'dimensions_{rsid}.csv')
return df_dims
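# Example (sketch): retrieving the dimensions of a report suite, including
# tags, and saving them to a CSV. The rsid is a placeholder.
#
#   dims = mycompany.getDimensions("myrsid", tags=True, save=True)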
def getMetrics(self, rsid: str, tags: bool = False, save=False, description:bool=False, dataGroup:bool=False, **kwargs) -> pd.DataFrame:
"""
Retrieve the list of metrics from a specific reportSuite. Shrink columns to simplify output.
Returns the data frame of available metrics.
Arguments:
rsid : REQUIRED : Report Suite ID from which you want the metrics (str)
tags : OPTIONAL : If you would like to have additional information, such as tags. (bool : default False)
dataGroup : OPTIONAL : Adds the dataGroup column to the output. May break the report. (bool : default False)
description : OPTIONAL : If set to True, adds the description column to the output. (bool : default False)
save : OPTIONAL : If set to True, it will save the info in a csv file (bool : default False)
Possible kwargs:
full : Boolean : Doesn't shrink the number of columns if set to true.
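Example (sketch; hypothetical rsid, assumes an authenticated instance `analytics`):
    metrics = analytics.getMetrics("mycompany.prod", tags=True, full=True)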
"""
if self.loggingEnabled:
self.logger.debug(f"Starting getMetrics")
params = {}
if tags:
params.update({'expansion': 'tags'})
params.update({'rsid': rsid})
metrics = self.connector.getData(self.endpoint_company +
self._getMetrics, params=params, headers=self.header)
df_metrics = pd.DataFrame(metrics)
columns = ['id', 'name', 'category', 'type',
'precision', 'segmentable']
if dataGroup:
columns.append('dataGroup')
if description:
columns.append('description')
if kwargs.get('full', False):
new_cols = pd.DataFrame(df_metrics.support.values.tolist(), columns=[
'support_oberon', 'support_dw'])
new_df = df_metrics.merge(
new_cols, right_index=True, left_index=True)
new_df.drop('support', axis=1, inplace=True)
df_metrics = new_df
else:
df_metrics = df_metrics[columns]
if save:
df_metrics.to_csv(f'metrics_{rsid}.csv', sep='\t')
return df_metrics
def getUsers(self, save: bool = False, **kwargs) -> pd.DataFrame:
"""
Retrieve the list of users for a login company. Returns a data frame.
Arguments:
save : OPTIONAL : Save the data in a file (bool : default False).
Possible kwargs:
limit : Number of results per request. Default 100.
expansion : string list such as "lastAccess,createDate"
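Example (sketch; assumes an authenticated instance `analytics`):
    users = analytics.getUsers(save=False, limit=200, expansion="lastAccess,createDate")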
"""
if self.loggingEnabled:
self.logger.debug(f"Starting getUsers")
list_urls = []
nb_error, nb_empty = 0, 0 # use for multi-thread loop
params = {'limit': kwargs.get('limit', 100)}
if kwargs.get("expansion", None) is not None:
params["expansion"] = kwargs.get("expansion", None)
path = "/users"
users = self.connector.getData(self.endpoint_company + path, params=params, headers=self.header)
data = users['content']
lastPage = users['lastPage']
if not lastPage: # more pages to fetch; retrieve them in parallel below
callsToMake = users['totalPages']
list_params = [{'limit': params['limit'], 'page': page}
for page in range(1, callsToMake)]
list_urls = [self.endpoint_company +
"/users" for x in range(1, callsToMake)]
listheaders = [self.header
for x in range(1, callsToMake)]
workers = min(10, len(list_params))
with futures.ThreadPoolExecutor(workers) as executor:
res = executor.map(lambda x, y, z: self.connector.getData(x, y, headers=z), list_urls,
list_params, listheaders)
res = list(res)
users_lists = [elem['content']
for elem in res if 'content' in elem.keys()]
nb_error = sum(1 for elem in res if 'error_code' in elem.keys())
nb_empty = sum(1 for elem in res if 'content' in elem.keys()
and len(elem['content']) == 0)
append_data = [val for sublist in users_lists
for val in sublist] # flatten list of lists
data = data + append_data
df_users = pd.DataFrame(data)
columns = ['email', 'login', 'fullName', 'firstName', 'lastName', 'admin', 'loginId', 'imsUserId',
'createDate', 'lastAccess', 'title', 'disabled', 'phoneNumber', 'companyid']
df_users = df_users[columns]
df_users['createDate'] = pd.to_datetime(df_users['createDate'])
df_users['lastAccess'] = pd.to_datetime(df_users['lastAccess'])
if save:
df_users.to_csv(f'users_{int(time.time())}.csv', sep='\t')
if nb_error > 0 or nb_empty > 0:
print(
f'WARNING : Retrieved data are partial.\n{nb_error}/{len(list_urls) + 1} requests returned an error.\n{nb_empty}/{len(list_urls)} requests returned an empty response. \nTry to use filter to retrieve users or increase limit')
return df_users
def getUserMe(self,loginId:str=None)->dict:
"""
Retrieve the user attached to the current credentials (the "/users/me" endpoint).
Argument:
loginId : OPTIONAL : not used by this endpoint; kept for backward compatibility.
"""
path = f"/users/me"
res = self.connector.getData(self.endpoint_company + path)
return res
def getSegments(self, name: str = None, tagNames: str = None, inclType: str = 'all', rsids_list: list = None,
sidFilter: list = None, extended_info: bool = False, format: str = "df", save: bool = False,
verbose: bool = False, **kwargs) -> JsonListOrDataFrameType:
"""
Retrieve the list of segments. Returns a data frame.
Arguments:
name : OPTIONAL : Filter to only include segments that contain the name (str)
tagNames : OPTIONAL : Filter list to only include segments that contain one of the tags (string delimited with comma, can be a list as well)
inclType : OPTIONAL : type of segments to be retrieved.(str) Possible values:
- all : Default value (all segments possibles)
- shared : shared segments
- template : template segments
- deleted : deleted segments
- internal : internal segments
- curatedItem : curated segments
rsids_list : OPTIONAL : Filter list to only include segments tied to specified RSID list (list)
sidFilter : OPTIONAL : Filter list to only include segments in the specified list (list)
extended_info : OPTIONAL : additional segment metadata fields to include on response (bool : default False)
if set to true, returns reportSuiteName, ownerFullName, modified, tags, compatibility, definition
format : OPTIONAL : defines the format returned by the query. (Default df)
possible values :
"df" : default value that returns a dataframe
"raw": returns a list of values, more or less as returned by the server.
save : OPTIONAL : If set to True, it will save the info in a csv file (bool : default False)
verbose : OPTIONAL : If set to True, print some information
Possible kwargs:
limit : number of segments retrieved by request. default 500: Limited to 1000 by the AnalyticsAPI.
NOTE : Segment Endpoint doesn't support multi-threading. Default to 500.
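Example (sketch; assumes an authenticated instance `analytics`):
    segs = analytics.getSegments(inclType="shared", extended_info=True, format="df")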
"""
if self.loggingEnabled:
self.logger.debug(f"Starting getSegments")
limit = int(kwargs.get('limit', 500))
params = {'includeType': 'all', 'limit': limit}
if extended_info:
params.update(
{'expansion': 'reportSuiteName,ownerFullName,created,modified,tags,compatibility,definition,shares'})
if name is not None:
params.update({'name': str(name)})
if tagNames is not None:
if type(tagNames) == list:
tagNames = ','.join(tagNames)
params.update({'tagNames': tagNames})
if inclType != 'all':
params['includeType'] = inclType
if rsids_list is not None:
if type(rsids_list) == list:
rsids_list = ','.join(rsids_list)
params.update({'rsids': rsids_list})
if sidFilter is not None:
if type(sidFilter) == list:
sidFilter = ','.join(sidFilter)
params.update({'segmentFilter': sidFilter})  # segment-ID filter (assumed API param name); 'rsids' would overwrite the RSID filter above
data = []
lastPage = False
page_nb = 0
if verbose:
print("Starting requesting segments")
while not lastPage:
params['page'] = page_nb
segs = self.connector.getData(self.endpoint_company +
self._getSegments, params=params, headers=self.header)
data += segs['content']
lastPage = segs['lastPage']
page_nb += 1
if verbose and page_nb % 10 == 0:
print(f"request #{page_nb / 10}")
if format == "df":
segments = pd.DataFrame(data)
else:
segments = data
if save and format == "df":
segments.to_csv(f'segments_{int(time.time())}.csv', sep='\t')
if verbose:
print(
f'Saving data in file : {os.getcwd()}{os.sep}segments_{int(time.time())}.csv')
elif save and format == "raw":
with open(f"segments_{int(time.time())}.csv","w") as f:
f.write(json.dumps(segments,indent=4))
return segments
def getSegment(self, segment_id: str = None,full:bool=False, *args) -> dict:
"""
Get a specific segment from the ID. Returns the object of the segment.
Arguments:
segment_id : REQUIRED : the segment id to retrieve.
full : OPTIONAL : Add all possible options
Possible args:
- "reportSuiteName" : string : to retrieve reportSuite attached to the segment
- "ownerFullName" : string : to retrieve ownerFullName attached to the segment
- "modified" : string : to retrieve when segment was modified
- "tags" : string : to retrieve tags attached to the segment
- "compatibility" : string : to retrieve which tool is compatible
- "definition" : string : definition of the segment
- "publishingStatus" : string : status for the segment
- "definitionLastModified" : string : last definition of the segment
- "categories" : string : categories of the segment
"""
ValidArgs = ["reportSuiteName", "ownerFullName", "modified", "tags", "compatibility",
"definition", "publishingStatus", "publishingStatus", "definitionLastModified", "categories"]
if segment_id is None:
raise Exception("Expected a segment id")
if self.loggingEnabled:
self.logger.debug(f"Starting getSegment for {segment_id}")
path = f"/segments/{segment_id}"
args = [element for element in args if element in ValidArgs]  # args is a tuple; build a filtered list instead of removing in place
params = {'expansion': ','.join(args)}
if full:
params = {'expansion': ','.join(ValidArgs)}
res = self.connector.getData(self.endpoint_company + path, params=params, headers=self.header)
return res
def scanSegment(self,segment:Union[str,dict],verbose:bool=False)->dict:
"""
Return the dimensions, metrics and reportSuite used and the main scope of the segment.
Arguments:
segment : REQUIRED : either the ID of the segment or the full definition.
verbose : OPTIONAL : print some comments.
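Example (sketch; hypothetical segment ID):
    usage = analytics.scanSegment("s1234_5f0000000000000000000000")
    # returns a dict with 'dimensions', 'metrics', 'rsid' and 'scope' keys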
"""
if self.loggingEnabled:
self.logger.debug(f"Starting scanSegment")
if type(segment) == str:
if verbose:
print('retrieving segment definition')
defSegment = self.getSegment(segment,full=True)
elif type(segment) == dict:
defSegment = deepcopy(segment)
if 'definition' not in defSegment.keys():
raise KeyError('missing "definition" key ')
if verbose:
print('copied segment definition')
mydef = str(defSegment['definition'])
dimensions : list = re.findall("'(variables/.+?)'",mydef)
metrics : list = re.findall("'(metrics/.+?)'",mydef)
reportSuite = defSegment['rsid']
scope = re.search("'context': '(.+)'}[^'context']+",mydef)
res = {
'dimensions' : set(dimensions) if len(dimensions)>0 else set(),
'metrics' : set(metrics) if len(metrics)>0 else set(),
'rsid' : reportSuite,
'scope' : scope.group(1) if scope is not None else None  # guard: the regex may not match
}
return res
def createSegment(self, segmentJSON: dict = None) -> dict:
"""
Method that creates a new segment based on the dictionary passed to it.
Arguments:
segmentJSON : REQUIRED : the dictionary that represents the JSON statement for the segment.
More information at this address <https://adobedocs.github.io/analytics-2.0-apis/#/segments/segments_createSegment>
"""
if self.loggingEnabled:
self.logger.debug(f"starting createSegment")
if segmentJSON is None:
print('No segment data has been pushed')
return None
data = deepcopy(segmentJSON)
seg = self.connector.postData(
self.endpoint_company + self._getSegments,
data=data,
headers=self.header
)
return seg
def createSegmentValidate(self, segmentJSON: dict = None) -> object:
"""
Method that validates a new segment based on the dictionary passed to it.
Arguments:
segmentJSON : REQUIRED : the dictionary that represents the JSON statement for the segment.
More information at this address <https://adobedocs.github.io/analytics-2.0-apis/#/segments/segments_createSegment>
"""
if self.loggingEnabled:
self.logger.debug(f"starting createSegmentValidate")
if segmentJSON is None:
print('No segment data has been pushed')
return None
data = deepcopy(segmentJSON)
path = "/segments/validate"
seg = self.connector.postData(self.endpoint_company +path,data=data)
return seg
def updateSegment(self, segmentID: str = None, segmentJSON: dict = None) -> object:
"""
Method that updates a specific segment based on the dictionary passed to it.
Arguments:
segmentID : REQUIRED : Segment ID to be updated
segmentJSON : REQUIRED : the dictionary that represents the JSON statement for the segment.
"""
if self.loggingEnabled:
self.logger.debug(f"starting updateSegment")
if segmentJSON is None or segmentID is None:
print('No segment or segmentID data has been pushed')
if self.loggingEnabled:
self.logger.error(f"No segment or segmentID data has been pushed")
return None
data = deepcopy(segmentJSON)
seg = self.connector.putData(
self.endpoint_company + self._getSegments + '/' + segmentID,
data=data,
headers=self.header
)
return seg
def deleteSegment(self, segmentID: str = None) -> object:
"""
Method that deletes a specific segment based on the ID passed.
Arguments:
segmentID : REQUIRED : Segment ID to be deleted
"""
if segmentID is None:
print('No segmentID data has been pushed')
return None
if self.loggingEnabled:
self.logger.debug(f"starting deleteSegment for {segmentID}")
seg = self.connector.deleteData(self.endpoint_company +
self._getSegments + '/' + segmentID, headers=self.header)
return seg
def getCalculatedMetrics(
self,
name: str = None,
tagNames: str = None,
inclType: str = 'all',
rsids_list: list = None,
extended_info: bool = False,
save=False,
format:str='df',
**kwargs
) -> pd.DataFrame:
"""
Retrieve the list of calculated metrics. Returns a data frame.
Arguments:
name : OPTIONAL : Filter to only include calculated metrics that contain the name (str)
tagNames : OPTIONAL : Filter list to only include calculated metrics that contain one of the tags (string delimited with comma, can be a list as well)
inclType : OPTIONAL : type of calculated metrics to be retrieved. (str) Possible values:
- all : Default value (all possible calculated metrics)
- shared : shared calculated metrics
- template : template calculated metrics
rsids_list : OPTIONAL : Filter list to only include calculated metrics tied to the specified RSID list (list)
extended_info : OPTIONAL : additional calculated-metric metadata fields to include on response (bool : default False)
additional infos: reportSuiteName, definition, ownerFullName, modified, tags, compatibility
save : OPTIONAL : If set to True, it will save the info in a csv file (Default False)
format : OPTIONAL : format of the output. 2 values "df" for dataframe and "raw" for raw json.
Possible kwargs:
limit : number of segments retrieved by request. default 500: Limited to 1000 by the AnalyticsAPI.(int)
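Example (sketch; assumes an authenticated instance `analytics`):
    calcs = analytics.getCalculatedMetrics(extended_info=True, format="df")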
"""
if self.loggingEnabled:
self.logger.debug(f"starting getCalculatedMetrics")
limit = int(kwargs.get('limit', 500))
params = {'includeType': inclType, 'limit': limit}
if name is not None:
params.update({'name': str(name)})
if tagNames is not None:
if type(tagNames) == list:
tagNames = ','.join(tagNames)
params.update({'tagNames': tagNames})
if inclType != 'all':
params['includeType'] = inclType
if rsids_list is not None:
if type(rsids_list) == list:
rsids_list = ','.join(rsids_list)
params.update({'rsids': rsids_list})
if extended_info:
params.update(
{'expansion': 'reportSuiteName,definition,ownerFullName,modified,tags,categories,compatibility,shares'})
metrics = self.connector.getData(self.endpoint_company +
self._getCalcMetrics, params=params)
data = metrics['content']
lastPage = metrics['lastPage']
if not lastPage: # check if lastpage is inversed of False
page_nb = 0
while not lastPage:
page_nb += 1
params['page'] = page_nb
metrics = self.connector.getData(self.endpoint_company +
self._getCalcMetrics, params=params, headers=self.header)
data += metrics['content']
lastPage = metrics['lastPage']
if format == "raw":
if save:
with open(f'calculated_metrics_{int(time.time())}.json','w') as f:
f.write(json.dumps(data,indent=4))
return data
df_calc_metrics = pd.DataFrame(data)
if save:
df_calc_metrics.to_csv(f'calculated_metrics_{int(time.time())}.csv', sep='\t')
return df_calc_metrics
def getCalculatedMetric(self,calculatedMetricId:str=None,full:bool=True)->dict:
"""
Return a dictionary on the calculated metrics requested.
Arguments:
calculatedMetricId : REQUIRED : The calculated metric ID to be retrieved.
full : OPTIONAL : if True, additional calculated-metric metadata fields are included in the response (bool : default True)
additional infos: reportSuiteName, definition, ownerFullName, modified, tags, compatibility
"""
if calculatedMetricId is None:
raise ValueError("Require a calculated metrics ID")
if self.loggingEnabled:
self.logger.debug(f"starting getCalculatedMetric for {calculatedMetricId}")
params = {}
if full:
params.update({'expansion': 'reportSuiteName,definition,ownerFullName,modified,tags,categories,compatibility'})
path = f"/calculatedmetrics/{calculatedMetricId}"
res = self.connector.getData(self.endpoint_company+path,params=params)
return res
def scanCalculatedMetric(self,calculatedMetric:Union[str,dict],verbose:bool=False)->dict:
"""
Return a dictionary of the metrics and dimensions used in the calculated metric passed.
Arguments:
calculatedMetric : REQUIRED : either the ID of the calculated metric or its full definition.
verbose : OPTIONAL : print some comments.
"""
if self.loggingEnabled:
self.logger.debug(f"starting scanCalculatedMetric")
if type(calculatedMetric) == str:
if verbose:
print('retrieving calculated metrics definition')
cm = self.getCalculatedMetric(calculatedMetric,full=True)
elif type(calculatedMetric) == dict:
cm = deepcopy(calculatedMetric)
if 'definition' not in cm.keys():
raise KeyError('missing "definition" key')
if verbose:
print('copied calculated metrics definition')
mydef = str(cm['definition'])
segments:list = cm.get('compatibility',{}).get('segments',[])  # 'compatibility' may be absent from the definition
res = {"dimensions":[],'metrics':[]}
for segment in segments:
if verbose:
print(f"retrieving segment {segment} definition")
tmp:dict = self.scanSegment(segment)
res['dimensions'] += [dim for dim in tmp['dimensions']]
res['metrics'] += [met for met in tmp['metrics']]
metrics : list = re.findall("'(metrics/.+?)'",mydef)
res['metrics'] += metrics
res['rsid'] = cm['rsid']
res['metrics'] = set(res['metrics']) if len(res['metrics'])>0 else set()
res['dimensions'] = set(res['dimensions']) if len(res['dimensions'])>0 else set()
return res
def createCalculatedMetric(self, metricJSON: dict = None) -> dict:
"""
Method that creates a specific calculated metric based on the dictionary passed to it.
Arguments:
metricJSON : REQUIRED : Calculated Metrics information to create. (Required: name, definition, rsid)
More information can be found at this address https://adobedocs.github.io/analytics-2.0-apis/#/calculatedmetrics/calculatedmetrics_createCalculatedMetric
"""
if self.loggingEnabled:
self.logger.debug(f"starting createCalculatedMetric")
if metricJSON is None or type(metricJSON) != dict:
if self.loggingEnabled:
self.logger.error(f'Expected a dictionary to create the calculated metrics')
raise Exception(
"Expected a dictionary to create the calculated metrics")
if 'name' not in metricJSON.keys() or 'definition' not in metricJSON.keys() or 'rsid' not in metricJSON.keys():
if self.loggingEnabled:
self.logger.error(f'Expected "name", "definition" and "rsid" in the data')
raise KeyError(
'Expected "name", "definition" and "rsid" in the data')
cm = self.connector.postData(self.endpoint_company +
self._getCalcMetrics, headers=self.header, data=metricJSON)
return cm
def createCalculatedMetricValidate(self,metricJSON: dict=None)->dict:
"""
Method that validates a specific calculated metric definition based on the dictionary passed to it.
Arguments:
metricJSON : REQUIRED : Calculated Metrics information to create. (Required: name, definition, rsid)
More information can be found at this address https://adobedocs.github.io/analytics-2.0-apis/#/calculatedmetrics/calculatedmetrics_createCalculatedMetric
"""
if self.loggingEnabled:
self.logger.debug(f"starting createCalculatedMetricValidate")
if metricJSON is None or type(metricJSON) != dict:
raise Exception(
"Expected a dictionary to create the calculated metrics")
if 'name' not in metricJSON.keys() or 'definition' not in metricJSON.keys() or 'rsid' not in metricJSON.keys():
if self.loggingEnabled:
self.logger.error(f'Expected "name", "definition" and "rsid" in the data')
raise KeyError(
'Expected "name", "definition" and "rsid" in the data')
path = "/calculatedmetrics/validate"
cm = self.connector.postData(self.endpoint_company+path, data=metricJSON)
return cm
def updateCalculatedMetric(self, calcID: str = None, calcJSON: dict = None) -> object:
"""
Method that updates a specific Calculated Metrics based on the dictionary passed to it.
Arguments:
calcID : REQUIRED : Calculated Metric ID to be updated
calcJSON : REQUIRED : the dictionary that represents the JSON statement for the calculated metric.
"""
if calcJSON is None or calcID is None:
print('No calcMetric or calcMetric JSON data has been passed')
return None
if self.loggingEnabled:
self.logger.debug(f"starting updateCalculatedMetric for {calcID}")
data = deepcopy(calcJSON)
cm = self.connector.putData(
self.endpoint_company + self._getCalcMetrics + '/' + calcID,
data=data,
headers=self.header
)
return cm
def deleteCalculatedMetric(self, calcID: str = None) -> object:
"""
Method that deletes a specific calculated metric based on the ID passed.
Arguments:
calcID : REQUIRED : Calculated Metrics ID to be deleted
"""
if calcID is None:
print('No calculated metrics data has been passed')
return None
if self.loggingEnabled:
self.logger.debug(f"starting deleteCalculatedMetric for {calcID}")
cm = self.connector.deleteData(
self.endpoint_company + self._getCalcMetrics + '/' + calcID,
headers=self.header
)
return cm
def getDateRanges(self, extended_info: bool = False, save: bool = False, includeType: str = 'all',verbose:bool=False,
**kwargs) -> pd.DataFrame:
"""
Get the list of date ranges available for the user.
Arguments:
extended_info : OPTIONAL : additional segment metadata fields to include on response
additional infos: reportSuiteName, ownerFullName, modified, tags, compatibility, definition
save : OPTIONAL : If set to True, it will save the info in a csv file (Default False)
includeType : Include additional date ranges not owned by user. The "all" option takes precedence over "shared"
Possible values are all, shared, templates. You can add all of them as comma separated string.
Possible kwargs:
limit : number of segments retrieved by request. default 500: Limited to 1000 by the AnalyticsAPI.
full : Boolean : Doesn't shrink the number of columns if set to true
"""
if self.loggingEnabled:
self.logger.debug(f"starting getDateRanges")
limit = int(kwargs.get('limit', 500))
includeType = includeType.split(',')
params = {'limit': limit, 'includeType': includeType}
if extended_info:
params.update(
{'expansion': 'definition,ownerFullName,modified,tags'})
dateRanges = self.connector.getData(
self.endpoint_company + self._getDateRanges,
params=params,
headers=self.header,
verbose=verbose
)
data = dateRanges['content']
df_dates = pd.DataFrame(data)
if save:
df_dates.to_csv('date_range.csv', index=False)
return df_dates
def getDateRange(self,dateRangeID:str=None)->dict:
"""
Get a specific Date Range based on its ID.
Arguments:
dateRangeID : REQUIRED : the date range ID to be retrieved.
"""
if dateRangeID is None:
raise ValueError("No date range ID has been passed")
if self.loggingEnabled:
self.logger.debug(f"starting getDateRange with ID: {dateRangeID}")
params ={
"expansion":"definition,ownerFullName,modified,tags"
}
dr = self.connector.getData(
self.endpoint_company + f"{self._getDateRanges}/{dateRangeID}",
params=params
)
return dr
def updateDateRange(self, dateRangeID: str = None, dateRangeJSON: dict = None) -> dict:
"""
Method that updates a specific Date Range based on the dictionary passed to it.
Arguments:
dateRangeID : REQUIRED : Date Range ID to be updated
dateRangeJSON : REQUIRED : the dictionary that represents the JSON statement for the date Range.
"""
if dateRangeJSON is None or dateRangeID is None:
raise ValueError("No date range or date range JSON data have been passed")
if self.loggingEnabled:
self.logger.debug(f"starting updateDateRange")
data = deepcopy(dateRangeJSON)
dr = self.connector.putData(
self.endpoint_company + self._getDateRanges + '/' + dateRangeID,
data=data,
headers=self.header
)
return dr
def deleteDateRange(self, dateRangeID: str = None) -> object:
"""
Method that deletes a specific date Range based on the id passed.
Arguments:
dateRangeID : REQUIRED : ID of Date Range to be deleted
"""
if dateRangeID is None:
print('No Date Range ID has been passed')
return None
if self.loggingEnabled:
self.logger.debug(f"starting deleteDateRange for {dateRangeID}")
response = self.connector.deleteData(
self.endpoint_company + self._getDateRanges + '/' + dateRangeID,
headers=self.header
)
return response
def getCalculatedFunctions(self, **kwargs) -> pd.DataFrame:
"""
Returns the calculated metrics functions.
"""
if self.loggingEnabled:
self.logger.debug(f"starting getCalculatedFunctions")
path = "/calculatedmetrics/functions"
limit = int(kwargs.get('limit', 500))
params = {'limit': limit}
funcs = self.connector.getData(
self.endpoint_company + path,
params=params,
headers=self.header
)
df = pd.DataFrame(funcs)
return df
def getTags(self, limit: int = 100, **kwargs) -> list:
"""
Return the list of tags
Arguments:
limit : OPTIONAL : Amount of tag to be returned by request. Default 100
"""
if self.loggingEnabled:
self.logger.debug(f"starting getTags")
path = "/componentmetadata/tags"
params = {'limit': limit}
if kwargs.get('page', False):
params['page'] = kwargs.get('page', 0)
res = self.connector.getData(self.endpoint_company + path, params=params, headers=self.header)
data = res['content']
if not res['lastPage']:
page = res['number'] + 1
data += self.getTags(limit=limit, page=page)
return data
def getTag(self, tagId: str = None) -> dict:
"""
Return a tag by its ID.
Arguments:
tagId : REQUIRED : the Tag ID to be retrieved.
"""
if tagId is None:
raise Exception("Require a tag ID for this method.")
if self.loggingEnabled:
self.logger.debug(f"starting getTag for {tagId}")
path = f"/componentmetadata/tags/{tagId}"
res = self.connector.getData(self.endpoint_company + path, headers=self.header)
return res
def getComponentTagName(self, tagNames: str = None, componentType: str = None) -> dict:
"""
Given a comma separated list of tag names, return component ids associated with them.
Arguments:
tagNames : REQUIRED : Comma separated list of tag names.
componentType : REQUIRED : The component type to operate on.
Available values : segment, dashboard, bookmark, calculatedMetric, project, dateRange, metric, dimension, virtualReportSuite, scheduledJob, alert, classificationSet
"""
path = "/componentmetadata/tags/tagnames"
if tagNames is None:
raise Exception("Requires tag names to be provided")
if self.loggingEnabled:
self.logger.debug(f"starting getComponentTagName for {tagNames}")
if componentType is None:
raise Exception("Requires a Component Type to be provided")
params = {
"tagNames": tagNames,
"componentType": componentType
}
res = self.connector.getData(self.endpoint_company + path, params=params, headers=self.header)
return res
def searchComponentsTags(self, componentType: str = None, componentIds: list = None) -> dict:
"""
Search for the tags of a list of components by their IDs.
Arguments:
componentType : REQUIRED : The component type to use in the search.
Available values : segment, dashboard, bookmark, calculatedMetric, project, dateRange, metric, dimension, virtualReportSuite, scheduledJob, alert, classificationSet
componentIds : REQUIRED : List of components Ids to use.
"""
if self.loggingEnabled:
self.logger.debug(f"starting searchComponentsTags")
if componentType is None:
raise Exception("ComponentType is required")
if componentIds is None or type(componentIds) != list:
raise Exception("componentIds is required as a list of ids")
path = "/componentmetadata/tags/component/search"
obj = {
"componentType": componentType,
"componentIds": componentIds
}
if self.loggingEnabled:
self.logger.debug(f"params {obj}")
res = self.connector.postData(self.endpoint_company + path, data=obj, headers=self.header)
return res
def createTags(self, data: list = None) -> dict:
"""
Create a new tag and applies that new tag to the passed components.
Arguments:
data : REQUIRED : list of the tag to be created with their component relation.
Example of data :
[
{
"id": 0,
"name": "string",
"description": "string",
"components": [
{
"componentType": "string",
"componentId": "string",
"tags": [
"Unknown Type: Tag"
]
}
]
}
]
"""
if self.loggingEnabled:
self.logger.debug(f"starting createTags")
if data is None:
raise Exception("Requires a list of tags to be created")
path = "โ/componentmetadataโ/tags"
if self.loggingEnabled:
self.logger.debug(f"data: {data}")
res = self.connector.postData(self.endpoint_company + path, data=data, headers=self.header)
return res
def deleteTags(self, componentType: str = None, componentIds: str = None) -> str:
"""
Delete all tags from the component Type and the component ids specified.
Arguments:
componentIds : REQUIRED : the Comma-separated list of componentIds to operate on.
componentType : REQUIRED : The component type to operate on.
Available values : segment, dashboard, bookmark, calculatedMetric, project, dateRange, metric, dimension, virtualReportSuite, scheduledJob, alert, classificationSet
"""
if self.loggingEnabled:
self.logger.debug(f"starting deleteTags")
if componentType is None:
raise Exception("require a component type")
if componentIds is None:
raise Exception("require component ID(s)")
path = "/componentmetadata/tags"
params = {
"componentType": componentType,
"componentIds": componentIds
}
res = self.connector.deleteData(self.endpoint_company + path, params=params, headers=self.header)
return res
def deleteTag(self, tagId: str = None) -> str:
"""
Delete a Tag based on its id.
Arguments:
tagId : REQUIRED : The tag ID to be deleted.
"""
if tagId is None:
raise Exception("A tag ID is required")
if self.loggingEnabled:
self.logger.debug(f"starting deleteTag for {tagId}")
path = "โ/componentmetadataโ/tagsโ/{tagId}"
res = self.connector.deleteData(self.endpoint_company + path, headers=self.header)
return res
def getComponentTags(self, componentId: str = None, componentType: str = None) -> list:
"""
Given a componentId, return all tags associated with that component.
Arguments:
componentId : REQUIRED : The componentId to operate on. Currently this is just the segmentId.
componentType : REQUIRED : The component type to operate on.
segment, dashboard, bookmark, calculatedMetric, project, dateRange, metric, dimension, virtualReportSuite, scheduledJob, alert, classificationSet
"""
if self.loggingEnabled:
self.logger.debug(f"starting getComponentTags")
path = "/componentmetadata/tags/search"
if componentType is None:
raise Exception("require a component type")
if componentId is None:
raise Exception("require a component ID")
params = {"componentId": componentId, "componentType": componentType}
res = self.connector.getData(self.endpoint_company + path, params=params, headers=self.header)
return res
def updateComponentTags(self, data: list = None):
"""
Overwrite the component tags with the list sent.
Arguments:
data : REQUIRED : list of the components to be updated with their respective list of tag names.
Object looks like the following:
[
{
"componentType": "string",
"componentId": "string",
"tags": [
"Unknown Type: Tag"
]
}
]
"""
if self.loggingEnabled:
self.logger.debug(f"starting updateComponentTags")
if data is None or type(data) != list:
raise Exception("require list of update to be sent.")
path = "/componentmetadata/tags/tagitems"
res = self.connector.putData(self.endpoint_company + path, data=data, headers=self.header)
return res
def getScheduledJobs(self, includeType: str = "all", full: bool = True,limit:int=1000,format:str="df",verbose: bool = False) -> JsonListOrDataFrameType:
"""
Get Scheduled Projects. You can retrieve the projectId out of the tasks column to see for which workspace a schedule applies.
Arguments:
includeType : OPTIONAL : By default gets all non-expired or deleted projects. (default "all")
You can specify e.g. "all,shared,expired,deleted" to get more.
Active schedules always get exported, so you need to use the `rsLocalExpirationTime` parameter in the `schedule` column to e.g. see which schedules are expired.
full : OPTIONAL : By default True. It returns the following additional information "ownerFullName,groups,tags,sharesFullName,modified,favorite,approved,scheduledItemName,scheduledUsersFullNames,deletedReason"
limit : OPTIONAL : Number of element retrieved by request (default max 1000)
format : OPTIONAL : Define the format you want to output the result. Default "df" for dataframe, other option "raw"
verbose: OPTIONAL : set to True for debug output
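Example (sketch; assumes an authenticated instance `analytics`):
    jobs = analytics.getScheduledJobs(includeType="all", full=True, format="df")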
"""
if self.loggingEnabled:
self.logger.debug(f"starting getScheduledJobs")
params = {"includeType": includeType,
"pagination": True,
"locale": "en_US",
"page": 0,
"limit": limit
}
if full is True:
params["expansion"] = "ownerFullName,groups,tags,sharesFullName,modified,favorite,approved,scheduledItemName,scheduledUsersFullNames,deletedReason"
path = "/scheduler/scheduler/scheduledjobs/"
if verbose:
print(f"Getting Scheduled Jobs with Parameters {params}")
res = self.connector.getData(self.endpoint_company + path, params=params, headers=self.header)
if res.get("content") is None:
raise Exception(f"Scheduled Job had no content in response. Parameters were: {params}")
# get Scheduled Jobs data into Data Frame
data = res.get("content")
last_page = res.get("lastPage",True)
total_el = res.get("totalElements")
number_el = res.get("numberOfElements")
if verbose:
print(f"Last Page {last_page}, total elements: {total_el}, number_el: {number_el}")
# iterate through pages if not on last page yet
while not last_page:
if verbose:
print(f"last_page is {last_page}, next round")
params["page"] += 1
res = self.connector.getData(self.endpoint_company + path, params=params, headers=self.header)
data += res.get("content")
last_page = res.get("lastPage",True)
if format == "df":
df = pd.DataFrame(data)
return df
return data
def getScheduledJob(self,scheduleId:str=None)->dict:
"""
Return a scheduled project definition.
Arguments:
scheduleId : REQUIRED : Schedule project ID
"""
if scheduleId is None:
raise ValueError("A schedule ID is required")
if self.loggingEnabled:
self.logger.debug(f"starting getScheduledJob with ID: {scheduleId}")
path = f"/scheduler/scheduler/scheduledjobs/{scheduleId}"
params = {
'expansion': 'modified,favorite,approved,tags,shares,sharesFullName,reportSuiteName,schedule,triggerObject,tasks,deliverySetting'}
res = self.connector.getData(self.endpoint_company + path, params=params)
return res
def createScheduledJob(self,projectId:str=None,type:str="pdf",schedule:dict=None,loginIds:list=None,emails:list=None,groupIds:list=None,width:int=None)->dict:
"""
Creates a schedule job based on the information provided as arguments.
Expiration will be in one year by default.
Arguments:
projectId : REQUIRED : The workspace project ID to send.
type : REQUIRED : how to send the project, default "pdf"
schedule : REQUIRED : object to specify the schedule used.
example: {
"hour": 10,
"minute": 45,
"second": 25,
"interval": 1,
"type": "daily"
}
{
'type': 'weekly',
'second': 53,
'minute': 0,
'hour': 8,
'daysOfWeek': [2],
'interval': 1
}
{
'type': 'monthly',
'second': 53,
'minute': 30,
'hour': 16,
'dayOfMonth': 21,
'interval': 1
}
loginIds : REQUIRED : A list of login ID of the users that are recipient of the report. It can be retrieved by the getUsers method.
emails : OPTIONAL : If users are not registered in AA, you can specify a list of email addresses.
groupIds : OPTIONAL : Group Id to send the report to.
width : OPTIONAL : width of the report to be sent. (Minimum 800)
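Example (sketch; hypothetical project ID and login ID, retrievable via getProjects and getUsers):
    schedule = {"hour": 10, "minute": 0, "second": 0, "interval": 1, "type": "daily"}
    analytics.createScheduledJob(projectId="myProjectId", type="pdf", schedule=schedule, loginIds=[200012345])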
"""
if self.loggingEnabled:
self.logger.debug(f"starting createScheduleJob")
path = f"/scheduler/scheduler/scheduledjobs/"
dateNow = datetime.datetime.now()
nowDateTime = datetime.datetime.isoformat(dateNow,timespec='seconds')
futureDate = datetime.datetime.isoformat(dateNow.replace(dateNow.year + 1),timespec='seconds')
deliveryId_res = self.createDeliverySetting(loginIds=loginIds, emails=emails,groupIds=groupIds)
deliveryId = deliveryId_res.get('id','')
if deliveryId == "":
if self.loggingEnabled:
self.logger.error(f"erro creating the delivery ID")
self.logger.error(json.dumps(deliveryId_res))
raise Exception("Error creating the delivery ID")
me = self.getUserMe()
projectDetail = self.getProject(projectId)
data = {
"approved" : False,
"complexity":{},
"curatedItem":False,
"description" : "",
"favorite" : False,
"hidden":False,
"internal":False,
"intrinsicIdentity" : False,
"isDeleted":False,
"isDisabled":False,
"locale":"en_US",
"noAccess":False,
"template":False,
"version":"1.0.1",
"rsid":projectDetail.get('rsid',''),
"schedule":{
"rsLocalStartTime":nowDateTime,
"rsLocalExpirationTime":futureDate,
"triggerObject":schedule
},
"tasks":[
{
"tasktype":"generate",
"tasksubtype":"analysisworkspace",
"requestParams":{
"artifacts":[type],
"imsOrgId": self.connector.config['org_id'],
"imsUserId": me.get('imsUserId',''),
"imsUserName":"API",
"projectId" : projectDetail.get('id'),
"projectName" : projectDetail.get('name')
}
},
{
"tasktype":"deliver",
"artifactType":type,
"deliverySettingId": deliveryId,
}
]
}
if width is not None and width >= 800:
data['tasks'][0]['requestParams']['width'] = width
res = self.connector.postData(self.endpoint_company+path,data=data)
return res
def updateScheduledJob(self,scheduleId:str=None,scheduleObj:dict=None)->dict:
"""
Update a scheduled job based on its ID and the definition attached to it.
Arguments:
scheduleId : REQUIRED : the jobs to be updated.
scheduleObj : REQUIRED : The object to replace the current definition.
"""
if scheduleId is None:
raise ValueError("A schedule ID is required")
if scheduleObj is None:
raise ValueError('A schedule Object is required')
if self.loggingEnabled:
self.logger.debug(f"starting updateScheduleJob with ID: {scheduleId}")
path = f"/scheduler/scheduler/scheduledjobs/{scheduleId}"
res = self.connector.putData(self.endpoint_company+path,data=scheduleObj)
return res
def deleteScheduledJob(self,scheduleId:str=None)->dict:
"""
Delete a scheduled project based on its ID.
Arguments:
scheduleId : REQUIRED : the schedule ID to be deleted.
"""
if scheduleId is None:
raise Exception("A schedule ID is required for deletion")
if self.loggingEnabled:
self.logger.debug(f"starting deleteScheduleJob with ID: {scheduleId}")
path = f"/scheduler/scheduler/scheduledjobs/{scheduleId}"
res = self.connector.deleteData(self.endpoint_company + path)
return res
def getDeliverySettings(self)->list:
"""
Return a list of delivery settings.
"""
path = f"/scheduler/scheduler/deliverysettings/"
params = {'expansion': 'definition',"limit" : 2000}
lastPage = False
page_nb = 0
data = []
while not lastPage:
params['page'] = page_nb
res = self.connector.getData(self.endpoint_company + path, params=params)
data += res.get('content',[])
if len(res.get('content',[]))==params["limit"]:
lastPage = False
else:
lastPage = True
page_nb += 1
return data
def getDeliverySetting(self,deliverySettingId:str=None)->dict:
"""
Retrieve the delivery setting from a scheduled project.
Argument:
deliverySettingId : REQUIRED : The delivery setting ID of the scheduled project.
"""
path = f"/scheduler/scheduler/deliverysettings/{deliverySettingId}/"
params = {'expansion': 'definition'}
res = self.connector.getData(self.endpoint_company + path, params=params)
return res
def createDeliverySetting(self,loginIds:list=None,emails:list=None,groupIds:list=None)->dict:
"""
Create a delivery setting for a specific scheduled project.
Automatically used when using `createScheduleJob`.
Arguments:
loginIds : REQUIRED : List of login ID to send the scheduled project to. Can be retrieved by the getUsers method.
emails : OPTIONAL : In case the recipients are not registered in the Analytics interface.
groupIds : OPTIONAL : List of group ID to send the scheduled project to.
"""
path = f"/scheduler/scheduler/deliverysettings/"
if loginIds is None:
loginIds = []
if emails is None:
emails = []
if groupIds is None:
groupIds = []
data = {
"definition" : {
"allAdmins" : False,
"emailAddresses" : emails,
"groupIds" : groupIds,
"loginIds": loginIds,
"type": "email"
},
"name" : "email-aanalytics2"
}
res = self.connector.postData(self.endpoint_company + path, data=data)
return res
def updateDeliverySetting(self,deliveryId:str=None,loginIds:list=None,emails:list=None,groupIds:list=None)->dict:
"""
Update a delivery setting for a specific scheduled project.
The setting is overwritten as an email delivery with the values passed.
Arguments:
deliveryId : REQUIRED : the delivery setting ID to be updated
loginIds : REQUIRED : List of login ID to send the scheduled project to. Can be retrieved by the getUsers method.
emails : OPTIONAL : In case the recipients are not registered in the Analytics interface.
groupIds : OPTIONAL : List of group ID to send the scheduled project to.
"""
if deliveryId is None:
raise ValueError("Require a delivery setting ID")
path = f"/scheduler/scheduler/deliverysettings/{deliveryId}"
if loginIds is None:
loginIds = []
if emails is None:
emails = []
if groupIds is None:
groupIds = []
data = {
"definition" : {
"allAdmins" : False,
"emailAddresses" : emails,
"groupIds" : groupIds,
"loginIds": loginIds,
"type": "email"
},
"name" : "email-aanalytics2"
}
res = self.connector.putData(self.endpoint_company + path, data=data)
return res
def deleteDeliverySetting(self,deliveryId:str=None)->dict:
"""
Delete a delivery setting based on the ID passed.
Arguments:
deliveryId : REQUIRED : The delivery setting ID to be deleted.
"""
if deliveryId is None:
raise ValueError("Require a delivery setting ID")
path = f"/scheduler/scheduler/deliverysettings/{deliveryId}"
res = self.connector.deleteData(self.endpoint_company + path)
return res
def getProjects(self, includeType: str = 'all', full: bool = False, limit: int = None, includeShared: bool = False,
includeTemplate: bool = False, format: str = 'df', cache:bool=False, save: bool = False) -> JsonListOrDataFrameType:
"""
Returns the list of projects through either a dataframe or a list.
Arguments:
includeType : OPTIONAL : type of projects to be retrieved.(str) Possible values:
- all : Default value (all projects possibles)
- shared : shared projects
full : OPTIONAL : if set to True, returns all information about projects.
limit : OPTIONAL : Limit the number of result returned.
includeShared : OPTIONAL : If full is set to False, you can retrieve only information about sharing.
includeTemplate: OPTIONAL : If full is set to False, you can add information about template here.
format : OPTIONAL : format of the output. 2 values "df" for dataframe (default) and "raw" for raw json.
cache : OPTIONAL : Boolean in case you want to cache the result in the "listProjectIds" attribute.
save : OPTIONAL : If set to True, it will save the info in a csv file (bool : default False)
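Example (sketch; assumes an authenticated instance `analytics`):
    projects = analytics.getProjects(includeType="all", full=True, format="df")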
"""
if self.loggingEnabled:
self.logger.debug(f"starting getProjects")
path = "/projects"
params = {"includeType": includeType}
if full:
params[
"expansion"] = 'reportSuiteName,ownerFullName,tags,shares,sharesFullName,modified,favorite,approved,companyTemplate,externalReferences,accessLevel'
else:
params["expansion"] = "ownerFullName,modified"
if includeShared:
params["expansion"] += ',shares,sharesFullName'
if includeTemplate:
params["expansion"] += ',companyTemplate'
if limit is not None:
params['limit'] = limit
if self.loggingEnabled:
self.logger.debug(f"params: {params}")
res = self.connector.getData(self.endpoint_company + path, params=params, headers=self.header)
if cache:
self.listProjectIds = res
if format == "raw":
if save:
with open('projects.json', 'w') as f:
f.write(json.dumps(res, indent=2))
return res
df = pd.DataFrame(res)
if not df.empty:
df['created'] = pd.to_datetime(df['created'], format='%Y-%m-%dT%H:%M:%SZ')
df['modified'] = pd.to_datetime(df['modified'], format='%Y-%m-%dT%H:%M:%SZ')
if save:
df.to_csv(f'projects_{int(time.time())}.csv', index=False)
return df
def getProject(self, projectId: str = None, projectClass: bool = False, rsidSuffix: bool = False, retry: int = 0, cache:bool=False, verbose: bool = False) -> Union[dict,Project]:
"""
Return the dictionary of the project information and its definition.
It will return a dictionary or a Project class.
The project detail will be saved as Project class in the projectsDetails class attribute.
Arguments:
projectId : REQUIRED : the project ID to be retrieved.
projectClass : OPTIONAL : if set to True. Returns a class of the project with prefiltered information
rsidSuffix : OPTIONAL : if set to True, returns the project class with rsid as a suffix to dimensions and metrics.
retry : OPTIONAL : If you want to retry the request if it fails. Specify number of retry (0 default)
cache : OPTIONAL : If you want to cache the result as Project class in the "projectsDetails" attribute.
verbose : OPTIONAL : If you wish to have logs of status
"""
if projectId is None:
raise Exception("Requires a projectId parameter")
params = {
'expansion': 'definition,ownerFullName,modified,favorite,approved,tags,shares,sharesFullName,reportSuiteName,companyTemplate,accessLevel'}
path = f"/projects/{projectId}"
if self.loggingEnabled:
self.logger.debug(f"starting getProject for {projectId}")
res = self.connector.getData(self.endpoint_company + path, params=params, headers=self.header,retry=retry, verbose=verbose)
if projectClass:
if self.loggingEnabled:
self.logger.info(f"building an instance of Project class")
myProject = Project(res,rsidSuffix=rsidSuffix)
return myProject
if cache:
if self.loggingEnabled:
self.logger.info(f"caching the project as Project class")
try:
self.projectsDetails[projectId] = Project(res)
except Exception:
if verbose:
print('WARNING : Cannot convert Project to Project class')
if self.loggingEnabled:
self.logger.warning(f"Cannot convert Project to Project class")
return res
def getAllProjectDetails(self, projects:JsonListOrDataFrameType=None, filterNameProject:str=None, filterNameOwner:str=None, useAttribute:bool=True, cache:bool=False, rsidSuffix:bool=False, output:str="dict", verbose:bool=False)->dict:
"""
Retrieve all projects details. You can either pass the list of dataframe returned from the getProjects methods and some filters.
Returns a dict of ProjectId and the value is the Project class for analysis.
Arguments:
projects : OPTIONAL : Takes the type of object returned from the getProjects (all data - not only the ID).
If None is provided and you never ran the getProjects method, we will call the getProjects method and retrieve the elements.
Otherwise you can pass either a limited list of elements that you want to check details for.
filterNameProject : OPTIONAL : If you want to retrieve project details for project with a specific string in their name.
filterNameOwner : OPTIONAL : If you want to retrieve project details for project with an owner having a specific name.
useAttribute : OPTIONAL : True by default, it will use the project list saved in the listProjectIds attribute.
Set it to False if you want to start the retrieval process of your projects from scratch.
rsidSuffix : OPTIONAL : If you want to add rsid as suffix of metrics and dimensions (::rsid)
cache : OPTIONAL : If you want to cache the different elements retrieved for future usage.
output : OPTIONAL : If you want to return a "list" or "dict" from this method. (default "dict")
verbose : OPTIONAL : Set to True to print information.
Not using filter may end up taking a while to retrieve the information.
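Example (sketch; assumes an authenticated instance `analytics`, a hypothetical owner name):
    details = analytics.getAllProjectDetails(filterNameOwner="Smith", cache=True, output="dict")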
"""
if self.loggingEnabled:
self.logger.debug(f"starting getAllProjectDetails")
## if no project data
if projects is None:
if self.loggingEnabled:
self.logger.debug(f"No projects passed")
if len(self.listProjectIds)>0 and useAttribute:
fullProjectIds = self.listProjectIds
else:
fullProjectIds = self.getProjects(format='raw',cache=cache)
## if project data is passed
elif projects is not None:
if self.loggingEnabled:
self.logger.debug(f"projects passed")
if isinstance(projects,pd.DataFrame):
fullProjectIds = projects.to_dict(orient='records')
elif isinstance(projects,list):
fullProjectIds = projects  # keep the full records; the filters below need 'name'/'owner' and ids are extracted later
if filterNameProject is not None:
if self.loggingEnabled:
self.logger.debug(f"filterNameProject passed")
fullProjectIds = [project for project in fullProjectIds if filterNameProject in project['name']]
if filterNameOwner is not None:
if self.loggingEnabled:
self.logger.debug(f"filterNameOwner passed")
fullProjectIds = [project for project in fullProjectIds if filterNameOwner in project['owner'].get('name','')]
if verbose:
print(f'{len(fullProjectIds)} project details to retrieve')
print(f"estimated time required : {int(len(fullProjectIds)/60)} minutes")
if self.loggingEnabled:
self.logger.debug(f'{len(fullProjectIds)} project details to retrieve')
projectIds = (project['id'] for project in fullProjectIds)
projectsDetails = {projectId:self.getProject(projectId,projectClass=True,rsidSuffix=rsidSuffix) for projectId in projectIds}
if filterNameProject is None and filterNameOwner is None:
self.projectsDetails = projectsDetails
if output == "list":
list_projectsDetails = [projectsDetails[key] for key in projectsDetails]
return list_projectsDetails
return projectsDetails
def deleteProject(self, projectId: str = None) -> dict:
"""
Delete the project specified by its ID.
Arguments:
projectId : REQUIRED : the project ID to be deleted.
"""
if self.loggingEnabled:
self.logger.debug(f"starting deleteProject")
if projectId is None:
raise Exception("Requires a projectId parameter")
path = f"/projects/{projectId}"
res = self.connector.deleteData(self.endpoint_company + path, headers=self.header)
return res
def validateProject(self,projectObj:dict = None)->dict:
"""
Validate a project definition based on the definition passed.
Arguments:
projectObj : REQUIRED : the dictionary that represents the Workspace definition.
requires the following elements: name,description,rsid, definition, owner
"""
if self.loggingEnabled:
self.logger.debug(f"starting validateProject")
if projectObj is None or type(projectObj) != dict:
raise Exception("Requires a projectObj data to be sent to the server.")
if 'project' in projectObj.keys():
rsid = projectObj['project'].get('rsid',None)
else:
rsid = projectObj.get('rsid',None)
projectObj = {'project':projectObj}
if rsid is None:
raise Exception("Could not find a rsid parameter in your project definition")
path = "/projects/validate"
params = {'rsid':rsid}
res = self.connector.postData(self.endpoint_company + path, data=projectObj, headers=self.header,params=params)
return res
def updateProject(self, projectId: str = None, projectObj: dict = None) -> dict:
"""
Update your project with the new object placed as parameter.
Arguments:
projectId : REQUIRED : the project ID to be updated.
projectObj : REQUIRED : the dictionary to replace the previous Workspace.
requires the following elements: name,description,rsid, definition, owner
"""
if self.loggingEnabled:
self.logger.debug(f"starting updateProject")
if projectId is None:
raise Exception("Requires a projectId parameter")
path = f"/projects/{projectId}"
if projectObj is None:
raise Exception("Requires a projectObj parameter")
if 'name' not in projectObj.keys():
raise KeyError("Requires name key in the project object")
if 'description' not in projectObj.keys():
raise KeyError("Requires description key in the project object")
if 'rsid' not in projectObj.keys():
raise KeyError("Requires rsid key in the project object")
if 'owner' not in projectObj.keys():
raise KeyError("Requires owner key in the project object")
if type(projectObj['owner']) != dict:
raise ValueError("Requires owner key to be a dictionary")
if 'definition' not in projectObj.keys():
raise KeyError("Requires definition key in the project object")
if type(projectObj['definition']) != dict:
raise ValueError("Requires definition key to be a dictionary")
res = self.connector.putData(self.endpoint_company + path, data=projectObj, headers=self.header)
return res
def createProject(self, projectObj: dict = None) -> dict:
"""
Create a project based on the definition you have set.
Arguments:
projectObj : REQUIRED : the dictionary to create a new Workspace.
requires the following elements: name,description,rsid, definition, owner
"""
if self.loggingEnabled:
self.logger.debug(f"starting createProject")
path = "/projects/"
if projectObj is None:
raise Exception("Requires a projectId parameter")
if 'name' not in projectObj.keys():
raise KeyError("Requires name key in the project object")
if 'description' not in projectObj.keys():
raise KeyError("Requires description key in the project object")
if 'rsid' not in projectObj.keys():
raise KeyError("Requires rsid key in the project object")
if 'owner' not in projectObj.keys():
raise KeyError("Requires owner key in the project object")
if type(projectObj['owner']) != dict:
raise ValueError("Requires owner key to be a dictionary")
if 'definition' not in projectObj.keys():
raise KeyError("Requires definition key in the project object")
if type(projectObj['definition']) != dict:
raise ValueError("Requires definition key to be a dictionary")
res = self.connector.postData(self.endpoint_company + path, data=projectObj, headers=self.header)
return res
def findComponentsUsage(self,components:list=None,
projectDetails:list=None,
segments:Union[list,pd.DataFrame]=None,
calculatedMetrics:Union[list,pd.DataFrame]=None,
recursive:bool=False,
regexUsed:bool=False,
verbose:bool=False,
resetProjectDetails:bool=False,
rsidSuffix:bool=False,
)->dict:
"""
Find the usage of components in the different part of Adobe Analytics setup.
Projects, Segment, Calculated metrics.
Arguments:
components : REQUIRED : list of component to look for.
Example : evar10,event1,prop3,segmentId, calculatedMetricsId
projectDetails: OPTIONAL : list of instances of Project class.
segments : OPTIONAL : If you wish to pass the segments to look into. (should contain definition)
calculatedMetrics : OPTIONAL : If you wish to pass the calculated metrics to look into. (should contain definition)
recursive : OPTIONAL : if set to True, will also find the references where the meta components are used.
segments based on your elements will also be searched to see where they are located.
regexUsed : OPTIONAL : If set to True, the elements are defined as regex and some default setup is turned off.
resetProjectDetails : OPTIONAL : Set to False by default. If set to True, it will NOT use the cache.
rsidSuffix : OPTIONAL : If you do not give projectDetails and you want to look for rsid usage in reports for dimensions and metrics.
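Example (sketch; hypothetical component IDs, assumes an authenticated instance `analytics`):
    usage = analytics.findComponentsUsage(components=["evar10", "event1", "s1234_5f0000000000000000000000"])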
"""
if components is None or type(components) != list:
raise ValueError("components must be present as a list")
if self.loggingEnabled:
self.logger.debug(f"starting findComponentsUsage for {components}")
listComponentProp = [comp for comp in components if 'prop' in comp]
listComponentVar = [comp for comp in components if 'evar' in comp]
listComponentEvent = [comp for comp in components if 'event' in comp]
listComponentSegs = [comp for comp in components if comp.startswith('s')]
listComponentCalcs = [comp for comp in components if comp.startswith('cm')]
restComponents = set(components) - set(listComponentProp+listComponentVar+listComponentEvent+listComponentSegs+listComponentCalcs)
listDefaultElements = [comp for comp in restComponents]
listRecusion = []
## regex guards to avoid partial matches
regPartSeg = r"('|\.)" ## ensure to not catch evar100 for evar10
regPartProj = r"($|\.|\::)" ## ensure to not catch evar100 for evar10
if regexUsed:
if self.loggingEnabled:
self.logger.debug(f"regex is used")
regPartSeg = ""
regPartProj = ""
## Segments
if verbose:
print('retrieving segments')
if self.loggingEnabled:
self.logger.debug(f"retrieving segments")
if len(self.segments) == 0 and segments is None:
self.segments = self.getSegments(extended_info=True)
mySegments = self.segments
elif len(self.segments) > 0 and segments is None:
mySegments = self.segments
elif segments is not None:
if type(segments) == list:
mySegments = pd.DataFrame(segments)
elif type(segments) == pd.DataFrame:
mySegments = segments
else:
mySegments = segments
### Calculated Metrics
if verbose:
print('retrieving calculated metrics')
if self.loggingEnabled:
self.logger.debug(f"retrieving calculated metrics")
if len(self.calculatedMetrics) == 0 and calculatedMetrics is None:
self.calculatedMetrics = self.getCalculatedMetrics(extended_info=True)
myMetrics = self.calculatedMetrics
elif len(self.calculatedMetrics) > 0 and calculatedMetrics is None:
myMetrics = self.calculatedMetrics
elif calculatedMetrics is not None:
if type(calculatedMetrics) == list:
myMetrics = pd.DataFrame(calculatedMetrics)
elif type(calculatedMetrics) == pd.DataFrame:
myMetrics = calculatedMetrics
else:
myMetrics = calculatedMetrics
### Projects
if (len(self.projectsDetails) == 0 and projectDetails is None) or resetProjectDetails:
if self.loggingEnabled:
self.logger.debug(f"retrieving projects details")
self.projectsDetails = self.getAllProjectDetails(verbose=verbose,rsidSuffix=rsidSuffix)
myProjectDetails = (self.projectsDetails[key].to_dict() for key in self.projectsDetails)
elif len(self.projectsDetails) > 0 and projectDetails is None and resetProjectDetails==False:
if self.loggingEnabled:
self.logger.debug(f"transforming projects details")
myProjectDetails = (self.projectsDetails[key].to_dict() for key in self.projectsDetails)
elif projectDetails is not None:
if self.loggingEnabled:
self.logger.debug(f"setting the project details")
if isinstance(projectDetails[0],Project):
myProjectDetails = (item.to_dict() for item in projectDetails)
elif isinstance(projectDetails[0],dict):
myProjectDetails = (Project(item).to_dict() for item in projectDetails)
else:
raise Exception("Project details were not able to be processed")
teeProjects:tuple = tee(myProjectDetails) ## duplicating the project generator for recursive pass (low memory - intensive computation)
returnObj = {element : {'segments':[],'calculatedMetrics':[],'projects':[]} for element in components}
recurseObj = defaultdict(list)
if verbose:
print('search started')
print(f'recursive option : {recursive}')
print('start looking into segments')
if self.loggingEnabled:
self.logger.debug(f"Analyzing segments")
for _,seg in mySegments.iterrows():
for prop in listComponentProp:
if re.search(f"{prop+regPartSeg}",str(seg['definition'])):
returnObj[prop]['segments'].append({seg['name']:seg['id']})
if recursive:
                        listRecursion.append(seg['id'])
for var in listComponentVar:
if re.search(f"{var+regPartSeg}",str(seg['definition'])):
returnObj[var]['segments'].append({seg['name']:seg['id']})
if recursive:
                        listRecursion.append(seg['id'])
for event in listComponentEvent:
if re.search(f"{event}'",str(seg['definition'])):
returnObj[event]['segments'].append({seg['name']:seg['id']})
if recursive:
                        listRecursion.append(seg['id'])
for element in listDefaultElements:
if re.search(f"{element}",str(seg['definition'])):
returnObj[element]['segments'].append({seg['name']:seg['id']})
if recursive:
                        listRecursion.append(seg['id'])
if self.loggingEnabled:
self.logger.debug(f"Analyzing calculated metrics")
if verbose:
print('start looking into calculated metrics')
for _,met in myMetrics.iterrows():
for prop in listComponentProp:
if re.search(f"{prop+regPartSeg}",str(met['definition'])):
returnObj[prop]['calculatedMetrics'].append({met['name']:met['id']})
if recursive:
                        listRecursion.append(met['id'])
for var in listComponentVar:
if re.search(f"{var+regPartSeg}",str(met['definition'])):
returnObj[var]['calculatedMetrics'].append({met['name']:met['id']})
if recursive:
                        listRecursion.append(met['id'])
for event in listComponentEvent:
if re.search(f"{event}'",str(met['definition'])):
returnObj[event]['calculatedMetrics'].append({met['name']:met['id']})
if recursive:
                        listRecursion.append(met['id'])
for element in listDefaultElements:
if re.search(f"{element}'",str(met['definition'])):
returnObj[element]['calculatedMetrics'].append({met['name']:met['id']})
if recursive:
                        listRecursion.append(met['id'])
if verbose:
print('start looking into projects')
if self.loggingEnabled:
self.logger.debug(f"Analyzing projects")
for proj in teeProjects[0]:
## mobile reports don't have dimensions.
if proj['reportType'] == "desktop":
for prop in listComponentProp:
for element in proj['dimensions']:
if re.search(f"{prop+regPartProj}",element):
returnObj[prop]['projects'].append({proj['name']:proj['id']})
for var in listComponentVar:
for element in proj['dimensions']:
if re.search(f"{var+regPartProj}",element):
returnObj[var]['projects'].append({proj['name']:proj['id']})
for event in listComponentEvent:
for element in proj['metrics']:
if re.search(f"{event}",element):
returnObj[event]['projects'].append({proj['name']:proj['id']})
for seg in listComponentSegs:
for element in proj.get('segments',[]):
if re.search(f"{seg}",element):
returnObj[seg]['projects'].append({proj['name']:proj['id']})
for met in listComponentCalcs:
for element in proj.get('calculatedMetrics',[]):
if re.search(f"{met}",element):
returnObj[met]['projects'].append({proj['name']:proj['id']})
for element in listDefaultElements:
for met in proj['calculatedMetrics']:
if re.search(f"{element}",met):
returnObj[element]['projects'].append({proj['name']:proj['id']})
for dim in proj['dimensions']:
if re.search(f"{element}",dim):
returnObj[element]['projects'].append({proj['name']:proj['id']})
for rsid in proj['rsids']:
if re.search(f"{element}",rsid):
returnObj[element]['projects'].append({proj['name']:proj['id']})
for event in proj['metrics']:
if re.search(f"{element}",event):
returnObj[element]['projects'].append({proj['name']:proj['id']})
if recursive:
if verbose:
print('start looking into recursive elements')
if self.loggingEnabled:
self.logger.debug(f"recursive option checked")
for proj in teeProjects[1]:
                for rec in listRecursion:
for element in proj.get('segments',[]):
if re.search(f"{rec}",element):
recurseObj[rec].append({proj['name']:proj['id']})
for element in proj.get('calculatedMetrics',[]):
if re.search(f"{rec}",element):
recurseObj[rec].append({proj['name']:proj['id']})
if recursive:
returnObj['recursion'] = recurseObj
if verbose:
print('done')
return returnObj
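    # Illustrative sketch (not executed): a hedged example of calling findComponentsUsage.
    # "ags" is a hypothetical authenticated instance of this class and the component
    # IDs below are placeholders.
    #
    #   usage = ags.findComponentsUsage(
    #       components=['evar10', 'prop3', 'event5'],
    #       recursive=True,
    #   )
    #   usage['evar10']['segments']   # -> [{segment name: segment id}, ...]
    #   usage['recursion']            # projects that use the segments found above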
def getUsageLogs(self,
startDate:str=None,
endDate:str=None,
eventType:str=None,
event:str=None,
rsid:str=None,
login:str=None,
ip:str=None,
limit:int=100,
max_result:int=None,
format:str="df",
verbose:bool=False,
**kwargs)->dict:
"""
Returns the Audit Usage Logs from your company analytics setup.
Arguments:
            startDate : REQUIRED : Start date, format: 2020-12-01T00:00:00-07. (default: 60 days before today)
            endDate : REQUIRED : End date, format: 2020-12-15T14:32:33-07. (default: today)
                The period between startDate and endDate should be at most 3 months.
eventType : OPTIONAL : The numeric id for the event type you want to filter logs by.
Please reference the lookup table in the LOGS_EVENT_TYPE
event : OPTIONAL : The event description you want to filter logs by.
No wildcards are permitted, but this filter is case insensitive and supports partial matches.
rsid : OPTIONAL : ReportSuite ID to filter on.
login : OPTIONAL : The login value of the user you want to filter logs by. This filter functions as an exact match.
ip : OPTIONAL : The IP address you want to filter logs by. This filter supports a partial match.
limit : OPTIONAL : Number of results per page.
            max_result : OPTIONAL : Maximum number of results to retrieve, if you want to cap the process. Example: max_result=1000
format : OPTIONAL : If you wish to have a DataFrame ("df" - default) or list("raw") as output.
verbose : OPTIONAL : Set it to True if you want to have console info.
possible kwargs:
page : page number (default 0)
"""
if self.loggingEnabled:
self.logger.debug(f"starting getUsageLogs")
import datetime
now = datetime.datetime.now()
if startDate is None:
startDate = datetime.datetime.isoformat(now - datetime.timedelta(days=60)).split('.')[0]
if endDate is None:
endDate = datetime.datetime.isoformat(now).split('.')[0]
path = "/auditlogs/usage"
params = {"page":kwargs.get('page',0),"limit":limit,"startDate":startDate,"endDate":endDate}
if eventType is not None:
params['eventType'] = eventType
if event is not None:
params['event'] = event
if rsid is not None:
params['rsid'] = rsid
if login is not None:
params['login'] = login
if ip is not None:
params['ip'] = ip
if self.loggingEnabled:
self.logger.debug(f"params: {params}")
res = self.connector.getData(self.endpoint_company + path, params=params,verbose=verbose)
data = res['content']
lastPage = res['lastPage']
        while not lastPage:
params["page"] += 1
res = self.connector.getData(self.endpoint_company + path, params=params,verbose=verbose)
data += res['content']
lastPage = res['lastPage']
if max_result is not None:
if len(data) >= max_result:
lastPage = True
if format == "df":
df = pd.DataFrame(data)
return df
return data
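    # Illustrative sketch (not executed): pulling usage logs for the default window
    # (last 60 days) as a DataFrame. "ags" is a hypothetical instance of this class.
    #
    #   logs = ags.getUsageLogs(limit=500, max_result=5000, format="df")
    #   logs.head()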
def getTopItems(self,rsid:str=None,dimension:str=None,dateRange:str=None,searchClause:str=None,lookupNoneValues:bool = True,limit:int=10,verbose:bool=False,**kwargs)->object:
"""
Returns the top items of a request.
Arguments:
rsid : REQUIRED : ReportSuite ID of the data
dimension : REQUIRED : The dimension to retrieve
dateRange : OPTIONAL : Format YYYY-MM-DD/YYYY-MM-DD (default 90 days)
searchClause : OPTIONAL : General search string; wrap with single quotes. Example: 'PageABC'
lookupNoneValues : OPTIONAL : None values to be included (default True)
limit : OPTIONAL : Number of items to be returned per page.
verbose : OPTIONAL : If you want to have comments displayed (default False)
possible kwargs:
page : page to look for
startDate : start date with format YYYY-MM-DD
endDate : end date with format YYYY-MM-DD
searchAnd, searchOr, searchNot, searchPhrase : Search element to be included (or not), partial match or not.
"""
if self.loggingEnabled:
self.logger.debug(f"starting getTopItems")
path = "/reports/topItems"
page = kwargs.get("page",0)
if rsid is None:
raise ValueError("Require a reportSuite ID")
if dimension is None:
raise ValueError("Require a dimension")
params = {"rsid" : rsid, "dimension":dimension,"lookupNoneValues":lookupNoneValues,"limit":limit,"page":page}
if searchClause is not None:
params["search-clause"] = searchClause
if dateRange is not None and '/' in dateRange:
params["dateRange"] = dateRange
if kwargs.get('page',None) is not None:
params["page"] = kwargs.get('page')
if kwargs.get("startDate",None) is not None:
params["startDate"] = kwargs.get("startDate")
if kwargs.get("endDate",None) is not None:
params["endDate"] = kwargs.get("endDate")
if kwargs.get("searchAnd", None) is not None:
params["searchAnd"] = kwargs.get("searchAnd")
if kwargs.get("searchOr",None) is not None:
params["searchOr"] = kwargs.get("searchOr")
if kwargs.get("searchNot",None) is not None:
params["searchNot"] = kwargs.get("searchNot")
if kwargs.get("searchPhrase",None) is not None:
params["searchPhrase"] = kwargs.get("searchPhrase")
last_page = False
if verbose:
print('Starting to fetch the data...')
data = []
while not last_page:
if verbose:
print(f'request page : {page}')
res = self.connector.getData(self.endpoint_company+path,params=params)
last_page = res.get("lastPage",True)
data += res["rows"]
page += 1
params["page"] = page
df = pd.DataFrame(data)
return df
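    # Illustrative sketch (not executed): retrieving the top pages of a report suite.
    # The rsid value is a placeholder.
    #
    #   top_pages = ags.getTopItems(rsid="myrsid", dimension="page", limit=50)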
def getAnnotations(self,full:bool=True,includeType:str='all',limit:int=1000,page:int=0)->list:
"""
Returns a list of the available annotations
Arguments:
            full : OPTIONAL : If set to True (default), returns all available information for each annotation.
            includeType : OPTIONAL : use to return only "shared" or "all" (default) annotations available.
limit : OPTIONAL : number of result per page (default 1000)
page : OPTIONAL : page used for pagination
"""
params = {"includeType":includeType,"page":page}
if full:
params['expansion'] = "name,description,dateRange,color,applyToAllReports,scope,createdDate,modifiedDate,modifiedById,tags,shares,approved,favorite,owner,usageSummary,companyId,reportSuiteName,rsid"
path = f"/annotations"
lastPage = False
data = []
        while not lastPage:
res = self.connector.getData(self.endpoint_company + path,params=params)
data += res.get('content',[])
lastPage = res.get('lastPage',True)
params['page'] += 1
return data
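    # Illustrative sketch (not executed): listing only the shared annotations.
    #
    #   annotations = ags.getAnnotations(full=True, includeType="shared")
    #   names = [annot.get("name") for annot in annotations]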
def getAnnotation(self,annotationId:str=None)->dict:
"""
Return a specific annotation definition.
Arguments:
annotationId : REQUIRED : The annotation ID
"""
if annotationId is None:
raise ValueError("Require an annotation ID")
path = f"/annotations/{annotationId}"
params ={
"expansion" : "name,description,dateRange,color,applyToAllReports,scope,createdDate,modifiedDate,modifiedById,tags,shares,approved,favorite,owner,usageSummary,companyId,reportSuiteName,rsid"
}
res = self.connector.getData(self.endpoint_company + path,params=params)
return res
def deleteAnnotation(self,annotationId:str=None)->dict:
"""
Delete a specific annotation definition.
Arguments:
annotationId : REQUIRED : The annotation ID to be deleted
"""
if annotationId is None:
raise ValueError("Require an annotation ID")
path = f"/annotations/{annotationId}"
res = self.connector.deleteData(self.endpoint_company + path)
return res
def createAnnotation(self,
name:str=None,
dateRange:str=None,
rsid:str=None,
metricIds:list=None,
dimensionObj:list=None,
description:str=None,
filterIds:list=None,
applyToAllReports:bool=False,
**kwargs)->dict:
"""
Create an Annotation.
Arguments:
name : REQUIRED : Name of the annotation
dateRange : REQUIRED : Date range of the annotation to be used.
Example: 2022-04-19T00:00:00/2022-04-19T23:59:59
rsid : REQUIRED : ReportSuite ID
metricIds : OPTIONAL : List of metrics ID to be annotated
filterIds : OPTIONAL : List of Segments ID to apply for annotation for context.
            dimensionObj : OPTIONAL : List of dimension object specifications:
                {
                    "componentType": "dimension",
                    "dimensionType": "string",
                    "id": "variables/product",
                    "operator": "streq",
                    "terms": ["unknown"]
                }
applyToAllReports : OPTIONAL : If the annotation apply to all ReportSuites.
possible kwargs:
colors: Color to be used, examples: "STANDARD1"
shares: List of userId for sharing the annotation
tags: List of tagIds to be applied
favorite: boolean to set the annotation as favorite (false by default)
approved: boolean to set the annotation as approved (false by default)
"""
path = f"/annotations"
if name is None:
raise ValueError("A name must be specified")
if dateRange is None:
raise ValueError("A dateRange must be specified")
if rsid is None:
raise ValueError("a master ReportSuite ID must be specified")
description = description or "api generated"
data = {
"name": name,
"description": description,
"dateRange": dateRange,
"color": kwargs.get('colors',"STANDARD1"),
"applyToAllReports": applyToAllReports,
"scope": {
"metrics":[],
"filters":[]
},
"tags": [],
"approved": kwargs.get('approved',False),
"favorite": kwargs.get('favorite',False),
"rsid": rsid
}
        if metricIds is not None and type(metricIds) == list:
            for metric in metricIds:
                data['scope']['metrics'].append({
                    "id": metric,
                    "componentType": "metric"
                })
        if filterIds is not None and type(filterIds) == list:
            for filterId in filterIds:
                data['scope']['filters'].append({
                    "id": filterId,
                    "componentType": "segment"
                })
        if dimensionObj is not None and type(dimensionObj) == list:
            for obj in dimensionObj:
                data['scope']['filters'].append(obj)
if kwargs.get("shares",None) is not None:
data['shares'] = []
for user in kwargs.get("shares",[]):
data['shares'].append({
"shareToId" : user,
"shareToType":"user"
})
if kwargs.get('tags',None) is not None:
for tag in kwargs.get('tags'):
res = self.getTag(tag)
data['tags'].append({
"id":tag,
"name":res['name']
})
res = self.connector.postData(self.endpoint_company + path,data=data)
return res
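    # Illustrative sketch (not executed): creating an annotation scoped to one metric.
    # All IDs and names below are hypothetical placeholders.
    #
    #   res = ags.createAnnotation(
    #       name="Campaign launch",
    #       dateRange="2022-04-19T00:00:00/2022-04-19T23:59:59",
    #       rsid="myrsid",
    #       metricIds=["metrics/orders"],
    #   )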
def updateAnnotation(self,annotationId:str=None,annotationObj:dict=None)->dict:
"""
Update an annotation based on its ID. PUT method.
Arguments:
annotationId : REQUIRED : The annotation ID to be updated
annotationObj : REQUIRED : The object to replace the annotation.
"""
if annotationObj is None or type(annotationObj) != dict:
raise ValueError('Require a dictionary representing the annotation definition')
if annotationId is None:
raise ValueError('Require the annotation ID')
path = f"/annotations/{annotationId}"
res = self.connector.putData(self.endpoint_company+path,data=annotationObj)
return res
# def getDataWarehouseReports(self,reportSuite:str=None,reportName:str=None,deliveryUUID:str=None,status:str=None,
# ScheduledRequestUUID:str=None,limit:int=1000)-> dict:
# """
# Get all DW reports that matched filter parameters.
# Arguments:
# reportSuite : OPTIONAL : The name of the reportSuite
# reportName : OPTIONAL : The name of the report
# deliveryUUID : OPTIONAL : the UUID generated for that report
# status : OPTIONAL : Status of the report Generation, can be any of [COMPLETED, CANCELED, ERROR_DELIVERY, ERROR_PROCESSING, CREATED, PROCESSING, PENDING]
    #         scheduledRequestUUID : OPTIONAL : The scheduled report UUID generated by this report
# limit : OPTIONAL : Maximum amount of data returned
# """
# path = '/data_warehouse/report'
# params = {"limit":limit}
# if reportSuite is not None:
# params['ReportSuite'] = reportSuite
# if reportName is not None:
# params['ReportName'] = reportName
# if deliveryUUID is not None:
# params['DeliveryProfileUUID'] = deliveryUUID
# if status is not None and status in ["COMPLETED", "CANCELED", "ERROR_DELIVERY", "ERROR_PROCESSING", "CREATED", "PROCESSING", "PENDING"]:
# params["Status"] = status
# if ScheduledRequestUUID is not None:
# params['ScheduledRequestUUID'] = ScheduledRequestUUID
# res = self.connector.getData('https://analytics.adobe.io/api' + path,params=params)
# return res
# def getDataWarehouseReport(self,reportUUID:str=None)-> dict:
# """
# Return a single report information out of the report UUID.
# Arguments:
# reportUUID : REQUIRED : the report UUID
# """
# if reportUUID is None:
# raise ValueError("Require a report UUID")
# path = f'/data_warehouse/report/{reportUUID}'
# res = self.connector.getData('https://analytics.adobe.io/api' + path)
# return res
# def getDataWarehouseRequests(self,reportSuite:str=None,reportName:str=None,status:str=None,limit:int=1000)-> dict:
# """
# Get all DW requests that matched filter parameters.
# Arguments:
# reportSuite : OPTIONAL : The name of the reportSuite
# reportName : OPTIONAL : The name of the report
# status : OPTIONAL : Status of the report Generation, can be any of [COMPLETED, CANCELED, ERROR_DELIVERY, ERROR_PROCESSING, CREATED, PROCESSING, PENDING]
    #         scheduledRequestUUID : OPTIONAL : The scheduled report UUID generated by this report
# limit : OPTIONAL : Maximum amount of data returned
# """
# path = '/data_warehouse/scheduled'
# params = {"limit":limit}
# if reportSuite is not None:
# params['ReportSuite'] = reportSuite
# if reportName is not None:
# params['ReportName'] = reportName
# if status is not None and status in ["COMPLETED", "CANCELED", "ERROR_DELIVERY", "ERROR_PROCESSING", "CREATED", "PROCESSING", "PENDING"]:
# params["Status"] = status
# res = self.connector.getData('https://analytics.adobe.io/api'+ path,params=params)
# return res
# def getDataWarehouseRequest(self,scheduleUUID:str=None)-> dict:
# """
# Return a single request information out of the schedule UUID.
# Arguments:
# scheduleUUID : REQUIRED : the schedule UUID
# """
# if scheduleUUID is None:
# raise ValueError("Require a report UUID")
# path = f'/data_warehouse/scheduled/{scheduleUUID}'
# res = self.connector.getData('https://analytics.adobe.io' + path)
# return res
# def createDataWarehouseRequest(self,
# requestDict:dict=None,
# reportName:str=None,
# login:str=None,
# emails:list=None,
# emailNote:str=None,
# )->dict:
# """
# Create a Data Warehouse request based on either the dictionary provided or the parameters filled.
# Arguments:
# requestDict : OPTIONAL : The complete dictionary definition for a datawarehouse export.
# If not provided, require the other parameters to be used.
# reportName : OPTIONAL : The name of the report
# login : OPTIONAL : The login Id of the user
# emails : OPTIONAL : List of emails for notification. example : ['[email protected]']
# dimensions : OPTIONAL : List of dimensions to use, example : ['prop1']
# metrics : OPTIONAL : List of metrics to use, example : ['event1','event2']
# segments : OPTIONAL : List of segments to use, example : ['seg1','seg2']
# dateGranularity : OPTIONAL :
# reportPeriod : OPTIONAL :
# emailNote : OPTIONAL : Note for the email
# """
# f'/data_warehouse/scheduled/'
# def getDataWarehouseDeliveryAccounts(self)->dict:
# """
# Get All delivery Account used by a company.
# """
# path = f'/data_warehouse/delivery/account'
# res = self.connector.getData('https://analytics.adobe.io'+path)
# return res
# def getDataWarehouseDeliveryProfile(self)->dict:
# """
# Get all Delivery Profile for a given global company id
# """
# path = f'/data_warehouse/delivery/profile'
# res = self.connector.getData('https://analytics.adobe.io'+path)
# return res
def compareReportSuites(self,listRsids:list=None,element:str='dimensions',comparison:str="full",save: bool=False)->pd.DataFrame:
"""
        Compare report suites on dimensions (default) or metrics, based on the comparison selected.
        Returns a dataframe with a multi-index and a column telling which elements are different.
Arguments:
listRsids : REQUIRED : list of report suite ID to compare
element : REQUIRED : Elements to compare. 2 possible choices:
dimensions (default)
metrics
comparison : REQUIRED : Type of comparison to do:
full (default) : compare name and settings
name : compare only names
save : OPTIONAL : if you want to save in a csv.
"""
if self.loggingEnabled:
self.logger.debug(f"starting compareReportSuites")
if listRsids is None or type(listRsids) != list:
raise ValueError("Require a list of rsids")
if element=="dimensions":
if self.loggingEnabled:
self.logger.debug(f"dimensions selected")
listDFs = [self.getDimensions(rsid,full=True) for rsid in listRsids]
elif element == "metrics":
listDFs = [self.getMetrics(rsid,full=True) for rsid in listRsids]
if self.loggingEnabled:
self.logger.debug(f"metrics selected")
for df,rsid in zip(listDFs, listRsids):
df['rsid']=rsid
df.set_index('id',inplace=True)
df.set_index('rsid',append=True,inplace=True)
df = pd.concat(listDFs)
df = df.unstack()
if comparison=='name':
df_name = df['name'].copy()
## transforming to a new df with boolean value comparison to col 0
temp_df = df_name.eq(df_name.iloc[:, 0], axis=0)
## now doing a complete comparison of all boolean with all
df_name['different'] = ~temp_df.eq(temp_df.iloc[:,0],axis=0).all(1)
if save:
df_name.to_csv(f'comparison_name_{int(time.time())}.csv')
if self.loggingEnabled:
self.logger.debug(f'Name only comparison, file : comparison_name_{int(time.time())}.csv')
return df_name
## retrieve main indexes from multi level indexes
mainIndex = set([val[0] for val in list(df.columns)])
dict_temp = {}
for index in mainIndex:
temp_df = df[index].copy()
temp_df.fillna('',inplace=True)
## transforming to a new df with boolean value comparison to col 0
temp_df.eq(temp_df.iloc[:, 0], axis=0)
## now doing a complete comparison of all boolean with all
dict_temp[index] = list(temp_df.eq(temp_df.iloc[:,0],axis=0).all(1))
df_bool = pd.DataFrame(dict_temp)
df['different'] = list(~df_bool.eq(df_bool.iloc[:,0],axis=0).all(1))
if save:
df.to_csv(f'comparison_full_{element}_{int(time.time())}.csv')
if self.loggingEnabled:
self.logger.debug(f'Full comparison, file : comparison_full_{element}_{int(time.time())}.csv')
return df
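    # Illustrative sketch (not executed): comparing dimension names across two suites.
    # The rsid values are placeholders.
    #
    #   diff = ags.compareReportSuites(["rsid1", "rsid2"], element="dimensions", comparison="name")
    #   diff[diff["different"]]  # rows where the report suites diverge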
def shareComponent(self, componentId: str = None, componentType: str = None, shareToId: int = None,
shareToImsId: int = None, shareToType: str = None, shareToLogin: str = None,
accessLevel: str = None, shareFromImsId: str = None) -> dict:
"""
        Shares a component with an individual user or a group (product profile ID).
Returns the JSON response from the API.
Arguments:
componentId : REQUIRED : The component ID to share.
componentType : REQUIRED : The component Type ("calculatedMetric", "segment", "project", "dateRange")
shareToId: ID of the user or the group to share to
shareToImsId: IMS ID of the user to share to (alternative to ID)
shareToLogin: Login of the user to share to (alternative to ID)
shareToType: "group" => share to a group (product profile), "user" => share to a user, "all" => share to all users (in this case, no shareToId or shareToImsId is needed)
"""
if self.loggingEnabled:
self.logger.debug(f"Starting to share component ID {componentId} with parameters: {locals()}")
path = f"/componentmetadata/shares/"
data = {
"accessLevel": accessLevel,
"componentId": componentId,
"componentType": componentType,
"shareToId": shareToId,
"shareToImsId": shareToImsId,
"shareToLogin": shareToLogin,
"shareToType": shareToType
}
res = self.connector.postData(self.endpoint_company + path, data=data)
return res
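    # Illustrative sketch (not executed): sharing a segment with a single user.
    # The IDs and the accessLevel value are hypothetical placeholders.
    #
    #   res = ags.shareComponent(
    #       componentId="s1234_abcdef", componentType="segment",
    #       shareToType="user", shareToId=200123, accessLevel="view",
    #   )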
def _dataDescriptor(self, json_request: dict):
"""
read the request and returns an object with information about the request.
It will be used in order to build the dataclass and the dataframe.
"""
if self.loggingEnabled:
self.logger.debug(f"starting _dataDescriptor")
obj = {}
if json_request.get('dimension',None) is not None:
obj['dimension'] = json_request.get('dimension')
obj['filters'] = {'globalFilters': [], 'metricsFilters': {}}
obj['rsid'] = json_request['rsid']
metrics_info = json_request['metricContainer']
obj['metrics'] = [metric['id'] for metric in metrics_info['metrics']]
if 'metricFilters' in metrics_info.keys():
metricsFilter = {metric['id']: metric['filters'] for metric in metrics_info['metrics'] if
len(metric.get('filters', [])) > 0}
filters = []
for metric in metricsFilter:
for item in metricsFilter[metric]:
if 'segmentId' in metrics_info['metricFilters'][int(item)].keys():
filters.append(
metrics_info['metricFilters'][int(item)]['segmentId'])
if 'dimension' in metrics_info['metricFilters'][int(item)].keys():
filters.append(
metrics_info['metricFilters'][int(item)]['dimension'])
obj['filters']['metricsFilters'][metric] = set(filters)
for fil in json_request['globalFilters']:
if 'dateRange' in fil.keys():
obj['filters']['globalFilters'].append(fil['dateRange'])
if 'dimension' in fil.keys():
obj['filters']['globalFilters'].append(fil['dimension'])
if 'segmentId' in fil.keys():
obj['filters']['globalFilters'].append(fil['segmentId'])
return obj
def _readData(
self,
data_rows: list,
anomaly: bool = False,
cols: list = None,
item_id: bool = False
) -> pd.DataFrame:
"""
        Read the data from the request results and return a dataframe.
        Parameters:
            data_rows : REQUIRED : Rows that have been returned by the request.
            anomaly : OPTIONAL : Boolean to tell if the anomaly detection has been used.
            cols : OPTIONAL : list of column names
            item_id : OPTIONAL : Boolean to tell if the itemId should be appended to the data.
"""
if self.loggingEnabled:
self.logger.debug(f"starting _readData")
if cols is None:
raise ValueError("list of columns must be specified")
data_rows = deepcopy(data_rows)
dict_data = {row.get('value', 'missing_value'): row['data'] for row in data_rows}
if cols is not None:
n_metrics = len(cols) - 1
if item_id: # adding the itemId in the data returned
cols.append('item_id')
for row in data_rows:
dict_data[row.get('value', 'missing_value')].append(row['itemId'])
if anomaly:
# set full columns
cols = cols + [f'{metric}-{suffix}' for metric in cols[1:] for suffix in
['expected', 'UpperBound', 'LowerBound']]
# add data to the dictionary
for row in data_rows:
for item in range(n_metrics):
dict_data[row['value']].append(
row.get('dataExpected', [0 for i in range(n_metrics)])[item])
dict_data[row['value']].append(
row.get('dataUpperBound', [0 for i in range(n_metrics)])[item])
dict_data[row['value']].append(
row.get('dataLowerBound', [0 for i in range(n_metrics)])[item])
df = pd.DataFrame(dict_data).T # require to transform the data
df.reset_index(inplace=True, )
df.columns = cols
return df
def getReport(
self,
json_request: Union[dict, str, IO,RequestCreator],
limit: int = 1000,
n_results: Union[int, str] = 1000,
save: bool = False,
item_id: bool = False,
unsafe: bool = False,
verbose: bool = False,
debug=False,
**kwargs,
) -> object:
"""
        Retrieve data from a JSON request. Returns an object containing meta info and a dataframe.
Arguments:
json_request: REQUIRED : JSON statement that contains your request for Analytics API 2.0.
The argument can be :
- a dictionary : It will be used as it is.
- a string that is a dictionary : It will be transformed to a dictionary / JSON.
- a path to a JSON file that contains the statement (must end with ".json").
- an instance of the RequestCreator class
            limit : OPTIONAL : number of results per request (default 1000)
n_results : OPTIONAL : Number of result that you would like to retrieve. (default 1000)
if you want to have all possible data, use "inf".
item_id : OPTIONAL : Boolean to define if you want to return the item id for sub requests (default False)
unsafe : OPTIONAL : If set to True, it will not check "lastPage" parameter and assume first request is complete.
This may break the script or return incomplete data. (default False).
save : OPTIONAL : If you would like to save the data within a CSV file. (default False)
verbose : OPTIONAL : If you want to have comments displayed (default False)
"""
if unsafe and verbose:
print('---- running the getReport in "unsafe" mode ----')
obj = {}
if isinstance(json_request,RequestCreator):
request = json_request.to_dict()
elif type(json_request) == dict:
request = json_request
elif type(json_request) == str and '.json' not in json_request:
try:
request = json.loads(json_request)
except:
raise TypeError("expected a parsable string")
elif '.json' in json_request:
try:
with open(Path(json_request), 'r') as file:
file_string = file.read()
request = json.loads(file_string)
except:
raise TypeError("expected a parsable string")
request['settings']['limit'] = limit
# info for creating report
data_info = self._dataDescriptor(request)
if verbose:
print('Request decrypted')
obj.update(data_info)
anomaly = request['settings'].get('includeAnomalyDetection', False)
columns = [data_info['dimension']] + data_info['metrics']
# preparing for the loop
# in case "inf" has been used. Turn it to a number
n_results = kwargs.get('n_result',n_results)
n_results = float(n_results)
        if n_results != float('inf') and n_results < request['settings']['limit']:
            # making sure we don't request more than needed
            request['settings']['limit'] = int(n_results)
data_list = []
last_page = False
page_nb, count_elements, total_elements = 0, 0, 0
if verbose:
print('Starting to fetch the data...')
while not last_page:
timestamp = round(time.time())
request['settings']['page'] = page_nb
report = self.connector.postData(self.endpoint_company +
self._getReport, data=request, headers=self.header)
if verbose:
print('Data received.')
# Recursion to take care of throttling limit
while report.get('status_code', 200) == 429 or report.get('error_code',None) == "429050":
if verbose:
print('reaching the limit : pause for 50 s and entering recursion.')
if debug:
with open(f'limit_reach_{timestamp}.json', 'w') as f:
f.write(json.dumps(report, indent=4))
time.sleep(50)
report = self.connector.postData(self.endpoint_company +
self._getReport, data=request, headers=self.header)
if 'lastPage' not in report and unsafe == False: # checking error when no lastPage key in report
if verbose:
print(json.dumps(report, indent=2))
print('Warning : Server Error')
print(json.dumps(report))
if debug:
with open(f'server_failure_request_{timestamp}.json', 'w') as f:
f.write(json.dumps(request, indent=4))
with open(f'server_failure_response_{timestamp}.json', 'w') as f:
f.write(json.dumps(report, indent=4))
print(
f'Warning : Save JSON request : server_failure_request_{timestamp}.json')
print(
f'Warning : Save JSON response : server_failure_response_{timestamp}.json')
obj['data'] = pd.DataFrame()
return obj
# fallback when no lastPage in report
last_page = report.get('lastPage', True)
if verbose:
print(f'last page status : {last_page}')
if 'errorCode' in report.keys():
print('Error with your statement \n' +
report['errorDescription'])
return {report['errorCode']: report['errorDescription']}
count_elements += report.get('numberOfElements', 0)
total_elements = report.get(
'totalElements', request['settings']['limit'])
if total_elements == 0:
obj['data'] = pd.DataFrame()
print(
'Warning : No data returned & lastPage is False.\nExit the loop - no save file & empty dataframe.')
if debug:
with open(f'report_no_element_{timestamp}.json', 'w') as f:
f.write(json.dumps(report, indent=4))
if verbose:
print(
f'% of total elements retrieved. TotalElements: {report.get("totalElements", "no data")}')
return obj # in case loop happening with empty data, returns empty data
if verbose and total_elements != 0:
print(
f'% of total elements retrieved: {round((count_elements / total_elements) * 100, 2)} %')
if last_page == False and n_results != float('inf'):
if count_elements >= n_results:
last_page = True
data = report['rows']
data_list += deepcopy(data) # do a deepcopy
page_nb += 1
if verbose:
print(f'# of requests : {page_nb}')
# return report
df = self._readData(data_list, anomaly=anomaly,
cols=columns, item_id=item_id)
if save:
timestampReport = round(time.time())
df.to_csv(f'report-{timestampReport}.csv', index=False)
if verbose:
print(
f'Saving data in file : {os.getcwd()}{os.sep}report-{timestampReport}.csv')
obj['data'] = df
if verbose:
print(
                f'Report contains {round((count_elements / total_elements) * 100, 2)} % of the available dimensions')
return obj
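    # Illustrative sketch (not executed): running a saved Workspace request. The JSON
    # file path is a placeholder; such a file can be exported from the Workspace debugger.
    #
    #   report = ags.getReport("myRequest.json", n_results="inf", verbose=True)
    #   df = report["data"]  # pandas DataFrame with the dimension and metric columns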
def _prepareData(
self,
dataRows: list = None,
reportType: str = "normal",
) -> dict:
"""
Read the data returned by the getReport and returns a dictionary used by the Workspace class.
Arguments:
            dataRows : REQUIRED : data rows from the Analytics API getReport
reportType : REQUIRED : "normal" or "static"
"""
if dataRows is None:
raise ValueError("Require dataRows")
data_rows = deepcopy(dataRows)
expanded_rows = {}
if reportType == "normal":
for row in data_rows:
expanded_rows[row["itemId"]] = [row["value"]]
expanded_rows[row["itemId"]] += row["data"]
elif reportType == "static":
expanded_rows = data_rows
return expanded_rows
def _decrypteStaticData(
self, dataRequest: dict = None, response: dict = None,resolveColumns:bool=False
) -> dict:
"""
        From the request dictionary and the response, decode the data to standardize the reading.
"""
dataRows = []
## retrieve StaticRow ID and segmentID
if len([metric for metric in dataRequest['metricContainer'].get('metricFilters',[]) if metric.get('id','').startswith("STATIC_ROW_COMPONENT")])>0:
if "dateRange" in list(dataRequest['metricContainer'].get('metricFilters',[])[0].keys()):
tableSegmentsRows = {
obj["id"]: obj["dateRange"]
for obj in dataRequest["metricContainer"]["metricFilters"]
if obj["id"].startswith("STATIC_ROW_COMPONENT")
}
elif "segmentId" in list(dataRequest['metricContainer'].get('metricFilters',[])[0].keys()):
tableSegmentsRows = {
obj["id"]: obj["segmentId"]
for obj in dataRequest["metricContainer"]["metricFilters"]
if obj["id"].startswith("STATIC_ROW_COMPONENT")
}
else:
tableSegmentsRows = {
obj["id"]: obj["segmentId"]
for obj in dataRequest["metricContainer"]["metricFilters"]
}
## retrieve place and segmentID
segmentApplied = {}
for obj in dataRequest["metricContainer"]["metricFilters"]:
if obj["id"].startswith("STATIC_ROW") == False:
if obj["type"] == "breakdown":
segmentApplied[obj["id"]] = f"{obj['dimension']}:::{obj['itemId']}"
elif obj["type"] == "segment":
segmentApplied[obj["id"]] = obj["segmentId"]
elif obj["type"] == "dateRange":
segmentApplied[obj["id"]] = obj["dateRange"]
### table columnIds and StaticRow IDs
tableColumnIds = {
obj["columnId"]: obj["filters"][0]
for obj in dataRequest["metricContainer"]["metrics"]
}
### create relations for metrics with Filter on top
filterRelations = {
obj["filters"][0]: obj["filters"][1:]
for obj in dataRequest["metricContainer"]["metrics"]
if len(obj["filters"]) > 1
}
        staticRows = set(val for val in tableSegmentsRows.values())
        nb_rows = len(staticRows)  ## how many segments are used as rows
        nb_columns = int(
            len(dataRequest["metricContainer"]["metrics"]) / nb_rows
        )  ## used to detect the number of columns per row
        staticRowsNames = []
        for row in staticRows:
            if row.startswith("s") and "@AdobeOrg" in row:
                seg = self.getSegment(row)
                staticRowsNames.append(seg["name"])
            else:
                staticRowsNames.append(row)
if resolveColumns:
staticRowDict = {
row: self.getSegment(rowName).get('name',rowName) for row, rowName in zip(staticRows, staticRowsNames)
}
else:
staticRowDict = {
row: rowName for row, rowName in zip(staticRows, staticRowsNames)
}
### metrics
dataRows = defaultdict(list)
for row in staticRowDict: ## iter on the different static rows
for column, data in zip(
response["columns"]["columnIds"], response["summaryData"]["totals"]
):
if tableSegmentsRows[tableColumnIds[column]] == row:
## check translation of metricId with Static Row ID
if row not in dataRows[staticRowDict[row]]:
dataRows[staticRowDict[row]].append(row)
dataRows[staticRowDict[row]].append(data)
## should ends like : {'segmentName' : ['STATIC',123,456]}
return nb_columns, tableColumnIds, segmentApplied, filterRelations, dataRows
def getReport2(
self,
request: Union[dict, IO,RequestCreator] = None,
limit: int = 20000,
n_results: Union[int, str] = "inf",
allowRemoteLoad: str = "default",
useCache: bool = True,
useResultsCache: bool = False,
includeOberonXml: bool = False,
includePredictiveObjects: bool = False,
returnsNone: bool = None,
countRepeatInstances: bool = None,
ignoreZeroes: bool = None,
rsid: str = None,
resolveColumns: bool = True,
save: bool = False,
returnClass: bool = True,
) -> Union[Workspace, dict]:
"""
Return an instance of Workspace that contains the data requested.
        Arguments:
            request : REQUIRED : either a dictionary or a JSON file that contains the request information.
            limit : OPTIONAL : number of results per request (default 20000)
            n_results : OPTIONAL : total number of results returned. Use "inf" to return everything (default "inf")
            allowRemoteLoad : OPTIONAL : Controls if Oberon should remote load data. Default behavior is true with fallback to false if remote data does not exist
            useCache : OPTIONAL : Use caching for faster requests (Do not do any report caching)
            useResultsCache : OPTIONAL : Use results caching for faster reporting times (This is a pass-through to Oberon which manages the cache)
            includeOberonXml : OPTIONAL : Controls if Oberon XML should be returned in the response - DEBUG ONLY
            includePredictiveObjects : OPTIONAL : Controls if platform Predictive Objects should be returned in the response. Only available when using Anomaly Detection or Forecasting - DEBUG ONLY
            returnsNone : OPTIONAL : Overwrite the request setting to return None values.
            countRepeatInstances : OPTIONAL : Overwrite the request setting to count repeatInstances values.
            ignoreZeroes : OPTIONAL : Ignore zeroes in the results
            rsid : OPTIONAL : Overwrite the ReportSuite ID used for the report. Only works if the same components are present.
resolveColumns: OPTIONAL : automatically resolve columns from ID to name for calculated metrics & segments. Default True. (works on returnClass only)
save : OPTIONAL : If you want to save the data (in JSON or CSV, depending the class is used or not)
returnClass : OPTIONAL : return the class building dataframe and better comprehension of data. (default yes)
"""
if self.loggingEnabled:
self.logger.debug(f"Start getReport")
path = "/reports"
params = {
"allowRemoteLoad": allowRemoteLoad,
"useCache": useCache,
"useResultsCache": useResultsCache,
"includeOberonXml": includeOberonXml,
"includePlatformPredictiveObjects": includePredictiveObjects,
}
if type(request) == dict:
dataRequest = request
elif isinstance(request,RequestCreator):
dataRequest = request.to_dict()
elif ".json" in request:
with open(request, "r") as f:
dataRequest = json.load(f)
else:
raise ValueError("Require a JSON or Dictionary to request data")
### Settings
dataRequest = deepcopy(dataRequest)
dataRequest["settings"]["page"] = 0
dataRequest["settings"]["limit"] = limit
if returnsNone:
dataRequest["settings"]["nonesBehavior"] = "return-nones"
elif dataRequest['settings'].get('nonesBehavior',False) != False:
pass ## keeping current settings
else:
dataRequest["settings"]["nonesBehavior"] = "exclude-nones"
if countRepeatInstances:
dataRequest["settings"]["countRepeatInstances"] = True
elif dataRequest["settings"].get("countRepeatInstances",False) != False:
pass ## keeping current settings
else:
dataRequest["settings"]["countRepeatInstances"] = False
if rsid is not None:
dataRequest["rsid"] = rsid
        if ignoreZeroes:
            dataRequest.setdefault("statistics", {})["ignoreZeroes"] = True
deepCopyRequest = deepcopy(dataRequest)
### Request data
if self.loggingEnabled:
self.logger.debug(f"getReport request: {json.dumps(dataRequest,indent=4)}")
res = self.connector.postData(
self.endpoint_company + path, data=dataRequest, params=params
)
if "rows" in res.keys():
reportType = "normal"
if self.loggingEnabled:
self.logger.debug(f"reportType: {reportType}")
dataRows = res.get("rows")
columns = res.get("columns")
summaryData = res.get("summaryData")
totalElements = res.get("numberOfElements")
lastPage = res.get("lastPage", True)
if float(len(dataRows)) >= float(n_results):
## force end of loop when a limit is set on n_results
lastPage = True
            while not lastPage:
dataRequest["settings"]["page"] += 1
res = self.connector.postData(
self.endpoint_company + path, data=dataRequest, params=params
)
dataRows += res.get("rows")
lastPage = res.get("lastPage", True)
totalElements += res.get("numberOfElements")
if float(len(dataRows)) >= float(n_results):
## force end of loop when a limit is set on n_results
lastPage = True
if self.loggingEnabled:
self.logger.debug(f"loop for report over: {len(dataRows)} results")
if returnClass == False:
return dataRows
### create relation between metrics and filters applied
columnIdRelations = {
obj["columnId"]: obj["id"]
for obj in dataRequest["metricContainer"]["metrics"]
}
filterRelations = {
obj["columnId"]: obj["filters"]
for obj in dataRequest["metricContainer"]["metrics"]
if len(obj.get("filters", [])) > 0
}
metricFilters = {}
metricFilterTranslation = {}
for filter in dataRequest["metricContainer"].get("metricFilters", []):
filterId = filter["id"]
if filter["type"] == "breakdown":
filterValue = f"{filter['dimension']}:{filter['itemId']}"
metricFilters[filter["dimension"]] = filter["itemId"]
if filter["type"] == "dateRange":
filterValue = f"{filter['dateRange']}"
metricFilters[filterValue] = filterValue
if filter["type"] == "segment":
filterValue = f"{filter['segmentId']}"
if filterValue.startswith("s") and "@AdobeOrg" in filterValue:
seg = self.getSegment(filterValue)
metricFilters[filterValue] = seg["name"]
metricFilterTranslation[filterId] = filterValue
metricColumns = {}
for colId in columnIdRelations.keys():
metricColumns[colId] = columnIdRelations[colId]
for element in filterRelations.get(colId, []):
metricColumns[colId] += f":::{metricFilterTranslation[element]}"
else:
if returnClass == False:
return res
reportType = "static"
if self.loggingEnabled:
self.logger.debug(f"reportType: {reportType}")
            columns = None  ## columns are rebuilt below for static reports
summaryData = res.get("summaryData")
(
nb_columns,
tableColumnIds,
segmentApplied,
filterRelations,
dataRows,
) = self._decrypteStaticData(dataRequest=dataRequest, response=res,resolveColumns=resolveColumns)
### Findings metrics
metricFilters = {}
metricColumns = []
for i in range(nb_columns):
metric: str = res["columns"]["columnIds"][i]
metricName = metric.split(":::")[0]
if metricName.startswith("cm"):
calcMetric = self.getCalculatedMetric(metricName)
metricName = calcMetric["name"]
correspondingStatic = tableColumnIds[metric]
## if the static row has a filter
if correspondingStatic in list(filterRelations.keys()):
## finding segment applied to metrics
for element in filterRelations[correspondingStatic]:
segId:str = segmentApplied[element]
metricName += f":::{segId}"
metricFilters[segId] = segId
if segId.startswith("s") and "@AdobeOrg" in segId:
seg = self.getSegment(segId)
metricFilters[segId] = seg["name"]
metricColumns.append(metricName)
### ending with ['metric1','metric2 + segId',...]
### preparing data points
if self.loggingEnabled:
self.logger.debug(f"preparing data")
preparedData = self._prepareData(dataRows, reportType=reportType)
if returnClass:
if self.loggingEnabled:
self.logger.debug(f"returning Workspace class")
## Using the class
data = Workspace(
responseData=preparedData,
dataRequest=deepCopyRequest,
columns=columns,
summaryData=summaryData,
analyticsConnector=self,
reportType=reportType,
metrics=metricColumns, ## for normal type ## for staticReport
metricFilters=metricFilters,
resolveColumns=resolveColumns,
)
if save:
data.to_csv()
return data | Adobe-Lib-Manual | /Adobe_Lib_Manual-4.2.tar.gz/Adobe_Lib_Manual-4.2/aanalytics2/aanalytics2.py | aanalytics2.py |
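    # Illustrative sketch (not executed): getReport2 returning a Workspace instance.
    # "myRequestDict" is a hypothetical request definition.
    #
    #   myreport = ags.getReport2(request=myRequestDict, n_results=10000)
    #   myreport.dataframe  # assuming the Workspace class exposes the data this way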
import json
import time
from copy import deepcopy
# Non standard libraries
import requests
from aanalytics2 import config, token_provider
class AdobeRequest:
"""
    Handle requests to the Adobe Analytics API, making sure each request carries a valid token.
    Attributes:
        restTime : Time to wait before sending a new request after receiving a "too many requests" status code.
"""
loggingEnabled = False
def __init__(self,
config_object: dict = config.config_object,
header: dict = config.header,
verbose: bool = False,
retry: int = 0,
loggingEnabled:bool=False,
logger:object=None
) -> None:
"""
        Set the connector to be used for handling requests to the API
Arguments:
config_object : OPTIONAL : Require the importConfig file to have been used.
header : OPTIONAL : header of the config modules
verbose : OPTIONAL : display comment on the request.
retry : OPTIONAL : If you wish to retry failed GET requests
loggingEnabled : OPTIONAL : if the logging is enable for that instance.
logger : OPTIONAL : instance of the logger created
"""
if config_object['org_id'] == '':
raise Exception(
'You have to upload the configuration file with importConfigFile method.')
self.config = deepcopy(config_object)
self.header = deepcopy(header)
self.loggingEnabled = loggingEnabled
self.logger = logger
self.restTime = 30
self.retry = retry
        # determine the authentication type up front so it is available even when a cached token is reused
        if self.config.get('scopes', None) is not None:
            self.connectionType = 'oauthV2'
        elif self.config.get("private_key", None) is not None or self.config.get("pathToKey", None) is not None:
            self.connectionType = 'jwt'
        else:
            raise Exception("Could not determine the authentication type (OAuth V2 or JWT) from the configuration.")
        if self.config['token'] == '' or time.time() > self.config['date_limit']:
            if self.connectionType == 'oauthV2':
                token_and_expiry = token_provider.get_oauth_token_and_expiry_for_config(config=self.config, verbose=verbose)
            else:
                token_and_expiry = token_provider.get_jwt_token_and_expiry_for_config(config=self.config, verbose=verbose)
            token = token_and_expiry['token']
            expiry = token_and_expiry['expiry']
            self.token = token
            if self.loggingEnabled:
                self.logger.info(f"token retrieved : {token}")
            self.config['token'] = token
            self.config['date_limit'] = time.time() + expiry - 500
            self.header.update({'Authorization': f'Bearer {token}'})
def _checkingDate(self) -> None:
"""
Checking if the token is still valid
"""
now = time.time()
if now > self.config['date_limit']:
if self.loggingEnabled:
self.logger.warning("token expired. Trying to retrieve a new token")
if self.connectionType =='oauthV2':
token_and_expiry = token_provider.get_oauth_token_and_expiry_for_config(config=self.config)
elif self.connectionType == 'jwt':
token_and_expiry = token_provider.get_jwt_token_and_expiry_for_config(config=self.config)
token = token_and_expiry['token']
if self.loggingEnabled:
self.logger.info(f"new token retrieved : {token}")
self.config['token'] = token
self.config['date_limit'] = time.time() + token_and_expiry['expiry'] - 500
self.header.update({'Authorization': f'Bearer {token}'})
def getData(self, endpoint: str, params: dict = None, data: dict = None, headers: dict = None, *args, **kwargs):
"""
Abstraction for getting data
"""
internRetry = kwargs.get("retry", self.retry)
self._checkingDate()
if self.loggingEnabled:
self.logger.info(f"endpoint: {endpoint}")
self.logger.info(f"params: {params}")
if headers is None:
headers = self.header
if params is None and data is None:
res = requests.get(
endpoint, headers=headers)
elif params is not None and data is None:
res = requests.get(
endpoint, headers=headers, params=params)
elif params is None and data is not None:
res = requests.get(
endpoint, headers=headers, data=data)
elif params is not None and data is not None:
res = requests.get(endpoint, headers=headers, params=params, data=data)
if kwargs.get("verbose", False):
print(f"request URL : {res.request.url}")
print(f"statut_code : {res.status_code}")
try:
while str(res.status_code) == "429":
if kwargs.get("verbose", False):
print(f'Too many requests: retrying in {self.restTime} seconds')
if self.loggingEnabled:
self.logger.info(f"Too many requests: retrying in {self.restTime} seconds")
time.sleep(self.restTime)
res = requests.get(endpoint, headers=headers, params=params, data=data)
res_json = res.json()
except:
## handling 1.4
if self.loggingEnabled:
self.logger.warning(f"handling exception as res.json() cannot be managed")
self.logger.warning(f"status code: {res.status_code}")
if kwargs.get('legacy',False):
try:
return json.loads(res.text)
except:
if self.loggingEnabled:
self.logger.error(f"GET method failed: {res.status_code}, {res.text}")
return res.text
else:
if self.loggingEnabled:
self.logger.error(f"text: {res.text}")
res_json = {'error': 'Request Error'}
while internRetry > 0:
if self.loggingEnabled:
self.logger.warning(f"Trying again with internal retry")
if kwargs.get("verbose", False):
print('Retry parameter activated')
print(f'{internRetry} retry left')
if 'error' in res_json.keys():
time.sleep(30)
res_json = self.getData(endpoint, params=params, data=data, headers=headers, retry=internRetry-1, **kwargs)
return res_json
return res_json
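    # Note on the retry flow above (illustrative, not executed): a call such as
    #   connector.getData(endpoint, params={"page": 0}, retry=2)
    # waits 30 seconds and recurses up to twice when the parsed response carries
    # an 'error' key; 429 responses are retried separately inside the try block.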
def postData(self, endpoint: str, params: dict = None, data: dict = None, headers: dict = None, *args, **kwargs):
"""
Abstraction for posting data
"""
self._checkingDate()
if headers is None:
headers = self.header
if params is None and data is None:
res = requests.post(endpoint, headers=headers)
elif params is not None and data is None:
res = requests.post(endpoint, headers=headers, params=params)
elif params is None and data is not None:
res = requests.post(endpoint, headers=headers, data=json.dumps(data))
elif params is not None and data is not None:
res = requests.post(endpoint, headers=headers, params=params, data=json.dumps(data))
try:
res_json = res.json()
if res.status_code == 429 or res_json.get('error_code', None) == "429050":
res_json['status_code'] = 429
except:
## handling 1.4
if kwargs.get('legacy',False):
try:
return json.loads(res.text)
except:
if self.loggingEnabled:
self.logger.error(f"POST method failed: {res.status_code}, {res.text}")
return res.text
            res_json = {'error': getattr(res, 'status_code', 'Request Error')}
return res_json
def patchData(self, endpoint: str, params: dict = None, data=None, headers: dict = None, *args, **kwargs):
"""
Abstraction for patching data
"""
self._checkingDate()
if headers is None:
headers = self.header
if params is not None and data is None:
res = requests.patch(endpoint, headers=headers, params=params)
elif params is None and data is not None:
res = requests.patch(endpoint, headers=headers, data=json.dumps(data))
elif params is not None and data is not None:
res = requests.patch(endpoint, headers=headers, params=params, data=json.dumps(data))
try:
while str(res.status_code) == "429":
if kwargs.get("verbose", False):
print(f'Too many requests: retrying in {self.restTime} seconds')
time.sleep(self.restTime)
res = requests.patch(endpoint, headers=headers, params=params,data=json.dumps(data))
res_json = res.json()
except:
if self.loggingEnabled:
self.logger.error(f"PATCH method failed: {res.status_code}, {res.text}")
            res_json = {'error': getattr(res, 'status_code', 'Request Error')}
return res_json
def putData(self, endpoint: str, params: dict = None, data=None, headers: dict = None, *args, **kwargs):
"""
Abstraction for putting data
"""
self._checkingDate()
if headers is None:
headers = self.header
if params is not None and data is None:
res = requests.put(endpoint, headers=headers, params=params)
elif params is None and data is not None:
res = requests.put(endpoint, headers=headers, data=json.dumps(data))
elif params is not None and data is not None:
            res = requests.put(endpoint, headers=headers, params=params, data=json.dumps(data))
        try:
            res_json = res.json()
        except:
            if self.loggingEnabled:
                self.logger.error(f"PUT method failed: {res.status_code}, {res.text}")
            res_json = {'error': getattr(res, 'status_code', 'Request Error')}
        return res_json
def deleteData(self, endpoint: str, params: dict = None, headers: dict = None, *args, **kwargs):
"""
Abstraction for deleting data
"""
self._checkingDate()
if headers is None:
headers = self.header
if params is None:
res = requests.delete(endpoint, headers=headers)
elif params is not None:
res = requests.delete(endpoint, headers=headers, params=params)
try:
while str(res.status_code) == "429":
if kwargs.get("verbose", False):
print(f'Too many requests: retrying in {self.restTime} seconds')
time.sleep(self.restTime)
res = requests.delete(endpoint, headers=headers, params=params)
status_code = res.status_code
except:
if self.loggingEnabled:
self.logger.error(f"DELETE method failed: {res.status_code}, {res.text}")
status_code = {'error': 'Request Error'}
return status_code | Adobe-Lib-Manual | /Adobe_Lib_Manual-4.2.tar.gz/Adobe_Lib_Manual-4.2/aanalytics2/connector.py | connector.py |
import os
import time
from typing import Dict, Union
import json
import jwt
import requests
from aanalytics2 import configs
def get_jwt_token_and_expiry_for_config(config: dict, verbose: bool = False, save: bool = False, *args, **kwargs) -> \
Dict[str, str]:
"""
    Retrieve the token by using the information provided by the user during the importConfigFile function.
    Arguments:
        verbose : OPTIONAL : Default False. If set to True, print information.
        save : OPTIONAL : Default False. If set to True, save the token in a local token.txt file.
"""
private_key = configs.get_private_key_from_config(config)
header_jwt = {
'cache-control': 'no-cache',
'content-type': 'application/x-www-form-urlencoded'
}
    expiry_epoch = int(time.time()) + 8760 * 60 * 60  # one year (8760 hours) from now
    jwt_payload = {
        'exp': expiry_epoch,
'iss': config['org_id'],
'sub': config['tech_id'],
'https://ims-na1.adobelogin.com/s/ent_analytics_bulk_ingest_sdk': True,
'aud': f'https://ims-na1.adobelogin.com/c/{config["client_id"]}'
}
encoded_jwt = _get_jwt(payload=jwt_payload, private_key=private_key)
payload = {
'client_id': config['client_id'],
'client_secret': config['secret'],
'jwt_token': encoded_jwt
}
response = requests.post(config['jwtTokenEndpoint'], headers=header_jwt, data=payload)
json_response = response.json()
try:
token = json_response['access_token']
except KeyError:
print('Issue retrieving token')
print(json_response)
raise Exception(json.dumps(json_response,indent=2))
    expiry = json_response['expires_in'] / 1000  ## 'expires_in' is in milliseconds; convert to seconds
if save:
with open('token.txt', 'w') as f:
f.write(token)
print(f'token has been saved here: {os.getcwd()}{os.sep}token.txt')
if verbose:
print('token valid till : ' + time.ctime(time.time() + expiry))
return {'token': token, 'expiry': expiry}
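# Illustrative sketch (not executed): exchanging a JWT for an access token.
# The config keys mirror the ones read above; all values are placeholders.
#
#   cfg = {
#       'org_id': 'ABC123@AdobeOrg', 'tech_id': 'tech@techacct.adobe.com',
#       'client_id': 'xxx', 'secret': 'yyy',
#       'jwtTokenEndpoint': 'https://ims-na1.adobelogin.com/ims/exchange/jwt',
#       'pathToKey': 'private.key',
#   }
#   token_info = get_jwt_token_and_expiry_for_config(cfg, verbose=True)
#   token_info['token'], token_info['expiry']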
def get_oauth_token_and_expiry_for_config(config:dict,verbose:bool=False,save:bool=False)->Dict[str,str]:
"""
Retrieve the access token by using the OAuth information provided by the user
during the import importConfigFile function.
Arguments :
config : REQUIRED : Configuration object.
verbose : OPTIONAL : Default False. If set to True, print information.
        save : OPTIONAL : Default False. If set to True, save the token in a local token.txt file.
"""
if config is None:
raise ValueError("config dictionary is required")
oauth_payload = {
"grant_type": "client_credentials",
"client_id": config["client_id"],
"client_secret": config["secret"],
"scope": config["scopes"]
}
response = requests.post(
config["oauthTokenEndpointV2"], data=oauth_payload)
json_response = response.json()
if 'access_token' in json_response.keys():
token = json_response['access_token']
expiry = json_response["expires_in"]
    else:
        # mirror the JWT flow: surface the error instead of returning a string
        raise Exception(json.dumps(json_response, indent=2))
if save:
with open('token.txt', 'w') as f:
f.write(token)
if verbose:
print('token valid till : ' + time.ctime(time.time() + expiry))
return {'token': token, 'expiry': expiry}
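# Illustrative sketch (not executed): the OAuth Server-to-Server flow only needs
# client credentials and scopes; all values below are placeholders.
#
#   cfg = {
#       'client_id': 'xxx', 'secret': 'yyy',
#       'scopes': 'openid,AdobeID,additional_info.projectedProductContext',
#       'oauthTokenEndpointV2': 'https://ims-na1.adobelogin.com/ims/token/v3',
#   }
#   token_info = get_oauth_token_and_expiry_for_config(cfg)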
def _get_jwt(payload: dict, private_key: str) -> str:
"""
    Ensure that jwt.encode always returns a str: PyJWT versions < 2.0.0 returned bytes while versions >= 2.0.0 return str.
"""
token: Union[str, bytes] = jwt.encode(payload, private_key, algorithm='RS256')
if isinstance(token, bytes):
return token.decode('utf-8')
return token | Adobe-Lib-Manual | /Adobe_Lib_Manual-4.2.tar.gz/Adobe_Lib_Manual-4.2/aanalytics2/token_provider.py | token_provider.py |
from dataclasses import dataclass
import json
@dataclass
class Project:
"""
    This dataclass extracts the information retrieved from the getProject method.
    It flattens the elements and gives you insights into what your project contains.
"""
def __init__(self, projectDict: dict = None,rsidSuffix:bool=False):
"""
        Instantiate the class.
        Arguments:
            projectDict : REQUIRED : the dictionary of the project (returned by the getProject method)
            rsidSuffix : OPTIONAL : If you want to append the rsid suffix to dimensions and metrics.
"""
if projectDict is None:
raise Exception("require a dictionary with project information. Retrievable via getProject")
self.id: str = projectDict.get('id', '')
self.name: str = projectDict.get('name', '')
self.description: str = projectDict.get('description', '')
self.rsid: str = projectDict.get('rsid', '')
self.ownerName: str = projectDict['owner'].get('name', '')
self.ownerId: int = projectDict['owner'].get('id', '')
self.ownerEmail: int = projectDict['owner'].get('login', '')
self.template: bool = projectDict.get('companyTemplate', False)
self.version: str = None
if 'definition' in projectDict.keys():
definition: dict = projectDict['definition']
self.version: str = definition.get('version',None)
self.curation: bool = definition.get('isCurated', False)
if definition.get('device', 'desktop') != 'cell':
self.reportType = "desktop"
infos = self._findPanelsInfos(definition['workspaces'][0])
self.nbPanels: int = infos["nb_Panels"]
self.nbSubPanels: int = 0
self.subPanelsTypes: list = []
for panel in infos["panels"]:
self.nbSubPanels += infos["panels"][panel]['nb_subPanels']
self.subPanelsTypes += infos["panels"][panel]['subPanels_types']
self.elementsUsed: dict = self._findElements(definition['workspaces'][0],rsidSuffix=rsidSuffix)
self.nbElementsUsed: int = len(self.elementsUsed['dimensions']) + len(
self.elementsUsed['metrics']) + len(self.elementsUsed['segments']) + len(
self.elementsUsed['calculatedMetrics'])
else:
self.reportType = "mobile"
def __str__(self)->str:
return json.dumps(self.to_dict(),indent=4)
def __repr__(self)->str:
return json.dumps(self.to_dict(),indent=4)
def _findPanelsInfos(self, workspace: dict = None) -> dict:
"""
Return a dict of the different information for each Panel.
Arguments:
workspace : REQUIRED : the workspace dictionary.
"""
dict_data = {'workspace_id': workspace['id']}
dict_data['nb_Panels'] = len(workspace['panels'])
dict_data['panels'] = {}
for panel in workspace['panels']:
dict_data["panels"][panel['id']] = {}
dict_data["panels"][panel['id']]['name'] = panel.get('name', 'Default Name')
dict_data["panels"][panel['id']]['nb_subPanels'] = len(panel['subPanels'])
dict_data["panels"][panel['id']]['subPanels_types'] = [subPanel['reportlet']['type'] for subPanel in
panel['subPanels']]
return dict_data
    def _findElements(self, workspace: dict,rsidSuffix:bool=False) -> dict:
        """
        Returns a dictionary of the elements (dimensions, metrics, segments, calculated metrics and report suites) used in the FreeformReportlet.
        Arguments :
            workspace : REQUIRED : the workspace dictionary.
            rsidSuffix : OPTIONAL : If True, append the rsid as a suffix to dimensions and metrics.
        """
dict_elements: dict = {'dimensions': [], "metrics": [], 'segments': [], "reportSuites": [],
'calculatedMetrics': []}
tmp_rsid = "" # default empty value
for panel in workspace['panels']:
if "reportSuite" in panel.keys():
dict_elements['reportSuites'].append(panel['reportSuite']['id'])
if rsidSuffix:
tmp_rsid = f"::{panel['reportSuite']['id']}"
elif "rsid" in panel.keys():
dict_elements['reportSuites'].append(panel['rsid'])
if rsidSuffix:
tmp_rsid = f"::{panel['rsid']}"
filters: list = panel.get('segmentGroups',[])
if len(filters) > 0:
for element in filters:
typeElement = element['componentOptions'][0].get('component',{}).get('type','')
idElement = element['componentOptions'][0].get('component',{}).get('id','')
if typeElement == "Segment":
dict_elements['segments'].append(idElement)
if typeElement == "DimensionItem":
clean_id: str = idElement[:idElement.find(
'::')] ## cleaning this type of element : 'variables/evar7.6::3000623228'
dict_elements['dimensions'].append(clean_id)
for subPanel in panel['subPanels']:
if subPanel['reportlet']['type'] == "FreeformReportlet":
reportlet = subPanel['reportlet']
rows = reportlet['freeformTable']
if 'dimension' in rows.keys():
dict_elements['dimensions'].append(f"{rows['dimension']['id']}{tmp_rsid}")
if len(rows["staticRows"]) > 0:
for row in rows["staticRows"]:
                        ## collect dimensions in a temporary list and deduplicate them before loading, to avoid counting them multiple times across rows.
temp_list_dim = []
componentType: str = row.get('component',{}).get('type','')
if componentType == "DimensionItem":
temp_list_dim.append(f"{row['component']['id']}{tmp_rsid}")
elif componentType == "Segments" or componentType == "Segment":
dict_elements['segments'].append(row['component']['id'])
elif componentType == "Metric":
dict_elements['metrics'].append(f"{row['component']['id']}{tmp_rsid}")
elif componentType == "CalculatedMetric":
dict_elements['calculatedMetrics'].append(row['component']['id'])
if len(temp_list_dim) > 0:
temp_list_dim = list(set([el[:el.find('::')] for el in temp_list_dim]))
for dim in temp_list_dim:
dict_elements['dimensions'].append(f"{dim}{tmp_rsid}")
columns = reportlet['columnTree']
for node in columns['nodes']:
temp_data = self._recursiveColumn(node,tmp_rsid=tmp_rsid)
dict_elements['calculatedMetrics'] += temp_data['calculatedMetrics']
dict_elements['segments'] += temp_data['segments']
dict_elements['metrics'] += temp_data['metrics']
if len(temp_data['dimensions']) > 0:
for dim in set(temp_data['dimensions']):
dict_elements['dimensions'].append(dim)
dict_elements['metrics'] = list(set(dict_elements['metrics']))
dict_elements['segments'] = list(set(dict_elements['segments']))
dict_elements['dimensions'] = list(set(dict_elements['dimensions']))
dict_elements['calculatedMetrics'] = list(set(dict_elements['calculatedMetrics']))
return dict_elements
def _recursiveColumn(self, node: dict = None, temp_data: dict = None,tmp_rsid:str=""):
"""
        Recursive function to fetch elements in the column stack.
        tmp_rsid : OPTIONAL : empty by default; if an rsid is passed, it is appended to dimension and metric IDs.
"""
if temp_data is None:
temp_data: dict = {'dimensions': [], "metrics": [], 'segments': [], "reportSuites": [],
'calculatedMetrics': []}
componentType: str = node.get('component',{}).get('type','')
if componentType == "Metric":
temp_data['metrics'].append(f"{node['component']['id']}{tmp_rsid}")
elif componentType == "CalculatedMetric":
temp_data['calculatedMetrics'].append(node['component']['id'])
elif componentType == "Segment":
temp_data['segments'].append(node['component']['id'])
elif componentType == "DimensionItem":
old_id: str = node['component']['id']
new_id: str = old_id[:old_id.find('::')]
temp_data['dimensions'].append(f"{new_id}{tmp_rsid}")
if len(node['nodes']) > 0:
for new_node in node['nodes']:
temp_data = self._recursiveColumn(new_node, temp_data=temp_data,tmp_rsid=tmp_rsid)
return temp_data
def to_dict(self) -> dict:
"""
transform the class into a dictionary
"""
obj = {
'id': self.id,
'name': self.name,
'description': self.description,
'rsid': self.rsid,
'ownerName': self.ownerName,
'ownerId': self.ownerId,
'ownerEmail': self.ownerEmail,
'template': self.template,
'reportType':self.reportType,
            'curation': getattr(self, 'curation', False),
            'version': getattr(self, 'version', None),
}
add_object = {}
if hasattr(self, 'nbPanels'):
add_object = {
'curation': self.curation,
'version': self.version,
'nbPanels': self.nbPanels,
'nbSubPanels': self.nbSubPanels,
'subPanelsTypes': self.subPanelsTypes,
'nbElementsUsed': self.nbElementsUsed,
'dimensions': self.elementsUsed['dimensions'],
'metrics': self.elementsUsed['metrics'],
'segments': self.elementsUsed['segments'],
'calculatedMetrics': self.elementsUsed['calculatedMetrics'],
'rsids': self.elementsUsed['reportSuites'],
}
full_obj = {**obj, **add_object}
        return full_obj

# --- end of file: aanalytics2/projects.py ---
import gzip
import io
from concurrent import futures
from pathlib import Path
from typing import IO, Union
# Non standard libraries
import pandas as pd
import requests
from aanalytics2 import config, connector
class DIAPI:
"""
    This class provides an easy way to use the Data Insertion API.
    You can initialize it with the required information to be present in the request and then send POST or GET requests.
    Arguments to instantiate:
        rsid : REQUIRED : Report Suite ID
        tracking_server : REQUIRED : tracking server used for data collection.
example : "xxxx.sc.omtrdc.net"
"""
def __init__(self, rsid: str = None, tracking_server: str = None):
"""
Arguments:
rsid : REQUIRED : Report Suite ID
tracking_server : REQUIRED : tracking server for tracking.
"""
if rsid is None:
raise Exception("Expecting a ReportSuite ID (rsid)")
self.rsid = rsid
if tracking_server is None:
raise Exception("Expecting a tracking server")
self.tracking_server = tracking_server
try:
import importlib.resources as pkg_resources
path = pkg_resources.path("aanalytics2", "supported_tags.pickle")
except ImportError:
# Try backported to PY<37 with pkg_resources.
try:
import pkg_resources
path = pkg_resources.resource_filename(
"aanalytics2", "supported_tags.pickle")
except:
print('no supported_tags file')
try:
with path as f:
self.REFERENCE = pd.read_pickle(f)
except:
self.REFERENCE = None
def getMethod(self, pageName: str = None, g: str = None, pe: str = None, pev1: str = None, pev2: str = None, events: str = None, **kwargs):
"""
Use the GET method to send information to Adobe Analytics
Arguments:
pageName : REQUIRED : The Web page name.
g : REQUIRED : The Web page URL
pe : OPTIONAL : For custom link tracking (Type of link ("d", "e", or "o"))
if selected, require "pev1" or "pev2", additionally pageName is set to Null
pev1 : OPTIONAL : The link's HREF. For custom links, page values are ignored.
pev2 : OPTIONAL : Name of link.
events : OPTIONAL : If you want to pass some events
Possible kwargs:
            - see the REFERENCE attribute for supported tags. Tags should be in the supported format.
"""
if pageName is None and g is None:
raise Exception("Expecting a pageName or g arguments")
if pe is not None and pe not in ["d", "e", "o"]:
raise Exception('Expecting pe argument to be ("d", "e", or "o")')
header = {'Content-Type': 'application/json'}
endpoint = f"https://{self.tracking_server}/b/ss/{self.rsid}/0"
params = {"pageName": pageName, "g": g,
"pe": pe, "pev1": pev1, "pev2": pev2, "events": events, **kwargs}
res = requests.get(endpoint, params=params, headers=header)
return res
def postMethod(self, pageName: str = None, pageURL: str = None, linkType: str = None, linkURL: str = None, linkName: str = None, events: str = None, **kwargs):
"""
Use the POST method to send information to Adobe Analytics
Arguments:
pageName : REQUIRED : The Web page name.
pageURL : REQUIRED : The Web page URL
linkType : OPTIONAL : For custom link tracking (Type of link ("d", "e", or "o"))
                if selected, requires "linkURL" or "linkName"; additionally pageName is set to Null
linkURL : OPTIONAL : The link's HREF. For custom links, page values are ignored.
linkName : OPTIONAL : Name of link.
events : OPTIONAL : If you want to pass some events
Possible kwargs:
            - see the REFERENCE attribute for supported tags. Tags should be in the supported format.
"""
if pageName is None and pageURL is None:
raise Exception("Expecting a pageName or pageURL argument")
if linkType is not None and linkType not in ["d", "e", "o"]:
raise Exception('Expecting pe argument to be ("d", "e", or "o")')
header = {'Content-Type': 'application/xml'}
endpoint = f"https://{self.tracking_server}/b/ss//6"
dictionary = {"pageName": pageName, "pageURL": pageURL,
"linkType": linkType, "linkURL": linkURL, "linkName": linkName, "events": events, "reportSuite": self.rsid, **kwargs}
import dicttoxml as dxml
myxml = dxml.dicttoxml(
dictionary, custom_root='request', attr_type=False)
xml_data = myxml.decode()
res = requests.post(endpoint, data=xml_data, headers=header)
return res
class Bulkapi:
"""
This is the bulk API from Adobe Analytics.
By default, the file are sent to the global endpoints for auto-routing.
If you wish to select a specific endpoint, you can modify it during instantiation.
It requires you to upload some adobeio configuration file through the main aanalytics2 module.
Arguments:
endpoint : OPTIONAL : by default using https://analytics-collection.adobe.io
"""
def __init__(self, endpoint: str = "https://analytics-collection.adobe.io", config_object: dict = config.config_object):
"""
Initialize the Bulk API connection. Returns an object with methods to send data to Analytics.
Arguments:
endpoint : REQUIRED : Endpoint to send data to. Default to analytics-collection.adobe.io
possible values, on top of the default choice are:
- https://analytics-collection-va7.adobe.io (US)
- https://analytics-collection-nld2.adobe.io (EU)
config_object : REQUIRED : config object containing the different information to send data.
"""
self.endpoint = endpoint
try:
import importlib.resources as pkg_resources
path = pkg_resources.path(
"aanalytics2", "CSV_Column_and_Query_String_Reference.pickle")
except ImportError:
try:
# Try backported to PY<37 `importlib_resources`.
import pkg_resources
path = pkg_resources.resource_filename(
"aanalytics2", "CSV_Column_and_Query_String_Reference.pickle")
except:
print('no CSV_Column_and_Query_string_Reference file')
try:
with path as f:
self.REFERENCE = pd.read_pickle(f)
except:
self.REFERENCE = None
# if no token has been generated.
self.connector = connector.AdobeRequest()
self.header = self.connector.header
self.header["x-adobe-vgid"] = "ingestion"
del self.header["Content-Type"]
self._createdFiles = []
def validation(self, file: IO = None,encoding:str='utf-8', **kwargs):
"""
Send the file to a validation endpoint. Return the response object from requests.
Argument:
            file : REQUIRED : path to the file to validate (gzipped or plain text).
encoding : OPTIONAL : type of encoding used for the file.
Possible kwargs:
compress_level : handle the compression level, from 0 (no compression) to 9 (slow but more compressed). default 5.
"""
compress_level = kwargs.get("compress_level", 5)
if file is None:
raise Exception("Expecting a file")
path = "/aa/collect/v1/events/validate"
if file.endswith(".gz") == False:
with open(file, "r",encoding=encoding) as f:
content = f.read()
data = gzip.compress(content.encode('utf-8'),
compresslevel=compress_level)
filename = f"{file}.gz"
elif file.endswith(".gz"):
filename = file
with open(file, "rb") as f:
data = f.read()
res = requests.post(self.endpoint + path, files={"file": (None, data)},
headers=self.header)
return res
def generateTemplate(self, includeAdv: bool = False, returnDF: bool = False, save: bool = True):
"""
Generate a CSV file with minimum fields.
Arguments:
includeAdv : OPTIONAL : Include advanced fields in the csv (pe & queryString). Not included by default to avoid confusion for new users. (Default False)
returnDF : OPTIONAL : Return a pandas dataFrame if you want to work directly with a data frame.(default False)
save : OPTIONAL : Save the file created directly in your working folder.
"""
## 2 rows being created
string = """timestamp,marketingCloudVisitorID,events,pageName,pageURL,reportSuiteID,userAgent,pe,queryString\ntimestampValuePOSIX/Epoch Time (e.g. 1486769029) or ISO-8601 (e.g. 2017-02-10T16:23:49-07:00),marketingCloudVisitorIDValue,eventsValue,pageNameValue,pageURLValue,reportSuiteIDValue,userAgentValue,peValue,queryStringValue
"""
data = io.StringIO(string)
df = pd.read_csv(data, sep=',')
        if not includeAdv:
df.drop(["pe", "queryString"], axis=1, inplace=True)
if save:
df.to_csv('template.csv', index=False)
if returnDF:
return df
def _checkFiles(self, file: str = None,encoding:str = "utf-8"):
"""
        Internal method that checks the content and format of the file.
"""
if file.endswith(".gz"):
return file
else: # if sending not gzipped file.
new_folder = Path('tmp/')
new_folder.mkdir(exist_ok=True)
with open(file, "r",encoding=encoding) as f:
content = f.read()
new_path = new_folder / f"{file}.gz"
with gzip.open(Path(new_path), 'wb') as f:
f.write(content.encode('utf-8'))
# save the filename to delete
self._createdFiles.append(new_path)
return new_path
def sendFiles(self, files: Union[list, IO] = None,encoding:str='utf-8',**kwargs):
"""
        Method to send the file(s) through the Bulk API. Returns a list with the status of each file sent.
        Arguments:
            files : REQUIRED : file(s) to be sent to the Analytics collection servers. It can be a list or the name of a single file.
                If a list is sent, we assume that each file is to be sent in a different visitor group.
                If files are not gzipped, we will compress them and save them as .gz in a temporary folder.
            encoding : OPTIONAL : if the encoding is different from the default utf-8.
        possible kwargs:
            workers : maximum number of workers for parallel processing. (default 4)
"""
path = "/aa/collect/v1/events"
if files is None:
raise Exception("Expecting a file")
compress_level = kwargs.get("compress_level", 5)
files_gz = list()
if type(files) == list:
for file in files:
fileName = self._checkFiles(file,encoding=encoding)
files_gz.append(fileName)
elif type(files) == str:
fileName = self._checkFiles(files,encoding=encoding)
files_gz.append(fileName)
vgid_headers = [f"ingestion_{x}" for x in range(len(files_gz))]
list_headers = [{**self.header, 'x-adobe-vgid': vgid}
for vgid in vgid_headers]
list_urls = [self.endpoint + path for x in range(len(files_gz))]
list_files = ({"file": (None, open(Path(file), "rb").read())}
for file in files_gz) # generator for files
workers_input = kwargs.get("workers", 4)
workers = max(1, workers_input)
with futures.ThreadPoolExecutor(workers) as executor:
res = executor.map(lambda x, y, z: requests.post(
x, headers=y, files=z), list_urls, list_headers, list_files)
list_res = [response.json() for response in res]
# cleaning temp folder
if len(self._createdFiles) > 0:
for file in self._createdFiles:
file_path = Path(file)
file_path.unlink()
self._createdFiles = []
tmp = Path('tmp/')
tmp.rmdir()
        return list_res

# --- end of file: aanalytics2/ingestion.py ---
import pandas as pd
from aanalytics2 import config, connector
from typing import Union
class LegacyAnalytics:
"""
    Class that helps you make basic requests to the legacy 1.4 API endpoints
"""
def __init__(self,company_name:str=None,config:dict=config.config_object)->None:
"""
        Instantiate the Legacy Analytics wrapper.
"""
if company_name is None:
raise Exception("Require a company name")
self.connector = connector.AdobeRequest(config_object=config)
self.token = self.connector.token
self.endpoint = "https://api.omniture.com/admin/1.4/rest"
        self.header = {
'Accept': 'application/json',
'Authorization': f'Bearer {self.token}',
'X-ADOBE-DMA-COMPANY': company_name
}
def getData(self,path:str="/",method:str=None,params:dict=None)->dict:
"""
        Use the GET method with the parameters provided.
Arguments:
path : REQUIRED : If you need a specific path (default "/")
method : OPTIONAL : if you want to pass the method directly there for the parameter.
params : OPTIONAL : If you need to pass parameter to your url, use dictionary i.e. : {"param":"value"}
"""
if params is not None and type(params) != dict:
raise TypeError("Require a dictionary")
myParams = {}
myParams.update(**params or {})
if method is not None:
myParams['method'] = method
res = self.connector.getData(self.endpoint + path,params=myParams,headers=self.header,legacy=True)
return res
def postData(self,path:str="/",method:str=None,params:dict=None,data:Union[dict,list]=None)->dict:
"""
        Use the POST method with the parameters provided.
Arguments:
path : REQUIRED : If you need a specific path (default "/")
method : OPTIONAL : if you want to pass the method directly there for the parameter.
params : OPTIONAL : If you need to pass parameter to your url, use dictionary i.e. : {"param":"value"}
data : OPTIONAL : Usually required to pass the dictionary or list to the request
"""
if params is not None and type(params) != dict:
raise TypeError("Require a dictionary")
if data is not None and (type(data) != dict and type(data) != list):
raise TypeError("data should be dictionary or list")
myParams = {}
myParams.update(**params or {})
if method is not None:
myParams['method'] = method
res = self.connector.postData(self.endpoint + path,params=myParams, data=data,headers=self.header,legacy=True)
        return res

# --- end of file: aanalytics2/aanalytics14.py ---
import struct
from _color import Color
import _helper
class _ColorReader(object):
def __init__(self, stream, offset, count):
self._stream = stream
self._offset = offset
self._count = count
def __iter__(self):
for i in range(self._count):
self._offset, color_space = _helper.get_ushort(self._stream, self._offset)
self._offset, w = _helper.get_ushort(self._stream, self._offset)
self._offset, x = _helper.get_ushort(self._stream, self._offset)
self._offset, y = _helper.get_ushort(self._stream, self._offset)
self._offset, z = _helper.get_ushort(self._stream, self._offset)
yield "Unnamed Color {0}".format(i+1), Color.from_adobe(color_space, w, x, y, z)
class _ColorReaderWithName(_ColorReader):
def _read_name(self):
""" Word size = 2 """
# Marks the start of the string
self._offset, _ = _helper.validate_ushort_is_any(self._stream, (0, ), self._offset)
self._offset, length = _helper.get_ushort(self._stream, self._offset)
data = self._stream[self._offset:self._offset+(length-1)*2]
name = data.decode('utf-16-be')
self._offset += (length-1)*2
self._offset, _ = _helper.validate_ushort_is_any(self._stream, (0, ), self._offset)
return name
def __iter__(self):
colors = tuple(super(_ColorReaderWithName, self).__iter__()) # Version 1 information can be ignored
self._offset, _ = _helper.validate_ushort_is_any(self._stream, (2, ), self._offset)
self._offset, length = _helper.get_ushort(self._stream, self._offset)
if length != len(colors):
raise ValueError("Length of names is not the same as the length of colors")
for name, color in super(_ColorReaderWithName, self).__iter__():
name = self._read_name()
yield name, color
class Aco(object):
READERS = [ _ColorReader, _ColorReaderWithName ]
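    # Usage sketch (hypothetical swatch file; the stream argument is the raw
    # byte content of a Photoshop .aco file):
    #   with open("swatches.aco", "rb") as f:
    #       aco = Aco(f.read())
    #   for key in aco.keys():
    #       print(key, aco[key])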
def __init__(self, stream):
offset, self._version = _helper.validate_ushort_is_any(stream, (0, 1))
offset, self._color_count = _helper.get_ushort(stream, offset)
self._colors = [
]
self._key_mapping = {
}
self._read_colors(stream, offset)
def _read_colors(self, stream, offset):
reader = self.READERS[self._version](stream, offset, self._color_count)
index = 0
for name, color in reader:
self._colors.append(color)
self._key_mapping[name] = index
index += 1
def keys(self):
return self._key_mapping.keys()
@property
def length(self):
return self._color_count
def __getitem__(self, value):
# if we are not a number, assume a key, and look it up
if not isinstance(value, (int, long)):
value = self._key_mapping[value]
        return self._colors[value]

# --- end of file: adobecolor/_aco.py ---
import _helper
import colour
class Color(object):
def __init__(self, w, x, y, z):
self._w = w
self._x = x
self._y = y
self._z = z
self._convert()
def _convert(self):
""" Override this to convert w,x,y,z values into color local values """
pass
@property
def hex(self):
""" Override this to convert to an RGB hex string """
if hasattr(self, "_rgb"):
(r,g,b) = self._rgb
return "{0:02X}{1:02X}{2:02X}".format(r,g,b)
return "Unknown!"
@classmethod
def from_adobe(cls, color_space, w, x, y, z):
if color_space in _SPACE_MAPPER:
return _SPACE_MAPPER[color_space](w,x,y,z)
@property
def colorspace(self):
return type(self).__name__[1:-5]
@property
def value(self):
""" Override to display color value """
return ""
def __repr__(self):
return "<Color colorspace={0} value={1}>".format(
self.colorspace,
self.value
)
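# Usage sketch (color space 0 selects RGB; components are 16-bit values):
#   red = Color.from_adobe(0, 65535, 0, 0, 0)
#   red.colorspace   # 'RGB'
#   red.hex          # 'FF0000'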
class _RGBColor(Color):
def _convert(self):
self._r = int(self._w/256)
self._g = int(self._x/256)
self._b = int(self._y/256)
@property
def value(self):
mapping = (
("r", self._r),
("g", self._g),
("b", self._b),
)
return " ".join(("{0}={1:02X}".format(x,y) for x,y in mapping))
@property
def hex(self):
return "".join(map(
"{0:02X}".format, (self._r, self._g, self._b)
))
class _HSBColor(Color):
def _convert(self):
self._h = int(self._w/182.04)
self._s = int(self._x/655.35)
self._b = int(self._y/655.35)
@property
def value(self):
mapping = (
("h", self._h),
("s", self._s),
("b", self._b),
)
return " ".join(("{0}={1:d}".format(x,y) for x,y in mapping))
def _map(self, i, t, p, q, brightness):
mapper = (
(brightness, t, p),
(q, brightness, p),
(p, brightness, t),
(p, q, brightness),
(t, p, brightness),
(brightness, p, q)
)
        if (i >= len(mapper)):
data = mapper[-1]
else:
data = mapper[i]
return data
@property
def _rgb(self):
h = self._h/360.0
s = self._s/100.0
b = self._b/100.0
return map(lambda x: int(x*255.0), colour.hsl2rgb((h,s,b)))
class _CMYKColor(Color): pass
class _LabColor(Color): pass
class _GrayColor(Color): pass
class _WideCMYKColor(Color): pass
_SPACE_MAPPER = {
0: _RGBColor,
1: _HSBColor,
2: _CMYKColor,
7: _LabColor,
8: _GrayColor,
9: _WideCMYKColor
}

# --- end of file: adobecolor/_color.py ---
# AdobeConnect2Video

[](https://github.com/AliRezaBeigy/AdobeConnect2Video/blob/master/LICENSE)
[](http://makeapullrequest.com)


A handy tool to convert Adobe Connect zip data into a single video file
## Requirement
- Python 3
- FFMPEG
  - Download [ffmpeg](https://www.ffmpeg.org/download.html) and put the installation path into the PATH environment variable
## Quick Start
You need [ffmpeg](https://www.ffmpeg.org) to use this app, so simply download ffmpeg from the [official site](https://www.ffmpeg.org/download.html), then put the installation path into the PATH environment variable.
Now you should install AdobeConnect2Video as a global app:
```shell
$ pip install -U AdobeConnect2Video
or
$ python -m pip install -U AdobeConnect2Video
```
**Use `-U` option to update AdobeConnect2Video to the last version**
## Usage
Download the Adobe Connect zip data by appending the following path to the address of the online recorded class:
```url
output/ClassName.zip?download=zip
```
For example, if your online recorded class link is
```url
http://online.GGGGG.com/p81var0hcdk5/
```
you can download the zip data from the following link:
```url
http://online.GGGGG.com/p81var0hcdk5/output/ClassName.zip?download=zip
```
```shell
$ AdobeConnect2Video -i [classId]
```
Example:
If the data is extracted into a 'Course1' directory inside the 'data' directory and you want the output in the 'output' directory with a 480x470 resolution, you can use the following command:
```shell
$ AdobeConnect2Video -i Course1 -d data -o output -r 480x470
```
For more details:
```text
$ AdobeConnect2Video -h
usage: AdobeConnect2Video [-h] -i ID [-d DATA_PATH] [-o OUTPUT_PATH] [-r RESOLUTION]
options:
-h --help show this help message and exit
  -i --id ID           the name of the directory where the data is available
  -d --data-path       the path where the extracted data directory is located
  -o --output-path     the output path where the generated video is saved
  -r --resolution      the resolution of the output video
```
## Contributions
If you're interested in contributing to this project, first of all I would like to extend my heartfelt gratitude.
Please feel free to reach out to me if you need help. My Email: [email protected]
Telegram: [@AliRezaBeigy](https://t.me/AliRezaBeigyKhu)
## LICENSE
MIT
<!-- end of file: AdobeConnect2Video-1.0.0/README.md -->
from copy import deepcopy
import datetime
import json
from time import time
class RequestCreator:
"""
A class to help build a request for Adobe Analytics API 2.0 getReport
"""
template = {
"globalFilters": [],
"metricContainer": {
"metrics": [],
"metricFilters": [],
},
"settings": {
"countRepeatInstances": True,
"limit": 20000,
"page": 0,
"nonesBehavior": "exclude-nones",
},
"statistics": {"functions": ["col-max", "col-min"]},
"rsid": "",
}
def __init__(self, request: dict = None) -> None:
"""
        Instantiate the request creator.
Arguments:
request : OPTIONAL : overwrite the template with the definition provided.
"""
if request is not None:
            if isinstance(request, str) and '.json' in request:
with open(request,'r') as f:
request = json.load(f)
self.__request = deepcopy(request) or deepcopy(self.template)
self.__metricCount = len(self.__request["metricContainer"]["metrics"])
self.__metricFilterCount = len(
self.__request["metricContainer"].get("metricFilters", [])
)
self.__globalFiltersCount = len(self.__request["globalFilters"])
### Preparing some time statement.
today = datetime.datetime.now()
today_date_iso = today.isoformat().split("T")[0]
## should give '20XX-XX-XX'
tomorrow_date_iso = (
(today + datetime.timedelta(days=1)).isoformat().split("T")[0]
)
time_start = "T00:00:00.000"
time_end = "T23:59:59.999"
startToday_iso = today_date_iso + time_start
endToday_iso = today_date_iso + time_end
startMonth_iso = f"{today_date_iso[:-2]}01{time_start}"
tomorrow_iso = tomorrow_date_iso + time_start
next_month = today.replace(day=28) + datetime.timedelta(days=4)
last_day_month = next_month - datetime.timedelta(days=next_month.day)
last_day_month_date_iso = last_day_month.isoformat().split("T")[0]
last_day_month_iso = last_day_month_date_iso + time_end
thirty_days_prior_date_iso = (
(today - datetime.timedelta(days=30)).isoformat().split("T")[0]
)
thirty_days_prior_iso = thirty_days_prior_date_iso + time_start
seven_days_prior_iso_date = (
(today - datetime.timedelta(days=7)).isoformat().split("T")[0]
)
seven_days_prior_iso = seven_days_prior_iso_date + time_start
### assigning predefined dates:
self.dates = {
"thisMonth": f"{startMonth_iso}/{last_day_month_iso}",
"untilToday": f"{startMonth_iso}/{startToday_iso}",
"todayIncluded": f"{startMonth_iso}/{endToday_iso}",
"last30daysTillToday": f"{thirty_days_prior_iso}/{startToday_iso}",
"last30daysTodayIncluded": f"{thirty_days_prior_iso}/{tomorrow_iso}",
"last7daysTillToday": f"{seven_days_prior_iso}/{startToday_iso}",
"last7daysTodayIncluded": f"{seven_days_prior_iso}/{endToday_iso}",
}
self.today = today
def __repr__(self):
return json.dumps(self.__request, indent=4)
def __str__(self):
return json.dumps(self.__request, indent=4)
def addMetric(self, metricId: str = None) -> None:
"""
Add a metric to the template.
Arguments:
metricId : REQUIRED : The metric to add
"""
if metricId is None:
raise ValueError("Require a metric ID")
columnId = self.__metricCount
addMetric = {"columnId": str(columnId), "id": metricId}
if columnId == 0:
addMetric["sort"] = "desc"
self.__request["metricContainer"]["metrics"].append(addMetric)
self.__metricCount += 1
def removeMetrics(self) -> None:
"""
Remove all metrics.
"""
self.__request["metricContainer"]["metrics"] = []
self.__metricCount = 0
def getMetrics(self) -> list:
"""
return a list of the metrics used
"""
return [metric["id"] for metric in self.__request["metricContainer"]["metrics"]]
def setSearch(self,clause:str=None)->None:
"""
Add a search clause in the Analytics request.
Arguments:
clause : REQUIRED : String to tell what search clause to add.
Examples:
"( CONTAINS 'unspecified' ) OR ( CONTAINS 'none' ) OR ( CONTAINS '' )"
"( MATCH 'undefined' )"
"( NOT CONTAINS 'undefined' )"
"( BEGINS-WITH 'undefined' )"
"( BEGINS-WITH 'undefined' ) AND ( BEGINS-WITH 'none' )"
"""
if clause is None:
raise ValueError("Require a clause to add to the request")
self.__request["search"] = {
"clause" : clause
}
def removeSearch(self)->None:
"""
Remove the search associated with the request.
"""
del self.__request["search"]
def addMetricFilter(
self, metricId: str = None, filterId: str = None, metricIndex: int = None
) -> None:
"""
Add a filter to a metric.
Arguments:
metricId : REQUIRED : metric where the filter is added
filterId : REQUIRED : The filter to add.
when breakdown, use the following format for the value "dimension:::itemId"
metricIndex : OPTIONAL : If used, set the filter to the metric located on that index.
"""
if metricId is None:
raise ValueError("Require a metric ID")
if filterId is None:
raise ValueError("Require a filter ID")
filterIdCount = self.__metricFilterCount
if filterId.startswith("s") and "@AdobeOrg" in filterId:
filterType = "segment"
filter = {
"id": str(filterIdCount),
"type": filterType,
"segmentId": filterId,
}
elif filterId.startswith("20") and "/20" in filterId:
filterType = "dateRange"
filter = {
"id": str(filterIdCount),
"type": filterType,
"dateRange": filterId,
}
elif ":::" in filterId:
filterType = "breakdown"
dimension, itemId = filterId.split(":::")
filter = {
"id": str(filterIdCount),
"type": filterType,
"dimension": dimension,
"itemId": itemId,
}
else: ### case when it is predefined segments like "All_Visits"
filterType = "segment"
filter = {
"id": str(filterIdCount),
"type": filterType,
"segmentId": filterId,
}
if filterIdCount == 0:
self.__request["metricContainer"]["metricFilters"] = [filter]
else:
self.__request["metricContainer"]["metricFilters"].append(filter)
### adding filter to the metric
if metricIndex is None:
for metric in self.__request["metricContainer"]["metrics"]:
if metric["id"] == metricId:
if "filters" in metric.keys():
metric["filters"].append(str(filterIdCount))
else:
metric["filters"] = [str(filterIdCount)]
else:
metric = self.__request["metricContainer"]["metrics"][metricIndex]
if "filters" in metric.keys():
metric["filters"].append(str(filterIdCount))
else:
metric["filters"] = [str(filterIdCount)]
### incrementing the filter counter
self.__metricFilterCount += 1
def removeMetricFilter(self, filterId: str = None) -> None:
"""
remove a filter from a metric
Arguments:
            filterId : REQUIRED : The filter to remove.
when breakdown, use the following format for the value "dimension:::itemId"
"""
found = False ## flag
if filterId is None:
raise ValueError("Require a filter ID")
if ":::" in filterId:
filterId = filterId.split(":::")[1]
list_index = []
for metricFilter in self.__request["metricContainer"]["metricFilters"]:
if filterId in str(metricFilter):
list_index.append(metricFilter["id"])
found = True
## decrementing the filter counter
if found:
for metricFilterId in reversed(list_index):
del self.__request["metricContainer"]["metricFilters"][
int(metricFilterId)
]
for metric in self.__request["metricContainer"]["metrics"]:
if metricFilterId in metric.get("filters", []):
metric["filters"].remove(metricFilterId)
self.__metricFilterCount -= 1
def setLimit(self, limit: int = 100) -> None:
"""
        Specify the number of elements to retrieve. Default is 100.
Arguments:
limit : OPTIONAL : number of elements to return
"""
self.__request["settings"]["limit"] = limit
def setRepeatInstance(self, repeat: bool = True) -> None:
"""
Specify if repeated instances should be counted.
Arguments:
repeat : OPTIONAL : True or False (True by default)
"""
self.__request["settings"]["countRepeatInstances"] = repeat
def setNoneBehavior(self, returnNones: bool = True) -> None:
"""
Set the behavior of the None values in that request.
Arguments:
returnNones : OPTIONAL : True or False (True by default)
"""
if returnNones:
self.__request["settings"]["nonesBehavior"] = "return-nones"
else:
self.__request["settings"]["nonesBehavior"] = "exclude-nones"
def setDimension(self, dimension: str = None) -> None:
"""
Set the dimension to be used for reporting.
Arguments:
dimension : REQUIRED : the dimension to build your report on
"""
if dimension is None:
raise ValueError("A dimension must be passed")
self.__request["dimension"] = dimension
def setRSID(self, rsid: str = None) -> None:
"""
Set the reportSuite ID to be used for the reporting.
Arguments:
            rsid : REQUIRED : The reportSuite ID to be passed.
"""
if rsid is None:
raise ValueError("A reportSuite ID must be passed")
self.__request["rsid"] = rsid
def addGlobalFilter(self, filterId: str = None) -> None:
"""
Add a global filter to the report.
        NOTE : You need at least a dateRange filter in the global filters.
Arguments:
filterId : REQUIRED : The filter to add to the global filter.
example :
"s2120430124uf03102jd8021" -> segment
"2020-01-01T00:00:00.000/2020-02-01T00:00:00.000" -> dateRange
"""
if filterId.startswith("s") and "@AdobeOrg" in filterId:
filterType = "segment"
filter = {
"type": filterType,
"segmentId": filterId,
}
elif filterId.startswith("20") and "/20" in filterId:
filterType = "dateRange"
filter = {
"type": filterType,
"dateRange": filterId,
}
elif ":::" in filterId:
filterType = "breakdown"
dimension, itemId = filterId.split(":::")
filter = {
"type": filterType,
"dimension": dimension,
"itemId": itemId,
}
else: ### case when it is predefined segments like "All_Visits"
filterType = "segment"
filter = {
"type": filterType,
"segmentId": filterId,
}
### incrementing the count for globalFilter
self.__globalFiltersCount += 1
### adding to the globalFilter list
self.__request["globalFilters"].append(filter)
def updateDateRange(
self,
dateRange: str = None,
shiftingDays: int = None,
shiftingDaysEnd: int = None,
shiftingDaysStart: int = None,
) -> None:
"""
Update the dateRange filter on the globalFilter list
One of the 3 elements specified below is required.
Arguments:
dateRange : OPTIONAL : string representing the new dateRange string, such as: 2020-01-01T00:00:00.000/2020-02-01T00:00:00.000
shiftingDays : OPTIONAL : An integer, if you want to add or remove days from the current dateRange provided. Apply to end and beginning of dateRange.
So 2020-01-01T00:00:00.000/2020-02-01T00:00:00.000 with +2 will give 2020-01-03T00:00:00.000/2020-02-03T00:00:00.000
            shiftingDaysEnd : OPTIONAL : An integer, if you want to add or remove days from the last part of the current dateRange. Apply only to the end of the dateRange.
                So 2020-01-01T00:00:00.000/2020-02-01T00:00:00.000 with +2 will give 2020-01-01T00:00:00.000/2020-02-03T00:00:00.000
            shiftingDaysStart : OPTIONAL : An integer, if you want to add or remove days from the first part of the current dateRange. Apply only to the beginning of the dateRange.
So 2020-01-01T00:00:00.000/2020-02-01T00:00:00.000 with +2 will give 2020-01-03T00:00:00.000/2020-02-01T00:00:00.000
"""
pos = -1
for index, filter in enumerate(self.__request["globalFilters"]):
if filter["type"] == "dateRange":
pos = index
curDateRange = filter["dateRange"]
start, end = curDateRange.split("/")
start = datetime.datetime.fromisoformat(start)
end = datetime.datetime.fromisoformat(end)
if dateRange is not None and type(dateRange) == str:
for index, filter in enumerate(self.__request["globalFilters"]):
if filter["type"] == "dateRange":
pos = index
curDateRange = filter["dateRange"]
newDef = {
"type": "dateRange",
"dateRange": dateRange,
}
if shiftingDays is not None and type(shiftingDays) == int:
newStart = (start + datetime.timedelta(shiftingDays)).isoformat(
timespec="milliseconds"
)
newEnd = (end + datetime.timedelta(shiftingDays)).isoformat(
timespec="milliseconds"
)
newDef = {
"type": "dateRange",
"dateRange": f"{newStart}/{newEnd}",
}
elif shiftingDaysEnd is not None and type(shiftingDaysEnd) == int:
newEnd = (end + datetime.timedelta(shiftingDaysEnd)).isoformat(
timespec="milliseconds"
)
newDef = {
"type": "dateRange",
"dateRange": f"{start}/{newEnd}",
}
elif shiftingDaysStart is not None and type(shiftingDaysStart) == int:
newStart = (start + datetime.timedelta(shiftingDaysStart)).isoformat(
timespec="milliseconds"
)
newDef = {
"type": "dateRange",
"dateRange": f"{newStart}/{end}",
}
if pos > -1:
self.__request["globalFilters"][pos] = newDef
        else: ## in case there is no dateRange already
            self.__request["globalFilters"].append(newDef)
def removeGlobalFilter(self, index: int = None, filterId: str = None) -> None:
"""
Remove a specific filter from the globalFilter list.
You can use either the index of the list or the specific Id of the filter used.
Arguments:
            index : REQUIRED : index in the list returned
filterId : REQUIRED : the id of the filter to be removed (ex: filterId, dateRange)
"""
pos = -1
if index is not None:
del self.__request["globalFilters"][index]
elif filterId is not None:
for index, filter in enumerate(self.__request["globalFilters"]):
if filterId in str(filter):
pos = index
if pos > -1:
del self.__request["globalFilters"][pos]
### decrementing the count for globalFilter
self.__globalFiltersCount -= 1
def to_dict(self) -> None:
"""
Return the request definition
"""
return deepcopy(self.__request)
def save(self, fileName: str = None) -> None:
"""
save the request definition in a JSON file.
Argument:
            filename : OPTIONAL : Name of the file. (default aa_request_<timestamp>.json)
"""
fileName = fileName or f"aa_request_{int(time())}.json"
with open(fileName, "w") as f:
            f.write(json.dumps(self.to_dict(), indent=4))

# --- end of file: aanalytics2/requestCreator.py ---
import pandas as pd
import json
from typing import Union, IO
import time
from .requestCreator import RequestCreator
from copy import deepcopy
class Workspace:
"""
A class to return data from the getReport method.
"""
startDate = None
endDate = None
settings = None
def __init__(
self,
responseData: dict,
dataRequest: dict = None,
columns: dict = None,
summaryData: dict = None,
analyticsConnector: object = None,
reportType: str = "normal",
metrics: Union[dict, list] = None, ## for normal type, static report
metricFilters: dict = None,
resolveColumns: bool = True,
) -> None:
"""
Setup the different values from the response of the getReport
Argument:
responseData : REQUIRED : data returned & predigested by the getReport method.
dataRequest : REQUIRED : dataRequest containing the request
columns : REQUIRED : the columns element of the response.
            summaryData : REQUIRED : summary data containing the totals calculated by Analytics
            analyticsConnector : REQUIRED : analytics object connector.
            reportType : OPTIONAL : defines the type of report retrieved (normal, static, multi)
            metrics : OPTIONAL : dictionary of the column Ids for a normal report, or list of column names for a static report
            metricFilters : OPTIONAL : filter names mapped to the ids of the filters
            resolveColumns : OPTIONAL : if you want to resolve the column IDs into readable names
"""
for filter in dataRequest["globalFilters"]:
if filter["type"] == "dateRange":
self.startDate = filter["dateRange"].split("/")[0]
self.endDate = filter["dateRange"].split("/")[1]
self.dataRequest = RequestCreator(dataRequest)
self.requestSize = dataRequest["settings"]["limit"]
self.settings = dataRequest["settings"]
self.pageRequested = dataRequest["settings"]["page"] + 1
self.summaryData = summaryData
self.reportType = reportType
self.analyticsObject = analyticsConnector
## global filters resolution
filters = []
for filter in dataRequest["globalFilters"]:
if filter["type"] == "segment":
segmentId = filter.get("segmentId",None)
if segmentId is not None:
seg = self.analyticsObject.getSegment(filter["segmentId"])
filter["segmentName"] = seg["name"]
else:
context = filter.get('segmentDefinition',{}).get('container',{}).get('context')
description = filter.get('segmentDefinition',{}).get('container',{}).get('pred',{}).get('description')
listName = ','.join(filter.get('segmentDefinition',{}).get('container',{}).get('pred',{}).get('list',[]))
function = filter.get('segmentDefinition',{}).get('container',{}).get('pred',{}).get('func')
filter["segmentId"] = f"Dynamic: {context} {description} {function} {listName}"
filter["segmentName"] = f"{context} {description} {listName}"
filters.append(filter)
self.globalFilters = filters
self.metricFilters = metricFilters
if reportType == "normal" or reportType == "static":
df_init = pd.DataFrame(responseData).T
df_init = df_init.reset_index()
elif reportType == "multi":
df_init = responseData
if reportType == "normal":
columns_data = ["itemId"]
elif reportType == "static":
columns_data = ["SegmentName"]
### adding dimensions & metrics in columns names when reportType is "normal"
if "dimension" in dataRequest.keys() and reportType == "normal":
columns_data.append(dataRequest["dimension"])
### adding metrics in columns names
columnIds = columns["columnIds"]
# To get readable names of template metrics and Success Events, we need to get the full list of metrics for the Report Suite first.
# But we won't do this if there are no such metrics in the report.
if (resolveColumns is True) & (
len([metric for metric in metrics.values() if metric.startswith("metrics/")]) > 0):
rsMetricsList = self.analyticsObject.getMetrics(rsid=dataRequest["rsid"])
for col in columnIds:
metrics: dict = metrics ## case when dict is used
metricListName: list = metrics[col].split(":::")
if resolveColumns:
metricResolvedName = []
for metric in metricListName:
if metric.startswith("cm"):
cm = self.analyticsObject.getCalculatedMetric(metric)
metricName = cm.get("name",metric)
metricResolvedName.append(metricName)
elif metric.startswith("s"):
seg = self.analyticsObject.getSegment(metric)
segName = seg.get("name",metric)
metricResolvedName.append(segName)
elif metric.startswith("metrics/"):
metricName = rsMetricsList[rsMetricsList["id"] == metric]["name"].iloc[0]
metricResolvedName.append(metricName)
else:
metricResolvedName.append(metric)
colName = ":::".join(metricResolvedName)
columns_data.append(colName)
else:
columns_data.append(metrics[col])
elif reportType == "static":
metrics: list = metrics ## case when a list is used
columns_data.append("SegmentId")
columns_data += metrics
if df_init.empty == False and (
reportType == "static" or reportType == "normal"
):
df_init.columns = columns_data
self.columns = list(df_init.columns)
elif reportType == "multi":
self.columns = list(df_init.columns)
else:
self.columns = list(df_init.columns)
self.row_numbers = len(df_init)
self.dataframe = df_init
def __str__(self):
return json.dumps(
{
"startDate": self.startDate,
"endDate": self.endDate,
"globalFilters": self.globalFilters,
"totalRows": self.row_numbers,
"columns": self.columns,
},
indent=4,
)
def __repr__(self):
return json.dumps(
{
"startDate": self.startDate,
"endDate": self.endDate,
"globalFilters": self.globalFilters,
"totalRows": self.row_numbers,
"columns": self.columns,
},
indent=4,
)
def to_csv(
self,
filename: str = None,
delimiter: str = ",",
index: bool = False,
) -> IO:
"""
Save the result in a CSV
Arguments:
filename : OPTIONAL : name of the file
delimiter : OPTIONAL : delimiter of the CSV
index : OPTIONAL : should the index be included in the CSV (default False)
"""
        if filename is None:
            filename = f"cjapy_{int(time.time())}.csv"
        self.dataframe.to_csv(filename, sep=delimiter, index=index)
def to_json(self, filename: str = None, orient: str = "index") -> IO:
"""
Save the result to JSON
Arguments:
filename : OPTIONAL : name of the file
orient : OPTIONAL : orientation of the JSON
"""
        if filename is None:
            filename = f"cjapy_{int(time.time())}.json"
        self.dataframe.to_json(filename, orient=orient)
def breakdown(
self,
index: Union[int, str] = None,
dimension: str = None,
n_results: Union[int, str] = 10,
) -> object:
"""
Breakdown a specific index or value of the dataframe, by another dimension.
NOTE: breakdowns are possible only from normal reportType.
Return a workspace instance.
Arguments:
index : REQUIRED : Value to use as filter for the breakdown or index of the dataframe to use for the breakdown.
dimension : REQUIRED : dimension to report.
n_results : OPTIONAL : number of results you want to have on your breakdown. Default 10, can use "inf"
"""
if index is None or dimension is None:
raise ValueError(
"Require a value to use as breakdown and dimension to request"
)
        breakdown_dimension = list(self.dataframe.columns)[1]
        if type(index) == str:
            row: pd.Series = self.dataframe[self.dataframe.iloc[:, 1] == index]
            itemValue: str = row["itemId"].values[0]
        elif type(index) == int:
            itemValue = self.dataframe.loc[index, "itemId"]
        breakdown = f"{breakdown_dimension}:::{itemValue}"
new_request = RequestCreator(self.dataRequest.to_dict())
new_request.setDimension(dimension)
metrics = new_request.getMetrics()
for metric in metrics:
new_request.addMetricFilter(metricId=metric, filterId=breakdown)
        if n_results != "inf" and n_results < 20000:
            new_request.setLimit(n_results)
        report = self.analyticsObject.getReport2(
            new_request.to_dict(), n_results=n_results
        )
        return report

# --- end of file: aanalytics2/workspace.py ---
import json
import os
from pathlib import Path
from typing import Optional
import time
# Non standard libraries
from .config import config_object, header
def find_path(path: str) -> Optional[Path]:
"""Checks if the file denoted by the specified `path` exists and returns the Path object
for the file.
If the file under the `path` does not exist and the path denotes an absolute path, tries
to find the file by converting the absolute path to a relative path.
If the file does not exist with either the absolute and the relative path, returns `None`.
"""
if Path(path).exists():
return Path(path)
elif path.startswith('/') and Path('.' + path).exists():
return Path('.' + path)
elif path.startswith('\\') and Path('.' + path).exists():
return Path('.' + path)
else:
return None
def createConfigFile(destination: str = 'config_analytics_template.json',auth_type: str = "oauthV2",verbose: bool = False) -> None:
"""Creates a `config_admin.json` file with the pre-defined configuration format
to store the access data in under the specified `destination`.
Arguments:
destination : OPTIONAL : the name of the file + path if you want
auth_type : OPTIONAL : The type of Oauth type you want to use for your config file. Possible value: "jwt" or "oauthV2"
"""
json_data = {
'org_id': '<orgID>',
'client_id': "<APIkey>",
'secret': "<YourSecret>",
}
if auth_type == 'oauthV2':
json_data['scopes'] = "<scopes>"
elif auth_type == 'jwt':
json_data["tech_id"] = "<something>@techacct.adobe.com"
json_data["pathToKey"] = "<path/to/your/privatekey.key>"
if '.json' not in destination:
destination += '.json'
with open(destination, 'w') as cf:
cf.write(json.dumps(json_data, indent=4))
if verbose:
print(f" file created at this location : {os.getcwd()}{os.sep}{destination}.json")
def importConfigFile(path: str = None,auth_type:str=None) -> None:
"""Reads the file denoted by the supplied `path` and retrieves the configuration information
from it.
Arguments:
        path : REQUIRED : path to the configuration file. Can be either fully-qualified or relative.
auth_type : OPTIONAL : The type of Auth to be used by default. Detected if none is passed, OauthV2 takes precedence.
Possible values: "jwt" or "oauthV2"
Example of path value.
"config.json"
"./config.json"
"/my-folder/config.json"
"""
config_file_path: Optional[Path] = find_path(path)
if config_file_path is None:
raise FileNotFoundError(
f"Unable to find the configuration file under path `{path}`."
)
with open(config_file_path, 'r') as file:
provided_config = json.load(file)
provided_keys = provided_config.keys()
if 'api_key' in provided_keys:
## old naming for client_id
client_id = provided_config['api_key']
elif 'client_id' in provided_keys:
client_id = provided_config['client_id']
else:
raise RuntimeError(f"Either an `api_key` or a `client_id` should be provided.")
if auth_type is None:
if 'scopes' in provided_keys:
auth_type = 'oauthV2'
elif 'tech_id' in provided_keys and "pathToKey" in provided_keys:
auth_type = 'jwt'
args = {
"org_id" : provided_config['org_id'],
"secret" : provided_config['secret'],
"client_id" : client_id
}
if auth_type == 'oauthV2':
args["scopes"] = provided_config["scopes"].replace(' ','')
if auth_type == 'jwt':
args["tech_id"] = provided_config["tech_id"]
args["path_to_key"] = provided_config["pathToKey"]
configure(**args)
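
# Typical configuration flow (file name is a placeholder):
#   createConfigFile(destination="config_analytics.json", auth_type="oauthV2")
#   # fill in the generated JSON with your credentials, then:
#   importConfigFile("config_analytics.json")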
def configure(org_id: str = None,
tech_id: str = None,
secret: str = None,
client_id: str = None,
path_to_key: str = None,
private_key: str = None,
oauth: bool = False,
token: str = None,
scopes: str = None
):
"""Performs programmatic configuration of the API using provided values.
Arguments:
org_id : REQUIRED : Organization ID
tech_id : REQUIRED : Technical Account ID
secret : REQUIRED : secret generated for your connection
client_id : REQUIRED : The client_id (old api_key) provided by the JWT connection.
path_to_key : REQUIRED : If you have a file containing your private key value.
private_key : REQUIRED : If you do not use a file but pass a variable directly.
oauth : OPTIONAL : If you wish to pass the token generated by oauth
token : OPTIONAL : If oauth set to True, you need to pass the token
scopes : OPTIONAL : If you use Oauth, you need to pass the scopes
"""
if not org_id:
raise ValueError("`org_id` must be specified in the configuration.")
if not client_id:
raise ValueError("`client_id` must be specified in the configuration.")
if not tech_id and oauth == False and not scopes:
raise ValueError("`tech_id` must be specified in the configuration.")
if not secret and oauth == False:
raise ValueError("`secret` must be specified in the configuration.")
if (not path_to_key and not private_key and oauth == False) and not scopes:
raise ValueError("scopes must be specified if Oauth setup.\n `pathToKey` or `private_key` must be specified in the configuration if JWT setup.")
config_object["org_id"] = org_id
config_object["client_id"] = client_id
header["x-api-key"] = client_id
config_object["tech_id"] = tech_id
config_object["secret"] = secret
config_object["pathToKey"] = path_to_key
config_object["private_key"] = private_key
config_object["scopes"] = scopes
# ensure the reset of the state by overwriting possible values from previous import.
config_object["date_limit"] = 0
config_object["token"] = ""
if oauth:
date_limit = int(time.time()) + (22 * 60 * 60)
config_object["date_limit"] = date_limit
config_object["token"] = token
header["Authorization"] = f"Bearer {token}"
def get_private_key_from_config(config: dict) -> str:
"""
Returns the private key directly or read a file to return the private key.
"""
private_key = config.get('private_key')
if private_key is not None:
return private_key
private_key_path = find_path(config['pathToKey'])
if private_key_path is None:
raise FileNotFoundError(f'Unable to find the private key under path `{config["pathToKey"]}`.')
with open(Path(private_key_path), 'r') as f:
private_key = f.read()
return private_key
def generateLoggingObject(
level:str="WARNING",
stream:bool=True,
file:bool=False,
filename:str="aanalytics2.log",
format:str="%(asctime)s::%(name)s::%(funcName)s::%(levelname)s::%(message)s::%(lineno)d"
)->dict:
"""
Generates a dictionary for the logging object with basic configuration.
You can find the information for the different possible values on the logging documentation.
https://docs.python.org/3/library/logging.html
Arguments:
        level : Level of the logger to display information (NOTSET, DEBUG, INFO, WARNING, ERROR, CRITICAL)
stream : If the logger should display print statements
file : If the logger should write the messages to a file
filename : name of the file where log are written
format : format of the log to be written.
"""
myObject = {
"level" : level,
"stream" : stream,
"file" : file,
"format" : format,
"filename":filename
}
    return myObject

# --- end of file: aanalytics2/configs.py ---
import json, os, re
import time, datetime
from concurrent import futures
from copy import deepcopy
from pathlib import Path
from typing import IO, Union, List
from collections import defaultdict
from itertools import tee
import logging
# Non standard libraries
import pandas as pd
from urllib import parse
from aanalytics2 import config, connector, token_provider
from .projects import *
from .requestCreator import RequestCreator
from .workspace import Workspace
JsonOrDataFrameType = Union[pd.DataFrame, dict]
JsonListOrDataFrameType = Union[pd.DataFrame, List[dict]]
def retrieveToken(verbose: bool = False, save: bool = False, **kwargs)->str:
"""
LEGACY retrieve token directly following the importConfigFile or Configure method.
"""
token_with_expiry = token_provider.get_jwt_token_and_expiry_for_config(config.config_object,**kwargs)
token = token_with_expiry['token']
config.config_object['token'] = token
config.config_object['date_limit'] = time.time() + token_with_expiry['expiry'] / 1000 - 500
config.header.update({'Authorization': f'Bearer {token}'})
if verbose:
print(f"token valid till : {time.ctime(time.time() + token_with_expiry['expiry'] / 1000)}")
return token
class Login:
"""
    Class to connect to the login company.
"""
loggingEnabled = False
logger = None
def __init__(self, config: dict = config.config_object, header: dict = config.header, retry: int = 0,loggingObject:dict=None) -> None:
"""
        Instantiate the Login class.
Arguments:
config : REQUIRED : dictionary with your configuration information.
header : REQUIRED : dictionary of your header.
retry : OPTIONAL : if you want to retry, the number of time to retry
loggingObject : OPTIONAL : If you want to set logging capability for your actions.
"""
if loggingObject is not None and sorted(["level","stream","format","filename","file"]) == sorted(list(loggingObject.keys())):
self.loggingEnabled = True
self.logger = logging.getLogger(f"{__name__}.login")
self.logger.setLevel(loggingObject["level"])
if type(loggingObject["format"]) == str:
formatter = logging.Formatter(loggingObject["format"])
elif type(loggingObject["format"]) == logging.Formatter:
formatter = loggingObject["format"]
if loggingObject["file"]:
fileHandler = logging.FileHandler(loggingObject["filename"])
fileHandler.setFormatter(formatter)
self.logger.addHandler(fileHandler)
if loggingObject["stream"]:
streamHandler = logging.StreamHandler()
streamHandler.setFormatter(formatter)
self.logger.addHandler(streamHandler)
self.connector = connector.AdobeRequest(
config_object=config, header=header, retry=retry,loggingEnabled=self.loggingEnabled,logger=self.logger)
self.header = self.connector.header
self.COMPANY_IDS = {}
self.retry = retry
def getCompanyId(self,verbose:bool=False) -> dict:
"""
        Retrieve the company IDs for later use in the other calls.
"""
if self.loggingEnabled:
self.logger.debug("getCompanyId start")
res = self.connector.getData(
"https://analytics.adobe.io/discovery/me", headers=self.header)
json_res = res
if self.loggingEnabled:
self.logger.debug(f"getCompanyId reponse: {json_res}")
try:
companies = json_res['imsOrgs'][0]['companies']
self.COMPANY_IDS = json_res['imsOrgs'][0]['companies']
return companies
except:
if verbose:
print("exception when trying to get companies with parameter 'all'")
print(json_res)
if self.loggingEnabled:
self.logger.error(f"Error trying to get companyId: {json_res}")
return None
def createAnalyticsConnection(self, companyId: str = None,loggingObject:dict=None) -> object:
"""
Returns an instance of the Analytics class so you can query the different elements from that instance.
Arguments:
companyId: REQUIRED : The globalCompanyId that you want to use in your connection
loggingObject : OPTIONAL : If you want to set logging capability for your actions.
the retry parameter set in the previous class instantiation will be used here.
"""
analytics = Analytics(company_id=companyId,
config_object=self.connector.config, header=self.header, retry=self.retry,loggingObject=loggingObject)
return analytics
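# Illustrative usage sketch (not part of the library); the company id below comes from
# the getCompanyId() response, whose entries carry a 'globalCompanyId' key.
# login = Login(retry=2)
# companies = login.getCompanyId()
# ana = login.createAnalyticsConnection(companyId=companies[0]['globalCompanyId'])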
class Analytics:
"""
Class that instantiate a connection to a single login company.
"""
# Endpoints
header = {"Accept": "application/json",
"Content-Type": "application/json",
"Authorization": "Bearer ",
"X-Api-Key": ""
}
_endpoint = 'https://analytics.adobe.io/api'
_getRS = '/collections/suites'
_getDimensions = '/dimensions'
_getMetrics = '/metrics'
_getSegments = '/segments'
_getCalcMetrics = '/calculatedmetrics'
_getDateRanges = '/dateranges'
_getReport = '/reports'
loggingEnabled = False
logger = None
def __init__(self, company_id: str = None, config_object: dict = config.config_object, header: dict = config.header,
retry: int = 0,loggingObject:dict=None):
"""
Instantiate the Analytics class.
The Analytics class will be automatically connected to the API 2.0.
You can review the connection details by looking into the connector instance.
"header", "company_id" and "endpoint_company" are attributes accessible for debugging.
Arguments:
company_id : REQUIRED : company ID retrieved by the getCompanyId
retry : OPTIONAL : Number of times you want to retry failed calls
loggingObject : OPTIONAL : logging object to log actions during runtime.
config_object : OPTIONAL : config object to be used for setting token (do not update if you do not know)
header : OPTIONAL : template header used for all requests (do not update if you do not know!)
"""
if company_id is None:
raise AttributeError(
'Expected "company_id" to be referenced.\nPlease ensure you pass the globalCompanyId when instantiating this class.')
if loggingObject is not None and sorted(["level","stream","format","filename","file"]) == sorted(list(loggingObject.keys())):
self.loggingEnabled = True
self.logger = logging.getLogger(f"{__name__}.analytics")
self.logger.setLevel(loggingObject["level"])
if type(loggingObject["format"]) == str:
formatter = logging.Formatter(loggingObject["format"])
elif type(loggingObject["format"]) == logging.Formatter:
formatter = loggingObject["format"]
if loggingObject["file"]:
fileHandler = logging.FileHandler(loggingObject["filename"])
fileHandler.setFormatter(formatter)
self.logger.addHandler(fileHandler)
if loggingObject["stream"]:
streamHandler = logging.StreamHandler()
streamHandler.setFormatter(formatter)
self.logger.addHandler(streamHandler)
self.connector = connector.AdobeRequest(
config_object=config_object, header=header, retry=retry,loggingEnabled=self.loggingEnabled,logger=self.logger)
self.header = self.connector.header
self.connector.header['x-proxy-global-company-id'] = company_id
self.header['x-proxy-global-company-id'] = company_id
self.endpoint_company = f"{self._endpoint}/{company_id}"
self.company_id = company_id
self.listProjectIds = []
self.projectsDetails = {}
self.segments = []
self.calculatedMetrics = []
try:
import importlib.resources as pkg_resources
pathLOGS = pkg_resources.path(
"aanalytics2", "eventType_usageLogs.pickle")
except ImportError:
try:
# Try backported to PY<37 `importlib_resources`.
import pkg_resources
pathLOGS = pkg_resources.resource_filename(
"aanalytics2", "eventType_usageLogs.pickle")
except Exception:
print('Empty LOGS_EVENT_TYPE attribute')
try:
with pathLOGS as f:
self.LOGS_EVENT_TYPE = pd.read_pickle(f)
except Exception:
self.LOGS_EVENT_TYPE = "no data"
def __str__(self)->str:
obj = {
"endpoint" : self.endpoint_company,
"companyId" : self.company_id,
"header" : self.header,
"token" : self.connector.config['token']
}
return json.dumps(obj,indent=4)
def __repr__(self)->str:
obj = {
"endpoint" : self.endpoint_company,
"companyId" : self.company_id,
"header" : self.header,
"token" : self.connector.config['token']
}
return json.dumps(obj,indent=4)
def refreshToken(self, token: str = None):
if token is None:
raise AttributeError(
'Expected "token" to be referenced.\nPlease ensure you pass the token.')
self.header['Authorization'] = "Bearer " + token
def decodeAArequests(self,file:IO=None,urls:Union[list,str]=None,save:bool=False,**kwargs)->pd.DataFrame:
"""
Takes either parameter to load Adobe Analytics request URLs and decomposes the requests into a dataframe that you can optionally save.
Arguments:
file : OPTIONAL : file referencing the different requests saved (excel, or txt)
urls : OPTIONAL : list of requests (or a single request) that you want to decode.
save : OPTIONAL : parameter to save your decode list into a csv file.
Returns a dataframe.
possible kwargs:
encoding : the type of encoding to decode the file
"""
if self.loggingEnabled:
self.logger.debug(f"Starting decodeAArequests")
if file is None and urls is None:
raise ValueError("Require at least file or urls to contains data")
if file is not None:
if '.txt' in file:
with open(file,'r',encoding=kwargs.get('encoding','utf-8')) as f:
urls = f.readlines() ## passing decoding to urls
elif '.xlsx' in file:
temp_df = pd.read_excel(file,header=None)
urls = list(temp_df[0]) ## passing decoding to urls
if urls is not None:
if type(urls) == str:
data = parse.parse_qsl(urls)
df = pd.DataFrame(data)
df.columns = ['index','request']
df.set_index('index',inplace=True)
if save:
df.to_csv(f'request_{int(time.time())}.csv')
return df
elif type(urls) == list: ## decoding list of strings
tmp_list = [parse.parse_qsl(data) for data in urls]
tmp_dfs = [pd.DataFrame(data) for data in tmp_list]
tmp_dfs2 = []
for index, df in enumerate(tmp_dfs):
df.columns = ['index',f"request {index+1}"]
## cleanup timestamp from request url, keeping only the tracking server
string = df.iloc[0,0]
match = re.search('http.*://(.+?)/s[0-9]+.*',string)
df.iloc[0,0] = match.group(1) if match is not None else string # tracking server
df.set_index('index',inplace=True)
tmp_dfs2.append(df)
df_full = pd.concat(tmp_dfs2,axis=1)
if save:
df_full.to_csv(f'requests_{int(time.time())}.csv')
return df_full
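# Illustrative usage sketch (hypothetical URL): decode tracking requests into a dataframe.
# hits = ["https://example.sc.omtrdc.net/b/ss/myrsid/1/s12345?AQB=1&pageName=home"]
# df_requests = ana.decodeAArequests(urls=hits, save=False)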
def getReportSuites(self, txt: str = None, rsid_list: str = None, limit: int = 100, extended_info: bool = False,
save: bool = False) -> pd.DataFrame:
"""
Get the reportSuite IDs data. Returns a dataframe of reportSuite name and report suite id.
Arguments:
txt : OPTIONAL : returns the reportSuites that match a specific text field
rsid_list : OPTIONAL : returns the reportSuites that match the list of rsids set
limit : OPTIONAL : how many reportSuites are retrieved per server call
extended_info : OPTIONAL : if set to True, returns all available columns. (bool : default False)
save : OPTIONAL : if set to True, it will save the list in a file. (Default False)
"""
if self.loggingEnabled:
self.logger.debug(f"Starting getReportSuite")
nb_error, nb_empty = 0, 0 # use for multi-thread loop
params = {}
params.update({'limit': str(limit)})
params.update({'page': '0'})
if txt is not None:
params.update({'rsidContains': str(txt)})
if rsid_list is not None:
params.update({'rsids': str(rsid_list)})
params.update(
{"expansion": "name,parentRsid,currency,calendarType,timezoneZoneinfo"})
if self.loggingEnabled:
self.logger.debug(f"parameters : {params}")
rsids = self.connector.getData(self.endpoint_company + self._getRS,
params=params, headers=self.header)
content = rsids['content']
if not extended_info:
list_content = [{'name': item['name'], 'rsid': item['rsid']}
for item in content]
df_rsids = pd.DataFrame(list_content)
else:
df_rsids = pd.DataFrame(content)
total_page = rsids['totalPages']
last_page = rsids['lastPage']
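# The remaining pages are fetched concurrently: one (url, params, header) triple per
# page, fanned out over at most 10 worker threads, and the 'content' of every
# response is then flattened back into a single list of reportSuite records.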
if not last_page: # more pages remain to be fetched
callsToMake = total_page
list_params = [{**params, 'page': page}
for page in range(1, callsToMake)]
list_urls = [self.endpoint_company +
self._getRS for x in range(1, callsToMake)]
listheaders = [self.header for x in range(1, callsToMake)]
workers = min(10, total_page)
with futures.ThreadPoolExecutor(workers) as executor:
res = executor.map(lambda x, y, z: self.connector.getData(
x, y, headers=z), list_urls, list_params, listheaders)
res = list(res)
list_data = [val for sublist in [r['content']
for r in res if 'content' in r.keys()] for val in sublist]
nb_error = sum(1 for elem in res if 'error_code' in elem.keys())
nb_empty = sum(1 for elem in res if 'content' in elem.keys() and len(
elem['content']) == 0)
if not extended_info:
list_append = [{'name': item['name'], 'rsid': item['rsid']}
for item in list_data]
df_append = pd.DataFrame(list_append)
else:
df_append = pd.DataFrame(list_data)
df_rsids = pd.concat([df_rsids, df_append], ignore_index=True) # DataFrame.append was removed in pandas 2.x
if save:
if self.loggingEnabled:
self.logger.debug(f"saving rsids : {params}")
df_rsids.to_csv('RSIDS.csv', sep='\t')
if nb_error > 0 or nb_empty > 0:
message = f'WARNING : Retrieved data are partial.\n{nb_error}/{len(list_urls) + 1} requests returned an error.\n{nb_empty}/{len(list_urls)} requests returned an empty response. \nTry to use filter to retrieve reportSuite or increase limit per request'
print(message)
if self.loggingEnabled:
self.logger.warning(message)
return df_rsids
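# Illustrative usage sketch (hypothetical filter): list the reportSuites whose rsid
# contains "mysite", with all available columns.
# df_rsids = ana.getReportSuites(txt='mysite', extended_info=True)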
def getVirtualReportSuites(self, extended_info: bool = False, limit: int = 100, filterIds: str = None,
idContains: str = None, segmentIds: str = None, save: bool = False) -> pd.DataFrame:
"""
Return a list of virtual reportSuites and their ids. It can contain more information if expansion is selected.
Arguments:
extended_info : OPTIONAL : boolean to retrieve the maximum of information.
limit : OPTIONAL : how many virtual reportSuites are retrieved per server call
filterIds : OPTIONAL : comma delimited list of virtual reportSuite ID to be retrieved.
idContains : OPTIONAL : element that should be contained in the Virtual ReportSuite Id
segmentIds : OPTIONAL : comma delimited list of segmentId contained in the VRSID
save : OPTIONAL : if set to True, it will save the list in a file. (Default False)
"""
if self.loggingEnabled:
self.logger.debug(f"Starting getVirtualReportSuites")
expansion_values = "globalCompanyKey,parentRsid,parentRsidName,timezone,timezoneZoneinfo,currentTimezoneOffset,segmentList,description,modified,isDeleted,dataCurrentAsOf,compatibility,dataSchema,sessionDefinition,curatedComponents,type"
params = {"limit": limit}
nb_error = 0
nb_empty = 0
list_urls = []
if extended_info:
params['expansion'] = expansion_values
if filterIds is not None:
params['filterByIds'] = filterIds
if idContains is not None:
params['idContains'] = idContains
if segmentIds is not None:
params['segmentIds'] = segmentIds
path = f"{self.endpoint_company}/reportsuites/virtualreportsuites"
if self.loggingEnabled:
self.logger.debug(f"params: {params}")
vrsid = self.connector.getData(
path, params=params, headers=self.header)
content = vrsid['content']
if not extended_info:
list_content = [{'name': item['name'], 'vrsid': item['id']}
for item in content]
df_vrsids = pd.DataFrame(list_content)
else:
df_vrsids = pd.DataFrame(content)
total_page = vrsid['totalPages']
last_page = vrsid['lastPage']
if not last_page: # more pages remain to be fetched
callsToMake = total_page
list_params = [{**params, 'page': page}
for page in range(1, callsToMake)]
list_urls = [path for x in range(1, callsToMake)]
listheaders = [self.header for x in range(1, callsToMake)]
workers = min(10, total_page)
with futures.ThreadPoolExecutor(workers) as executor:
res = executor.map(lambda x, y, z: self.connector.getData(
x, y, headers=z), list_urls, list_params, listheaders)
res = list(res)
list_data = [val for sublist in [r['content']
for r in res if 'content' in r.keys()] for val in sublist]
nb_error = sum(1 for elem in res if 'error_code' in elem.keys())
nb_empty = sum(1 for elem in res if 'content' in elem.keys() and len(
elem['content']) == 0)
if not extended_info:
list_append = [{'name': item['name'], 'vrsid': item['id']}
for item in list_data]
df_append = pd.DataFrame(list_append)
else:
df_append = pd.DataFrame(list_data)
df_vrsids = pd.concat([df_vrsids, df_append], ignore_index=True) # DataFrame.append was removed in pandas 2.x
if save:
df_vrsids.to_csv('VRSIDS.csv', sep='\t')
if nb_error > 0 or nb_empty > 0:
message = f'WARNING : Retrieved data are partial.\n{nb_error}/{len(list_urls) + 1} requests returned an error.\n{nb_empty}/{len(list_urls)} requests returned an empty response. \nTry to use filter to retrieve reportSuite or increase limit per request'
print(message)
if self.loggingEnabled:
self.logger.warning(message)
return df_vrsids
def getVirtualReportSuite(self, vrsid: str = None, extended_info: bool = False,
format: str = 'df') -> JsonOrDataFrameType:
"""
Return the information of a single virtual reportSuite ID, as a dataframe by default.
Arguments:
vrsid : REQUIRED : The virtual reportSuite to be retrieved
extended_info : OPTIONAL : boolean to add more information
format : OPTIONAL : format of the output. 2 values "df" for dataframe and "raw" for raw json.
"""
if vrsid is None:
raise Exception("require a Virtual ReportSuite ID")
if self.loggingEnabled:
self.logger.debug(f"Starting getVirtualReportSuite for {vrsid}")
expansion_values = "globalCompanyKey,parentRsid,parentRsidName,timezone,timezoneZoneinfo,currentTimezoneOffset,segmentList,description,modified,isDeleted,dataCurrentAsOf,compatibility,dataSchema,sessionDefinition,curatedComponents,type"
params = {}
if extended_info:
params['expansion'] = expansion_values
path = f"{self.endpoint_company}/reportsuites/virtualreportsuites/{vrsid}"
data = self.connector.getData(path, params=params, headers=self.header)
if format == "df":
data = pd.DataFrame({vrsid: data})
return data
def getVirtualReportSuiteComponents(self, vrsid: str = None, nan_value=""):
"""
Uses the getVirtualReportSuite function to get a VRS and returns
the VRS components for a VRS as a dataframe. VRS must have Component Curation enabled.
Arguments:
vrsid : REQUIRED : Virtual Report Suite ID
nan_value : OPTIONAL : how to handle empty cells, default = ""
"""
if self.loggingEnabled:
self.logger.debug(f"Starting getVirtualReportSuiteComponents")
vrs_data = self.getVirtualReportSuite(extended_info=True, vrsid=vrsid)
if "curatedComponents" not in vrs_data.index:
return pd.DataFrame()
components_cell = vrs_data[vrs_data.index ==
"curatedComponents"].iloc[0, 0]
return pd.DataFrame(components_cell).fillna(value=nan_value)
def createVirtualReportSuite(self, name: str = None, parentRsid: str = None, segmentList: list = None,
dataSchema: str = "Cache", data_dict: dict = None, **kwargs) -> dict:
"""
Create a new virtual report suite based on the information provided.
Arguments:
name : REQUIRED : name of the virtual reportSuite
parentRsid : REQUIRED : Parent reportSuite ID for the VRS
segmentList : REQUIRED : list of segment ids to be applied on the reportSuite.
dataSchema : REQUIRED : Type of schema used for the VRSID. (default "Cache")
data_dict : OPTIONAL : you can pass directly the dictionary.
"""
if self.loggingEnabled:
self.logger.debug(f"Starting createVirtualReportSuite")
path = f"{self.endpoint_company}/reportsuites/virtualreportsuites"
expansion_values = "globalCompanyKey,parentRsid,parentRsidName,timezone,timezoneZoneinfo,currentTimezoneOffset,segmentList,description,modified,isDeleted,dataCurrentAsOf,compatibility,dataSchema,sessionDefinition,curatedComponents,type"
params = {'expansion': expansion_values}
if data_dict is None:
body = {
"name": name,
"parentRsid": parentRsid,
"segmentList": segmentList,
"dataSchema": dataSchema,
"description": kwargs.get('description', '')
}
else:
if not all(key in data_dict.keys() for key in ('name', 'parentRsid', 'segmentList', 'dataSchema')):
if self.loggingEnabled:
self.logger.error(f"Missing one or more fundamental keys : name, parentRsid, segmentList, dataSchema")
raise Exception("Missing one or more fundamental keys : name, parentRsid, segmentList, dataSchema")
body = data_dict
res = self.connector.postData(
path, params=params, data=body, headers=self.header)
return res
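# Illustrative usage sketch (all ids are hypothetical placeholders):
# res = ana.createVirtualReportSuite(
#     name='VRS - mysite curated',
#     parentRsid='mycompany.mysite',
#     segmentList=['s1234_5f0000000000000000000000'],
# )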
def updateVirtualReportSuite(self, vrsid: str = None, data_dict: dict = None, **kwargs) -> dict:
"""
Updates a Virtual Report Suite based on a JSON-like dictionary (same structure as createVirtualReportSuite)
Note that to update components, you need to supply ALL components currently associated with this suite.
Supplying only the components you want to change will remove all others from the VR Suite!
Arguments:
vrsid : REQUIRED : The id of the virtual report suite to update
data_dict : a json-like dictionary of the vrs data to update
"""
if vrsid is None:
raise Exception("require a virtual reportSuite ID")
if self.loggingEnabled:
self.logger.debug(f"Starting updateVirtualReportSuite for {vrsid}")
path = f"{self.endpoint_company}/reportsuites/virtualreportsuites/{vrsid}"
body = data_dict
res = self.connector.putData(path, data=body, headers=self.header)
if self.loggingEnabled:
self.logger.debug(f"updateVirtualReportSuite response : {res}")
return res
def deleteVirtualReportSuite(self, vrsid: str = None) -> str:
"""
Delete a Virtual Report Suite based on the id passed.
Arguments:
vrsid : REQUIRED : The id of the virtual reportSuite to delete.
"""
if vrsid is None:
raise Exception("require a Virtual ReportSuite ID")
if self.loggingEnabled:
self.logger.debug(f"Starting deleteVirtualReportSuite for {vrsid}")
path = f"{self.endpoint_company}/reportsuites/virtualreportsuites/{vrsid}"
res = self.connector.deleteData(path, headers=self.header)
if self.loggingEnabled:
self.logger.debug(f"deleteVirtualReportSuite {vrsid} response : {res}")
return res
def validateVirtualReportSuite(self, name: str = None, parentRsid: str = None, segmentList: list = None,
dataSchema: str = "Cache", data_dict: dict = None, **kwargs) -> dict:
"""
Validate the object to create a new virtual report suite based on the information provided.
Arguments:
name : REQUIRED : name of the virtual reportSuite
parentRsid : REQUIRED : Parent reportSuite ID for the VRS
segmentList : REQUIRED : list of segment ids to be applied on the reportSuite.
dataSchema : REQUIRED : Type of schema used for the VRSID (default : Cache).
data_dict : OPTIONAL : you can pass directly the dictionary.
"""
if self.loggingEnabled:
self.logger.debug(f"Starting validateVirtualReportSuite")
path = f"{self.endpoint_company}/reportsuites/virtualreportsuites/validate"
expansion_values = "globalCompanyKey, parentRsid, parentRsidName, timezone, timezoneZoneinfo, currentTimezoneOffset, segmentList, description, modified, isDeleted, dataCurrentAsOf, compatibility, dataSchema, sessionDefinition, curatedComponents, type"
if data_dict is None:
body = {
"name": name,
"parentRsid": parentRsid,
"segmentList": segmentList,
"dataSchema": dataSchema,
"description": kwargs.get('description', '')
}
else:
if not all(key in data_dict.keys() for key in ('name', 'parentRsid', 'segmentList', 'dataSchema')):
raise Exception(
"Missing one or more fundamental keys : name, parentRsid, segmentList, dataSchema")
body = data_dict
res = self.connector.postData(path, data=body, headers=self.header)
if self.loggingEnabled:
self.logger.debug(f"validateVirtualReportSuite response : {res}")
return res
def getDimensions(self, rsid: str, tags: bool = False, description:bool=False, save=False, **kwargs) -> pd.DataFrame:
"""
Retrieve the list of dimensions from a specific reportSuite. Shrink columns to simplify output.
Returns the data frame of available dimensions.
Arguments:
rsid : REQUIRED : Report Suite ID from which you want the dimensions
tags : OPTIONAL : If you would like to have additional information, such as tags. (bool : default False)
description : OPTIONAL : If set to True, tries to add the description column. It may break the method.
save : OPTIONAL : If set to True, it will save the info in a csv file (bool : default False)
Possible kwargs:
full : Boolean : Doesn't shrink the number of columns if set to true
example : getDimensions(rsid,full=True)
"""
if self.loggingEnabled:
self.logger.debug(f"Starting getDimensions")
params = {}
if tags:
params.update({'expansion': 'tags'})
params.update({'rsid': rsid})
dims = self.connector.getData(self.endpoint_company +
self._getDimensions, params=params, headers=self.header)
df_dims = pd.DataFrame(dims)
columns = ['id', 'name', 'category', 'type',
'parent', 'pathable']
if description:
columns.append('description')
if kwargs.get('full', False):
new_cols = pd.DataFrame(df_dims.support.values.tolist(),
columns=['support_oberon', 'support_dw']) # extract list in column
new_df = df_dims.merge(new_cols, right_index=True, left_index=True)
new_df.drop(['reportable', 'support'], axis=1, inplace=True)
df_dims = new_df
else:
df_dims = df_dims[columns]
if save:
df_dims.to_csv(f'dimensions_{rsid}.csv')
return df_dims
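# Illustrative usage sketch (hypothetical rsid); getMetrics below follows the same
# calling pattern.
# df_dims = ana.getDimensions('mycompany.mysite', tags=True)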
def getMetrics(self, rsid: str, tags: bool = False, save=False, description:bool=False, dataGroup:bool=False, **kwargs) -> pd.DataFrame:
"""
Retrieve the list of metrics from a specific reportSuite. Shrink columns to simplify output.
Returns the data frame of available metrics.
Arguments:
rsid : REQUIRED : Report Suite ID from which you want the metrics (str)
tags : OPTIONAL : If you would like to have additional information, such as tags. (bool : default False)
description : OPTIONAL : If set to True, adds the description column. (bool : default False)
dataGroup : OPTIONAL : If set to True, adds the dataGroup column. Default False.
May break the report.
save : OPTIONAL : If set to True, it will save the info in a csv file (bool : default False)
Possible kwargs:
full : Boolean : Doesn't shrink the number of columns if set to true.
"""
if self.loggingEnabled:
self.logger.debug(f"Starting getMetrics")
params = {}
if tags:
params.update({'expansion': 'tags'})
params.update({'rsid': rsid})
metrics = self.connector.getData(self.endpoint_company +
self._getMetrics, params=params, headers=self.header)
df_metrics = pd.DataFrame(metrics)
columns = ['id', 'name', 'category', 'type',
'precision', 'segmentable']
if dataGroup:
columns.append('dataGroup')
if description:
columns.append('description')
if kwargs.get('full', False):
new_cols = pd.DataFrame(df_metrics.support.values.tolist(), columns=[
'support_oberon', 'support_dw'])
new_df = df_metrics.merge(
new_cols, right_index=True, left_index=True)
new_df.drop('support', axis=1, inplace=True)
df_metrics = new_df
else:
df_metrics = df_metrics[columns]
if save:
df_metrics.to_csv(f'metrics_{rsid}.csv', sep='\t')
return df_metrics
def getUsers(self, save: bool = False, **kwargs) -> pd.DataFrame:
"""
Retrieve the list of users for a login company. Returns a data frame.
Arguments:
save : OPTIONAL : Save the data in a file (bool : default False).
Possible kwargs:
limit : Number of results per request. Default 100.
expansion : string list such as "lastAccess,createDate"
"""
if self.loggingEnabled:
self.logger.debug(f"Starting getUsers")
list_urls = []
nb_error, nb_empty = 0, 0 # use for multi-thread loop
params = {'limit': kwargs.get('limit', 100)}
if kwargs.get("expansion", None) is not None:
params["expansion"] = kwargs.get("expansion", None)
path = "/users"
users = self.connector.getData(self.endpoint_company + path, params=params, headers=self.header)
data = users['content']
lastPage = users['lastPage']
if not lastPage: # more pages remain to be fetched
callsToMake = users['totalPages']
list_params = [{'limit': params['limit'], 'page': page}
for page in range(1, callsToMake)]
list_urls = [self.endpoint_company +
"/users" for x in range(1, callsToMake)]
listheaders = [self.header
for x in range(1, callsToMake)]
workers = min(10, len(list_params))
with futures.ThreadPoolExecutor(workers) as executor:
res = executor.map(lambda x, y, z: self.connector.getData(x, y, headers=z), list_urls,
list_params, listheaders)
res = list(res)
users_lists = [elem['content']
for elem in res if 'content' in elem.keys()]
nb_error = sum(1 for elem in res if 'error_code' in elem.keys())
nb_empty = sum(1 for elem in res if 'content' in elem.keys()
and len(elem['content']) == 0)
append_data = [val for sublist in [data for data in users_lists]
for val in sublist] # flatten list of list
data = data + append_data
df_users = pd.DataFrame(data)
columns = ['email', 'login', 'fullName', 'firstName', 'lastName', 'admin', 'loginId', 'imsUserId',
'createDate', 'lastAccess', 'title', 'disabled', 'phoneNumber', 'companyid']
df_users = df_users[columns]
df_users['createDate'] = pd.to_datetime(df_users['createDate'])
df_users['lastAccess'] = pd.to_datetime(df_users['lastAccess'])
if save:
df_users.to_csv(f'users_{int(time.time())}.csv', sep='\t')
if nb_error > 0 or nb_empty > 0:
print(
f'WARNING : Retrieved data are partial.\n{nb_error}/{len(list_urls) + 1} requests returned an error.\n{nb_empty}/{len(list_urls)} requests returned an empty response. \nTry to use filter to retrieve users or increase limit')
return df_users
def getUserMe(self,loginId:str=None)->dict:
"""
Retrieve the current user attached to the token used ("/users/me" endpoint).
Argument:
loginId : OPTIONAL : not used by the request; kept for backward compatibility.
"""
path = "/users/me"
res = self.connector.getData(self.endpoint_company + path)
return res
def getSegments(self, name: str = None, tagNames: str = None, inclType: str = 'all', rsids_list: list = None,
sidFilter: list = None, extended_info: bool = False, format: str = "df", save: bool = False,
verbose: bool = False, **kwargs) -> JsonListOrDataFrameType:
"""
Retrieve the list of segments. Returns a data frame.
Arguments:
name : OPTIONAL : Filter to only include segments that contains the name (str)
tagNames : OPTIONAL : Filter list to only include segments that contains one of the tags (string delimited with comma, can be list as well)
inclType : OPTIONAL : type of segments to be retrieved.(str) Possible values:
- all : Default value (all segments possibles)
- shared : shared segments
- template : template segments
- deleted : deleted segments
- internal : internal segments
- curatedItem : curated segments
rsids_list : OPTIONAL : Filter list to only include segments tied to specified RSID list (list)
sidFilter : OPTIONAL : Filter list to only include segments in the specified list (list)
extended_info : OPTIONAL : additional segment metadata fields to include on response (bool : default False)
if set to true, returns reportSuiteName, ownerFullName, modified, tags, compatibility, definition
format : OPTIONAL : defines the format returned by the query. (Default "df")
possible values :
"df" : default value, returns a dataframe
"raw" : returns a list of values, more or less what is returned by the server.
save : OPTIONAL : If set to True, it will save the info in a csv file (bool : default False)
verbose : OPTIONAL : If set to True, print some information
Possible kwargs:
limit : number of segments retrieved by request. default 500: Limited to 1000 by the AnalyticsAPI.
NOTE : Segment Endpoint doesn't support multi-threading. Default to 500.
"""
if self.loggingEnabled:
self.logger.debug(f"Starting getSegments")
limit = int(kwargs.get('limit', 500))
params = {'includeType': 'all', 'limit': limit}
if extended_info:
params.update(
{'expansion': 'reportSuiteName,ownerFullName,created,modified,tags,compatibility,definition,shares'})
if name is not None:
params.update({'name': str(name)})
if tagNames is not None:
if type(tagNames) == list:
tagNames = ','.join(tagNames)
params.update({'tagNames': tagNames})
if inclType != 'all':
params['includeType'] = inclType
if rsids_list is not None:
if type(rsids_list) == list:
rsids_list = ','.join(rsids_list)
params.update({'rsids': rsids_list})
if sidFilter is not None:
if type(sidFilter) == list:
sidFilter = ','.join(sidFilter)
params.update({'rsids': sidFilter})
data = []
lastPage = False
page_nb = 0
if verbose:
print("Starting requesting segments")
while not lastPage:
params['page'] = page_nb
segs = self.connector.getData(self.endpoint_company +
self._getSegments, params=params, headers=self.header)
data += segs['content']
lastPage = segs['lastPage']
page_nb += 1
if verbose and page_nb % 10 == 0:
print(f"request #{page_nb / 10}")
if format == "df":
segments = pd.DataFrame(data)
else:
segments = data
if save and format == "df":
segments.to_csv(f'segments_{int(time.time())}.csv', sep='\t')
if verbose:
print(
f'Saving data in file : {os.getcwd()}{os.sep}segments_{int(time.time())}.csv')
elif save and format == "raw":
with open(f"segments_{int(time.time())}.csv","w") as f:
f.write(json.dumps(segments,indent=4))
return segments
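# Illustrative usage sketch (hypothetical rsid): shared segments of one reportSuite,
# with full metadata, returned as a dataframe.
# df_segs = ana.getSegments(inclType='shared', rsids_list=['mycompany.mysite'], extended_info=True)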
def getSegment(self, segment_id: str = None,full:bool=False, *args) -> dict:
"""
Get a specific segment from the ID. Returns the object of the segment.
Arguments:
segment_id : REQUIRED : the segment id to retrieve.
full : OPTIONAL : Add all possible options
Possible args:
- "reportSuiteName" : string : to retrieve reportSuite attached to the segment
- "ownerFullName" : string : to retrieve ownerFullName attached to the segment
- "modified" : string : to retrieve when segment was modified
- "tags" : string : to retrieve tags attached to the segment
- "compatibility" : string : to retrieve which tool is compatible
- "definition" : string : definition of the segment
- "publishingStatus" : string : status for the segment
- "definitionLastModified" : string : last definition of the segment
- "categories" : string : categories of the segment
"""
ValidArgs = ["reportSuiteName", "ownerFullName", "modified", "tags", "compatibility",
"definition", "publishingStatus", "publishingStatus", "definitionLastModified", "categories"]
if segment_id is None:
raise Exception("Expected a segment id")
if self.loggingEnabled:
self.logger.debug(f"Starting getSegment for {segment_id}")
path = f"/segments/{segment_id}"
args = [element for element in args if element in ValidArgs] # args is a tuple; filtering avoids mutating while iterating
params = {'expansion': ','.join(args)}
if full:
params = {'expansion': ','.join(ValidArgs)}
res = self.connector.getData(self.endpoint_company + path, params=params, headers=self.header)
return res
def scanSegment(self,segment:Union[str,dict],verbose:bool=False)->dict:
"""
Return the dimensions, metrics and reportSuite used and the main scope of the segment.
Arguments:
segment : REQUIRED : either the ID of the segment or the full definition.
verbose : OPTIONAL : print some comment.
"""
if self.loggingEnabled:
self.logger.debug(f"Starting scanSegment")
if type(segment) == str:
if verbose:
print('retrieving segment definition')
defSegment = self.getSegment(segment,full=True)
elif type(segment) == dict:
defSegment = deepcopy(segment)
if 'definition' not in defSegment.keys():
raise KeyError('missing "definition" key ')
if verbose:
print('copied segment definition')
mydef = str(defSegment['definition'])
dimensions : list = re.findall("'(variables/.+?)'",mydef)
metrics : list = re.findall("'(metrics/.+?)'",mydef)
reportSuite = defSegment['rsid']
scope = re.search("'context': '(.+)'}[^'context']+",mydef)
res = {
'dimensions' : set(dimensions) if len(dimensions)>0 else set(),
'metrics' : set(metrics) if len(metrics)>0 else set(),
'rsid' : reportSuite,
'scope' : scope.group(1) if scope is not None else None
}
return res
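# Illustrative usage sketch (hypothetical segment id); the returned dict exposes the
# keys 'dimensions', 'metrics', 'rsid' and 'scope'.
# info = ana.scanSegment('s1234_5f0000000000000000000000')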
def createSegment(self, segmentJSON: dict = None) -> dict:
"""
Method that creates a new segment based on the dictionary passed to it.
Arguments:
segmentJSON : REQUIRED : the dictionary that represents the JSON statement for the segment.
More information at this address <https://adobedocs.github.io/analytics-2.0-apis/#/segments/segments_createSegment>
"""
if self.loggingEnabled:
self.logger.debug(f"starting createSegment")
if segmentJSON is None:
print('No segment data has been pushed')
return None
data = deepcopy(segmentJSON)
seg = self.connector.postData(
self.endpoint_company + self._getSegments,
data=data,
headers=self.header
)
return seg
def createSegmentValidate(self, segmentJSON: dict = None) -> object:
"""
Method that validates a new segment based on the dictionary passed to it.
Arguments:
segmentJSON : REQUIRED : the dictionary that represents the JSON statement for the segment.
More information at this address <https://adobedocs.github.io/analytics-2.0-apis/#/segments/segments_createSegment>
"""
if self.loggingEnabled:
self.logger.debug(f"starting createSegmentValidate")
if segmentJSON is None:
print('No segment data has been pushed')
return None
data = deepcopy(segmentJSON)
path = "/segments/validate"
seg = self.connector.postData(self.endpoint_company +path,data=data)
return seg
def updateSegment(self, segmentID: str = None, segmentJSON: dict = None) -> object:
"""
Method that updates a specific segment based on the dictionary passed to it.
Arguments:
segmentID : REQUIRED : Segment ID to be updated
segmentJSON : REQUIRED : the dictionary that represents the JSON statement for the segment.
"""
if self.loggingEnabled:
self.logger.debug(f"starting updateSegment")
if segmentJSON is None or segmentID is None:
print('No segment or segmentID data has been pushed')
if self.loggingEnabled:
self.logger.error(f"No segment or segmentID data has been pushed")
return None
data = deepcopy(segmentJSON)
seg = self.connector.putData(
self.endpoint_company + self._getSegments + '/' + segmentID,
data=data,
headers=self.header
)
return seg
def deleteSegment(self, segmentID: str = None) -> object:
"""
Method that deletes a specific segment based on the ID passed.
Arguments:
segmentID : REQUIRED : Segment ID to be deleted
"""
if segmentID is None:
print('No segmentID data has been pushed')
return None
if self.loggingEnabled:
self.logger.debug(f"starting deleteSegment for {segmentID}")
seg = self.connector.deleteData(self.endpoint_company +
self._getSegments + '/' + segmentID, headers=self.header)
return seg
def getCalculatedMetrics(
self,
name: str = None,
tagNames: str = None,
inclType: str = 'all',
rsids_list: list = None,
extended_info: bool = False,
save=False,
format:str='df',
**kwargs
) -> pd.DataFrame:
"""
Retrieve the list of calculated metrics. Returns a data frame.
Arguments:
name : OPTIONAL : Filter to only include calculated metrics that contains the name (str)
tagNames : OPTIONAL : Filter list to only include calculated metrics that contains one of the tags (string delimited with comma, can be list as well)
inclType : OPTIONAL : type of calculated Metrics to be retrieved. (str) Possible values:
- all : Default value (all calculated metrics possibles)
- shared : shared calculated metrics
- template : template calculated metrics
rsids_list : OPTIONAL : Filter list to only include calculated metrics tied to specified RSID list (list)
extended_info : OPTIONAL : additional calculated metric metadata fields to include in the response (bool : default False)
additional infos: reportSuiteName, definition, ownerFullName, modified, tags, compatibility
save : OPTIONAL : If set to True, it will save the info in a csv file (Default False)
format : OPTIONAL : format of the output. 2 values "df" for dataframe and "raw" for raw json.
Possible kwargs:
limit : number of segments retrieved by request. default 500: Limited to 1000 by the AnalyticsAPI.(int)
"""
if self.loggingEnabled:
self.logger.debug(f"starting getCalculatedMetrics")
limit = int(kwargs.get('limit', 500))
params = {'includeType': inclType, 'limit': limit}
if name is not None:
params.update({'name': str(name)})
if tagNames is not None:
if type(tagNames) == list:
tagNames = ','.join(tagNames)
params.update({'tagNames': tagNames})
if inclType != 'all':
params['includeType'] = inclType
if rsids_list is not None:
if type(rsids_list) == list:
rsids_list = ','.join(rsids_list)
params.update({'rsids': rsids_list})
if extended_info:
params.update(
{'expansion': 'reportSuiteName,definition,ownerFullName,modified,tags,categories,compatibility,shares'})
metrics = self.connector.getData(self.endpoint_company +
self._getCalcMetrics, params=params)
data = metrics['content']
lastPage = metrics['lastPage']
if not lastPage: # more pages remain to be fetched
page_nb = 0
while not lastPage:
page_nb += 1
params['page'] = page_nb
metrics = self.connector.getData(self.endpoint_company +
self._getCalcMetrics, params=params, headers=self.header)
data += metrics['content']
lastPage = metrics['lastPage']
if format == "raw":
if save:
with open(f'calculated_metrics_{int(time.time())}.json','w') as f:
f.write(json.dumps(data,indent=4))
return data
df_calc_metrics = pd.DataFrame(data)
if save:
df_calc_metrics.to_csv(f'calculated_metrics_{int(time.time())}.csv', sep='\t')
return df_calc_metrics
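# Illustrative usage sketch: all calculated metrics with their definitions, as a dataframe.
# df_cm = ana.getCalculatedMetrics(extended_info=True, format='df')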
def getCalculatedMetric(self,calculatedMetricId:str=None,full:bool=True)->dict:
"""
Return a dictionary of the calculated metric requested.
Arguments:
calculatedMetricId : REQUIRED : The calculated metric ID to be retrieved.
full : OPTIONAL : additional calculated metric metadata fields to include in the response (bool : default True)
additional infos: reportSuiteName, definition, ownerFullName, modified, tags, compatibility
"""
if calculatedMetricId is None:
raise ValueError("Require a calculated metrics ID")
if self.loggingEnabled:
self.logger.debug(f"starting getCalculatedMetric for {calculatedMetricId}")
params = {}
if full:
params.update({'expansion': 'reportSuiteName,definition,ownerFullName,modified,tags,categories,compatibility'})
path = f"/calculatedmetrics/{calculatedMetricId}"
res = self.connector.getData(self.endpoint_company+path,params=params)
return res
def scanCalculatedMetric(self,calculatedMetric:Union[str,dict],verbose:bool=False)->dict:
"""
Return a dictionary of metrics and dimensions used in the calculated metrics.
"""
if self.loggingEnabled:
self.logger.debug(f"starting scanCalculatedMetric")
if type(calculatedMetric) == str:
if verbose:
print('retrieving calculated metrics definition')
cm = self.getCalculatedMetric(calculatedMetric,full=True)
elif type(calculatedMetric) == dict:
cm = deepcopy(calculatedMetric)
if 'definition' not in cm.keys():
raise KeyError('missing "definition" key')
if verbose:
print('copied calculated metrics definition')
mydef = str(cm['definition'])
segments:list = cm['compatibility'].get('segments',[])
res = {"dimensions":[],'metrics':[]}
for segment in segments:
if verbose:
print(f"retrieving segment {segment} definition")
tmp:dict = self.scanSegment(segment)
res['dimensions'] += [dim for dim in tmp['dimensions']]
res['metrics'] += [met for met in tmp['metrics']]
metrics : list = re.findall("'(metrics/.+?)'",mydef)
res['metrics'] += metrics
res['rsid'] = cm['rsid']
res['metrics'] = set(res['metrics']) if len(res['metrics'])>0 else set()
res['dimensions'] = set(res['dimensions']) if len(res['dimensions'])>0 else set()
return res
def createCalculatedMetric(self, metricJSON: dict = None) -> dict:
"""
Method that creates a specific calculated metric based on the dictionary passed to it.
Arguments:
metricJSON : REQUIRED : Calculated Metrics information to create. (Required: name, definition, rsid)
More information can be found at this address https://adobedocs.github.io/analytics-2.0-apis/#/calculatedmetrics/calculatedmetrics_createCalculatedMetric
"""
if self.loggingEnabled:
self.logger.debug(f"starting createCalculatedMetric")
if metricJSON is None or type(metricJSON) != dict:
if self.loggingEnabled:
self.logger.error(f'Expected a dictionary to create the calculated metrics')
raise Exception(
"Expected a dictionary to create the calculated metrics")
if not all(key in metricJSON.keys() for key in ('name', 'definition', 'rsid')):
if self.loggingEnabled:
self.logger.error(f'Expected "name", "definition" and "rsid" in the data')
raise KeyError(
'Expected "name", "definition" and "rsid" in the data')
cm = self.connector.postData(self.endpoint_company +
self._getCalcMetrics, headers=self.header, data=metricJSON)
return cm
def createCalculatedMetricValidate(self,metricJSON: dict=None)->dict:
"""
Method that validates a specific calculated metric definition based on the dictionary passed to it.
Arguments:
metricJSON : REQUIRED : Calculated Metrics information to create. (Required: name, definition, rsid)
More information can be found at this address https://adobedocs.github.io/analytics-2.0-apis/#/calculatedmetrics/calculatedmetrics_createCalculatedMetric
"""
if self.loggingEnabled:
self.logger.debug(f"starting createCalculatedMetricValidate")
if metricJSON is None or type(metricJSON) != dict:
raise Exception(
"Expected a dictionary to create the calculated metrics")
if not all(key in metricJSON.keys() for key in ('name', 'definition', 'rsid')):
if self.loggingEnabled:
self.logger.error(f'Expected "name", "definition" and "rsid" in the data')
raise KeyError(
'Expected "name", "definition" and "rsid" in the data')
path = "/calculatedmetrics/validate"
cm = self.connector.postData(self.endpoint_company+path, data=metricJSON)
return cm
def updateCalculatedMetric(self, calcID: str = None, calcJSON: dict = None) -> object:
"""
Method that updates a specific Calculated Metrics based on the dictionary passed to it.
Arguments:
calcID : REQUIRED : Calculated Metric ID to be updated
calcJSON : REQUIRED : the dictionary that represents the JSON statement for the calculated metric.
"""
if calcJSON is None or calcID is None:
print('No calcMetric or calcMetric JSON data has been passed')
return None
if self.loggingEnabled:
self.logger.debug(f"starting updateCalculatedMetric for {calcID}")
data = deepcopy(calcJSON)
cm = self.connector.putData(
self.endpoint_company + self._getCalcMetrics + '/' + calcID,
data=data,
headers=self.header
)
return cm
def deleteCalculatedMetric(self, calcID: str = None) -> object:
"""
Method that deletes a specific calculated metric based on the id passed.
Arguments:
calcID : REQUIRED : Calculated Metrics ID to be deleted
"""
if calcID is None:
print('No calculated metrics data has been passed')
return None
if self.loggingEnabled:
self.logger.debug(f"starting deleteCalculatedMetric for {calcID}")
cm = self.connector.deleteData(
self.endpoint_company + self._getCalcMetrics + '/' + calcID,
headers=self.header
)
return cm
def getDateRanges(self, extended_info: bool = False, save: bool = False, includeType: str = 'all',verbose:bool=False,
**kwargs) -> pd.DataFrame:
"""
Get the list of date ranges available for the user.
Arguments:
extended_info : OPTIONAL : additional segment metadata fields to include on response
additional infos: reportSuiteName, ownerFullName, modified, tags, compatibility, definition
save : OPTIONAL : If set to True, it will save the info in a csv file (Default False)
includeType : Include additional date ranges not owned by user. The "all" option takes precedence over "shared"
Possible values are all, shared, templates. You can add all of them as comma separated string.
Possible kwargs:
limit : number of segments retrieved by request. default 500: Limited to 1000 by the AnalyticsAPI.
full : Boolean : Doesn't shrink the number of columns if set to true
"""
if self.loggingEnabled:
self.logger.debug(f"starting getDateRanges")
limit = int(kwargs.get('limit', 500))
includeType = includeType.split(',')
params = {'limit': limit, 'includeType': includeType}
if extended_info:
params.update(
{'expansion': 'definition,ownerFullName,modified,tags'})
dateRanges = self.connector.getData(
self.endpoint_company + self._getDateRanges,
params=params,
headers=self.header,
verbose=verbose
)
data = dateRanges['content']
df_dates = pd.DataFrame(data)
if save:
df_dates.to_csv('date_range.csv', index=False)
return df_dates
def getDateRange(self,dateRangeID:str=None)->dict:
"""
Get a specific Date Range based on the ID
Arguments:
dateRangeID : REQUIRED : the date range ID to be retrieved.
"""
if dateRangeID is None:
raise ValueError("No date range ID has been passed")
if self.loggingEnabled:
self.logger.debug(f"starting getDateRange with ID: {dateRangeID}")
params ={
"expansion":"definition,ownerFullName,modified,tags"
}
dr = self.connector.getData(
self.endpoint_company + f"{self._getDateRanges}/{dateRangeID}",
params=params
)
return dr
def updateDateRange(self, dateRangeID: str = None, dateRangeJSON: dict = None) -> dict:
"""
Method that updates a specific Date Range based on the dictionary passed to it.
Arguments:
dateRangeID : REQUIRED : Date Range ID to be updated
dateRangeJSON : REQUIRED : the dictionary that represents the JSON statement for the date Range.
"""
if dateRangeJSON is None or dateRangeID is None:
raise ValueError("No date range or date range JSON data have been passed")
if self.loggingEnabled:
self.logger.debug(f"starting updateDateRange")
data = deepcopy(dateRangeJSON)
dr = self.connector.putData(
self.endpoint_company + self._getDateRanges + '/' + dateRangeID,
data=data,
headers=self.header
)
return dr
def deleteDateRange(self, dateRangeID: str = None) -> object:
"""
Method that deletes a specific date Range based on the id passed.
Arguments:
dateRangeID : REQUIRED : ID of Date Range to be deleted
"""
if dateRangeID is None:
print('No Date Range ID has been pushed')
return None
if self.loggingEnabled:
self.logger.debug(f"starting deleteDateRange for {dateRangeID}")
response = self.connector.deleteData(
self.endpoint_company + self._getDateRanges + '/' + dateRangeID,
headers=self.header
)
return response
def getCalculatedFunctions(self, **kwargs) -> pd.DataFrame:
"""
Returns the calculated metrics functions.
"""
if self.loggingEnabled:
self.logger.debug(f"starting getCalculatedFunctions")
path = "/calculatedmetrics/functions"
limit = int(kwargs.get('limit', 500))
params = {'limit': limit}
funcs = self.connector.getData(
self.endpoint_company + path,
params=params,
headers=self.header
)
df = pd.DataFrame(funcs)
return df
def getTags(self, limit: int = 100, **kwargs) -> list:
"""
Return the list of tags
Arguments:
limit : OPTIONAL : Number of tags to be returned per request. Default 100
"""
if self.loggingEnabled:
self.logger.debug(f"starting getTags")
path = "/componentmetadata/tags"
params = {'limit': limit}
if kwargs.get('page', False):
params['page'] = kwargs.get('page', 0)
res = self.connector.getData(self.endpoint_company + path, params=params, headers=self.header)
data = res['content']
if not res['lastPage']:
page = res['number'] + 1
data += self.getTags(limit=limit, page=page)
return data
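# Illustrative usage sketch: getTags pages recursively through the 'page' kwarg until
# 'lastPage' is reached and returns one flat list of tag dicts.
# all_tags = ana.getTags(limit=200)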
def getTag(self, tagId: str = None) -> dict:
"""
Return a tag by its ID.
Arguments:
tagId : REQUIRED : the Tag ID to be retrieved.
"""
if tagId is None:
raise Exception("Require a tag ID for this method.")
if self.loggingEnabled:
self.logger.debug(f"starting getTag for {tagId}")
path = f"/componentmetadata/tags/{tagId}"
res = self.connector.getData(self.endpoint_company + path, headers=self.header)
return res
def getComponentTagName(self, tagNames: str = None, componentType: str = None) -> dict:
"""
Given a comma separated list of tag names, return component ids associated with them.
Arguments:
tagNames : REQUIRED : Comma separated list of tag names.
componentType : REQUIRED : The component type to operate on.
Available values : segment, dashboard, bookmark, calculatedMetric, project, dateRange, metric, dimension, virtualReportSuite, scheduledJob, alert, classificationSet
"""
path = "/componentmetadata/tags/tagnames"
if tagNames is None:
raise Exception("Requires tag names to be provided")
if self.loggingEnabled:
self.logger.debug(f"starting getComponentTagName for {tagNames}")
if componentType is None:
raise Exception("Requires a Component Type to be provided")
params = {
"tagNames": tagNames,
"componentType": componentType
}
res = self.connector.getData(self.endpoint_company + path, params=params, headers=self.header)
return res
def searchComponentsTags(self, componentType: str = None, componentIds: list = None) -> dict:
"""
Search for the tags of a list of components by their ids.
Arguments:
componentType : REQUIRED : The component type to use in the search.
Available values : segment, dashboard, bookmark, calculatedMetric, project, dateRange, metric, dimension, virtualReportSuite, scheduledJob, alert, classificationSet
componentIds : REQUIRED : List of components Ids to use.
"""
if self.loggingEnabled:
self.logger.debug(f"starting searchComponentsTags")
if componentType is None:
raise Exception("ComponentType is required")
if componentIds is None or type(componentIds) != list:
raise Exception("componentIds is required as a list of ids")
path = "/componentmetadata/tags/component/search"
obj = {
"componentType": componentType,
"componentIds": componentIds
}
if self.loggingEnabled:
self.logger.debug(f"params {obj}")
res = self.connector.postData(self.endpoint_company + path, data=obj, headers=self.header)
return res
def createTags(self, data: list = None) -> dict:
"""
Create a new tag and apply that new tag to the passed components.
Arguments:
data : REQUIRED : list of the tag to be created with their component relation.
Example of data :
[
{
"id": 0,
"name": "string",
"description": "string",
"components": [
{
"componentType": "string",
"componentId": "string",
"tags": [
"Unknown Type: Tag"
]
}
]
}
]
"""
if self.loggingEnabled:
self.logger.debug(f"starting createTags")
if data is None:
raise Exception("Requires a list of tags to be created")
path = "โ/componentmetadataโ/tags"
if self.loggingEnabled:
self.logger.debug(f"data: {data}")
res = self.connector.postData(self.endpoint_company + path, data=data, headers=self.header)
return res
def deleteTags(self, componentType: str = None, componentIds: str = None) -> str:
"""
Delete all tags from the component Type and the component ids specified.
Arguments:
componentIds : REQUIRED : the Comma-separated list of componentIds to operate on.
componentType : REQUIRED : The component type to operate on.
Available values : segment, dashboard, bookmark, calculatedMetric, project, dateRange, metric, dimension, virtualReportSuite, scheduledJob, alert, classificationSet
"""
if self.loggingEnabled:
self.logger.debug(f"starting deleteTags")
if componentType is None:
raise Exception("require a component type")
if componentIds is None:
raise Exception("require component ID(s)")
path = "/componentmetadata/tags"
params = {
"componentType": componentType,
"componentIds": componentIds
}
res = self.connector.deleteData(self.endpoint_company + path, params=params, headers=self.header)
return res
def deleteTag(self, tagId: str = None) -> str:
"""
Delete a Tag based on its id.
Arguments:
tagId : REQUIRED : The tag ID to be deleted.
"""
if tagId is None:
raise Exception("A tag ID is required")
if self.loggingEnabled:
self.logger.debug(f"starting deleteTag for {tagId}")
path = "โ/componentmetadataโ/tagsโ/{tagId}"
res = self.connector.deleteData(self.endpoint_company + path, headers=self.header)
return res
def getComponentTags(self, componentId: str = None, componentType: str = None) -> list:
"""
Given a componentId, return all tags associated with that component.
Arguments:
componentId : REQUIRED : The componentId to operate on. Currently this is just the segmentId.
componentType : REQUIRED : The component type to operate on.
segment, dashboard, bookmark, calculatedMetric, project, dateRange, metric, dimension, virtualReportSuite, scheduledJob, alert, classificationSet
"""
if self.loggingEnabled:
self.logger.debug(f"starting getComponentTags")
path = "/componentmetadata/tags/search"
if componentType is None:
raise Exception("require a component type")
if componentId is None:
raise Exception("require a component ID")
params = {"componentId": componentId, "componentType": componentType}
res = self.connector.getData(self.endpoint_company + path, params=params, headers=self.header)
return res
def updateComponentTags(self, data: list = None):
"""
Overwrite the component tags with the list sent.
Arguments:
data : REQUIRED : list of the components to be updated with their respective list of tag names.
Object looks like the following:
[
{
"componentType": "string",
"componentId": "string",
"tags": [
"Unknown Type: Tag"
]
}
]
"""
if self.loggingEnabled:
self.logger.debug(f"starting updateComponentTags")
if data is None or type(data) != list:
raise Exception("require list of update to be sent.")
path = "/componentmetadata/tags/tagitems"
res = self.connector.putData(self.endpoint_company + path, data=data, headers=self.header)
return res
def getScheduledJobs(self, includeType: str = "all", full: bool = True,limit:int=1000,format:str="df",verbose: bool = False) -> JsonListOrDataFrameType:
"""
Get Scheduled Projects. You can retrieve the projectID out of the tasks column to see which workspace a schedule is attached to.
Arguments:
includeType : OPTIONAL : By default gets all non-expired or deleted projects. (default "all")
You can specify e.g. "all,shared,expired,deleted" to get more.
Active schedules always get exported, so you need to use the `rsLocalExpirationTime` parameter in the `schedule` column to e.g. see which schedules are expired
full : OPTIONAL : By default True. It returns the following additional information "ownerFullName,groups,tags,sharesFullName,modified,favorite,approved,scheduledItemName,scheduledUsersFullNames,deletedReason"
limit : OPTIONAL : Number of element retrieved by request (default max 1000)
format : OPTIONAL : Define the format you want to output the result. Default "df" for dataframe, other option "raw"
verbose: OPTIONAL : set to True for debug output
"""
if self.loggingEnabled:
self.logger.debug(f"starting getScheduledJobs")
params = {"includeType": includeType,
"pagination": True,
"locale": "en_US",
"page": 0,
"limit": limit
}
if full is True:
params["expansion"] = "ownerFullName,groups,tags,sharesFullName,modified,favorite,approved,scheduledItemName,scheduledUsersFullNames,deletedReason"
path = "/scheduler/scheduler/scheduledjobs/"
if verbose:
print(f"Getting Scheduled Jobs with Parameters {params}")
res = self.connector.getData(self.endpoint_company + path, params=params, headers=self.header)
if res.get("content") is None:
raise Exception(f"Scheduled Job had no content in response. Parameters were: {params}")
# get Scheduled Jobs data into Data Frame
data = res.get("content")
last_page = res.get("lastPage",True)
total_el = res.get("totalElements")
number_el = res.get("numberOfElements")
if verbose:
print(f"Last Page {last_page}, total elements: {total_el}, number_el: {number_el}")
# iterate through pages if not on last page yet
while not last_page:
if verbose:
print(f"last_page is {last_page}, next round")
params["page"] += 1
res = self.connector.getData(self.endpoint_company + path, params=params, headers=self.header)
data += res.get("content")
last_page = res.get("lastPage",True)
if format == "df":
df = pd.DataFrame(data)
return df
return data
def getScheduledJob(self,scheduleId:str=None)->dict:
"""
Return a scheduled project definition.
Arguments:
scheduleId : REQUIRED : Schedule project ID
"""
if scheduleId is None:
raise ValueError("A schedule ID is required")
if self.loggingEnabled:
self.logger.debug(f"starting getScheduledJob with ID: {scheduleId}")
path = f"/scheduler/scheduler/scheduledjobs/{scheduleId}"
params = {
'expansion': 'modified,favorite,approved,tags,shares,sharesFullName,reportSuiteName,schedule,triggerObject,tasks,deliverySetting'}
res = self.connector.getData(self.endpoint_company + path, params=params)
return res
def createScheduledJob(self,projectId:str=None,type:str="pdf",schedule:dict=None,loginIds:list=None,emails:list=None,groupIds:list=None,width:int=None)->dict:
"""
Creates a schedule job based on the information provided as arguments.
Expiration will be in one year by default.
Arguments:
projectId : REQUIRED : The workspace project ID to send.
type : REQUIRED : how to send the project, default "pdf"
schedule : REQUIRED : object to specify the schedule used.
example: {
"hour": 10,
"minute": 45,
"second": 25,
"interval": 1,
"type": "daily"
}
{
'type': 'weekly',
'second': 53,
'minute': 0,
'hour': 8,
'daysOfWeek': [2],
'interval': 1
}
{
'type': 'monthly',
'second': 53,
'minute': 30,
'hour': 16,
'dayOfMonth': 21,
'interval': 1
}
loginIds : REQUIRED : A list of login ID of the users that are recipient of the report. It can be retrieved by the getUsers method.
emails : OPTIONAL : If users are not registered in AA, you can specify a list of email addresses.
groupIds : OPTIONAL : Group Id to send the report to.
width : OPTIONAL : width of the report to be sent. (Minimum 800)
"""
if self.loggingEnabled:
self.logger.debug(f"starting createScheduleJob")
path = f"/scheduler/scheduler/scheduledjobs/"
dateNow = datetime.datetime.now()
nowDateTime = datetime.datetime.isoformat(dateNow,timespec='seconds')
futureDate = datetime.datetime.isoformat(dateNow.replace(year=dateNow.year + 1),timespec='seconds')
deliveryId_res = self.createDeliverySetting(loginIds=loginIds, emails=emails,groupIds=groupIds)
deliveryId = deliveryId_res.get('id','')
if deliveryId == "":
if self.loggingEnabled:
self.logger.error(f"erro creating the delivery ID")
self.logger.error(json.dumps(deliveryId_res))
raise Exception("Error creating the delivery ID")
me = self.getUserMe()
projectDetail = self.getProject(projectId)
data = {
"approved" : False,
"complexity":{},
"curatedItem":False,
"description" : "",
"favorite" : False,
"hidden":False,
"internal":False,
"intrinsicIdentity" : False,
"isDeleted":False,
"isDisabled":False,
"locale":"en_US",
"noAccess":False,
"template":False,
"version":"1.0.1",
"rsid":projectDetail.get('rsid',''),
"schedule":{
"rsLocalStartTime":nowDateTime,
"rsLocalExpirationTime":futureDate,
"triggerObject":schedule
},
"tasks":[
{
"tasktype":"generate",
"tasksubtype":"analysisworkspace",
"requestParams":{
"artifacts":[type],
"imsOrgId": self.connector.config['org_id'],
"imsUserId": me.get('imsUserId',''),
"imsUserName":"API",
"projectId" : projectDetail.get('id'),
"projectName" : projectDetail.get('name')
}
},
{
"tasktype":"deliver",
"artifactType":type,
"deliverySettingId": deliveryId,
}
]
}
if width is not None and width >= 800:
data['tasks'][0]['requestParams']['width'] = width
res = self.connector.postData(self.endpoint_company+path,data=data)
return res
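# Example usage (illustrative sketch, not part of the library): assuming `ags` is an
# authenticated instance of this class, and "myProjectId" plus the login ID below are
# placeholders for real values retrieved via getProjects() and getUsers().
#
# dailySchedule = {
#     "hour": 10,
#     "minute": 0,
#     "second": 0,
#     "interval": 1,
#     "type": "daily",
# }
# res = ags.createScheduledJob(
#     projectId="myProjectId",
#     type="pdf",
#     schedule=dailySchedule,
#     loginIds=[200225502],  # hypothetical login ID
# )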
def updateScheduledJob(self,scheduleId:str=None,scheduleObj:dict=None)->dict:
"""
Update a schedule Job based on its id and the definition attached to it.
Arguments:
scheduleId : REQUIRED : the jobs to be updated.
scheduleObj : REQUIRED : The object to replace the current definition.
"""
if scheduleId is None:
raise ValueError("A schedule ID is required")
if scheduleObj is None:
raise ValueError('A schedule Object is required')
if self.loggingEnabled:
self.logger.debug(f"starting updateScheduleJob with ID: {scheduleId}")
path = f"/scheduler/scheduler/scheduledjobs/{scheduleId}"
res = self.connector.putData(self.endpoint_company+path,data=scheduleObj)
return res
def deleteScheduledJob(self,scheduleId:str=None)->dict:
"""
Delete a schedule project based on its ID.
Arguments:
scheduleId : REQUIRED : the schedule ID to be deleted.
"""
if scheduleId is None:
raise Exception("A schedule ID is required for deletion")
if self.loggingEnabled:
self.logger.debug(f"starting deleteScheduleJob with ID: {scheduleId}")
path = f"/scheduler/scheduler/scheduledjobs/{scheduleId}"
res = self.connector.deleteData(self.endpoint_company + path)
return res
def getDeliverySettings(self)->list:
"""
Return a list of delivery settings.
"""
path = f"/scheduler/scheduler/deliverysettings/"
params = {'expansion': 'definition',"limit" : 2000}
lastPage = False
page_nb = 0
data = []
while lastPage != True:
params['page'] = page_nb
res = self.connector.getData(self.endpoint_company + path, params=params)
data += res.get('content',[])
lastPage = len(res.get('content',[])) != params["limit"]
page_nb += 1
return data
def getDeliverySetting(self,deliverySettingId:str=None)->dict:
"""
Retrieve the delivery setting from a scheduled project.
Argument:
deliverySettingId : REQUIRED : The delivery setting ID of the scheduled project.
"""
path = f"/scheduler/scheduler/deliverysettings/{deliverySettingId}/"
params = {'expansion': 'definition'}
res = self.connector.getData(self.endpoint_company + path, params=params)
return res
def createDeliverySetting(self,loginIds:list=None,emails:list=None,groupIds:list=None)->dict:
"""
Create a delivery setting for a specific scheduled project.
Automatically used when using `createScheduleJob`.
Arguments:
loginIds : REQUIRED : List of login ID to send the scheduled project to. Can be retrieved by the getUsers method.
emails : OPTIONAL : In case the recipient are not in the analytics interface.
groupIds : OPTIONAL : List of group ID to send the scheduled project to.
"""
path = f"/scheduler/scheduler/deliverysettings/"
if loginIds is None:
loginIds = []
if emails is None:
emails = []
if groupIds is None:
groupIds = []
data = {
"definition" : {
"allAdmins" : False,
"emailAddresses" : emails,
"groupIds" : groupIds,
"loginIds": loginIds,
"type": "email"
},
"name" : "email-aanalytics2"
}
res = self.connector.postData(self.endpoint_company + path, data=data)
return res
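# Example usage (illustrative sketch; `ags` is assumed to be an authenticated instance
# of this class, and the login ID and email below are placeholders):
#
# delivery = ags.createDeliverySetting(
#     loginIds=[200225502],
#     emails=["[email protected]"],
# )
# deliveryId = delivery.get("id")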
def updateDeliverySetting(self,deliveryId:str=None,loginIds:list=None,emails:list=None,groupIds:list=None)->dict:
"""
Update a delivery setting attached to a specific scheduled project.
The delivery setting is automatically created when using `createScheduledJob`.
Arguments:
deliveryId : REQUIRED : the delivery setting ID to be updated
loginIds : REQUIRED : List of login ID to send the scheduled project to. Can be retrieved by the getUsers method.
emails : OPTIONAL : In case the recipient are not in the analytics interface.
groupIds : OPTIONAL : List of group ID to send the scheduled project to.
"""
if deliveryId is None:
raise ValueError("Require a delivery setting ID")
path = f"/scheduler/scheduler/deliverysettings/{deliveryId}"
if loginIds is None:
loginIds = []
if emails is None:
emails = []
if groupIds is None:
groupIds = []
data = {
"definition" : {
"allAdmins" : False,
"emailAddresses" : emails,
"groupIds" : groupIds,
"loginIds": loginIds,
"type": "email"
},
"name" : "email-aanalytics2"
}
res = self.connector.putData(self.endpoint_company + path, data=data)
return res
def deleteDeliverySetting(self,deliveryId:str=None)->dict:
"""
Delete a delivery setting based on the ID passed.
Arguments:
deliveryId : REQUIRED : The delivery setting ID to be deleted.
"""
if deliveryId is None:
raise ValueError("Require a delivery setting ID")
path = f"/scheduler/scheduler/deliverysettings/{deliveryId}"
res = self.connector.deleteData(self.endpoint_company + path)
return res
def getProjects(self, includeType: str = 'all', full: bool = False, limit: int = None, includeShared: bool = False,
includeTemplate: bool = False, format: str = 'df', cache:bool=False, save: bool = False) -> JsonListOrDataFrameType:
"""
Returns the list of projects through either a dataframe or a list.
Arguments:
includeType : OPTIONAL : type of projects to be retrieved.(str) Possible values:
- all : Default value (all projects possibles)
- shared : shared projects
full : OPTIONAL : if set to True, returns all information about projects.
limit : OPTIONAL : Limit the number of result returned.
includeShared : OPTIONAL : If full is set to False, you can retrieve only information about sharing.
includeTemplate: OPTIONAL : If full is set to False, you can add information about template here.
format : OPTIONAL : format of the output. 2 values "df" for dataframe (default) and "raw" for raw json.
cache : OPTIONAL : Boolean in case you want to cache the result in the "listProjectIds" attribute.
save : OPTIONAL : If set to True, it will save the info in a csv file (bool : default False)
"""
if self.loggingEnabled:
self.logger.debug(f"starting getProjects")
path = "/projects"
params = {"includeType": includeType}
if full:
params[
"expansion"] = 'reportSuiteName,ownerFullName,tags,shares,sharesFullName,modified,favorite,approved,companyTemplate,externalReferences,accessLevel'
else:
params["expansion"] = "ownerFullName,modified"
if includeShared:
params["expansion"] += ',shares,sharesFullName'
if includeTemplate:
params["expansion"] += ',companyTemplate'
if limit is not None:
params['limit'] = limit
if self.loggingEnabled:
self.logger.debug(f"params: {params}")
res = self.connector.getData(self.endpoint_company + path, params=params, headers=self.header)
if cache:
self.listProjectIds = res
if format == "raw":
if save:
with open('projects.json', 'w') as f:
f.write(json.dumps(res, indent=2))
return res
df = pd.DataFrame(res)
if not df.empty:
df['created'] = pd.to_datetime(df['created'], format='%Y-%m-%dT%H:%M:%SZ')
df['modified'] = pd.to_datetime(df['modified'], format='%Y-%m-%dT%H:%M:%SZ')
if save:
df.to_csv(f'projects_{int(time.time())}.csv', index=False)
return df
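# Example usage (illustrative sketch; `ags` is assumed to be an authenticated instance
# of this class):
#
# df_projects = ags.getProjects(full=True)        # DataFrame output (default)
# raw_projects = ags.getProjects(format="raw")    # list of dictionaries
# ags.getProjects(format="raw", cache=True)       # also stored in the listProjectIds attribute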
def getProject(self, projectId: str = None, projectClass: bool = False, rsidSuffix: bool = False, retry: int = 0, cache:bool=False, verbose: bool = False) -> Union[dict,Project]:
"""
Return the dictionary of the project information and its definition.
It will return a dictionary or a Project class.
The project detail will be saved as Project class in the projectsDetails class attribute.
Arguments:
projectId : REQUIRED : the project ID to be retrieved.
projectClass : OPTIONAL : if set to True. Returns a class of the project with prefiltered information
rsidSuffix : OPTIONAL : if set to True, returns project class with rsid as suffix to dimensions and metrics.
retry : OPTIONAL : If you want to retry the request if it fails. Specify number of retry (0 default)
cache : OPTIONAL : If you want to cache the result as Project class in the "projectsDetails" attribute.
verbose : OPTIONAL : If you wish to have logs of status
"""
if projectId is None:
raise Exception("Requires a projectId parameter")
params = {
'expansion': 'definition,ownerFullName,modified,favorite,approved,tags,shares,sharesFullName,reportSuiteName,companyTemplate,accessLevel'}
path = f"/projects/{projectId}"
if self.loggingEnabled:
self.logger.debug(f"starting getProject for {projectId}")
res = self.connector.getData(self.endpoint_company + path, params=params, headers=self.header,retry=retry, verbose=verbose)
if projectClass:
if self.loggingEnabled:
self.logger.info(f"building an instance of Project class")
myProject = Project(res,rsidSuffix=rsidSuffix)
return myProject
if cache:
if self.loggingEnabled:
self.logger.info(f"caching the project as Project class")
try:
self.projectsDetails[projectId] = Project(res)
except Exception:
if verbose:
print('WARNING : Cannot convert Project to Project class')
if self.loggingEnabled:
self.logger.warning(f"Cannot convert Project to Project class")
return res
def getAllProjectDetails(self, projects:JsonListOrDataFrameType=None, filterNameProject:str=None, filterNameOwner:str=None, useAttribute:bool=True, cache:bool=False, rsidSuffix:bool=False, output:str="dict", verbose:bool=False)->dict:
"""
Retrieve all projects details. You can either pass the list of dataframe returned from the getProjects methods and some filters.
Returns a dict of ProjectId and the value is the Project class for analysis.
Arguments:
projects : OPTIONAL : Takes the type of object returned from the getProjects (all data - not only the ID).
If None is provided and you never ran the getProjects method, we will call the getProjects method and retrieve the elements.
Otherwise you can pass either a limited list of elements that you want to check details for.
filterNameProject : OPTIONAL : If you want to retrieve project details for project with a specific string in their name.
filterNameOwner : OPTIONAL : If you want to retrieve project details for project with an owner having a specific name.
useAttribute : OPTIONAL : True by default, it will use the projectList saved in the listProjectIds attribute.
Set it to False if you want to start from scratch on the retrieval process of your projects.
rsidSuffix : OPTIONAL : If you want to add rsid as suffix of metrics and dimensions (::rsid)
cache : OPTIONAL : If you want to cache the different elements retrieved for future usage.
output : OPTIONAL : If you want to return a "list" or "dict" from this method. (default "dict")
verbose : OPTIONAL : Set to True to print information.
Not using filter may end up taking a while to retrieve the information.
"""
if self.loggingEnabled:
self.logger.debug(f"starting getAllProjectDetails")
## if no project data
if projects is None:
if self.loggingEnabled:
self.logger.debug(f"No projects passed")
if len(self.listProjectIds)>0 and useAttribute:
fullProjectIds = self.listProjectIds
else:
fullProjectIds = self.getProjects(format='raw',cache=cache)
## if project data is passed
elif projects is not None:
if self.loggingEnabled:
self.logger.debug(f"projects passed")
if isinstance(projects,pd.DataFrame):
fullProjectIds = projects.to_dict(orient='records')
elif isinstance(projects,list):
fullProjectIds = projects
if filterNameProject is not None:
if self.loggingEnabled:
self.logger.debug(f"filterNameProject passed")
fullProjectIds = [project for project in fullProjectIds if filterNameProject in project['name']]
if filterNameOwner is not None:
if self.loggingEnabled:
self.logger.debug(f"filterNameOwner passed")
fullProjectIds = [project for project in fullProjectIds if filterNameOwner in project['owner'].get('name','')]
if verbose:
print(f'{len(fullProjectIds)} project details to retrieve')
print(f"estimated time required : {int(len(fullProjectIds)/60)} minutes")
if self.loggingEnabled:
self.logger.debug(f'{len(fullProjectIds)} project details to retrieve')
projectIds = (project['id'] for project in fullProjectIds)
projectsDetails = {projectId:self.getProject(projectId,projectClass=True,rsidSuffix=rsidSuffix) for projectId in projectIds}
if filterNameProject is None and filterNameOwner is None:
self.projectsDetails = projectsDetails
if output == "list":
list_projectsDetails = [projectsDetails[key] for key in projectsDetails]
return list_projectsDetails
return projectsDetails
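# Example usage (illustrative sketch; `ags` is assumed to be an authenticated instance
# of this class; the filter value is a placeholder):
#
# allDetails = ags.getAllProjectDetails(verbose=True)   # dict of projectId -> Project instance
# someDetails = ags.getAllProjectDetails(filterNameProject="Dashboard")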
def deleteProject(self, projectId: str = None) -> dict:
"""
Delete the project specified by its ID.
Arguments:
projectId : REQUIRED : the project ID to be deleted.
"""
if self.loggingEnabled:
self.logger.debug(f"starting deleteProject")
if projectId is None:
raise Exception("Requires a projectId parameter")
path = f"/projects/{projectId}"
res = self.connector.deleteData(self.endpoint_company + path, headers=self.header)
return res
def validateProject(self,projectObj:dict = None)->dict:
"""
Validate a project definition based on the definition passed.
Arguments:
projectObj : REQUIRED : the dictionary that represents the Workspace definition.
requires the following elements: name,description,rsid, definition, owner
"""
if self.loggingEnabled:
self.logger.debug(f"starting validateProject")
if projectObj is None or not isinstance(projectObj, dict):
raise Exception("Requires a projectObj data to be sent to the server.")
if 'project' in projectObj.keys():
rsid = projectObj['project'].get('rsid',None)
else:
rsid = projectObj.get('rsid',None)
projectObj = {'project':projectObj}
if rsid is None:
raise Exception("Could not find a rsid parameter in your project definition")
path = "/projects/validate"
params = {'rsid':rsid}
res = self.connector.postData(self.endpoint_company + path, data=projectObj, headers=self.header,params=params)
return res
def updateProject(self, projectId: str = None, projectObj: dict = None) -> dict:
"""
Update your project with the new object placed as parameter.
Arguments:
projectId : REQUIRED : the project ID to be updated.
projectObj : REQUIRED : the dictionary to replace the previous Workspace.
requires the following elements: name,description,rsid, definition, owner
"""
if self.loggingEnabled:
self.logger.debug(f"starting updateProject")
if projectId is None:
raise Exception("Requires a projectId parameter")
path = f"/projects/{projectId}"
if projectObj is None:
raise Exception("Requires a projectObj parameter")
if 'name' not in projectObj.keys():
raise KeyError("Requires name key in the project object")
if 'description' not in projectObj.keys():
raise KeyError("Requires description key in the project object")
if 'rsid' not in projectObj.keys():
raise KeyError("Requires rsid key in the project object")
if 'owner' not in projectObj.keys():
raise KeyError("Requires owner key in the project object")
if type(projectObj['owner']) != dict:
raise ValueError("Requires owner key to be a dictionary")
if 'definition' not in projectObj.keys():
raise KeyError("Requires definition key in the project object")
if type(projectObj['definition']) != dict:
raise ValueError("Requires definition key to be a dictionary")
res = self.connector.putData(self.endpoint_company + path, data=projectObj, headers=self.header)
return res
def createProject(self, projectObj: dict = None) -> dict:
"""
Create a project based on the definition you have set.
Arguments:
projectObj : REQUIRED : the dictionary to create a new Workspace.
requires the following elements: name,description,rsid, definition, owner
"""
if self.loggingEnabled:
self.logger.debug(f"starting createProject")
path = "/projects/"
if projectObj is None:
raise Exception("Requires a projectId parameter")
if 'name' not in projectObj.keys():
raise KeyError("Requires name key in the project object")
if 'description' not in projectObj.keys():
raise KeyError("Requires description key in the project object")
if 'rsid' not in projectObj.keys():
raise KeyError("Requires rsid key in the project object")
if 'owner' not in projectObj.keys():
raise KeyError("Requires owner key in the project object")
if type(projectObj['owner']) != dict:
raise ValueError("Requires owner key to be a dictionary")
if 'definition' not in projectObj.keys():
raise KeyError("Requires definition key in the project object")
if type(projectObj['definition']) != dict:
raise ValueError("Requires definition key to be a dictionary")
res = self.connector.postData(self.endpoint_company + path, data=projectObj, headers=self.header)
return res
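# Example usage (illustrative sketch; the dictionary below is a minimal placeholder,
# a real Workspace "definition" object is far more detailed; owner ID is hypothetical):
#
# newProject = {
#     "name": "my new project",
#     "description": "created via API",
#     "rsid": "myrsid",
#     "owner": {"id": 200225502},   # hypothetical owner ID
#     "definition": {},             # a full Workspace definition dictionary goes here
# }
# res = ags.createProject(projectObj=newProject)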
def findComponentsUsage(self,components:list=None,
projectDetails:list=None,
segments:Union[list,pd.DataFrame]=None,
calculatedMetrics:Union[list,pd.DataFrame]=None,
recursive:bool=False,
regexUsed:bool=False,
verbose:bool=False,
resetProjectDetails:bool=False,
rsidSuffix:bool=False,
)->dict:
"""
Find the usage of components in the different part of Adobe Analytics setup.
Projects, Segment, Calculated metrics.
Arguments:
components : REQUIRED : list of component to look for.
Example : evar10,event1,prop3,segmentId, calculatedMetricsId
projectDetails: OPTIONAL : list of instances of Project class.
segments : OPTIONAL : If you wish to pass the segments to look into. (should contain definition)
calculatedMetrics : OPTIONAL : If you wish to pass the calculated metrics to look into. (should contain definition)
recursive : OPTIONAL : if set to True, will also find the reference where the meta component are used.
segments based on your elements will also be searched to see where they are located.
regexUsed : OPTIONAL : If set to True, the elements are defined as regular expressions and some default setup is turned off.
resetProjectDetails : OPTIONAL : Set to false by default. If set to True, it will NOT use the cache.
rsidSuffix : OPTIONAL : If you do not give projectDetails and you want to look for rsid usage in report for dimensions and metrics.
"""
if components is None or type(components) != list:
raise ValueError("components must be present as a list")
if self.loggingEnabled:
self.logger.debug(f"starting findComponentsUsage for {components}")
listComponentProp = [comp for comp in components if 'prop' in comp]
listComponentVar = [comp for comp in components if 'evar' in comp]
listComponentEvent = [comp for comp in components if 'event' in comp]
listComponentSegs = [comp for comp in components if comp.startswith('s')]
listComponentCalcs = [comp for comp in components if comp.startswith('cm')]
restComponents = set(components) - set(listComponentProp+listComponentVar+listComponentEvent+listComponentSegs+listComponentCalcs)
listDefaultElements = [comp for comp in restComponents]
listRecusion = []
## adding irregular ones
regPartSeg = r"('|\.)" ## ensure to not catch evar100 for evar10
regPartProj = r"($|\.|\::)" ## ensure to not catch evar100 for evar10
if regexUsed:
if self.loggingEnabled:
self.logger.debug(f"regex is used")
regPartSeg = ""
regPartProj = ""
## Segments
if verbose:
print('retrieving segments')
if self.loggingEnabled:
self.logger.debug(f"retrieving segments")
if len(self.segments) == 0 and segments is None:
self.segments = self.getSegments(extended_info=True)
mySegments = self.segments
elif len(self.segments) > 0 and segments is None:
mySegments = self.segments
elif segments is not None:
if type(segments) == list:
mySegments = pd.DataFrame(segments)
elif type(segments) == pd.DataFrame:
mySegments = segments
else:
mySegments = segments
### Calculated Metrics
if verbose:
print('retrieving calculated metrics')
if self.loggingEnabled:
self.logger.debug(f"retrieving calculated metrics")
if len(self.calculatedMetrics) == 0 and calculatedMetrics is None:
self.calculatedMetrics = self.getCalculatedMetrics(extended_info=True)
myMetrics = self.calculatedMetrics
elif len(self.calculatedMetrics) > 0 and calculatedMetrics is None:
myMetrics = self.calculatedMetrics
elif calculatedMetrics is not None:
if type(calculatedMetrics) == list:
myMetrics = pd.DataFrame(calculatedMetrics)
elif type(calculatedMetrics) == pd.DataFrame:
myMetrics = calculatedMetrics
else:
myMetrics = calculatedMetrics
### Projects
if (len(self.projectsDetails) == 0 and projectDetails is None) or resetProjectDetails:
if self.loggingEnabled:
self.logger.debug(f"retrieving projects details")
self.projectsDetails = self.getAllProjectDetails(verbose=verbose,rsidSuffix=rsidSuffix)
myProjectDetails = (self.projectsDetails[key].to_dict() for key in self.projectsDetails)
elif len(self.projectsDetails) > 0 and projectDetails is None and resetProjectDetails==False:
if self.loggingEnabled:
self.logger.debug(f"transforming projects details")
myProjectDetails = (self.projectsDetails[key].to_dict() for key in self.projectsDetails)
elif projectDetails is not None:
if self.loggingEnabled:
self.logger.debug(f"setting the project details")
if isinstance(projectDetails[0],Project):
myProjectDetails = (item.to_dict() for item in projectDetails)
elif isinstance(projectDetails[0],dict):
myProjectDetails = (Project(item).to_dict() for item in projectDetails)
else:
raise Exception("Project details were not able to be processed")
teeProjects:tuple = tee(myProjectDetails) ## duplicating the project generator for recursive pass (low memory - intensive computation)
returnObj = {element : {'segments':[],'calculatedMetrics':[],'projects':[]} for element in components}
recurseObj = defaultdict(list)
if verbose:
print('search started')
print(f'recursive option : {recursive}')
print('start looking into segments')
if self.loggingEnabled:
self.logger.debug(f"Analyzing segments")
for _,seg in mySegments.iterrows():
for prop in listComponentProp:
if re.search(f"{prop+regPartSeg}",str(seg['definition'])):
returnObj[prop]['segments'].append({seg['name']:seg['id']})
if recursive:
listRecusion.append(seg['id'])
for var in listComponentVar:
if re.search(f"{var+regPartSeg}",str(seg['definition'])):
returnObj[var]['segments'].append({seg['name']:seg['id']})
if recursive:
listRecusion.append(seg['id'])
for event in listComponentEvent:
if re.search(f"{event}'",str(seg['definition'])):
returnObj[event]['segments'].append({seg['name']:seg['id']})
if recursive:
listRecusion.append(seg['id'])
for element in listDefaultElements:
if re.search(f"{element}",str(seg['definition'])):
returnObj[element]['segments'].append({seg['name']:seg['id']})
if recursive:
listRecusion.append(seg['id'])
if self.loggingEnabled:
self.logger.debug(f"Analyzing calculated metrics")
if verbose:
print('start looking into calculated metrics')
for _,met in myMetrics.iterrows():
for prop in listComponentProp:
if re.search(f"{prop+regPartSeg}",str(met['definition'])):
returnObj[prop]['calculatedMetrics'].append({met['name']:met['id']})
if recursive:
listRecusion.append(met['id'])
for var in listComponentVar:
if re.search(f"{var+regPartSeg}",str(met['definition'])):
returnObj[var]['calculatedMetrics'].append({met['name']:met['id']})
if recursive:
listRecusion.append(met['id'])
for event in listComponentEvent:
if re.search(f"{event}'",str(met['definition'])):
returnObj[event]['calculatedMetrics'].append({met['name']:met['id']})
if recursive:
listRecusion.append(met['id'])
for element in listDefaultElements:
if re.search(f"{element}'",str(met['definition'])):
returnObj[element]['calculatedMetrics'].append({met['name']:met['id']})
if recursive:
listRecusion.append(met['id'])
if verbose:
print('start looking into projects')
if self.loggingEnabled:
self.logger.debug(f"Analyzing projects")
for proj in teeProjects[0]:
## mobile reports don't have dimensions.
if proj['reportType'] == "desktop":
for prop in listComponentProp:
for element in proj['dimensions']:
if re.search(f"{prop+regPartProj}",element):
returnObj[prop]['projects'].append({proj['name']:proj['id']})
for var in listComponentVar:
for element in proj['dimensions']:
if re.search(f"{var+regPartProj}",element):
returnObj[var]['projects'].append({proj['name']:proj['id']})
for event in listComponentEvent:
for element in proj['metrics']:
if re.search(f"{event}",element):
returnObj[event]['projects'].append({proj['name']:proj['id']})
for seg in listComponentSegs:
for element in proj.get('segments',[]):
if re.search(f"{seg}",element):
returnObj[seg]['projects'].append({proj['name']:proj['id']})
for met in listComponentCalcs:
for element in proj.get('calculatedMetrics',[]):
if re.search(f"{met}",element):
returnObj[met]['projects'].append({proj['name']:proj['id']})
for element in listDefaultElements:
for met in proj['calculatedMetrics']:
if re.search(f"{element}",met):
returnObj[element]['projects'].append({proj['name']:proj['id']})
for dim in proj['dimensions']:
if re.search(f"{element}",dim):
returnObj[element]['projects'].append({proj['name']:proj['id']})
for rsid in proj['rsids']:
if re.search(f"{element}",rsid):
returnObj[element]['projects'].append({proj['name']:proj['id']})
for event in proj['metrics']:
if re.search(f"{element}",event):
returnObj[element]['projects'].append({proj['name']:proj['id']})
if recursive:
if verbose:
print('start looking into recursive elements')
if self.loggingEnabled:
self.logger.debug(f"recursive option checked")
for proj in teeProjects[1]:
for rec in listRecusion:
for element in proj.get('segments',[]):
if re.search(f"{rec}",element):
recurseObj[rec].append({proj['name']:proj['id']})
for element in proj.get('calculatedMetrics',[]):
if re.search(f"{rec}",element):
recurseObj[rec].append({proj['name']:proj['id']})
if recursive:
returnObj['recursion'] = recurseObj
if verbose:
print('done')
return returnObj
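# Example usage (illustrative sketch; `ags` is assumed to be an authenticated instance
# of this class and the component names are placeholders):
#
# usage = ags.findComponentsUsage(components=["evar10", "event1", "prop3"])
# usage["evar10"]["projects"]   # list of {projectName: projectId} where evar10 is used
# usage["evar10"]["segments"]   # list of {segmentName: segmentId} where evar10 is used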
def getUsageLogs(self,
startDate:str=None,
endDate:str=None,
eventType:str=None,
event:str=None,
rsid:str=None,
login:str=None,
ip:str=None,
limit:int=100,
max_result:int=None,
format:str="df",
verbose:bool=False,
**kwargs)->dict:
"""
Returns the Audit Usage Logs from your company analytics setup.
Arguments:
startDate : REQUIRED : Start date, format : 2020-12-01T00:00:00-07. (default 60 days prior to today)
endDate : REQUIRED : End date, format : 2020-12-15T14:32:33-07. (default today)
Should be a maximum of a 3 month period between startDate and endDate.
eventType : OPTIONAL : The numeric id for the event type you want to filter logs by.
Please reference the lookup table in the LOGS_EVENT_TYPE
event : OPTIONAL : The event description you want to filter logs by.
No wildcards are permitted, but this filter is case insensitive and supports partial matches.
rsid : OPTIONAL : ReportSuite ID to filter on.
login : OPTIONAL : The login value of the user you want to filter logs by. This filter functions as an exact match.
ip : OPTIONAL : The IP address you want to filter logs by. This filter supports a partial match.
limit : OPTIONAL : Number of results per page.
max_result : OPTIONAL : Number of maximum amount of results if you want. If you want to cap the process. Ex : max_result=1000
format : OPTIONAL : If you wish to have a DataFrame ("df" - default) or list("raw") as output.
verbose : OPTIONAL : Set it to True if you want to have console info.
possible kwargs:
page : page number (default 0)
"""
if self.loggingEnabled:
self.logger.debug(f"starting getUsageLogs")
now = datetime.datetime.now()
if startDate is None:
startDate = datetime.datetime.isoformat(now - datetime.timedelta(days=60)).split('.')[0]
if endDate is None:
endDate = datetime.datetime.isoformat(now).split('.')[0]
path = "/auditlogs/usage"
params = {"page":kwargs.get('page',0),"limit":limit,"startDate":startDate,"endDate":endDate}
if eventType is not None:
params['eventType'] = eventType
if event is not None:
params['event'] = event
if rsid is not None:
params['rsid'] = rsid
if login is not None:
params['login'] = login
if ip is not None:
params['ip'] = ip
if self.loggingEnabled:
self.logger.debug(f"params: {params}")
res = self.connector.getData(self.endpoint_company + path, params=params,verbose=verbose)
data = res['content']
lastPage = res['lastPage']
while lastPage == False:
params["page"] += 1
res = self.connector.getData(self.endpoint_company + path, params=params,verbose=verbose)
data += res['content']
lastPage = res['lastPage']
if max_result is not None:
if len(data) >= max_result:
lastPage = True
if format == "df":
df = pd.DataFrame(data)
return df
return data
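# Example usage (illustrative sketch; `ags` is assumed to be an authenticated instance
# of this class; the dates are placeholders):
#
# df_logs = ags.getUsageLogs(
#     startDate="2020-12-01T00:00:00-07",
#     endDate="2020-12-15T14:32:33-07",
#     max_result=1000,
# )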
def getTopItems(self,rsid:str=None,dimension:str=None,dateRange:str=None,searchClause:str=None,lookupNoneValues:bool = True,limit:int=10,verbose:bool=False,**kwargs)->object:
"""
Returns the top items of a request.
Arguments:
rsid : REQUIRED : ReportSuite ID of the data
dimension : REQUIRED : The dimension to retrieve
dateRange : OPTIONAL : Format YYYY-MM-DD/YYYY-MM-DD (default 90 days)
searchClause : OPTIONAL : General search string; wrap with single quotes. Example: 'PageABC'
lookupNoneValues : OPTIONAL : None values to be included (default True)
limit : OPTIONAL : Number of items to be returned per page.
verbose : OPTIONAL : If you want to have comments displayed (default False)
possible kwargs:
page : page to look for
startDate : start date with format YYYY-MM-DD
endDate : end date with format YYYY-MM-DD
searchAnd, searchOr, searchNot, searchPhrase : Search element to be included (or not), partial match or not.
"""
if self.loggingEnabled:
self.logger.debug(f"starting getTopItems")
path = "/reports/topItems"
page = kwargs.get("page",0)
if rsid is None:
raise ValueError("Require a reportSuite ID")
if dimension is None:
raise ValueError("Require a dimension")
params = {"rsid" : rsid, "dimension":dimension,"lookupNoneValues":lookupNoneValues,"limit":limit,"page":page}
if searchClause is not None:
params["search-clause"] = searchClause
if dateRange is not None and '/' in dateRange:
params["dateRange"] = dateRange
if kwargs.get('page',None) is not None:
params["page"] = kwargs.get('page')
if kwargs.get("startDate",None) is not None:
params["startDate"] = kwargs.get("startDate")
if kwargs.get("endDate",None) is not None:
params["endDate"] = kwargs.get("endDate")
if kwargs.get("searchAnd", None) is not None:
params["searchAnd"] = kwargs.get("searchAnd")
if kwargs.get("searchOr",None) is not None:
params["searchOr"] = kwargs.get("searchOr")
if kwargs.get("searchNot",None) is not None:
params["searchNot"] = kwargs.get("searchNot")
if kwargs.get("searchPhrase",None) is not None:
params["searchPhrase"] = kwargs.get("searchPhrase")
last_page = False
if verbose:
print('Starting to fetch the data...')
data = []
while not last_page:
if verbose:
print(f'request page : {page}')
res = self.connector.getData(self.endpoint_company+path,params=params)
last_page = res.get("lastPage",True)
data += res["rows"]
page += 1
params["page"] = page
df = pd.DataFrame(data)
return df
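# Example usage (illustrative sketch; `ags` is assumed to be an authenticated instance
# of this class; the rsid and dimension values are placeholders):
#
# top_pages = ags.getTopItems(rsid="myrsid", dimension="page", limit=50)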
def getAnnotations(self,full:bool=True,includeType:str='all',limit:int=1000,page:int=0)->list:
"""
Returns a list of the available annotations
Arguments:
full : OPTIONAL : If set to True (default), returns all available information for each annotation.
includeType : OPTIONAL : use to return only "shared" or "all" (default) annotations available.
limit : OPTIONAL : number of result per page (default 1000)
page : OPTIONAL : page used for pagination
"""
params = {"includeType":includeType,"page":page}
if full:
params['expansion'] = "name,description,dateRange,color,applyToAllReports,scope,createdDate,modifiedDate,modifiedById,tags,shares,approved,favorite,owner,usageSummary,companyId,reportSuiteName,rsid"
path = f"/annotations"
lastPage = False
data = []
while lastPage == False:
res = self.connector.getData(self.endpoint_company + path,params=params)
data += res.get('content',[])
lastPage = res.get('lastPage',True)
params['page'] += 1
return data
def getAnnotation(self,annotationId:str=None)->dict:
"""
Return a specific annotation definition.
Arguments:
annotationId : REQUIRED : The annotation ID
"""
if annotationId is None:
raise ValueError("Require an annotation ID")
path = f"/annotations/{annotationId}"
params ={
"expansion" : "name,description,dateRange,color,applyToAllReports,scope,createdDate,modifiedDate,modifiedById,tags,shares,approved,favorite,owner,usageSummary,companyId,reportSuiteName,rsid"
}
res = self.connector.getData(self.endpoint_company + path,params=params)
return res
def deleteAnnotation(self,annotationId:str=None)->dict:
"""
Delete a specific annotation definition.
Arguments:
annotationId : REQUIRED : The annotation ID to be deleted
"""
if annotationId is None:
raise ValueError("Require an annotation ID")
path = f"/annotations/{annotationId}"
res = self.connector.deleteData(self.endpoint_company + path)
return res
def createAnnotation(self,
name:str=None,
dateRange:str=None,
rsid:str=None,
metricIds:list=None,
dimensionObj:list=None,
description:str=None,
filterIds:list=None,
applyToAllReports:bool=False,
**kwargs)->dict:
"""
Create an Annotation.
Arguments:
name : REQUIRED : Name of the annotation
dateRange : REQUIRED : Date range of the annotation to be used.
Example: 2022-04-19T00:00:00/2022-04-19T23:59:59
rsid : REQUIRED : ReportSuite ID
metricIds : OPTIONAL : List of metrics ID to be annotated
filterIds : OPTIONAL : List of Segments ID to apply for annotation for context.
description : OPTIONAL : Description of the annotation.
dimensionObj : OPTIONAL : List of dimension object specifications:
{
"componentType": "dimension",
"dimensionType": "string",
"id": "variables/product",
"operator": "streq",
"terms": ["unknown"]
}
applyToAllReports : OPTIONAL : If the annotation apply to all ReportSuites.
possible kwargs:
colors: Color to be used, examples: "STANDARD1"
shares: List of userId for sharing the annotation
tags: List of tagIds to be applied
favorite: boolean to set the annotation as favorite (false by default)
approved: boolean to set the annotation as approved (false by default)
"""
path = f"/annotations"
if name is None:
raise ValueError("A name must be specified")
if dateRange is None:
raise ValueError("A dateRange must be specified")
if rsid is None:
raise ValueError("a master ReportSuite ID must be specified")
description = description or "api generated"
data = {
"name": name,
"description": description,
"dateRange": dateRange,
"color": kwargs.get('colors',"STANDARD1"),
"applyToAllReports": applyToAllReports,
"scope": {
"metrics":[],
"filters":[]
},
"tags": [],
"approved": kwargs.get('approved',False),
"favorite": kwargs.get('favorite',False),
"rsid": rsid
}
if metricIds is not None and type(metricIds) == list:
for metric in metricIds:
data['scope']['metrics'].append({
"id" : metric,
"componentType":"metric"
})
if filterIds is not None and type(filterIds) == list:
for filterId in filterIds:
data['scope']['filters'].append({
"id" : filterId,
"componentType":"segment"
})
if dimensionObj is not None and type(dimensionObj) == list:
for obj in dimensionObj:
data['scope']['filters'].append(obj)
if kwargs.get("shares",None) is not None:
data['shares'] = []
for user in kwargs.get("shares",[]):
data['shares'].append({
"shareToId" : user,
"shareToType":"user"
})
if kwargs.get('tags',None) is not None:
for tag in kwargs.get('tags'):
res = self.getTag(tag)
data['tags'].append({
"id":tag,
"name":res['name']
})
res = self.connector.postData(self.endpoint_company + path,data=data)
return res
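# Example usage (illustrative sketch; `ags`, the rsid and the metric ID are placeholders,
# the metric ID format in particular should be checked against your own components):
#
# annotation = ags.createAnnotation(
#     name="Campaign launch",
#     dateRange="2022-04-19T00:00:00/2022-04-19T23:59:59",
#     rsid="myrsid",
#     metricIds=["metrics/visits"],  # assumed ID format
# )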
def updateAnnotation(self,annotationId:str=None,annotationObj:dict=None)->dict:
"""
Update an annotation based on its ID. PUT method.
Arguments:
annotationId : REQUIRED : The annotation ID to be updated
annotationObj : REQUIRED : The object to replace the annotation.
"""
if annotationObj is None or type(annotationObj) != dict:
raise ValueError('Require a dictionary representing the annotation definition')
if annotationId is None:
raise ValueError('Require the annotation ID')
path = f"/annotations/{annotationId}"
res = self.connector.putData(self.endpoint_company+path,data=annotationObj)
return res
# def getDataWarehouseReports(self,reportSuite:str=None,reportName:str=None,deliveryUUID:str=None,status:str=None,
# ScheduledRequestUUID:str=None,limit:int=1000)-> dict:
# """
# Get all DW reports that matched filter parameters.
# Arguments:
# reportSuite : OPTIONAL : The name of the reportSuite
# reportName : OPTIONAL : The name of the report
# deliveryUUID : OPTIONAL : the UUID generated for that report
# status : OPTIONAL : Status of the report Generation, can be any of [COMPLETED, CANCELED, ERROR_DELIVERY, ERROR_PROCESSING, CREATED, PROCESSING, PENDING]
# scheduledRequestUUID : OPTIONAL : The scheduled report UUID generated by this report
# limit : OPTIONAL : Maximum amount of data returned
# """
# path = '/data_warehouse/report'
# params = {"limit":limit}
# if reportSuite is not None:
# params['ReportSuite'] = reportSuite
# if reportName is not None:
# params['ReportName'] = reportName
# if deliveryUUID is not None:
# params['DeliveryProfileUUID'] = deliveryUUID
# if status is not None and status in ["COMPLETED", "CANCELED", "ERROR_DELIVERY", "ERROR_PROCESSING", "CREATED", "PROCESSING", "PENDING"]:
# params["Status"] = status
# if ScheduledRequestUUID is not None:
# params['ScheduledRequestUUID'] = ScheduledRequestUUID
# res = self.connector.getData('https://analytics.adobe.io/api' + path,params=params)
# return res
# def getDataWarehouseReport(self,reportUUID:str=None)-> dict:
# """
# Return a single report information out of the report UUID.
# Arguments:
# reportUUID : REQUIRED : the report UUID
# """
# if reportUUID is None:
# raise ValueError("Require a report UUID")
# path = f'/data_warehouse/report/{reportUUID}'
# res = self.connector.getData('https://analytics.adobe.io/api' + path)
# return res
# def getDataWarehouseRequests(self,reportSuite:str=None,reportName:str=None,status:str=None,limit:int=1000)-> dict:
# """
# Get all DW requests that matched filter parameters.
# Arguments:
# reportSuite : OPTIONAL : The name of the reportSuite
# reportName : OPTIONAL : The name of the report
# status : OPTIONAL : Status of the report Generation, can be any of [COMPLETED, CANCELED, ERROR_DELIVERY, ERROR_PROCESSING, CREATED, PROCESSING, PENDING]
# scheduledRequestUUID : OPTIONAL : The scheduled report UUID generated by this report
# limit : OPTIONAL : Maximum amount of data returned
# """
# path = '/data_warehouse/scheduled'
# params = {"limit":limit}
# if reportSuite is not None:
# params['ReportSuite'] = reportSuite
# if reportName is not None:
# params['ReportName'] = reportName
# if status is not None and status in ["COMPLETED", "CANCELED", "ERROR_DELIVERY", "ERROR_PROCESSING", "CREATED", "PROCESSING", "PENDING"]:
# params["Status"] = status
# res = self.connector.getData('https://analytics.adobe.io/api'+ path,params=params)
# return res
# def getDataWarehouseRequest(self,scheduleUUID:str=None)-> dict:
# """
# Return a single request information out of the schedule UUID.
# Arguments:
# scheduleUUID : REQUIRED : the schedule UUID
# """
# if scheduleUUID is None:
# raise ValueError("Require a report UUID")
# path = f'/data_warehouse/scheduled/{scheduleUUID}'
# res = self.connector.getData('https://analytics.adobe.io' + path)
# return res
# def createDataWarehouseRequest(self,
# requestDict:dict=None,
# reportName:str=None,
# login:str=None,
# emails:list=None,
# emailNote:str=None,
# )->dict:
# """
# Create a Data Warehouse request based on either the dictionary provided or the parameters filled.
# Arguments:
# requestDict : OPTIONAL : The complete dictionary definition for a datawarehouse export.
# If not provided, require the other parameters to be used.
# reportName : OPTIONAL : The name of the report
# login : OPTIONAL : The login Id of the user
# emails : OPTIONAL : List of emails for notification. example : ['[email protected]']
# dimensions : OPTIONAL : List of dimensions to use, example : ['prop1']
# metrics : OPTIONAL : List of metrics to use, example : ['event1','event2']
# segments : OPTIONAL : List of segments to use, example : ['seg1','seg2']
# dateGranularity : OPTIONAL :
# reportPeriod : OPTIONAL :
# emailNote : OPTIONAL : Note for the email
# """
# f'/data_warehouse/scheduled/'
# def getDataWarehouseDeliveryAccounts(self)->dict:
# """
# Get All delivery Account used by a company.
# """
# path = f'/data_warehouse/delivery/account'
# res = self.connector.getData('https://analytics.adobe.io'+path)
# return res
# def getDataWarehouseDeliveryProfile(self)->dict:
# """
# Get all Delivery Profile for a given global company id
# """
# path = f'/data_warehouse/delivery/profile'
# res = self.connector.getData('https://analytics.adobe.io'+path)
# return res
def compareReportSuites(self,listRsids:list=None,element:str='dimensions',comparison:str="full",save: bool=False)->pd.DataFrame:
"""
Compare reportSuite on dimensions (default) or metrics based on the comparison selected.
Returns a dataframe with multi-index and a column telling which elements are different
Arguments:
listRsids : REQUIRED : list of report suite ID to compare
element : REQUIRED : Elements to compare. 2 possible choices:
dimensions (default)
metrics
comparison : REQUIRED : Type of comparison to do:
full (default) : compare name and settings
name : compare only names
save : OPTIONAL : if you want to save in a csv.
"""
if self.loggingEnabled:
self.logger.debug(f"starting compareReportSuites")
if listRsids is None or type(listRsids) != list:
raise ValueError("Require a list of rsids")
if element=="dimensions":
if self.loggingEnabled:
self.logger.debug(f"dimensions selected")
listDFs = [self.getDimensions(rsid,full=True) for rsid in listRsids]
elif element == "metrics":
listDFs = [self.getMetrics(rsid,full=True) for rsid in listRsids]
if self.loggingEnabled:
self.logger.debug(f"metrics selected")
for df,rsid in zip(listDFs, listRsids):
df['rsid']=rsid
df.set_index('id',inplace=True)
df.set_index('rsid',append=True,inplace=True)
df = pd.concat(listDFs)
df = df.unstack()
if comparison=='name':
df_name = df['name'].copy()
## transforming to a new df with boolean value comparison to col 0
temp_df = df_name.eq(df_name.iloc[:, 0], axis=0)
## now doing a complete comparison of all boolean with all
df_name['different'] = ~temp_df.eq(temp_df.iloc[:,0],axis=0).all(1)
if save:
df_name.to_csv(f'comparison_name_{int(time.time())}.csv')
if self.loggingEnabled:
self.logger.debug(f'Name only comparison, file : comparison_name_{int(time.time())}.csv')
return df_name
## retrieve main indexes from multi level indexes
mainIndex = set([val[0] for val in list(df.columns)])
dict_temp = {}
for index in mainIndex:
temp_df = df[index].copy()
temp_df.fillna('',inplace=True)
## comparing all columns to the first one and flagging rows where any value differs
dict_temp[index] = list(temp_df.eq(temp_df.iloc[:,0],axis=0).all(1))
df_bool = pd.DataFrame(dict_temp)
df['different'] = list(~df_bool.eq(df_bool.iloc[:,0],axis=0).all(1))
if save:
df.to_csv(f'comparison_full_{element}_{int(time.time())}.csv')
if self.loggingEnabled:
self.logger.debug(f'Full comparison, file : comparison_full_{element}_{int(time.time())}.csv')
return df
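# Example usage (illustrative sketch; the rsids below are placeholders):
#
# df_comp = ags.compareReportSuites(
#     listRsids=["rsid1", "rsid2"],
#     element="dimensions",
#     comparison="full",
# )
# df_comp.head()  # the "different" column flags elements that are not identical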
def shareComponent(self, componentId: str = None, componentType: str = None, shareToId: int = None,
shareToImsId: int = None, shareToType: str = None, shareToLogin: str = None,
accessLevel: str = None, shareFromImsId: str = None) -> dict:
"""
Shares a component with an individual or a group (via a product profile ID).
Returns the JSON response from the API.
Arguments:
componentId : REQUIRED : The component ID to share.
componentType : REQUIRED : The component Type ("calculatedMetric", "segment", "project", "dateRange")
shareToId: ID of the user or the group to share to
shareToImsId: IMS ID of the user to share to (alternative to ID)
shareToLogin: Login of the user to share to (alternative to ID)
shareToType: "group" => share to a group (product profile), "user" => share to a user, "all" => share to all users (in this case, no shareToId or shareToImsId is needed)
"""
if self.loggingEnabled:
self.logger.debug(f"Starting to share component ID {componentId} with parameters: {locals()}")
path = f"/componentmetadata/shares/"
data = {
"accessLevel": accessLevel,
"componentId": componentId,
"componentType": componentType,
"shareToId": shareToId,
"shareToImsId": shareToImsId,
"shareToLogin": shareToLogin,
"shareToType": shareToType
}
res = self.connector.postData(self.endpoint_company + path, data=data)
return res
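# Example usage (illustrative sketch; the IDs are placeholders and the accessLevel
# value is an assumption, check the API documentation for the accepted values):
#
# res = ags.shareComponent(
#     componentId="s1234567890_abcdef",  # hypothetical segment ID
#     componentType="segment",
#     shareToType="user",
#     shareToId=200225502,               # hypothetical user ID
#     accessLevel="view",                # assumed value
# )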
def _dataDescriptor(self, json_request: dict):
"""
read the request and returns an object with information about the request.
It will be used in order to build the dataclass and the dataframe.
"""
if self.loggingEnabled:
self.logger.debug(f"starting _dataDescriptor")
obj = {}
if json_request.get('dimension',None) is not None:
obj['dimension'] = json_request.get('dimension')
obj['filters'] = {'globalFilters': [], 'metricsFilters': {}}
obj['rsid'] = json_request['rsid']
metrics_info = json_request['metricContainer']
obj['metrics'] = [metric['id'] for metric in metrics_info['metrics']]
if 'metricFilters' in metrics_info.keys():
metricsFilter = {metric['id']: metric['filters'] for metric in metrics_info['metrics'] if
len(metric.get('filters', [])) > 0}
filters = []
for metric in metricsFilter:
for item in metricsFilter[metric]:
if 'segmentId' in metrics_info['metricFilters'][int(item)].keys():
filters.append(
metrics_info['metricFilters'][int(item)]['segmentId'])
if 'dimension' in metrics_info['metricFilters'][int(item)].keys():
filters.append(
metrics_info['metricFilters'][int(item)]['dimension'])
obj['filters']['metricsFilters'][metric] = set(filters)
for fil in json_request['globalFilters']:
if 'dateRange' in fil.keys():
obj['filters']['globalFilters'].append(fil['dateRange'])
if 'dimension' in fil.keys():
obj['filters']['globalFilters'].append(fil['dimension'])
if 'segmentId' in fil.keys():
obj['filters']['globalFilters'].append(fil['segmentId'])
return obj
def _readData(
self,
data_rows: list,
anomaly: bool = False,
cols: list = None,
item_id: bool = False
) -> pd.DataFrame:
"""
read the data from the requests and returns a dataframe.
Parameters:
data_rows : REQUIRED : Rows that have been returned by the request.
anomaly : OPTIONAL : Boolean to tell if the anomaly detection has been used.
cols : OPTIONAL : list of column names
item_id : OPTIONAL : Boolean to append the itemId to the data returned
"""
if self.loggingEnabled:
self.logger.debug(f"starting _readData")
if cols is None:
raise ValueError("list of columns must be specified")
data_rows = deepcopy(data_rows)
dict_data = {row.get('value', 'missing_value'): row['data'] for row in data_rows}
n_metrics = len(cols) - 1
if item_id: # adding the itemId in the data returned
cols.append('item_id')
for row in data_rows:
dict_data[row.get('value', 'missing_value')].append(row['itemId'])
if anomaly:
# set full columns
cols = cols + [f'{metric}-{suffix}' for metric in cols[1:] for suffix in
['expected', 'UpperBound', 'LowerBound']]
# add data to the dictionary
for row in data_rows:
for item in range(n_metrics):
dict_data[row['value']].append(
row.get('dataExpected', [0 for i in range(n_metrics)])[item])
dict_data[row['value']].append(
row.get('dataUpperBound', [0 for i in range(n_metrics)])[item])
dict_data[row['value']].append(
row.get('dataLowerBound', [0 for i in range(n_metrics)])[item])
df = pd.DataFrame(dict_data).T # require to transform the data
df.reset_index(inplace=True, )
df.columns = cols
return df
def getReport(
self,
json_request: Union[dict, str, IO,RequestCreator],
limit: int = 1000,
n_results: Union[int, str] = 1000,
save: bool = False,
item_id: bool = False,
unsafe: bool = False,
verbose: bool = False,
debug=False,
**kwargs,
) -> object:
"""
Retrieve data from a JSON request. Returns an object containing meta info and a dataframe.
Arguments:
json_request: REQUIRED : JSON statement that contains your request for Analytics API 2.0.
The argument can be :
- a dictionary : It will be used as it is.
- a string that is a dictionary : It will be transformed to a dictionary / JSON.
- a path to a JSON file that contains the statement (must end with ".json").
- an instance of the RequestCreator class
limit : OPTIONAL : number of results per request (default 1000)
n_results : OPTIONAL : Number of result that you would like to retrieve. (default 1000)
if you want to have all possible data, use "inf".
item_id : OPTIONAL : Boolean to define if you want to return the item id for sub requests (default False)
unsafe : OPTIONAL : If set to True, it will not check "lastPage" parameter and assume first request is complete.
This may break the script or return incomplete data. (default False).
save : OPTIONAL : If you would like to save the data within a CSV file. (default False)
verbose : OPTIONAL : If you want to have comments displayed (default False)
"""
if unsafe and verbose:
print('---- running the getReport in "unsafe" mode ----')
obj = {}
if isinstance(json_request,RequestCreator):
request = json_request.to_dict()
elif type(json_request) == dict:
request = json_request
elif type(json_request) == str and '.json' not in json_request:
try:
request = json.loads(json_request)
except:
raise TypeError("expected a parsable string")
elif '.json' in json_request:
try:
with open(Path(json_request), 'r') as file:
file_string = file.read()
request = json.loads(file_string)
except:
raise TypeError("expected a parsable string")
request['settings']['limit'] = limit
# info for creating report
data_info = self._dataDescriptor(request)
if verbose:
print('Request decrypted')
obj.update(data_info)
anomaly = request['settings'].get('includeAnomalyDetection', False)
columns = [data_info['dimension']] + data_info['metrics']
# preparing for the loop
# in case "inf" has been used. Turn it to a number
n_results = kwargs.get('n_result',n_results)
n_results = float(n_results)
if n_results != float('inf') and n_results < request['settings']['limit']:
# making sure we don't call more than set in wrapper
request['settings']['limit'] = n_results
data_list = []
last_page = False
page_nb, count_elements, total_elements = 0, 0, 0
if verbose:
print('Starting to fetch the data...')
while not last_page:
timestamp = round(time.time())
request['settings']['page'] = page_nb
report = self.connector.postData(self.endpoint_company +
self._getReport, data=request, headers=self.header)
if verbose:
print('Data received.')
# Recursion to take care of throttling limit
while report.get('status_code', 200) == 429 or report.get('error_code',None) == "429050":
if verbose:
print('reaching the limit : pause for 50 s and entering recursion.')
if debug:
with open(f'limit_reach_{timestamp}.json', 'w') as f:
f.write(json.dumps(report, indent=4))
time.sleep(50)
report = self.connector.postData(self.endpoint_company +
self._getReport, data=request, headers=self.header)
if 'lastPage' not in report and unsafe == False: # checking error when no lastPage key in report
if verbose:
print(json.dumps(report, indent=2))
print('Warning : Server Error')
print(json.dumps(report))
if debug:
with open(f'server_failure_request_{timestamp}.json', 'w') as f:
f.write(json.dumps(request, indent=4))
with open(f'server_failure_response_{timestamp}.json', 'w') as f:
f.write(json.dumps(report, indent=4))
print(
f'Warning : Save JSON request : server_failure_request_{timestamp}.json')
print(
f'Warning : Save JSON response : server_failure_response_{timestamp}.json')
obj['data'] = pd.DataFrame()
return obj
# fallback when no lastPage in report
last_page = report.get('lastPage', True)
if verbose:
print(f'last page status : {last_page}')
if 'errorCode' in report.keys():
print('Error with your statement \n' +
report['errorDescription'])
return {report['errorCode']: report['errorDescription']}
count_elements += report.get('numberOfElements', 0)
total_elements = report.get(
'totalElements', request['settings']['limit'])
if total_elements == 0:
obj['data'] = pd.DataFrame()
print(
'Warning : No data returned & lastPage is False.\nExit the loop - no save file & empty dataframe.')
if debug:
with open(f'report_no_element_{timestamp}.json', 'w') as f:
f.write(json.dumps(report, indent=4))
if verbose:
print(
f'% of total elements retrieved. TotalElements: {report.get("totalElements", "no data")}')
return obj # in case loop happening with empty data, returns empty data
if verbose and total_elements != 0:
print(
f'% of total elements retrieved: {round((count_elements / total_elements) * 100, 2)} %')
if last_page == False and n_results != float('inf'):
if count_elements >= n_results:
last_page = True
data = report['rows']
data_list += deepcopy(data) # do a deepcopy
page_nb += 1
if verbose:
print(f'# of requests : {page_nb}')
# return report
df = self._readData(data_list, anomaly=anomaly,
cols=columns, item_id=item_id)
if save:
timestampReport = round(time.time())
df.to_csv(f'report-{timestampReport}.csv', index=False)
if verbose:
print(
f'Saving data in file : {os.getcwd()}{os.sep}report-{timestampReport}.csv')
obj['data'] = df
if verbose:
print(
f'Report contains {(count_elements / total_elements) * 100} % of the available dimensions')
return obj
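# Example usage (illustrative sketch; the JSON statement is typically exported from the
# Workspace debugger, and the file name below is a placeholder):
#
# myReport = ags.getReport("myRequest.json", n_results="inf")
# myReport["data"]     # pandas DataFrame with the rows retrieved
# myReport["metrics"]  # list of metric IDs found in the request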
def _prepareData(
self,
dataRows: list = None,
reportType: str = "normal",
) -> dict:
"""
Read the data returned by the getReport and returns a dictionary used by the Workspace class.
Arguments:
dataRows : REQUIRED : data rows returned from the Analytics API getReport
reportType : REQUIRED : "normal" or "static"
"""
if dataRows is None:
raise ValueError("Require dataRows")
data_rows = deepcopy(dataRows)
expanded_rows = {}
if reportType == "normal":
for row in data_rows:
expanded_rows[row["itemId"]] = [row["value"]]
expanded_rows[row["itemId"]] += row["data"]
elif reportType == "static":
expanded_rows = data_rows
return expanded_rows
def _decrypteStaticData(
self, dataRequest: dict = None, response: dict = None,resolveColumns:bool=False
) -> dict:
"""
From the request dictionary and the response, decode the data to standardize the reading.
"""
dataRows = []
## retrieve StaticRow ID and segmentID
if len([metric for metric in dataRequest['metricContainer'].get('metricFilters',[]) if metric.get('id','').startswith("STATIC_ROW_COMPONENT")])>0:
if "dateRange" in list(dataRequest['metricContainer'].get('metricFilters',[])[0].keys()):
tableSegmentsRows = {
obj["id"]: obj["dateRange"]
for obj in dataRequest["metricContainer"]["metricFilters"]
if obj["id"].startswith("STATIC_ROW_COMPONENT")
}
elif "segmentId" in list(dataRequest['metricContainer'].get('metricFilters',[])[0].keys()):
tableSegmentsRows = {
obj["id"]: obj["segmentId"]
for obj in dataRequest["metricContainer"]["metricFilters"]
if obj["id"].startswith("STATIC_ROW_COMPONENT")
}
else:
tableSegmentsRows = {
obj["id"]: obj["segmentId"]
for obj in dataRequest["metricContainer"]["metricFilters"]
}
## retrieve place and segmentID
segmentApplied = {}
for obj in dataRequest["metricContainer"]["metricFilters"]:
if obj["id"].startswith("STATIC_ROW") == False:
if obj["type"] == "breakdown":
segmentApplied[obj["id"]] = f"{obj['dimension']}:::{obj['itemId']}"
elif obj["type"] == "segment":
segmentApplied[obj["id"]] = obj["segmentId"]
elif obj["type"] == "dateRange":
segmentApplied[obj["id"]] = obj["dateRange"]
### table columnIds and StaticRow IDs
tableColumnIds = {
obj["columnId"]: obj["filters"][0]
for obj in dataRequest["metricContainer"]["metrics"]
}
### create relations for metrics with Filter on top
filterRelations = {
obj["filters"][0]: obj["filters"][1:]
for obj in dataRequest["metricContainer"]["metrics"]
if len(obj["filters"]) > 1
}
staticRows = set(val for val in tableSegmentsRows.values())
nb_rows = len(staticRows) ## define how many segment used as rows
nb_columns = int(
len(dataRequest["metricContainer"]["metrics"]) / nb_rows
) ## use to detect rows
staticRowsNames = []
for row in staticRows:
if row.startswith("s") and "@AdobeOrg" in row:
segmentDefinition = self.getSegment(row)
staticRowsNames.append(segmentDefinition["name"])
else:
staticRowsNames.append(row)
if resolveColumns:
staticRowDict = {
row: self.getSegment(rowName).get('name',rowName) for row, rowName in zip(staticRows, staticRowsNames)
}
else:
staticRowDict = {
row: rowName for row, rowName in zip(staticRows, staticRowsNames)
}
### metrics
dataRows = defaultdict(list)
for row in staticRowDict: ## iter on the different static rows
for column, data in zip(
response["columns"]["columnIds"], response["summaryData"]["totals"]
):
if tableSegmentsRows[tableColumnIds[column]] == row:
## check translation of metricId with Static Row ID
if row not in dataRows[staticRowDict[row]]:
dataRows[staticRowDict[row]].append(row)
dataRows[staticRowDict[row]].append(data)
## should ends like : {'segmentName' : ['STATIC',123,456]}
return nb_columns, tableColumnIds, segmentApplied, filterRelations, dataRows
def getReport2(
self,
request: Union[dict, IO,RequestCreator] = None,
limit: int = 20000,
n_results: Union[int, str] = "inf",
allowRemoteLoad: str = "default",
useCache: bool = True,
useResultsCache: bool = False,
includeOberonXml: bool = False,
includePredictiveObjects: bool = False,
returnsNone: bool = None,
countRepeatInstances: bool = None,
ignoreZeroes: bool = None,
rsid: str = None,
resolveColumns: bool = True,
save: bool = False,
returnClass: bool = True,
) -> Union[Workspace, dict]:
"""
Return an instance of Workspace that contains the data requested.
Arguments:
request : REQUIRED : either a dictionary or a path to a JSON file that contains the request information.
limit : OPTIONAL : number of results per request (default 20000)
n_results : OPTIONAL : total number of results returned. Use "inf" to return everything (default "inf")
allowRemoteLoad : OPTIONAL : Controls if Oberon should remote load data. Default behavior is true with fallback to false if remote data does not exist
useCache : OPTIONAL : Use caching for faster requests (Do not do any report caching)
useResultsCache : OPTIONAL : Use results caching for faster reporting times (This is a pass through to Oberon which manages the Cache)
includeOberonXml : OPTIONAL : Controls if Oberon XML should be returned in the response - DEBUG ONLY
includePredictiveObjects : OPTIONAL : Controls if platform Predictive Objects should be returned in the response. Only available when using Anomaly Detection or Forecasting- DEBUG ONLY
returnsNone : OPTIONAL: Overwritte the request setting to return None values.
countRepeatInstances : OPTIONAL: Overwrite the request setting to count repeatInstances values.
ignoreZeroes : OPTIONAL : Ignore zeros in the results
rsid : OPTIONAL : Overwrite the ReportSuiteId used for report. Only works if the same components are presents.
resolveColumns: OPTIONAL : automatically resolve columns from ID to name for calculated metrics & segments. Default True. (works on returnClass only)
save : OPTIONAL : If you want to save the data (in JSON or CSV, depending the class is used or not)
returnClass : OPTIONAL : return the class building dataframe and better comprehension of data. (default yes)
"""
if self.loggingEnabled:
self.logger.debug(f"Start getReport")
path = "/reports"
params = {
"allowRemoteLoad": allowRemoteLoad,
"useCache": useCache,
"useResultsCache": useResultsCache,
"includeOberonXml": includeOberonXml,
"includePlatformPredictiveObjects": includePredictiveObjects,
}
        if isinstance(request, dict):
dataRequest = request
elif isinstance(request,RequestCreator):
dataRequest = request.to_dict()
elif ".json" in request:
with open(request, "r") as f:
dataRequest = json.load(f)
else:
raise ValueError("Require a JSON or Dictionary to request data")
### Settings
dataRequest = deepcopy(dataRequest)
dataRequest["settings"]["page"] = 0
dataRequest["settings"]["limit"] = limit
if returnsNone:
dataRequest["settings"]["nonesBehavior"] = "return-nones"
elif dataRequest['settings'].get('nonesBehavior',False) != False:
pass ## keeping current settings
else:
dataRequest["settings"]["nonesBehavior"] = "exclude-nones"
if countRepeatInstances:
dataRequest["settings"]["countRepeatInstances"] = True
elif dataRequest["settings"].get("countRepeatInstances",False) != False:
pass ## keeping current settings
else:
dataRequest["settings"]["countRepeatInstances"] = False
if rsid is not None:
dataRequest["rsid"] = rsid
if ignoreZeroes:
dataRequest.get("statistics",{'ignoreZeroes':True})["ignoreZeroes"] = True
deepCopyRequest = deepcopy(dataRequest)
### Request data
if self.loggingEnabled:
self.logger.debug(f"getReport request: {json.dumps(dataRequest,indent=4)}")
res = self.connector.postData(
self.endpoint_company + path, data=dataRequest, params=params
)
if "rows" in res.keys():
reportType = "normal"
if self.loggingEnabled:
self.logger.debug(f"reportType: {reportType}")
dataRows = res.get("rows")
columns = res.get("columns")
summaryData = res.get("summaryData")
totalElements = res.get("numberOfElements")
lastPage = res.get("lastPage", True)
if float(len(dataRows)) >= float(n_results):
## force end of loop when a limit is set on n_results
lastPage = True
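            ## paginate: keep requesting the next page until the API flags the last page or n_results is reached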
            while not lastPage:
dataRequest["settings"]["page"] += 1
res = self.connector.postData(
self.endpoint_company + path, data=dataRequest, params=params
)
dataRows += res.get("rows")
lastPage = res.get("lastPage", True)
totalElements += res.get("numberOfElements")
if float(len(dataRows)) >= float(n_results):
## force end of loop when a limit is set on n_results
lastPage = True
if self.loggingEnabled:
self.logger.debug(f"loop for report over: {len(dataRows)} results")
            if not returnClass:
return dataRows
### create relation between metrics and filters applied
columnIdRelations = {
obj["columnId"]: obj["id"]
for obj in dataRequest["metricContainer"]["metrics"]
}
filterRelations = {
obj["columnId"]: obj["filters"]
for obj in dataRequest["metricContainer"]["metrics"]
if len(obj.get("filters", [])) > 0
}
metricFilters = {}
metricFilterTranslation = {}
            for metricFilter in dataRequest["metricContainer"].get("metricFilters", []):
                filterId = metricFilter["id"]
                if metricFilter["type"] == "breakdown":
                    filterValue = f"{metricFilter['dimension']}:{metricFilter['itemId']}"
                    metricFilters[metricFilter["dimension"]] = metricFilter["itemId"]
                if metricFilter["type"] == "dateRange":
                    filterValue = f"{metricFilter['dateRange']}"
                    metricFilters[filterValue] = filterValue
                if metricFilter["type"] == "segment":
                    filterValue = f"{metricFilter['segmentId']}"
                    if filterValue.startswith("s") and "@AdobeOrg" in filterValue:
                        seg = self.getSegment(filterValue)
                        metricFilters[filterValue] = seg["name"]
                metricFilterTranslation[filterId] = filterValue
metricColumns = {}
for colId in columnIdRelations.keys():
metricColumns[colId] = columnIdRelations[colId]
for element in filterRelations.get(colId, []):
metricColumns[colId] += f":::{metricFilterTranslation[element]}"
else:
            if not returnClass:
return res
reportType = "static"
if self.loggingEnabled:
self.logger.debug(f"reportType: {reportType}")
columns = None ## no "columns" key in response
summaryData = res.get("summaryData")
(
nb_columns,
tableColumnIds,
segmentApplied,
filterRelations,
dataRows,
) = self._decrypteStaticData(dataRequest=dataRequest, response=res,resolveColumns=resolveColumns)
### Findings metrics
metricFilters = {}
metricColumns = []
for i in range(nb_columns):
metric: str = res["columns"]["columnIds"][i]
metricName = metric.split(":::")[0]
if metricName.startswith("cm"):
calcMetric = self.getCalculatedMetric(metricName)
metricName = calcMetric["name"]
correspondingStatic = tableColumnIds[metric]
## if the static row has a filter
if correspondingStatic in list(filterRelations.keys()):
## finding segment applied to metrics
for element in filterRelations[correspondingStatic]:
segId:str = segmentApplied[element]
metricName += f":::{segId}"
metricFilters[segId] = segId
if segId.startswith("s") and "@AdobeOrg" in segId:
seg = self.getSegment(segId)
metricFilters[segId] = seg["name"]
metricColumns.append(metricName)
### ending with ['metric1','metric2 + segId',...]
### preparing data points
if self.loggingEnabled:
self.logger.debug(f"preparing data")
preparedData = self._prepareData(dataRows, reportType=reportType)
if returnClass:
if self.loggingEnabled:
self.logger.debug(f"returning Workspace class")
## Using the class
data = Workspace(
responseData=preparedData,
dataRequest=deepCopyRequest,
columns=columns,
summaryData=summaryData,
analyticsConnector=self,
reportType=reportType,
                metrics=metricColumns,  ## for both normal and static report types
metricFilters=metricFilters,
resolveColumns=resolveColumns,
)
if save:
data.to_csv()
return data | AdobeLibManual678 | /AdobeLibManual678-4.3.tar.gz/AdobeLibManual678-4.3/aanalytics2/aanalytics2.py | aanalytics2.py |
import json
import time
from copy import deepcopy
# Non standard libraries
import requests
from aanalytics2 import config, token_provider
class AdobeRequest:
"""
    Handle requests to Adobe Analytics, taking care that each request has a valid token.
    Attributes:
        restTime : Time to wait before sending a new request after hitting the "too many requests" status code.
"""
loggingEnabled = False
def __init__(self,
config_object: dict = config.config_object,
header: dict = config.header,
verbose: bool = False,
retry: int = 0,
loggingEnabled:bool=False,
logger:object=None
) -> None:
"""
        Set the connector to be used for handling requests to Adobe Analytics
Arguments:
config_object : OPTIONAL : Require the importConfig file to have been used.
header : OPTIONAL : header of the config modules
verbose : OPTIONAL : display comment on the request.
            retry : OPTIONAL : number of times to retry failed GET requests
            loggingEnabled : OPTIONAL : if logging is enabled for that instance.
logger : OPTIONAL : instance of the logger created
"""
if config_object['org_id'] == '':
raise Exception(
'You have to upload the configuration file with importConfigFile method.')
self.config = deepcopy(config_object)
self.header = deepcopy(header)
self.loggingEnabled = loggingEnabled
self.logger = logger
self.restTime = 30
self.retry = retry
if self.config['token'] == '' or time.time() > self.config['date_limit']:
if 'scopes' in self.config.keys() and self.config.get('scopes',None) is not None:
self.connectionType = 'oauthV2'
token_and_expiry = token_provider.get_oauth_token_and_expiry_for_config(config=self.config, verbose=verbose)
elif self.config.get("private_key",None) is not None or self.config.get("pathToKey",None) is not None:
self.connectionType = 'jwt'
token_and_expiry = token_provider.get_jwt_token_and_expiry_for_config(config=self.config, verbose=verbose)
token = token_and_expiry['token']
expiry = token_and_expiry['expiry']
self.token = token
if self.loggingEnabled:
self.logger.info("token retrieved : {token}")
self.config['token'] = token
self.config['date_limit'] = time.time() + expiry - 500
self.header.update({'Authorization': f'Bearer {token}'})
def _checkingDate(self) -> None:
"""
Checking if the token is still valid
"""
now = time.time()
if now > self.config['date_limit']:
if self.loggingEnabled:
self.logger.warning("token expired. Trying to retrieve a new token")
if self.connectionType =='oauthV2':
token_and_expiry = token_provider.get_oauth_token_and_expiry_for_config(config=self.config)
elif self.connectionType == 'jwt':
token_and_expiry = token_provider.get_jwt_token_and_expiry_for_config(config=self.config)
token = token_and_expiry['token']
if self.loggingEnabled:
self.logger.info(f"new token retrieved : {token}")
self.config['token'] = token
self.config['date_limit'] = time.time() + token_and_expiry['expiry'] - 500
self.header.update({'Authorization': f'Bearer {token}'})
def getData(self, endpoint: str, params: dict = None, data: dict = None, headers: dict = None, *args, **kwargs):
"""
Abstraction for getting data
"""
internRetry = kwargs.get("retry", self.retry)
self._checkingDate()
if self.loggingEnabled:
self.logger.info(f"endpoint: {endpoint}")
self.logger.info(f"params: {params}")
if headers is None:
headers = self.header
if params is None and data is None:
res = requests.get(
endpoint, headers=headers)
elif params is not None and data is None:
res = requests.get(
endpoint, headers=headers, params=params)
elif params is None and data is not None:
res = requests.get(
endpoint, headers=headers, data=data)
elif params is not None and data is not None:
res = requests.get(endpoint, headers=headers, params=params, data=data)
if kwargs.get("verbose", False):
print(f"request URL : {res.request.url}")
print(f"statut_code : {res.status_code}")
try:
            while res.status_code == 429:
if kwargs.get("verbose", False):
print(f'Too many requests: retrying in {self.restTime} seconds')
if self.loggingEnabled:
self.logger.info(f"Too many requests: retrying in {self.restTime} seconds")
time.sleep(self.restTime)
res = requests.get(endpoint, headers=headers, params=params, data=data)
res_json = res.json()
        except Exception:
## handling 1.4
if self.loggingEnabled:
self.logger.warning(f"handling exception as res.json() cannot be managed")
self.logger.warning(f"status code: {res.status_code}")
if kwargs.get('legacy',False):
try:
return json.loads(res.text)
                except Exception:
if self.loggingEnabled:
self.logger.error(f"GET method failed: {res.status_code}, {res.text}")
return res.text
else:
if self.loggingEnabled:
self.logger.error(f"text: {res.text}")
res_json = {'error': 'Request Error'}
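        ## internal retry: if the response is an error, recursively retry the GET,
        ## decrementing the retry counter each time, with a 30 second pause between attempts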
while internRetry > 0:
if self.loggingEnabled:
self.logger.warning(f"Trying again with internal retry")
if kwargs.get("verbose", False):
print('Retry parameter activated')
print(f'{internRetry} retry left')
if 'error' in res_json.keys():
time.sleep(30)
res_json = self.getData(endpoint, params=params, data=data, headers=headers, retry=internRetry-1, **kwargs)
return res_json
return res_json
def postData(self, endpoint: str, params: dict = None, data: dict = None, headers: dict = None, *args, **kwargs):
"""
Abstraction for posting data
"""
self._checkingDate()
if headers is None:
headers = self.header
if params is None and data is None:
res = requests.post(endpoint, headers=headers)
elif params is not None and data is None:
res = requests.post(endpoint, headers=headers, params=params)
elif params is None and data is not None:
res = requests.post(endpoint, headers=headers, data=json.dumps(data))
elif params is not None and data is not None:
res = requests.post(endpoint, headers=headers, params=params, data=json.dumps(data))
        try:
            res_json = res.json()
            if res.status_code == 429 or res_json.get('error_code', None) == "429050":
                res_json['status_code'] = 429
        except Exception:
            ## handling 1.4
            if kwargs.get('legacy', False):
                try:
                    return json.loads(res.text)
                except Exception:
                    if self.loggingEnabled:
                        self.logger.error(f"POST method failed: {res.status_code}, {res.text}")
                    return res.text
            res_json = {'error': getattr(res, 'status_code', 'Request Error')}
        return res_json
def patchData(self, endpoint: str, params: dict = None, data=None, headers: dict = None, *args, **kwargs):
"""
Abstraction for patching data
"""
self._checkingDate()
if headers is None:
headers = self.header
if params is not None and data is None:
res = requests.patch(endpoint, headers=headers, params=params)
elif params is None and data is not None:
res = requests.patch(endpoint, headers=headers, data=json.dumps(data))
elif params is not None and data is not None:
res = requests.patch(endpoint, headers=headers, params=params, data=json.dumps(data))
        try:
            while res.status_code == 429:
                if kwargs.get("verbose", False):
                    print(f'Too many requests: retrying in {self.restTime} seconds')
                time.sleep(self.restTime)
                res = requests.patch(endpoint, headers=headers, params=params, data=json.dumps(data))
            res_json = res.json()
        except Exception:
            if self.loggingEnabled:
                self.logger.error(f"PATCH method failed: {res.status_code}, {res.text}")
            res_json = {'error': getattr(res, 'status_code', 'Request Error')}
        return res_json
def putData(self, endpoint: str, params: dict = None, data=None, headers: dict = None, *args, **kwargs):
"""
Abstraction for putting data
"""
self._checkingDate()
if headers is None:
headers = self.header
if params is not None and data is None:
res = requests.put(endpoint, headers=headers, params=params)
elif params is None and data is not None:
res = requests.put(endpoint, headers=headers, data=json.dumps(data))
elif params is not None and data is not None:
            res = requests.put(endpoint, headers=headers, params=params, data=json.dumps(data))
        try:
            res_json = res.json()
        except Exception:
            if self.loggingEnabled:
                self.logger.error(f"PUT method failed: {res.status_code}, {res.text}")
            res_json = {'error': getattr(res, 'status_code', 'Request Error')}
        return res_json
def deleteData(self, endpoint: str, params: dict = None, headers: dict = None, *args, **kwargs):
"""
Abstraction for deleting data
"""
self._checkingDate()
if headers is None:
headers = self.header
if params is None:
res = requests.delete(endpoint, headers=headers)
elif params is not None:
res = requests.delete(endpoint, headers=headers, params=params)
        try:
            while res.status_code == 429:
                if kwargs.get("verbose", False):
                    print(f'Too many requests: retrying in {self.restTime} seconds')
                time.sleep(self.restTime)
                res = requests.delete(endpoint, headers=headers, params=params)
            status_code = res.status_code
        except Exception:
            if self.loggingEnabled:
                self.logger.error(f"DELETE method failed: {res.status_code}, {res.text}")
            status_code = {'error': 'Request Error'}
        return status_code
import os
import time
from typing import Dict, Union
import json
import jwt
import requests
from aanalytics2 import configs
def get_jwt_token_and_expiry_for_config(config: dict, verbose: bool = False, save: bool = False, *args, **kwargs) -> \
Dict[str, str]:
"""
Retrieve the token by using the information provided by the user during the import importConfigFile function.
    Arguments :
        verbose : OPTIONAL : Default False. If set to True, print information.
        save : OPTIONAL : Default False. If set to True, save the token in a "token.txt" file in the working directory.
"""
private_key = configs.get_private_key_from_config(config)
header_jwt = {
'cache-control': 'no-cache',
'content-type': 'application/x-www-form-urlencoded'
}
    now_plus_24h = int(time.time()) + 24 * 60 * 60  ## JWT expiry set to 24 hours from now
jwt_payload = {
'exp': now_plus_24h,
'iss': config['org_id'],
'sub': config['tech_id'],
'https://ims-na1.adobelogin.com/s/ent_analytics_bulk_ingest_sdk': True,
'aud': f'https://ims-na1.adobelogin.com/c/{config["client_id"]}'
}
encoded_jwt = _get_jwt(payload=jwt_payload, private_key=private_key)
payload = {
'client_id': config['client_id'],
'client_secret': config['secret'],
'jwt_token': encoded_jwt
}
response = requests.post(config['jwtTokenEndpoint'], headers=header_jwt, data=payload)
json_response = response.json()
try:
token = json_response['access_token']
except KeyError:
print('Issue retrieving token')
print(json_response)
raise Exception(json.dumps(json_response,indent=2))
    expiry = json_response['expires_in'] / 1000  ## "expires_in" is returned in milliseconds
if save:
with open('token.txt', 'w') as f:
f.write(token)
print(f'token has been saved here: {os.getcwd()}{os.sep}token.txt')
if verbose:
print('token valid till : ' + time.ctime(time.time() + expiry))
return {'token': token, 'expiry': expiry}
def get_oauth_token_and_expiry_for_config(config:dict,verbose:bool=False,save:bool=False)->Dict[str,str]:
"""
Retrieve the access token by using the OAuth information provided by the user
during the import importConfigFile function.
Arguments :
config : REQUIRED : Configuration object.
        verbose : OPTIONAL : Default False. If set to True, print information.
        save : OPTIONAL : Default False. If set to True, save the token in a "token.txt" file in the working directory.
"""
if config is None:
raise ValueError("config dictionary is required")
oauth_payload = {
"grant_type": "client_credentials",
"client_id": config["client_id"],
"client_secret": config["secret"],
"scope": config["scopes"]
}
response = requests.post(
config["oauthTokenEndpointV2"], data=oauth_payload)
json_response = response.json()
if 'access_token' in json_response.keys():
token = json_response['access_token']
expiry = json_response["expires_in"]
    else:
        raise Exception(json.dumps(json_response, indent=2))
if save:
with open('token.txt', 'w') as f:
f.write(token)
if verbose:
print('token valid till : ' + time.ctime(time.time() + expiry))
return {'token': token, 'expiry': expiry}
def _get_jwt(payload: dict, private_key: str) -> str:
"""
    Ensure that JWT encoding returns a str: PyJWT versions < 2.0.0 returned bytes, while versions >= 2.0.0 return str.
"""
token: Union[str, bytes] = jwt.encode(payload, private_key, algorithm='RS256')
if isinstance(token, bytes):
return token.decode('utf-8')
return token | AdobeLibManual678 | /AdobeLibManual678-4.3.tar.gz/AdobeLibManual678-4.3/aanalytics2/token_provider.py | token_provider.py |
from dataclasses import dataclass
import json
@dataclass
class Project:
"""
    This dataclass extracts the information retrieved from the getProject method.
    It flattens the elements and gives you insights into what your project contains.
"""
def __init__(self, projectDict: dict = None,rsidSuffix:bool=False):
"""
        Instantiate the class.
        Arguments:
            projectDict : REQUIRED : the dictionary of the project (returned by the getProject method)
            rsidSuffix : OPTIONAL : If you want the rsid appended as a suffix to dimensions and metrics.
"""
if projectDict is None:
raise Exception("require a dictionary with project information. Retrievable via getProject")
self.id: str = projectDict.get('id', '')
self.name: str = projectDict.get('name', '')
self.description: str = projectDict.get('description', '')
self.rsid: str = projectDict.get('rsid', '')
self.ownerName: str = projectDict['owner'].get('name', '')
self.ownerId: int = projectDict['owner'].get('id', '')
        self.ownerEmail: str = projectDict['owner'].get('login', '')
self.template: bool = projectDict.get('companyTemplate', False)
self.version: str = None
if 'definition' in projectDict.keys():
definition: dict = projectDict['definition']
self.version: str = definition.get('version',None)
self.curation: bool = definition.get('isCurated', False)
if definition.get('device', 'desktop') != 'cell':
self.reportType = "desktop"
infos = self._findPanelsInfos(definition['workspaces'][0])
self.nbPanels: int = infos["nb_Panels"]
self.nbSubPanels: int = 0
self.subPanelsTypes: list = []
for panel in infos["panels"]:
self.nbSubPanels += infos["panels"][panel]['nb_subPanels']
self.subPanelsTypes += infos["panels"][panel]['subPanels_types']
self.elementsUsed: dict = self._findElements(definition['workspaces'][0],rsidSuffix=rsidSuffix)
self.nbElementsUsed: int = len(self.elementsUsed['dimensions']) + len(
self.elementsUsed['metrics']) + len(self.elementsUsed['segments']) + len(
self.elementsUsed['calculatedMetrics'])
else:
self.reportType = "mobile"
def __str__(self)->str:
return json.dumps(self.to_dict(),indent=4)
def __repr__(self)->str:
return json.dumps(self.to_dict(),indent=4)
def _findPanelsInfos(self, workspace: dict = None) -> dict:
"""
Return a dict of the different information for each Panel.
Arguments:
workspace : REQUIRED : the workspace dictionary.
"""
dict_data = {'workspace_id': workspace['id']}
dict_data['nb_Panels'] = len(workspace['panels'])
dict_data['panels'] = {}
for panel in workspace['panels']:
dict_data["panels"][panel['id']] = {}
dict_data["panels"][panel['id']]['name'] = panel.get('name', 'Default Name')
dict_data["panels"][panel['id']]['nb_subPanels'] = len(panel['subPanels'])
dict_data["panels"][panel['id']]['subPanels_types'] = [subPanel['reportlet']['type'] for subPanel in
panel['subPanels']]
return dict_data
    def _findElements(self, workspace: dict, rsidSuffix: bool = False) -> dict:
        """
        Returns a dictionary of the dimensions, metrics, segments, report suites and calculated metrics used in the workspace.
Arguments :
workspace : REQUIRED : the workspace dictionary.
"""
dict_elements: dict = {'dimensions': [], "metrics": [], 'segments': [], "reportSuites": [],
'calculatedMetrics': []}
tmp_rsid = "" # default empty value
for panel in workspace['panels']:
if "reportSuite" in panel.keys():
dict_elements['reportSuites'].append(panel['reportSuite']['id'])
if rsidSuffix:
tmp_rsid = f"::{panel['reportSuite']['id']}"
elif "rsid" in panel.keys():
dict_elements['reportSuites'].append(panel['rsid'])
if rsidSuffix:
tmp_rsid = f"::{panel['rsid']}"
filters: list = panel.get('segmentGroups',[])
if len(filters) > 0:
for element in filters:
typeElement = element['componentOptions'][0].get('component',{}).get('type','')
idElement = element['componentOptions'][0].get('component',{}).get('id','')
if typeElement == "Segment":
dict_elements['segments'].append(idElement)
if typeElement == "DimensionItem":
clean_id: str = idElement[:idElement.find(
'::')] ## cleaning this type of element : 'variables/evar7.6::3000623228'
dict_elements['dimensions'].append(clean_id)
for subPanel in panel['subPanels']:
if subPanel['reportlet']['type'] == "FreeformReportlet":
reportlet = subPanel['reportlet']
rows = reportlet['freeformTable']
if 'dimension' in rows.keys():
dict_elements['dimensions'].append(f"{rows['dimension']['id']}{tmp_rsid}")
if len(rows["staticRows"]) > 0:
for row in rows["staticRows"]:
                            ## collect dimensions in a temp list and deduplicate them before loading, to avoid counting them multiple times across rows
temp_list_dim = []
componentType: str = row.get('component',{}).get('type','')
if componentType == "DimensionItem":
temp_list_dim.append(f"{row['component']['id']}{tmp_rsid}")
elif componentType == "Segments" or componentType == "Segment":
dict_elements['segments'].append(row['component']['id'])
elif componentType == "Metric":
dict_elements['metrics'].append(f"{row['component']['id']}{tmp_rsid}")
elif componentType == "CalculatedMetric":
dict_elements['calculatedMetrics'].append(row['component']['id'])
if len(temp_list_dim) > 0:
temp_list_dim = list(set([el[:el.find('::')] for el in temp_list_dim]))
for dim in temp_list_dim:
dict_elements['dimensions'].append(f"{dim}{tmp_rsid}")
columns = reportlet['columnTree']
for node in columns['nodes']:
temp_data = self._recursiveColumn(node,tmp_rsid=tmp_rsid)
dict_elements['calculatedMetrics'] += temp_data['calculatedMetrics']
dict_elements['segments'] += temp_data['segments']
dict_elements['metrics'] += temp_data['metrics']
if len(temp_data['dimensions']) > 0:
for dim in set(temp_data['dimensions']):
dict_elements['dimensions'].append(dim)
dict_elements['metrics'] = list(set(dict_elements['metrics']))
dict_elements['segments'] = list(set(dict_elements['segments']))
dict_elements['dimensions'] = list(set(dict_elements['dimensions']))
dict_elements['calculatedMetrics'] = list(set(dict_elements['calculatedMetrics']))
return dict_elements
def _recursiveColumn(self, node: dict = None, temp_data: dict = None,tmp_rsid:str=""):
"""
        Recursive function to fetch elements in the column stack.
        tmp_rsid : OPTIONAL : empty by default; if an rsid is passed, its value will be appended to dimensions and metrics
"""
if temp_data is None:
temp_data: dict = {'dimensions': [], "metrics": [], 'segments': [], "reportSuites": [],
'calculatedMetrics': []}
componentType: str = node.get('component',{}).get('type','')
if componentType == "Metric":
temp_data['metrics'].append(f"{node['component']['id']}{tmp_rsid}")
elif componentType == "CalculatedMetric":
temp_data['calculatedMetrics'].append(node['component']['id'])
elif componentType == "Segment":
temp_data['segments'].append(node['component']['id'])
elif componentType == "DimensionItem":
old_id: str = node['component']['id']
new_id: str = old_id[:old_id.find('::')]
temp_data['dimensions'].append(f"{new_id}{tmp_rsid}")
if len(node['nodes']) > 0:
for new_node in node['nodes']:
temp_data = self._recursiveColumn(new_node, temp_data=temp_data,tmp_rsid=tmp_rsid)
return temp_data
def to_dict(self) -> dict:
"""
transform the class into a dictionary
"""
obj = {
'id': self.id,
'name': self.name,
'description': self.description,
'rsid': self.rsid,
'ownerName': self.ownerName,
'ownerId': self.ownerId,
'ownerEmail': self.ownerEmail,
'template': self.template,
'reportType':self.reportType,
'curation': self.curation or False,
'version': self.version or None,
}
add_object = {}
if hasattr(self, 'nbPanels'):
add_object = {
'curation': self.curation,
'version': self.version,
'nbPanels': self.nbPanels,
'nbSubPanels': self.nbSubPanels,
'subPanelsTypes': self.subPanelsTypes,
'nbElementsUsed': self.nbElementsUsed,
'dimensions': self.elementsUsed['dimensions'],
'metrics': self.elementsUsed['metrics'],
'segments': self.elementsUsed['segments'],
'calculatedMetrics': self.elementsUsed['calculatedMetrics'],
'rsids': self.elementsUsed['reportSuites'],
}
full_obj = {**obj, **add_object}
return full_obj | AdobeLibManual678 | /AdobeLibManual678-4.3.tar.gz/AdobeLibManual678-4.3/aanalytics2/projects.py | projects.py |
import gzip
import io
from concurrent import futures
from pathlib import Path
from typing import IO, Union
# Non standard libraries
import pandas as pd
import requests
from aanalytics2 import config, connector
class DIAPI:
"""
    This class provides an easy way to use the Data Insertion API.
    You can initialize it with the required information to be present in the request and then choose to send a POST or GET request.
Arguments to instantiate:
rsid : REQUIRED : Report Suite ID
tracking_server : REQUIRED : tracking server for tracking.
example : "xxxx.sc.omtrdc.net"
"""
def __init__(self, rsid: str = None, tracking_server: str = None):
"""
Arguments:
rsid : REQUIRED : Report Suite ID
tracking_server : REQUIRED : tracking server for tracking.
"""
if rsid is None:
raise Exception("Expecting a ReportSuite ID (rsid)")
self.rsid = rsid
if tracking_server is None:
raise Exception("Expecting a tracking server")
self.tracking_server = tracking_server
try:
import importlib.resources as pkg_resources
path = pkg_resources.path("aanalytics2", "supported_tags.pickle")
except ImportError:
# Try backported to PY<37 with pkg_resources.
try:
import pkg_resources
path = pkg_resources.resource_filename(
"aanalytics2", "supported_tags.pickle")
except:
print('no supported_tags file')
try:
with path as f:
self.REFERENCE = pd.read_pickle(f)
except:
self.REFERENCE = None
def getMethod(self, pageName: str = None, g: str = None, pe: str = None, pev1: str = None, pev2: str = None, events: str = None, **kwargs):
"""
Use the GET method to send information to Adobe Analytics
Arguments:
pageName : REQUIRED : The Web page name.
g : REQUIRED : The Web page URL
pe : OPTIONAL : For custom link tracking (Type of link ("d", "e", or "o"))
if selected, require "pev1" or "pev2", additionally pageName is set to Null
pev1 : OPTIONAL : The link's HREF. For custom links, page values are ignored.
pev2 : OPTIONAL : Name of link.
events : OPTIONAL : If you want to pass some events
Possible kwargs:
- see the SUPPORTED_TAGS attributes. Tags should be in the supported format.
"""
if pageName is None and g is None:
raise Exception("Expecting a pageName or g arguments")
if pe is not None and pe not in ["d", "e", "o"]:
raise Exception('Expecting pe argument to be ("d", "e", or "o")')
header = {'Content-Type': 'application/json'}
endpoint = f"https://{self.tracking_server}/b/ss/{self.rsid}/0"
params = {"pageName": pageName, "g": g,
"pe": pe, "pev1": pev1, "pev2": pev2, "events": events, **kwargs}
res = requests.get(endpoint, params=params, headers=header)
return res
def postMethod(self, pageName: str = None, pageURL: str = None, linkType: str = None, linkURL: str = None, linkName: str = None, events: str = None, **kwargs):
"""
Use the POST method to send information to Adobe Analytics
Arguments:
pageName : REQUIRED : The Web page name.
pageURL : REQUIRED : The Web page URL
linkType : OPTIONAL : For custom link tracking (Type of link ("d", "e", or "o"))
if selected, require "pev1" or "pev2", additionally pageName is set to Null
linkURL : OPTIONAL : The link's HREF. For custom links, page values are ignored.
linkName : OPTIONAL : Name of link.
events : OPTIONAL : If you want to pass some events
Possible kwargs:
- see the SUPPORTED_TAGS attributes. Tags should be in the supported format.
"""
if pageName is None and pageURL is None:
raise Exception("Expecting a pageName or pageURL argument")
if linkType is not None and linkType not in ["d", "e", "o"]:
            raise Exception('Expecting linkType argument to be ("d", "e", or "o")')
header = {'Content-Type': 'application/xml'}
endpoint = f"https://{self.tracking_server}/b/ss//6"
dictionary = {"pageName": pageName, "pageURL": pageURL,
"linkType": linkType, "linkURL": linkURL, "linkName": linkName, "events": events, "reportSuite": self.rsid, **kwargs}
import dicttoxml as dxml
myxml = dxml.dicttoxml(
dictionary, custom_root='request', attr_type=False)
xml_data = myxml.decode()
res = requests.post(endpoint, data=xml_data, headers=header)
return res
class Bulkapi:
"""
This is the bulk API from Adobe Analytics.
By default, the file are sent to the global endpoints for auto-routing.
If you wish to select a specific endpoint, you can modify it during instantiation.
It requires you to upload some adobeio configuration file through the main aanalytics2 module.
Arguments:
endpoint : OPTIONAL : by default using https://analytics-collection.adobe.io
"""
def __init__(self, endpoint: str = "https://analytics-collection.adobe.io", config_object: dict = config.config_object):
"""
Initialize the Bulk API connection. Returns an object with methods to send data to Analytics.
Arguments:
endpoint : REQUIRED : Endpoint to send data to. Default to analytics-collection.adobe.io
possible values, on top of the default choice are:
- https://analytics-collection-va7.adobe.io (US)
- https://analytics-collection-nld2.adobe.io (EU)
config_object : REQUIRED : config object containing the different information to send data.
"""
self.endpoint = endpoint
try:
import importlib.resources as pkg_resources
path = pkg_resources.path(
"aanalytics2", "CSV_Column_and_Query_String_Reference.pickle")
except ImportError:
try:
# Try backported to PY<37 `importlib_resources`.
import pkg_resources
path = pkg_resources.resource_filename(
"aanalytics2", "CSV_Column_and_Query_String_Reference.pickle")
except:
print('no CSV_Column_and_Query_string_Reference file')
try:
with path as f:
self.REFERENCE = pd.read_pickle(f)
except:
self.REFERENCE = None
# if no token has been generated.
self.connector = connector.AdobeRequest()
self.header = self.connector.header
self.header["x-adobe-vgid"] = "ingestion"
del self.header["Content-Type"]
self._createdFiles = []
    def validation(self, file: str = None, encoding: str = 'utf-8', **kwargs):
"""
Send the file to a validation endpoint. Return the response object from requests.
Argument:
            file : REQUIRED : path to the file to validate (plain text or already gzipped).
            encoding : OPTIONAL : type of encoding used for the file.
Possible kwargs:
compress_level : handle the compression level, from 0 (no compression) to 9 (slow but more compressed). default 5.
"""
compress_level = kwargs.get("compress_level", 5)
if file is None:
raise Exception("Expecting a file")
path = "/aa/collect/v1/events/validate"
if file.endswith(".gz") == False:
with open(file, "r",encoding=encoding) as f:
content = f.read()
data = gzip.compress(content.encode('utf-8'),
compresslevel=compress_level)
filename = f"{file}.gz"
elif file.endswith(".gz"):
filename = file
with open(file, "rb") as f:
data = f.read()
res = requests.post(self.endpoint + path, files={"file": (None, data)},
headers=self.header)
return res
def generateTemplate(self, includeAdv: bool = False, returnDF: bool = False, save: bool = True):
"""
Generate a CSV file with minimum fields.
Arguments:
includeAdv : OPTIONAL : Include advanced fields in the csv (pe & queryString). Not included by default to avoid confusion for new users. (Default False)
returnDF : OPTIONAL : Return a pandas dataFrame if you want to work directly with a data frame.(default False)
save : OPTIONAL : Save the file created directly in your working folder.
"""
## 2 rows being created
string = """timestamp,marketingCloudVisitorID,events,pageName,pageURL,reportSuiteID,userAgent,pe,queryString\ntimestampValuePOSIX/Epoch Time (e.g. 1486769029) or ISO-8601 (e.g. 2017-02-10T16:23:49-07:00),marketingCloudVisitorIDValue,eventsValue,pageNameValue,pageURLValue,reportSuiteIDValue,userAgentValue,peValue,queryStringValue
"""
data = io.StringIO(string)
df = pd.read_csv(data, sep=',')
        if not includeAdv:
df.drop(["pe", "queryString"], axis=1, inplace=True)
if save:
df.to_csv('template.csv', index=False)
if returnDF:
return df
def _checkFiles(self, file: str = None,encoding:str = "utf-8"):
"""
Internal method that check content and format of the file
"""
if file.endswith(".gz"):
return file
else: # if sending not gzipped file.
new_folder = Path('tmp/')
new_folder.mkdir(exist_ok=True)
with open(file, "r",encoding=encoding) as f:
content = f.read()
new_path = new_folder / f"{file}.gz"
with gzip.open(Path(new_path), 'wb') as f:
f.write(content.encode('utf-8'))
# save the filename to delete
self._createdFiles.append(new_path)
return new_path
    def sendFiles(self, files: Union[list, str] = None, encoding: str = 'utf-8', **kwargs):
"""
Method to send the file(s) through the Bulk API. Returns a list with the different status file sent.
Arguments:
            files : REQUIRED : file(s) to be sent to the analytics collection server. It can be a list or the name of a single file.
                If a list is sent, we assume that each file is to be sent in a different visitor group.
                If files are not gzipped, we will compress them and save them as gz in the folder.
encoding : OPTIONAL : if encoding is different that default utf-8.
possible kwargs:
            workers : maximum number of workers for parallel processing. (default 4)
"""
path = "/aa/collect/v1/events"
if files is None:
raise Exception("Expecting a file")
compress_level = kwargs.get("compress_level", 5)
files_gz = list()
        if isinstance(files, list):
for file in files:
fileName = self._checkFiles(file,encoding=encoding)
files_gz.append(fileName)
        elif isinstance(files, str):
fileName = self._checkFiles(files,encoding=encoding)
files_gz.append(fileName)
vgid_headers = [f"ingestion_{x}" for x in range(len(files_gz))]
list_headers = [{**self.header, 'x-adobe-vgid': vgid}
for vgid in vgid_headers]
list_urls = [self.endpoint + path for x in range(len(files_gz))]
list_files = ({"file": (None, open(Path(file), "rb").read())}
for file in files_gz) # generator for files
workers_input = kwargs.get("workers", 4)
workers = max(1, workers_input)
with futures.ThreadPoolExecutor(workers) as executor:
res = executor.map(lambda x, y, z: requests.post(
x, headers=y, files=z), list_urls, list_headers, list_files)
list_res = [response.json() for response in res]
# cleaning temp folder
if len(self._createdFiles) > 0:
for file in self._createdFiles:
file_path = Path(file)
file_path.unlink()
self._createdFiles = []
tmp = Path('tmp/')
tmp.rmdir()
return list_res | AdobeLibManual678 | /AdobeLibManual678-4.3.tar.gz/AdobeLibManual678-4.3/aanalytics2/ingestion.py | ingestion.py |
import pandas as pd
from aanalytics2 import config, connector
from typing import Union
class LegacyAnalytics:
"""
Class that will help you realize basic requests to the old API 1.4 endpoints
"""
def __init__(self,company_name:str=None,config:dict=config.config_object)->None:
"""
Instancialize the Legacy Analytics wrapper.
"""
if company_name is None:
raise Exception("Require a company name")
self.connector = connector.AdobeRequest(config_object=config)
self.token = self.connector.token
self.endpoint = "https://api.omniture.com/admin/1.4/rest"
        self.header = {
'Accept': 'application/json',
'Authorization': f'Bearer {self.token}',
'X-ADOBE-DMA-COMPANY': company_name
}
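    ## Example (sketch): assumes a configuration file was already imported via
    ## aanalytics2.importConfigFile(). "Company.GetReportSuites" is one of the 1.4 API methods:
    ##   legacy = LegacyAnalytics(company_name="myCompany")
    ##   suites = legacy.getData("/", method="Company.GetReportSuites")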
def getData(self,path:str="/",method:str=None,params:dict=None)->dict:
"""
Use the GET method to the parameter used.
Arguments:
path : REQUIRED : If you need a specific path (default "/")
method : OPTIONAL : if you want to pass the method directly there for the parameter.
params : OPTIONAL : If you need to pass parameter to your url, use dictionary i.e. : {"param":"value"}
"""
        if params is not None and not isinstance(params, dict):
            raise TypeError("Require a dictionary")
        myParams = {}
        myParams.update(**params or {})
        if method is not None:
            myParams['method'] = method
res = self.connector.getData(self.endpoint + path,params=myParams,headers=self.header,legacy=True)
return res
def postData(self,path:str="/",method:str=None,params:dict=None,data:Union[dict,list]=None)->dict:
"""
Use the POST method to the parameter used.
Arguments:
path : REQUIRED : If you need a specific path (default "/")
method : OPTIONAL : if you want to pass the method directly there for the parameter.
params : OPTIONAL : If you need to pass parameter to your url, use dictionary i.e. : {"param":"value"}
data : OPTIONAL : Usually required to pass the dictionary or list to the request
"""
        if params is not None and not isinstance(params, dict):
            raise TypeError("Require a dictionary")
        if data is not None and not isinstance(data, (dict, list)):
            raise TypeError("data should be a dictionary or a list")
        myParams = {}
        myParams.update(**params or {})
        if method is not None:
            myParams['method'] = method
res = self.connector.postData(self.endpoint + path,params=myParams, data=data,headers=self.header,legacy=True)
return res | AdobeLibManual678 | /AdobeLibManual678-4.3.tar.gz/AdobeLibManual678-4.3/aanalytics2/aanalytics14.py | aanalytics14.py |
# AdonisAI
Official Website: [Click Here](https://adonis-ai.herokuapp.com)
Official Instagram Page: [Click Here](https://www.instagram.com/_jarvisai_)

1. What is AdonisAI?
2. Prerequisite
3. Getting Started- How to use it?
4. What it can do (Features it supports)
5. Future / Request Features
6. What's new?
7. Contribute
8. Contact me
9. Donate
10. Thank me on-
## 1. What is AdonisAI?
AdonisAI is an advanced version of [JarvisAI](https://pypi.org/project/JarvisAI/). AdonisAI is a Python module able to perform tasks such as chatbot and assistant functionality. It provides base functionality for any assistant application. This library is built using Tensorflow, Pytorch, Transformers and other open-source libraries and frameworks. You are welcome to contribute to this project to make it more powerful.
This project is created for those who are interested in building virtual assistants. It generally takes a lot of time to write the code for a virtual assistant from scratch. So, I have built a library called "Adonis", which gives you easy functionality to build your own virtual assistant.
**AdonisAI is a more powerful and lightweight version of https://pypi.org/project/JarvisAI/**
## 2. Prerequisite
- Get your Free API key from https://adonis-ai.herokuapp.com
- Only Python (> 3.6) is required to use it.
- To contribute to the project: Python is the only prerequisite for basic scripting; Machine Learning and Deep Learning knowledge will help this model do tasks like AI/ML. Read the "How to contribute" section of this page.
## 3. Getting Started- How to use it?
- Install the latest version-
`pip install AdonisAI`
It will install all the required packages automatically.
- You need only this piece of code-
```
# import the engine and the I/O options (import path assumed from how
# `AdonisAI.InputOutput` is referenced later in this README)
from AdonisAI import AdonisEngine, InputOutput

# create your own function
# RULES (Optional)-
# It must contain the parameter 'feature_command' (whatever input you provide when the AI asks for input will be passed to this function)
# Return is optional
# If you want to provide a return value, it should only return text (str)
def pprint(feature_command="custom feature (What ever input you provide when AI ask for input will be passed to this function)"):
# write your code here to do something with the command
# perform some tasks
# return is optional
return feature_command + ' Executed'
obj = AdonisEngine(bot_name='alexa',
input_mechanism=InputOutput.speech_to_text_deepspeech_streaming,
output_mechanism=[InputOutput.text_output, InputOutput.text_to_speech],
backend_tts_api='pyttsx3',
wake_word_detection_status=True,
wake_word_detection_mechanism=InputOutput.speech_to_text_deepspeech_streaming,
shutdown_command='shutdown',
secret_key='your_secret_key')
# Check the existing list of commands; you cannot reuse an existing command when registering your function
print(obj.check_registered_command())
# Register your function (Optional)
obj.register_feature(feature_obj=pprint, feature_command='custom feature')
# Start the AI in the background. It will run forever until you stop it manually.
obj.engine_start()
```
**What now?**
It will start your AI; it will ask you for input and produce output accordingly.
You can configure the `input_mechanism` and `output_mechanism` parameters for voice or text input/output, as in the sketch below.
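For example, a minimal voice-in/text-only-out variant of the setup above (a sketch that reuses only names from the example; `wake_word_detection_status=False` is an assumption for disabling wake-word listening):
```
from AdonisAI import AdonisEngine, InputOutput

obj = AdonisEngine(bot_name='alexa',
                   input_mechanism=InputOutput.speech_to_text_deepspeech_streaming,
                   output_mechanism=[InputOutput.text_output],  # text only, no text-to-speech
                   backend_tts_api='pyttsx3',
                   wake_word_detection_status=False,  # assumption: disables wake-word listening
                   shutdown_command='shutdown',
                   secret_key='your_secret_key')
obj.engine_start()
```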
### Parameters-

# 4. What it can do (Features it supports)-
1. Currently, it supports only the English language.
2. Supports voice and text input/output.
3. Supports AI-based voice input as well as voice input using the Google API.
### 4.1. Supported Commands-

### 4.2. Supported Input/Output Methods (Which option do I need to choose?)-

# 5. Future/Request Features-
**WIP**
**You tell me**
# 6. What's new-
1. AdonisAI==1.0: Initial Release.
2. AdonisAI==1.1: Added news and weather features. Added AdonisAI.InputOutput.wake_word_detection_mechanism.
3. AdonisAI==1.2: Added new input mechanism (AdonisAI.InputOutput.speech_to_text_deepspeech_streaming) fast and free. And new features (jokes, about).
4. AdonisAI==1.3: Added New feature (send whatsapp, open website, play on youtube, send email).
5. AdonisAI==1.4: Added new feature (AI Based Chatbot Added, from now you need Secret key for AdonisAI, it's used for security purpose. Get your free key from https://adonis-ai.herokuapp.com).
6. AdonisAI==1.5: Major Bug Fix from version 1.4. *[DO NOT USE AdonisAI==1.4]*
7. AdonisAI==1.6: New features added (screenshot, photo click, youtube video download, play games, covid updates, internet speed check)
8. AdonisAI==1.7: Bug Fixes from version 1.6. *[DO NOT USE AdonisAI==1.6]*
# 7. Contribute-
Instructions Coming Soon
# 8. Contact me-
- [Instagram](https://www.instagram.com/dipesh_pal17)
- [YouTube](https://www.youtube.com/dipeshpal17)
# 9. Donate-
[Donate and Contribute to run me this project, and buy a domain](https://www.buymeacoffee.com/dipeshpal)
**_Feel free to use my code, don't forget to mention credit. All the contributors will get credits in this repo._**
**_Mention below line for credits-_**
Credits-
- https://jarvis-ai-api.herokuapp.com/
- https://github.com/Dipeshpal/Jarvis_AI/
- https://www.youtube.com/dipeshpal17
- https://www.instagram.com/dipesh_pal17/
# 10. Thank me on-
- Follow me on Instagram: https://www.instagram.com/dipesh_pal17/
- Subscribe me on YouTube: https://www.youtube.com/dipeshpal17 | AdonisAI | /AdonisAI-1.7.tar.gz/AdonisAI-1.7/README.md | README.md |





[](https://travis-ci.org/MaximeChallon/AdresseParser)

# AdresseParser
A Python package for parsing and comparing French addresses.
# Getting Started
The package is available on [PyPI](https://pypi.org/project/AdresseParser).
You can install it with pip:
```bash
pip install AdresseParser
```
Example usage in a Python console:
```bash
>>> from AdresseParser import AdresseParser
>>> adr_parser = AdresseParser()
>>> result = adr_parser.parse("88 rue de rivoli 75002 paris")
>>> print(result)
{'numero': '88', 'indice': None, 'rue': {'type': 'RUE', 'nom': 'RIVOLI'}, 'code_postal': '75002', 'ville': {'arrondissement': 2, 'nom': 'PARIS'}, 'departement': {'numero': 75, 'nom': 'Paris'}, 'region': 'รle-de-France', 'pays': 'France'}
>>> print(result['rue'])
{'type': 'RUE', 'nom': 'RIVOLI'}
>>> print(result['ville']['arrondissement'])
2
```
# Return
```json
{
"numero": "str",
"indice": "str",
"rue":{
"type": "str",
"nom": "str"
},
"code_postal": "str",
"ville": {
"arrondissement": "int",
"nom": "str"
},
"departement": {
"numero": "str",
"nom": "str"
},
"region": "str",
"pays": "France"
}
```
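For instance, the parsed dictionary can be reassembled into a normalised one-line address (a sketch that only uses the keys documented above):
```python
from AdresseParser import AdresseParser

adr_parser = AdresseParser()
adr = adr_parser.parse("88 rue de rivoli 75002 paris")

# Rebuild a normalised single-line address from the parsed fields
normalised = "{num} {rtype} {rname}, {cp} {ville}".format(
    num=adr["numero"],
    rtype=adr["rue"]["type"],
    rname=adr["rue"]["nom"],
    cp=adr["code_postal"],
    ville=adr["ville"]["nom"],
)
print(normalised)  # 88 RUE RIVOLI, 75002 PARIS
```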
| AdresseParser | /AdresseParser-1.0.2.tar.gz/AdresseParser-1.0.2/README.md | README.md |
Example usage:
```python
from AdroitFisherman.DoubleLinkedListWithoutHeadNode.Double import DoubleLinkedListWithoutHeadNode_double
if __name__ == '__main__':
operate=0
test=DoubleLinkedListWithoutHeadNode_double()
while operate<16:
print("1:Init linear list 2:Destroy linear list 3:Clear Linear list", end='\n')
print("4:Is list empty 5:Get list length 6:Get elem's value", end='\n')
print("7:Get elem's index 8:Get elem's prior elem 9:Get elem's next elem", end='\n')
print("10:Add elem to the first position 11:Add elem to the last position 12:Insert elem into list", end='\n')
print("13:Delete elem 14:View list 15:View list by reverse order", end='\n')
operate = int(input("please choose operation options:"))
if operate==1:
test.init_list()
elif operate==2:
test.destroy_list()
elif operate==3:
test.clear_list()
elif operate==4:
if test.list_empty()==True:
print("empty",end='\n')
else:
print("not empty",end='\n')
elif operate==5:
print(f"length:{test.list_length()}",end='\n')
elif operate==6:
index=int(input("please input elem's position:"))
print(f"elem value:{test.get_elem(index)}")
elif operate==7:
elem=float(input("please input elem's value:"))
print("elem position:%d"%test.locate_elem(elem))
elif operate==8:
elem = float(input("please input elem's value:"))
print("prior elem's value:%f" % test.prior_elem(elem))
elif operate==9:
elem = float(input("please input elem's value:"))
print("next elem's value:%f" % test.next_elem(elem))
elif operate==10:
elem = float(input("please input elem's value:"))
test.add_first(elem)
for i in range(0,test.list_length(),1):
print(test.get_elem(i),end='\t')
print(end='\n')
elif operate==11:
elem = float(input("please input elem's value:"))
test.add_after(elem)
for i in range(0,test.list_length(),1):
print(test.get_elem(i),end='\t')
print(end='\n')
elif operate==12:
index = int(input("please input elem's position:"))
elem = float(input("please input elem's value:"))
test.list_insert(index, elem)
for i in range(0,test.list_length(),1):
print(test.get_elem(i),end='\t')
print(end='\n')
elif operate==13:
index = int(input("please input elem's position:"))
test.list_delete(index)
for i in range(0,test.list_length(),1):
print(test.get_elem(i),end='\t')
print(end='\n')
elif operate==14:
for i in range(0,test.list_length(),1):
print(test.get_elem(i),end='\t')
print(end='\n')
elif operate==15:
test.traverse_list_by_reverse_order()
``` | AdroitFisherman | /AdroitFisherman-0.0.22.tar.gz/AdroitFisherman-0.0.22/README.md | README.md |

# The Adsorber Program
[](https://docs.python.org/3/)
[](https://github.com/GardenGroupUO/Adsorber)
[](https://pypi.org/project/Adsorber/)
[](https://anaconda.org/GardenGroupUO/adsorber)
[](https://adsorber.readthedocs.io/en/latest/)
[](https://www.gnu.org/licenses/agpl-3.0.en.html)
[](https://lgtm.com/projects/g/GardenGroupUO/Adsorber/context:python)
Authors: Dr. Geoffrey R. Weal and Dr. Anna L. Garden (University of Otago, Dunedin, New Zealand)
Group page: https://blogs.otago.ac.nz/annagarden/
## What is Adsorber
Adsorber is designed to create a number of models that have adsorbates adsorbed to various top, bridge, three-fold, and four-fold site on a cluster or surface model.
## Installation
It is recommended to read the installation page before using the Adsorber program.
[adsorber.readthedocs.io/en/latest/Installation.html](https://adsorber.readthedocs.io/en/latest/Installation.html)
Note that you can install Adsorber through ``pip3`` and ``conda``.
Jmol is also used for looking at your cluster/surface model with adsorbed atoms and molecules upon it. You can see how to install and use it at [Installing and Using ASE GUI and Jmol](https://adsorber.readthedocs.io/en/latest/External_programs_that_will_be_useful_to_install_for_using_Adsorber.html).
## Output files that are created by Adsorber
Adsorber will adsorb atoms and molecules on various binding sites across your cluster or surface model. These include top, bridge, three-fold, and four-fold sites. An example of a COOH molecule adsorbed to a corner top-site on a Cu<sub>78</sub> cluster is shown below
<p align="center">
<img src="https://github.com/GardenGroupUO/Adsorber/blob/main/Documentation/source/Images/COOH_site_1_rotation_0.png">
</p>
## Where can I find the documentation for Adsorber
All the information about this program is found online at [adsorber.readthedocs.io/en/latest/](https://adsorber.readthedocs.io/en/latest/). Click the button below to also see the documentation:
[](https://adsorber.readthedocs.io/en/latest/)
## The ``Adsorber`` Program is a "work in progress"
This program is definitely a "work in progress". I have made it as easy to use as possible, but there are always oversights in program development, and some parts may not be as easy to use as they could be. If you have any issues with the program, or you think there are better/easier ways to use and implement things in ``Adsorber``, feel free to email Geoffrey about these ([email protected]). Feedback is very much welcome!
## About
<div align="center">
| Python | [](https://docs.python.org/3/) |
|:----------------------:|:-------------------------------------------------------------:|
| Repositories | [](https://github.com/GardenGroupUO/Adsorber) [](https://pypi.org/project/Adsorber/) [](https://anaconda.org/GardenGroupUO/adsorber) |
| Documentation | [](https://adsorber.readthedocs.io/en/latest/) |
| Tests | [](https://lgtm.com/projects/g/GardenGroupUO/Adsorber/context:python)
| License | [](https://www.gnu.org/licenses/agpl-3.0.en.html) |
| Authors | Geoffrey R. Weal, Dr. Anna L. Garden |
| Group Website | https://blogs.otago.ac.nz/annagarden/ |
</div>
| Adsorber | /Adsorber-1.10.tar.gz/Adsorber-1.10/README.md | README.md |
<!--
<p align="center">
<img src="https://github.com/RCCS-CaptureTeam/Adsorption_Breakthrough_Analysis/raw/main/docs/source/logo.png" height="150">
</p>
-->
<h1 align="center">
AdsorptionBreakthroughAnalysis
</h1>
<p align="center">
<a href="https://github.com/RCCS-CaptureTeam/Adsorption_Breakthrough_Analysis/actions/workflows/tests.yml">
<img alt="Tests" src="https://github.com/RCCS-CaptureTeam/Adsorption_Breakthrough_Analysis/workflows/tests.yml/badge.svg" />
</a>
<a href="https://pypi.org/project/AdsorptionBreakthroughAnalysis">
<img alt="PyPI" src="https://img.shields.io/pypi/v/AdsorptionBreakthroughAnalysis" />
</a>
<a href="https://pypi.org/project/AdsorptionBreakthroughAnalysis">
<img alt="PyPI - Python Version" src="https://img.shields.io/pypi/pyversions/AdsorptionBreakthroughAnalysis" />
</a>
<a href="https://github.com/RCCS-CaptureTeam/Adsorption_Breakthrough_Analysis/blob/main/LICENSE">
<img alt="PyPI - License" src="https://img.shields.io/pypi/l/AdsorptionBreakthroughAnalysis" />
</a>
<a href='https://AdsorptionBreakthroughAnalysis.readthedocs.io/en/latest/?badge=latest'>
<img src='https://readthedocs.org/projects/AdsorptionBreakthroughAnalysis/badge/?version=latest' alt='Documentation Status' />
</a>
<a href="https://codecov.io/gh/RCCS-CaptureTeam/Adsorption_Breakthrough_Analysis/branch/main">
<img src="https://codecov.io/gh/RCCS-CaptureTeam/Adsorption_Breakthrough_Analysis/branch/main/graph/badge.svg" alt="Codecov status" />
</a>
<a href="https://github.com/cthoyt/cookiecutter-python-package">
<img alt="Cookiecutter template from @cthoyt" src="https://img.shields.io/badge/Cookiecutter-snekpack-blue" />
</a>
<a href='https://github.com/psf/black'>
<img src='https://img.shields.io/badge/code%20style-black-000000.svg' alt='Code style: black' />
</a>
<a href="https://github.com/RCCS-CaptureTeam/Adsorption_Breakthrough_Analysis/blob/main/.github/CODE_OF_CONDUCT.md">
<img src="https://img.shields.io/badge/Contributor%20Covenant-2.1-4baaaa.svg" alt="Contributor Covenant"/>
</a>
</p>
# Adsorption Breakthrough Analysis
This program is used to analyse the breakthrough curves generated by adsorption in the rig created by Dr E.G (in the lab as of 1st of September 2022).
The folders include:
* Code
* Contains the python script
* Explaining program
* This contains a jupyter notebook explaining how to use the Python script and some extra information if needed
* Generating results
* Contains a simple jupyter notebook with the essentials needed to run and produce the required outputs
## ๐ Package interaction
There are two ways to interact with this package.
<b> 1. Local installation </b>
- Install via pip, (`pip install AdsorptionBreakthroughAnalysis`)
- Cloning the github repo (`git clone https://github.com/dm937/Adsorption_Breakthrough_Analysis/`)
- Use the jupyter notebook locally. (Check `Explaining program/Explaining_program.ipynb`)
<b> 2. Using the online notebook (easy) </b>
- Using the [online notebook](https://deepnote.com/workspace/fmcil-1f244322-b560-46a9-bfe3-cb29fad834c7/project/AdsorptionBreakthroughAnalysis-06bd4f69-f127-42b0-bbc2-792ba35155d4/%2FExplaining_program.ipynb)
If you are unfamiliar with pip then we recommend using online notebook.
## Usage
The program takes in MS and coriolis readings and then creates a dataframe containing only the relevant breakthrough data.
This is done through the use of classes: each part of the experiment becomes an object containing the related data. For example, 14%_CO2_UiO66_sample may be one object.
To create an object, set up the ExperimentalSetup dictionary with the relevant values; the MS and coriolis files must be in the same folder and passed to the ExperimentalSetup as well. The object is then created by calling the class and passing in the relevant conditions.
Once a blank and a sample object are created, you can call the standard output function to produce the standard set of results, as sketched below.
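A minimal sketch of that workflow (all names here are assumptions inferred from the description above; see the "Explaining program" notebook for the exact API):
```python
# Sketch only: `Experiment` and `standard_output` are assumed names inferred from
# the description above -- check the "Explaining program" notebook for the real API.
from AdsorptionBreakthroughAnalysis import Experiment, standard_output  # assumed imports

# One dictionary of conditions per part of the experiment; the MS and coriolis
# files must sit in the same folder as the script/notebook.
blank_setup = {"ms_file": "blank_MS.csv", "coriolis_file": "blank_coriolis.csv"}
sample_setup = {"ms_file": "sample_MS.csv", "coriolis_file": "sample_coriolis.csv"}

blank = Experiment(**blank_setup)    # blank run
sample = Experiment(**sample_setup)  # e.g. a 14%_CO2_UiO66_sample run

standard_output(blank, sample)       # produces the standard set of results
```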
This is all explained further (with functions/methods called in code cells) in the "Explaining program" notebook
## Acknowledgements
This work is part of the PrISMa Project (299659), funded through the ACT Programme (Accelerating CCS Technologies, Horizon 2020 Project 294766). Financial contributions from the Department for Business, Energy & Industrial Strategy (BEIS) together with extra funding from the NERC and EPSRC Research Councils, United Kingdom, the Research Council of Norway (RCN), the Swiss Federal Office of Energy (SFOE), and the U.S. Department of Energy are gratefully acknowledged. Additional financial support from TOTAL and Equinor is also gratefully acknowledged. This work is also part of the USorb-DAC Project, which is supported by a grant from The Grantham Foundation for the Protection of the Environment to RMIโs climate tech accelerator program, Third Derivative.
### โ๏ธ License
The code in this package is licensed under the MIT License.
<!--
### ๐ Citation
Citation goes here!
-->
<!--
### ๐ Support
This project has been supported by the following organizations (in alphabetical order):
- [Harvard Program in Therapeutic Science - Laboratory of Systems Pharmacology](https://hits.harvard.edu/the-program/laboratory-of-systems-pharmacology/)
-->
<!--
### ๐ฐ Funding
This project has been supported by the following grants:
| Funding Body | Program | Grant |
|----------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------|-----------------|
| DARPA | [Automating Scientific Knowledge Extraction (ASKE)](https://www.darpa.mil/program/automating-scientific-knowledge-extraction) | HR00111990009 |
-->
### Cookiecutter
This package was created with [@audreyfeldroy](https://github.com/audreyfeldroy)'s
[cookiecutter](https://github.com/cookiecutter/cookiecutter) package using [@cthoyt](https://github.com/cthoyt)'s
[cookiecutter-snekpack](https://github.com/cthoyt/cookiecutter-snekpack) template.
## For Developers
<details>
<summary>See developer instructions</summary>
### Development Installation
To install in development mode, use the following:
```bash
$ git clone https://github.com/RCCS-CaptureTeam/Adsorption_Breakthrough_Analysis.git
$ cd Adsorption_Breakthrough_Analysis
$ pip install -e .
```
### Testing
After cloning the repository and installing `tox` with `pip install tox`, the unit tests in the `tests/` folder can be
run reproducibly with:
```shell
$ tox
```
Additionally, these tests are automatically re-run with each commit in a [GitHub Action](https://github.com/RCCS-CaptureTeam/Adsorption_Breakthrough_Analysis/actions?query=workflow%3ATests).
### Building the Documentation
The documentation can be built locally using the following:
```shell
$ git clone https://github.com/RCCS-CaptureTeam/Adsorption_Breakthrough_Analysis.git
$ cd Adsorption_Breakthrough_Analysis
$ tox -e docs
$ open docs/build/html/index.html
```
The documentation automatically installs the package as well as the `docs`
extra specified in the [`setup.cfg`](setup.cfg). `sphinx` plugins
like `texext` can be added there. Additionally, they need to be added to the
`extensions` list in [`docs/source/conf.py`](docs/source/conf.py).
### Making a Release
After installing the package in development mode and installing
`tox` with `pip install tox`, the commands for making a new release are contained within the `finish` environment
in `tox.ini`. Run the following from the shell:
```shell
$ tox -e finish
```
This script does the following:
1. Uses [Bump2Version](https://github.com/c4urself/bump2version) to switch the version number in the `setup.cfg`,
`src/AdsorptionBreakthroughAnalysis/version.py`, and [`docs/source/conf.py`](docs/source/conf.py) to not have the `-dev` suffix
2. Packages the code in both a tar archive and a wheel using [`build`](https://github.com/pypa/build)
3. Uploads to PyPI using [`twine`](https://github.com/pypa/twine). Be sure to have a `.pypirc` file configured to avoid the need for manual input at this
step
4. Push to GitHub. You'll need to make a release associated with the commit where the version was bumped.
5. Bump the version to the next patch. If you made big changes and want to bump the minor version instead, you can
   run `tox -e bumpversion -- minor` afterwards.
</details>
| AdsorptionBreakthroughAnalysis | /AdsorptionBreakthroughAnalysis-0.0.2.tar.gz/AdsorptionBreakthroughAnalysis-0.0.2/README.md | README.md |
AdsorptionBreakthroughAnalysis |release| Documentation
======================================================
Cookiecutter
------------
This package was created with the `cookiecutter <https://github.com/cookiecutter/cookiecutter>`_
package using `cookiecutter-snekpack <https://github.com/cthoyt/cookiecutter-snekpack>`_ template.
It comes with the following:
- Standard `src/` layout
- Declarative setup with `setup.cfg` and `pyproject.toml`
- Reproducible tests with `pytest` and `tox`
- A command line interface with `click`
- A vanity CLI via Python entry points
- Version management with `bumpversion`
- Documentation build with `sphinx`
- Testing of code quality with `flake8` in `tox`
- Testing of documentation coverage with `docstr-coverage` in `tox`
- Testing of documentation format and build in `tox`
- Testing of package metadata completeness with `pyroma` in `tox`
- Testing of MANIFEST correctness with `check-manifest` in `tox`
- Testing of optional static typing with `mypy` in `tox`
- A `py.typed` file so other packages can use your type hints
- Automated running of tests on each push with GitHub Actions
- Configuration for `ReadTheDocs <https://readthedocs.org/>`_
- A good base `.gitignore` generated from `gitignore.io <https://gitignore.io>`_.
- A pre-formatted README with badges
- A pre-formatted LICENSE file with the MIT License (you can change this to whatever you want, though)
- A pre-formatted CONTRIBUTING guide
- Automatic tool for releasing to PyPI with ``tox -e finish``
- A copy of the `Contributor Covenant <https://www.contributor-covenant.org>`_ as a basic code of conduct
Table of Contents
-----------------
.. toctree::
:maxdepth: 2
:caption: Getting Started
:name: start
installation
usage
cli
Indices and Tables
------------------
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`
| AdsorptionBreakthroughAnalysis | /AdsorptionBreakthroughAnalysis-0.0.2.tar.gz/AdsorptionBreakthroughAnalysis-0.0.2/docs/source/index.rst | index.rst |
Installation
============
The most recent release can be installed from
`PyPI <https://pypi.org/project/AdsorptionBreakthroughAnalysis>`_ with:
.. code-block:: shell
$ pip install AdsorptionBreakthroughAnalysis
The most recent code and data can be installed directly from GitHub with:
.. code-block:: shell
$ pip install git+https://github.com/RCCS-CaptureTeam/Adsorption_Breakthrough_Analysis.git
To install in development mode, use the following:
.. code-block:: shell
$ git clone https://github.com/RCCS-CaptureTeam/Adsorption_Breakthrough_Analysis.git
$ cd Adsorption_Breakthrough_Analysis
$ pip install -e .
| AdsorptionBreakthroughAnalysis | /AdsorptionBreakthroughAnalysis-0.0.2.tar.gz/AdsorptionBreakthroughAnalysis-0.0.2/docs/source/installation.rst | installation.rst |
============
tfidfpackage
============
.. image:: https://img.shields.io/pypi/v/term_frequency.svg
:target: https://pypi.python.org/pypi/term_frequency
.. image:: https://img.shields.io/travis/dsmall/term_frequency.svg
:target: https://travis-ci.org/dsmall/term_frequency
.. image:: https://readthedocs.org/projects/term-frequency/badge/?version=latest
:target: https://term-frequency.readthedocs.io/en/latest/?badge=latest
:alt: Documentation Status
tfidfpackage provides TF-IDF based keyword extraction for PDF documents, with a simple Kivy viewer (see the `term_frequency` modules).
* Free software: MIT license
* Documentation: https://term-frequency.readthedocs.io.
Features
--------
* TODO
Credits
-------
This package was created with Cookiecutter_ and the `audreyr/cookiecutter-pypackage`_ project template.
.. _Cookiecutter: https://github.com/audreyr/cookiecutter
.. _`audreyr/cookiecutter-pypackage`: https://github.com/audreyr/cookiecutter-pypackage
| Adsys-PDFReaderTool | /Adsys_PDFReaderTool-0.0.1.tar.gz/Adsys_PDFReaderTool-0.0.1/README.rst | README.rst |
.. highlight:: shell
============
Contributing
============
Contributions are welcome, and they are greatly appreciated! Every little bit
helps, and credit will always be given.
You can contribute in many ways:
Types of Contributions
----------------------
Report Bugs
~~~~~~~~~~~
Report bugs at https://github.com/dsmall/term_frequency/issues.
If you are reporting a bug, please include:
* Your operating system name and version.
* Any details about your local setup that might be helpful in troubleshooting.
* Detailed steps to reproduce the bug.
Fix Bugs
~~~~~~~~
Look through the GitHub issues for bugs. Anything tagged with "bug" and "help
wanted" is open to whoever wants to implement it.
Implement Features
~~~~~~~~~~~~~~~~~~
Look through the GitHub issues for features. Anything tagged with "enhancement"
and "help wanted" is open to whoever wants to implement it.
Write Documentation
~~~~~~~~~~~~~~~~~~~
tfidfpackage could always use more documentation, whether as part of the
official tfidfpackage docs, in docstrings, or even on the web in blog posts,
articles, and such.
Submit Feedback
~~~~~~~~~~~~~~~
The best way to send feedback is to file an issue at https://github.com/dsmall/term_frequency/issues.
If you are proposing a feature:
* Explain in detail how it would work.
* Keep the scope as narrow as possible, to make it easier to implement.
* Remember that this is a volunteer-driven project, and that contributions
are welcome :)
Get Started!
------------
Ready to contribute? Here's how to set up `term_frequency` for local development.
1. Fork the `term_frequency` repo on GitHub.
2. Clone your fork locally::
$ git clone [email protected]:your_name_here/term_frequency.git
3. Install your local copy into a virtualenv. Assuming you have virtualenvwrapper installed, this is how you set up your fork for local development::
$ mkvirtualenv term_frequency
$ cd term_frequency/
$ python setup.py develop
4. Create a branch for local development::
$ git checkout -b name-of-your-bugfix-or-feature
Now you can make your changes locally.
5. When you're done making changes, check that your changes pass flake8 and the
tests, including testing other Python versions with tox::
$ flake8 term_frequency tests
$ python setup.py test or py.test
$ tox
To get flake8 and tox, just pip install them into your virtualenv.
6. Commit your changes and push your branch to GitHub::
$ git add .
$ git commit -m "Your detailed description of your changes."
$ git push origin name-of-your-bugfix-or-feature
7. Submit a pull request through the GitHub website.
Pull Request Guidelines
-----------------------
Before you submit a pull request, check that it meets these guidelines:
1. The pull request should include tests.
2. If the pull request adds functionality, the docs should be updated. Put
your new functionality into a function with a docstring, and add the
feature to the list in README.rst.
3. The pull request should work for Python 2.7, 3.4, 3.5 and 3.6, and for PyPy. Check
https://travis-ci.org/dsmall/term_frequency/pull_requests
and make sure that the tests pass for all supported Python versions.
Tips
----
To run a subset of tests::
$ python -m unittest tests.test_term_frequency
Deploying
---------
A reminder for the maintainers on how to deploy.
Make sure all your changes are committed (including an entry in HISTORY.rst).
Then run::
$ bumpversion patch # possible: major / minor / patch
$ git push
$ git push --tags
Travis will then deploy to PyPI if tests pass.
| Adsys-PDFReaderTool | /Adsys_PDFReaderTool-0.0.1.tar.gz/Adsys_PDFReaderTool-0.0.1/CONTRIBUTING.rst | CONTRIBUTING.rst |
.. highlight:: shell
============
Installation
============
Stable release
--------------
To install tfidfpackage, run this command in your terminal:
.. code-block:: console
$ pip install term_frequency
This is the preferred method to install tfidfpackage, as it will always install the most recent stable release.
If you don't have `pip`_ installed, this `Python installation guide`_ can guide
you through the process.
.. _pip: https://pip.pypa.io
.. _Python installation guide: http://docs.python-guide.org/en/latest/starting/installation/
From sources
------------
The sources for tfidfpackage can be downloaded from the `Github repo`_.
You can either clone the public repository:
.. code-block:: console
$ git clone git://github.com/dsmall/term_frequency
Or download the `tarball`_:
.. code-block:: console
$ curl -OL https://github.com/dsmall/term_frequency/tarball/master
Once you have a copy of the source, you can install it with:
.. code-block:: console
$ python setup.py install
.. _Github repo: https://github.com/dsmall/term_frequency
.. _tarball: https://github.com/dsmall/term_frequency/tarball/master
| Adsys-PDFReaderTool | /Adsys_PDFReaderTool-0.0.1.tar.gz/Adsys_PDFReaderTool-0.0.1/docs/installation.rst | installation.rst |
import math
import pprint as pp

import extract
import config
DEFAULT_DIR = config.pathName
normalizedTermFrequency = {}
dictOFIDFNoDuplicates = {}
def run_tfidf(dirname):
    #Here is where load needs to be called; disable other buttons if filename is None
    def create_dirfiles(fName = None):
        if fName is None:
            return "Select File/Folder to Generate PDF Keywords"
        docu = extract.extractTexttoarray(fName)
        docs = []
        for indx in docu:
            docs.append(", ".join(map(str, indx)))
        return docs

    #---Calculate term frequency --
    documents = create_dirfiles(dirname)  # editor's fix: use the dirname argument, not DEFAULT_DIR
#First: tokenize words
dictOfWords = {}
for index, sentence in enumerate(documents):
tokenizedWords = sentence.split(' ')
dictOfWords[index] = [(word,tokenizedWords.count(word)) for word in tokenizedWords]
#print(dictOfWords)
#second: remove duplicates
termFrequency = {}
for i in range(0, len(documents)):
listOfNoDuplicates = []
for wordFreq in dictOfWords[i]:
if wordFreq not in listOfNoDuplicates:
listOfNoDuplicates.append(wordFreq)
termFrequency[i] = listOfNoDuplicates
#print(termFrequency)
#Third: normalized term frequency
#normalizedTermFrequency = {}
for i in range(0, len(documents)):
sentence = dictOfWords[i]
lenOfSentence = len(sentence)
listOfNormalized = []
for wordFreq in termFrequency[i]:
            normalizedFreq = float(wordFreq[1]) / lenOfSentence  # float() avoids Python 2 integer division
listOfNormalized.append((wordFreq[0],normalizedFreq))
normalizedTermFrequency[i] = listOfNormalized
#print(normalizedTermFrequency)
#---Calculate IDF
#First: put all sentences together and tokenze words
allDocuments = ''
for sentence in documents:
allDocuments += sentence + ' '
allDocumentsTokenized = allDocuments.split(' ')
#print(allDocumentsTokenized)
allDocumentsNoDuplicates = []
for word in allDocumentsTokenized:
if word not in allDocumentsNoDuplicates:
allDocumentsNoDuplicates.append(word)
#print(allDocumentsNoDuplicates)
#Calculate the number of documents where the term t appears
dictOfNumberOfDocumentsWithTermInside = {}
for index, vocab in enumerate(allDocumentsNoDuplicates):
count = 0
for sentence in documents:
if vocab in sentence:
count += 1
dictOfNumberOfDocumentsWithTermInside[index] = (vocab, count)
#print(dictOfNumberOfDocumentsWithTermInside)
    #calculate IDF (math is imported at the top of the module)
    #dictOFIDFNoDuplicates = {}
for i in range(0, len(normalizedTermFrequency)):
listOfIDFCalcs = []
for word in normalizedTermFrequency[i]:
for x in range(0, len(dictOfNumberOfDocumentsWithTermInside)):
if word[0] == dictOfNumberOfDocumentsWithTermInside[x][0]:
listOfIDFCalcs.append((word[0],math.log(len(documents)/dictOfNumberOfDocumentsWithTermInside[x][1])))
dictOFIDFNoDuplicates[i] = listOfIDFCalcs
#return normalizedTermFrequency dictOFIDFNoDuplicates
# for word,b in dictOFIDFNoDuplicates.items():
# print(word, ":",b)
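# Editor's note -- the TF-IDF convention implemented above, for reference:
#   tf(t, d)    = count(t, d) / len(d)     (normalized term frequency)
#   idf(t)      = log(N / df(t))           (N documents, df = docs containing t)
#   tfidf(t, d) = tf(t, d) * idf(t)        (combined in PDF_keywords below)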
def PDF_keywords():
run_tfidf(DEFAULT_DIR)
dictOFTF_IDF = {}
bad_chars = [';', ':', '!', "*", "'", ")", ".", "-", "...", "(",',','``']
for i in range(0,len(normalizedTermFrequency)):
listOFTF_IDF = []
TFIDF_Sort = {}
TFsentence = normalizedTermFrequency[i]
IDFsentence = dictOFIDFNoDuplicates[i]
for doc_Keyidx in range(0, len(TFsentence)):
            if TFsentence[doc_Keyidx][0] not in bad_chars and not TFsentence[doc_Keyidx][0].isdigit():
                #listOFTF_IDF.append((TFsentence[x][0],TFsentence[x][1]*IDFsentence[x][1]))
                tf_Generated_Keywords = TFsentence[doc_Keyidx][0]
                tf_keywordscores = TFsentence[doc_Keyidx][1]*IDFsentence[doc_Keyidx][1]
                #Need to format output text of tf_keywords, tf_keywordscores
                listOFTF_IDF.append((tf_Generated_Keywords, tf_keywordscores))
                TFIDF_Sort[tf_Generated_Keywords] = tf_keywordscores
dictOFTF_IDF[i] = listOFTF_IDF
#sort Functionality
# pairs = [(word, tfidf) for word, tfidf in TFIDF_Sort.items()]
# # Why by [1] ?
# pairs.sort(key = lambda p: p[1])
# top_10 = pairs[-20:]
# print("TOP 10 TFIDF")
# pp.pprint(top_10)
# print("BOTTOM 10 TFIDF")
# pp.pprint(pairs[0:20])
return dictOFTF_IDF
if __name__ == '__main__':
run_tfidf(DEFAULT_DIR) | Adsys-PDFReaderTool | /Adsys_PDFReaderTool-0.0.1.tar.gz/Adsys_PDFReaderTool-0.0.1/term_frequency/tf_idf.py | tf_idf.py |
import tf_idf as terms, extract
import kivy, pprint, os, re
from threading import Thread      # needed by run_model (was missing)
from functools import partial     # needed by run_model (was missing)
from kivy.clock import Clock      # needed by run_model (was missing)
from kivy.app import App
from kivy.uix.gridlayout import GridLayout
from kivy.uix.floatlayout import FloatLayout
from kivy.uix.widget import Widget
from kivy.uix.textinput import TextInput
from kivy.uix.button import Button
from kivy.uix.label import Label
from kivy.uix.popup import Popup
from kivy.properties import StringProperty
from kivy.properties import ObjectProperty
from kivy.factory import Factory
import pandas as pd
import config
kivy.require('1.9.1')
class LoadDialog(FloatLayout):
load = ObjectProperty(None)
cancel = ObjectProperty(None)
class PdfReaderMainScreen(Widget):
#instance variables
textinputtext = StringProperty()
dirtext = StringProperty()
#Dont need to pass in FILENAME here consider refactoring method call
files = extract.opendir(config.pathName)
# Why is this returning nothing when it getting output
tf_idf_Keywords = terms.PDF_keywords()
loadfile = ObjectProperty(None)
text_input = ObjectProperty(None)
def __init__(self,currentPage=None,**kwargs):
super(PdfReaderMainScreen, self).__init__(**kwargs)
self.currentPage = 0
self.max = len(PdfReaderMainScreen.tf_idf_Keywords)
self.textinputtext = str(PdfReaderMainScreen.tf_idf_Keywords)
self.dirtext = str(PdfReaderMainScreen.files)
def dismiss_popup(self):
self._popup.dismiss()
def show_load(self):
content = LoadDialog(load=self.load, cancel=self.dismiss_popup)
self._popup = Popup(title="Load file", content=content,
size_hint=( 0.5,None), size=(400, 400))
self._popup.open()
def load(self,path, filename):
# filename = path
#with open(os.path.join(path, filename[0])) as stream:
#self.text_input.text = stream.read()
# result = str(os.path.join(path, filename))
terms.run_tfidf(filename[0])
        extract.opendir(filename[0])  # editor's fix: opendir lives in the extract module
print("FilePath " + str(filename[0]))
self.dismiss_popup()
#Really dont need to return anything here !
return filename[0]
def generate_KeyWord_Btn(self):
str_holder = ""
# pd.set_option('display.max_columns', 100)
# pd.set_option('display.max_rows', 500)
# pd.set_option('display.max_columns', 500)
# df = pd.DataFrame(PdfReaderMainScreen.tf_idf_Keywords[0],columns=['Term',' TDIDF'])
for a_tuple in PdfReaderMainScreen.tf_idf_Keywords[0]: # iterates through each tuple
str_holder+='{}, '.format(*a_tuple)
self.textinputtext = str(str_holder)
self.dirtext = str(PdfReaderMainScreen.files[0])
#self.textinputtext = ' term : {}'.format(str(terms.PDF_keywords()[0]))
def generate_doclist(self):
self.dirtext = str(PdfReaderMainScreen.files[self.currentPage])
def next_Btn(self):
#Utlize instance variable to save the state
if(self.currentPage <= self.max):
try:
print("Current Page %s" % (self.currentPage))
#increment the counter
self.currentPage +=1
# pd.set_option('display.max_columns', 100)
# pd.set_option('display.max_rows', 500)
# pd.set_option('display.max_columns', 500)
#df = pd.DataFrame(PdfReaderMainScreen.tf_idf_Keywords[self.currentPage],columns=['Term','TDIDF'])
str_holder = ""
for a_tuple in PdfReaderMainScreen.tf_idf_Keywords[self.currentPage]: # iterates through each tuple
#Unpack tuple and format with comma
str_holder+='{}, '.format(*a_tuple)
#Unpack tuple and format with fix spaces
#str_holder+='{:<20} {}\n'.format(*a_tuple)
#Display the keywords in GUI/make it viewable
self.textinputtext = str(str_holder)
self.dirtext = str(PdfReaderMainScreen.files[self.currentPage])
except KeyError:
                self.textinputtext = ""  # editor's fix: was a typo (textinputteIxt)
self.currentPage = self.max
print("Set to last Page %s" % (self.max))
return self.currentPage
else:
print("Page %s" % (self.currentPage))
return True
def previous_Btn(self):
if(self.currentPage > 0):
print("Current Page %s" % (self.currentPage))
try:
#decrement the counter
str_holder = ""
#increment the counter
self.currentPage -=1
# pd.set_option('display.max_columns', 100)
# pd.set_option('display.max_rows', 500)
# pd.set_option('display.max_columns', 500)
# df = pd.DataFrame(PdfReaderMainScreen.tf_idf_Keywords[self.currentPage],columns=['Term','TDIDF'])
for a_tuple in PdfReaderMainScreen.tf_idf_Keywords[self.currentPage]: # iterates through each tuple
#Unpack tuple and format with fix spaces
str_holder+='{}, '.format(*a_tuple)
#Display the keywords in GUI/make it viewable
self.textinputtext = str(str_holder)
self.dirtext = str(PdfReaderMainScreen.files[self.currentPage])
except KeyError:
if self.currentPage == self.max:
self.currentPage = self.max
print('decrement the counter"')
print("Previous Page %s" % (self.currentPage))
    def run_model(self):
        # Disable the controls while the model runs in a background thread
        self.generate_KeyWord_Btn.disabled = True
        self.previous_Btn.disabled = True
        self.next_Btn.disabled = True
        t = Thread(target=run, args=())  # 'run' is assumed to be defined elsewhere
        t.start()
        Clock.schedule_interval(partial(self.disable, t), 8)

    def disable(self, t, what):
        # Re-enable the controls once the worker thread has finished
        if not t.isAlive():
            self.load.disabled = False
            self.generate_KeyWord_Btn.disabled = False
            self.previous_Btn.disabled = False
            self.next_Btn.disabled = False
            return False
class PdfReaderUI(Widget):
pass
class PdfReaderApp(App):
def build(self):
return PdfReaderUI()
Factory.register('LoadDialog', cls=LoadDialog)
if __name__ == '__main__':
PdfReaderApp().run() | Adsys-PDFReaderTool | /Adsys_PDFReaderTool-0.0.1.tar.gz/Adsys_PDFReaderTool-0.0.1/term_frequency/PdfReader.py | PdfReader.py |
from pprint import pprint
from collections import defaultdict
import PyPDF2
from os import listdir
from os.path import isfile, join
import pprint as pp
import nltk
from nltk.tokenize import sent_tokenize, word_tokenize
from nltk.corpus import stopwords
import extract
filename = "/Users/dontesmall/Desktop/pdf_test_folder"
CORPUS = extract.extractTexttoarray((filename))
documents = []
for indx in CORPUS:
documents.append(", ".join(map(str, indx)))
# Format of the corpus is that each newline has a new 'document'
# CORPUS = """
# In information retrieval, tf–idf or TFIDF, short for term frequency–inverse document frequency, is a numerical statistic that is intended to reflect how important a word is to a document in a collection or corpus.[1] It is often used as a weighting factor in searches of information retrieval, text mining, and user modeling. The tf–idf value increases proportionally to the number of times a word appears in the document and is offset by the number of documents in the corpus that contain the word, which helps to adjust for the fact that some words appear more frequently in general. Tf–idf is one of the most popular term-weighting schemes today; 83% of text-based recommender systems in digital libraries use tf–idf.
# LeBron Raymone James Sr. (/ləˈbrɒn/; born December 30, 1984), often referred to mononymously as LeBron, is an American professional basketball player for the Los Angeles Lakers of the National Basketball Association (NBA). He is often considered the best basketball player in the world and regarded by some as the greatest player of all time.[1][2][3][4] His accomplishments include four NBA Most Valuable Player Awards, three NBA Finals MVP Awards, and two Olympic gold medals. James has appeared in fifteen NBA All-Star Games and been named NBA All-Star MVP three times. He won the 2008 NBA scoring title, is the all-time NBA playoffs scoring leader, and is fourth in all-time career points scored. He has been voted onto the All-NBA First Team twelve times and the All-Defensive First Team five times.
# Marie Skłodowska Curie (/ˈkjʊəri/;[3] French: [kyʁi]; Polish: [kʲiˈri]; born Maria Salomea Skłodowska;[a] 7 November 1867 – 4 July 1934) was a Polish and naturalized-French physicist and chemist who conducted pioneering research on radioactivity. She was the first woman to win a Nobel Prize, the first person and only woman to win twice, and the only person to win a Nobel Prize in two different sciences. She was part of the Curie family legacy of five Nobel Prizes. She was also the first woman to become a professor at the University of Paris, and in 1995 became the first woman to be entombed on her own merits in the Panthéon in Paris.
# """.strip().lower()
DOC_ID_TO_TF = {}  # doc-id -> {tf: term_freq_map where term_freq_map is word -> percentage of words in doc that is this one,
CORPUS_CONTAINER = str(documents).strip('[]')  # tfidf: ...}
DOCS = CORPUS_CONTAINER.split("\n")  # Documents where the index is the doc id
WORDS = CORPUS_CONTAINER.split()
DF = defaultdict(lambda: 0)
for word in WORDS:
DF[word] += 1
for doc_id, doc in enumerate(DOCS):
#print("HERE IS THE DOCS :" + str(DOCS))
    # Number of times each word shows up in this doc
TF = defaultdict(lambda: 0)
TFIDF = {}
doc_words = doc.split()
word_count = len(doc_words)
# percentage of words in doc that is this one = count of this word in this doc / total number of words in this doc
for word in doc_words:
        # accumulate the raw count for this word
        TF[word] += 1
    for word in TF.keys():
        TF[word] /= float(word_count)  # float() avoids Python 2 integer division
        TFIDF[word] = TF[word] / DF[word]
    # loop over TFIDF to sort it as a list of (word, score) pairs
    pairs = [(word, tfidf) for word, tfidf in TFIDF.items()]
    # sort by tf-idf score (the second tuple element)
    pairs.sort(key = lambda p: p[1])
    top_n = pairs[-15:]
    print("TOP 15 TFIDF")
    pprint(top_n)
    print("BOTTOM 15 TFIDF")
    pprint(pairs[0:15])
DOC_ID_TO_TF[doc_id] = {'tf': TF, 'tfidf': TFIDF}
# pprint(DOC_ID_TO_TF) | Adsys-PDFReaderTool | /Adsys_PDFReaderTool-0.0.1.tar.gz/Adsys_PDFReaderTool-0.0.1/term_frequency/tfidf.py | tfidf.py |
from .events import AddEvent, SetReadingEvent, SetFinishedEvent, ReadEvent, \
KindleEvent
class ReadingStatus(object):
"""An enum representing the three possible progress states of a book.
"""
NOT_STARTED, CURRENT, COMPLETED = xrange(3)
class BookSnapshot(object):
"""A book's state of progress.
Args:
asin: The ASIN of the book
status: The book's ReadingStatus value
progress: An integral value representing the current reading progress.
This value is meaningless unless `status` is CURRENT as progress
is untracked for books not currently being read.
"""
def __init__(self, asin, status=ReadingStatus.NOT_STARTED, progress=None):
self.asin = asin
self.status = status
self.progress = progress
class KindleLibrarySnapshot(object):
"""A snapshot of the state of a Kindle library.
Args:
events: An iterable of ``KindleEvent``s which are applied in sequence
to build the snapshot's state.
"""
def __init__(self, events=()):
self._data = {}
for event in events:
self.process_event(event)
def process_event(self, event):
"""Apply an event to the snapshot instance
"""
if not isinstance(event, KindleEvent):
pass
elif isinstance(event, AddEvent):
self._data[event.asin] = BookSnapshot(event.asin)
elif isinstance(event, SetReadingEvent):
self._data[event.asin].status = ReadingStatus.CURRENT
self._data[event.asin].progress = event.initial_progress
elif isinstance(event, ReadEvent):
self._data[event.asin].progress += event.progress
elif isinstance(event, SetFinishedEvent):
self._data[event.asin].status = ReadingStatus.COMPLETED
else:
raise TypeError
def get_book(self, asin):
"""Return the `BookSnapshot` object associated with `asin`
Raises:
KeyError: If asin not found in current snapshot
"""
return self._data[asin]
def calc_update_events(self, asin_to_progress):
"""Calculate and return an iterable of `KindleEvent`s which, when
applied to the current snapshot, result in the the current snapshot
reflecting the progress state of the `asin_to_progress` mapping.
Functionally, this method generates `AddEvent`s and `ReadEvent`s from
updated Kindle Library state.
Args:
asin_to_progress: A map of book asins to the integral
representation of progress used in the current snapshot.
Returns:
A list of Event objects that account for the changes detected in
the `asin_to_progress`.
"""
new_events = []
for asin, new_progress in asin_to_progress.iteritems():
try:
book_snapshot = self.get_book(asin)
except KeyError:
new_events.append(AddEvent(asin))
else:
if book_snapshot.status == ReadingStatus.CURRENT:
change = new_progress - book_snapshot.progress
if change > 0:
new_events.append(ReadEvent(asin, change))
return new_events | Aduro | /Aduro-0.0.1a0.tar.gz/Aduro-0.0.1a0/aduro/snapshot.py | snapshot.py |
from .events import UpdateEvent
from .snapshot import KindleLibrarySnapshot
from lector.reader import KindleCloudReaderAPI, KindleAPIError
from datetime import datetime
class KindleProgressMgr(object):
"""Manages the Kindle reading progress state held in the the `EventStore`
instance, `store`
Args:
store: An `EventStore` instance containing the past events
kindle_uname: The email associated with the Kindle account
kindle_pword: The password associated with the Kindle account
"""
def __init__(self, store, kindle_uname, kindle_pword):
self.store = store
self._snapshot = KindleLibrarySnapshot(store.get_events())
self._event_buf = []
self.uname = kindle_uname
self.pword = kindle_pword
self.books = None
self.progress = None
@property
def uncommited_events(self):
"""A logically sorted list of `Events` that are have been registered
to be committed to the current object's state but remain uncommitted.
"""
return list(sorted(self._event_buf))
def detect_events(self, max_attempts=3):
"""Returns a list of `Event`s detected from differences in state
between the current snapshot and the Kindle Library.
`books` and `progress` attributes will be set with the latest API
results upon successful completion of the function.
Returns:
If failed to retrieve progress, None
Else, the list of `Event`s
"""
# Attempt to retrieve current state from KindleAPI
for _ in xrange(max_attempts):
try:
with KindleCloudReaderAPI\
.get_instance(self.uname, self.pword) as kcr:
self.books = kcr.get_library_metadata()
self.progress = kcr.get_library_progress()
except KindleAPIError:
continue
else:
break
else:
return None
# Calculate diffs from new progress
progress_map = {book.asin: self.progress[book.asin].locs[1]
for book in self.books}
new_events = self._snapshot.calc_update_events(progress_map)
update_event = UpdateEvent(datetime.now().replace(microsecond=0))
new_events.append(update_event)
self._event_buf.extend(new_events)
return new_events
def register_events(self, events=()):
"""Register `Event` objects in `events` to be committed.
NOTE: This does not automatically commit the events.
A separate `commit_updates` call must be made to make the commit.
"""
self._event_buf.extend(events)
def commit_events(self):
"""Applies all outstanding `Event`s to the internal state
"""
# Events are sorted such that, when applied in order, each event
# represents a logical change in state. That is, an event never requires
# future events' data in order to be parsed.
# e.g. All ADDs must go before START READINGs
# All START READINGs before all READs
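        # e.g. [ReadEvent(a, 5), AddEvent(b)] sorts to [AddEvent(b), ReadEvent(a, 5)]
        # since AddEvent carries a lower weight (editor's note).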
for event in sorted(self._event_buf):
self.store.record_event(event)
self._snapshot.process_event(event)
self._event_buf = [] | Aduro | /Aduro-0.0.1a0.tar.gz/Aduro-0.0.1a0/aduro/manager.py | manager.py |
import re
import dateutil.parser
POSITION_MEASURE = 'LOCATION'
class EventParseError(Exception):
"""Indicate an error in parsing an event from a string
"""
pass
class Event(object):
"""A base event.
"""
pass
class KindleEvent(Event):
"""A base kindle event.
Establishes sortability of Events based on the `weight` property
"""
_WEIGHT = None
asin = None
@property
def weight(self):
"""Define the sorting order of events
"""
return self._WEIGHT
@staticmethod
def from_str(string):
"""Generate a `KindleEvent`-type object from a string
"""
raise NotImplementedError
def __eq__(self, other):
return self.weight == other.weight and self.asin == other.asin
    def __lt__(self, other):
        # Compare by weight first, then asin, so sorted() orders events by
        # type (ADD < START READING < READ < FINISH) as the manager expects
        return (self.weight, self.asin) < (other.weight, other.asin)

    def __gt__(self, other):
        return (self.weight, self.asin) > (other.weight, other.asin)
def __ne__(self, other):
return not self == other
class AddEvent(KindleEvent):
"""Represent the addition of a book to the Kindle Library
"""
_WEIGHT = 0
def __init__(self, asin):
super(AddEvent, self).__init__()
self.asin = asin
def __str__(self):
return 'ADD %s' % (self.asin,)
@staticmethod
def from_str(string):
"""Generate a `AddEvent` object from a string
"""
match = re.match(r'^ADD (\w+)$', string)
if match:
return AddEvent(match.group(1))
else:
raise EventParseError
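# Round-trip sketch (editor's note):
#   str(AddEvent('B00EXAMPLE'))               -> 'ADD B00EXAMPLE'
#   AddEvent.from_str('ADD B00EXAMPLE').asin  -> 'B00EXAMPLE'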
class SetReadingEvent(KindleEvent):
"""Represents the user's desire to record progress of a book
"""
_WEIGHT = 1
def __init__(self, asin, initial_progress):
super(SetReadingEvent, self).__init__()
self.asin = asin
self.initial_progress = initial_progress
def __str__(self):
return 'START READING %s FROM %s %d' % (self.asin, POSITION_MEASURE,
self.initial_progress)
@staticmethod
def from_str(string):
"""Generate a `SetReadingEvent` object from a string
"""
match = re.match(r'^START READING (\w+) FROM \w+ (\d+)$', string)
if match:
return SetReadingEvent(match.group(1), int(match.group(2)))
else:
raise EventParseError
class ReadEvent(KindleEvent):
"""Represents the advance of a user's progress in a book
"""
_WEIGHT = 2
def __init__(self, asin, progress):
super(ReadEvent, self).__init__()
self.asin = asin
self.progress = progress
if progress <= 0:
raise ValueError('Progress field must be positive')
def __str__(self):
return 'READ %s FOR %d %sS' % (self.asin, self.progress,
POSITION_MEASURE)
@staticmethod
def from_str(string):
"""Generate a `ReadEvent` object from a string
"""
match = re.match(r'^READ (\w+) FOR (\d+) \w+S$', string)
if match:
return ReadEvent(match.group(1), int(match.group(2)))
else:
raise EventParseError
class SetFinishedEvent(KindleEvent):
"""Represents a user's completion of a book
"""
_WEIGHT = 3
def __init__(self, asin):
super(SetFinishedEvent, self).__init__()
self.asin = asin
def __str__(self):
return 'FINISH READING %s' % (self.asin)
@staticmethod
def from_str(string):
"""Generate a `SetFinishedEvent` object from a string
"""
match = re.match(r'^FINISH READING (\w+)$', string)
if match:
return SetFinishedEvent(match.group(1))
else:
raise EventParseError
class UpdateEvent(Event):
"""Represents a user's update of the Kindle database
"""
def __init__(self, a_datetime):
super(UpdateEvent, self).__init__()
self.datetime_ = a_datetime
def __str__(self):
return 'UPDATE %s' % self.datetime_.isoformat()
@staticmethod
def from_str(string):
"""Generate a `SetFinishedEvent` object from a string
"""
match = re.match(r'^UPDATE (.+)$', string)
if match:
parsed_date = dateutil.parser.parse(match.group(1), ignoretz=True)
return UpdateEvent(parsed_date)
else:
raise EventParseError | Aduro | /Aduro-0.0.1a0.tar.gz/Aduro-0.0.1a0/aduro/events.py | events.py |
Adv2
====
This package provides a 'reader' for .adv (AstroDigitalVideo) Version 2 files.
It is the result of a collaborative effort involving Bob Anderson and Hristo Pavlov.
The specification for Astro Digital Video files can be
found at: <http://www.astrodigitalvideoformat.org/spec.html>
To install this package on your system:
pip install Adv2
Then, sample usage from within your Python code is:
from pathlib import Path
    from Adv2.Adv2File import Adv2reader, AdvLibException  # AdvLibException import path assumed; adjust to its actual location
try:
# Create a platform agnostic path to your .adv file (use forward slashes)
file_path = str(Path('path/to/your/file.adv')) # Python will make Windows version as needed
# Create a 'reader' for the given file
rdr = Adv2reader(file_path)
except AdvLibException as adverr:
print(repr(adverr))
exit()
except IOError as ioerr:
print(repr(ioerr))
exit()
Now that the file has been opened and a 'reader' (rdr) created for it,
there are instance variables available that will be useful.
Here is how to print some of those out (these give the image size and number of images in the file):
print(f'Width: {rdr.Width} Height: {rdr.Height} NumMainFrames: {rdr.CountMainFrames}')
There is also a composite instance variable called `FileInfo` which gives access to all
of the values defined in the structure `AdvFileInfo` (there are 20 of them).
For example:
print(rdr.FileInfo.UtcTimestampAccuracyInNanoseconds)
To get (and show) the file metadata (returned as a Dict[str, str]):
print(f'\nADV_FILE_META_DATA:')
meta_data = rdr.getAdvFileMetaData()
for key in meta_data:
print(f' {key}: {meta_data[key]}')
The main thing that one will want to do is read image data, timestamps, and frame status information
from image frames.
Continuing with the example and assuming that the adv file contains a MAIN stream (it
might also contain a CALIBRATION stream):
for frame in range(rdr.CountMainFrames):
# status is a Dict[str, str]
err, image, frameInfo, status = rdr.getMainImageAndStatusData(frameNumber=frame)
# To get frames from a CALIBRATION stream, use rdr.getCalibImageAndStatusData()
if not err:
# If timestamp info was not present in the file (highly unlikely),
# the timestamp string returned will be empty (== '')
if frameInfo.StartOfExposureTimestampString:
print(frameInfo.DateString, frameInfo.StartOfExposureTimestampString)
print(f'\nframe: {frame} STATUS:')
for entry in status:
print(f' {entry}: {status[entry]}')
else:
print(err)
`err` is a string that will be empty if image bytes and metadata were successfully extracted.
In that case, `image` will contain a numpy array of uint16 values. If `err` is not empty, it will contain
a human-readable description of the error encountered.
The 'shape' of the image will be `(Height, Width)` for grayscale images. Color video
files are not yet supported.
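
As a rough sketch (continuing the names above; the frame number and indices are arbitrary), individual pixels can then be read straight from the returned numpy array:

    err, image, frameInfo, status = rdr.getMainImageAndStatusData(frameNumber=0)
    if not err:
        print(image.shape)   # (Height, Width) for grayscale
        print(image[0, 0])   # top-left pixel value, a uint16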
Finally, the file should be closed as in the example below:
print(f'closeFile returned: {rdr.closeFile()}')
rdr = None
The value returned will be the version number (2) of the file closed or 0, which indicates an attempt to close a file that was
already closed.
| Adv2 | /Adv2-1.2.0.tar.gz/Adv2-1.2.0/README.md | README.md |
from state_machine import StateReader
# Some imports
import itertools
import warnings
# Colour class to make output more pretty.
#
class Colour:
PURPLE = '\033[95m'
CYAN = '\033[96m'
DARKCYAN = '\033[36m'
BLUE = '\033[94m'
GREEN = '\033[92m'
YELLOW = '\033[93m'
RED = '\033[91m'
BOLD = '\033[1m'
UNDERLINE = '\033[4m'
END = '\033[0m'
# Some variables to be used.
__VALUE__ = '__VALUE__'
__PREFIX__ = '__PREFIX__'
__FIELD__ = '__FIELD__'
# Indicates that a master command can have a 1 to 1 binding
# to another parameter afterwards.
__BINDING__ = '__binding__'
# Help handle binding
__HELPER__ = '__helper__'
# Version handle binding
__VERSION__ = '__version__'
# Helper to indicate that a boolean is being stored
__ACTIVE__ = '__active__'
# Binds an outside function to the failsafe handler.
# Without a bound function a default error message will be
# printed instead.
# => failsafe_function
# Values to be used in the options tree
__ALIASES__ = '__alias__'
__FUNCT__ = '__funct__'
__DATAFIELD__ = '__defdat__'
__TYPE__ = '__type__'
# Notes are used to describe commands
__NOTE__ = '__note__'
# Labels are used to sub-divide master/ sub commands
__LABEL__ = '__label__'
# Main class for the python advanced options parser.
#
class AdvOptParse:
def __init__(self, masters = None):
self.__set_masters(masters)
self.failsafe_function = None
self.container_name = None
self.fields_name = "Fields"
self.slave_fields = None
self.hidden_subs = False
self.debug = False
if masters == {}:
self.has_commands = False
# Set the name of the container application that's used in the help screen
#
def set_container_name(self, name):
self.container_name = name
# Set the name of the fields for the help screen
#
def set_fields_name(self, name):
self.fields_name = name
# Hash of master level commands. CAN contain a global function to determine actions of
# subcommands.
# (See docs).
#
def __set_masters(self, masters):
		if masters == None:
			warnings.warn("Warning! You shouldn't init a parser without your master commands set!")
			masters = {}	# fall back to an empty command set so the loop below doesn't crash
# self.masters = master
self.opt_hash = {}
for key, value in masters.iteritems():
self.opt_hash[key] = {}
self.opt_hash[key][__FUNCT__] = value[0]
self.opt_hash[key][__NOTE__] = value[1]
self.set_master_aliases(key, [])
self.set_master_fields(key, False)
self.has_commands = True
# Setup the version and helper handle. By default '-h' and '--version' are set to 'True'
self.opt_hash[__HELPER__] = { __ALIASES__: ['-h'], __ACTIVE__: True}
self.opt_hash[__VERSION__] = { __ALIASES__: ['--version'], __ACTIVE__: True}
# Takes the master level command and a hash of data
# The hash of data needs to be formatted in the following sense:
# {'X': funct} where X is any variable, option or command INCLUDING DASHES AND DOUBLE DASHES you want
# to add to your parser.
# Additionally you pass a function from your parent class that gets called when this option is detected in a
# string that is being parsed. The function by detault takes three parameters:
#
# master command (i.e. copy), parent option (i.e. '-v'), data field default (i.e. 'false'). So in an example for
#
# "clone -L 2"
#
# it would call the function: func('clone', '-L', '2') in the specified container class/ env.
#
# 'use' parameters include: 'value' : -v
# (WORK IN PROGRESS) 'prefix': --logging true
# 'field' : --file=/some/data
#
def add_suboptions(self, master, data):
if master not in self.opt_hash: self.opt_hash[master] = {}
for key, value in data.iteritems():
if value[1] == __PREFIX__: warnings.warn("Not implemented yet") ; return
if key not in self.opt_hash[master]: self.opt_hash[master][key] = {}
self.opt_hash[master][key][__ALIASES__] = [key]
self.opt_hash[master][key][__TYPE__] = value[1]
self.opt_hash[master][key][__DATAFIELD__] = value[0]
self.opt_hash[master][key][__NOTE__] = value[2]
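	# Example (editor's sketch, hypothetical container code):
	#   parser.add_suboptions('clone', {'-L': ('2', __VALUE__, "recursion level")})
	#   # '-L' can now follow 'clone', e.g. "clone -L" or "clone -L=4"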
# Create aliases for a master command that invoke the same
# functions as the actual master command.
#
# This can be used to shorten commands that user need to
# input (such as 'rails server' vs 'rails s' does it)
#
def set_master_aliases(self, master, aliases):
if master not in self.opt_hash: warnings.warn("Could not identify master command. Aborting!") ; return
if master not in aliases: aliases.append(master)
self.opt_hash[master][__ALIASES__] = aliases
# Allow a master command to bind to a sub field.
def set_master_fields(self, master, fields):
self.opt_hash[master][__BINDING__] = fields
# Define a failsafe function to handle failed parsing attempts
# If no function was registered default logging to STOUT will be used
#
def register_failsafe(self, funct):
self.failsafe_function = funct
# Defines the helper handles that are used to print the help screen.
#
def define_help_handle(self, helpers):
self.opt_hash[__HELPER__][__ALIASES__] = helpers
# Enable the helper handle (and list it in the helper screen)
#
def set_help_handle(self, boolean):
self.opt_hash[__HELPER__][__ACTIVE__] = boolean
# Defines the version handles that are used to print the version number of .
#
def define_version_handle(self, versions):
self.opt_hash[__VERSION__][__ALIASES__] = versions
	# Enables the subs to be hidden from the help screen, thus only printing master-level commands
#
def set_hidden_subs(self, boolean):
self.hidden_subs = boolean
# Enable the version handle (and list it in the helper screen)
#
def set_version_handle(self, boolean):
self.opt_hash[__VERSION__][__ACTIVE__] = boolean
# Set a version string to be printed in the help screen and/or version handle
def set_container_version(self, version):
self.container_version = version
# Define fields for a command that gets handled above sub commands.
# Slave fields should be a hash with a string key and tuple value attached
# to it. A one tuple can also be replaced with the actual information.
# So:
# 'field' => ('information', "Description of the field")
# and
# 'field' => 'information'
# are both valid field types.
#
def define_fields(self, fields):
if self.slave_fields == None: self.slave_fields = {}
for key, value in fields.iteritems():
if key not in self.slave_fields: self.slave_fields[key] = {}
self.slave_fields[key] = value
# Key is the name of a field.
# Value is a tuple of information to be passed down to a callback function when the field is
# triggered.
# A one tuple can also be replaced with the actual information
#
# So:
# 'key' => ('information', "Description of the field")
# and
# 'key' => 'information'
# are both valid field types.
#
def add_field(self, key, value):
if self.slave_fields == None: self.slave_fields = {}
self.slave_fields[key] = value
# Create aliases for a sub command that invoke the same
# functions as the actual sub command.
#
# This can be used to shorten commands that user need to
# input (such as 'poke copy --file' vs 'poke copy -f')
#
# Can be combined with master alises to make short and nicely
# cryptic commands:
# poke server cp -f=~/file -t=directory/
#
# == USAGE ==
# Specify the master level command as the first parameter.
# Then use a hash with the original subs as the indices and
# the aliases in a list as values. This allows for ALL aliases for
# a master level command to be set at the same time without having
# to call this function multiple times.
#
def sub_aliases(self, master, aliases):
if master not in self.opt_hash:
			if self.debug: print "[DEBUG]:", "Could not identify master command. Aborting!"
			return
for key, value in aliases.iteritems():
if key not in self.opt_hash[master]: warnings.warn("Could not identify sub command. Skipping") ; continue
self.opt_hash[master][key][__ALIASES__] = value + list(set(self.opt_hash[master][key][__ALIASES__]) - set(value))
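	# Example (editor's sketch):
	#   parser.sub_aliases('copy', {'--file': ['-f'], '--target': ['-t']})
	#   # 'poke copy -f=~/file -t=directory/' now parses like the long form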
# Enables debug mode on the parser.
# Will for example output the parsed and translated/ chopped strings to the console.
#
def enable_debug(self):
self.debug = True
# Print tree of options hashes and bound slave fields to master commands
#
def print_tree(self):
if self.debug: print "[DEBUG]:", self.opt_hash
# Parse a string either from a method parameter or from a commandline
# argument. Calls master command functions with apropriate data attached
# to it.
#
def parse(self, c = None):
if self.slave_fields == None: self.define_fields({})
for alias in self.opt_hash[__HELPER__][__ALIASES__]:
if c == alias:
self.help_screen()
return
for alias in self.opt_hash[__VERSION__][__ALIASES__]:
if c == alias:
print self.container_version
return
content = StateReader().make(c)
# content = (sys.args if (c == None) else c.split())
counter = 0
master_indices = []
focus = None
for item in content:
for master in self.opt_hash:
if "__" not in master:
if item in self.opt_hash[master][__ALIASES__]:
master_indices.append(counter)
counter += 1
counter = 0
skipper = False
wait_for_slave = False
master_indices.append(len(content))
# print master_indices
# This loop iterates over the master level commands
# of the to-be-parsed string
for index in master_indices:
if (counter + 1) < len(master_indices):
# print (counter + 1), len(master_indices)
data_transmit = {}
subs = []
sub_counter = 0
slave_field = None
has_slave = False
# This loop iterates over the sub-commands of several master commands.
#
for cmd in itertools.islice(content, index, master_indices[counter + 1] + 1):
# print sub_counter
if sub_counter == 0:
focus = self.__alias_to_master(cmd)
# print focus, cmd
if focus in self.opt_hash:
if self.opt_hash[focus][__BINDING__]:
wait_for_slave = True
sub_counter += 1
continue
else:
rgged = cmd.replace('=', '=****').split('****')
for sub_command in rgged:
# print "Sub command:", sub_command
if skipper:
skipper = False
continue
if "=" in sub_command:
sub_command = sub_command.replace('=', '')
trans_sub_cmd = self.__alias_to_sub(focus, sub_command)
if trans_sub_cmd in self.opt_hash[focus]:
data_transmit[trans_sub_cmd] = rgged[1]
skipper = True
if trans_sub_cmd not in subs: subs.append(trans_sub_cmd)
else:
if wait_for_slave:
has_slave = True
wait_for_slave = False
if sub_command in self.slave_fields:
slave_field = (sub_command, self.slave_fields[sub_command])
else:
if self.failsafe_function == None:
print "An Error occured while parsing arguments."
else:
self.failsafe_function(cmd, 'Unknown field!')
return
continue
trans_sub_cmd = self.__alias_to_sub(focus, sub_command)
if trans_sub_cmd == None:
if sub_command in self.opt_hash:
if self.opt_hash[sub_command][__BINDING__]:
# if self.debug: print "Waiting for slave field..."
wait_for_slave = True
continue
if trans_sub_cmd in self.opt_hash[focus]:
data_transmit[trans_sub_cmd] = True
if trans_sub_cmd not in subs: subs.append(trans_sub_cmd)
sub_counter += 1
self.opt_hash[focus][__FUNCT__](focus, slave_field, subs, data_transmit)
return
counter += 1
if self.failsafe_function == None:
print "Error! No arguments recognised."
else:
self.failsafe_function(content, 'Invalid Options')
		# Raise a Warning if nothing was handled, so the container application can react
		raise Warning
# Generates a help screen for the container appliction.
#
def help_screen(self):
if self.slave_fields == None: self.define_fields({})
_s_ = " "
_ds_ = " "
_dds_ = " "
# print "%-5s" % "Usage: Poke [Options]"
# if self.debug: print "[DEBUG]: Your terminal's width is: %d" % width
		if not self.container_name:
			if self.debug: print "[DEBUG]: Container application name unknown!"
			self.container_name = "default"
if not self.opt_hash: print "Usage:", self.container_name
else: print "Usage:", self.container_name, "[options]"
if self.opt_hash[__VERSION__][__ACTIVE__] or self.opt_hash[__HELPER__][__ACTIVE__]:
print ""
print _s_ + "General:"
if self.opt_hash[__VERSION__][__ACTIVE__]:
print _ds_ + "%-20s %s" % (self.__clean_aliases(self.opt_hash[__VERSION__][__ALIASES__]), "Print the version of"), "'%s'" % self.container_name
if self.opt_hash[__HELPER__][__ACTIVE__]:
print _ds_ + "%-20s %s" % (self.__clean_aliases(self.opt_hash[__HELPER__][__ALIASES__]), "Print this help screen")
if self.opt_hash and self.has_commands: print "" ; print _s_ + "Commands:"
for key, value in self.opt_hash.iteritems():
if "__" not in key:
print _ds_ + "%-20s %s" % (self.__clean_aliases(value[__ALIASES__]), value[__NOTE__])
if not self.hidden_subs:
for k, v in self.opt_hash[key].iteritems():
if "__" not in k:
print _dds_ + "%-22s %s" % (self.__clean_aliases(v[__ALIASES__]), v[__NOTE__])
print ""
if self.slave_fields: print _s_ + self.fields_name + ":"
for key, value in self.slave_fields.iteritems():
description = str(value)[1:-1].replace("\'", "")
print _ds_ + "%-20s %s" % (key, description)
def __clean_aliases(self, aliases):
string = ""
counter = 0
for alias in aliases:
counter += 1
string += alias
if counter < len(aliases): string += ", "
return string
def __alias_to_master(self, alias):
for key, value in self.opt_hash.iteritems():
if "__" not in key:
for map_alias in value[__ALIASES__]:
if alias == map_alias:
return key
return None
def __alias_to_sub(self, master, alias):
for key, value in self.opt_hash[master].iteritems():
if "__" not in key:
if alias in value[__ALIASES__]:
return key
return None | AdvOptParse | /AdvOptParse-0.2.13.tar.gz/AdvOptParse-0.2.13/advoptparse/parser.py | parser.py |
# -----------------------------------------------------------
# AdvaS Advanced Search 0.2.5
# advanced search algorithms implemented as a python module
# advas core module
#
# (C) 2002 - 2012 Frank Hofmann, Berlin, Germany
# Released under GNU Public License (GPL)
# email [email protected]
# -----------------------------------------------------------
# other modules required by advas
import string
import re
import math
class Advas:
def __init__(self):
"init an Advas object"
self.initFilename()
self.initLine()
self.initWords()
self.initList()
#self.init_ngrams()
return
def reInit (self):
"re-initializes an Advas object"
self.__init__()
return
# basic functions ==========================================
# file name ------------------------------------------------
def initFilename (self):
self.filename = ""
self.useFilename = False
def setFilename (self, filename):
self.filename = filename
def getFilename (self):
return self.filename
def setUseFilename (self):
self.useFilename = True
def getUseFilename (self):
return self.useFilename
def setUseWordlist (self):
self.useFilename = False
def getFileContents (self, filename):
try:
			fileId = open(filename, "r")
except:
print "[AdvaS] I/O Error - can't open given file:", filename
return -1
# get file contents
contents = fileId.readlines()
# close file
fileId.close()
return contents
# line -----------------------------------------------------
def initLine (self):
self.line = ""
def setLine (self, line):
self.line = line
def getLine (self):
return self.line
def splitLine (self):
"split a line of text into single words"
# define regexp tokens and split line
tokens = re.compile(r"[\w']+")
self.words = tokens.findall(self.line)
# words ----------------------------------------------------
def initWords (self):
self.words = {}
def setWords (self, words):
self.words = words
def getWords (self):
return self.words
def countWords(self):
"count words given in self.words, return pairs word:frequency"
list = {} # start with an empty list
for item in self.words:
# assume a new item
frequency = 0
# word already in list?
if list.has_key(item):
frequency = list[item]
frequency += 1
# save frequency , update list
list[item] = frequency
# save list of words
self.set_list (list)
# lists ----------------------------------------------------
def initList (self):
self.list = {}
def setList (self, list):
self.list = list
def getList (self):
return self.list
def mergeLists(self, *lists):
"merge lists of words"
newList = {} # start with an empty list
for currentList in lists:
key = currentList.keys()
for item in key:
# assume a new item
frequency = 0
# item already in newlist?
if newlist.has_key(item):
frequency = newList[item]
frequency += currentList[item]
newList[item] = frequency
# set list
self.setList (newList)
#def mergeListsIdf(self, *lists):
# "merge lists of words for calculating idf"
#
# newlist = {}
#
# for current_list in lists:
# key = current_list.keys()
# for item in key:
# # assume a new item
# frequency = 0
#
# # item already in newlist?
# if newlist.has_key(item):
# frequency = newlist[item]
# frequency += 1
# newlist[item] = frequency
# # set list
# self.set_list (newlist)
#
#def compact_list(self):
# "merges items appearing more than once"
#
# newlist = {}
# original = self.list
# key = original.keys()
#
# for j in key:
# item = string.lower(string.strip(j))
#
# # assume a new item
# frequency = 0
#
# # item already in newlist?
# if newlist.has_key(item):
# frequency = newlist[item]
# frequency += original[j]
# newlist[item] = frequency
#
# # set new list
# self.set_list (newlist)
#
#def remove_items(self, remove):
# "remove the items from the original list"
#
# newlist = self.list
#
# # get number of items to be removed
# key = remove.keys()
#
# for item in key:
# # item in original list?
# if newlist.has_key(item):
# del newlist[item]
#
# # set newlist
# self.set_list(newlist) | AdvaS-Advanced-Search | /advas-advanced-search-0.2.5.tar.gz/advas-0.2.5/advas-20120906.py | advas-20120906.py |
# -----------------------------------------------------------
# AdvaS Advanced Search 0.2.5
# advanced search algorithms implemented as a python module
# phonetics module
#
# (C) 2002 - 2014 Frank Hofmann, Berlin, Germany
# Released under GNU Public License (GPL)
# email [email protected]
# -----------------------------------------------------------
import string
import re
from ngram import Ngram
class Phonetics:
def __init__(self, term):
self.term = term
return
def setText(self, term):
self.term = term
return
def getText(self):
return self.term
# covering algorithms
def phoneticCode (self):
"returns the term's phonetic code using different methods"
# build an array to hold the phonetic code for each method
phoneticCodeList = {
"soundex": self.soundex(),
"metaphone": self.metaphone(),
"nysiis": self.nysiis(),
"caverphone": self.caverphone()
}
return phoneticCodeList
# phonetic algorithms
def soundex (self):
"Return the soundex value to a given string."
# Create and compare soundex codes of English words.
#
# Soundex is an algorithm that hashes English strings into
# alpha-numerical value that represents what the word sounds
# like. For more information on soundex and some notes on the
# differences in implemenations visit:
# http://www.bluepoof.com/Soundex/info.html
#
# This version modified by Nathan Heagy at Front Logic Inc., to be
# compatible with php's soundexing and much faster.
#
# eAndroid / Nathan Heagy / Jul 29 2000
# changes by Frank Hofmann / Jan 02 2005, Sep 9 2012
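		# Usage sketch (editor's note): Phonetics("Robert").soundex() returns "R163"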
# generate translation table only once. used to translate into soundex numbers
#table = string.maketrans('abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ', '0123012002245501262301020201230120022455012623010202')
table = string.maketrans('ABCDEFGHIJKLMNOPQRSTUVWXYZ', '01230120022455012623010202')
# check parameter
if not self.term:
return "0000" # could be Z000 for compatibility with other implementations
# convert into uppercase letters
term = string.upper(self.term)
firstChar = term[0]
# translate the string into soundex code according to the table above
term = string.translate(term[1:], table)
# remove all 0s
term = string.replace(term, "0", "")
# remove duplicate numbers in-a-row
str2 = firstChar
for x in term:
if x != str2[-1]:
str2 = str2 + x
# pad with zeros
str2 = str2+"0"*len(str2)
# return the first four letters
return str2[:4]
def metaphone (self):
"returns metaphone code for a given string"
# implementation of the original algorithm from Lawrence Philips
# extended/rewritten by M. Kuhn
# improvements with thanks to John Machin <[email protected]>
# define return value
code = ""
i = 0
termLength = len(self.term)
if (termLength == 0):
# empty string ?
return code
# extension #1 (added 2005-01-28)
# convert to lowercase
term = string.lower(self.term)
# extension #2 (added 2005-01-28)
# remove all non-english characters, first
term = re.sub(r'[^a-z]', '', term)
if len(term) == 0:
# nothing left
return code
# extension #3 (added 2005-01-24)
# conflate repeated letters
firstChar = term[0]
str2 = firstChar
for x in term:
if x != str2[-1]:
str2 = str2 + x
# extension #4 (added 2005-01-24)
# remove any vowels unless a vowel is the first letter
firstChar = str2[0]
str3 = firstChar
for x in str2[1:]:
if (re.search(r'[^aeiou]', x)):
str3 = str3 + x
term = str3
termLength = len(term)
if termLength == 0:
# nothing left
return code
# check for exceptions
if (termLength > 1):
# get first two characters
firstChars = term[0:2]
# build translation table
table = {
"ae":"e",
"gn":"n",
"kn":"n",
"pn":"n",
"wr":"n",
"wh":"w"
}
if firstChars in table.keys():
term = term[2:]
code = table[firstChars]
termLength = len(term)
elif (term[0] == "x"):
term = ""
code = "s"
termLength = 0
# define standard translation table
stTrans = {
"b":"b",
"c":"k",
"d":"t",
"g":"k",
"h":"h",
"k":"k",
"p":"p",
"q":"k",
"s":"s",
"t":"t",
"v":"f",
"w":"w",
"x":"ks",
"y":"y",
"z":"s"
}
i = 0
while (i<termLength):
# init character to add, init basic patterns
add_char = ""
part_n_2 = ""
part_n_3 = ""
part_n_4 = ""
part_c_2 = ""
part_c_3 = ""
# extract a number of patterns, if possible
if (i < (termLength - 1)):
part_n_2 = term[i:i+2]
if (i>0):
part_c_2 = term[i-1:i+1]
part_c_3 = term[i-1:i+2]
if (i < (termLength - 2)):
part_n_3 = term[i:i+3]
if (i < (termLength - 3)):
part_n_4 = term[i:i+4]
# use table with conditions for translations
if (term[i] == "b"):
addChar = stTrans["b"]
if (i == (termLength - 1)):
if (i>0):
if (term[i-1] == "m"):
addChar = ""
elif (term[i] == "c"):
addChar = stTrans["c"]
if (part_n_2 == "ch"):
addChar = "x"
elif (re.search(r'c[iey]', part_n_2)):
addChar = "s"
if (part_n_3 == "cia"):
addChar = "x"
if (re.search(r'sc[iey]', part_c_3)):
addChar = ""
elif (term[i] == "d"):
addChar = stTrans["d"]
if (re.search(r'dg[eyi]', part_n_3)):
addChar = "j"
elif (term[i] == "g"):
addChar = stTrans["g"]
if (part_n_2 == "gh"):
if (i == (termLength - 2)):
addChar = ""
elif (re.search(r'gh[aeiouy]', part_n_3)):
addChar = ""
elif (part_n_2 == "gn"):
addChar = ""
elif (part_n_4 == "gned"):
addChar = ""
elif (re.search(r'dg[eyi]',part_c_3)):
addChar = ""
elif (part_n_2 == "gi"):
if (part_c_3 != "ggi"):
addChar = "j"
elif (part_n_2 == "ge"):
if (part_c_3 != "gge"):
addChar = "j"
elif (part_n_2 == "gy"):
if (part_c_3 != "ggy"):
addChar = "j"
elif (part_n_2 == "gg"):
addChar = ""
elif (term[i] == "h"):
addChar = stTrans["h"]
if (re.search(r'[aeiouy]h[^aeiouy]', part_c_3)):
addChar = ""
elif (re.search(r'[csptg]h', part_c_2)):
addChar = ""
elif (term[i] == "k"):
addChar = stTrans["k"]
if (part_c_2 == "ck"):
addChar = ""
elif (term[i] == "p"):
addChar = stTrans["p"]
if (part_n_2 == "ph"):
addChar = "f"
elif (term[i] == "q"):
addChar = stTrans["q"]
elif (term[i] == "s"):
addChar = stTrans["s"]
if (part_n_2 == "sh"):
addChar = "x"
if (re.search(r'si[ao]', part_n_3)):
addChar = "x"
elif (term[i] == "t"):
addChar = stTrans["t"]
if (part_n_2 == "th"):
addChar = "0"
if (re.search(r'ti[ao]', part_n_3)):
addChar = "x"
elif (term[i] == "v"):
addChar = stTrans["v"]
elif (term[i] == "w"):
addChar = stTrans["w"]
if (re.search(r'w[^aeiouy]', part_n_2)):
addChar = ""
elif (term[i] == "x"):
addChar = stTrans["x"]
elif (term[i] == "y"):
addChar = stTrans["y"]
elif (term[i] == "z"):
addChar = stTrans["z"]
else:
# alternative
addChar = term[i]
code = code + addChar
i += 1
# end while
return code
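	# Usage sketch (hand-traced through the rule table of this implementation):
	#   Phonetics("phone").metaphone()  # -> "fn" ("ph" -> "f", vowels dropped)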
def nysiis (self):
"returns New York State Identification and Intelligence Algorithm (NYSIIS) code for the given term"
code = ""
i = 0
term = self.term
termLength = len(term)
if (termLength == 0):
# empty string ?
return code
# build translation table for the first characters
table = {
"mac":"mcc",
"ph":"ff",
"kn":"nn",
"pf":"ff",
"k":"c",
"sch":"sss"
}
		# check longer prefixes first so that e.g. "kn" wins over "k"
		for tableEntry in sorted(table.keys(), key=len, reverse=True):
tableValue = table[tableEntry] # get table value
tableValueLen = len(tableValue) # calculate its length
firstChars = term[0:tableValueLen]
if (firstChars == tableEntry):
term = tableValue + term[tableValueLen:]
break
# build translation table for the last characters
table = {
"ee":"y",
"ie":"y",
"dt":"d",
"rt":"d",
"rd":"d",
"nt":"d",
"nd":"d",
}
for tableEntry in table.keys():
tableValue = table[tableEntry] # get table value
tableEntryLen = len(tableEntry) # calculate its length
lastChars = term[(0 - tableEntryLen):]
#print lastChars, ", ", tableEntry, ", ", tableValue
if (lastChars == tableEntry):
				term = term[:(0 - tableEntryLen)] + tableValue
break
# initialize code
code = term
# transform ev->af
code = re.sub(r'ev', r'af', code)
# transform a,e,i,o,u->a
code = re.sub(r'[aeiouy]', r'a', code)
# transform q->g
code = re.sub(r'q', r'g', code)
# transform z->s
code = re.sub(r'z', r's', code)
# transform m->n
code = re.sub(r'm', r'n', code)
# transform kn->n
code = re.sub(r'kn', r'n', code)
# transform k->c
code = re.sub(r'k', r'c', code)
# transform sch->sss
code = re.sub(r'sch', r'sss', code)
# transform ph->ff
code = re.sub(r'ph', r'ff', code)
# transform h-> if previous or next is nonvowel -> previous
occur = re.findall(r'([a-z]{0,1}?)h([a-z]{0,1}?)', code)
#print occur
for occurGroup in occur:
occurItemPrevious = occurGroup[0]
occurItemNext = occurGroup[1]
if ((re.match(r'[^aeiouy]', occurItemPrevious)) or (re.match(r'[^aeiouy]', occurItemNext))):
if (occurItemPrevious != ""):
# make substitution
code = re.sub (occurItemPrevious + "h", occurItemPrevious * 2, code, 1)
# transform w-> if previous is vowel -> previous
occur = re.findall(r'([aeiouy]{1}?)w', code)
#print occur
for occurGroup in occur:
occurItemPrevious = occurGroup[0]
# make substitution
code = re.sub (occurItemPrevious + "w", occurItemPrevious * 2, code, 1)
# check last character
# -s, remove
code = re.sub (r's$', r'', code)
# -ay, replace by -y
code = re.sub (r'ay$', r'y', code)
# -a, remove
code = re.sub (r'a$', r'', code)
return code
def caverphone (self):
"returns the language key using the caverphone algorithm 2.0"
# Developed at the University of Otago, New Zealand.
# Project: Caversham Project (http://caversham.otago.ac.nz)
# Developer: David Hood, University of Otago, New Zealand
# Contact: [email protected]
# Project Technical Paper: http://caversham.otago.ac.nz/files/working/ctp150804.pdf
# Version 2.0 (2004-08-15)
code = ""
i = 0
term = self.term
termLength = len(term)
if (termLength == 0):
# empty string ?
return code
# convert to lowercase
code = string.lower(term)
# remove anything not in the standard alphabet (a-z)
code = re.sub(r'[^a-z]', '', code)
# remove final e
if code.endswith("e"):
code = code[:-1]
		# if the name starts with cough, rough, tough, enough or trough -> cou2f (rou2f, tou2f, enou2f, trou2f)
code = re.sub(r'^([crt]|(en)|(tr))ough', r'\1ou2f', code)
# if the name starts with gn -> 2n
code = re.sub(r'^gn', r'2n', code)
# if the name ends with mb -> m2
code = re.sub(r'mb$', r'm2', code)
# replace cq -> 2q
code = re.sub(r'cq', r'2q', code)
# replace c[i,e,y] -> s[i,e,y]
code = re.sub(r'c([iey])', r's\1', code)
# replace tch -> 2ch
code = re.sub(r'tch', r'2ch', code)
# replace c,q,x -> k
code = re.sub(r'[cqx]', r'k', code)
# replace v -> f
code = re.sub(r'v', r'f', code)
# replace dg -> 2g
code = re.sub(r'dg', r'2g', code)
# replace ti[o,a] -> si[o,a]
code = re.sub(r'ti([oa])', r'si\1', code)
# replace d -> t
code = re.sub(r'd', r't', code)
# replace ph -> fh
code = re.sub(r'ph', r'fh', code)
# replace b -> p
code = re.sub(r'b', r'p', code)
# replace sh -> s2
code = re.sub(r'sh', r's2', code)
# replace z -> s
code = re.sub(r'z', r's', code)
# replace initial vowel [aeiou] -> A
code = re.sub(r'^[aeiou]', r'A', code)
# replace all other vowels [aeiou] -> 3
code = re.sub(r'[aeiou]', r'3', code)
# replace j -> y
code = re.sub(r'j', r'y', code)
# replace an initial y3 -> Y3
code = re.sub(r'^y3', r'Y3', code)
# replace an initial y -> A
code = re.sub(r'^y', r'A', code)
# replace y -> 3
code = re.sub(r'y', r'3', code)
# replace 3gh3 -> 3kh3
code = re.sub(r'3gh3', r'3kh3', code)
# replace gh -> 22
code = re.sub(r'gh', r'22', code)
# replace g -> k
code = re.sub(r'g', r'k', code)
# replace groups of s,t,p,k,f,m,n by its single, upper-case equivalent
for singleLetter in ["s", "t", "p", "k", "f", "m", "n"]:
otherParts = re.split(singleLetter + "+", code)
code = string.join(otherParts, string.upper(singleLetter))
# replace w[3,h3] by W[3,h3]
code = re.sub(r'w(h?3)', r'W\1', code)
# replace final w with 3
code = re.sub(r'w$', r'3', code)
# replace w -> 2
code = re.sub(r'w', r'2', code)
# replace h at the beginning with an A
code = re.sub(r'^h', r'A', code)
# replace all other occurrences of h with a 2
code = re.sub(r'h', r'2', code)
# replace r3 with R3
code = re.sub(r'r3', r'R3', code)
# replace final r -> 3
code = re.sub(r'r$', r'3', code)
# replace r with 2
code = re.sub(r'r', r'2', code)
# replace l3 with L3
code = re.sub(r'l3', r'L3', code)
# replace final l -> 3
code = re.sub(r'l$', r'3', code)
# replace l with 2
code = re.sub(r'l', r'2', code)
# remove all 2's
code = re.sub(r'2', r'', code)
# replace the final 3 -> A
code = re.sub(r'3$', r'A', code)
# remove all 3's
code = re.sub(r'3', r'', code)
# extend the code by 10 '1' (one)
code += '1' * 10
# return the first 10 characters
return code[:10]
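	# Usage sketch (hand-traced through the replacement rules above; matches the
	# commonly cited Caverphone 2.0 result for this name):
	#   Phonetics("Thompson").caverphone()  # -> "TMPSN11111"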
def calcSuccVariety(self):
# derive two-letter combinations
ngramObject = Ngram(self.term, 2)
ngramObject.deriveNgrams()
ngramSet = set(ngramObject.getNgrams())
# count appearances of the second letter
varietyList = {}
for entry in ngramSet:
letter1 = entry[0]
letter2 = entry[1]
if varietyList.has_key(letter1):
items = varietyList[letter1]
if not letter2 in items:
# extend the existing one
items.append(letter2)
varietyList[letter1] = items
else:
# create a new one
varietyList[letter1] = [letter2]
return varietyList
def calcSuccVarietyCount(self, varietyList):
# save the number of matches, only
for entry in varietyList:
items = len(varietyList[entry])
varietyList[entry] = items
return varietyList
def calcSuccVarietyList(self, wordList):
result = {}
for item in wordList:
self.setText(item)
varietyList= self.calcSuccVariety()
result[item] = varietyList
return result
def calcSuccVarietyMerge(self, varietyList):
result = {}
for item in varietyList.values():
for letter in item.keys():
if not letter in result.keys():
result[letter] = item[letter]
else:
result[letter] = list(set(result[letter])|set(item[letter]))
return result | AdvaS-Advanced-Search | /advas-advanced-search-0.2.5.tar.gz/advas-0.2.5/phonetics.py | phonetics.py |
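# Minimal usage sketch (assumption: the Ngram helper yields character bigrams):
if __name__ == "__main__":
	word = Phonetics("apple")
	variety = word.calcSuccVariety()
	print variety
	# expected shape: {'a': ['p'], 'p': ['p', 'l'], 'l': ['e']}
	print word.calcSuccVarietyCount(variety)
	# expected shape: {'a': 1, 'p': 2, 'l': 1}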
# -----------------------------------------------------------
# AdvaS Advanced Search 0.2.5
# advanced search algorithms implemented as a python module
# module containing stemming algorithms
#
# (C) 2002 - 2014 Frank Hofmann, Berlin, Germany
# Released under GNU Public License (GPL)
# email [email protected]
# -----------------------------------------------------------
from advasio import AdvasIo
import string
from phonetics import Phonetics
from ngram import Ngram
from advas import Advas
class Stemmer:
def __init__(self, encoding):
self.stemFile = ""
self.encoding = encoding
self.stemTable = {}
return
def loadStemFile(self, stemFile):
if stemFile:
self.stemFile = stemFile
fileId = AdvasIo(self.stemFile, self.encoding)
success = fileId.readFileContents()
if not success:
self.stemFile = ""
return False
else:
contents = fileId.getFileContents()
for line in contents:
left, right = line.split(":")
self.stemTable[string.strip(left)] = string.strip(right)
return True
else:
self.stemFile = ""
return False
def clearStemFile(self):
self.stemTable = {}
self.stemFile = ""
return
def tableLookup(self, term):
if term in self.stemTable:
return self.stemTable[term]
return
def successorVariety (self, term, wordList):
"calculates the terms'stem according to the successor variety algorithm"
# get basic list for the variety
varObject = Phonetics("")
sv = varObject.calcSuccVarietyList(wordList)
svm = varObject.calcSuccVarietyMerge(sv)
svmList = varObject.calcSuccVarietyCount(svm)
# examine given term
		# use the peak-and-plateau method to find word boundaries
termLength = len(term)
termRange = range(1, termLength-1)
# start here
start=0
# list of stems
stemList = []
for i in termRange:
# get slice
wordSlice = term[start:i+1]
# print word_slice
# check for a peak
A = term[i-1]
B = term[i]
C = term[i+1]
a = 0
if svmList.has_key(A):
a = svmList[A]
b = 0
if svmList.has_key(B):
b = svmList[B]
c = 0
if svmList.has_key(C):
c = svmList[C]
if (b>a) and (b>c):
# save slice as a stem
stemList.append(wordSlice)
# adjust start
start=i+1
# end if
# end for
if (i<termLength):
# still something left in buffer?
wordSlice = term[start:]
stemList.append(wordSlice)
# end if
# return result
return stemList
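	# Worked sketch of the peak-and-plateau idea (illustrative): if, across the
	# supplied wordList, the successor-variety count of the letters of "readable"
	# peaks at "d", the term is cut into "read" + "able"; the actual boundaries
	# depend entirely on the corpus passed in as wordList.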
def ngramStemmer (self, wordList, size, equality):
"reduces wordList according to the n-gram stemming method"
		# collect the terms to be removed later in stopList
		stopList = []
ngramAdvas = Advas("","")
# calculate length and range
listLength = len(wordList)
outerListRange = range(0, listLength)
for i in outerListRange:
term1 = wordList[i]
innerListRange = range (0, i)
# define basic n-gram object
			term1Ngram = Ngram(term1, size)
term1Ngram.deriveNgrams()
term1NgramList = term1Ngram.getNgrams()
for j in innerListRange:
term2 = wordList[j]
				term2Ngram = Ngram(term2, size)
term2Ngram.deriveNgrams()
term2NgramList = term2Ngram.getNgrams()
# calculate n-gram value
ngramSimilarity = ngramAdvas.compareNgramLists (term1NgramList, term2NgramList)
# compare
degree = ngramSimilarity - equality
if (degree>0):
# ... these terms are so similar that they can be conflated
# remove the longer term, keep the shorter one
if (len(term2)>len(term1)):
stopList.append(term2)
else:
stopList.append(term1)
# end if
# end if
# end for
# end for
# conflate the matrix
# remove all the items which appear in stopList
return list(set(wordList) - set(stopList)) | AdvaS-Advanced-Search | /advas-advanced-search-0.2.5.tar.gz/advas-0.2.5/stemmer.py | stemmer.py |
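# Minimal usage sketch (illustrative; the 0.6 equality threshold is an assumption):
if __name__ == "__main__":
	stemmer = Stemmer("utf-8")
	words = ["statistic", "statistics", "statistical"]
	# terms whose digram Dice coefficient exceeds 0.6 are conflated;
	# the shortest variant of each similar group is kept
	print stemmer.ngramStemmer(words, 2, 0.6)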
# -----------------------------------------------------------
# AdvaS Advanced Search 0.2.5
# advanced search algorithms implemented as a python module
# search engine module
#
# (C) 2002 - 2014 Frank Hofmann, Berlin, Germany
# Released under GNU Public License (GPL)
# email [email protected]
# -----------------------------------------------------------
import math
class AdvancedSearch:
def __init__(self):
"initializes a new AdvancedSearch object"
self.entryList = []
self.stopList = []
self.setSortOrderDescending()
self.setSearchStrategy(50, 50)
return
# sort order
def setSortOrderAscending(self):
"changes sort order to ascending"
self.sortOrderDescending = False
return
def setSortOrderDescending(self):
"changes sort order to descending"
self.sortOrderDescending = True
return
def reverseSortOrder(self):
"reverses the current sort order"
if self.getSortOrder() == True:
self.setSortOrderAscending()
else:
self.setSortOrderDescending()
return
def getSortOrder(self):
"get current sort order with True if descending"
return self.sortOrderDescending
# search entry
def addEntry(self, entry):
"registers the given entry, and returns its document id"
entryId = self.getEmptyId()
entry.setEntryId(entryId)
self.entryList.append(entry)
return entryId
def isInEntryList(self, entryId):
"returns True if document with entryId was registered"
value = False
for entry in self.entryList:
if entry.getEntryId() == entryId:
value = True
break
return value
def removeEntry(self, entryId):
"remove document with entryId from list of entries"
newEntryList = []
for entry in self.entryList:
if entry.getEntryId() != entryId:
newEntryList.append(entry)
self.entryList = newEntryList
return
def clearEntryList(self):
"unregister all documents -- clear the entry list"
self.entryList = []
return
def countEntryList(self):
"counts the registered documents, and returns its number"
return len(self.entryList)
def getEntryList(self):
"return full list of registered documents"
entryList = []
for entry in self.entryList:
entryList.append(entry.getEntry())
return entryList
def getEmptyId(self):
"returns a new, still unused document id"
entryId = 0
idList = []
for entry in self.entryList:
idList.append(entry.getEntryId())
if (len(idList)):
entryId = max(idList) + 1
return entryId
# sort entry list
def sortEntryList(self, entryList):
"sort entry list ascending, or descending"
if len(entryList) == 0:
return []
else:
return sorted(entryList, key=lambda entry: entry[0], reverse = self.getSortOrder())
# merge lists
def mergeLists(self, *lists):
"merge lists of words"
newlist = {} # start with an empty list
for currentList in lists:
keyList = currentList.keys()
for item in keyList:
# assume a new item
frequency = 0
# item already in newlist?
if newlist.has_key(item):
frequency = newlist[item]
frequency += currentList[item]
newlist[item] = frequency
return newlist
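	# Worked sketch (hand-checked): frequencies of shared terms are summed, e.g.
	#   self.mergeLists({"a": 2}, {"a": 1, "b": 3})  # -> {"a": 3, "b": 3}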
def mergeListsIdf(self, *lists):
"merge lists of words for calculating idf"
newlist = {} # start with an empty list
for currentList in lists:
keyList = currentList.keys()
for item in keyList:
# assume a new item
frequency = 0
# item already in newlist?
if newlist.has_key(item):
frequency = newlist[item]
frequency += 1
newlist[item] = frequency
return newlist
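	# Worked sketch (hand-checked): each list contributes at most 1 per term, so
	# the result counts the number of lists (documents) containing the term, e.g.
	#   self.mergeListsIdf({"a": 2}, {"a": 5, "b": 1})  # -> {"a": 2, "b": 1}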
# stop list
def setStopList(self, stopList):
"fill the stop list with the given values"
self.stopList = stopList
return
def getStopList(self):
"return the current stop list"
return self.stopList
def extendStopList(self, itemList):
"extends the current stop list with the given items"
for item in itemList:
if not item in self.stopList:
self.stopList.append(item)
return
def reduceStopList(self, itemList):
"reduces the current stop list by the given items"
for item in itemList:
if item in self.stopList:
self.stopList.remove(item)
return
# phonetic comparisons
def comparePhoneticCode(self, entry1, entry2):
"compares two entries of phonetic codes and returns the number of exact matches"
matches = 0
for item in entry1.keys():
if entry2.has_key(item):
if entry1[item] == entry2[item]:
matches += 1
return matches
def comparePhoneticCodeLists(self, query, document):
"compare phonetic codes of a query and a single document"
total = 0
for entry in query:
codes = query[entry]
#print entry
#print codes
for entry2 in document:
codes2 = document[entry2]
#print entry2
#print codes2
matches = self.comparePhoneticCode(codes, codes2)
total += matches
return total
def searchByPhoneticCode(self, query):
"find all the documents matching the query in terms of phonetic similarity"
matchList = {}
for entry in self.getEntryList():
entryId = entry.getEntryId()
matches = self.comparePhoneticCodeLists(query, entry)
matchList[entryId] = matches
return matchList
# term frequency for all registered search entries
def tf(self):
"term frequency for the list of registered documents"
occurency = {}
for entry in self.entryList:
tf = entry.data.tf()
occurency = self.mergeLists(occurency, tf)
return occurency
def tfStop(self):
"term frequency with stop list for the list of registered documents"
occurency = {}
for entry in self.entryList:
tfStop = entry.data.tfStop(self.stopList)
occurency = self.mergeLists(occurency, tfStop)
return occurency
# def tfRelation(self, pattern, document):
# keysPattern = pattern.keys()
# keysDocument = document.keys()
# identicalKeys = list(set(keysPattern.keys()) & set(keysDocument.keys())
#
# total = 0
# for item in identicalKeys:
# total = total + pattern[item] + document[item]
# return
def idf (self, wordList):
"calculates the inverse document frequency for a given list of terms"
key = wordList.keys()
documents = self.countEntryList()
for item in key:
frequency = wordList[item]
# calculate idf = ln(N/n):
# N=number of documents
# n=number of documents that contain term
			idf = math.log(float(documents)/float(frequency))
wordList[item] = idf
return wordList
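	# Worked example: with 4 registered documents and a term occurring in 2 of
	# them, idf = ln(4/2) = ln(2) ~ 0.693; a term present in every document gets
	# idf = ln(1) = 0 and thus carries no discriminating weight.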
# evaluate and compare descriptors
def compareDescriptors (self, request, document):
"returns the degree of equality between two descriptors (often a request and a document)"
compareBinary = self.compareDescriptorsBinary(request, document)
compareFuzzy = self.compareDescriptorsFuzzy(request, document)
compareKnn = self.compareDescriptorsKNN(request, document)
result = {
'binary': compareBinary,
'fuzzy': compareFuzzy,
'knn': compareKnn
}
return result
def compareDescriptorsBinary(self, request, document):
"binary comparison"
# request, document: document descriptors
		# return value: 1 for similarity, 0 otherwise
# define return value
equality = 0
# calc similar descriptors
itemsRequest = request.getDescriptorList()
itemsDocument = document.getDescriptorList()
if set(itemsRequest) & set(itemsDocument) == set(itemsRequest):
equality = 1
return equality
def compareDescriptorsFuzzy(self, request, document):
"fuzzy comparison"
# request, document: lists of descriptors
# return value: float, between 0 and 1
# define return value
equality = 0
# get number of items
itemsRequest = request.getDescriptorList()
itemsDocument = document.getDescriptorList()
# calc similar descriptors
similarDescriptors = len(set(itemsRequest) & set(itemsDocument))
# calc equality
equality = float(similarDescriptors) / float ((math.sqrt(len(itemsRequest)) * math.sqrt(len(itemsDocument))))
return equality
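	# Worked example: a request and a document sharing 2 of their 4 descriptors
	# each yield equality = 2 / (sqrt(4) * sqrt(4)) = 0.5 (a cosine-style measure).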
def compareDescriptorsKNN(self, request, document):
"k-Nearest Neighbour algorithm"
firstList = request
otherList = document
globalDistance = float(0)
for item in firstList.getDescriptorList():
firstValue = float(firstList.getDescriptorValue(item))
otherValue = float(otherList.getDescriptorValue(item))
i = float(firstValue - otherValue)
localDistance = float(i * i)
globalDistance = globalDistance + localDistance
# end for
for item in otherList.getDescriptorList():
otherValue = float(otherList.getDescriptorValue(item))
firstValue = 0
if item in firstList.getDescriptorList():
continue # don't count again
localDistance = float(otherValue * otherValue)
globalDistance = globalDistance + localDistance
# end for
kNN = math.sqrt(globalDistance)
return kNN
	def calculateRetrievalStatusValue(self, d, p, q):
		"calculates the document weight for document descriptors"
		# d: list of existence (1) or non-existence (0)
		# p, q: lists of probabilities of existence (p) and non-existence (q)
		itemsP = len(p)
		itemsQ = len(q)
		itemsD = len(d)
		if ((itemsP - itemsQ) != 0):
			# different length of lists p and q
			return 0
		if ((itemsD - itemsP) != 0):
			# different length of lists d and p
			return 0
rsv = 0
for i in range(itemsP):
eqUpper = float(p[i]) / float(1-p[i])
eqLower = float(q[i]) / float(1-q[i])
value = float(d[i] * math.log (eqUpper / eqLower))
rsv += value
return rsv
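	# Worked example (binary independence model): with d = [1], p = [0.8] and
	# q = [0.2], rsv = ln((0.8/0.2) / (0.2/0.8)) = ln(16) ~ 2.77.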
# search strategy
def setSearchStrategy(self, fullTextWeight, advancedWeight):
"adjust the current search strategy"
self.searchStrategy = {
"fulltextsearch": fullTextWeight,
"advancedsearch": advancedWeight
}
return
def getSearchStrategy(self):
"returns the current search strategy"
return self.searchStrategy
# search
def search(self, pattern):
"combines both full text, and advanced search"
result = []
searchStrategy = self.getSearchStrategy()
fullTextWeight = searchStrategy["fulltextsearch"]
advancedWeight = searchStrategy["advancedsearch"]
if fullTextWeight:
result = self.fullTextSearch(pattern)
if advancedWeight:
resultAdvancedSearch = self.advancedSearch(pattern)
if not len(result):
result = resultAdvancedSearch
else:
for item in resultAdvancedSearch:
weightAVS, hitsAVS, entryIndexAVS = item
for i in xrange(len(result)):
entry = result[i]
weightFTS, hitsFTS, entryIndexFTS = entry
if entryIndexAVS == entryIndexFTS:
weight = weightAVS + weightFTS
hits = hitsAVS + hitsFTS
result[i] = (weight, hits, entryIndexAVS)
break
return self.sortEntryList(result)
def fullTextSearch(self, pattern):
"full text search for the registered documents"
searchStrategy = self.getSearchStrategy()
fullTextWeight = searchStrategy["fulltextsearch"]
# search for the given search pattern
# both data and query are multiline objects
originalQuery = pattern.getText()
query = ''.join(originalQuery)
result = []
for entry in self.entryList:
originalData = entry.getText()
data = ''.join(originalData)
hits = data.count(query)
# set return value
entryId = entry.getEntryId()
value = fullTextWeight * hits
result.append((value, hits, entryId))
# sort the result according to the sort value
result = self.sortEntryList(result)
return result
def advancedSearch(self, pattern):
searchStrategy = self.getSearchStrategy()
advancedWeight = searchStrategy["advancedsearch"]
tfPattern = pattern.data.tf()
digramPattern = pattern.data.getNgramsByParagraph(2)
trigramPattern = pattern.data.getNgramsByParagraph(3)
phoneticPattern = pattern.getPhoneticCode()
descriptorsPattern = pattern.getKeywords()
result = []
for entry in self.entryList:
# calculate tf
tfEntry = entry.data.tf()
# calculate digrams
digramEntry = entry.data.getNgramsByParagraph(2)
digramValue = entry.data.compareNgramLists(digramEntry, digramPattern)
# calculate trigrams
trigramEntry = entry.data.getNgramsByParagraph(3)
trigramValue = entry.data.compareNgramLists(trigramEntry, trigramPattern)
# phonetic codes
phoneticEntry = entry.getPhoneticCode()
phoneticValue = self.comparePhoneticCodeLists(phoneticPattern, phoneticEntry)
# descriptor comparison
descriptorsEntry = entry.getKeywords()
desc = self.compareDescriptors (descriptorsPattern, descriptorsEntry)
descValue = desc['binary']*0.3 + desc['fuzzy']*0.3 + desc['knn']*0.4
hits = 0
value = digramValue * 0.25
value += trigramValue * 0.25
value += phoneticValue * 0.25
value += descValue * 0.25
# set return value
entryId = entry.getEntryId()
value = advancedWeight * value
result.append((value, hits, entryId))
return result | AdvaS-Advanced-Search | /advas-advanced-search-0.2.5.tar.gz/advas-0.2.5/advancedsearch.py | advancedsearch.py |
# -----------------------------------------------------------
# AdvaS Advanced Search 0.2.5
# advanced search algorithms implemented as a python module
# advas core module
#
# (C) 2002 - 2014 Frank Hofmann, Berlin, Germany
# Released under GNU Public License (GPL)
# email [email protected]
# -----------------------------------------------------------
# other modules required by advas
import string
import re
import math
from ngram import Ngram
from phonetics import Phonetics
from advasio import AdvasIo
class Advas:
def __init__(self, text, encoding):
"init an Advas object"
self.setText(text)
self.setEncoding(encoding)
return
def getText(self):
"return the saved text value"
return self.text
def setText(self, text):
"set a given text value"
self.text = text
return
def getEncoding(self):
"return the saved text encoding"
return self.encoding
def setEncoding(self, encoding):
"set the text encoding"
self.encoding = encoding
return
# basic functions ==========================================
# line -----------------------------------------------------
def splitLine (self, line):
"split a line of text into single words"
# define regexp tokens and split line
tokens = re.compile(r"[\w']+")
return tokens.findall(line)
def splitParagraph (self):
"split a paragraph into single lines"
lines = self.text
return lines
def splitText(self):
"split the text into single words per paragraph line"
paragraphList = []
# split text into single lines
lines = self.splitParagraph()
for line in lines:
# split this line into single words
wordList = self.splitLine(line)
paragraphList.append(wordList)
return paragraphList
def isComment(self, line):
"verifies a line for being a UNIX style comment"
# remove any whitespace at the beginning
line = string.lstrip(line)
# is comment? (UNIX style)
if line.startswith("#"):
return True
else:
return False
def kmpSearch(self, text, pattern):
"search pattern in a text using Knuth-Morris-Pratt algorithm"
i = 0
j = -1
next = {0: -1}
# initialize next array
while 1:
if ((j == -1) or (pattern[i] == pattern[j])):
i = i + 1
j = j + 1
next[i] = j
else:
j = next[j]
# end if
if (i >= len(pattern)):
break
# end while
# search
i = 0
j = 0
positions = []
while 1:
if ((j == -1) or (text[i] == pattern[j])):
i = i + 1
j = j + 1
else:
j = next[j]
# end if
if (i >= len(text)):
return positions
# end if
if (j >= len(pattern)):
positions.append(i - len(pattern))
i = i - len(pattern) + 1
j = 0
# end if
# end while
return
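	# Usage sketch (hand-checked; overlapping matches are reported as well):
	#   self.kmpSearch("aabaabaaa", "aab")  # -> [0, 3]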
# list functions -------------------------------------------
def removeItems(self, originalList, removeList):
"remove the items from the original list"
for item in removeList:
# item in original list?
if originalList.has_key(item):
del originalList[item]
return originalList
# advanced functions =======================================
# term frequency (tf) --------------------------------------
def tf (self):
"calculates the term frequency for the given text"
occurency = {}
# split the given text into single lines
splittedParagraph = self.splitText()
for line in splittedParagraph:
for word in line:
if occurency.has_key(word):
newValue = occurency[word] + 1
else:
newValue = 1
occurency[word] = newValue
# return list of words and their frequency
return occurency
def tfStop (self, stopList):
"calculates the term frequency and removes the items given in stop list"
# get term frequency from self.text
wordList = self.tf()
# remove items given in stop list
		occurency = self.removeItems(wordList, stopList)
		# return result
		return occurency
def idf(self, numberOfDocuments, frequencyList):
"calculates the inverse document frequency for a given list of terms"
idfList = {}
for item in frequencyList.keys():
# get frequency
frequency = frequencyList[item]
# calculate idf = ln(numberOfDocuments/n):
# n=number of documents that contain term
idf = math.log(float(numberOfDocuments)/float(frequency))
# save idf
idfList[item] = idf
return idfList
# n-gram functions ----------------------------------------
def getNgramsByWord (self, word, ngramSize):
if not ngramSize:
return []
term = Ngram(word, ngramSize)
if term.deriveNgrams():
return term.getNgrams()
else:
return []
def getNgramsByLine (self, ngramSize):
if not ngramSize:
return []
occurency = []
# split the given text into single lines
lines = self.splitParagraph()
for line in lines:
term = Ngram(line, ngramSize)
if term.deriveNgrams():
occurency.append(term.getNgrams())
else:
occurency.append([])
return occurency
def getNgramsByParagraph(self, ngramSize):
if not ngramSize:
return []
reducedList = []
occurency = self.getNgramsByLine(ngramSize)
for line in occurency:
reducedList = list(set(reducedList) | set(line))
return reducedList
def compareNgramLists (self, list1, list2):
"compares two lists of ngrams and returns their degree of equality"
# equality of terms : Dice coefficient
#
# S = 2C/(A+B)
#
# S = degree of equality
# C = n-grams contained in term 2 as well as in term 2
# A = number of n-grams contained in term 1
# B = number of n-grams contained in term 2
# find n-grams contained in both lists
A = len(list1)
B = len(list2)
# extract the items which appear in both list1 and list2
list3 = list(set(list1) & set(list2))
C = len(list3)
# calculate similarity of term 1 and 2
S = float(float(2*C)/float(A+B))
return S
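	# Worked example (Dice coefficient): list1 = ["ab", "bc"] and
	# list2 = ["bc", "cd"] share one n-gram, so S = 2*1 / (2+2) = 0.5;
	# identical non-empty lists yield S = 1.0.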
# phonetic codes ---------------------------------------
def soundex(self):
soundexCode = {}
# split the given text into single lines
splittedParagraph = self.splitText()
for line in splittedParagraph:
for word in line:
if not soundexCode.has_key(word):
phoneticsObject = Phonetics(word)
soundexValue = phoneticsObject.soundex()
soundexCode[word] = soundexValue
return soundexCode
def metaphone(self):
metaphoneCode = {}
# split the given text into single lines
splittedParagraph = self.splitText()
for line in splittedParagraph:
for word in line:
if not metaphoneCode.has_key(word):
phoneticsObject = Phonetics(word)
metaphoneValue = phoneticsObject.metaphone()
metaphoneCode[word] = metaphoneValue
return metaphoneCode
def nysiis(self):
nysiisCode = {}
# split the given text into single lines
splittedParagraph = self.splitText()
for line in splittedParagraph:
for word in line:
if not nysiisCode.has_key(word):
phoneticsObject = Phonetics(word)
nysiisValue = phoneticsObject.nysiis()
nysiisCode[word] = nysiisValue
return nysiisCode
def caverphone(self):
caverphoneCode = {}
# split the given text into single lines
splittedParagraph = self.splitText()
for line in splittedParagraph:
for word in line:
if not caverphoneCode.has_key(word):
phoneticsObject = Phonetics(word)
caverphoneValue = phoneticsObject.caverphone()
caverphoneCode[word] = caverphoneValue
return caverphoneCode
def phoneticCode(self):
codeList = {}
# split the given text into single lines
splittedParagraph = self.splitText()
for line in splittedParagraph:
for word in line:
if not codeList.has_key(word):
phoneticsObject = Phonetics(word)
value = phoneticsObject.phoneticCode()
codeList[word] = value
return codeList
# language detection -----------------------------------
def isLanguage (self, keywordList):
"given text is written in a certain language"
# old function - substituted by isLanguageByKeywords()
return self.isLanguageByKeywords (keywordList)
def isLanguageByKeywords (self, keywordList):
"determine the language of a given text with the use of keywords"
# keywordList: list of items used to determine the language
# get term frequency using tf
textTf = self.tf()
# lower each keyword
listLength = len(keywordList)
for i in range(listLength):
keywordList[i] = string.lower(string.strip(keywordList[i]))
# end for
# derive intersection
intersection = list(set(keywordList) & set(textTf.keys()))
lineLanguage = len(intersection)
# value
value = float(float(lineLanguage)/float(listLength))
return value
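	# Usage sketch (illustrative keyword list; real stop-word lists are longer):
	#   Advas(["the cat and the dog"], "utf-8").isLanguageByKeywords(["the", "and", "la", "le"])
	#   # -> 0.5 (two of the four keywords occur in the text)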
# synonyms ---------------------------------------------
def getSynonyms(self, filename, encoding):
# works with OpenThesaurus (plain text version)
# requires an OpenThesaurus release later than 2003-10-23
# http://www.openthesaurus.de
synonymFile = AdvasIo(filename, encoding)
success = synonymFile.readFileContents()
if not success:
return False
contents = synonymFile.getFileContents()
searchTerm = self.text[0]
synonymList = []
for line in contents:
if not self.isComment(line):
wordList = line.split(";")
if searchTerm in wordList:
synonymList += wordList
# remove extra characters
for i in range(len(synonymList)):
synonym = synonymList[i]
synonymList[i] = synonym.strip()
# compact list: remove double entries
synonymList = list(set(synonymList))
# maybe the search term is in the list: remove it, too
if searchTerm in synonymList:
synonymList = list(set(synonymList).difference(set([searchTerm])))
return synonymList
def isSynonymOf(self, term, filename, encoding):
synonymList = self.getSynonyms(filename, encoding)
if term in synonymList:
return True
return False | AdvaS-Advanced-Search | /advas-advanced-search-0.2.5.tar.gz/advas-0.2.5/advas.py | advas.py |
class LinkedList:
def __init__(self,Iterable:object="",initialize_list:bool=False,size:int=0,initial_value:object=0,sorted:bool=False,order:bool=False,dtype:object=None)->None:
self.next=None
self.__dtype=dtype
self.__mode=sorted
self.__last=self
self.__order=order
self.__length=0
self.__data=0
if(initialize_list):
for item in range(size):
self.append(initial_value)
else:
for item in Iterable:
self.append(item)
def append(self,data:object=0)->None:
self.__length+=1
tem=LinkedList()
if(self.__dtype!=None):
data=self.__dtype(data)
tem.__data = data
t=self
if(self.__mode):
while(t.next!=None and ((t.next.__data<data and not self.__order) or (t.next.__data>data and self.__order))):
t=t.next
tem.next=t.next
t.next=tem
else:
tem.__last=self.__last
self.__last.next=tem
self.__last=tem
    def __str__(self):
        # build the textual representation without the old "\b" backspace hack
        items = []
        t = self.next
        while(t!=None):
            if(isinstance(t.__data,str)):
                items.append(f"'{t.__data}'")
            else:
                items.append(str(t.__data))
            t = t.next
        return "[ " + " , ".join(items) + " ]"
def __len__(self):
return self.__length
def extend(self,_iterable):
for item in _iterable:
self.append(item)
def copy(self):
tem=LinkedList()
t=self.next
while(t!=None):
tem.append(t.__data)
t=t.next
return tem
def insert_index(self,index:int,data:object):
self.__length+=1
tem=LinkedList()
tem.__data=data
i=0
t=self
while(t!=None):
if(i==index or i==self.__length-1):
tem.next=t.next
t.next=tem
break
i+=1
t=t.next
def __getitem__(self, item):
if(not isinstance(item,int) and not isinstance(item,LinkedList)):
a=item.start
b=item.stop
c=item.step
if (c == None):
c = 1
if(a==None):
if(c>=0):
a=0
else:
a=len(self)
if(b==None):
if(c>=0):
b=len(self)
else:
b=-1
tem=LinkedList()
t=self
if(a<-len(self) or a>len(self) or b>len(self) or b<-len(self) ):
raise IndexError("Index out of range")
else:
var1=0
if(c<0):
a,b=b+1,a+1
c=-1*c
var1=-1
i=0
while(t!=None):
if(i==a):
break
i+=1
t=t.next
t=t.next
k=i
while(t!=None and i<b):
if(k==i):
tem.append(t.__data)
k+=(c)
t=t.next
i+=1
if(var1==-1):
tem.reversed()
return tem
elif(isinstance(item,LinkedList)):
return item.next.__data
else:
if(item<0):
if(item<-len(self)):
raise IndexError("Index out of range")
else:
item=len(self)+item
else:
if(item>len(self)):
raise IndexError("Index out of range")
t=self
i=0
while(t.next!=None):
if(i==item):
return t.next.__data
t=t.next
i+=1
            raise IndexError("Index out of range")
def __setitem__(self, key, value):
if(isinstance(key,int)):
t=self
i=0
while(t.next!=None):
if(i==key):
t.next.__data=value
break
t=t.next
i+=1
else:
try:
key.next.__data=value
except:
raise SyntaxError("Invalid reffernce pass for Linked list assignment")
    def insert_reference(self,obj:object,data:object)->None:
self.__length+=1
tem=LinkedList()
tem.__data=data
tem.next=obj.next
obj.next=tem
def swap(self,obj1:object,obj2:object):
self[obj1],self[obj2]=self[obj2],self[obj1]
def __reversed__(self):
t=self.copy()
t.reversed()
return t
def reversed(self):
t=self.next
if(t!=None):
p=self.next.next
t.next=None
while(p!=None):
q=p.next
p.next=t
t=p
p=q
self.next=t
    def __eq__(self, other):
        # element-wise comparison (the original body was a no-op assignment)
        return isinstance(other,LinkedList) and len(self)==len(other) and all(a==b for a,b in zip(self,other))
def __mul__(self, other:int):
if(isinstance(other,int)):
if(other==1):
return self
else:
return self+self.__mul__(other-1)
else:
raise SyntaxError(f"Invalid operator between Linked list object and {type(other).__name__} object")
def __add__(self, other):
if(isinstance(other,LinkedList)):
tem=LinkedList(self)
t=other.next
while(t!=None):
tem.append(t.__data)
t=t.next
return tem
else:
raise SyntaxError("Invalid operator between linked list and %s" % (type(other).__name__))
def __abs__(self):
t = self
while (t.next != None):
if (isinstance(t.next.__data, int) or isinstance(t.next.__data, float)):
t.next.__data = abs(t.next.__data)
t = t.next
def Sum(self):
return sum(self)
def mean(self):
return sum(self)/len(self)
    def median(self):
        # approximation via Pearson's empirical relation Mode ~ 3*Median - 2*Mean,
        # i.e. Median ~ (Mode + 2*Mean) / 3, computed in O(n); not the exact median
        ans="%.1f"%((self.mode()+(2*self.mean()))/3)
        return float(ans)
def mode(self):
d={}
for item in self:
if(item in d):
d[item]+=1
else:
d[item]=1
max=list(d.keys())
max=max[0]
for item in d:
if(d[item]>d[max] and d[item]>1):
max=item
l=0
s=0
for item in d:
if(d[item]==d[max]):
s+=item
l+=1
return (s/l)
def __pow__(self, power,modula=None):
t = self
while (t.next != None):
if (isinstance(t.next.__data, int) or isinstance(t.next.__data, float)):
t.next.__data = pow(t.next.__data,power,modula)
t = t.next
@classmethod
def create_sized_list(cls,size=0,intial_value=0):
tem=LinkedList()
tem.__length=size
for item in range(size):
tem.append(intial_value)
return tem
def sqrt(self,modula=None):
t=self
self.__pow__(0.5,modula)
def count(self,element,start=0,end=-1):
if(end==-1):
end=len(self)
count=0
t=self
i=0
        while(t.next!=None):
if(i>=start):
if(t.next.__data==element):
count+=1
i+=1
if(i==end):
break
t=t.next
return count
def index(self,value,start=0,end=-1):
if(end==-1):
end=len(self)
t = self
i = 0
        while (t.next != None):
if (i >= start):
if (t.next.__data == value):
return i
i += 1
t=t.next
if (i == end):
break
return -1
def pop(self,index=-1):
if(index==-1):
index=len(self)-1
t=self
i=0
while(t.next!=None):
if(i==index):
t.next=t.next.next
break
t=t.next
i+=1
self.__length-=1
    def search(self,value:object)->bool:
        return self.index(value)!=-1
def remove(self,value):
j=self.index(value)
if(j!=-1):
self.pop(j)
def clear(self):
self.__length=0
self.next=None
def __iadd__(self, other):
if(isinstance(other,LinkedList)):
t=other
while(t.next!=None):
self.append(t.next.__data)
t=t.next
return self
else:
raise SyntaxError("Invalid operator between linked list and %s"%(type(other).__name__))
def __imul__(self, other):
return self.__mul__(other)
def __call__(self, *args, **kwargs):
for item in args:
self.append(item)
def replace(self,old_value,new_value,times=-1):
count=0
t=self
while(t.next!=None):
if(count==times):
break
if(t.next.__data==old_value):
count+=1
t.next.__data=new_value
t=t.next
def rindex(self,value):
t=self
i=0
j=-1
while(t.next!=None):
if(t.next.__data==value):
j=i
i+=1
t=t.next
return j
def concatenate(self,item=""):
t=self
ans=""
while(t.next!=None):
ans+=str(t.next.__data)
ans+=item
t=t.next
return ans
def partition(self,value:int,starts:object=0,ends:object=-1):
if(isinstance(starts,int) and isinstance(ends,int)):
startp=self
midp=self
endp=self.__last.__last
else:
startp = starts
midp = starts
endp = ends
mid=0
end=len(self)-1
while(end>=mid and midp.next!=endp.next):
if(midp.next.__data>value):
endp.next.__data,midp.next.__data=midp.next.__data,endp.next.__data
end-=1
endp=endp.__last
elif(midp.next.__data<value):
startp.next.__data,midp.next.__data=midp.next.__data,startp.next.__data
startp=startp.next
midp=midp.next
mid+=1
else:
midp = midp.next
mid += 1
return (startp,endp.next)
    def cumulativeSum(self):
tem=LinkedList([0])
sum=0
t=self
while(t.next!=None):
if(isinstance(t.next.__data,int) or isinstance(t.next.__data,float)):
sum+=t.next.__data
tem.append(sum)
t=t.next
return tem
def Max(self):
return max(self)
def Min(self):
return min(self)
def join(self,item):
t=self
while(t.next!=None):
            self.insert_reference(t.next,item)
t=t.next.next
def maxSum(self):
sum=0
t=self
try:
ans=t.next.__data
except:
ans=0
while(t.next!=None):
if (isinstance(t.next.__data, int) or isinstance(t.next.__data, float)):
sum+=(t.next.__data)
if(sum<0):
sum=0
ans = max(ans, sum)
t=t.next
return ans
def maxProduct(self):
t = self
try:
max1 = t.next.__data
max2 = t.next.__data
min1 = t.next.__data
min2 = -t.next.__data
except:
min1 = 0
max1 = 0
max2 = 0
min2 = 0
while (t.next != None):
if (isinstance(t.next.__data, int) or isinstance(t.next.__data, float)):
if(max1<t.next.__data):
max1=t.next.__data
elif(max2<t.next.__data and t.next.__data<=max1):
max2=t.next.__data
else:
pass
if(min1>=t.next.__data):
min1=t.next.__data
elif (min2 >= t.next.__data and t.next.__data>=min1):
min2 = t.next.__data
t = t.next
if(min1*min2>max2*max1):
return (min1*min2,(min1,min2))
return (max1*max2,(max1,max2))
def shift(self,value,side=True):
startp = self
midp = self
endp = self.__last.__last
mid = 0
end = len(self) - 1
while (end >= mid and midp.next != None):
if (midp.next.__data!=value and side):
endp.next.__data, midp.next.__data = midp.next.__data, endp.next.__data
end -= 1
endp = endp.__last
elif (midp.next.__data!=value and not side):
startp.next.__data, midp.next.__data = midp.next.__data, startp.next.__data
startp = startp.next
midp = midp.next
mid += 1
else:
midp = midp.next
mid += 1
def __fact(self,n):
a=1
for i in range(2,n+1):
a*=i
return a
def factorial(self):
t = self
while (t.next != None):
if (isinstance(t.next.__data, int) or isinstance(t.next.__data, float)):
t.next.__data=self.__fact(t.next.__data)
t = t.next
def sort(self,order:bool=False):
tem=LinkedList(sorted(self,reverse=order))
t=self
while(t.next!=None):
t.next.__data=tem.next.__data
tem=tem.next
t=t.next
def power(self,power,modula=None):
self.__pow__(power,modula) | Advance-LinkedList | /Advance-LinkedList-1.0.2.tar.gz/Advance-LinkedList-1.0.2/LinkedList/__main__.py | __main__.py |
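# Minimal usage sketch (illustrative; exercises sorted insertion, indexing and maxSum):
if __name__ == "__main__":
    ll = LinkedList([3, 1, 2], sorted=True)    # kept in ascending order
    ll.append(0)                               # -> 0, 1, 2, 3
    print(len(ll), ll[0], ll[-1])              # -> 4 0 3
    print(LinkedList([1, -2, 3, 4]).maxSum())  # -> 7 (Kadane-style maximum subsequence sum)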
__version__="1.0.2"
__author__="Nitin Gupta"
class LinkedList:
def __init__(self,Iterable:object="",initialize_list:bool=False,size:int=0,initial_value:object=0,sorted:bool=False,reverse:bool=False,dtype:object=None)->None:
self.next=None
self.dtype=dtype
self.sorted=sorted
self.__last=self
self.reverse=reverse
self.__length=0
self.data=0
if(initialize_list):
for item in range(size):
self.append(initial_value)
else:
for item in Iterable:
self.append(item)
def append(self,data:object=0)->None:
self.__length+=1
tem=LinkedList()
if(self.dtype!=None):
data=self.dtype(data)
tem.data = data
t=self
if(self.sorted):
while(t.next!=None and ((t.next.data<data and not self.reverse) or (t.next.data>data and self.reverse))):
t=t.next
tem.next=t.next
t.next=tem
else:
tem.__last=self.__last
self.__last.next=tem
self.__last=tem
    def __str__(self):
        # build the textual representation without the old "\b" backspace hack
        items = []
        t = self.next
        while(t!=None):
            if(isinstance(t.data,str)):
                items.append(f"'{t.data}'")
            else:
                items.append(str(t.data))
            t = t.next
        return "[ " + " , ".join(items) + " ]"
def __len__(self):
return self.__length
def extend(self,_iterable):
for item in _iterable:
self.append(item)
def copy(self):
tem=LinkedList()
t=self.next
while(t!=None):
tem.append(t.data)
t=t.next
return tem
def insert_index(self,index:int,data:object):
self.__length+=1
tem=LinkedList()
tem.data=data
i=0
t=self
while(t!=None):
if(i==index or i==self.__length-1):
tem.next=t.next
t.next=tem
break
i+=1
t=t.next
def __getitem__(self, item):
if(not isinstance(item,int) and not isinstance(item,LinkedList)):
a=item.start
b=item.stop
c=item.step
if (c == None):
c = 1
if(a==None):
if(c>=0):
a=0
else:
a=len(self)
if(b==None):
if(c>=0):
b=len(self)
else:
b=-1
tem=LinkedList()
t=self
if(a<-len(self) or a>len(self) or b>len(self) or b<-len(self) ):
raise IndexError("Index out of range")
else:
var1=0
if(c<0):
a,b=b+1,a+1
c=-1*c
var1=-1
i=0
while(t!=None):
if(i==a):
break
i+=1
t=t.next
t=t.next
k=i
while(t!=None and i<b):
if(k==i):
tem.append(t.data)
k+=(c)
t=t.next
i+=1
if(var1==-1):
tem.reversed()
return tem
elif(isinstance(item,LinkedList)):
return item.next.data
else:
if(item<0):
if(item<-len(self)):
raise IndexError("Index out of range")
else:
item=len(self)+item
else:
if(item>len(self)):
raise IndexError("Index out of range")
t=self
i=0
while(t.next!=None):
if(i==item):
return t.next.data
t=t.next
i+=1
            raise IndexError("Index out of range")
def __setitem__(self, key, value):
if(isinstance(key,int)):
t=self
i=0
while(t.next!=None):
if(i==key):
t.next.data=value
break
t=t.next
i+=1
else:
try:
key.next.data=value
except:
raise SyntaxError("Invalid reffernce pass for Linked list assignment")
    def insert_reference(self,obj:object,data:object)->None:
self.__length+=1
tem=LinkedList()
tem.data=data
tem.next=obj.next
obj.next=tem
def swap(self,obj1:object,obj2:object):
self[obj1],self[obj2]=self[obj2],self[obj1]
def __reversed__(self):
t=self.copy()
t.reversed()
return t
def reversed(self):
t=self.next
if(t!=None):
p=self.next.next
t.next=None
while(p!=None):
q=p.next
p.next=t
t=p
p=q
self.next=t
    def __eq__(self, other):
        # element-wise comparison (the original body was a no-op assignment)
        return isinstance(other,LinkedList) and len(self)==len(other) and all(a==b for a,b in zip(self,other))
def __mul__(self, other:int):
if(isinstance(other,int)):
if(other==1):
return self
else:
return self+self.__mul__(other-1)
else:
raise SyntaxError(f"Invalid operator between Linked list object and {type(other).__name__} object")
def __add__(self, other):
if(isinstance(other,LinkedList)):
tem=LinkedList(self)
t=other.next
while(t!=None):
tem.append(t.data)
t=t.next
return tem
else:
raise SyntaxError("Invalid operator between linked list and %s" % (type(other).__name__))
def __abs__(self):
t = self
while (t.next != None):
if (isinstance(t.next.data, int) or isinstance(t.next.data, float)):
t.next.data = abs(t.next.data)
t = t.next
def Sum(self):
return sum(self)
def mean(self):
return sum(self)/len(self)
    def median(self):
        # approximation via Pearson's empirical relation Mode ~ 3*Median - 2*Mean,
        # i.e. Median ~ (Mode + 2*Mean) / 3, computed in O(n); not the exact median
        ans="%.1f"%((self.mode()+(2*self.mean()))/3)
        return float(ans)
def mode(self):
d={}
for item in self:
if(item in d):
d[item]+=1
else:
d[item]=1
max=list(d.keys())
max=max[0]
for item in d:
if(d[item]>d[max] and d[item]>1):
max=item
l=0
s=0
for item in d:
if(d[item]==d[max]):
s+=item
l+=1
return (s/l)
def __pow__(self, power,modula=None):
t = self
while (t.next != None):
if (isinstance(t.next.data, int) or isinstance(t.next.data, float)):
t.next.data = pow(t.next.data,power,modula)
t = t.next
@classmethod
def create_sized_list(cls,size=0,intial_value=0):
tem=LinkedList()
tem.__length=size
for item in range(size):
tem.append(intial_value)
return tem
def sqrt(self,modula=None):
t=self
self.__pow__(0.5,modula)
def count(self,element,start=0,end=-1):
if(end==-1):
end=len(self)
count=0
t=self
i=0
        while(t.next!=None):
if(i>=start):
if(t.next.data==element):
count+=1
i+=1
if(i==end):
break
t=t.next
return count
def index(self,value,start=0,end=-1):
if(end==-1):
end=len(self)
t = self
i = 0
        while (t.next != None):
if (i >= start):
if (t.next.data == value):
return i
i += 1
t=t.next
if (i == end):
break
return -1
def pop(self,index=-1):
if(index==-1):
index=len(self)-1
t=self
i=0
while(t.next!=None):
if(i==index):
t.next=t.next.next
break
t=t.next
i+=1
self.__length-=1
    def search(self,value:object)->bool:
        return self.index(value)!=-1
def remove(self,value):
j=self.index(value)
if(j!=-1):
self.pop(j)
def clear(self):
self.__length=0
self.next=None
def __iadd__(self, other):
if(isinstance(other,LinkedList)):
t=other
while(t.next!=None):
self.append(t.next.data)
t=t.next
return self
else:
raise SyntaxError("Invalid operator between linked list and %s"%(type(other).__name__))
def __imul__(self, other):
return self.__mul__(other)
def __call__(self, *args, **kwargs):
for item in args:
self.append(item)
def replace(self,old_value,new_value,times=-1):
count=0
t=self
while(t.next!=None):
if(count==times):
break
if(t.next.data==old_value):
count+=1
t.next.data=new_value
t=t.next
def rindex(self,value):
t=self
i=0
j=-1
while(t.next!=None):
if(t.next.data==value):
j=i
i+=1
t=t.next
return j
def concatenate(self,item=""):
t=self
ans=""
while(t.next!=None):
ans+=str(t.next.data)
ans+=item
t=t.next
return ans
def partition(self,value:int,starts:object=0,ends:object=-1):
if(isinstance(starts,int) and isinstance(ends,int)):
startp=self
midp=self
endp=self.__last.__last
else:
startp = starts
midp = starts
endp = ends
mid=0
end=len(self)-1
while(end>=mid and midp.next!=endp.next):
if(midp.next.data>value):
endp.next.data,midp.next.data=midp.next.data,endp.next.data
end-=1
endp=endp.__last
elif(midp.next.data<value):
startp.next.data,midp.next.data=midp.next.data,startp.next.data
startp=startp.next
midp=midp.next
mid+=1
else:
midp = midp.next
mid += 1
return (startp,endp.next)
    def cumulativeSum(self):
tem=LinkedList([0])
sum=0
t=self
while(t.next!=None):
if(isinstance(t.next.data,int) or isinstance(t.next.data,float)):
sum+=t.next.data
tem.append(sum)
t=t.next
return tem
def Max(self):
return max(self)
def Min(self):
return min(self)
def join(self,item):
t=self
while(t.next!=None):
            self.insert_reference(t.next,item)
t=t.next.next
def maxSum(self):
sum=0
t=self
try:
ans=t.next.data
except:
ans=0
while(t.next!=None):
if (isinstance(t.next.data, int) or isinstance(t.next.data, float)):
sum+=(t.next.data)
if(sum<0):
sum=0
ans = max(ans, sum)
t=t.next
return ans
def maxProduct(self):
t = self
try:
max1 = t.next.data
max2 = t.next.data
min1 = t.next.data
min2 = -t.next.data
except:
min1 = 0
max1 = 0
max2 = 0
min2 = 0
while (t.next != None):
if (isinstance(t.next.data, int) or isinstance(t.next.data, float)):
if(max1<t.next.data):
max1=t.next.data
elif(max2<t.next.data and t.next.data<=max1):
max2=t.next.data
else:
pass
if(min1>=t.next.data):
min1=t.next.data
elif (min2 >= t.next.data and t.next.data>=min1):
min2 = t.next.data
t = t.next
if(min1*min2>max2*max1):
return (min1*min2,(min1,min2))
return (max1*max2,(max1,max2))
def shift(self,value,side=True):
startp = self
midp = self
endp = self.__last.__last
mid = 0
end = len(self) - 1
while (end >= mid and midp.next != None):
if (midp.next.data!=value and side):
endp.next.data, midp.next.data = midp.next.data, endp.next.data
end -= 1
endp = endp.__last
elif (midp.next.data!=value and not side):
startp.next.data, midp.next.data = midp.next.data, startp.next.data
startp = startp.next
midp = midp.next
mid += 1
else:
midp = midp.next
mid += 1
def __fact(self,n):
a=1
for i in range(2,n+1):
a*=i
return a
def factorial(self):
t = self
while (t.next != None):
if (isinstance(t.next.data, int) or isinstance(t.next.data, float)):
t.next.data=self.__fact(t.next.data)
t = t.next
def sort(self,reverse:bool=False):
tem=LinkedList(sorted(self,reverse=reverse))
t=self
while(t.next!=None):
t.next.data=tem.next.data
tem=tem.next
t=t.next
def power(self,power,modula=None):
self.__pow__(power,modula) | Advance-LinkedList | /Advance-LinkedList-1.0.2.tar.gz/Advance-LinkedList-1.0.2/LinkedList/__init__.py | __init__.py |
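# Minimal usage sketch (illustrative; exercises the dtype coercion):
if __name__ == "__main__":
    digits = LinkedList("123", dtype=int)  # each character is coerced to int
    print(digits.Sum())                    # -> 6
    print(digits.cumulativeSum())          # -> [ 0 , 1 , 3 , 6 ]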
mid = 0
end = len(self) - 1
while (end >= mid and midp.next != None):
if (midp.next.__data!=value and side):
endp.next.__data, midp.next.__data = midp.next.__data, endp.next.__data
end -= 1
endp = endp.__last
elif (midp.next.__data!=value and not side):
startp.next.__data, midp.next.__data = midp.next.__data, startp.next.__data
startp = startp.next
midp = midp.next
mid += 1
else:
midp = midp.next
mid += 1
def __fact(self,n):
a=1
for i in range(2,n+1):
a*=i
return a
def factorial(self):
t = self
while (t.next != None):
if (isinstance(t.next.__data, int) or isinstance(t.next.__data, float)):
t.next.__data=self.__fact(t.next.__data)
t = t.next
def sort(self,order:bool=False):
tem=LinkedList(sorted(self,reverse=order))
t=self
while(t.next!=None):
t.next.__data=tem.next.__data
tem=tem.next
t=t.next
def power(self,power,modula=None):
self.__pow__(power,modula) | AdvanceLinkedList | /AdvanceLinkedList-1.0.3.tar.gz/AdvanceLinkedList-1.0.3/LinkedList/__main__.py | __main__.py |
__version__="1.0.3"
__author__="Nitin Gupta"
class LinkedList:
def __init__(self,Iterable:object="",initialize_list:bool=False,size:int=0,initial_value:object=0,sorted:bool=False,reverse:bool=False,dtype:object=None)->None:
self.next=None
self.dtype=dtype
self.sorted=sorted
self.__last=self
self.reverse=reverse
self.__length=0
self.data=0
if(initialize_list):
for item in range(size):
self.append(initial_value)
else:
for item in Iterable:
self.append(item)
def append(self,data:object=0)->None:
self.__length+=1
tem=LinkedList()
if(self.dtype!=None):
data=self.dtype(data)
tem.data = data
t=self
if(self.sorted):
while(t.next!=None and ((t.next.data<data and not self.reverse) or (t.next.data>data and self.reverse))):
t=t.next
tem.next=t.next
t.next=tem
else:
tem.__last=self.__last
self.__last.next=tem
self.__last=tem
def __str__(self):
a="[ "
t=self.next
while(t!=None):
if(isinstance(t.data,str)):
a+=f"'{t.data}' ,"
else:
a+=f"{t.data} ,"
t=t.next
a+="\b]"
return a
def __len__(self):
return self.__length
def extend(self,_iterable):
for item in _iterable:
self.append(item)
def copy(self):
tem=LinkedList()
t=self.next
while(t!=None):
tem.append(t.data)
t=t.next
return tem
def insert_index(self,index:int,data:object):
self.__length+=1
tem=LinkedList()
tem.data=data
i=0
t=self
while(t!=None):
if(i==index or i==self.__length-1):
tem.next=t.next
t.next=tem
break
i+=1
t=t.next
def __getitem__(self, item):
if(not isinstance(item,int) and not isinstance(item,LinkedList)):
a=item.start
b=item.stop
c=item.step
if (c == None):
c = 1
if(a==None):
if(c>=0):
a=0
else:
a=len(self)
if(b==None):
if(c>=0):
b=len(self)
else:
b=-1
tem=LinkedList()
t=self
if(a<-len(self) or a>len(self) or b>len(self) or b<-len(self) ):
raise IndexError("Index out of range")
else:
var1=0
if(c<0):
a,b=b+1,a+1
c=-1*c
var1=-1
i=0
while(t!=None):
if(i==a):
break
i+=1
t=t.next
t=t.next
k=i
while(t!=None and i<b):
if(k==i):
tem.append(t.data)
k+=(c)
t=t.next
i+=1
if(var1==-1):
tem.reversed()
return tem
elif(isinstance(item,LinkedList)):
return item.next.data
else:
if(item<0):
if(item<-len(self)):
raise IndexError("Index out of range")
else:
item=len(self)+item
else:
if(item>len(self)):
raise IndexError("Index out of range")
t=self
i=0
while(t.next!=None):
if(i==item):
return t.next.data
t=t.next
i+=1
raise StopIteration
def __setitem__(self, key, value):
if(isinstance(key,int)):
t=self
i=0
while(t.next!=None):
if(i==key):
t.next.data=value
break
t=t.next
i+=1
else:
try:
key.next.data=value
except:
raise SyntaxError("Invalid reffernce pass for Linked list assignment")
def insert_reffernce(self,obj:object,data:object)->None:
self.__length+=1
tem=LinkedList()
tem.data=data
tem.next=obj.next
obj.next=tem
def swap(self,obj1:object,obj2:object):
self[obj1],self[obj2]=self[obj2],self[obj1]
def __reversed__(self):
t=self.copy()
t.reversed()
return t
def reversed(self):
t=self.next
if(t!=None):
p=self.next.next
t.next=None
while(p!=None):
q=p.next
p.next=t
t=p
p=q
self.next=t
def __eq__(self, other):
self=other
def __mul__(self, other:int):
if(isinstance(other,int)):
if(other==1):
return self
else:
return self+self.__mul__(other-1)
else:
raise SyntaxError(f"Invalid operator between Linked list object and {type(other).__name__} object")
def __add__(self, other):
if(isinstance(other,LinkedList)):
tem=LinkedList(self)
t=other.next
while(t!=None):
tem.append(t.data)
t=t.next
return tem
else:
raise SyntaxError("Invalid operator between linked list and %s" % (type(other).__name__))
def __abs__(self):
t = self
while (t.next != None):
if (isinstance(t.next.data, int) or isinstance(t.next.data, float)):
t.next.data = abs(t.next.data)
t = t.next
def Sum(self):
return sum(self)
def mean(self):
return sum(self)/len(self)
def meadian(self):
        # Median = (Mode + 2*Mean)/3, from the empirical relation Mode = 3*Median - 2*Mean; computes the median in O(n)
ans="%.1f"%((self.mode()+(2*self.mean()))/3)
return float(ans)
def mode(self):
d={}
for item in self:
if(item in d):
d[item]+=1
else:
d[item]=1
max=list(d.keys())
max=max[0]
for item in d:
if(d[item]>d[max] and d[item]>1):
max=item
l=0
s=0
for item in d:
if(d[item]==d[max]):
s+=item
l+=1
return (s/l)
def __pow__(self, power,modula=None):
t = self
while (t.next != None):
if (isinstance(t.next.data, int) or isinstance(t.next.data, float)):
t.next.data = pow(t.next.data,power,modula)
t = t.next
@classmethod
def create_sized_list(cls,size=0,intial_value=0):
tem=LinkedList()
tem.__length=size
for item in range(size):
tem.append(intial_value)
return tem
def sqrt(self,modula=None):
t=self
self.__pow__(0.5,modula)
def count(self,element,start=0,end=-1):
if(end==-1):
end=len(self)
count=0
t=self
i=0
while(t!=None):
if(i>=start):
if(t.next.data==element):
count+=1
i+=1
if(i==end):
break
t=t.next
return count
def index(self,value,start=0,end=-1):
if(end==-1):
end=len(self)
t = self
i = 0
while (t != None):
if (i >= start):
if (t.next.data == value):
return i
i += 1
t=t.next
if (i == end):
break
return -1
def pop(self,index=-1):
if(index==-1):
index=len(self)-1
t=self
i=0
while(t.next!=None):
if(i==index):
t.next=t.next.next
break
t=t.next
i+=1
self.__length-=1
def serarch(self,value:object)->bool:
return self.index(value)!=-1
def remove(self,value):
j=self.index(value)
if(j!=-1):
self.pop(j)
def clear(self):
self.__length=0
self.next=None
def __iadd__(self, other):
if(isinstance(other,LinkedList)):
t=other
while(t.next!=None):
self.append(t.next.data)
t=t.next
return self
else:
raise SyntaxError("Invalid operator between linked list and %s"%(type(other).__name__))
def __imul__(self, other):
return self.__mul__(other)
def __call__(self, *args, **kwargs):
for item in args:
self.append(item)
def replace(self,old_value,new_value,times=-1):
count=0
t=self
while(t.next!=None):
if(count==times):
break
if(t.next.data==old_value):
count+=1
t.next.data=new_value
t=t.next
def rindex(self,value):
t=self
i=0
j=-1
while(t.next!=None):
if(t.next.data==value):
j=i
i+=1
t=t.next
return j
def concatenate(self,item=""):
t=self
ans=""
while(t.next!=None):
ans+=str(t.next.data)
ans+=item
t=t.next
return ans
def partition(self,value:int,starts:object=0,ends:object=-1):
if(isinstance(starts,int) and isinstance(ends,int)):
startp=self
midp=self
endp=self.__last.__last
else:
startp = starts
midp = starts
endp = ends
mid=0
end=len(self)-1
while(end>=mid and midp.next!=endp.next):
if(midp.next.data>value):
endp.next.data,midp.next.data=midp.next.data,endp.next.data
end-=1
endp=endp.__last
elif(midp.next.data<value):
startp.next.data,midp.next.data=midp.next.data,startp.next.data
startp=startp.next
midp=midp.next
mid+=1
else:
midp = midp.next
mid += 1
return (startp,endp.next)
def cummulativeSum(self):
tem=LinkedList([0])
sum=0
t=self
while(t.next!=None):
if(isinstance(t.next.data,int) or isinstance(t.next.data,float)):
sum+=t.next.data
tem.append(sum)
t=t.next
return tem
def Max(self):
return max(self)
def Min(self):
return min(self)
def join(self,item):
t=self
while(t.next!=None):
self.insert_reffernce(t.next,item)
t=t.next.next
def maxSum(self):
sum=0
t=self
try:
ans=t.next.data
except:
ans=0
while(t.next!=None):
if (isinstance(t.next.data, int) or isinstance(t.next.data, float)):
sum+=(t.next.data)
if(sum<0):
sum=0
ans = max(ans, sum)
t=t.next
return ans
def maxProduct(self):
t = self
try:
max1 = t.next.data
max2 = t.next.data
min1 = t.next.data
min2 = -t.next.data
except:
min1 = 0
max1 = 0
max2 = 0
min2 = 0
while (t.next != None):
if (isinstance(t.next.data, int) or isinstance(t.next.data, float)):
if(max1<t.next.data):
max1=t.next.data
elif(max2<t.next.data and t.next.data<=max1):
max2=t.next.data
else:
pass
if(min1>=t.next.data):
min1=t.next.data
elif (min2 >= t.next.data and t.next.data>=min1):
min2 = t.next.data
t = t.next
if(min1*min2>max2*max1):
return (min1*min2,(min1,min2))
return (max1*max2,(max1,max2))
def shift(self,value,side=True):
startp = self
midp = self
endp = self.__last.__last
mid = 0
end = len(self) - 1
while (end >= mid and midp.next != None):
if (midp.next.data!=value and side):
endp.next.data, midp.next.data = midp.next.data, endp.next.data
end -= 1
endp = endp.__last
elif (midp.next.data!=value and not side):
startp.next.data, midp.next.data = midp.next.data, startp.next.data
startp = startp.next
midp = midp.next
mid += 1
else:
midp = midp.next
mid += 1
def __fact(self,n):
a=1
for i in range(2,n+1):
a*=i
return a
def factorial(self):
t = self
while (t.next != None):
if (isinstance(t.next.data, int) or isinstance(t.next.data, float)):
t.next.data=self.__fact(t.next.data)
t = t.next
def sort(self,reverse:bool=False):
tem=LinkedList(sorted(self,reverse=reverse))
t=self
while(t.next!=None):
t.next.data=tem.next.data
tem=tem.next
t=t.next
def power(self,power,modula=None):
self.__pow__(power,modula) | AdvanceLinkedList | /AdvanceLinkedList-1.0.3.tar.gz/AdvanceLinkedList-1.0.3/LinkedList/__init__.py | __init__.py |
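if __name__ == "__main__":
    # Minimal usage sketch of the LinkedList class above (illustrative values).
    ll = LinkedList([3, 1, 2])
    ll.append(5)
    ll.sort()
    print(ll)          # prints the sorted contents
    print(len(ll))     # 4
    print(ll.mean())   # 2.75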
import math
import random
#Cell Type:
# Input Cell
# Backfed Input Cell
# Noisy Input Cell
# Hidden Cell
# Probablistic Hidden Cell
# Spiking Hidden Cell
# Output Cell
# Match Input Output Cell
# Recurrent Cell
# Memory Cell
# Different Memory Cell
# Kernel
# Convolution or Pool
def function(ftype:str,z:float,prime=False,alpha=1):
"""
type : "sigmoid,ELU,..."
z : Pre-activation
prime : True/False
alpha : Default(1)
Funtion :
# Binary Step (z)
# Linear (z, alpha)
# Sigmoid (z)
# Tanh (z)
# ReLU (z)
# Leaky-ReLU (z, alpha)
# Parameterised-ReLU (z, alpha)
# Exponential-Linear-Unit (z, alpha)
"""
if ftype == "Binary-Step":
if prime == False:
if z < 0:
y = 0
else:
y = 1
# else: pas de deriver
if ftype == "Linear":
if prime == False:
y = z*alpha
else:
y = alpha
if ftype == "Sigmoid":
if prime == False:
y = 1/(1+math.exp(-z))
else:
y = (1/(1+math.exp(-z))) * (1-(1/(1+math.exp(-z))))
if ftype == "Tanh":
if prime == False:
y = (math.exp(z)-math.exp(-z))/(math.exp(z)+math.exp(-z))
else:
y = 1 - (math.exp(z)-math.exp(-z))/(math.exp(z)+math.exp(-z))**2
if ftype == "ReLU":
if prime == False:
y = max(0,z)
else:
if z >= 0:
y = 1
else:
y = 0
if ftype == "Leaky-ReLU":
if prime == False:
y = max(alpha*z, z)
else:
if z > 0:
y = 1
else:
y = alpha
if ftype == "Parameterised-ReLU":
if prime == False:
if z >= 0:
y = z
else:
y = alpha*z
else:
if z >= 0:
y = 1
else:
y = alpha
if ftype == "Exponential-Linear-Unit":
if prime == False:
if z >= 0:
y = z
else:
y = alpha*(math.exp(z)-1)
else:
if z >= 0:
y = z
else:
y = alpha*(math.exp(y))
return y
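# Example (sketch): the Sigmoid activation and its derivative at z = 0
#   function("Sigmoid", 0.0)              -> 0.5
#   function("Sigmoid", 0.0, prime=True)  -> 0.25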
class neural_network:
def __init__(self):
self.network = [{},{},{}]
self.used_neuron_feedforward = {}
self.used_neuron_backward = {}
def Add_Input_Neuron(self,neuron_name:str,neuron_type:str):
"""
neuron_type:
-Input Cell
-Backfed Input Cell #not available
-Noisy Input Cell #not available
"""
self.network[0][neuron_name] = {
"type":neuron_type,
"output_bridge":{},
"y":0
}
self.used_neuron_feedforward[neuron_name] = False
self.used_neuron_backward[neuron_name] = False
def Add_Hidden_Neuron(self,neuron_name:str,neuron_type:str,activation_type:str,alpha:float=None,biais:float=0.0):
"""
neuron_type :
-Hidden Cell
-Probablistic Hidden Cell #not available
-Spiking Hidden Cell #not available
#----------#
activation_type :
-Binary Step (z)
-Linear (z, alpha)
-Sigmoid (z)
-Tanh (z)
-ReLU (z)
-Leaky-ReLU (z, alpha)
-Parameterised-ReLU (z, alpha)
-Exponential-Linear-Unit (z, alpha)
"""
self.network[1][neuron_name] = {
"type":neuron_type,
"activation":{
"ftype":activation_type,
"alpha":alpha
},
"input_bridge":{},
"output_bridge":{},
"biais":biais,
"y":0,
"delta":0
}
self.used_neuron_feedforward[neuron_name] = False
self.used_neuron_backward[neuron_name] = False
def Add_Output_Neuron(self,neuron_name:str,neuron_type:str,activation_type:str,alpha:float=None,biais:float=0.0):
"""
neuron_type :
-Output Cell
-Match Input Output Cell #not available
#----------#
activation_type :
-Binary Step (z)
-Linear (z, alpha)
-Sigmoid (z)
-Tanh (z)
-ReLU (z)
-Leaky-ReLU (z, alpha)
-Parameterised-ReLU (z, alpha)
-Exponential-Linear-Unit (z, alpha)
"""
self.network[2][neuron_name] = {
"type":neuron_type,
"activation":{
"ftype":activation_type,
"alpha":alpha
},
"input_bridge":{},
"biais":biais,
"y":0,
"delta":0
}
self.used_neuron_feedforward[neuron_name] = False
self.used_neuron_backward[neuron_name] = False
def Add_Bridge(self,bridge_list:list):
"""
bridge_list:
[
[from,to],
[from,to],
...
]
"""
        #for every bridge in the list
        for bridge in bridge_list:
            #search the whole INPUT_LAYER
            for input_neuron in self.network[0]:
                #if one of this layer's neurons is in the selected bridge
                if input_neuron in bridge:
                    #add the second neuron to the outputs
                    self.network[0][input_neuron]["output_bridge"][bridge[1]] = random.uniform(-1,1)
            #search the whole HIDDEN_LAYER
            for hidden_neuron in self.network[1]:
                #if one of this layer's neurons is in the selected bridge
                if hidden_neuron in bridge:
                    #check the direction (out/in) -> (0,1)
                    types = bridge.index(hidden_neuron)
                    if types == 0:#if it is an outgoing bridge
                        self.network[1][hidden_neuron]["output_bridge"][bridge[1]] = random.uniform(-1,1)
                    else:#if it is an incoming bridge
                        self.network[1][hidden_neuron]["input_bridge"][bridge[0]] = random.uniform(-1,1)
            #search the whole OUTPUT_LAYER
            for output_neuron in self.network[2]:
                #if one of this layer's neurons is in the selected bridge
                if output_neuron in bridge:
                    self.network[2][output_neuron]["input_bridge"][bridge[0]] = random.uniform(-1,1)
    def train(self,inputs,expected,learning_rate,nb_epoch,display=False):
        #for each epoch
        for epoch in range(nb_epoch):
            #reset the epoch error to 0
            error = 0
            #for every input row
            for x,values in enumerate(inputs):
                #run the feed forward pass with these values
                outputs = self.feed_forward(values)
                #accumulate the squared differences over all outputs
                error += sum([(expected[x][i]-outputs[i])**2 for i in range(len(expected[x]))])
                #compute the error rate (delta) of every neuron
                self.backward(expected[x])
                self.update_weights(values,learning_rate)
            if display == True:
                print('>epoch=%d, lrate=%.3f, error=%.3f' % (epoch, learning_rate, error))
    def feed_forward(self,inputs_values):
        ####MARK ALL NEURONS AS "NOT USED"####
        self.used_neuron_feedforward = {x: False for x in self.used_neuron_feedforward}
        #################################################
        #for each neuron of the INPUT_LAYER
        for x_input_neuron,input_neuron in enumerate(self.network[0]):
            #set the input values
            self.network[0][input_neuron]["y"] = inputs_values[x_input_neuron]
            self.used_neuron_feedforward[input_neuron] = True
        #while some neurons are still not computed
        while all(self.used_neuron_feedforward[x] == True for x in self.used_neuron_feedforward) == False:
            #for each neuron of the HIDDEN_LAYER
            for hidden_neuron in self.network[1]:
                #if all incoming neurons are computed
                if all(self.used_neuron_feedforward[in_neuron]==True for in_neuron in self.network[1][hidden_neuron]["input_bridge"]):
                    #pre-activation of the neuron
                    z = self.pre_activation(self.network[1][hidden_neuron])
                    #compute the activation
                    y = function(self.network[1][hidden_neuron]["activation"]["ftype"],z,alpha=self.network[1][hidden_neuron]["activation"]["alpha"])
                    self.network[1][hidden_neuron]["y"] = y
                    self.used_neuron_feedforward[hidden_neuron] = True
            #for each neuron of the OUTPUT_LAYER
            for output_neuron in self.network[2]:
                #if all incoming neurons are computed
                if all(self.used_neuron_feedforward[in_neuron]==True for in_neuron in self.network[2][output_neuron]["input_bridge"]):
                    #pre-activation of the neuron
                    z = self.pre_activation(self.network[2][output_neuron])
                    #compute the activation
                    y = function(self.network[2][output_neuron]["activation"]["ftype"],z,alpha=self.network[2][output_neuron]["activation"]["alpha"])
                    self.network[2][output_neuron]["y"] = y
                    self.used_neuron_feedforward[output_neuron] = True
        outputs = [self.network[2][x]["y"] for x in self.network[2]]
        return outputs
    def pre_activation(self,current_neuron):
        z = current_neuron["biais"]
        #for every incoming neuron
        for in_neuron in current_neuron["input_bridge"]:
            #compute value * weight
            #search layer by layer for the requested neuron
            for layer in self.network:
                #if the layer contains the neuron
                if in_neuron in layer.keys():
                    in_neuron_data = layer[in_neuron]
                    break
            z += in_neuron_data["y"]*current_neuron["input_bridge"][in_neuron]
        return z
    def backward(self,expected):
        ####MARK ALL NEURONS AS "NOT USED"####
        self.used_neuron_backward = {x: False for x in self.used_neuron_backward}
        #mark every neuron of the INPUT_LAYER as done
        for input_neuron in self.network[0]:
            self.used_neuron_backward[input_neuron] = True
        #################################################
        #Compute the error of the OUTPUT_LAYER
        #for each neuron of the OUTPUT_LAYER
        for x_output_neuron,output_neuron in enumerate(self.network[2]):
            #difference between the expected output and the one obtained
            error = expected[x_output_neuron] - self.network[2][output_neuron]["y"]
            #compute the error rate (delta) of this neuron
            self.network[2][output_neuron]['delta'] = error* function(self.network[2][output_neuron]["activation"]["ftype"],self.network[2][output_neuron]["y"],prime=True,alpha=self.network[2][output_neuron]["activation"]["alpha"])
            self.used_neuron_backward[output_neuron] = True
        #while some hidden neurons are still not computed
        while all(self.used_neuron_backward[x] == True for x in self.used_neuron_backward) == False:
            #for each neuron of the HIDDEN_LAYER
            for hidden_neuron in self.network[1]:
                #if all outgoing neurons are computed
                if all(self.used_neuron_backward[out_neuron]==True for out_neuron in self.network[1][hidden_neuron]["output_bridge"]) and not all(self.used_neuron_backward[n] == True for n in self.used_neuron_backward):
                    #reset the error to 0
                    error = 0.0
                    #for every outgoing neuron
                    for out_neuron in self.network[1][hidden_neuron]["output_bridge"]:
                        #if the neuron is in the HIDDEN_LAYER
                        if out_neuron in self.network[1].keys():
                            #multiply the bridge weight by the error rate of the outgoing neuron
                            error += (self.network[1][hidden_neuron]['output_bridge'][out_neuron] * self.network[1][out_neuron]['delta'])
                        #if the neuron is in the OUTPUT_LAYER
                        if out_neuron in self.network[2].keys():
                            #multiply the bridge weight by the error rate of the outgoing neuron
                            error += (self.network[1][hidden_neuron]['output_bridge'][out_neuron] * self.network[2][out_neuron]['delta'])
                    #store the error rate (delta) of this neuron
                    self.network[1][hidden_neuron]["delta"] = error * function(self.network[1][hidden_neuron]["activation"]["ftype"],self.network[1][hidden_neuron]["y"],prime=True,alpha=self.network[1][hidden_neuron]["activation"]["alpha"])
                    self.used_neuron_backward[hidden_neuron] = True
    def update_weights(self,inputs,learning_rate):
        ####MARK ALL NEURONS AS "NOT USED"####
        self.used_neuron_feedforward = {x: False for x in self.used_neuron_feedforward}
        #################################################
        #for each neuron of the INPUT_LAYER
        for x_input_neuron,input_neuron in enumerate(self.network[0]):
            #set the input values
            self.network[0][input_neuron]["y"] = inputs[x_input_neuron]
            self.used_neuron_feedforward[input_neuron] = True
        #while some neurons are still not computed
        while all(self.used_neuron_feedforward[x] == True for x in self.used_neuron_feedforward) == False:
            #for each neuron of the HIDDEN_LAYER
            for hidden_neuron in self.network[1]:
                #if all incoming neurons are computed
                if all(self.used_neuron_feedforward[in_neuron]==True for in_neuron in self.network[1][hidden_neuron]["input_bridge"]):
                    #for every incoming bridge
                    for in_neuron in self.network[1][hidden_neuron]["input_bridge"]:
                        #if the input is in the INPUT_LAYER
                        if in_neuron in self.network[0].keys():
                            #update the weight of the input
                            #on the current neuron
                            self.network[1][hidden_neuron]["input_bridge"][in_neuron] += learning_rate * self.network[1][hidden_neuron]["delta"] * self.network[0][in_neuron]["y"]
                            #on the input neuron
                            self.network[0][in_neuron]["output_bridge"][hidden_neuron] = self.network[1][hidden_neuron]["input_bridge"][in_neuron]
                        #if the input is in the HIDDEN_LAYER
                        if in_neuron in self.network[1].keys():
                            #update the weight of the input
                            #on the current neuron
                            self.network[1][hidden_neuron]["input_bridge"][in_neuron] += learning_rate * self.network[1][hidden_neuron]["delta"] * self.network[1][in_neuron]["y"]
                            #on the previous neuron
                            self.network[1][in_neuron]["output_bridge"][hidden_neuron] = self.network[1][hidden_neuron]["input_bridge"][in_neuron]
                    #update the bias
                    self.network[1][hidden_neuron]["biais"] += learning_rate * self.network[1][hidden_neuron]["delta"]
                    self.used_neuron_feedforward[hidden_neuron] = True
            #for each neuron of the OUTPUT_LAYER
            for output_neuron in self.network[2]:
                #if all incoming neurons are computed
                if all(self.used_neuron_feedforward[in_neuron]==True for in_neuron in self.network[2][output_neuron]["input_bridge"]):
                    #for every incoming bridge
                    for in_neuron in self.network[2][output_neuron]['input_bridge']:
                        #if the input is in the INPUT_LAYER
                        if in_neuron in self.network[0].keys():
                            #update the weight of the input
                            #on the current neuron
                            self.network[2][output_neuron]["input_bridge"][in_neuron] += learning_rate * self.network[2][output_neuron]["delta"] * self.network[0][in_neuron]["y"]
                            #on the input neuron
                            self.network[0][in_neuron]["output_bridge"][output_neuron] = self.network[2][output_neuron]["input_bridge"][in_neuron]
                        #if the input is in the HIDDEN_LAYER
                        if in_neuron in self.network[1].keys():
                            #update the weight of the input
                            #on the current neuron
                            self.network[2][output_neuron]["input_bridge"][in_neuron] += learning_rate * self.network[2][output_neuron]["delta"] * self.network[1][in_neuron]["y"]
                            #on the previous neuron
                            self.network[1][in_neuron]["output_bridge"][output_neuron] = self.network[2][output_neuron]["input_bridge"][in_neuron]
                    #update the bias
                    self.network[2][output_neuron]["biais"] += learning_rate * self.network[2][output_neuron]["delta"]
                    self.used_neuron_feedforward[output_neuron] = True
def predict(self,inputs):
outputs = self.feed_forward(inputs)
return outputs | Advanced-Neural-Network | /Advanced_Neural_Network-1.0.2-py3-none-any.whl/ANN.py | ANN.py |
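if __name__ == "__main__":
    # Minimal usage sketch: learn the logical AND function.
    # The neuron and bridge names below are illustrative; results vary
    # from run to run because the bridge weights start out random.
    net = neural_network()
    net.Add_Input_Neuron("x1", "Input Cell")
    net.Add_Input_Neuron("x2", "Input Cell")
    net.Add_Hidden_Neuron("h1", "Hidden Cell", "Sigmoid")
    net.Add_Output_Neuron("o1", "Output Cell", "Sigmoid")
    net.Add_Bridge([["x1", "h1"], ["x2", "h1"], ["h1", "o1"]])

    inputs = [[0, 0], [0, 1], [1, 0], [1, 1]]
    expected = [[0], [0], [0], [1]]
    net.train(inputs, expected, learning_rate=0.5, nb_epoch=1000)
    print(net.predict([1, 1]))  # prediction for input [1, 1]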
# =============================================================================
# Libraries
# =============================================================================
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
from sklearn.metrics import roc_auc_score as auc
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score
from sklearn import tree
from sklearn.linear_model import LogisticRegression
from sklearn.linear_model import LogisticRegressionCV
from regressors import stats
# =============================================================================
# Constants Errors
# =============================================================================
ERR_ABT = 'Your ABT is not a Pandas DataFrame.\n \
(REMEMBER about na_values in Your read method)'
ERR_TARGET_NAME = "There is not a target name in DataFrame"
ERR_TARGET_INT = "Your target column is not int or you have NaN values. \n \
Please use: df = df.dropna(subset=[target_name]) \n \
df[target_name] = df[target_name].astype(np.int64)"
ERR_TARGET_NUMBER = "You should have just two elements (0 and 1) in target column"
RM_ONE_VALUE = 'one value in column'
ERR_SCORECARD = "generate scorecard by fit() method"
BINS_ERR = 'Number of bins must be greater than 1'
PRESELECTION_ERR = 'NO FEATURES AFTER PRESELECTION'
SELECTION_ERR = 'NO FEATURES AFTER SELECTION'
ERR_GINI_MODEL = "You don't have model yet"
ERR_TEST_SIZE = "Test size should be between 0-1"
# =============================================================================
# selection parameters
# =============================================================================
NUM_OF_FEATURES = 8 # how many features you get from selection method
SELECT_METHOD = LogisticRegression() # model for features selection
TEST_SIZE = 0.3 # size of test sample, train = 1-TEST_SIZE
RANDOM_STATE = 1234 # Random seed
N_BINS = 4 # max number of categories
MIN_WEIGHT_FRACTION_LEAF = 0.05 # min percent freq in each category
MIN_GINI = 0.05 # min gini value for preselection
DELTA_GINI = 0.2 # max AR Diff value
# =============================================================================
# ASB free ver 1.0.1
# =============================================================================
class AdvancedScorecardBuilder(object):
""" Advanced ScoreCard Builder Free Version
Parameters
----------
df: DataFrame, abt dataset. With one target column
target_name: string, target column name
dates: string or list, optional (default=None)
A date type columns list
Return
------
data: Pandas DataFrame, shape=[n_sample,n_features]
data without target
target_name: string, target column name
target: 1d NumPy array
_removed_features: dict with removed features from data - one value
if dates is not None
_date_names: str, list of date column names
_date_columns: Numpy array with date type column
Examples
--------
>>> import pandas as pd
>>> from AmaFree import AdvancedScorecardBuilder as asb
>>> from sklearn import datasets
>>> X,y = datasets.make_classification(n_samples=10**4, n_features=15, random_state=123)
>>> names = []
>>> for el in range(15):
>>> names.append("zm"+str(el+1))
>>> df = pd.DataFrame(X, columns=names)
>>> df['target'] = y
>>> foo = asb(df,'target')
>>> foo.fit()
>>> foo.get_scorecard()
Default parameters for fit() method
-----------------------------------
test size = 0.3 , train size = 0.7
Maximal number of bins = 4
Weight Fraction of category = 0.05 (5%) of all data
Number of features for model = 8
minimal accepted gini = 0.05 (5%) for one feature
maximal accepted delta gini = 0.2
After fit() method You have
----------------------------
train_ - training set with target
test_ - test set with target
labels_ - dict with all bin labels
stats_ - statistics for all features (train)
stats_test_ -
preselected_features
_rejected_low_gini
_rejected_AR_gini
selected_features
model_info_
self._scorecard_
"""
def __init__(self, df, target_name, dates=None):
"""init method
load abt data as DataFrame and choose column name for target.
All columns with date put as list in dates parameter.
"""
# check is df a dataFrame
if not isinstance(df, (pd.DataFrame)):
raise Exception(ERR_ABT)
self.DESCR = self.__doc__
# check if target is ok
if self.__check_target(df, target_name):
self.target_name = target_name # string
self.target = df[target_name].values # Numpy array
# remove one unique value columns
self._removed_features = {}
self._log = ''
df = df.drop(self.__get_one_value_features(df), axis = 1)
# date type data
if dates:
self._date_names = dates
self._date_columns = pd.to_datetime(df[dates], format = '%Y%m')
df = df.drop(dates, axis=1)
# categorical features for analysis
obj_df = df.select_dtypes(include = ['object']).copy()
c_list = list(obj_df)
if len(c_list) > 0:
self._category_df = {}
for feature in c_list:
le = LabelEncoder()
try:
le.fit(df[feature])
self._category_df[feature] = le
# code for future categorical data analysis
except:
raise Exception("Your categorical data have NaN")
# remove target from data
self.__data_ = df # data with target column
self.data = df.drop(self.target_name, axis = 1)
# features without target, dates and one value columns
self.feature_names = list(self.data)
def __check_target(self, df, target):
''' init HELPER method
target checker: verify if target name is in df,
verify if target column is int,
verify if there are two unique values in column,
and if this two values are 1 and 0.
'''
if target not in list(df): # verify target name in df
raise Exception(ERR_TARGET_NAME)
if not df[target].dtype in ['int64', 'int32']: # verify if target column is int
raise Exception(ERR_TARGET_INT)
if not len(df[target].unique()) == 2: # and is there 2 unique values in column
raise Exception(ERR_TARGET_NUMBER)
# if this two values are 1 and 0
if not any(df[target] == 0) & any(df[target] == 1):
raise Exception(ERR_TARGET_NUMBER)
return True
def __get_one_value_features(self, df):
''' init HELPER method
Take list with bad (one value) features
'''
rm_list = []
for feature in list(df):
if not len(df[feature].unique()) > 1:
print('feature {} removed - one value in column'.format(feature))
self._log += 'feature '+str(feature)+ ' removed - one value in column \n'
rm_list.append(feature)
self._removed_features[feature] = RM_ONE_VALUE
return rm_list
def __str__(self):
''' Just for FUN '''
return '<AMA Institute | Free ASB>'
def fit(self,
test_size=TEST_SIZE,
min_freq_of_category=MIN_WEIGHT_FRACTION_LEAF,
n_category=N_BINS,
min_gini= MIN_GINI,
delta_gini=DELTA_GINI,
n_features=NUM_OF_FEATURES):
""" Made score card and model for data
Parameters
----------
test_size: float, optional (default=0.3)
Should be between 0.0 and 1.0 represent the proportion
of the dataset to include in the test split.
min_freq_of_category: float, optional (default = 0.05)
percent of data in each category.
n_category: int, maximum of category number in binning.
min_gini: float, (default = 0.05) gini minimum value for preselection.
        delta_gini: float, (default = 0.2) AR value for comparison of train and test statistics.
n_features: int, maximum number of selected features
Return
------
self.train_ - training set with target
self.test_ - test set with target
self.labels_ - dict with all bin labels
self.stats_ -
self.stats_test_
self.preselected_features
self._rejected_low_gini
self._rejected_AR_gini
self.selected_features
self.model_info_
self._scorecard_
"""
# 1. NaN < 0.05 remove
self._log += 'NaN analysis \n'
if self.__data_.isnull().any().any():
self.__data_ = self.__remove_empty_nan(self.__data_, min_freq_nan=min_freq_of_category)
# 2. split data
self._log += 'Spliting data \n'
self.train_, self.test_ = self.__split_frame(test_size=test_size)
# 3. binning features
self._log += 'binning features\n'
trainBin_, testBin_, self.labels_ = self.__sup_bin_features(self.train_, self.test_, self.feature_names, n_category)
self._log += 'NaN analysis'
self.trainBin_, self.testBin_ = self.__fillnan(trainBin_,testBin_,min_freq_nan=min_freq_of_category)
# stats
self._log += 'Get features stats for training set\n'
self.stats_ = self.__stats(self.trainBin_,list(self.trainBin_))
self._log += 'Get features stats for test set\n'
self.stats_test_ = self.__stats(self.testBin_,list(self.testBin_))
        # 4. preselection
self._log += 'Preselection\n'
self.preselected_features, self._rejected_low_gini, self._rejected_AR_gini= self.__preselection(list(self.trainBin_),min_gini,delta_gini)
self._log += 'low gini: '+" ".join(str(x) for x in self._rejected_low_gini)+'\n'
self._log += 'Delta AR gini: '+" ".join(str(x) for x in self._rejected_AR_gini)+'\n'
if not len(self.preselected_features):
self._log += PRESELECTION_ERR
raise Exception(PRESELECTION_ERR)
trainLogit = self.__logit_value(self.trainBin_[self.preselected_features],self.stats_)
testLogit = self.__logit_value(self.testBin_[self.preselected_features],self.stats_test_)
# 5 selected features from logit train data
self.selected_features = self.__selection(trainLogit, n=n_features) # list
if not len(self.selected_features):
self._log += SELECTION_ERR
raise Exception(SELECTION_ERR)
self.leader, self.model_info_ = self.__get_model_info(
trainLogit[self.selected_features], self.train_[self.target_name],testLogit[self.selected_features],self.test_[self.target_name])
self._scorecard_ = pd.DataFrame.from_dict(self.__scorecard_dict(
self.selected_features, self.stats_, self.model_info_['coef'], self.model_info_['intercept']))
def __split_frame(self, test_size):
"""Sampling data by random method
We use sklearn train_test_split method
Parameters
----------
test_size: float, optional (default=0.3)
Should be between 0.0 and 1.0 represent the proportion
of the dataset to include in the test split.
Return
----------
train_: DataFrame, shape = [n_samples, (1-test_size)*n_features with target]
test_: DataFrame, shape = [n_samples, test_size*n_features with target]
"""
if test_size <=0 and test_size >=1:
raise Exception(ERR_TEST_SIZE)
tr, te = train_test_split(self.__data_, random_state = RANDOM_STATE, test_size=test_size)
return tr.reset_index(drop=True), te.reset_index(drop=True)
def __remove_empty_nan(self,df,min_freq_nan):
df_new = df.copy()
nAll = df_new.shape[0]
for feature in list(df_new):
if df_new[feature].isnull().any():
nNan = df_new[feature].isnull().sum()
if nNan/nAll < min_freq_nan:
print("You have less then {} empty values in {}. I change them by mean value".format(min_freq_nan,feature))
self._log += 'You have less then '+ str(min_freq_nan)+ ' empty values in '+feature+'. I change them by mean value\n'
df_new[feature] = df_new[feature].fillna(df_new[feature].mean())
else:
self._log += 'in {} you have more then {} NaNs \n'.format(feature,min_freq_nan)
return df_new
def __is_numeric(self, df, name):
"""helper method
verify is type of column is int or float
Parameters
----------
df: DataFrame
name: string, column name to check
Return
------
True if column is float or int
or False if not
"""
if df[name].dtype in ['int64', 'int32']:
return True
if df[name].dtype in ['float64', 'float32']:
return True
return False
def __binn_continous_feature(self,df,feature,max_leaf_nodes,
min_weight_fraction_leaf=MIN_WEIGHT_FRACTION_LEAF,
random_state=RANDOM_STATE):
"""supervised binning of continue feature by tree
Parameters
----------
df: DataFrame,
feature: string, analysing feature name
max_leaf_nodes: parameter of tree
min_weight_fraction_leaf: parameter of tree
random_state: parameter of tree
Return
------
labs: dict, description
"""
# new DataFrame with result
df_cat = pd.DataFrame()
# cut all data to two col DataFrame
df_two_col = df[[self.target_name, feature]].copy()
# drop nan values (because tree)
df_two_col = df_two_col.dropna(axis=0).reset_index(drop=True)
# binns list with [min,max]
bins = [-np.inf, np.inf]
# get Tree classifier - check if we need another parameters !!
clf = tree.DecisionTreeClassifier(
max_leaf_nodes=max_leaf_nodes,
min_weight_fraction_leaf=min_weight_fraction_leaf,
random_state=random_state)
# fit tree
clf.fit(df_two_col[feature].values.reshape(-1, 1), df_two_col[self.target_name])
# get tresholds and remove empty
thresh = [round(s, 3) for s in clf.tree_.threshold if s != -2]
# add tresholds to binns
bins = bins + thresh
return sorted(bins)
@staticmethod
def __cut_bin(data,bins):
"""helper method """
return pd.cut(data,bins=bins, labels=False, retbins=True, include_lowest=True)
def __sup_bin_features(self, df,testdf,features,n_bins):
"""binn method
binning of numerical variables by tree algorithm
Parameters
----------
df: train DataFrame with data for binning
testdf: test DataFrame with data for binning
features: features list
n_bins: binns number
Return
------
Binned train set, test set and labels of bins
"""
df_c = pd.DataFrame() # category frame with labels int
df_test = pd.DataFrame() # df after bin before checking
# remove_list = []
labs = {} # binns lists
df_copy = df.copy() # copy of data
df_test_copy = testdf.copy() # copy test data
# run loop for every feature in data - all should be numeric
for feature in features:
# check is type is numeric
if self.__is_numeric(df_copy, feature):
labs[feature] = self.__binn_continous_feature(df_copy, feature,n_bins)
if len(labs[feature])>2:
# cuts with int labels
df_c[feature], _ = self.__cut_bin(df_copy[feature],labs[feature])
df_test[feature], _ = self.__cut_bin(df_test_copy[feature],labs[feature])
elif len(df_copy[feature].unique())==2:
df_c[feature] = df_copy[feature]
df_test[feature] = df_test_copy[feature]
else:
                    print('I removed {} - no bins'.format(feature))
                    self._log += 'I removed '+ str(feature)+' - no bins \n'
self._removed_features[feature] = 'no binns'
else: # if is category type data or something else
df_c[feature] = df_copy[feature]
df_test[feature] = df_test_copy[feature]
# raise Exception('You still have non numerical data')
# remember add target to the train and test data
df_c[self.target_name] = df[self.target_name]
df_test[self.target_name] = testdf[self.target_name]
return df_c, df_test, labs
def __fillnan(self,df, df_2,min_freq_nan):
''' change all nan as last numerical category
'''
if not (df.isnull().any().any() and df_2.isnull().any().any()):
return df, df_2
for feature in df.columns[df.isnull().any()].tolist():
self._log += 'change NaN values for binned feature '+str(feature) +'\n'
if df[feature].isnull().sum()/df[feature].shape[0] > min_freq_nan:
                self._log += 'more than '+str(min_freq_nan)+' NaN goes to the NEW category \n'
df[feature]=df[feature].fillna(df[feature].max()+1)
df_2[feature]=df_2[feature].fillna(df_2[feature].max()+1)
else:
                self._log += 'less than '+str(min_freq_nan)+' NaN goes to the FIRST category \n'
df[feature]=df[feature].fillna(0)
df_2[feature]=df_2[feature].fillna(0)
for feature in df_2.columns[df_2.isnull().any()].tolist():
self._log += 'change NaN values for binned feature '+str(feature) +'\n'
if df_2[feature].isnull().sum()/df_2[feature].shape[0] > min_freq_nan:
                self._log += 'more than '+str(min_freq_nan)+' NaN goes to the NEW category \n'
df[feature]=df[feature].fillna(df[feature].max()+1)
df_2[feature]=df_2[feature].fillna(df_2[feature].max()+1)
else:
                self._log += 'less than '+str(min_freq_nan)+' NaN goes to the FIRST category \n'
df[feature]=df[feature].fillna(0)
df_2[feature]=df_2[feature].fillna(0)
return df, df_2
def __dictStats(self,df,feature):
''' Generate dict with target values for feature
'''
slownik = {}
elements = list(df[feature].unique())
for el in elements:
slownik[el] = dict(df[df[feature] == el][self.target_name].value_counts())
if 0 not in slownik[el]:
slownik[el][0] = 0.00000000000000000001
if 1 not in slownik[el]:
slownik[el][1] = 0.00000000000000000001
return slownik
def __df_stat(self,fd,td,feature):
''' Generate DataFrame for feature stats
'''
result = pd.DataFrame.from_dict(fd, orient='index')
#population
pop_all = result[0]+result[1]
result['Population'] = pop_all
result['Percent of population [%]'] = round((pop_all)/ td['length'] , 3)*100
result['Good rate [%]'] = round(result[0] / pop_all,3)*100
result['Bad rate [%]'] = round(result[1] / pop_all,3)*100
p_goods_to_all_goods = result[0] / td[0]
p_bads_to_all_bads = result[1]/td[1]
result['Percent of goods [%]'] = round(p_goods_to_all_goods,3)*100
result['Percent of bad [%]'] = round(p_bads_to_all_bads,3)*100
result['var'] = feature
result['logit'] = np.log(result[1] / result[0])
result['WoE'] = np.log((p_goods_to_all_goods)/(p_bads_to_all_bads))
result['IV'] =((p_goods_to_all_goods)-(p_bads_to_all_bads))*result['WoE']
if hasattr(self, '_category_df'):
if feature in self._category_df:
result['label'] = str(feature)+' = '+result.index
else:
result['label'] = self.__category_names(result,self.labels_[feature],feature)
else:
result['label'] = self.__category_names(result,self.labels_[feature],feature)
result = result.rename(columns = { 1:'n_bad', 0:'n_good'})
result['bin_label'] = result.index
return result
def __category_names(self, df, bins, name):
'''helper method for getting bins labels as a string'''
result = []
string = ""
if len(df)==2 and len(bins)==2:
return [str(name)+"=0",str(name)+"=1"]
if len(df)==2 and len(bins)==3 and bins[1]==0.5:
return [str(name)+"=0",str(name)+"=1"]
if not len(df) == len(bins):
string = "(not missing) and "
for ix, el in enumerate(bins):
if ix == 0:
continue
if ix == 1:
string += str(name) + ' <= ' + str(el)
result.append(string)
string = str(el) + ' < ' + str(name)
if ix < len(bins)-1 and ix > 1:
string += ' <= ' + str(el)
result.append(string)
string = str(el) + ' < ' + str(name)
if ix == len(bins)-1:
result.append(str(bins[ix-1]) + ' < ' + str(name))
if len(df) == len(bins):
result.append('missing')
return result
def __stats(self,df,features):
'''Generate stats for all features
'''
statsDict = {}
logit_dict = {}
if self.target_name in features:
features.remove(self.target_name)
target_dict = df[self.target_name].value_counts()
if 1 not in target_dict.keys():
target_dict[1] = 0.00000000000000000001
if 0 not in target_dict.keys():
target_dict[0] = 0.00000000000000000001
target_dict['length'] = df.shape[0]
for feature in features:
# take 0 and 1 for each category in feature and then compute more info
statsDict[feature] = self.__df_stat(self.__dictStats(df, feature), target_dict,feature)
logit_dict = statsDict[feature]['logit'].to_dict()
statsDict[feature]['Gini'] = np.absolute(2 * auc(df[self.target_name],self.__change_dict(df,feature,logit_dict)) - 1)
return statsDict
def __compute_gini(self, logit_c,df,feature):
""" method for computing gini index
"""
ld = logit_c.to_dict()
return np.absolute(2 * auc(df[self.target_name],self.__change_dict(df,feature,ld)) - 1)
def __preselection(self, features, min_gini=MIN_GINI, delta_gini=DELTA_GINI):
"""Preselection of features by gini and AR_DIFF value """
        # 1. gini per column - drop features with gini < min_gini
results = []
rejected_low_gini = []
rejected_delta_gini = []
if self.target_name in features:
features.remove(self.target_name)
for feature in features:
print(feature)
gini = self.__compute_gini(self.stats_[feature]['logit'],self.trainBin_, feature)
if gini > min_gini:
# 2. procentowa miedzy testem a treningiem |g_train - g_test|/g_train > delta gini
gini_test = self.__compute_gini(self.stats_[feature]['logit'],self.testBin_,feature)
AR_Diff = self.__AR_value(gini,gini_test)
if AR_Diff < delta_gini:
results.append(feature)
else:
rejected_delta_gini.append([feature, AR_Diff])
else:
rejected_low_gini.append(
[feature, gini])
return results, rejected_low_gini, rejected_delta_gini
def __AR_value(self,train,test):
'''compute AR diff value '''
return np.absolute((train-test))/train
def __logit_value(self,df, stats):
'''helper method
change all values in columns with corresponding logit value
'''
logit = df.copy()
if self.target_name in list(logit):
logit = logit.drop(self.target_name, axis=1)
for el in logit:
logit[el] = self.__change_dict(logit,el,stats[el]['logit'].to_dict())
return logit
def __change_dict(self,df,feature,dict_):
'''helper method
map all elements for feature column'''
return df[feature].map(dict_)
def __selection(self, df, selector=SELECT_METHOD, n=NUM_OF_FEATURES):
"""Method for selecting best n features with chosen estimator (logistic regression as default)"""
return self.__choose_n_best(list(df), self.__ranking_features(df, selector), n)
def __ranking_features(self, df, selector):
'''RFE feautre ranking from RFE selection '''
from sklearn.feature_selection import RFE
rfe = RFE(estimator=selector, n_features_to_select=1, step=1)
rfe.fit(df, self.train_[self.target_name])
return rfe.ranking_
def __choose_n_best(self, features, ranking, n):
'''choose n best features from ranking list'''
result = list(ranking <= n)
selected_features = [features[i]
for i, val in enumerate(result) if val == 1]
return selected_features
def __get_model_info(self, X, y, X_test,y_test):
lre = LogisticRegressionCV()
features = list(X)
result = {"coef": {},
"p_value":{},
"features": features,
'model': str(lre).split("(")[0],
'gini': 0,
'acc': 0,
'Precision':0,
'Recall':0,
'F1':0}
lre.fit(X, y)
pred = lre.predict(X_test)
p_va = stats.coef_pval(lre,X,y)
result['acc'] = accuracy_score(y_test,pred) # accuracy classification score
result['Precision']=precision_score(y_test,pred) # precision tp/(tp+fp) PPV
result['Recall'] = recall_score(y_test,pred) # Recall tp/(tp+fn) NPV
result['F1'] = f1_score(y_test,pred) # balanced F-score weighted harmonic mean of the precision and recall
for ix, el in enumerate(lre.coef_[0]):
result['coef'][features[ix]] = el
result['p_value'][features[ix]] = p_va[ix]
result['intercept'] = lre.intercept_
partial_score = np.asarray(X) * lre.coef_
for_gini_score = [sum(i) for i in partial_score] + lre.intercept_
result['gini'] = np.absolute(2 * auc(y, for_gini_score) - 1)
return lre, result
def __scorecard_dict(self,features, stats, coef, inter):
'''scorecard dictionary with score points for all categories
with beta coefficients > 0.
Parameters
----------
features: list, list of all modeled features
stats: dict, dictionary with statistics
coef: dict, dictionary with all models coefficient
inter: list, list with model intercept value.
Return
------
scorecarf: dict
'''
alpha = inter[0]
factor = 20/np.log(2)
score_dict = {"variable": [], "label": [],
"logit": [], 'score': []}
stats_copy = {}
v = len(features)
alp = 0
        # drop all features with a negative beta coefficient
        # (build a new list: removing from a list while iterating over it skips elements)
        features = [el for el in features if coef[el] >= 0]
self._scored_features_ = features
for el in features:
stats_copy[el] = stats[el].sort_values(
by='logit', ascending=False).reset_index(drop=True)
alp += coef[el]*stats_copy[el]["logit"][0]*factor
alp = -alp+300
for el in features:
f_beta = coef[el]*stats_copy[el]["logit"][0]
for ix, ele in enumerate(stats_copy[el]['var']):
score_dict["variable"].append(ele)
score_dict["label"].append(stats_copy[el]['label'][ix])
score_dict["logit"].append(stats_copy[el]["logit"][ix])
a1 = -(coef[el]*stats_copy[el]["logit"][ix]-f_beta + alpha/v)*factor
a2 = alp/v
score_dict["score"].append(int(round(a1+a2)))
if score_dict["score"][0]<0:
score_dict['score']+=np.absolute(score_dict['score'][0])+1
return score_dict
def test_gini(self):
"""Compute gini for test dataset"""
df_bin_test = self.testBin_[self._scored_features_]
df = pd.DataFrame()
# change bin value to score value
for feature in list(df_bin_test):
# get score values for beans
df_a = self.show_stats(feature,bin_lab=True).sort_values(by="bin_label").reset_index(drop=True)
score_dict = df_a.to_dict()['score']
df[feature] = self.__change_dict(df_bin_test,feature,score_dict)
df['total_score'] = (df.sum(axis=1)).astype('int')
X = df['total_score'].values.reshape(-1, 1)
y = self.testBin_[self.target_name]
gini = np.absolute(2*auc(y,X)-1)
return gini
def __summary_features(self, features):
summary= ""
for feature in features:
table = self.show_stats(feature).to_html()\
.replace('<table border="1" class="dataframe">','<table class="table table-striped">')
element = '<h3>'+str(feature)+'</h3>'+table
summary += element
return summary
def __summary_report(self):
all_train = self.train_.shape[0]
all_test = self.test_.shape[0]
train_good =self.train_[self.target_name].value_counts()[0]
train_bad =self.train_[self.target_name].value_counts()[1]
test_good =self.test_[self.target_name].value_counts()[0]
test_bad =self.test_[self.target_name].value_counts()[1]
info = {'n observed':[all_train,all_test],
'n good':[train_good,test_good],
'n bad':[train_bad, test_bad],
'Percent of good [%]':[round(train_good/all_train,3)*100, round(test_good/all_test,3)*100],
'Percent of bad [%]':[round(train_bad/all_train,3)*100, round(test_bad/all_test,3)*100]
}
df = pd.DataFrame(info, index=['Training','Test'])
summary = df[['n observed','n good','n bad', 'Percent of good [%]', 'Percent of bad [%]']].to_html()\
.replace('<table border="1" class="dataframe">','<table class="table table-striped">')
return summary
def __model_report(self):
summary= self.get_scorecard().to_html()\
.replace('<table border="1" class="dataframe">','<table class="table table-striped">')
return summary
def __gini_report(self):
info = {'Gini train':self.gini_model(),'Gini test':self.test_gini()}
df = pd.DataFrame(info, index = [0])
summary = df.to_html()\
.replace('<table border="1" class="dataframe">','<table class="table table-striped">')
return summary
def html_report(self, name='report.html', features=None):
if not features:
features = self._scored_features_
html_page = '''<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1">
<meta name="description" content="Advanced Scorecard Builder Report">
<meta name="author" content="Sebastian Zajฤ
c">
<title>Report</title>
<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css">
<style>
h2{text-align:center}</style>
<!-- HTML5 shim and Respond.js for IE8 support of HTML5 elements and media queries -->
<!--[if lt IE 9]>
<script src="https://oss.maxcdn.com/html5shiv/3.7.3/html5shiv.min.js"></script>
<script src="https://oss.maxcdn.com/respond/1.4.2/respond.min.js"></script>
<![endif]-->
</head>
<body>
<div class="jumbotron"><div class="container">
<div class="text-center">
<img src="https://amainstitute.pl/wp-content/uploads/2018/06/ama_logo.png" alt='AMA Instistute Logo'>
</div>
<h2>Advanced Scorecard Builder</h2>
<h2>Report</h2>
</div></div>
<div class="container">
<h2>Data Summary</h2><div class="summary table-responsive">'''+self.__summary_report() +'''</div>
</div>
<div class="container">
<h2>Features report</h2><div class="features table-responsive">''' + self.__summary_features(features) + '''</div>
</div><div class="container">
<h2>Scorecard</h2><div class="score table-responsive">'''+self.__model_report()+'''</div>
<h2>Gini</h2><div class="gini table-responsive">'''+self.__gini_report() + '''</div>
<footer>
<p>© 2018 AMA Institute. Advanced Scorecard Builder Free Version</p>
</footer>
</div>
<!-- Bootstrap core JavaScript
================================================== -->
<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.12.4/jquery.min.js"></script>
<script>$(".summary thead th:first-child").text("Set"); $(".features th:first-child, .score th:first-child, .gini th:first-child").remove()</script>
</body>
</html>'''
f = open(name,'w')
f.write(html_page)
f.close()
def get_scorecard(self):
if hasattr(self, '_scorecard_'):
lista = ['label', 'variable', 'score']
return self._scorecard_[lista]
raise Exception(ERR_SCORECARD)
def gini_model(self):
if hasattr(self, 'model_info_'):
return self.model_info_['gini']
raise Exception(ERR_GINI_MODEL)
def show_stats(self,name,set_name='train', bin_lab=None):
""" Interface method for feature statistics view as DataFrame
Parameters
----------
name: string, feature names
set_name: string, (default=train) choose train/test
Return
----------
result: DataFrame
"""
lista = ['label','Bad rate [%]','Percent of population [%]','n_good','n_bad',"IV"]
if bin_lab:
lista.append('bin_label')
if set_name == 'train':
r_part = self.stats_[name].sort_values(
by='logit', ascending=False).reset_index(drop=True)
elif set_name == 'test':
r_part = self.stats_test_[name].sort_values(
by='logit', ascending=False).reset_index(drop=True)
else:
raise Exception('Choose train or test')
if name in self.model_info_['features'] and self.model_info_['coef'][name]>0:
lista = ['score','label','Bad rate [%]','Percent of population [%]','n_good','n_bad',"IV"]
if bin_lab:
lista.append('bin_label')
s_part = self._scorecard_[self._scorecard_['variable']==name][['score']].reset_index(drop=True)
result = pd.concat([r_part,s_part],axis=1,join='inner')
else:
result = r_part
return result[lista] | Advanced-scorecard-builder | /advanced_scorecard_builder-1.0.2-py3-none-any.whl/AmaFree/asb.py | asb.py |
AdvancedAnalytics
===================
A collection of Python modules, classes, and methods for simplifying the development of machine learning solutions. **AdvancedAnalytics** provides easy access to advanced tools in **Sci-Learn**, **NLTK**, and other machine learning packages. **AdvancedAnalytics** was developed to simplify learning Python from the book *The Art and Science of Data Analytics*.
Description
===========
From a high-level view, building machine learning applications typically proceeds through three stages:
1. Data Preprocessing
2. Modeling or Analytics
3. Postprocessing
The classes and methods in **AdvancedAnalytics** primarily support the first and last stages of machine learning applications.
Data scientists report they spend 80% of their total effort in the first and last stages. The first stage, *data preprocessing*, is concerned with preparing the data for analysis. This includes:
1. identifying and correcting outliers,
2. imputing missing values, and
3. encoding data.
The last stage, *solution postprocessing*, involves developing graphic summaries of the solution and metrics for evaluating its quality.
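Each of these preprocessing steps can be written by hand with **pandas**, but the code quickly becomes repetitive. The minimal sketch below (on made-up data) shows the three steps done manually; the *ReplaceImputeEncode* class shown in the Usage section below automates all three from a single data map.

.. code-block:: python

    import numpy as np
    import pandas as pd

    df = pd.DataFrame({"Years": [25, 30, -1, 44],
                       "Dept":  ["HR", None, "Sales", "HR"]})

    # 1. Replace out-of-range values (outliers) with missing values
    df.loc[~df["Years"].between(18, 60), "Years"] = np.nan

    # 2. Impute missing values with the mean or mode
    df["Years"] = df["Years"].fillna(df["Years"].mean())
    df["Dept"] = df["Dept"].fillna(df["Dept"].mode()[0])

    # 3. Encode the nominal column as indicator (dummy) variables
    df = pd.get_dummies(df, columns=["Dept"])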
Documentation and Examples
============================
The API and documentation for all classes and examples are available at https://github.com/tandonneur/AdvancedAnalytics/.
Usage
=====
Currently, the most popular use is to support solutions developed with these advanced machine learning packages:
* Sci-Learn
* StatsModels
* NLTK
The intention is to expand this list to other packages. The following is a simple regression example, built with a decision tree, that uses the data map structure to preprocess data:
.. code-block:: python
from AdvancedAnalytics.ReplaceImputeEncode import DT
from AdvancedAnalytics.ReplaceImputeEncode import ReplaceImputeEncode
from AdvancedAnalytics.Tree import tree_regressor
from sklearn.tree import DecisionTreeRegressor, export_graphviz
# Data Map Using DT, Data Types
data_map = {
"Salary": [DT.Interval, (20000.0, 2000000.0)],
"Department": [DT.Nominal, ("HR", "Sales", "Marketing")]
"Classification": [DT.Nominal, (1, 2, 3, 4, 5)]
"Years": [DT.Interval, (18, 60)] }
# Preprocess data from data frame df
rie = ReplaceImputeEncode(data_map=data_map, interval_scaling=None,
nominal_encoding= "SAS", drop=True)
encoded_df = rie.fit_transform(df)
y = encoded_df["Salary"]
X = encoded_df.drop("Salary", axis=1)
dt = DecisionTreeRegressor(criterion= "gini", max_depth=4,
min_samples_split=5, min_samples_leaf=5)
dt = dt.fit(X,y)
tree_regressor.display_importance(dt, encoded_df.columns)
tree_regressor.display_metrics(dt, X, y)
Current Modules and Classes
=============================
ReplaceImputeEncode
Classes for Data Preprocessing
* DT defines new data types used in the data dictionary
* ReplaceImputeEncode a class for data preprocessing
Regression
Classes for Linear and Logistic Regression
* linreg support for linear regression
* logreg support for logistic regression
* stepwise a variable selection class
Tree
Classes for Decision Tree Solutions
* tree_regressor support for regressor decision trees
* tree_classifier support for classification decision trees
Forest
Classes for Random Forests
* forest_regressor support for regressor random forests
* forest_classifier support for classification random forests
NeuralNetwork
Classes for Neural Networks
* nn_regressor support for regressor neural networks
* nn_classifier support for classification neural networks
Text
Classes for Text Analytics
* text_analysis support for topic analysis
* text_plot for word clouds
* sentiment_analysis support for sentiment analysis
Internet
Classes for Internet Applications
* scrape support for web scraping
* metrics a class for solution metrics
Installation and Dependencies
=============================
**AdvancedAnalytics** is designed to work on any operating system running python 3. It can be installed using **pip** or **conda**.
.. code-block:: python
pip install AdvancedAnalytics
# or
conda install -c dr.jones AdvancedAnalytics
General Dependencies
There are dependencies. Most classes import one or more modules from
**Sci-Learn**, referenced as *sklearn* in module imports, and
**StatsModels**. These are both installed with the current version
of **anaconda**.
Installed with AdvancedAnalytics
Most packages used by **AdvancedAnalytics** are automatically
installed with its installation. These consist of the following
packages.
* statsmodels
* scikit-learn
* scikit-image
* nltk
* pydotplus
Other Dependencies
The *Tree* and *Forest* modules plot decision trees and importance
metrics using **pydotplus** and the **graphviz** packages. These
should also be automatically installed with **AdvancedAnalytics**.
However, the **graphviz** install is sometimes not fully complete
with the conda install. It may require an additional pip install.
.. code-block:: python
pip install graphviz
Text Analytics Dependencies
The *TextAnalytics* module uses the **NLTK**, **Sci-Learn**, and
**wordcloud** packages. Usually these are also installed
automatically with **AdvancedAnalytics**. You can verify
they are installed using the following commands.
.. code-block:: python
conda list nltk
conda list scikit-learn
conda list wordcloud
However, when the **NLTK** package is installed, it does not
install the data used by the package. In order to load the
**NLTK** data run the following code once before using the
*TextAnalytics* module.
.. code-block:: python
#The following NLTK commands should be run once
import nltk
nltk.download("punkt")
nltk.download("averaged_perceptron_tagger")
nltk.download("stopwords")
nltk.download("wordnet")
The **wordcloud** package also uses a little-known package
**tinysegmenter** version 0.3. Run the following code to ensure
it is installed.
.. code-block:: python
conda install -c conda-forge tinysegmenter==0.3
# or
pip install tinysegmenter==0.3
Internet Dependencies
The *Internet* module contains a class *scrape* which has some
functions for scraping newsfeeds. Some of these use the
**newspaper3k** package. It should be automatically installed with
**AdvancedAnalytics**.
However, it also uses the package **newsapi-python**, which is not
automatically installed. If you intend to use this news
scraping tool, it is necessary to install the package using the
following code:
.. code-block:: python
conda install -c conda-forge newsapi-python
# or
pip install newsapi-python
In addition, the newsapi service is sponsored by a commercial company
www.newsapi.com. You will need to register with them to obtain an
*API* key required to access this service. This is free of charge
for developers, but there is a fee if *newsapi* is used to broadcast
news with an application or at a website.
Code of Conduct
---------------
Everyone interacting in the AdvancedAnalytics project's codebases, issue trackers, chat rooms, and mailing lists is expected to follow the PyPA Code of Conduct: https://www.pypa.io/en/latest/code-of-conduct/ .
| AdvancedAnalytics | /AdvancedAnalytics-1.39.tar.gz/AdvancedAnalytics-1.39/README.rst | README.rst |
from cryptography.fernet import Fernet
from datetime import datetime
import random
import string
def passwordToken(MinLength=100, MaxLength=120):
    #---- Generate a random token that will be stored and used to encrypt user data ----
    passwordToken = ''.join(random.choice(string.ascii_lowercase + string.digits + string.ascii_uppercase + string.punctuation) for _ in range(random.randint(MinLength, MaxLength)))
    RandomTextPoint = random.randrange(len(passwordToken))
    #---- Generate a second random encrypted token/key pair to splice into the password token ----
    RandomInputToken, RandomInputKey = encryption(''.join(random.choice(string.ascii_lowercase + string.digits + string.ascii_uppercase + string.punctuation) for _ in range(random.randint(100, 120))))
    randominputprivateKey = RandomInputKey.decode("UTF-8") + RandomInputToken.decode("UTF-8")
    text = passwordToken[:RandomTextPoint] + randominputprivateKey + passwordToken[RandomTextPoint:]
    privateToken, privateKey = encryption(text)
    return privateKey.decode("UTF-8") + ":" + privateToken.decode("UTF-8")
def generateSessionToken(username, MinLength=100, MaxLength=120):
    #---- Generate a random session token (MinLength to MaxLength characters) tied to the username ----
    sessionToken = ''.join(random.choice(string.ascii_lowercase + string.digits + string.ascii_uppercase) for _ in range(random.randint(MinLength, MaxLength)))
    return encryption(username + ":" + sessionToken)
def dataEncrpytion(text, MinLength=100, MaxLength=120):
    #---- Generate random filler text (seeded with the input characters) to pad the stored value ----
    RandomText = ''.join(random.choice(text + string.ascii_lowercase + string.digits + string.ascii_uppercase + string.punctuation) for _ in range(random.randint(MinLength, MaxLength)))
    #---- Pick random insertion points within the bounds of each string (shoves text into a random location) ----
    TextPoint = random.randrange(len(text))
    RandomTextPoint = random.randrange(len(RandomText))
    RandomToken = passwordToken(MinLength, MaxLength)
    #---- Combine the text, filler, and token into one string to store ----
    text = text[:TextPoint] + RandomText[:RandomTextPoint] + RandomToken + RandomText[RandomTextPoint:] + text[TextPoint:]
    TextToken, TextKey = encryption(text)
    #---- Derive a throwaway timestamp string from the token's Fernet timestamp; used to pad the output fields ----
    timestamp = datetime.utcfromtimestamp(Fernet(str.encode(RandomToken.split(":")[0])).extract_timestamp(str.encode(RandomToken.split(":")[1]))).strftime(''.join(random.choice(['%d','%H','%d','%M','%d','%S']) for _ in range(int(MinLength/4), int(MaxLength/2))))
    RandomTextToken, RandomTextKey = encryption(RandomText)
    return TextKey.decode("utf-8") + ":" + timestamp[0:random.randint(1, len(timestamp))] + "/" + TextToken.decode("utf-8") + ":" + timestamp[0:random.randint(1, len(timestamp))] + "/" + RandomTextToken.decode("UTF-8") + ":" + timestamp[0:random.randint(1, len(timestamp))] + "/" + RandomTextKey.decode("UTF-8") + ":" + timestamp[0:random.randint(1, len(timestamp))] + "/" + RandomToken
def encryption(text):
    #---- Convert the string to bytes ----
    bytetext = str.encode(text)
    #---- Generate a fresh Fernet key ----
    key = Fernet.generate_key()
    encryption_type = Fernet(key)
    #---- Encrypt the byte string with the generated key ----
    token = encryption_type.encrypt(bytetext)
    #---- Return the encrypted token and its key ----
    return token, key
def dataDecryption(EncryptedText):
    #---- Recover the random token/key pair appended by dataEncrpytion ----
    ShortenedText = EncryptedText.split(":")[len(EncryptedText.split(":"))-2]
    RandomKey = ShortenedText[ShortenedText.index("/")+1:len(ShortenedText)]
    RandomToken = EncryptedText.split(":")[len(EncryptedText.split(":"))-1]
    #---- Rebuild the timestamp padding so it can be stripped from each field ----
    timestamp = datetime.utcfromtimestamp(Fernet(str.encode(RandomKey)).extract_timestamp(str.encode(RandomToken))).strftime('%d%H:%d%M:%d%S')
    Textkey = EncryptedText.split(":")[0]
    textToken = CleanToken(EncryptedText.split(":")[1], timestamp)
    RandomtextToken = CleanToken(EncryptedText.split(":")[2], timestamp)
    RandomtextKey = CleanToken(EncryptedText.split(":")[3], timestamp)
    #---- Decrypt, then strip out the token pair and the random filler to recover the original text ----
    return str(decryption(str.encode(textToken), str.encode(Textkey))).replace(RandomKey + ":" + RandomToken, "").replace(str(decryption(str.encode(RandomtextToken), str.encode(RandomtextKey))), "")
def CleanToken(TokenString, timestamp):
    #---- Strip the timestamp padding prefix from a stored token field ----
    cleanToken = TokenString  # fall back to the raw string if no timestamp marker is found
    for x in timestamp + "%:dHMS":
        if x + "/" in TokenString:
            cleanToken = TokenString[TokenString.index(x + "/") + 2: len(TokenString)]
            if TokenString.index(x + "/") < 30:
                break
    return cleanToken
def decryption(token, key):
    #---- Decrypt the encrypted token with its matching key ----
    encryption_type = Fernet(key)
    return encryption_type.decrypt(token).decode()
| AdvancedFernetDataEncryption | /advancedfernetdataencryption-1.1-py3-none-any.whl/AdvancedFernetDataEncryption.py | AdvancedFernetDataEncryption.py
AdvancedHTMLParser
==================
AdvancedHTMLParser is an Advanced HTML Parser, with support for adding, removing, modifying, and formatting HTML.
It aims to provide the same interface as you would find in a compliant browser through javascript ( i.e. all the getElement methods, appendChild, etc), an XPath implementation, as well as many more complex and sophisticated features not available through a browser. And most importantly, it's in python!
There are many potential applications, not limited to:
* Webpage Scraping / Data Extraction
* Testing and Validation
* HTML Modification/Insertion
* Outputting your website
* Debugging
* HTML Document generation
* Web Crawling
* Formatting HTML documents or web pages
It is especially good for servlets/webpages. It is quick to take an expertly crafted page in raw HTML / css, have your servlet ingest it with AdvancedHTMLParser, and create/insert data elements into the existing view using a simple and well-known interface ( javascript-like + HTML DOM ).
Another useful scenario is creating automated testing suites which can operate much more quickly and reliably (and at a deeper function-level) than in-browser testing suites.
Full API
--------
Can be found http://htmlpreview.github.io/?https://github.com/kata198/AdvancedHTMLParser/blob/master/doc/AdvancedHTMLParser.html?vers=8.1.8 .
Examples
--------
Various examples can be found in the "tests" directory. A very old, simple example can also be found as "example.py" in the root directory.
Short Doc
---------
**The Package and Modules**
The top-level module in this package is "*AdvancedHTMLParser*."
import AdvancedHTMLParser
Most everything "public" is available through this top-level module, but some corner-case usages may require importing from a submodule. All of these associations can be found through the pydocs.
For example, to access AdvancedTag, the recommended path is just to import the top-level, and use dot-access:
import AdvancedHTMLParser
myTag = AdvancedHTMLParser.AdvancedTag('div')
However, you can also import AdvancedTag through this top-level module:
import AdvancedHTMLParser
from AdvancedHTMLParser import AdvancedTag
Or, you can import from the specific sub-module, directly:
import AdvancedHTMLParser
from AdvancedHTMLParser.Tags import AdvancedTag
All examples below are written as if "import AdvancedHTMLParser" has already been performed, and all relations in examples are based off usages from the top-level import, only.
**AdvancedHTMLParser**
Think of this like "document" in a browser.
The AdvancedHTMLParser can read in a file (or string) of HTML, and will create a modifiable DOM tree from it. It can also be constructed manually from AdvancedHTMLParser.AdvancedTag objects.
To populate an AdvancedHTMLParser from existing HTML:
parser = AdvancedHTMLParser.AdvancedHTMLParser()
# Parse an HTML string into the document
parser.parseStr(htmlStr)
# Parse an HTML file into the document
parser.parseFile(filename)
The parser then exposes many "standard" functions as you'd find on the web for accessing the data, and some others:
getElementsByTagName \- Returns a list of all elements matching a tag name
getElementsByName \- Returns a list of all elements with a given name attribute
getElementById \- Returns a single AdvancedTag matching the provided ID, or None if no matching element is found
getElementsByClassName \- Returns a list of all elements containing one or more space\-separated class names
getElementsByAttr \- Returns a list of all elements matching a particular attribute/value pair.
getElementsByXPathExpression \- Return a TagCollection (list) of all elements matching a given XPath expression
getElementsWithAttrValues \- Returns a list of all elements with a specific attribute name containing one of a list of values
getElementsCustomFilter \- Provide a function/lambda that takes a tag argument, and returns True to "match" it. Returns all matched objects
getRootNodes \- Get a list of nodes at root level (0)
getAllNodes \- Get all the nodes contained within this document
getHTML \- Returns string of HTML representing this DOM
getFormattedHTML \- Returns a formatted string (using AdvancedHTMLFormatter; see below) of the HTML. Takes as argument an indent (defaults to four spaces)
getMiniHTML \- Returns a "mini" HTML representation which disregards all whitespace and indentation beyond the functional single\-space
The results of all of these getElement\* functions are TagCollection objects. This is a special kind of list which contains additional functions. See the "TagCollection" section below for more info.
These objects can be modified, and will be reflected in the parent DOM.
The parser also contains some expected properties, like
head \- The "head" tag associated with this document, or None
body \- The "body" tag associated with this document, or None
forms \- All "forms" on this document as a TagCollection
**General Attributes**
In general, attributes can be accessed with dot-syntax, i.e.
tagEm.id = "Hello"
will set the "id" attribute. If it works in HTML javascript on a tag element, it should work on an AdvancedTag element with python.
setAttribute, getAttribute, and removeAttribute are more explicit and recommended ways of getting/setting/deleting attributes on elements.
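For example (tagEm here is any existing AdvancedTag):
tagEm.setAttribute('title', 'Greeting')
tagEm.getAttribute('title') # 'Greeting'
tagEm.hasAttribute('title') # True
tagEm.removeAttribute('title')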
The same names are used in python as in the javascript/DOM, such as 'className' corresponding to a space-separated string of the 'class' attribute, 'classList' corresponding to a list of classes, etc.
**Style Attribute**
Style attributes can be manipulated just like in javascript, so element.style.position = 'relative' for setting, or element.style.position for access.
You can also assign the tag.style as a string, like:
myTag.style = "display: block; float: right; font\-weight: bold"
in addition to individual properties:
myTag.style.display = 'block'
myTag.style.float = 'right'
myTag.style.fontWeight = 'bold'
You can remove style properties by setting its value to an empty string.
For example, to clear "display" property:
myTag.style.display = ''
A standard method *setProperty* can also be used to set or remove individual properties
For example:
myTag.style.setProperty("display", "block") # Set display: block
myTag.style.setProperty("display", '') # Clear display: property
The naming conventions are the same as in javascript, like "element.style.paddingTop" for "padding-top" attribute.
**TagCollection**
A TagCollection can be used like a list. Every element has a unique uuid associated with it, and a TagCollection will ensure that the same element does not appear twice within its list (so it acts like an ordered set)
It also exposes the various getElement\* functions which operate on the elements within the list (and their children).
For example:
# Filter off the parser all tags with "item" in class
tagCollection = document.getElementsByClassName('item')
# Return all nodes which are nested within any class="item" object
# and also contains the class name "onsale"
itemsWithOnSaleClass = tagCollection.getElementsByClassName('onsale')
To operate just on items in the list, you can use the TagCollection method, *filterCollection*, which takes a lambda/function and returns True to retain that tag in the return.
For example:
# Filter off the parser all tags with "item" in class
tagCollection = document.getElementsByClassName('item')
# Provide a lambda to filter this collection, returning in tagCollection2
# those items which have a "value" attribute > 20 and contains at least
# 1 child element with "specialPrice" class
tagCollection2 = tagCollection.filterCollection( lambda node : int(node.getAttribute('value') or 0) > 20 and len(node.getElementsByClassName('specialPrice')) > 1 )
TagCollections also support advanced filtering (find/filter methods), see "Advanced Filtering" section below.
**AdvancedTag**
The AdvancedTag represents a single tag and its inner text. It exposes many of the functions and properties you would expect to be present if using javascript.
each AdvancedTag also supports the same getElementsBy\* functions as the parser.
It adds several additional that are not found in javascript, such as peers and arbitrary attribute searching.
some of these include:
appendText \- Append text to this element
appendChild \- Append a child to this element
appendBlock \- Append a block (text or AdvancedTag) to this element
append \- alias of appendBlock
removeChild \- Removes a child
removeText \- Removes first occurrence of some text from any text nodes
removeTextAll \- Removes ALL occurrences of some text from any text nodes
insertBefore \- Inserts a child before an existing child
insertAfter \- Inserts a child after an existing child
getChildren \- Returns the children as a list
getStartTag \- Start Tag, with attributes
getEndTag \- End Tag
getPeersByName \- Gets "peers" (elements with same parent, at same level in tree) with a given name
getPeersByAttr \- Gets peers by an arbitrary attribute/value combination
getPeersWithAttrValues \- Gets peers by an arbitrary attribute/values combination.
getPeersByClassName \- Gets peers that contain a given class name
getElement\\\* \- Same as above, but act on the children of this element.
getParentElementCustomFilter \- Takes a lambda/function and applies on all parents of this element upward until the document root. Returns the first node that when passed to this function returns True, or None if no matches on any parent nodes
getHTML / toHTML / asHTML \- Get the HTML representation using this node as a root (so start tag and attributes, innerHTML (text and child nodes), and end tag)
firstChild \- Get the first child of this node, be it text or an element (AdvancedTag)
firstElementChild \- Get the first child of this node that is an element
lastChild \- Get the last child of this node, be it text or an element (AdvancedTag)
lastElementChild \- Get the last child of this node that is an element
nextSibling \- Get next sibling, be it text or an element
nextElementSibling \- Get next sibling, that is an element
previousSibling \- Get previous sibling, be it text or an element
previousElementSibling \- Get previous sibling, that is an element
{get,set,has,remove}Attribute \- get/set/test/remove an attribute
{add,remove}Class \- Add/remove a class from the list of classes
setStyle \- Set a specific style property [like: setStyle("font\-weight", "bold") ]
isTagEqual \- Compare if two tags have the same attributes. Using the == operator will compare if they are the same exact tag (by uuid)
getUid \- Get a unique ID for this tag (internal)
getAllChildNodes \- Gets all nodes beneath this node in the document (its children, its children's children, etc)
getAllNodes \- Same as getAllChildNodes, but also includes this node
contains \- Check if a provided node appears anywhere beneath this node (as child, child\-of\-child, etc)
remove \- Remove this node from its parent element, and disassociates this and all sub\-nodes from the associated document
\_\_str\_\_ \- str(tag) will show start tag with attributes, inner text, and end tag
\_\_repr\_\_ \- Shows a reconstructable representation of this tag
\_\_getitem\_\_ \- Can be indexed like tag[2] to access second child.
And some properties:
children/childNodes \- The children (tags) as a list NOTE: This returns only AdvancedTag objects, not text.
childBlocks \- All direct child blocks. This includes both AdvancedTag objects and text nodes (str)
innerHTML \- The innerHTML including the html of all children
innerText \- The text nodes, in order, as they appear as direct children to this node as a string
textContent \- All the text nodes, in order, as they appear within this node or any children (or their children, etc.)
outerHTML \- innerHTML wrapped in this tag
classNames/classList \- a list of the classes
parentNode/parentElement \- The parent tag
tagName \- The tag name
ownerDocument \- The document associated with this node, if any
And many others. See the pydocs for a full list, and associated docstrings.
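For example, building a small tree by hand (the tag names and text here are arbitrary):
divEm = AdvancedHTMLParser.AdvancedTag('div')
divEm.addClass('item')
spanEm = AdvancedHTMLParser.AdvancedTag('span')
spanEm.appendText('Hello')
divEm.appendChild(spanEm)
html = divEm.getHTML() # start tag with attributes, innerHTML, and end tag as a string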
**Appending raw HTML**
You can append raw HTML to a tag by calling:
tagEm.appendInnerHTML('<div id="Some sample HTML"> <span> Yes </span> </div>')
which acts like, in javascript:
tagEm.innerHTML += '<div id="Some sample HTML"> <span> Yes </span> </div>';
**Creating Tags from HTML**
Tags can be created from HTML strings outside of AdvancedHTMLParser.parseStr (which parses an entire document) by:
* Parser.AdvancedHTMLParser.createElement - Like document.createElement, creates a tag with a given tag name. Not associated with any document.
* Parser.AdvancedHTMLParser.createElementFromHTML - Creates a single tag from HTML.
* Parser.AdvancedHTMLParser.createElementsFromHTML - Creates and returns a list of one or more tags from HTML.
* Parser.AdvancedHTMLParser.createBlocksFromHTML - Creates and returns a list of blocks. These can be AdvancedTag objects (A tag), or a str object (if raw text outside of tags). This is recommended for parsing arbitrary HTML outside of parsing the entire document. The createElement{,s}FromHTML functions will discard any text outside of the tags passed in.
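For example, a short sketch of createBlocksFromHTML (assuming the class-level access implied by the list above):
blocks = AdvancedHTMLParser.AdvancedHTMLParser.createBlocksFromHTML('Hello <span>world</span>')
# blocks is a list mixing str entries (raw text, like 'Hello ') and AdvancedTag entries (like the span)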
Advanced Filtering
------------------
AdvancedHTMLParser contains two kinds of "Advanced Filtering":
**find**
The most basic unified-search, AdvancedHTMLParser has a "find" method on it. This will search all nodes with a single, simple query.
This is not as robust as the "filter" method (which can also be used on any tag or TagCollection), but does not require any dependency packages.
find \- Perform a search of elements using attributes as keys and potential values as values
(i.e. parser.find(name='blah', tagname='span') will return all elements in this document
with the name "blah" of the tag type "span" )
Arguments are key = value, or key can equal a tuple/list of values to match ANY of those values.
Append a key with \_\_contains to test if some strs (or several possible strs) are within an element
Append a key with \_\_icontains to perform the same \_\_contains op, but ignoring case
Special keys:
tagname \- The tag name of the element
text \- The text within an element
NOTE: Empty string means both "not set" and "no value" in this implementation.
Example:
cheddarElements = parser.find(name='items', text\_\_icontains='cheddar')
**filter**
If you have QueryableList installed (a default dependency of AdvancedHTMLParser since 7.0.0, but it can be skipped with '\-\-no\-deps' passed to setup.py)
then you can take advantage of the advanced "filter" methods, on either the parser (entire document), any tag (that tag and nodes beneath), or tag collection (any of those tags, or any tags beneath them).
A full explanation of the various filter modes that QueryableList supports can be found at https://github.com/kata198/QueryableList
Special keys are: "tagname" for the tag name, and "text" for the inner text of a node.
An attribute that is unset has a value of None, which is different than a set attribute with an empty value ''.
For example:
cheddarElements = parser.filter(name='items', text\_\_icontains='cheddar')
The AdvancedHTMLParser has:
filter / filterAnd \- Perform a filter query on all nodes in this document, returning a TagCollection of elements matching ALL criteria
filterOr \- Perform a filter query on all nodes in this document, returning a TagCollection of elements matching ANY criteria
Every AdvancedTag has:
filter / filterAnd \- Perform a filter query on this nodes and all sub\-nodes, returning a TagCollection of elements matching ALL criteria
filterOr \- Perform a filter query on this nodes and all sub\-nodes, returning a TagCollection of elements matching ANY criteria
Every TagCollection has:
filter / filterAnd \- Perform a filter query on JUST the nodes contained within this list (no children), returning a TagCollection of elements matching ALL criteria
filterOr \- Perform a filter query on JUST the nodes contained within this list (no children), returning a TagCollection of elements matching ANY criteria
filterAll / filterAllAnd \- Perform a filter query on the nodes contained within this list, and all of their sub\-nodes, returning a TagCollection of elements matching ALL criteria
filterAllOr \- Perform a filter query on the nodes contained within this list, and all of their sub\-nodes, returning a TagCollection of elements matching ANY criteria
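For example, a minimal sketch of filterOr (the attribute values here are arbitrary):
# All nodes in the document with name="items" OR id="main"
results = parser.filterOr(name='items', id='main')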
Validation
----------
Validation can be performed by using ValidatingAdvancedHTMLParser. It will raise an exception if an assumption would have to be made to continue parsing (i.e. something important).
InvalidCloseException - Tried to close a tag that shouldn't have been closed
MissedCloseException - Missed a non-optional close of a tag that would lead to causing an assumption during parsing.
InvalidAttributeNameException - An attribute name was found that contained an invalid character, or broke a naming rule.
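For example, a minimal sketch (assuming the validating parser and its exceptions are exported from the top-level module, like the other classes above):
parser = AdvancedHTMLParser.ValidatingAdvancedHTMLParser()
try:
    parser.parseStr('<div><span>oops</div></span>')
except (AdvancedHTMLParser.InvalidCloseException, AdvancedHTMLParser.MissedCloseException) as e:
    print('Invalid HTML: %s' % (str(e), ))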
XPath
-----
**XPath support is in Beta phase.**
Basic XPath support has been added, which supports searching, attribute matching, positions, indexes, some functions, most axes (such as parent::).
Examples of some currently supported expressions:
//table//tr[last()]/parent::tbody
Find any table, descend to any descendant that is the last tr of its parent, rise to and return the parent tbody of that tr.
//div[ @name = "Cheese" ]/span[2]
Find any div with attribute name="Cheese" , and return the second direct child which is a span.
//\*[ normalize\-space() = "Banana" ]
Find and return any tag which contains the inner text, normalized for whitespace, of "Banana"
//div/\*[ contains( concat( ' ', @class, ' ' ), 'purple\-cheese' ) ]
Find and return any tag under a div containing a class "purple-cheese"
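These expressions are evaluated through the getElementsByXPathExpression method described earlier. For example:
cheeseSpans = parser.getElementsByXPathExpression('//div[ @name = "Cheese" ]/span[2]')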
More will be added. If you have a needed xpath feature not currently supported (you'll know by the parse exception raised), please open an issue and I will make it a priority!
IndexedAdvancedHTMLParser
=========================
IndexedAdvancedHTMLParser provides the ability to use indexing for faster search. If you are just parsing and not modifying, this is your best bet. If you are modifying the DOM tree, make sure you call IndexedAdvancedHTMLParser.reindex() before relying on them.
Each of the get\* functions above takes an additional "useIndex" function, which can also be set to False to skip index. See constructor for more information, and "Performance and Indexing" section below.
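For example, a sketch of the intended workflow (the filename is illustrative; see the constructor docstring for the exact index flags):
indexedParser = AdvancedHTMLParser.IndexedAdvancedHTMLParser()
indexedParser.parseFile('page.html')
mainEm = indexedParser.getElementById('main') # served from the ID index
indexedParser.reindex() # call after modifying the DOM, before searching again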
AdvancedHTMLFormatter and formatHTML
------------------------------------
**AdvancedHTMLFormatter**
The AdvancedHTMLFormatter formats HTML into a pretty layout. It can handle elements like pre, code, script, style, etc. to keep their contents preserved, but does not understand CSS rules.
The methods are:
parseStr \- Parse a string of contents
parseFile \- Parse a filename or file object
getHTML \- Get the formatted html
getRootNodes \- Get a list of the "root" nodes (most outer nodes, should be <html> on a valid document)
getRoot \- Gets the "root" node (on a valid document this should be <html>). For arbitrary HTML, you should use getRootNodes, as there may be several nodes at the same outermost level
You can access this same formatting off an AdvancedHTMLParser.AdvancedHTMLParser (or IndexedAdvancedHTMLParser) by calling .getFormattedHTML()
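For example (assuming AdvancedHTMLFormatter is imported from the top-level module, as with the parser classes):
formatter = AdvancedHTMLParser.AdvancedHTMLFormatter()
formatter.parseStr(htmlStr)
prettyHtml = formatter.getHTML()
# or, equivalently, straight off an existing document:
prettyHtml = parser.getFormattedHTML()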
**AdvancedHTMLMiniFormatter**
The AdvancedHTMLMiniFormatter will strip all non-functional whitespace (meaning any whitespace which wouldn't normally add a space to the document or is required for xhtml) and provide no indentation.
Use this when pretty-printing doesn't matter and you'd like to save space.
You can access this same formatting off an AdvancedHTMLParser.AdvancedHTMLParser (or IndexedAdvancedHTMLParser) by calling .getMiniHTML()
**AdvancedHTMLSlimTagFormatter and AdvancedHTMLSlimTagMiniFormatter**
In order to support some less-lenient parsers, AdvancedHTMLParser will by default include a space prior to the close-tag '>' character in HTML output.
For example:
<span id="abc" >Blah</span>
<br />
<hr class="bigline" />
It is recommended to keep these extra spaces, but if for some reason you feel you need to get rid of them, you can use either *AdvancedHTMLSlimTagFormatter* or *AdvancedHTMLSlimTagMiniFormatter*.
*AdvancedHTMLSlimTagFormatter* will do pretty-printing (like getFormattedHTML / AdvancedHTMLFormatter.getHTML output)
*AdvancedHTMLSlimTagMiniFormatter* will do mini-printing (like getMiniHTML / AdvancedHTMLMiniFormatter.getHTML output)
Feeding in your HTML via formatter.parseStr(htmlStr) [where htmlStr can be parser.getHTML()] will cause it to be output without the start-tag padding.
For example:
<span id="abc">Blah</span>
By default, self-closing tags will retain their padding so that an xhtml-compliant parser doesn't treat "/" as either an attribute or part of the attribute-value of the preceding attribute.
For example:
<hr class="bigline"/>
Could be interpreted as a horizontal rule with a class name of "bigline/". Most modern browsers work around this and will not have an issue, but some parsers will.
You may pass an optional keyword-argument to the formatter constructor, slimSelfClosing=True, in order to force removal of this padding from self-closing tags.
For example:
myHtml = '<hr class="bigline" />'
formatter = AdvancedHTMLSlimTagMiniFormatter(slimSelfClosing=True)
formatter.parseStr(myHtml)
miniHtml = formatter.getHTML()
# miniHtml will now contain '<hr class="bigline"/>'
**formatHTML script**
A script, formatHTML comes with this package and will perform formatting on an input file, and output to a file or stdout:
Usage: formatHTML (Optional Arguments) (optional: /path/to/in.html) (optional: [/path/to/output.html])
Formats HTML on input and writes to output.
Optional Arguments:
\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
\-e [encoding] \- Specify an encoding to use. Default is utf\-8
\-m or \-\-mini \- Output "mini" HTML (only retain functional whitespace,
strip the rest and no indentation)
\-p or \-\-pretty \- Output "pretty" HTML [This is the default mode]
\-\-indent=' ' \- Use the provided string [default 4\-spaces] to represent each
level of nesting. Use \-\-indent=" " for 1 tab instead, for example.
Affects pretty printing mode only
If output filename is not specified or is empty string, output will be to stdout.
If input filename is not specified or is empty string, input will be from stdin
If \-e is provided, will use that as the encoding. Defaults to utf\-8
Notes
-----
* Each tag has a generated unique ID which is assigned at create time. The search functions use these to prevent duplicates in search results. There is a global function in the module, AdvancedHTMLParser.uniqueTags, which will filter a list of tags and remove any duplicates. TagCollections will only allow one instance of a tag (no duplicates)
* In general, for tag names and attribute names, you should use lowercase values. During parsing, the parser will lowercase attribute names (like NAME="Abc" becomes name="Abc"). During searching, however, for performance reasons, it is assumed you are passing in already-lowercased strings. If you can't trust the input to be lowercase, then it is your responsibility to call .lower() before calling .getElementsBy\*
* If you are using IndexedAdvancedHTMLParser (instead of AdvancedHTMLParser) to construct HTML and not search, I recommend either setting the index params to False in the constructor, or calling IndexedAdvancedHTMLParser.disableIndexing(). When you are finished and want to go back to searching, you can call IndexedAdvancedHTMLParser.reindex and set to True what you want to reindex.
* There are additional functions and usages not documented here, check the file for more information.
Performance and Indexing
------------------------
Performance is very good using AdvancedHTMLParser, and even better (for scraping) using the IndexedAdvancedHTMLParser class. The performance can be further enhanced on IndexedAdvancedHTMLParser via several indexing tunables:
First, the constructor of IndexedAdvancedHTMLParser and the reindex method each take booleans which determine whether each field is indexed (e.g. indexIDs will make getElementById use an index).
If an index is used, parsing time goes up slightly, but searches become O(1) (from the root node, slightly less efficient from other nodes) instead of O(n) [n=num elements].
By default, IDs, Names, Tag Names, Class Names are indexed.
You can add an index for any arbitrary field (used in getElementByAttr) via IndexedAdvancedHTMLParser.addIndexOnAttribute('src'), for example, to index the 'src' attribute. This index can be removed via removeIndexOnAttribute.
Dependencies
------------
AdvancedHTMLParser can be installed without dependencies (pass '\-\-no\-deps' to setup.py), and everything will function EXCEPT filter\* methods.
By default, https://github.com/kata198/QueryableList will be installed, which will enable support for those additional filter methods.
Unicode
-------
AdvancedHTMLParser generally has very good support for unicode, and defaults to "utf\-8" (can be altered by the "encoding" argument to the AdvancedHTMLParser.AdvancedHTMLParser when parsing.)
If you are still getting UnicodeDecodeError or UnicodeEncodeError, there are a few things you can try:
* If the error happens when printing/writing to stdout ( default behaviour for apache / mod\_python is to open stdout with the ANSI/ASCII encoding ), ensure your streams are, in fact, set to utf\-8.
* Set the environment variable PYTHONIOENCODING to "utf\-8" before python is launched. In Apache, you can add the line "SetEnv PYTHONIOENCODING utf\-8" to your httpd.conf in order to achieve this.
* Ensure that the data you are passing to AdvancedHTMLParser has the correct encoding (matching the "encoding" parameter).
* Switch to python3 if at all possible \-\- python2 does have 'unicode' support and AdvancedHTMLParser uses it to the best of its ability, but python2 does still have some inherent flaws which may come up using standard library / output functions. You should ensure that these are set to use utf\-8 (as described above).
AdvancedHTMLParser is tested against unicode ( even has a unit test ) which works in both python2 and python3 in the general case.
If you are having an issue (even on python2) and you've checked the above "common configuration/usage" errors and think there is still an issue, please open a bug report on https://github.com/kata198/AdvancedHTMLParser with a test case, python version, and traceback.
The library itself is considered unicode-safe, and almost always it's an issue outside of this library, or has a simple workaround.
Example Usage
-------------
See https://raw.githubusercontent.com/kata198/AdvancedHTMLParser/master/example.py for an example of parsing store data using this class.
Changes
-------
See: https://raw.githubusercontent.com/kata198/AdvancedHTMLParser/master/ChangeLog
Contact Me / Support
--------------------
I am available by email to provide support, answer questions, or otherwise provide assistance in using this software. Use my email kata198 at gmail.com with "AdvancedHTMLParser" in the subject line.
If you are having an issue / found a bug / want to merge in some changes, please open a pull request.
Unit Tests
----------
See "tests" directory available in github. Use "runTests.py" within that directory. Tests use my `GoodTests <https://github.com/kata198/GoodTests>`_ framework. It will download it to the current directory if not found in path, so you don't need to worry that it's a dependency.
| AdvancedHTMLParser | /AdvancedHTMLParser-9.0.2.tar.gz/AdvancedHTMLParser-9.0.2/README.rst | README.rst |
AdvancedHTMLParser
==================
AdvancedHTMLParser is an Advanced HTML Parser, with support for adding, removing, modifying, and formatting HTML.
It aims to provide the same interface as you would find in a compliant browser through javascript ( i.e. all the getElement methods, appendChild, etc), an XPath implementation, as well as many more complex and sophisticated features not available through a browser. And most importantly, it's in python!
There are many potential applications, not limited to:
* Webpage Scraping / Data Extraction
* Testing and Validation
* HTML Modification/Insertion
* Outputting your website
* Debugging
* HTML Document generation
* Web Crawling
* Formatting HTML documents or web pages
It is especially good for servlets/webpages. It is quick to take an expertly crafted page in raw HTML / css, and have your servlet's ingest with AdvancedHTMLParser and create/insert data elements into the existing view using a simple and well-known interface ( javascript-like + HTML DOM ).
Another useful scenario is creating automated testing suites which can operate much more quickly and reliably (and at a deeper function-level), unlike in-browser testing suites.
Full API
--------
Can be found http://htmlpreview.github.io/?https://github.com/kata198/AdvancedHTMLParser/blob/master/doc/AdvancedHTMLParser.html?vers=8.1.8 .
Examples
--------
Various examples can be found in the "tests" directory. A very old, simple example can also be found as "example.py" in the root directory.
Short Doc
---------
**The Package and Modules**
The top-level module in this package is "*AdvancedHTMLParser*."
import AdvancedHTMLParser
Most everything "public" is available through this top-level module, but some corner-case usages may require importing from a submodule. All of these associations can be found through the pydocs.
For example, to access AdvancedTag, the recommended path is just to import the top-level, and use dot-access:
import AdvancedHTMLParser
myTag = AdvancedHTMLParser.AdvancedTag('div')
However, you can also import AdvancedTag through this top-level module:
import AdvancedHTMLParser
from AdvancedHTMLParser import AdvancedTag
Or, you can import from the specific sub-module, directly:
import AdvancedHTMLParser
from AdvancedHTMLParser.Tags import AdvancedTag
All examples below are written as if "import AdvancedHTMLParser" has already been performed, and all relations in examples are based off usages from the top-level import, only.
**AdvancedHTMLParser**
Think of this like "document" in a browser.
The AdvancedHTMLParser can read in a file (or string) of HTML, and will create a modifiable DOM tree from it. It can also be constructed manually from AdvancedHTMLParser.AdvancedTag objects.
To populate an AdvancedHTMLParser from existing HTML:
parser = AdvancedHTMLParser.AdvancedHTMLParser()
# Parse an HTML string into the document
parser.parseStr(htmlStr)
# Parse an HTML file into the document
parser.parseFile(filename)
The parser then exposes many "standard" functions as you'd find on the web for accessing the data, and some others:
getElementsByTagName - Returns a list of all elements matching a tag name
getElementsByName - Returns a list of all elements with a given name attribute
getElementById - Returns a single AdvancedTag (or None) if found an element matching the provided ID
getElementsByClassName - Returns a list of all elements containing one or more space-separated class names
getElementsByAttr - Returns a list of all elements matching a paticular attribute/value pair.
getElementsByXPathExpression - Return a TagCollection (list) of all elements matching a given XPath expression
getElementsWithAttrValues - Returns a list of all elements with a specific attribute name containing one of a list of values
getElementsCustomFilter - Provide a function/lambda that takes a tag argument, and returns True to "match" it. Returns all matched objects
getRootNodes - Get a list of nodes at root level (0)
getAllNodes - Get all the nodes contained within this document
getHTML - Returns string of HTML representing this DOM
getFormattedHTML - Returns a formatted string (using AdvancedHTMLFormatter; see below) of the HTML. Takes as argument an indent (defaults to four spaces)
getMiniHTML - Returns a "mini" HTML representation which disregards all whitespace and indentation beyond the functional single-space
The results of all of these getElement\* functions are TagCollection objects. This is a special kind of list which contains additional functions. See the "TagCollection" section below for more info.
These objects can be modified, and will be reflected in the parent DOM.
The parser also contains some expected properties, like
head - The "head" tag associated with this document, or None
body - The "body" tag associated with this document, or None
forms - All "forms" on this document as a TagCollection
**General Attributes**
In general, attributes can be accessed with dot-syntax, i.e.
tagEm.id = "Hello"
will set the "id" attribute. If it works in HTML javascript on a tag element, it should work on an AdvancedTag element with python.
setAttribute, getAttribute, and removeAttribute are more explicit and recommended ways of getting/setting/deleting attributes on elements.
The same names are used in python as in the javascript/DOM, such as 'className' corrosponding to a space-separated string of the 'class' attribute, 'classList' corrosponding to a list of classes, etc.
**Style Attribute**
Style attributes can be manipulated just like in javascript, so element.style.position = 'relative' for setting, or element.style.position for access.
You can also assign the tag.style as a string, like:
myTag.style = "display: block; float: right; font-weight: bold"
in addition to individual properties:
myTag.style.display = 'block'
myTag.style.float = 'right'
myTag.style.fontWeight = 'bold'
You can remove style properties by setting its value to an empty string.
For example, to clear "display" property:
myTag.style.display = ''
A standard method *setProperty* can also obe used to set or remove individual properties
For example:
myTag.style.setProperty("display", "block") # Set display: block
myTag.style.setProperty("display", '') # Clear display: property
The naming conventions are the same as in javascript, like "element.style.paddingTop" for "padding-top" attribute.
**TagCollection**
A TagCollection can be used like a list. Every element has a unique uuid associated with it, and a TagCollection will ensure that the same element does not appear twice within its list (so it acts like an ordered set)
It also exposes the various getElement\* functions which operate on the elements within the list (and their children).
For example:
# Filter off the parser all tags with "item" in class
tagCollection = document.getElementsByClassName('item')
# Return all nodes which are nested within any class="item" object
# and also contains the class name "onsale"
itemsWithOnSaleClass = tagCollection.getElementsByClassName('onsale')
To operate just on items in the list, you can use the TagCollection method, *filterCollection*, which takes a lambda/function and returns True to retain that tag in the return.
For example:
# Filter off the parser all tags with "item" in class
tagCollection = document.getElementsByClassName('item')
# Provide a lambda to filter this collection, returning in tagCollection2
# those items which have a "value" attribute > 20 and contains at least
# 1 child element with "specialPrice" class
tagCollection2 = tagCollection.filterCollection( lambda node : int(node.getAttribute('value') or 0) > 20 and len(node.getElementsByClassName('specialPrice')) > 1 )
TagCollections also support advanced filtering (find/filter methods), see "Advanced Filtering" section below.
**AdvancedTag**
The AdvancedTag represents a single tag and its inner text. It exposes many of the functions and properties you would expect to be present if using javascript.
each AdvancedTag also supports the same getElementsBy\* functions as the parser.
It adds several additional that are not found in javascript, such as peers and arbitrary attribute searching.
some of these include:
appendText - Append text to this element
appendChild - Append a child to this element
appendBlock - Append a block (text or AdvancedTag) to this element
append - alias of appendBlock
removeChild - Removes a child
removeText - Removes first occurance of some text from any text nodes
removeTextAll - Removes ALL occurances of some text from any text nodes
insertBefore - Inserts a child before an existing child
insertAfter - Inserts a child after an existing child
getChildren - Returns the children as a list
getStartTag - Start Tag, with attributes
getEndTag - End Tag
getPeersByName - Gets "peers" (elements with same parent, at same level in tree) with a given name
getPeersByAttr - Gets peers by an arbitrary attribute/value combination
getPeersWithAttrValues - Gets peers by an arbitrary attribute/values combination.
getPeersByClassName - Gets peers that contain a given class name
getElement\* - Same as above, but act on the children of this element.
getParentElementCustomFilter - Takes a lambda/function and applies on all parents of this element upward until the document root. Returns the first node that when passed to this function returns True, or None if no matches on any parent nodes
getHTML / toHTML / asHTML - Get the HTML representation using this node as a root (so start tag and attributes, innerHTML (text and child nodes), and end tag)
firstChild - Get the first child of this node, be it text or an element (AdvancedTag)
firstElementChild - Get the first child of this node that is an element
lastChild - Get the last child of this node, be it text or an element (AdvancedTag)
lastElementChild - Get the last child of this node that is an element
nextSibling - Get next sibling, be it text or an element
nextElementSibling - Get next sibling, that is an element
previousSibling - Get previous sibling, be it text or an element
previousElementSibling - Get previous sibling, that is an element
{get,set,has,remove}Attribute - get/set/test/remove an attribute
{add,remove}Class - Add/remove a class from the list of classes
setStyle - Set a specific style property [like: setStyle("font-weight", "bold") ]
isTagEqual - Compare if two tags have the same attributes. Using the == operator will compare if they are the same exact tag (by uuid)
getUid - Get a unique ID for this tag (internal)
getAllChildNodes - Gets all nodes beneath this node in the document (its children, its children's children, etc)
getAllNodes - Same as getAllChildNodes, but also includes this node
contains - Check if a provided node appears anywhere beneath this node (as child, child-of-child, etc)
remove - Remove this node from its parent element, and disassociates this and all sub-nodes from the associated document
__str__ - str(tag) will show start tag with attributes, inner text, and end tag
__repr__ - Shows a reconstructable representation of this tag
__getitem__ - Can be indexed like tag[2] to access second child.
And some properties:
children/childNodes - The children (tags) as a list NOTE: This returns only AdvancedTag objects, not text.
childBlocks - All direct child blocks. This includes both AdvnacedTag objects and text nodes (str)
innerHTML - The innerHTML including the html of all children
innerText - The text nodes, in order, as they appear as direct children to this node as a string
textContent - All the text nodes, in order, as they appear within this node or any children (or their children, etc.)
outerHTML - innerHTML wrapped in this tag
classNames/classList - a list of the classes
parentNode/parentElement - The parent tag
tagName - The tag name
ownerDocument - The document associated with this node, if any
And many others. See the pydocs for a full list, and associated docstrings.
**Appending raw HTML**
You can append raw HTML to a tag by calling:
tagEm.appendInnerHTML('<div id="Some sample HTML"> <span> Yes </span> </div>')
which acts like, in javascript:
tagEm.innerHTML += '<div id="Some sample HTML"> <span> Yes </span> </div>';
**Creating Tags from HTML**
Tags can be created from HTML strings outside of AdvancedHTMLParser.parseStr (which parses an entire document) by:
* Parser.AdvancedHTMLParser.createElement - Like document.createElement, creates a tag with a given tag name. Not associated with any document.
* Parser.AdvancedHTMLParser.createElementFromHTML - Creates a single tag from HTML.
* Parser.AdvancedHTMLParser.createElementsFromHTML - Creates and returns a list of one or more tags from HTML.
* Parser.AdvancedHTMLParser.createBlocksFromHTML - Creates and returns a list of blocks. These can be AdvancedTag objects (A tag), or a str object (if raw text outside of tags). This is recommended for parsing arbitrary HTML outside of parsing the entire document. The createElement{,s}FromHTML functions will discard any text outside of the tags passed in.
Advanced Filtering
------------------
AdvancedHTMLParser contains two kinds of "Advanced Filtering":
**find**
The most basic unified-search, AdvancedHTMLParser has a "find" method on it. This will search all nodes with a single, simple query.
This is not as robust as the "filter" method (which can also be used on any tag or TagCollection), but does not require any dependency packages.
find - Perform a search of elements using attributes as keys and potential values as values
(i.e. parser.find(name='blah', tagname='span') will return all elements in this document
with the name "blah" of the tag type "span" )
Arguments are key = value, or key can equal a tuple/list of values to match ANY of those values.
Append a key with __contains to test if some strs (or several possible strs) are within an element
Append a key with __icontains to perform the same __contains op, but ignoring case
Special keys:
tagname - The tag name of the element
text - The text within an element
NOTE: Empty string means both "not set" and "no value" in this implementation.
Example:
cheddarElements = parser.find(name='items', text__icontains='cheddar')
**filter**
If you have QueryableList installed (a default dependency since 7.0.0 to AdvancedHTMLParser, but can be skipped with '\-\-no\-deps' passed to setup.py)
then you can take advantage of the advanced "filter" methods, on either the parser (entire document), any tag (that tag and nodes beneath), or tag collection (any of those tags, or any tags beneath them).
A full explanation of the various filter modes that QueryableList supports can be found at https://github.com/kata198/QueryableList
Special keys are: "tagname" for the tag name, and "text" for the inner text of a node.
An attribute that is unset has a value of None, which is different than a set attribute with an empty value ''.
For example:
cheddarElements = parser.filter(name='items', text__icontains='cheddar')
The AdvancedHTMLParser has:
filter / filterAnd - Perform a filter query on all nodes in this document, returning a TagCollection of elements matching ALL criteria
filterOr - Perform a filter query on all nodes in this document, returning a TagCollection of elements matching ANY criteria
Every AdvancedTag has:
filter / filterAnd - Perform a filter query on this nodes and all sub-nodes, returning a TagCollection of elements matching ALL criteria
filterOr - Perform a filter query on this nodes and all sub-nodes, returning a TagCollection of elements matching ANY criteria
Every TagCollection has:
filter / filterAnd - Perform a filter query on JUST the nodes contained within this list (no children), returning a TagCollection of elements matching ALL criteria
filterOr - Perform a filter query on JUST the nodes contained within this list (no children), returning a TagCollection of elements matching ANY criteria
filterAll / filterAllAnd - Perform a filter query on the nodes contained within this list, and all of their sub-nodes, returning a TagCollection of elements matching ALL criteria
filterAllOr - Perform a filter query on the nodes contained within this list, and all of their sub-nodes, returning a TagCollection of elements matching ANY criteria
Validation
----------
Validation can be performed by using ValidatingAdvancedHTMLParser. It will raise an exception if an assumption would have to be made to continue parsing (i.e. something important).
InvalidCloseException - Tried to close a tag that shouldn't have been closed
MissedCloseException - Missed a non-optional close of a tag that would lead to causing an assumption during parsing.
InvalidAttributeNameException - An attribute name was found that contained an invalid character, or broke a naming rule.
XPath
-----
**XPath support is in Beta phase.**
Basic XPath support has been added, which supports searching, attribute matching, positions, indexes, some functions, most axes (such as parent::).
Examples of some currently supported expressions:
//table//tr[last()]/parent::tbody
Find any table, descend to any descendant that is the last tr of its parent, rise to and return the parent tbody of that tr.
//div[ @name = "Cheese" ]/span[2]
Find any div with attribute name="Cheese" , and return the second direct child which is a span.
//*[ normalize-space() = "Banana" ]
Find and return any tag which contains the inner text, normalized for whitespace, of "Banana"
Find and return any tag under a div containing a class "purple-cheese"
//div/*[ contains( concat( ' ', @class, ' ' ), 'purple-cheese' ) ]
More will be added. If you have a needed xpath feature not currently supported (you'll know by parse exception raised), please open an issue and I will make it a priority!
IndexedAdvancedHTMLParser
=========================
IndexedAdvancedHTMLParser provides the ability to use indexing for faster search. If you are just parsing and not modifying, this is your best bet. If you are modifying the DOM tree, make sure you call IndexedAdvancedHTMLParser.reindex() before relying on them.
Each of the get\* functions above takes an additional "useIndex" function, which can also be set to False to skip index. See constructor for more information, and "Performance and Indexing" section below.
AdvancedHTMLFormatter and formatHTML
------------------------------------
**AdvancedHTMLFormatter**
The AdvancedHTMLFormatter formats HTML into a pretty layout. It can handle elements like pre, core, script, style, etc to keep their contents preserved, but does not understand CSS rules.
The methods are:
parseStr - Parse a string of contents
parseFile - Parse a filename or file object
getHTML - Get the formatted html
getRootNodes - Get a list of the "root" nodes (most outer nodes, should be <html> on a valid document)
getRoot - Gets the "root" node (on a valid document this should be <html>). For arbitrary HTML, you should use getRootNodes, as there may be several nodes at the same outermost level
You can access this same formatting off an AdvancedHTMLParser.AdvancedHTMLParser (or IndexedAdvancedHTMLParser) by calling .getFormattedHTML()
**AdvancedHTMLMiniFormatter**
The AdvancedHTMLMiniFormatter will strip all non-functional whitespace (meaning any whitespace which wouldn't normally add a space to the document or is required for xhtml) and provide no indentation.
Use this when pretty-printing doesn't matter and you'd like to save space.
You can access this same formatting off an AdvancedHTMLParser.AdvancedHTMLParser (or IndexedAdvancedHTMLParser) by calling .getMiniHTML()
**AdvancedHTMLSlimTagFormatter and AdvancedHTMLSlimTagMiniFormatter**
In order to support some less-lenient parsers, AdvancedHTMLParser will by default include a space prior to the close-tag '>' character in HTML output.
For example:
<span id="abc" >Blah</span>
<br />
<hr class="bigline" />
It is recommended to keep these extra spaces, but if for some reason you feel you need to get rid of them, you can use either *AdvancedHTMLSlimTagFormatter* or *AdvancedHTMLSlimTagMiniFormatter*.
*AdvancedHTMLSlimTagFormatter* will do pretty-printing (like getFormattedHTML / AdvancedHTMLFormatter.getHTML output)
*AdvancedHTMLSlimTagMiniFormatter* will do mini-printing (like getMiniHTML / AdvancedHTMLMiniFormatter.getHTML output)
Feeding in your HTML via formatter.parseStr(htmlStr) [where htmlStr can be parser.getHTML()] will cause it to be output without the start-tag padding.
For example:
<span id="abc">Blah</span>
By default, self-closing tags will retain their padding so that an xhtml-compliant parser doesn't treat "/" as either an attribute or part of the attribute-value of the preceding attribute.
For example:
<hr class="bigline"/>
Could be interpreted as a horizontal rule with a class name of "bigline/". Most modern browsers work around this and will not have an issue, but some parsers will.
You may pass an optional keyword-argument to the formatter constructor, slimSelfClosing=True, in order to force removal of this padding from self-closing tags.
For example:
myHtml = '<hr class="bigline" />'
formatter = AdvancedHTMLSlimTagMiniFormatter(slimSelfClosing=True)
formatter.parseStr(myHtml)
miniHtml = formatter.getHTML()
# miniHtml will now contain '<hr class="bigline"/>'
**formatHTML script**
A script, formatHTML, comes with this package and will perform formatting on an input file, writing the output to a file or stdout:
Usage: formatHTML (Optional Arguments) (optional: /path/to/in.html) (optional: [/path/to/output.html])
Formats HTML on input and writes to output.
Optional Arguments:
-------------------
-e [encoding] - Specify an encoding to use. Default is utf-8
-m or --mini - Output "mini" HTML (only retain functional whitespace,
strip the rest and no indentation)
-p or --pretty - Output "pretty" HTML [This is the default mode]
--indent=' ' - Use the provided string [default 4-spaces] to represent each
level of nesting. Use --indent=" " for 1 tab instead, for example.
Affects pretty printing mode only
If output filename is not specified or is empty string, output will be to stdout.
If input filename is not specified or is empty string, input will be from stdin.
If -e is provided, will use that as the encoding. Defaults to utf-8
Notes
-----
* Each tag has a generated unique ID which is assigned at create time. The search functions use these to prevent duplicates in search results. There is a global function in the module, AdvancedHTMLParser.uniqueTags, which will filter a list of tags and remove any duplicates. TagCollections will only allow one instance of a tag (no duplicates)
* In general, for tag names and attribute names, you should use lowercase values. During parsing, the parser will lowercase attribute names (like NAME="Abc" becomes name="Abc"). During searching, however, for performance reasons, it is assumed you are passing in already-lowercased strings. If you can't trust the input to be lowercase, then it is your responsibility to call .lower() before calling .getElementsBy\*
* If you are using IndexedAdvancedHTMLParser (instead of AdvancedHTMLParser) to construct HTML and not search, I recommend either setting the index params to False in the constructor, or calling IndexedAdvancedHTMLParser.disableIndexing(). When you are finished and want to go back to searching, you can call IndexedAdvancedHTMLParser.reindex, setting to True whichever indexes you want to rebuild.
* There are additional functions and usages not documented here, check the file for more information.
Performance and Indexing
------------------------
Performance is very good using the AdvancedHTMLParser class, and even better (for scraping) using the IndexedAdvancedHTMLParser class. Performance can be further enhanced on IndexedAdvancedHTMLParser via several indexing tunables:
First, the constructor of IndexedAdvancedHTMLParser and the reindex method each take booleans which determine whether each field is indexed (e.g. indexIDs will make getElementByID use an index).
If an index is used, parsing time goes up slightly, but searches become O(1) (from the root node; slightly less efficient from other nodes) instead of O(n) [n = number of elements].
By default, IDs, Names, Tag Names, Class Names are indexed.
You can add an index for any arbitrary attribute (used in getElementsByAttr) via IndexedAdvancedHTMLParser.addIndexOnAttribute('src'), for example, to index the 'src' attribute. This index can be removed via removeIndexOnAttribute.
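For example, a minimal sketch of indexing and then searching on an arbitrary attribute:

    import AdvancedHTMLParser

    parser = AdvancedHTMLParser.IndexedAdvancedHTMLParser()
    parser.addIndexOnAttribute('src')
    parser.parseStr('<img src="/abc.gif" /><img src="/abc2.gif" />')
    images = parser.getElementsByAttr('src', '/abc.gif')  # can now use the 'src' index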
Dependencies
------------
AdvancedHTMLParser can be installed without dependencies (pass '\-\-no\-deps' to setup.py), and everything will function EXCEPT the filter\* methods.
By default, https://github.com/kata198/QueryableList will be installed, which will enable support for those additional filter methods.
Unicode
-------
AdvancedHTMLParser generally has very good support for unicode, and defaults to "utf\-8" (this can be altered via the "encoding" argument to AdvancedHTMLParser.AdvancedHTMLParser when parsing).
If you are still getting UnicodeDecodeError or UnicodeEncodeError, there are a few things you can try:
* If the error happens when printing/writing to stdout ( default behaviour for apache / mod\_python is to open stdout with the ANSI/ASCII encoding ), ensure your streams are, in fact, set to utf\-8.
* Set the environment variable PYTHONIOENCODING to "utf\-8" before python is launched. In Apache, you can add the line "SetEnv PYTHONIOENCODING utf\-8" to your httpd.conf in order to achieve this.
* Ensure that the data you are passing to AdvancedHTMLParser has the correct encoding (matching the "encoding" parameter).
* Switch to python3 if at all possible \-\- python2 does have 'unicode' support and AdvancedHTMLParser uses it to the best of its ability, but python2 does still have some inherent flaws which may come up using standard library / output functions. You should ensure that these are set to use utf\-8 (as described above).
AdvancedHTMLParser is tested against unicode (it even has a unit test) and works with both python2 and python3 in the general case.
If you are having an issue (even on python2) and you've checked the above "common configuration/usage" errors and think there is still an issue, please open a bug report on https://github.com/kata198/AdvancedHTMLParser with a test case, python version, and traceback.
The library itself is considered unicode-safe; almost always the issue lies outside of this library, or has a simple workaround.
Example Usage
-------------
See https://raw.githubusercontent.com/kata198/AdvancedHTMLParser/master/example.py for an example of parsing store data using this class.
Changes
-------
See: https://raw.githubusercontent.com/kata198/AdvancedHTMLParser/master/ChangeLog
Contact Me / Support
--------------------
I am available by email to provide support, answer questions, or otherwise provide assistance in using this software. Use my email kata198 at gmail.com with "AdvancedHTMLParser" in the subject line.
If you are having an issue / found a bug / want to merge in some changes, please open a pull request.
Unit Tests
----------
See "tests" directory available in github. Use "runTests.py" within that directory. Tests use my [GoodTests](https://github.com/kata198/GoodTests) framework. It will download it to the current directory if not found in path, so you don't need to worry that it's a dependency.
| AdvancedHTMLParser | /AdvancedHTMLParser-9.0.2.tar.gz/AdvancedHTMLParser-9.0.2/README.md | README.md |
import AdvancedHTMLParser
if __name__ == '__main__':
parser = AdvancedHTMLParser.AdvancedHTMLParser()
parser.parseStr('''
<html>
<head>
<title>HEllo</title>
</head>
<body>
<div id="container1" class="abc">
<div name="items">
<span name="price">1.96</span>
<span name="itemName">Sponges</span>
</div>
<div name="items">
<span name="price">3.55</span>
<span name="itemName">Turtles</span>
</div>
<div name="items">
<span name="price" class="something" >6.55</span>
<img src="/images/cheddar.png" style="width: 64px; height: 64px;" />
<span name="itemName">Cheese</span>
</div>
</div>
<div id="images">
<img src="/abc.gif" name="image" />
<img src="/abc2.gif" name="image" />
</div>
<div id="saleSection" style="background-color: blue">
<div name="items">
<span name="itemName">Pudding Cups</span>
<span name="price">1.60</span>
</div>
<hr />
<div name="items" class="limited-supplies" >
<span name="itemName">Gold Brick</span>
<span name="price">214.55</span>
<b style="margin-left: 10px">LIMITED QUANTITIES: <span id="item_5123523_remain">130</span></b>
</div>
</div>
</body>
</html>
''')
# Get all items by name
items = parser.getElementsByName('items')
# Parse some arbitrary html
parser2 = AdvancedHTMLParser.AdvancedHTMLParser()
parser2.parseStr('<div name="items"> <span name="itemName">Coop</span><span name="price">1.44</span></div>')
# Append a new item to the list
items[0].parentNode.appendChild(parser2.getRoot())
items = parser.getElementsByName('items')
print ( "Items less than $4.00: ")
print ( "-----------------------\n")
for item in items:
priceEm = item.getElementsByName('price')[0]
priceValue = round(float(priceEm.innerHTML.strip()), 2)
if priceValue < 4.00:
name = priceEm.getPeersByName('itemName')[0].innerHTML.strip()
print ( "%s - $%.2f" %(name, priceValue) )
# OUTPUT:
# Items less than $4.00:
# -----------------------
#
# Sponges - $1.96
# Turtles - $3.55
# Coop - $1.44
# Pudding Cups - $1.60 | AdvancedHTMLParser | /AdvancedHTMLParser-9.0.2.tar.gz/AdvancedHTMLParser-9.0.2/example.py | example.py |
AdvancedHTTPServer
==================
Standalone web server built on Python's BaseHTTPServer
|Build Status| |Documentation Status| |Github Issues| |PyPi Release|
License
-------
AdvancedHTTPServer is released under the BSD 3-clause license, for more
details see the
`LICENSE <https://github.com/zeroSteiner/AdvancedHTTPServer/blob/master/LICENSE>`__
file.
Features
--------
AdvancedHTTPServer builds on top of Python's included BaseHTTPServer and
provides out of the box support for additional commonly needed features
such as:

- Threaded request handling
- Binding to multiple interfaces
- SSL and SNI support
- Registering handler functions to HTTP resources
- A default robots.txt file
- Basic authentication
- The HTTP verbs GET, HEAD, POST, and OPTIONS
- Remote Procedure Call (RPC) over HTTP
- WebSockets
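
A minimal usage sketch (the handler path and response are arbitrary, and the
conventional ``serve_forever`` entry point is assumed here):

.. code-block:: python

   from advancedhttpserver import AdvancedHTTPServer, RegisterPath, RequestHandler

   @RegisterPath('^hello$')
   def handle_hello(handler, query):
       handler.send_response_full(b'Hello World!\n')

   server = AdvancedHTTPServer(RequestHandler, address=('0.0.0.0', 8080))
   server.serve_forever()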
Dependencies
------------
AdvancedHTTPServer does not have any additional dependencies outside of
the Python standard library.
The following versions of Python are currently supported:
- Python 2.7
- Python 3.3
- Python 3.4
- Python 3.5
- Python 3.6
- Python 3.7
Code Documentation
------------------
AdvancedHTTPServer uses Sphinx for internal code documentation. This
documentation can be generated from source with the command
``sphinx-build docs/source docs/html``. The latest documentation is
kindly hosted on `ReadTheDocs <https://readthedocs.org/>`__ at
`advancedhttpserver.readthedocs.io <https://advancedhttpserver.readthedocs.io/en/latest/>`__.
Changes In Version 2.0
----------------------
- The ``AdvancedHTTPServer`` module has been renamed
``advancedhttpserver``
- Classes prefixed with ``AdvancedHTTPServer`` have been renamed to
have the redundant prefix removed
- The ``hmac_key`` option is no longer supported
- A single ``AdvancedHTTPServer`` instance can now be bound to multiple
ports
- The ``RequestHandler.install_handlers`` method has been renamed to
``on_init``
- ``SERIALIZER_DRIVERS`` was renamed to ``g_serializer_drivers``
- Support for multiple hostnames with SSL using the SNI extension
- Support for persistent HTTP 1.1 TCP connections
Powered By AdvancedHTTPServer
-----------------------------
- `King Phisher <https://github.com/securestate/king-phisher>`__
Phishing Campaign Toolkit
.. |Build Status| image:: http://img.shields.io/travis/zeroSteiner/AdvancedHTTPServer.svg?style=flat-square
:target: https://travis-ci.org/zeroSteiner/AdvancedHTTPServer
.. |Documentation Status| image:: https://readthedocs.org/projects/advancedhttpserver/badge/?version=latest&style=flat-square
:target: http://advancedhttpserver.readthedocs.org/en/latest
.. |Github Issues| image:: http://img.shields.io/github/issues/zerosteiner/AdvancedHTTPServer.svg?style=flat-square
:target: https://github.com/zerosteiner/AdvancedHTTPServer/issues
.. |PyPi Release| image:: https://img.shields.io/pypi/v/AdvancedHTTPServer.svg?style=flat-square
:target: https://pypi.python.org/pypi/AdvancedHTTPServer
| AdvancedHTTPServer | /AdvancedHTTPServer-2.2.0.tar.gz/AdvancedHTTPServer-2.2.0/README.rst | README.rst |
# Homepage: https://github.com/zeroSteiner/AdvancedHTTPServer
# Author: Spencer McIntyre (zeroSteiner)
# Config file example
FILE_CONFIG = """
[server]
ip = 0.0.0.0
port = 8080
web_root = /var/www/html
list_directories = True
# Set an ssl_cert to enable SSL
# ssl_cert = /path/to/cert.pem
# ssl_key = /path/to/cert.key
# ssl_version = TLSv1
"""
# The AdvancedHTTPServer systemd service unit file
# Quick how to:
# 1. Copy this file to /etc/systemd/system/pyhttpd.service
# 2. Edit the run parameters appropriately in the ExecStart option
# 3. Set configuration settings in /etc/pyhttpd.conf
# 4. Run "systemctl daemon-reload"
FILE_SYSTEMD_SERVICE_UNIT = """
[Unit]
Description=Python Advanced HTTP Server
After=network.target
[Service]
Type=simple
ExecStart=/sbin/runuser -l nobody -c "/usr/bin/python -m advancedhttpserver -c /etc/pyhttpd.conf"
ExecStop=/bin/kill -INT $MAINPID
[Install]
WantedBy=multi-user.target
"""
__version__ = '2.2.0'
__all__ = (
'AdvancedHTTPServer',
'RegisterPath',
'RequestHandler',
'RPCClient',
'RPCClientCached',
'RPCError',
'RPCConnectionError',
'ServerTestCase',
'WebSocketHandler',
'build_server_from_argparser',
'build_server_from_config'
)
import base64
import binascii
import collections
import datetime
import hashlib
import io
import json
import logging
import logging.handlers
import mimetypes
import os
import posixpath
import random
import re
import select
import shutil
import socket
import sqlite3
import ssl
import string
import struct
import sys
import threading
import time
import traceback
import unittest
import urllib
import weakref
import zlib
if sys.version_info[0] < 3:
import BaseHTTPServer
import cgi as html
import Cookie
import httplib
import Queue as queue
import SocketServer as socketserver
import urlparse
http = type('http', (), {'client': httplib, 'cookies': Cookie, 'server': BaseHTTPServer})
urllib.parse = urlparse
urllib.parse.quote = urllib.quote
urllib.parse.unquote = urllib.unquote
urllib.parse.urlencode = urllib.urlencode
from ConfigParser import ConfigParser
else:
import html
import http.client
import http.cookies
import http.server
import queue
import socketserver
import urllib.parse
from configparser import ConfigParser
g_handler_map = {}
g_serializer_drivers = {}
"""Dictionary of available drivers for serialization."""
g_ssl_has_server_sni = (getattr(ssl, 'HAS_SNI', False) and sys.version_info >= ((2, 7, 9) if sys.version_info[0] < 3 else (3, 4)))
"""An indication of if the environment offers server side SNI support."""
def _serialize_ext_dump(obj):
if obj.__class__ == datetime.date:
return 'datetime.date', obj.isoformat()
elif obj.__class__ == datetime.datetime:
return 'datetime.datetime', obj.isoformat()
elif obj.__class__ == datetime.time:
return 'datetime.time', obj.isoformat()
raise TypeError('Unknown type: ' + repr(obj))
def _serialize_ext_load(obj_type, obj_value, default):
if obj_type == 'datetime.date':
return datetime.datetime.strptime(obj_value, '%Y-%m-%d').date()
elif obj_type == 'datetime.datetime':
return datetime.datetime.strptime(obj_value, '%Y-%m-%dT%H:%M:%S' + ('.%f' if '.' in obj_value else ''))
elif obj_type == 'datetime.time':
return datetime.datetime.strptime(obj_value, '%H:%M:%S' + ('.%f' if '.' in obj_value else '')).time()
return default
def _json_default(obj):
obj_type, obj_value = _serialize_ext_dump(obj)
return {'__complex_type__': obj_type, 'value': obj_value}
def _json_object_hook(obj):
return _serialize_ext_load(obj.get('__complex_type__'), obj.get('value'), obj)
g_serializer_drivers['application/json'] = {
'dumps': lambda d: json.dumps(d, default=_json_default),
'loads': lambda d, e: json.loads(d, object_hook=_json_object_hook)
}
try:
import msgpack
except ImportError:
has_msgpack = False
else:
has_msgpack = True
_MSGPACK_EXT_TYPES = {10: 'datetime.datetime', 11: 'datetime.date', 12: 'datetime.time'}
def _msgpack_default(obj):
obj_type, obj_value = _serialize_ext_dump(obj)
obj_type = next(i[0] for i in _MSGPACK_EXT_TYPES.items() if i[1] == obj_type)
if sys.version_info[0] == 3:
obj_value = obj_value.encode('utf-8')
return msgpack.ExtType(obj_type, obj_value)
def _msgpack_ext_hook(code, obj_value):
default = msgpack.ExtType(code, obj_value)
if sys.version_info[0] == 3:
obj_value = obj_value.decode('utf-8')
obj_type = _MSGPACK_EXT_TYPES.get(code)
return _serialize_ext_load(obj_type, obj_value, default)
g_serializer_drivers['binary/message-pack'] = {
'dumps': lambda d: msgpack.dumps(d, default=_msgpack_default),
'loads': lambda d, e: msgpack.loads(d, encoding=e, ext_hook=_msgpack_ext_hook)
}
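# A custom serializer can be registered by adding an entry to
# g_serializer_drivers with 'dumps' and 'loads' callables, following the same
# convention as the JSON and message pack drivers above. A sketch (the
# content-type name here is arbitrary):
#
#   import pickle
#   g_serializer_drivers['binary/pickle'] = {
#       'dumps': lambda d: pickle.dumps(d),
#       'loads': lambda d, e: pickle.loads(d)
#   }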
if hasattr(logging, 'NullHandler'):
logging.getLogger('AdvancedHTTPServer').addHandler(logging.NullHandler())
def random_string(size):
"""
Generate a random string of *size* length consisting of both letters
and numbers. This function is not meant for cryptographic purposes
and should not be used to generate security tokens.
:param int size: The length of the string to return.
:return: A string consisting of random characters.
:rtype: str
"""
return ''.join(random.choice(string.ascii_letters + string.digits) for _ in range(size))
def resolve_ssl_protocol_version(version=None):
"""
Look up an SSL protocol version by name. If *version* is not specified, then
the strongest protocol available will be returned.
:param str version: The name of the version to look up.
:return: A protocol constant from the :py:mod:`ssl` module.
:rtype: int
"""
if version is None:
protocol_preference = ('TLSv1_2', 'TLSv1_1', 'TLSv1', 'SSLv3', 'SSLv23', 'SSLv2')
for protocol in protocol_preference:
if hasattr(ssl, 'PROTOCOL_' + protocol):
return getattr(ssl, 'PROTOCOL_' + protocol)
raise RuntimeError('could not find a suitable ssl PROTOCOL_ version constant')
elif isinstance(version, str):
if not hasattr(ssl, 'PROTOCOL_' + version):
raise ValueError('invalid ssl protocol version: ' + version)
return getattr(ssl, 'PROTOCOL_' + version)
raise TypeError("ssl_version() argument 1 must be str, not {0}".format(type(version).__name__))
def build_server_from_argparser(description=None, server_klass=None, handler_klass=None):
"""
Build a server from command line arguments. If a ServerClass or
HandlerClass is specified, then the object must inherit from the
corresponding AdvancedHTTPServer base class.
:param str description: Description string to be passed to the argument parser.
:param server_klass: Alternative server class to use.
:type server_klass: :py:class:`.AdvancedHTTPServer`
:param handler_klass: Alternative handler class to use.
:type handler_klass: :py:class:`.RequestHandler`
:return: A configured server instance.
:rtype: :py:class:`.AdvancedHTTPServer`
"""
import argparse
def _argp_dir_type(arg):
if not os.path.isdir(arg):
raise argparse.ArgumentTypeError("{0} is not a valid directory".format(repr(arg)))
return arg
def _argp_port_type(arg):
if not arg.isdigit():
raise argparse.ArgumentTypeError("{0} is not a valid port".format(repr(arg)))
arg = int(arg)
if arg < 0 or arg > 65535:
raise argparse.ArgumentTypeError("{0} is not a valid port".format(repr(arg)))
return arg
description = (description or 'HTTP Server')
server_klass = (server_klass or AdvancedHTTPServer)
handler_klass = (handler_klass or RequestHandler)
parser = argparse.ArgumentParser(conflict_handler='resolve', description=description, fromfile_prefix_chars='@')
parser.epilog = 'When a config file is specified with --config only the --log, --log-file and --password options will be used.'
parser.add_argument('-c', '--conf', dest='config', type=argparse.FileType('r'), help='read settings from a config file')
parser.add_argument('-i', '--ip', dest='ip', default='0.0.0.0', help='the ip address to serve on')
parser.add_argument('-L', '--log', dest='loglvl', choices=('DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL'), default='INFO', help='set the logging level')
parser.add_argument('-p', '--port', dest='port', default=8080, type=_argp_port_type, help='port to serve on')
parser.add_argument('-v', '--version', action='version', version=parser.prog + ' Version: ' + __version__)
parser.add_argument('-w', '--web-root', dest='web_root', default='.', type=_argp_dir_type, help='path to the web root directory')
parser.add_argument('--log-file', dest='log_file', help='log information to a file')
parser.add_argument('--no-threads', dest='use_threads', action='store_false', default=True, help='disable threading')
parser.add_argument('--password', dest='password', help='password to use for basic authentication')
ssl_group = parser.add_argument_group('ssl options')
ssl_group.add_argument('--ssl-cert', dest='ssl_cert', help='the ssl cert to use')
ssl_group.add_argument('--ssl-key', dest='ssl_key', help='the ssl key to use')
ssl_group.add_argument('--ssl-version', dest='ssl_version', choices=[p[9:] for p in dir(ssl) if p.startswith('PROTOCOL_')], help='the version of ssl to use')
arguments = parser.parse_args()
logging.getLogger('').setLevel(logging.DEBUG)
console_log_handler = logging.StreamHandler()
console_log_handler.setLevel(getattr(logging, arguments.loglvl))
console_log_handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)-8s %(message)s"))
logging.getLogger('').addHandler(console_log_handler)
if arguments.log_file:
main_file_handler = logging.handlers.RotatingFileHandler(arguments.log_file, maxBytes=262144, backupCount=5)
main_file_handler.setLevel(logging.DEBUG)
main_file_handler.setFormatter(logging.Formatter("%(asctime)s %(name)-30s %(levelname)-10s %(message)s"))
logging.getLogger('').setLevel(logging.DEBUG)
logging.getLogger('').addHandler(main_file_handler)
if arguments.config:
config = ConfigParser()
config.readfp(arguments.config)
server = build_server_from_config(
config,
'server',
server_klass=server_klass,
handler_klass=handler_klass
)
else:
server = server_klass(
handler_klass,
address=(arguments.ip, arguments.port),
use_threads=arguments.use_threads,
ssl_certfile=arguments.ssl_cert,
ssl_keyfile=arguments.ssl_key,
ssl_version=arguments.ssl_version
)
server.serve_files_root = arguments.web_root
if arguments.password:
server.auth_add_creds('', arguments.password)
return server
def build_server_from_config(config, section_name, server_klass=None, handler_klass=None):
"""
Build a server from a provided :py:class:`configparser.ConfigParser`
instance. If a ServerClass or HandlerClass is specified, then the
object must inherit from the corresponding AdvancedHTTPServer base
class.
:param config: Configuration to retrieve settings from.
:type config: :py:class:`configparser.ConfigParser`
:param str section_name: The section name of the configuration to use.
:param server_klass: Alternative server class to use.
:type server_klass: :py:class:`.AdvancedHTTPServer`
:param handler_klass: Alternative handler class to use.
:type handler_klass: :py:class:`.RequestHandler`
:return: A configured server instance.
:rtype: :py:class:`.AdvancedHTTPServer`
"""
server_klass = (server_klass or AdvancedHTTPServer)
handler_klass = (handler_klass or RequestHandler)
port = config.getint(section_name, 'port')
web_root = None
if config.has_option(section_name, 'web_root'):
web_root = config.get(section_name, 'web_root')
if config.has_option(section_name, 'ip'):
ip = config.get(section_name, 'ip')
else:
ip = '0.0.0.0'
ssl_certfile = None
if config.has_option(section_name, 'ssl_cert'):
ssl_certfile = config.get(section_name, 'ssl_cert')
ssl_keyfile = None
if config.has_option(section_name, 'ssl_key'):
ssl_keyfile = config.get(section_name, 'ssl_key')
ssl_version = None
if config.has_option(section_name, 'ssl_version'):
ssl_version = config.get(section_name, 'ssl_version')
server = server_klass(
handler_klass,
address=(ip, port),
ssl_certfile=ssl_certfile,
ssl_keyfile=ssl_keyfile,
ssl_version=ssl_version
)
if config.has_option(section_name, 'password_type'):
password_type = config.get(section_name, 'password_type')
else:
password_type = 'md5'
if config.has_option(section_name, 'password'):
password = config.get(section_name, 'password')
if config.has_option(section_name, 'username'):
username = config.get(section_name, 'username')
else:
username = ''
server.auth_add_creds(username, password, pwtype=password_type)
cred_idx = 0
while config.has_option(section_name, 'password' + str(cred_idx)):
password = config.get(section_name, 'password' + str(cred_idx))
if not config.has_option(section_name, 'username' + str(cred_idx)):
break
username = config.get(section_name, 'username' + str(cred_idx))
server.auth_add_creds(username, password, pwtype=password_type)
cred_idx += 1
if web_root is None:
server.serve_files = False
else:
server.serve_files = True
server.serve_files_root = web_root
if config.has_option(section_name, 'list_directories'):
server.serve_files_list_directories = config.getboolean(section_name, 'list_directories')
return server
class _RequestEmbryo(object):
__slots__ = ('server', 'socket', 'address', 'created')
def __init__(self, server, client_socket, address, created=None):
server.request_embryos.append(self)
self.server = weakref.ref(server)
self.socket = client_socket
self.address = address
self.created = created or time.time()
def fileno(self):
return self.socket.fileno()
def serve_ready(self):
server = self.server() # dereference the weakref to the server
if not server:
return False
try:
self.socket.do_handshake()
except ssl.SSLWantReadError:
return False
except (socket.error, OSError, ValueError):
self.socket.close()
server.request_embryos.remove(self)
return False
self.socket.settimeout(None)
server.request_embryos.remove(self)
server.request_queue.put((self.socket, self.address))
server.handle_request()
return True
class RegisterPath(object):
"""
Register a path and handler with the global handler map. This can be
used as a decorator. If no handler is specified then the path and
function will be registered with all :py:class:`.RequestHandler`
instances.
.. code-block:: python
@RegisterPath('^test$')
def handle_test(handler, query):
pass
"""
def __init__(self, path, handler=None, is_rpc=False):
"""
:param str path: The path regex to register the function to.
:param str handler: A specific :py:class:`.RequestHandler` class to register the handler with.
:param bool is_rpc: Whether the handler is an RPC handler or not.
"""
self.path = path
self.is_rpc = is_rpc
if handler is None or isinstance(handler, str):
self.handler = handler
elif hasattr(handler, '__name__'):
self.handler = handler.__name__
elif hasattr(handler, '__class__'):
self.handler = handler.__class__.__name__
else:
raise ValueError('unknown handler: ' + repr(handler))
def __call__(self, function):
handler_map = g_handler_map.get(self.handler, {})
handler_map[self.path] = (function, self.is_rpc)
g_handler_map[self.handler] = handler_map
return function
class RPCError(Exception):
"""
This class represents an RPC error either local or remote. Any errors
in routines executed on the server will raise this error.
"""
def __init__(self, message, status=None, remote_exception=None):
super(RPCError, self).__init__()
self.message = message
self.status = status
self.remote_exception = remote_exception
def __repr__(self):
return "{0}(message='{1}', status={2}, remote_exception={3})".format(self.__class__.__name__, self.message, self.status, self.is_remote_exception)
def __str__(self):
if self.is_remote_exception:
return 'a remote exception occurred'
return "the server responded with {0} '{1}'".format(self.status, self.message)
@property
def is_remote_exception(self):
"""
This is true if the represented error resulted from an exception on the
remote server.
:type: bool
"""
return bool(self.remote_exception is not None)
class RPCConnectionError(RPCError):
"""
An exception raised when there is a connection-related error encountered by
the RPC client.
.. versionadded:: 2.1.0
"""
pass
class RPCClient(object):
"""
This object facilitates communication with remote RPC methods as
provided by a :py:class:`.RequestHandler` instance.
Once created this object can be called directly, doing so is the same
as using the call method.
This object uses locks internally to be thread safe. Only one thread
can execute a function at a time.
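
Example usage (a minimal sketch; the remote method name 'add' is
hypothetical):

.. code-block:: python

   client = RPCClient(('127.0.0.1', 8080))
   result = client('add', 1, 2)  # equivalent to client.call('add', 1, 2)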
"""
def __init__(self, address, use_ssl=False, username=None, password=None, uri_base='/', ssl_context=None):
"""
:param tuple address: The address of the server to connect to as (host, port).
:param bool use_ssl: Whether to connect with SSL or not.
:param str username: The username to authenticate with.
:param str password: The password to authenticate with.
:param str uri_base: An optional prefix for all methods.
:param ssl_context: An optional SSL context to use for SSL related options.
"""
self.host = str(address[0])
self.port = int(address[1])
if not hasattr(self, 'logger'):
self.logger = logging.getLogger('AdvancedHTTPServer.RPCClient')
self.headers = None
"""An optional dictionary of headers to include with each RPC request."""
self.use_ssl = bool(use_ssl)
self.ssl_context = ssl_context
self.uri_base = str(uri_base)
self.username = (None if username is None else str(username))
self.password = (None if password is None else str(password))
self.lock = threading.Lock()
"""A :py:class:`threading.Lock` instance used to synchronize operations."""
self.serializer = None
"""The :py:class:`.Serializer` instance to use for encoding RPC data to the server."""
self.set_serializer('application/json')
self.reconnect()
def __del__(self):
self.client.close()
def __reduce__(self):
address = (self.host, self.port)
return (self.__class__, (address, self.use_ssl, self.username, self.password, self.uri_base))
def set_serializer(self, serializer_name, compression=None):
"""
Configure the serializer to use for communication with the server.
The serializer specified must be valid and in the
:py:data:`.g_serializer_drivers` map.
:param str serializer_name: The name of the serializer to use.
:param str compression: The name of a compression library to use.
"""
self.serializer = Serializer(serializer_name, charset='UTF-8', compression=compression)
self.logger.debug('using serializer: ' + serializer_name)
def __call__(self, *args, **kwargs):
return self.call(*args, **kwargs)
def encode(self, data):
"""Encode data with the configured serializer."""
return self.serializer.dumps(data)
def decode(self, data):
"""Decode data with the configured serializer."""
return self.serializer.loads(data)
def reconnect(self):
"""Reconnect to the remote server."""
self.lock.acquire()
if self.use_ssl:
self.client = http.client.HTTPSConnection(self.host, self.port, context=self.ssl_context)
else:
self.client = http.client.HTTPConnection(self.host, self.port)
self.lock.release()
def call(self, method, *args, **kwargs):
"""
Issue a call to the remote end point to execute the specified
procedure.
:param str method: The name of the remote procedure to execute.
:return: The return value from the remote function.
"""
if kwargs:
options = self.encode(dict(args=args, kwargs=kwargs))
else:
options = self.encode(args)
headers = {}
if self.headers:
headers.update(self.headers)
headers['Content-Type'] = self.serializer.content_type
headers['Content-Length'] = str(len(options))
headers['Connection'] = 'close'
if self.username is not None and self.password is not None:
headers['Authorization'] = 'Basic ' + base64.b64encode((self.username + ':' + self.password).encode('UTF-8')).decode('UTF-8')
method = os.path.join(self.uri_base, method)
self.logger.debug('calling RPC method: ' + method[1:])
try:
with self.lock:
self.client.request('RPC', method, options, headers)
resp = self.client.getresponse()
except http.client.ImproperConnectionState:
raise RPCConnectionError('improper connection state')
if resp.status != 200:
raise RPCError(resp.reason, resp.status)
resp_data = resp.read()
resp_data = self.decode(resp_data)
if not ('exception_occurred' in resp_data and 'result' in resp_data):
raise RPCError('missing response information', resp.status)
if resp_data['exception_occurred']:
raise RPCError('remote method incurred an exception', resp.status, remote_exception=resp_data['exception'])
return resp_data['result']
class RPCClientCached(RPCClient):
"""
This object builds upon :py:class:`.RPCClient` and
provides additional methods for caching results in memory.
"""
def __init__(self, *args, **kwargs):
cache_db = kwargs.pop('cache_db', ':memory:')
super(RPCClientCached, self).__init__(*args, **kwargs)
self.cache_db = sqlite3.connect(cache_db, check_same_thread=False)
cursor = self.cache_db.cursor()
cursor.execute('CREATE TABLE IF NOT EXISTS cache (method TEXT NOT NULL, options_hash BLOB NOT NULL, return_value BLOB NOT NULL)')
self.cache_db.commit()
self.cache_lock = threading.Lock()
def cache_call(self, method, *options):
"""
Call a remote method and store the result locally. Subsequent
calls to the same method with the same arguments will return the
cached result without invoking the remote procedure. Cached results are
kept indefinitely and must be manually refreshed with a call to
:py:meth:`.cache_call_refresh`.
:param str method: The name of the remote procedure to execute.
:return: The return value from the remote function.
"""
options_hash = self.encode(options)
if len(options_hash) > 20:
options_hash = hashlib.new('sha1', options_hash).digest()
options_hash = sqlite3.Binary(options_hash)
with self.cache_lock:
cursor = self.cache_db.cursor()
cursor.execute('SELECT return_value FROM cache WHERE method = ? AND options_hash = ?', (method, options_hash))
return_value = cursor.fetchone()
if return_value:
return_value = bytes(return_value[0])
return self.decode(return_value)
return_value = self.call(method, *options)
store_return_value = sqlite3.Binary(self.encode(return_value))
with self.cache_lock:
cursor = self.cache_db.cursor()
cursor.execute('INSERT INTO cache (method, options_hash, return_value) VALUES (?, ?, ?)', (method, options_hash, store_return_value))
self.cache_db.commit()
return return_value
def cache_call_refresh(self, method, *options):
"""
Call a remote method and update the local cache with the result
if it already existed.
:param str method: The name of the remote procedure to execute.
:return: The return value from the remote function.
"""
options_hash = self.encode(options)
if len(options_hash) > 20:
options_hash = hashlib.new('sha1', options_hash).digest()
options_hash = sqlite3.Binary(options_hash)
with self.cache_lock:
cursor = self.cache_db.cursor()
cursor.execute('DELETE FROM cache WHERE method = ? AND options_hash = ?', (method, options_hash))
return_value = self.call(method, *options)
store_return_value = sqlite3.Binary(self.encode(return_value))
with self.cache_lock:
cursor = self.cache_db.cursor()
cursor.execute('INSERT INTO cache (method, options_hash, return_value) VALUES (?, ?, ?)', (method, options_hash, store_return_value))
self.cache_db.commit()
return return_value
def cache_clear(self):
"""Purge the local store of all cached function information."""
with self.cache_lock:
cursor = self.cache_db.cursor()
cursor.execute('DELETE FROM cache')
self.cache_db.commit()
self.logger.info('the RPC cache has been purged')
return
class ServerNonThreaded(http.server.HTTPServer, object):
"""
This class is used internally by :py:class:`.AdvancedHTTPServer` and
is not intended for use by other classes or functions. It is responsible for
listening on a single address, TCP port and SSL combination.
"""
def __init__(self, *args, **kwargs):
self.__config = kwargs.pop('config')
if not hasattr(self, 'logger'):
self.logger = logging.getLogger('AdvancedHTTPServer')
self.allow_reuse_address = True
self.request_queue = queue.Queue()
self.request_embryos = []
self.using_ssl = False
super(ServerNonThreaded, self).__init__(*args, **kwargs)
def __repr__(self):
address = self.server_address[0]
if self.socket.family == socket.AF_INET:
address += ':' + str(self.server_address[1])
elif self.socket.family == socket.AF_INET6:
address = '[' + address + ']:' + str(self.server_address[1])
return "<{0} address: {1} ssl: {2!r}>".format(self.__class__.__name__, address, self.using_ssl)
@property
def read_checkable_fds(self):
return [self] + self.request_embryos
def get_config(self):
return self.__config
def get_request(self):
return self.request_queue.get(block=True, timeout=None)
def handle_request(self):
timeout = self.socket.gettimeout()
if timeout is None:
timeout = self.timeout
elif self.timeout is not None:
timeout = min(timeout, self.timeout)
try:
request, client_address = self.request_queue.get(block=True, timeout=timeout)
except queue.Empty:
return self.handle_timeout()
except OSError:
return None
if self.verify_request(request, client_address):
try:
self.process_request(request, client_address)
except Exception:
self.handle_error(request, client_address)
self.shutdown_request(request)
except:
self.shutdown_request(request)
raise
else:
self.shutdown_request(request)
return None
def finish_request(self, request, client_address):
try:
super(ServerNonThreaded, self).finish_request(request, client_address)
except IOError:
self.logger.warning('IOError encountered in finish_request')
except KeyboardInterrupt:
self.logger.warning('KeyboardInterrupt encountered in finish_request')
self.shutdown()
def serve_ready(self):
client_socket, address = self.socket.accept()
if self.using_ssl:
client_socket.settimeout(0)
embryo = _RequestEmbryo(self, client_socket, address)
embryo.serve_ready()
else:
client_socket.settimeout(None)
self.request_queue.put((client_socket, address))
self.handle_request()
def server_bind(self, *args, **kwargs):
self.socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
super(ServerNonThreaded, self).server_bind(*args, **kwargs)
def shutdown(self, *args, **kwargs):
try:
self.socket.shutdown(socket.SHUT_RDWR)
except socket.error:
pass
self.socket.close()
class ServerThreaded(socketserver.ThreadingMixIn, ServerNonThreaded):
"""
This class is used internally by :py:class:`.AdvancedHTTPServer` and
is not intended for use by other classes or functions. It is responsible for
listening on a single address, TCP port and SSL combination.
"""
daemon_threads = True
class RequestHandler(http.server.BaseHTTPRequestHandler, object):
"""
This is the primary http request handler class of the
AdvancedHTTPServer framework. Custom request handlers must inherit
from this object to be compatible. Instances of this class are created
automatically. This class will handle standard HTTP GET, HEAD, OPTIONS,
and POST requests. Callback functions called handlers can be registered
to resource paths using regular expressions in the *handler_map*
attribute for GET HEAD and POST requests and *rpc_handler_map* for RPC
requests. Non-RPC handler functions that are not class methods of
the request handler instance will be passed the instance of the
request handler as the first argument.
"""
if not mimetypes.inited:
mimetypes.init() # try to read system mime.types
extensions_map = mimetypes.types_map.copy()
extensions_map.update({
'': 'application/octet-stream', # Default
'.py': 'text/plain',
'.rb': 'text/plain',
'.c': 'text/plain',
'.h': 'text/plain',
})
protocol_version = 'HTTP/1.1'
wbufsize = 4096
web_socket_handler = None
"""An optional class to handle Web Sockets. This class must be derived from :py:class:`.WebSocketHandler`."""
def __init__(self, *args, **kwargs):
self.cookies = None
self.path = None
self.wfile = None
self._wfile = None
self.server = args[2]
self.headers_active = False
"""Whether or not the request is in the sending headers phase."""
self.handler_map = {}
"""The dictionary object which maps regular expressions of resources to the functions which should handle them."""
self.rpc_handler_map = {}
"""The dictionary object which maps regular expressions of RPC functions to their handlers."""
for map_name in (None, self.__class__.__name__):
handler_map = g_handler_map.get(map_name, {})
for path, function_info in handler_map.items():
function, function_is_rpc = function_info
if function_is_rpc:
self.rpc_handler_map[path] = function
else:
self.handler_map[path] = function
self.basic_auth_user = None
"""The name of the user if the current request is using basic authentication."""
self.query_data = None
"""The parameter data that has been passed to the server parsed as a dict."""
self.raw_query_data = None
"""The raw data that was parsed into the :py:attr:`.query_data` attribute."""
self.__config = self.server.get_config()
"""A reference to the configuration provided by the server."""
self.on_init()
super(RequestHandler, self).__init__(*args, **kwargs)
def setup(self, *args, **kwargs):
ret = super(RequestHandler, self).setup(*args, **kwargs)
self._wfile = self.wfile
return ret
def on_init(self):
"""
This method is meant to be overridden by custom classes. It is
called as part of the __init__ method and provides an opportunity
for the handler maps to be populated with entries or the config to be
customized.
"""
pass # override me
def __get_handler(self, is_rpc=False):
handler = None
handler_map = (self.rpc_handler_map if is_rpc else self.handler_map)
for (path_regex, handler) in handler_map.items():
if re.match(path_regex, self.path):
break
else:
return (None, None)
is_method = False
self_handler = None
if hasattr(handler, '__name__'):
self_handler = getattr(self, handler.__name__, None)
if self_handler is not None and (handler == self_handler.__func__ or handler == self_handler):
is_method = True
return (handler, is_method)
def version_string(self):
return self.__config['server_version']
def respond_file(self, file_path, attachment=False, query=None):
"""
Respond to the client by serving a file, either directly or as
an attachment.
:param str file_path: The path to the file to serve, this does not need to be in the web root.
:param bool attachment: Whether to serve the file as a download by setting the Content-Disposition header.
"""
del query
file_path = os.path.abspath(file_path)
try:
file_obj = open(file_path, 'rb')
except IOError:
self.respond_not_found()
return
self.send_response(200)
self.send_header('Content-Type', self.guess_mime_type(file_path))
fs = os.fstat(file_obj.fileno())
self.send_header('Content-Length', str(fs[6]))
if attachment:
file_name = os.path.basename(file_path)
self.send_header('Content-Disposition', 'attachment; filename=' + file_name)
self.send_header('Last-Modified', self.date_time_string(fs.st_mtime))
self.end_headers()
shutil.copyfileobj(file_obj, self.wfile)
file_obj.close()
return
def respond_list_directory(self, dir_path, query=None):
"""
Respond to the client with an HTML page listing the contents of
the specified directory.
:param str dir_path: The path of the directory to list the contents of.
"""
del query
try:
dir_contents = os.listdir(dir_path)
except os.error:
self.respond_not_found()
return
if os.path.normpath(dir_path) != self.__config['serve_files_root']:
dir_contents.append('..')
dir_contents.sort(key=lambda a: a.lower())
displaypath = html.escape(urllib.parse.unquote(self.path), quote=True)
f = io.BytesIO()
encoding = sys.getfilesystemencoding()
f.write(b'<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">\n')
f.write(b'<html>\n<title>Directory listing for ' + displaypath.encode(encoding) + b'</title>\n')
f.write(b'<body>\n<h2>Directory listing for ' + displaypath.encode(encoding) + b'</h2>\n')
f.write(b'<hr>\n<ul>\n')
for name in dir_contents:
fullname = os.path.join(dir_path, name)
displayname = linkname = name
# Append / for directories or @ for symbolic links
if os.path.isdir(fullname):
displayname = name + "/"
linkname = name + "/"
if os.path.islink(fullname):
displayname = name + "@"
# Note: a link to a directory displays with @ and links with /
f.write(('<li><a href="' + urllib.parse.quote(linkname) + '">' + html.escape(displayname, quote=True) + '</a>\n').encode(encoding))
f.write(b'</ul>\n<hr>\n</body>\n</html>\n')
length = f.tell()
f.seek(0)
self.send_response(200)
self.send_header('Content-Type', 'text/html; charset=' + encoding)
self.send_header('Content-Length', length)
self.end_headers()
shutil.copyfileobj(f, self.wfile)
f.close()
return
def respond_not_found(self):
"""Respond to the client with a default 404 message."""
self.send_response_full(b'Resource Not Found\n', status=404)
return
def respond_redirect(self, location='/'):
"""
Respond to the client with a 301 message and redirect them with
a Location header.
:param str location: The new location to redirect the client to.
"""
self.send_response(301)
self.send_header('Content-Length', 0)
self.send_header('Location', location)
self.end_headers()
return
def respond_server_error(self, status=None, status_line=None, message=None):
"""
Handle an internal server error, logging a traceback if executed
within an exception handler.
:param int status: The status code to respond to the client with.
:param str status_line: The status message to respond to the client with.
:param str message: The body of the response that is sent to the client.
"""
(ex_type, ex_value, ex_traceback) = sys.exc_info()
if ex_type:
(ex_file_name, ex_line, _, _) = traceback.extract_tb(ex_traceback)[-1]
line_info = "{0}:{1}".format(ex_file_name, ex_line)
log_msg = "encountered {0} in {1}".format(repr(ex_value), line_info)
self.server.logger.error(log_msg, exc_info=True)
status = (status or 500)
status_line = (status_line or http.client.responses.get(status, 'Internal Server Error')).strip()
self.send_response(status, status_line)
message = (message or status_line)
if isinstance(message, (str, bytes)):
self.send_header('Content-Length', len(message))
self.end_headers()
if isinstance(message, str):
self.wfile.write(message.encode(sys.getdefaultencoding()))
else:
self.wfile.write(message)
elif hasattr(message, 'fileno'):
fs = os.fstat(message.fileno())
self.send_header('Content-Length', fs[6])
self.end_headers()
shutil.copyfileobj(message, self.wfile)
else:
self.end_headers()
return
def respond_unauthorized(self, request_authentication=False):
"""
Respond to the client that the request is unauthorized.
:param bool request_authentication: Whether to request basic authentication information by sending a WWW-Authenticate header.
"""
headers = {}
if request_authentication:
headers['WWW-Authenticate'] = 'Basic realm="' + self.__config['server_version'] + '"'
self.send_response_full(b'Unauthorized', status=401, headers=headers)
return
def dispatch_handler(self, query=None):
"""
Dispatch functions based on the established handler_map. It is
generally not necessary to override this function and doing so
will prevent any handlers from being executed. This function is
executed automatically when requests of either GET, HEAD, or POST
are received.
:param dict query: Parsed query parameters from the corresponding request.
"""
query = (query or {})
# normalize the path
# abandon query parameters
self.path = self.path.split('?', 1)[0]
self.path = self.path.split('#', 1)[0]
original_path = urllib.parse.unquote(self.path)
self.path = posixpath.normpath(original_path)
words = self.path.split('/')
words = filter(None, words)
tmp_path = ''
for word in words:
_, word = os.path.splitdrive(word)
_, word = os.path.split(word)
if word in (os.curdir, os.pardir):
continue
tmp_path = os.path.join(tmp_path, word)
self.path = tmp_path
if self.path == 'robots.txt' and self.__config['serve_robots_txt']:
self.send_response_full(self.__config['robots_txt'])
return
self.cookies = http.cookies.SimpleCookie(self.headers.get('cookie', ''))
handler, is_method = self.__get_handler(is_rpc=False)
if handler is not None:
try:
handler(*((query,) if is_method else (self, query)))
except Exception:
self.respond_server_error()
return
if not self.__config['serve_files']:
self.respond_not_found()
return
file_path = self.__config['serve_files_root']
file_path = os.path.join(file_path, tmp_path)
if os.path.isfile(file_path) and os.access(file_path, os.R_OK):
self.respond_file(file_path, query=query)
return
elif os.path.isdir(file_path) and os.access(file_path, os.R_OK):
if not original_path.endswith('/'):
# redirect browser, doing what apache does
destination = self.path + '/'
if self.command == 'GET' and self.query_data:
destination += '?' + urllib.parse.urlencode(self.query_data, True)
self.respond_redirect(destination)
return
for index in ['index.html', 'index.htm']:
index = os.path.join(file_path, index)
if os.path.isfile(index) and os.access(index, os.R_OK):
self.respond_file(index, query=query)
return
if self.__config['serve_files_list_directories']:
self.respond_list_directory(file_path, query=query)
return
self.respond_not_found()
return
def send_response(self, *args, **kwargs):
if self.wfile != self._wfile:
self.wfile.close()
self.wfile = self._wfile
super(RequestHandler, self).send_response(*args, **kwargs)
self.headers_active = True
# in the event that the http request is invalid, all attributes may not be defined
headers = getattr(self, 'headers', {})
protocol_version = getattr(self, 'protocol_version', 'HTTP/1.0').upper()
if headers.get('Connection', None) == 'keep-alive' and protocol_version == 'HTTP/1.1':
connection = 'keep-alive'
else:
connection = 'close'
self.send_header('Connection', connection)
def send_response_full(self, message, content_type='text/plain; charset=UTF-8', status=200, headers=None):
self.send_response(status)
self.send_header('Content-Type', content_type)
self.send_header('Content-Length', len(message))
if headers is not None:
for header, value in headers.items():
self.send_header(header, value)
self.end_headers()
self.wfile.write(message)
return
def end_headers(self):
super(RequestHandler, self).end_headers()
self.headers_active = False
if self.command == 'HEAD':
self.wfile.flush()
self.wfile = open(os.devnull, 'wb')
def guess_mime_type(self, path):
"""
Guess an appropriate MIME type based on the extension of the
provided path.
:param str path: The path of the file to analyze.
:return: The guessed MIME type, or the default if none is found.
:rtype: str
"""
_, ext = posixpath.splitext(path)
if ext in self.extensions_map:
return self.extensions_map[ext]
ext = ext.lower()
return self.extensions_map[ext if ext in self.extensions_map else '']
def stock_handler_respond_unauthorized(self, query):
"""This method provides a handler suitable to be used in the handler_map."""
del query
self.respond_unauthorized()
return
def stock_handler_respond_not_found(self, query):
"""This method provides a handler suitable to be used in the handler_map."""
del query
self.respond_not_found()
return
def check_authorization(self):
"""
Check for the presence of a basic auth Authorization header and
if the credentials contained within are valid.
:return: Whether or not the credentials are valid.
:rtype: bool
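
A subclass may define a ``custom_authentication`` method to replace the
credential store check (a minimal sketch; the credentials shown are
arbitrary):

.. code-block:: python

   class Handler(RequestHandler):
       def custom_authentication(self, username, password):
           return username == 'admin' and password == 'secret'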
"""
try:
store = self.__config.get('basic_auth')
if store is None:
return True
auth_info = self.headers.get('Authorization')
if not auth_info:
return False
auth_info = auth_info.split()
if len(auth_info) != 2 or auth_info[0] != 'Basic':
return False
auth_info = base64.b64decode(auth_info[1]).decode(sys.getdefaultencoding())
username = auth_info.split(':')[0]
password = ':'.join(auth_info.split(':')[1:])
password_bytes = password.encode(sys.getdefaultencoding())
if hasattr(self, 'custom_authentication'):
if self.custom_authentication(username, password):
self.basic_auth_user = username
return True
return False
if username not in store:
self.server.logger.warning('received invalid username: ' + username)
return False
password_data = store[username]
if password_data['type'] == 'plain':
if password == password_data['value']:
self.basic_auth_user = username
return True
elif hashlib.new(password_data['type'], password_bytes).digest() == password_data['value']:
self.basic_auth_user = username
return True
self.server.logger.warning('received invalid password from user: ' + username)
except Exception:
pass
return False
def cookie_get(self, name):
"""
Check for a cookie value by name.
:param str name: Name of the cookie value to retrieve.
:return: Returns the cookie value if it's set or None if it's not found.
"""
if not hasattr(self, 'cookies'):
return None
if self.cookies.get(name):
return self.cookies.get(name).value
return None
def cookie_set(self, name, value):
"""
Set the value of a client cookie. This can only be called while
headers can be sent.
:param str name: The name of the cookie value to set.
:param str value: The value of the cookie to set.
"""
if not self.headers_active:
raise RuntimeError('headers have already been ended')
cookie = "{0}={1}; Path=/; HttpOnly".format(name, value)
self.send_header('Set-Cookie', cookie)
def do_GET(self):
if not self.check_authorization():
self.respond_unauthorized(request_authentication=True)
return
uri = urllib.parse.urlparse(self.path)
self.path = uri.path
self.query_data = urllib.parse.parse_qs(uri.query)
if self.web_socket_handler is not None and self.headers.get('upgrade', '').lower() == 'websocket':
self.web_socket_handler(self) # pylint: disable=not-callable
return
self.dispatch_handler(self.query_data)
return
do_HEAD = do_GET
def do_POST(self):
if not self.check_authorization():
self.respond_unauthorized(request_authentication=True)
return
content_length = int(self.headers.get('content-length', 0))
data = self.rfile.read(content_length)
self.raw_query_data = data
content_type = self.headers.get('content-type', '')
content_type = content_type.split(';', 1)[0]
self.query_data = {}
try:
if not isinstance(data, str):
data = data.decode(self.get_content_type_charset())
if content_type.startswith('application/json'):
data = json.loads(data)
if isinstance(data, dict):
self.query_data = dict([(i[0], [i[1]]) for i in data.items()])
else:
self.query_data = urllib.parse.parse_qs(data, keep_blank_values=1)
except Exception:
self.respond_server_error(400)
else:
self.dispatch_handler(self.query_data)
return
def do_OPTIONS(self):
available_methods = list(x[3:] for x in dir(self) if x.startswith('do_'))
if 'RPC' in available_methods and not self.rpc_handler_map:
available_methods.remove('RPC')
self.send_response(200)
self.send_header('Content-Length', 0)
self.send_header('Allow', ', '.join(available_methods))
self.end_headers()
def do_RPC(self):
if not self.check_authorization():
self.respond_unauthorized(request_authentication=True)
return
data_length = self.headers.get('content-length')
if data_length is None:
self.send_error(411)
return
content_type = self.headers.get('content-type')
if content_type is None:
self.send_error(400, 'Missing Header: Content-Type')
return
try:
data_length = int(self.headers.get('content-length'))
data = self.rfile.read(data_length)
except Exception:
self.send_error(400, 'Invalid Data')
return
try:
serializer = Serializer.from_content_type(content_type)
except ValueError:
self.send_error(400, 'Invalid Content-Type')
return
try:
data = serializer.loads(data)
except Exception:
self.server.logger.warning('serializer failed to load data')
self.send_error(400, 'Invalid Data')
return
if isinstance(data, (list, tuple)):
meth_args = data
meth_kwargs = {}
elif isinstance(data, dict):
meth_args = data.get('args', ())
meth_kwargs = data.get('kwargs', {})
else:
self.server.logger.warning('received data does not match the calling convention')
self.send_error(400, 'Invalid Data')
return
rpc_handler, is_method = self.__get_handler(is_rpc=True)
if not rpc_handler:
self.respond_server_error(501)
return
if not is_method:
meth_args = (self,) + tuple(meth_args)
response = {'result': None, 'exception_occurred': False}
try:
response['result'] = rpc_handler(*meth_args, **meth_kwargs)
except Exception as error:
response['exception_occurred'] = True
exc_name = "{0}.{1}".format(error.__class__.__module__, error.__class__.__name__)
response['exception'] = dict(name=exc_name, message=getattr(error, 'message', None))
self.server.logger.error('error: ' + exc_name + ' occurred while calling rpc method: ' + self.path, exc_info=True)
try:
response = serializer.dumps(response)
except Exception:
self.respond_server_error(message='Failed To Pack Response')
return
self.send_response(200)
self.send_header('Content-Type', serializer.content_type)
self.end_headers()
self.wfile.write(response)
return
def log_error(self, msg_format, *args):
self.server.logger.warning(self.address_string() + ' ' + msg_format % args)
def log_message(self, msg_format, *args):
self.server.logger.info(self.address_string() + ' ' + msg_format % args)
def get_query(self, name, default=None):
"""
Get a value from the query data that was sent to the server.
:param str name: The name of the query value to retrieve.
:param default: The value to return if *name* is not specified.
:return: The value if it exists, otherwise *default* will be returned.
:rtype: str
"""
return self.query_data.get(name, [default])[0]
def get_content_type_charset(self, default='UTF-8'):
"""
Inspect the Content-Type header to retrieve the charset that the client
has specified.
:param str default: The default charset to return if none exists.
:return: The charset of the request.
:rtype: str
"""
encoding = default
header = self.headers.get('Content-Type', '')
idx = header.find('charset=')
if idx > 0:
encoding = (header[idx + 8:].split(' ', 1)[0] or encoding)
return encoding
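# Example (illustrative): a header of 'application/json; charset=ISO-8859-1'
# yields 'ISO-8859-1', while a bare 'text/plain' falls back to *default*.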
class WakeupFd(object):
__slots__ = ('read_fd', 'write_fd')
def __init__(self):
self.read_fd, self.write_fd = os.pipe()
def close(self):
os.close(self.read_fd)
os.close(self.write_fd)
def fileno(self):
return self.read_fd
class WebSocketHandler(object):
"""
A handler for web socket connections.
"""
_opcode_continue = 0x00
_opcode_text = 0x01
_opcode_binary = 0x02
_opcode_close = 0x08
_opcode_ping = 0x09
_opcode_pong = 0x0a
_opcode_names = {
_opcode_continue: 'continue',
_opcode_text: 'text',
_opcode_binary: 'binary',
_opcode_close: 'close',
_opcode_ping: 'ping',
_opcode_pong: 'pong'
}
guid = '258EAFA5-E914-47DA-95CA-C5AB0DC85B11'
def __init__(self, handler):
"""
:param handler: The :py:class:`RequestHandler` instance that is handling the request.
"""
self.handler = handler
if not hasattr(self, 'logger'):
self.logger = logging.getLogger('AdvancedHTTPServer.WebSocketHandler')
headers = self.handler.headers
client_extensions = headers.get('Sec-WebSocket-Extensions', '')
self.client_extensions = [extension.strip() for extension in client_extensions.split(',')]
key = headers.get('Sec-WebSocket-Key', None)
digest = hashlib.sha1((key + self.guid).encode('utf-8')).digest()
handler.send_response(101, 'Switching Protocols')
handler.send_header('Upgrade', 'WebSocket')
handler.send_header('Connection', 'Upgrade')
handler.send_header('Sec-WebSocket-Accept', base64.b64encode(digest).decode('utf-8'))
handler.end_headers()
handler.wfile.flush()
self.lock = threading.Lock()
self.connected = True
self.logger.info('web socket has been connected')
self.on_connected()
self._last_buffer = b''
self._last_opcode = 0
self._last_sent_opcode = 0
while self.connected:
try:
self._process_message()
except socket.error:
self.logger.warning('there was a socket error while processing web socket messages')
self.close()
except Exception:
self.logger.error('there was an error while processing web socket messages', exc_info=True)
self.close()
self.handler.close_connection = 1
def _decode_string(self, data):
string = data.decode('utf-8')
if sys.version_info[0] == 3:
return string
# raise an exception on surrogates in python 2.7 to more closely replicate 3.x behaviour
for idx, ch in enumerate(string):
if 0xD800 <= ord(ch) <= 0xDFFF:
raise UnicodeDecodeError('utf-8', '', idx, idx + 1, 'invalid continuation byte')
return string
def _process_message(self):
byte_0 = self.handler.rfile.read(1)
if not byte_0:
self.close()
return
byte_0 = ord(byte_0)
if byte_0 & 0x70:
self.close()
return
fin = bool(byte_0 & 0x80)
opcode = byte_0 & 0x0f
length = ord(self.handler.rfile.read(1)) & 0x7f
if length == 126:
length = struct.unpack('>H', self.handler.rfile.read(2))[0]
elif length == 127:
length = struct.unpack('>Q', self.handler.rfile.read(8))[0]
masks = [b for b in self.handler.rfile.read(4)]
if sys.version_info[0] < 3:
masks = map(ord, masks)
payload = bytearray(self.handler.rfile.read(length))
for idx, char in enumerate(payload):
payload[idx] = char ^ masks[idx % 4]
payload = bytes(payload)
self.logger.debug("received message (len: {0:,} opcode: 0x{1:02x} fin: {2})".format(len(payload), opcode, fin))
if fin:
if opcode == self._opcode_continue:
opcode = self._last_opcode
payload = self._last_buffer + payload
self._last_buffer = b''
self._last_opcode = 0
elif self._last_buffer and opcode in (self._opcode_binary, self._opcode_text):
self.logger.warning('closing connection due to unflushed buffer in new data frame')
self.close()
return
self.on_message(opcode, payload)
return
if opcode > 0x02:
self.logger.warning('closing connection due to fin flag not set on opcode > 0x02')
self.close()
return
if opcode:
if self._last_buffer:
self.logger.warning('closing connection due to unflushed buffer in new continuation frame')
self.close()
return
self._last_buffer = payload
self._last_opcode = opcode
else:
self._last_buffer += payload
def close(self):
"""
Close the web socket connection and stop processing results. If the
connection is still open, a WebSocket close message will be sent to the
peer.
"""
if not self.connected:
return
self.connected = False
if self.handler.wfile.closed:
return
if select.select([], [self.handler.wfile], [], 0)[1]:
with self.lock:
self.handler.wfile.write(b'\x88\x00')
self.handler.wfile.flush()
self.on_closed()
def send_message(self, opcode, message):
"""
Send a message to the peer over the socket.
:param int opcode: The opcode for the message to send.
:param bytes message: The message data to send.
"""
if not isinstance(message, bytes):
message = message.encode('utf-8')
length = len(message)
if not select.select([], [self.handler.wfile], [], 0)[1]:
self.logger.error('the socket is not ready for writing')
self.close()
return
buffer = b''
buffer += struct.pack('B', 0x80 + opcode)
if length <= 125:
buffer += struct.pack('B', length)
elif 126 <= length <= 65535:
buffer += struct.pack('>BH', 126, length)
else:
buffer += struct.pack('>BQ', 127, length)
buffer += message
self._last_sent_opcode = opcode
self.lock.acquire()
try:
self.handler.wfile.write(buffer)
self.handler.wfile.flush()
except Exception:
self.logger.error('an error occurred while sending a message', exc_info=True)
self.close()
finally:
self.lock.release()
def send_message_binary(self, message):
return self.send_message(self._opcode_binary, message)
def send_message_ping(self, message):
return self.send_message(self._opcode_ping, message)
def send_message_text(self, message):
return self.send_message(self._opcode_text, message)
def on_closed(self):
"""
A method that can be overridden and is called after the web socket is
closed.
"""
pass
def on_connected(self):
"""
A method that can be overridden and is called after the web socket is
connected.
"""
pass
def on_message(self, opcode, message):
"""
The primary dispatch function to handle incoming WebSocket messages.
:param int opcode: The opcode of the message that was received.
:param bytes message: The data contained within the message.
"""
self.logger.debug("processing {0} (opcode: 0x{1:02x}) message".format(self._opcode_names.get(opcode, 'UNKNOWN'), opcode))
if opcode == self._opcode_close:
self.close()
elif opcode == self._opcode_ping:
if len(message) > 125:
self.close()
return
self.send_message(self._opcode_pong, message)
elif opcode == self._opcode_pong:
pass
elif opcode == self._opcode_binary:
self.on_message_binary(message)
elif opcode == self._opcode_text:
try:
message = self._decode_string(message)
except UnicodeDecodeError:
self.logger.warning('closing connection due to invalid unicode within a text message')
self.close()
else:
self.on_message_text(message)
elif opcode == self._opcode_continue:
self.close()
else:
self.logger.warning("received unknown opcode: {0} (0x{0:02x})".format(opcode))
self.close()
def on_message_binary(self, message):
"""
A method that can be overridden and is called when a binary message is
received from the peer.
:param bytes message: The message data.
"""
pass
def on_message_text(self, message):
"""
A method that can be overridden and is called when a text message is
received from the peer.
:param str message: The message data.
"""
pass
def ping(self):
self.send_message_ping(random_string(16))
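# Illustrative sketch (not part of the original module): a minimal
# WebSocketHandler subclass that echoes text frames back to the peer. Assign
# it to a RequestHandler's web_socket_handler attribute so upgrade requests
# are routed to it (see do_GET above).
class _EchoWebSocketHandler(WebSocketHandler):
	def on_message_text(self, message):
		# send_message_text re-encodes the str payload as UTF-8 before framing
		self.send_message_text(message)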
class Serializer(object):
"""
This class represents a serializer object for use with the RPC system.
"""
def __init__(self, name, charset='UTF-8', compression=None):
"""
:param str name: The name of the serializer to use.
:param str charset: The name of the encoding to use.
:param str compression: The compression library to use.
"""
if name not in g_serializer_drivers:
raise ValueError("unknown serializer '{0}'".format(name))
self.name = name
self._charset = charset
self._compression = compression
self.content_type = "{0}; charset={1}".format(self.name, self._charset)
if self._compression:
self.content_type += '; compression=' + self._compression
@classmethod
def from_content_type(cls, content_type):
"""
Build a serializer object from a MIME Content-Type string.
:param str content_type: The Content-Type string to parse.
:return: A new serializer instance.
:rtype: :py:class:`.Serializer`
"""
name = content_type
options = {}
if ';' in content_type:
name, options_str = content_type.split(';', 1)
for part in options_str.split(';'):
part = part.strip()
if '=' in part:
key, value = part.split('=', 1)
else:
key, value = (part, None)
options[key] = value
# old style compatibility
if name.endswith('+zlib'):
options['compression'] = 'zlib'
name = name[:-5]
return cls(name, charset=options.get('charset', 'UTF-8'), compression=options.get('compression'))
def dumps(self, data):
"""
Serialize a python data type for transmission or storage.
:param data: The python object to serialize.
:return: The serialized representation of the object.
:rtype: bytes
"""
data = g_serializer_drivers[self.name]['dumps'](data)
if sys.version_info[0] == 3 and isinstance(data, str):
data = data.encode(self._charset)
if self._compression == 'zlib':
data = zlib.compress(data)
assert isinstance(data, bytes)
return data
def loads(self, data):
"""
Deserialize the data into its original python object.
:param bytes data: The serialized object to load.
:return: The original python object.
"""
if not isinstance(data, bytes):
raise TypeError("loads() argument 1 must be bytes, not {0}".format(type(data).__name__))
if self._compression == 'zlib':
data = zlib.decompress(data)
if sys.version_info[0] == 3 and self.name.startswith('application/'):
data = data.decode(self._charset)
data = g_serializer_drivers[self.name]['loads'](data, (self._charset if sys.version_info[0] == 3 else None))
if isinstance(data, list):
data = tuple(data)
return data
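# Illustrative sketch (assumption: 'binary/json' is among the driver names
# registered in g_serializer_drivers earlier in this module): round-trip an
# RPC payload through a compressed serializer.
def _example_serializer_roundtrip():
	serializer = Serializer('binary/json', compression='zlib')
	packed = serializer.dumps({'args': [1, 2], 'kwargs': {}}) # -> bytes
	return serializer.loads(packed) # -> the original object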
SSLSNICertificate = collections.namedtuple('SSLSNICertificate', ('hostname', 'certfile', 'keyfile'))
"""
The information for a certificate used by SSL's Server Name Indicator (SNI)
extension.
.. versionadded:: 2.2.0
.. py:attribute:: hostname
The hostname string for requests which should use this certificate information.
.. py:attribute:: certfile
The path to the SSL certificate file on disk to use for the hostname.
.. py:attribute:: keyfile
The path to the SSL key file on disk to use for the hostname.
"""
SSLSNIEntry = collections.namedtuple('SSLSNIEntry', ('certificate', 'context'))
class AdvancedHTTPServer(object):
"""
This is the primary server class for the AdvancedHTTPServer module.
Custom servers must inherit from this object to be compatible. When
no *address* parameter is specified, the address '0.0.0.0' is used and
the port is chosen based on whether the server is run as root and
whether SSL is enabled.
"""
def __init__(self, handler_klass, address=None, addresses=None, use_threads=True, ssl_certfile=None, ssl_keyfile=None, ssl_version=None):
"""
:param handler_klass: The request handler class to use.
:type handler_klass: :py:class:`.RequestHandler`
:param tuple address: The address to bind to in the format (host, port).
:param tuple addresses: The addresses to bind to in the format (host, port, ssl).
:param bool use_threads: Whether to enable the use of a threaded handler.
:param str ssl_certfile: An SSL certificate file to use, setting this enables SSL.
:param str ssl_keyfile: An SSL key file to use.
:param ssl_version: The SSL protocol version to use.
"""
if addresses is None:
addresses = []
if address is None and not addresses:
if ssl_certfile is not None:
if os.getuid():
addresses.insert(0, ('0.0.0.0', 8443, True))
else:
addresses.insert(0, ('0.0.0.0', 443, True))
else:
if os.getuid():
addresses.insert(0, ('0.0.0.0', 8080, False))
else:
addresses.insert(0, ('0.0.0.0', 80, False))
elif address:
addresses.insert(0, (address[0], address[1], ssl_certfile is not None))
self.ssl_certfile = ssl_certfile
self.ssl_keyfile = ssl_keyfile
if not hasattr(self, 'logger'):
self.logger = logging.getLogger('AdvancedHTTPServer')
self.__should_stop = threading.Event()
self.__is_shutdown = threading.Event()
self.__is_shutdown.set()
self.__is_running = threading.Event()
self.__is_running.clear()
self.__server_thread = None
self.__wakeup_fd = None
self.__config = {
'basic_auth': None,
'robots_txt': b'User-agent: *\nDisallow: /\n',
'serve_files': False,
'serve_files_list_directories': True, # irrelevant if serve_files == False
'serve_files_root': os.getcwd(),
'serve_robots_txt': True,
'server_version': 'AdvancedHTTPServer/' + __version__
}
self.sub_servers = []
"""The instances of :py:class:`.ServerNonThreaded` that are responsible for listening on each configured address."""
if use_threads:
server_klass = ServerThreaded
else:
server_klass = ServerNonThreaded
for address in addresses:
server = server_klass((address[0], address[1]), handler_klass, config=self.__config)
use_ssl = (len(address) == 3 and address[2])
server.using_ssl = use_ssl
self.sub_servers.append(server)
self.logger.info("listening on {0}:{1}".format(address[0], address[1]) + (' with ssl' if use_ssl else ''))
self._ssl_sni_entries = None
if any([server.using_ssl for server in self.sub_servers]):
self._ssl_sni_entries = {}
if ssl_version is None or isinstance(ssl_version, str):
ssl_version = resolve_ssl_protocol_version(ssl_version)
self._ssl_ctx = ssl.SSLContext(ssl_version)
self._ssl_ctx.load_cert_chain(ssl_certfile, keyfile=ssl_keyfile)
if g_ssl_has_server_sni:
self._ssl_ctx.set_servername_callback(self._ssl_servername_callback)
for server in self.sub_servers:
if not server.using_ssl:
continue
server.socket = self._ssl_ctx.wrap_socket(server.socket, server_side=True, do_handshake_on_connect=False)
if hasattr(handler_klass, 'custom_authentication'):
self.logger.debug('a custom authentication function is being used')
self.auth_set(True)
def _ssl_servername_callback(self, sock, hostname, context):
sni_entry = self._ssl_sni_entries.get(hostname)
if sni_entry:
self.logger.debug('setting a new ssl context for sni hostname: %s', hostname)
sock.context = sni_entry.context
return None
def add_sni_cert(self, hostname, ssl_certfile=None, ssl_keyfile=None, ssl_version=None):
"""
Add an SSL certificate for a specific hostname as supported by SSL's
Server Name Indicator (SNI) extension. See :rfc:`3546` for more details
on SSL extensions. In order to use this method, the server instance must
have been initialized with at least one address configured for SSL.
.. warning::
This method will raise a :py:exc:`RuntimeError` if either the SNI
extension is not available in the :py:mod:`ssl` module or if SSL was
not enabled at initialization time through the use of arguments to
:py:meth:`~.__init__`.
.. versionadded:: 2.0.0
:param str hostname: The hostname for this configuration.
:param str ssl_certfile: An SSL certificate file to use, setting this enables SSL.
:param str ssl_keyfile: An SSL key file to use.
:param ssl_version: The SSL protocol version to use.
"""
if not g_ssl_has_server_sni:
raise RuntimeError('the ssl server name indicator extension is unavailable')
if self._ssl_sni_entries is None:
raise RuntimeError('ssl was not enabled on initialization')
if ssl_certfile:
ssl_certfile = os.path.abspath(ssl_certfile)
if ssl_keyfile:
ssl_keyfile = os.path.abspath(ssl_keyfile)
cert_info = SSLSNICertificate(hostname, ssl_certfile, ssl_keyfile)
if ssl_version is None or isinstance(ssl_version, str):
ssl_version = resolve_ssl_protocol_version(ssl_version)
ssl_ctx = ssl.SSLContext(ssl_version)
ssl_ctx.load_cert_chain(ssl_certfile, keyfile=ssl_keyfile)
self._ssl_sni_entries[hostname] = SSLSNIEntry(context=ssl_ctx, certificate=cert_info)
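# Illustrative usage (hostname and paths are placeholders):
#   server.add_sni_cert('www.example.com',
#       ssl_certfile='/path/to/www.pem', ssl_keyfile='/path/to/www.key')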
def remove_sni_cert(self, hostname):
"""
Remove the SSL Server Name Indicator (SNI) certificate configuration for
the specified *hostname*.
.. warning::
This method will raise a :py:exc:`RuntimeError` if either the SNI
extension is not available in the :py:mod:`ssl` module or if SSL was
not enabled at initialization time through the use of arguments to
:py:meth:`~.__init__`.
.. versionadded:: 2.2.0
:param str hostname: The hostname to delete the SNI configuration for.
"""
if not g_ssl_has_server_sni:
raise RuntimeError('the ssl server name indicator extension is unavailable')
if self._ssl_sni_entries is None:
raise RuntimeError('ssl was not enabled on initialization')
sni_entry = self._ssl_sni_entries.pop(hostname, None)
if sni_entry is None:
raise ValueError('the specified hostname does not have an sni certificate configuration')
@property
def sni_certs(self):
"""
.. versionadded:: 2.2.0
:return: Return a tuple of :py:class:`~.SSLSNICertificate` instances for each of the certificates that are configured.
:rtype: tuple
"""
if not g_ssl_has_server_sni or self._ssl_sni_entries is None:
return tuple()
return tuple(entry.certificate for entry in self._ssl_sni_entries.values())
@property
def server_started(self):
return self.__server_thread is not None
def _serve_ready(self):
read_check = [self.__wakeup_fd]
for sub_server in self.sub_servers:
read_check.extend(sub_server.read_checkable_fds)
all_read_ready, _, _ = select.select(read_check, [], [])
for read_ready in all_read_ready:
if isinstance(read_ready, (_RequestEmbryo, http.server.HTTPServer)):
read_ready.serve_ready()
def serve_forever(self, fork=False):
"""
Start handling requests. This method must be called and does not
return unless the :py:meth:`.shutdown` method is called from
another thread.
:param bool fork: Whether to fork or not before serving content.
:return: The child process's PID if *fork* is set to True.
:rtype: int
"""
if fork:
if not hasattr(os, 'fork'):
raise OSError('os.fork is not available')
child_pid = os.fork()
if child_pid != 0:
self.logger.info('forked child process: ' + str(child_pid))
return child_pid
self.__server_thread = threading.current_thread()
self.__wakeup_fd = WakeupFd()
self.__is_shutdown.clear()
self.__should_stop.clear()
self.__is_running.set()
while not self.__should_stop.is_set():
try:
self._serve_ready()
except socket.error:
self.logger.warning('encountered socket error, stopping server')
self.__should_stop.set()
self.__is_shutdown.set()
self.__is_running.clear()
return 0
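# Illustrative sketch: serve_forever() blocks, so a typical embedding runs it
# on a worker thread and calls shutdown() from the controlling thread, which
# wakes the serving loop through the wakeup fd:
#
#   thread = threading.Thread(target=server.serve_forever)
#   thread.start()
#   ...
#   server.shutdown()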
def shutdown(self):
"""Shutdown the server and stop responding to requests."""
self.__should_stop.set()
if self.__server_thread == threading.current_thread():
self.__is_shutdown.set()
self.__is_running.clear()
else:
if self.__wakeup_fd is not None:
os.write(self.__wakeup_fd.write_fd, b'\x00')
self.__is_shutdown.wait()
if self.__wakeup_fd is not None:
self.__wakeup_fd.close()
self.__wakeup_fd = None
for server in self.sub_servers:
server.shutdown()
@property
def serve_files(self):
"""
Whether to enable serving files or not.
:type: bool
"""
return self.__config['serve_files']
@serve_files.setter
def serve_files(self, value):
value = bool(value)
if self.__config['serve_files'] == value:
return
self.__config['serve_files'] = value
if value:
self.logger.info('serving files has been enabled')
else:
self.logger.info('serving files has been disabled')
@property
def serve_files_root(self):
"""
The web root to use when serving files.
:type: str
"""
return self.__config['serve_files_root']
@serve_files_root.setter
def serve_files_root(self, value):
self.__config['serve_files_root'] = os.path.abspath(value)
@property
def serve_files_list_directories(self):
"""
Whether to list the contents of directories. This is only honored
when :py:attr:`.serve_files` is True.
:type: bool
"""
return self.__config['serve_files_list_directories']
@serve_files_list_directories.setter
def serve_files_list_directories(self, value):
self.__config['serve_files_list_directories'] = bool(value)
@property
def serve_robots_txt(self):
"""
Whether to serve a default robots.txt file which denies everything.
:type: bool
"""
return self.__config['serve_robots_txt']
@serve_robots_txt.setter
def serve_robots_txt(self, value):
self.__config['serve_robots_txt'] = bool(value)
@property
def server_version(self):
"""
The server version to be sent to clients in headers.
:type: str
"""
return self.__config['server_version']
@server_version.setter
def server_version(self, value):
self.__config['server_version'] = str(value)
def auth_set(self, status):
"""
Enable or disable requiring authentication on all incoming requests.
:param bool status: Whether to enable or disable requiring authentication.
"""
if not bool(status):
self.__config['basic_auth'] = None
self.logger.info('basic authentication has been disabled')
else:
self.__config['basic_auth'] = {}
self.logger.info('basic authentication has been enabled')
def auth_delete_creds(self, username=None):
"""
Delete the credentials for a specific username if specified or all
stored credentials.
:param str username: The username of the credentials to delete.
"""
if not username:
self.__config['basic_auth'] = {}
self.logger.info('basic authentication database has been cleared of all entries')
return
del self.__config['basic_auth'][username]
def auth_add_creds(self, username, password, pwtype='plain'):
"""
Add a valid set of credentials to be accepted for authentication.
Calling this function will automatically enable requiring
authentication. Passwords can be provided in either plaintext or
as a hash by specifying the hash type in the *pwtype* argument.
:param str username: The username of the credentials to be added.
:param password: The password data of the credentials to be added.
:type password: bytes, str
:param str pwtype: The type of the *password* data, (plain, md5, sha1, etc.).
"""
if not isinstance(password, (bytes, str)):
raise TypeError("auth_add_creds() argument 2 must be bytes or str, not {0}".format(type(password).__name__))
pwtype = pwtype.lower()
if pwtype not in ('plain', 'md5', 'sha1', 'sha256', 'sha384', 'sha512'):
raise ValueError('invalid password type, must be \'plain\', or supported by hashlib')
if self.__config.get('basic_auth') is None:
self.__config['basic_auth'] = {}
self.logger.info('basic authentication has been enabled')
if pwtype != 'plain':
algorithms_available = getattr(hashlib, 'algorithms_available', ()) or getattr(hashlib, 'algorithms', ())
if pwtype not in algorithms_available:
raise ValueError('hashlib does not support the desired algorithm')
# only md5 and sha1 hex for backwards compatibility
if pwtype == 'md5' and len(password) == 32:
password = binascii.unhexlify(password)
elif pwtype == 'sha1' and len(password) == 40:
password = binascii.unhexlify(password)
if not isinstance(password, bytes):
password = password.encode('UTF-8')
if len(hashlib.new(pwtype, b'foobar').digest()) != len(password):
raise ValueError('the length of the password hash does not match the type specified')
self.__config['basic_auth'][username] = {'value': password, 'type': pwtype}
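# Illustrative sketch (not part of the original module): a minimal
# file-serving instance with basic authentication, built from the classes
# defined above; the address and credentials are placeholders.
def _example_file_server():
	server = AdvancedHTTPServer(RequestHandler, address=('127.0.0.1', 8080))
	server.serve_files = True
	server.serve_files_root = os.getcwd()
	server.auth_add_creds('admin', 'correct horse battery staple')
	server.serve_forever() # blocks until shutdown() is called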
class ServerTestCase(unittest.TestCase):
"""
A base class for unit tests with AdvancedHTTPServer derived classes.
"""
server_class = AdvancedHTTPServer
"""The :py:class:`.AdvancedHTTPServer` class to use as the server, this can be overridden by subclasses."""
handler_class = RequestHandler
"""The :py:class:`.RequestHandler` class to use as the request handler, this can be overridden by subclasses."""
def __init__(self, *args, **kwargs):
super(ServerTestCase, self).__init__(*args, **kwargs)
self.test_resource = "/{0}".format(random_string(40))
"""
A resource which has a handler set to it which will respond with
a 200 status code and the message 'Hello World!'
"""
self.server_address = ('localhost', random.randint(30000, 50000))
self._server_kwargs = {
'address': self.server_address
}
if hasattr(self, 'assertRegexpMatches') and not hasattr(self, 'assertRegexMatches'):
self.assertRegexMatches = self.assertRegexpMatches
if hasattr(self, 'assertRaisesRegexp') and not hasattr(self, 'assertRaisesRegex'):
self.assertRaisesRegex = self.assertRaisesRegexp
def setUp(self):
RegisterPath("^{0}$".format(self.test_resource[1:]), self.handler_class.__name__)(self._test_resource_handler)
self.server = self.server_class(self.handler_class, **self._server_kwargs)
self.assertTrue(isinstance(self.server, AdvancedHTTPServer))
self.server_thread = threading.Thread(target=self.server.serve_forever)
self.server_thread.daemon = True
self.server_thread.start()
self.assertTrue(self.server_thread.is_alive())
self.shutdown_requested = False
if len(self.server_address) == 3 and self.server_address[2]:
context = ssl.create_default_context()
context.check_hostname = False
context.verify_mode = ssl.CERT_NONE
self.http_connection = http.client.HTTPSConnection(self.server_address[0], self.server_address[1], context=context)
else:
self.http_connection = http.client.HTTPConnection(self.server_address[0], self.server_address[1])
self.http_connection.connect()
def _test_resource_handler(self, handler, query):
del query
handler.send_response_full(b'Hello World!\n')
return
def assertHTTPStatus(self, http_response, status):
"""
Check an HTTP response object and ensure the status is correct.
:param http_response: The response object to check.
:type http_response: :py:class:`http.client.HTTPResponse`
:param int status: The status code to expect for *http_response*.
"""
self.assertTrue(isinstance(http_response, http.client.HTTPResponse))
error_message = "HTTP Response received status {0} when {1} was expected".format(http_response.status, status)
self.assertEqual(http_response.status, status, msg=error_message)
def http_request(self, resource, method='GET', headers=None):
"""
Make an HTTP request to the test server and return the response.
:param str resource: The resource to issue the request to.
:param str method: The HTTP verb to use (GET, HEAD, POST etc.).
:param dict headers: The HTTP headers to provide in the request.
:return: The HTTP response object.
:rtype: :py:class:`http.client.HTTPResponse`
"""
headers = (headers or {})
if not 'Connection' in headers:
headers['Connection'] = 'keep-alive'
self.http_connection.request(method, resource, headers=headers)
time.sleep(0.025)
response = self.http_connection.getresponse()
response.data = response.read()
return response
def tearDown(self):
if not self.shutdown_requested:
self.assertTrue(self.server_thread.is_alive())
self.http_connection.close()
self.server.shutdown()
self.server_thread.join(10.0)
self.assertFalse(self.server_thread.is_alive())
del self.server
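# Illustrative sketch (not part of the original module): a concrete test case
# built on the scaffolding above; test_resource is registered in setUp to
# respond with 'Hello World!'.
class _ExampleServerTest(ServerTestCase):
	def test_hello_resource(self):
		response = self.http_request(self.test_resource)
		self.assertHTTPStatus(response, 200)
		self.assertEqual(response.data, b'Hello World!\n')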
def main():
try:
server = build_server_from_argparser()
except ImportError:
server = AdvancedHTTPServer(RequestHandler, use_threads=False)
server.serve_files_root = '.'
server.serve_files_root = (server.serve_files_root or '.')
server.serve_files = True
try:
server.serve_forever()
except KeyboardInterrupt:
pass
server.shutdown()
logging.shutdown()
return 0
if __name__ == '__main__':
main()

# ---- end of advancedhttpserver.py (from AdvancedHTTPServer-2.2.0) ----
# ---- langconv/langconv.py (from AdvancedLangConv-0.01) begins below ----
from globalfunc import *
from settings import Settings
import re
try:
import converter
except ImportError:
converter = None
class ConverterHandler(object):
def __init__(self, variant, settings = None):
self.settings = Settings(settings or {})
### INITIATE CONVERTERS AND RULEPARSER ###
self.variant = variant
self.converters = {}
self.ruleparser = _RuleParser(variant, self)
for vvariant in self.settings.VALIDVARIANTS:
self.converters[vvariant] = _Converter(vvariant, self)
self.mainconverter = self.converters[variant]
def convert(self, content, parserules = True):
return self.mainconverter.convert(content, parserules)
def convert_to(self, variant, content, parserules = True):
return self.converters[variant].convert(content, parserules)
def parse(self, text):
return self.ruleparser.parse(text)
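# Illustrative usage (Python 2; the variant names are assumptions based on a
# typical VALIDVARIANTS configuration):
#
#   handler = ConverterHandler('zh-hant')
#   print handler.convert(u'-{zh-hans:鼠标;zh-hant:滑鼠}-')
#   print handler.convert_to('zh-hans', text)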
class _Converter(object):
def __init__(self, variant, handler):
### DEFINITION OF VARIABLES ###
self.variant = variant # The variant we want to convert to
self.handler = handler
self.convtable = {} # The conversion table
self.quicktable = {} # Quick lookup: first character -> matching word lengths, longest first
self.maxlen = 0 # Max length of the words in convtable
self.maxdepth = 10 # Max recursion depth for nested convert rules
self.hooks = {'depth_exceed_msg': None,
'rule_parser': None} # Hooks for converter
### DEFINITION OF LAMBDA METHOD ###
self.get_message = lambda name, *args, **kwargs: get_message(variant, name, *args, **kwargs)
### INITIATE FUNCTIONS ###
self.load_table() # Load default table
self.set_default_hooks() # Install the default hooks
"""def get_message(self, name, *args, **kwargs):
return get_message(self.variant, name, *args, **kwargs)"""
def set_default_hooks(self):
"""As it says."""
self.hooks['depth_exceed_msg'] = lambda depth: self.get_message('deptherr', depth)
self.hooks['rule_parser'] = self.handler.ruleparser.parse
def set_hook(self, name, callfunc):
self.hooks[name] = callfunc
def add_quick(self, ori):
"""Add item to quicktable."""
orilen = len(ori)
self.maxlen = max(self.maxlen, orilen)
try:
wordlens = self.quicktable[ori[0]]
except KeyError:
self.quicktable[ori[0]] = [orilen]
else:
wllen = len(wordlens)
pos = wllen // 2
while pos > -1 and pos < wllen + 1:
if pos == 0: left = orilen + 1
else: left = wordlens[pos - 1]
if pos == wllen: right = orilen - 1
else: right = wordlens[pos]
#print left, orilen, right, pos
if orilen == left or orilen == right:
break
elif left > orilen and orilen > right:
wordlens.insert(pos, orilen)
break
elif orilen > left:
pos -= pos // 2 or 1
else: # right > orilen
pos += (wllen - pos) // 2 or 1
def load_table(self, isgroup = False):
"""Load a conversion table.
Raise ImportException if an import error happens."""
newtable = __import__('langconv.defaulttables.%s' % \
self.variant.replace('-', '_'), fromlist = 'convtable').convtable
self.convtable.update(newtable)
# try to load quicktable from cache
if not isgroup:
self.quicktable = get_cache(self.handler.settings, '%s-qtable' % self.variant)
self.maxlen = get_cache(self.handler.settings, '%s-maxlen' % self.variant)
if self.quicktable is not None and self.maxlen is not None:
return
else:
self.quicktable = {}
self.maxlen = 0
for (ori, dst) in newtable.iteritems():
self.add_quick(ori)
# try to dump quicktable to cache
if not isgroup:
set_cache(self.handler.settings, '%s-qtable' % self.variant, self.quicktable)
set_cache(self.handler.settings, '%s-maxlen' % self.variant, self.maxlen)
def update(self, newtable):
self.convtable.update(newtable)
for (ori, dst) in newtable.iteritems():
self.add_quick(ori)
def add_rule(self, ori, dst):
"""add a rule to convtable and quicktable"""
self.convtable[ori] = dst
self.add_quick(ori)
def del_rule(self, ori, dst):
if self.convtable.get(ori) == dst:
self.convtable.pop(ori)
if converter: # The C module has been imported correctly
def convert(self, content, parserules = True):
content = to_unicode(content)
return converter.convert(self, content, parserules)
else:
def recursive_convert_rule(self, content, pos, contlen, depth = 1):
"""Parse one -{...}- rule body starting at *pos*, recursing into nested
rules up to self.maxdepth; return (converted_inner_text, new_pos)."""
oripos = pos
out = []
exceedtime = 0
while pos < contlen:
token = content[pos:pos + 2]
if token == '-{':
if depth < self.maxdepth:
inner, pos = self.recursive_convert_rule(content, pos + 2, contlen, depth + 1)
out.append(inner)
continue
else:
if not exceedtime and self.hooks['depth_exceed_msg'] is not None:
out.append(self.hooks['depth_exceed_msg'](depth))
exceedtime += 1
elif token == '}-':
if depth >= self.maxdepth and exceedtime:
exceedtime -= 1
else:
inner = ''.join(out)
if not exceedtime:
inner = self.handler.parse(inner)
return (inner, pos + 2)
out.append(content[pos])
pos += 1
else:
# unclosed rule, won't parse but still auto convert
return ('-', oripos - 1)
def convert(self, content, parserules = True):
"""Use the specified variant to convert the content.
content is the string to convert,
set parserules to False if you don't want to parse rules."""
content = to_unicode(content)
out = []
contlen = len(content)
pos = 0
trytime = 0 # for debug
while pos < contlen:
if parserules and content[pos:pos + 2] == '-{':
# markup found
inner, pos = self.recursive_convert_rule(content, pos + 2, contlen)
out.append(inner)
continue
wordlens = self.quicktable.get(content[pos])
single = content[pos]
if wordlens is None:
trytime += 1 # for debug
out.append(single)
pos += 1
else:
for wordlen in wordlens:
trytime += 1 # for debug
oriword = content[pos:pos + wordlen]
convword = self.convtable.get(oriword)
if convword is not None:
out.append(convword)
pos += wordlen
break
else:
trytime += 1 # for debug
out.append(single)
pos += 1
#print trytime # for debug
return ''.join(out)
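# Illustrative example of the longest-match lookup above: if convtable maps
# both u'头' -> u'頭' and u'头发' -> u'頭髮', then at a position starting with
# u'头发' the two-character entry is tried first, because quicktable keeps
# word lengths sorted longest-first for each leading character.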
class _RuleParser(object):
def __init__(self, variant, handler):
self.variant = variant
self.handler = handler
self.flagdict = {'A': lambda flag, rule: self.add_rule(flag, rule, display = True),
# add a single rule to convtable and return the converted result
# -{FLAG|rule}-
# FLAG: A[[;NF]|[;NA:variant]]
'D': self.describe_rule,
# describe the rule
# -{D|rule}-
'G': self.add_group,
# add a lot rules from a group to convtable
# -{G|groupname}-
'H': lambda flag, rule: self.add_rule(flag, rule, display = False),
# add a single rule to convtable
# -{FLAG|rule}-
# FLAG: H[[;NF]|[;NA:variant]]
'R': self.display_raw,
# raw content
# -{R|content}-
'T': self.set_title,
# set title
# -{FLAG|rule}-
# FLAG: T[[;NF]|[;NA:variant]]
'-': self.remove_rule,
# remove rules from convtable
# -{-|rule}-
}
self.variants = self.handler.settings.VALIDVARIANTS
self.fallback = self.handler.settings.VARIANTFALLBACK
self.asfallback = {}
for var in self.variants:
self.asfallback[var] = []
for varright in self.variants:
for varleft in self.fallback[varright]:
self.asfallback[varleft].append(varright)
self.myfallback = self.fallback[self.variant]
varsep_pattern = ';\s*(?='
for variant in self.variants:
varsep_pattern += '%s\s*:|' % variant # zh-hans:xxx;zh-hant:yyy
varsep_pattern += '[^;]*?=>\s*%s\s*:|' % variant # xxx=>zh-hans:yyy; xxx=>zh-hant:zzz
varsep_pattern += '\s*$)'
self.varsep = re.compile(varsep_pattern)
def parse(self, text):
flagrule = text.split(u'|', 1)
if len(flagrule) == 1:
# flag is empty, so just call the default rule parser
return self.parse_rule(text, withtable = False)
else:
flag, rule = flagrule
flag = flag.strip()
rule = rule.strip()
ruleparser = self.flagdict.get(flag[0])
if ruleparser:
# we got a valid flag, call the parser now
return ruleparser(flag, rule)
else:
# perhaps it's a "fallback convert"
return self.fb_convert(text, flag, rule)
def parse_rule(self, rule, withtable = True, allowfallback = True,
notadd = ()):
"""Parse a single rule string; return the output text for this variant,
plus the per-variant conversion tables when *withtable* is True."""
#TODO:
#add flags:
# NOFALLBACK
# NOCONVERT
table = {}
for variant in self.variants:
table[variant] = {}
bidtable = {}
unidtable = {}
all = ''
out = ''
overrule = False
rule = rule.replace(u'=>', u'=>') # normalize the fullwidth arrow to ASCII
choices = self.varsep.split(rule)
for choice in choices:
if choice == '':
continue
#first, we split [xxx=>]zh-hans:yyy to ([xxx=>]zh-hans, yyy)
part = choice.split(u':', 1)
# only 'yyy'
if len(part) == 1:
all = part[0]
out = all # output
continue
variant = part[0].strip() # [xxx=>]zh-hans
toword = part[1].strip() # yyy
#then, we split xxx=>zh-hans to (xxx, zh-hans)
unid = variant.split(u'=>', 1)
if toword:
# only 'zh-hans:xxx'
if len(unid) == 1 and variant in self.variants:
if variant == self.variant:
out = toword
overrule = True
elif allowfallback and \
not overrule and \
variant in self.myfallback:
out = toword
if withtable:
bidtable[variant] = toword
# 'xxx=>zh-hans:yyy'
elif len(unid) == 2:
variant = unid[1].strip() # zh-hans
if variant == self.variant:
out = toword
overrule = True
elif allowfallback and \
not overrule and \
variant in self.myfallback:
out = toword
if withtable:
fromword = unid[0].strip()
if not unidtable.has_key(variant):
unidtable[variant] = {}
if toword and variant in self.variants:
if variant not in notadd:
unidtable[variant][fromword] = toword
if allowfallback:
for fbv in self.asfallback[variant]:
if fbv not in notadd:
if not unidtable.has_key(fbv):
unidtable[fbv] = {}
if not unidtable[fbv].has_key(fromword):
unidtable[fbv][fromword] = toword
elif out == '':
out = choice
elif out == '':
out = choice
if not withtable:
return out
### ELSE
# add 'xxx': 'xxx' to every variant
if all:
for variant in self.variants:
table[variant][all] = all
# parse bidtable, aka tables filled by 'zh-hans:xxx'
for (variant, toword) in bidtable.iteritems():
for fromword in bidtable.itervalues():
if variant not in notadd:
table[variant][fromword] = toword
if allowfallback:
for fbv in self.asfallback[variant]:
if not table[fbv].has_key(fromword) and \
fbv not in notadd:
table[fbv][fromword] = toword
# parse unidtable, aka tables filled by 'xxx=>zh-hans:yyy'
for variant in unidtable.iterkeys():
table[variant].update(unidtable[variant])
### ENDIF
return (out, table)
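# Illustrative rule strings handled above (variant names are assumptions):
#   u'zh-hans:鼠标;zh-hant:滑鼠' -- bidirectional mappings between the forms
#   u'鼠标=>zh-hant:滑鼠' -- one-way mapping applied to zh-hant (and
#   variants that fall back to it) only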
def _parse_multiflag(self, flag):
allowfallback = True
notadd = []
# a valid multiflag could be:
# (A|H|T|-)[[;NF]|[;NA:variant]]
for fpart in flag.split(';'):
fpart = fpart.strip()
if fpart == 'NF': # no fallback
allowfallback = False
elif fpart.startswith('NA'): # not add
napart = fpart.split(':', 1)
if len(napart) == 2 and napart[0].strip() == 'NA' and \
napart[1].strip() in self.variants:
notadd.append(napart[1])
return (allowfallback, notadd)
def add_rule(self, flag, rule, display):
af, na = self._parse_multiflag(flag)
out, tables = self.parse_rule(rule, withtable = True, \
allowfallback = af, \
notadd = na)
for (variant, table) in tables.iteritems():
self.handler.converters[variant].update(table)
if display:
return out
else:
return u''
def describe_rule(self, flag, rule):
return rule
def add_group(self, flag, rule):
return '' # TODO: loading rules from a named group is not implemented yet
def display_raw(self, flag, rule):
return rule
def set_title(self, flag, rule):
af, na = self._parse_multiflag(flag)
out = self.parse_rule(rule, withtable = False, \
allowfallback = af, \
notadd = na)
return '' # TODO: the parsed title is computed but not stored anywhere yet
def remove_rule(self, flag, rule):
af, na = self._parse_multiflag(flag)
out, tables = self.parse_rule(rule, withtable = True, \
allowfallback = af, \
notadd = na)
for (variant, table) in tables.iteritems():
for oridst in table.iteritems():
self.handler.converters[variant].del_rule(*oridst)
return ''
def fb_convert(self, text, flag, rule):
return text

# ---- end of langconv/langconv.py (from AdvancedLangConv-0.01); a langconv defaulttables conversion-table module begins below ----
convtable = {
# NOTE: this dict originally held several thousand simplified-to-traditional
# Chinese character mappings. In this copy the mappings are unrecoverably
# corrupted: the UTF-8 bytes of each CJK character were re-decoded as
# single-byte (Thai-range) mojibake, bytes in the 0x80-0x9F range were lost
# to stray line breaks, and the file is truncated mid-table. The entries are
# therefore omitted rather than reproduced in their garbled form.
}
u'่ฏช': u'่ญธ',
u'่ฏซ': u'่ชก',
u'่ฏฌ': u'่ชฃ',
u'่ฏญ': u'่ช',
u'่ฏฎ': u'่ช',
u'่ฏฏ': u'่ชค',
u'่ฏฐ': u'่ชฅ',
u'่ฏฑ': u'่ช',
u'่ฏฒ': u'่ชจ',
u'่ฏณ': u'่ช',
u'่ฏด': u'่ชช',
u'่ฏต': u'่ชฆ',
u'่ฏถ': u'่ช',
u'่ฏท': u'่ซ',
u'่ฏธ': u'่ซธ',
u'่ฏน': u'่ซ',
u'่ฏบ': u'่ซพ',
u'่ฏป': u'่ฎ',
u'่ฏผ': u'่ซ',
u'่ฏฝ': u'่ชน',
u'่ฏพ': u'่ชฒ',
u'่ฏฟ': u'่ซ',
u'่ฐ': u'่ซ',
u'่ฐ': u'่ชฐ',
u'่ฐ': u'่ซ',
u'่ฐ': u'่ชฟ',
u'่ฐ': u'่ซ',
u'่ฐ
': u'่ซ',
u'่ฐ': u'่ซ',
u'่ฐ': u'่ชถ',
u'่ฐ': u'่ซ',
u'่ฐ': u'่ชผ',
u'่ฐ': u'่ฌ',
u'่ฐ': u'่ซถ',
u'่ฐ': u'่ซ',
u'่ฐ': u'่ฌ',
u'่ฐ': u'่ซซ',
u'่ฐ': u'่ซง',
u'่ฐ': u'่ฌ',
u'่ฐ': u'่ฌ',
u'่ฐ': u'่ฌ',
u'่ฐ': u'่ซค',
u'่ฐ': u'่ซญ',
u'่ฐ': u'่ซผ',
u'่ฐ': u'่ฎ',
u'่ฐ': u'่ซฎ',
u'่ฐ': u'่ซณ',
u'่ฐ': u'่ซบ',
u'่ฐ': u'่ซฆ',
u'่ฐ': u'่ฌ',
u'่ฐ': u'่ซ',
u'่ฐ': u'่ซ',
u'่ฐ': u'่ฌจ',
u'่ฐ ': u'่ฎ',
u'่ฐก': u'่ฌ',
u'่ฐข': u'่ฌ',
u'่ฐฃ': u'่ฌ ',
u'่ฐค': u'่ฌ',
u'่ฐฅ': u'่ฌ',
u'่ฐฆ': u'่ฌ',
u'่ฐง': u'่ฌ',
u'่ฐจ': u'่ฌน',
u'่ฐฉ': u'่ฌพ',
u'่ฐช': u'่ฌซ',
u'่ฐซ': u'่ญพ',
u'่ฐฌ': u'่ฌฌ',
u'่ฐญ': u'่ญ',
u'่ฐฎ': u'่ญ',
u'่ฐฏ': u'่ญ',
u'่ฐฐ': u'่ฎ',
u'่ฐฑ': u'่ญ',
u'่ฐฒ': u'่ญ',
u'่ฐณ': u'่ฎ',
u'่ฐด': u'่ญด',
u'่ฐต': u'่ญซ',
u'่ฐถ': u'่ฎ',
u'่ฑฎ': u'่ฑถ',
u'่ด': u'่ฒ',
u'่ด': u'่ฒ',
u'่ด': u'่ฒ ',
u'่ด ': u'่ฒ',
u'่ดก': u'่ฒข',
u'่ดข': u'่ฒก',
u'่ดฃ': u'่ฒฌ',
u'่ดค': u'่ณข',
u'่ดฅ': u'ๆ',
u'่ดฆ': u'่ณฌ',
u'่ดง': u'่ฒจ',
u'่ดจ': u'่ณช',
u'่ดฉ': u'่ฒฉ',
u'่ดช': u'่ฒช',
u'่ดซ': u'่ฒง',
u'่ดฌ': u'่ฒถ',
u'่ดญ': u'่ณผ',
u'่ดฎ': u'่ฒฏ',
u'่ดฏ': u'่ฒซ',
u'่ดฐ': u'่ฒณ',
u'่ดฑ': u'่ณค',
u'่ดฒ': u'่ณ',
u'่ดณ': u'่ฒฐ',
u'่ดด': u'่ฒผ',
u'่ดต': u'่ฒด',
u'่ดถ': u'่ฒบ',
u'่ดท': u'่ฒธ',
u'่ดธ': u'่ฒฟ',
u'่ดน': u'่ฒป',
u'่ดบ': u'่ณ',
u'่ดป': u'่ฒฝ',
u'่ดผ': u'่ณ',
u'่ดฝ': u'่ด',
u'่ดพ': u'่ณ',
u'่ดฟ': u'่ณ',
u'่ต': u'่ฒฒ',
u'่ต': u'่ณ',
u'่ต': u'่ณ',
u'่ต': u'่ด',
u'่ต': u'่ณ',
u'่ต
': u'่ณ
',
u'่ต': u'่ด',
u'่ต': u'่ณ',
u'่ต': u'่ณ',
u'่ต': u'่ณ',
u'่ต': u'่ณ',
u'่ต': u'่ณฆ',
u'่ต': u'่ณญ',
u'่ต': u'้ฝ',
u'่ต': u'่ด',
u'่ต': u'่ณ',
u'่ต': u'่ณ',
u'่ต': u'่ด',
u'่ต': u'่ณ',
u'่ต': u'่ณก',
u'่ต': u'่ณ ',
u'่ต': u'่ณง',
u'่ต': u'่ณด',
u'่ต': u'่ณต',
u'่ต': u'่ด
',
u'่ต': u'่ณป',
u'่ต': u'่ณบ',
u'่ต': u'่ณฝ',
u'่ต': u'่ณพ',
u'่ต': u'่ด',
u'่ต': u'่ด',
u'่ต': u'่ด',
u'่ต ': u'่ด',
u'่ตก': u'่ด',
u'่ตข': u'่ด',
u'่ตฃ': u'่ด',
u'่ตช': u'่ตฌ',
u'่ตต': u'่ถ',
u'่ตถ': u'่ถ',
u'่ถ': u'่ถจ',
u'่ถฑ': u'่ถฒ',
u'่ถธ': u'่บ',
u'่ท': u'่บ',
u'่ท': u'่น',
u'่ท': u'่บ',
u'่ทต': u'่ธ',
u'่ทถ': u'่บ',
u'่ทท': u'่นบ',
u'่ทธ': u'่น',
u'่ทน': u'่บ',
u'่ทป': u'่บ',
u'่ธ': u'่ธด',
u'่ธ': u'่บ',
u'่ธช': u'่นค',
u'่ธฌ': u'่บ',
u'่ธฏ': u'่บ',
u'่น': u'่บก',
u'่น': u'่นฃ',
u'่นฐ': u'่บ',
u'่นฟ': u'่บฅ',
u'่บ': u'่บช',
u'่บ': u'่บฆ',
u'่บฏ': u'่ป',
u'่ปฟ': u'๐ซ',
u'่ฝฆ': u'่ป',
u'่ฝง': u'่ป',
u'่ฝจ': u'่ป',
u'่ฝฉ': u'่ป',
u'่ฝช': u'่ป',
u'่ฝซ': u'่ป',
u'่ฝฌ': u'่ฝ',
u'่ฝญ': u'่ป',
u'่ฝฎ': u'่ผช',
u'่ฝฏ': u'่ป',
u'่ฝฐ': u'่ฝ',
u'่ฝฑ': u'่ปฒ',
u'่ฝฒ': u'่ปป',
u'่ฝณ': u'่ฝค',
u'่ฝด': u'่ปธ',
u'่ฝต': u'่ปน',
u'่ฝถ': u'่ปผ',
u'่ฝท': u'่ปค',
u'่ฝธ': u'่ปซ',
u'่ฝน': u'่ฝข',
u'่ฝบ': u'่ปบ',
u'่ฝป': u'่ผ',
u'่ฝผ': u'่ปพ',
u'่ฝฝ': u'่ผ',
u'่ฝพ': u'่ผ',
u'่ฝฟ': u'่ฝ',
u'่พ': u'่ผ',
u'่พ': u'่ผ',
u'่พ': u'่ผ
',
u'่พ': u'่ผ',
u'่พ': u'่ผ',
u'่พ
': u'่ผ',
u'่พ': u'่ผ',
u'่พ': u'่ผฆ',
u'่พ': u'่ผฉ',
u'่พ': u'่ผ',
u'่พ': u'่ผฅ',
u'่พ': u'่ผ',
u'่พ': u'่ผฌ',
u'่พ': u'่ผ',
u'่พ': u'่ผ',
u'่พ': u'่ผณ',
u'่พ': u'่ผป',
u'่พ': u'่ผฏ',
u'่พ': u'่ฝ',
u'่พ': u'่ผธ',
u'่พ': u'่ฝก',
u'่พ': u'่ฝ
',
u'่พ': u'่ฝ',
u'่พ': u'่ผพ',
u'่พ': u'่ฝ',
u'่พ': u'่ฝ',
u'่พ': u'่ฝ',
u'่พ': u'่พญ',
u'่พฉ': u'่พฏ',
u'่พซ': u'่พฎ',
u'่พน': u'้',
u'่พฝ': u'้ผ',
u'่พพ': u'้',
u'่ฟ': u'้ท',
u'่ฟ': u'้',
u'่ฟ': u'้',
u'่ฟ': u'้',
u'่ฟ': u'้',
u'่ฟ': u'้',
u'่ฟ': u'้ฒ',
u'่ฟ': u'้ ',
u'่ฟ': u'้',
u'่ฟ': u'้ฃ',
u'่ฟ': u'้ฒ',
u'่ฟฉ': u'้',
u'่ฟณ': u'้',
u'่ฟน': u'่ทก',
u'้': u'้ฉ',
u'้': u'้ธ',
u'้': u'้',
u'้': u'้',
u'้ฆ': u'้',
u'้ป': u'้',
u'้': u'้บ',
u'้ฅ': u'้',
u'้': u'้ง',
u'้': u'้บ',
u'้ฌ': u'้',
u'้ฎ': u'้ต',
u'้น': u'้',
u'้บ': u'้ด',
u'้ป': u'้ฐ',
u'้': u'้',
u'้': u'้ถ',
u'้': u'้ญ',
u'้': u'้',
u'้ฆ': u'้
',
u'้ง': u'้',
u'้ธ': u'้ฒ',
u'้
': u'้
',
u'้
': u'้',
u'้
ฆ': u'้ฑ',
u'้
ฑ': u'้ฌ',
u'้
ฝ': u'้
',
u'้
พ': u'้',
u'้
ฟ': u'้',
u'้': u'้',
u'้ด': u'้',
u'้ฎ': u'้พ',
u'้พ': u'้จ',
u'้ญ': u'้ฎ',
u'้
': u'้',
u'้': u'้',
u'้': u'้',
u'้': u'้',
u'้': u'้',
u'้': u'้',
u'้': u'้',
u'้': u'้',
u'้': u'้ท',
u'้': u'้บ',
u'้': u'้ง',
u'้': u'้ค',
u'้': u'้',
u'้': u'้ฉ',
u'้': u'้ฃ',
u'้': u'้',
u'้': u'้น',
u'้': u'้',
u'้': u'้ต',
u'้': u'้',
u'้': u'้ฃ',
u'้': u'้',
u'้': u'้ฆ',
u'้': u'้
',
u'้': u'้',
u'้': u'้',
u'้': u'้พ',
u'้ ': u'้',
u'้ก': u'้',
u'้ข': u'้ผ',
u'้ฃ': u'้',
u'้ค': u'้',
u'้ฅ': u'้ฐ',
u'้ฆ': u'ๆฌฝ',
u'้ง': u'้',
u'้จ': u'้ข',
u'้ฉ': u'้ค',
u'้ช': u'้ง',
u'้ซ': u'้',
u'้ฌ': u'้ฅ',
u'้ญ': u'้',
u'้ฎ': u'้',
u'้ฏ': u'้',
u'้ฐ': u'้บ',
u'้ฑ': u'้ข',
u'้ฒ': u'้ฆ',
u'้ณ': u'้',
u'้ด': u'้ท',
u'้ต': u'็ผฝ',
u'้ถ': u'้ณ',
u'้ท': u'้',
u'้ธ': u'้ฝ',
u'้น': u'้ธ',
u'้บ': u'้',
u'้ป': u'้ฝ',
u'้ผ': u'้ฌ',
u'้ฝ': u'้ญ',
u'้พ': u'้',
u'้ฟ': u'้ฟ',
u'้': u'้พ',
u'้': u'้ต',
u'้': u'้',
u'้': u'้ด',
u'้': u'้ ',
u'้
': u'้',
u'้': u'้',
u'้': u'้',
u'้': u'้ฐ',
u'้': u'้',
u'้': u'้',
u'้': u'้',
u'้': u'้ฎ',
u'้': u'้น',
u'้': u'้ธ',
u'้': u'้ถ',
u'้': u'้ฌ',
u'้': u'้ ',
u'้': u'้บ',
u'้': u'้ฉ',
u'้': u'้',
u'้': u'้ช',
u'้': u'้ฎ',
u'้': u'้',
u'้': u'้ฃ',
u'้': u'้',
u'้': u'้',
u'้': u'้บ',
u'้': u'้
',
u'้': u'้',
u'้': u'้ฑ',
u'้': u'้ฆ',
u'้ ': u'้ง',
u'้ก': u'้',
u'้ข': u'้',
u'้ฃ': u'้',
u'้ค': u'้',
u'้ฅ': u'้ฉ',
u'้ฆ': u'้',
u'้ง': u'้ต',
u'้จ': u'้',
u'้ฉ': u'้ฉ',
u'้ช': u'้ฟ',
u'้ซ': u'้',
u'้ฌ': u'้ป',
u'้ญ': u'้',
u'้ฎ': u'้',
u'้ฏ': u'้ซ',
u'้ฐ': u'้ธ',
u'้ฑ': u'้ฅ',
u'้ฒ': u'้',
u'้ณ': u'้',
u'้ด': u'้',
u'้ต': u'้จ',
u'้ถ': u'้',
u'้ท': u'้ฃ',
u'้ธ': u'้',
u'้น': u'้',
u'้บ': u'้ช',
u'้ป': u'้',
u'้ผ': u'้ธ',
u'้ฝ': u'้ฑ',
u'้พ': u'้',
u'้ฟ': u'้',
u'้': u'้ท',
u'้': u'้',
u'้': u'้ฐ',
u'้': u'้ฅ',
u'้': u'้ค',
u'้
': u'้',
u'้': u'้ฏ',
u'้': u'้จ',
u'้': u'้น',
u'้': u'้ผ',
u'้': u'้',
u'้': u'้',
u'้': u'้
',
u'้': u'้ถ',
u'้': u'้ฆ',
u'้': u'้ง',
u'้': u'้ณ',
u'้': u'้ป',
u'้': u'้',
u'้': u'้',
u'้': u'้ฆ',
u'้': u'้',
u'้': u'้',
u'้': u'้บ',
u'้': u'้ฉ',
u'้': u'้ฏ',
u'้': u'้จ',
u'้': u'้',
u'้': u'้ก',
u'้': u'้',
u'้': u'้',
u'้': u'้',
u'้ ': u'้ฉ',
u'้ก': u'้ซ',
u'้ข': u'้ฎ',
u'้ฃ': u'้ผ',
u'้ค': u'้',
u'้ฅ': u'้',
u'้ฆ': u'้ฆ',
u'้ง': u'้',
u'้จ': u'ๆด',
u'้ฉ': u'้',
u'้ช': u'้',
u'้ซ': u'้',
u'้ฌ': u'้',
u'้ญ': u'้ ',
u'้ฎ': u'้ต',
u'้ฏ': u'้ธ',
u'้ฐ': u'้ณ',
u'้ฑ': u'้',
u'้ฒ': u'้ฅ',
u'้ณ': u'้',
u'้ด': u'้',
u'้ต': u'้',
u'้ถ': u'้ถ',
u'้ท': u'้',
u'้ธ': u'้ค',
u'้น': u'้ฌ',
u'้บ': u'้พ',
u'้ป': u'้',
u'้ผ': u'้ช',
u'้ฝ': u'้ ',
u'้พ': u'้ฐ',
u'้ฟ': u'้',
u'้': u'้',
u'้': u'้',
u'้': u'้ค',
u'้': u'้ก',
u'้': u'้จ',
u'้
': u'้',
u'้': u'้',
u'้': u'้ฎ',
u'้': u'้',
u'้': u'้',
u'้': u'้ท',
u'้': u'้ฒ',
u'้': u'้ซ',
u'้': u'้ณ',
u'้': u'้ฟ',
u'้': u'้ฆ',
u'้': u'้ฌ',
u'้': u'้',
u'้': u'้ฐ',
u'้': u'้ต',
u'้': u'้',
u'้': u'้',
u'้': u'้ข',
u'้': u'้',
u'้': u'้',
u'้': u'้',
u'้': u'้ฐ',
u'้': u'้',
u'้': u'้ก',
u'้': u'้',
u'้': u'้',
u'้': u'้',
u'้ ': u'้',
u'้ก': u'้',
u'้ข': u'้',
u'้ฃ': u'้',
u'้ค': u'้ท',
u'้ฅ': u'้ฅ',
u'้ฆ': u'้',
u'้ง': u'้ญ',
u'้จ': u'้ ',
u'้ฉ': u'้น',
u'้ช': u'้น',
u'้ซ': u'้',
u'้ฌ': u'้',
u'้ญ': u'้ณ',
u'้ฎ': u'้ถ',
u'้ฏ': u'้ฒ',
u'้ฐ': u'้ฎ',
u'้ฑ': u'้ฟ',
u'้ฒ': u'้',
u'้ณ': u'้ฃ',
u'้ด': u'้',
u'้ต': u'้ฑ',
u'้ถ': u'้ฒ',
u'้ฟ': u'้ท',
u'้จ': u'้',
u'้ฉ': u'้',
u'้ช': u'้',
u'้ซ': u'้',
u'้ฌ': u'้',
u'้ญ': u'้',
u'้ฎ': u'ๅ',
u'้ฏ': u'้',
u'้ฐ': u'้',
u'้ฑ': u'้',
u'้ฒ': u'้',
u'้ณ': u'้',
u'้ด': u'้',
u'้ต': u'้',
u'้ถ': u'้',
u'้ท': u'ๆถ',
u'้ธ': u'้',
u'้น': u'้ฌง',
u'้บ': u'้จ',
u'้ป': u'่',
u'้ผ': u'้ฅ',
u'้ฝ': u'้ฉ',
u'้พ': u'้ญ',
u'้ฟ': u'้',
u'้': u'้ฅ',
u'้': u'้ฃ',
u'้': u'้ก',
u'้': u'้ซ',
u'้': u'้ฌฎ',
u'้
': u'้ฑ',
u'้': u'้ฌ',
u'้': u'้',
u'้': u'้พ',
u'้': u'้น',
u'้': u'้ถ',
u'้': u'้ฌฉ',
u'้': u'้ฟ',
u'้': u'้ฝ',
u'้': u'้ป',
u'้': u'้ผ',
u'้': u'้ก',
u'้': u'้',
u'้': u'้',
u'้': u'้ ',
u'้': u'้',
u'้': u'้',
u'้': u'้',
u'้': u'้',
u'้': u'้',
u'้': u'้',
u'้': u'้',
u'้': u'้ค',
u'้': u'้',
u'้ณ': u'้ฝ',
u'้ด': u'้ฐ',
u'้ต': u'้ฃ',
u'้ถ': u'้',
u'้
': u'้',
u'้': u'้ธ',
u'้': u'้ด',
u'้': u'้ณ',
u'้': u'้',
u'้': u'้',
u'้ง': u'้',
u'้จ': u'้',
u'้ฉ': u'้ช',
u'้': u'้จ',
u'้': u'้ฑ',
u'้ถ': u'้ธ',
u'้ฝ': u'้',
u'้พ': u'้ฃ',
u'้': u'้',
u'้ ': u'่ฎ',
u'้ณ': u'้',
u'้พ': u'้ง',
u'้': u'้ฝ',
u'้ก': u'้ข',
u'้ญ': u'้',
u'้': u'้',
u'้': u'้',
u'้ฅ': u'้จ',
u'้': u'้',
u'้': u'้ฝ',
u'้ฏ': u'้',
u'้ฒ': u'้',
u'้ฆ': u'้',
u'้ง': u'้',
u'้จ': u'้',
u'้ฉ': u'้',
u'้ช': u'้',
u'้ซ': u'้',
u'้ฌ': u'้',
u'้ต': u'้ป',
u'้กต': u'้ ',
u'้กถ': u'้ ',
u'้กท': u'้ ',
u'้กธ': u'้ ',
u'้กน': u'้
',
u'้กบ': u'้ ',
u'้กป': u'้ ',
u'้กผ': u'้ ',
u'้กฝ': u'้ ',
u'้กพ': u'้กง',
u'้กฟ': u'้ ',
u'้ข': u'้ ',
u'้ข': u'้ ',
u'้ข': u'้ ',
u'้ข': u'้ ',
u'้ข': u'้ ',
u'้ข
': u'้กฑ',
u'้ข': u'้ ',
u'้ข': u'้ ',
u'้ข': u'้ ธ',
u'้ข': u'้ ก',
u'้ข': u'้ ฐ',
u'้ข': u'้ ฒ',
u'้ข': u'้ ',
u'้ข': u'ๆฝ',
u'้ข': u'็ฒ',
u'้ข': u'้ ฆ',
u'้ข': u'้ ค',
u'้ข': u'้ ป',
u'้ข': u'้ ฎ',
u'้ข': u'้ น',
u'้ข': u'้ ท',
u'้ข': u'้ ด',
u'้ข': u'็ฉ',
u'้ข': u'้ก',
u'้ข': u'้ก',
u'้ข': u'้ก',
u'้ข': u'้ก',
u'้ข': u'้ก',
u'้ข': u'้ก',
u'้ข': u'้ก',
u'้ข': u'้กณ',
u'้ข': u'้กข',
u'้ข ': u'้ก',
u'้ขก': u'้ก',
u'้ขข': u'้กฅ',
u'้ขค': u'้กซ',
u'้ขฅ': u'้กฌ',
u'้ขฆ': u'้กฐ',
u'้ขง': u'้กด',
u'้ฃ': u'้ขจ',
u'้ฃ': u'้ขบ',
u'้ฃ': u'้ขญ',
u'้ฃ': u'้ขฎ',
u'้ฃ': u'้ขฏ',
u'้ฃ': u'้ขถ',
u'้ฃ': u'้ขธ',
u'้ฃ': u'้ขผ',
u'้ฃ': u'้ขป',
u'้ฃ': u'้ฃ',
u'้ฃ': u'้ฃ',
u'้ฃ': u'้ฃ',
u'้ฃ': u'้ฃ',
u'้ฃ': u'้ฃ',
u'้ฃจ': u'้ฅ',
u'้ค': u'้ฅ',
u'้ฅฃ': u'้ฃ ',
u'้ฅค': u'้ฃฃ',
u'้ฅฅ': u'้ฃข',
u'้ฅฆ': u'้ฃฅ',
u'้ฅง': u'้คณ',
u'้ฅจ': u'้ฃฉ',
u'้ฅฉ': u'้คผ',
u'้ฅช': u'้ฃช',
u'้ฅซ': u'้ฃซ',
u'้ฅฌ': u'้ฃญ',
u'้ฅญ': u'้ฃฏ',
u'้ฅฎ': u'้ฃฒ',
u'้ฅฏ': u'้ค',
u'้ฅฐ': u'้ฃพ',
u'้ฅฑ': u'้ฃฝ',
u'้ฅฒ': u'้ฃผ',
u'้ฅณ': u'้ฃฟ',
u'้ฅด': u'้ฃด',
u'้ฅต': u'้ค',
u'้ฅถ': u'้ฅ',
u'้ฅท': u'้ค',
u'้ฅธ': u'้ค',
u'้ฅน': u'้ค',
u'้ฅบ': u'้ค',
u'้ฅป': u'้ค',
u'้ฅผ': u'้ค
',
u'้ฅฝ': u'้ค',
u'้ฅพ': u'้ค',
u'้ฅฟ': u'้ค',
u'้ฆ': u'้ค',
u'้ฆ': u'้ค',
u'้ฆ': u'้ค',
u'้ฆ': u'้ค',
u'้ฆ': u'้ค',
u'้ฆ
': u'้คก',
u'้ฆ': u'้คจ',
u'้ฆ': u'้คท',
u'้ฆ': u'้ฅ',
u'้ฆ': u'้คถ',
u'้ฆ': u'้คฟ',
u'้ฆ': u'้ฅ',
u'้ฆ': u'้ฅ',
u'้ฆ': u'้ฅ',
u'้ฆ': u'้คบ',
u'้ฆ': u'้คพ',
u'้ฆ': u'้ฅ',
u'้ฆ': u'้ฅ',
u'้ฆ': u'้ฅ
',
u'้ฆ': u'้ฅ',
u'้ฆ': u'้ฅ',
u'้ฆ': u'้ฅข',
u'้ฉฌ': u'้ฆฌ',
u'้ฉญ': u'้ฆญ',
u'้ฉฎ': u'้ฆฑ',
u'้ฉฏ': u'้ฆด',
u'้ฉฐ': u'้ฆณ',
u'้ฉฑ': u'้ฉ
',
u'้ฉฒ': u'้ฆน',
u'้ฉณ': u'้ง',
u'้ฉด': u'้ฉข',
u'้ฉต': u'้ง',
u'้ฉถ': u'้ง',
u'้ฉท': u'้ง',
u'้ฉธ': u'้ง',
u'้ฉน': u'้ง',
u'้ฉบ': u'้จถ',
u'้ฉป': u'้ง',
u'้ฉผ': u'้ง',
u'้ฉฝ': u'้ง',
u'้ฉพ': u'้ง',
u'้ฉฟ': u'้ฉ',
u'้ช': u'้ง',
u'้ช': u'้ฉ',
u'้ช': u'็ฝต',
u'้ช': u'้งฐ',
u'้ช': u'้ฉ',
u'้ช
': u'้ฉ',
u'้ช': u'้งฑ',
u'้ช': u'้งญ',
u'้ช': u'้งข',
u'้ช': u'้ฉซ',
u'้ช': u'้ฉช',
u'้ช': u'้จ',
u'้ช': u'้ฉ',
u'้ช': u'้จ',
u'้ช': u'้งธ',
u'้ช': u'้งฟ',
u'้ช': u'้จ',
u'้ช': u'้จ',
u'้ช': u'้จ',
u'้ช': u'้จ
',
u'้ช': u'้จ',
u'้ช': u'้ฉ',
u'้ช': u'้ฉ',
u'้ช': u'้จ',
u'้ช': u'้จญ',
u'้ช': u'้จค',
u'้ช': u'้จท',
u'้ช': u'้จ',
u'้ช': u'้ฉ',
u'้ช': u'้จฎ',
u'้ช': u'้จซ',
u'้ช': u'้จธ',
u'้ช ': u'้ฉ',
u'้ชก': u'้จพ',
u'้ชข': u'้ฉ',
u'้ชฃ': u'้ฉ',
u'้ชค': u'้ฉ',
u'้ชฅ': u'้ฉฅ',
u'้ชฆ': u'้ฉฆ',
u'้ชง': u'้ฉค',
u'้ซ
': u'้ซ',
u'้ซ': u'้ซ',
u'้ซ': u'้ซ',
u'้ฌ': u'้ฌข',
u'้ญ': u'้ญ',
u'้ญ': u'้ญ',
u'้ฑผ': u'้ญ',
u'้ฑฝ': u'้ญ',
u'้ฑพ': u'้ญข',
u'้ฑฟ': u'้ญท',
u'้ฒ': u'้ญจ',
u'้ฒ': u'้ญฏ',
u'้ฒ': u'้ญด',
u'้ฒ': u'ไฐพ',
u'้ฒ': u'้ญบ',
u'้ฒ
': u'้ฎ',
u'้ฒ': u'้ฎ',
u'้ฒ': u'้ฏฐ',
u'้ฒ': u'้ฑธ',
u'้ฒ': u'้ฎ',
u'้ฒ': u'้ฎ',
u'้ฒ': u'้ฎ',
u'้ฒ': u'้ฎ',
u'้ฒ': u'้ฎ',
u'้ฒ': u'้ฑ',
u'้ฒ': u'้ฎ',
u'้ฒ': u'้ฎ',
u'้ฒ': u'้ฎญ',
u'้ฒ': u'้ฎ',
u'้ฒ': u'้ฎณ',
u'้ฒ': u'้ฎช',
u'้ฒ': u'้ฎ',
u'้ฒ': u'้ฎฆ',
u'้ฒ': u'้ฐ',
u'้ฒ': u'้ฎ',
u'้ฒ': u'้ฑ ',
u'้ฒ': u'้ฑญ',
u'้ฒ': u'้ฎซ',
u'้ฒ': u'้ฎฎ',
u'้ฒ': u'้ฎบ',
u'้ฒ': u'้ฏ',
u'้ฒ': u'้ฑ',
u'้ฒ ': u'้ฏ',
u'้ฒก': u'้ฑบ',
u'้ฒข': u'้ฐฑ',
u'้ฒฃ': u'้ฐน',
u'้ฒค': u'้ฏ',
u'้ฒฅ': u'้ฐฃ',
u'้ฒฆ': u'้ฐท',
u'้ฒง': u'้ฏ',
u'้ฒจ': u'้ฏ',
u'้ฒฉ': u'้ฏ',
u'้ฒช': u'้ฎถ',
u'้ฒซ': u'้ฏฝ',
u'้ฒฌ': u'้ฏ',
u'้ฒญ': u'้ฏ',
u'้ฒฎ': u'้ฏช',
u'้ฒฏ': u'้ฏ',
u'้ฒฐ': u'้ฏซ',
u'้ฒฑ': u'้ฏก',
u'้ฒฒ': u'้ฏค',
u'้ฒณ': u'้ฏง',
u'้ฒด': u'้ฏ',
u'้ฒต': u'้ฏข',
u'้ฒถ': u'้ฏฐ',
u'้ฒท': u'้ฏ',
u'้ฒธ': u'้ฏจ',
u'้ฒน': u'้ฐบ',
u'้ฒบ': u'้ฏด',
u'้ฒป': u'้ฏ',
u'้ฒผ': u'้ฑ',
u'้ฒฝ': u'้ฐ',
u'้ฒพ': u'้ฐ',
u'้ฒฟ': u'้ฑจ',
u'้ณ': u'้ฏท',
u'้ณ': u'้ฐฎ',
u'้ณ': u'้ฐ',
u'้ณ': u'้ฐ',
u'้ณ': u'้ฑท',
u'้ณ
': u'้ฐ',
u'้ณ': u'้ฐ',
u'้ณ': u'้ฐ',
u'้ณ': u'้ฐ',
u'้ณ': u'้ฑ',
u'้ณ': u'้ฏฟ',
u'้ณ': u'้ฐ ',
u'้ณ': u'้ฐฒ',
u'้ณ': u'้ฐญ',
u'้ณ': u'้ฐจ',
u'้ณ': u'้ฐฅ',
u'้ณ': u'้ฐฉ',
u'้ณ': u'้ฐ',
u'้ณ': u'้ฐ',
u'้ณ': u'้ฐณ',
u'้ณ': u'้ฐพ',
u'้ณ': u'้ฑ',
u'้ณ': u'้ฑ',
u'้ณ': u'้ฐป',
u'้ณ': u'้ฐต',
u'้ณ': u'้ฑ
',
u'้ณ': u'ไฒ',
u'้ณ': u'้ฐผ',
u'้ณ': u'้ฑ',
u'้ณ': u'้ฑ',
u'้ณ': u'้ฑ',
u'้ณ': u'้ฑ',
u'้ณ ': u'้ฑฏ',
u'้ณก': u'้ฑค',
u'้ณข': u'้ฑง',
u'้ณฃ': u'้ฑฃ',
u'้ธ': u'้ณฅ',
u'้ธ ': u'้ณฉ',
u'้ธก': u'้',
u'้ธข': u'้ณถ',
u'้ธฃ': u'้ณด',
u'้ธค': u'้ณฒ',
u'้ธฅ': u'้ท',
u'้ธฆ': u'้ด',
u'้ธง': u'้ถฌ',
u'้ธจ': u'้ด',
u'้ธฉ': u'้ด',
u'้ธช': u'้ดฃ',
u'้ธซ': u'้ถ',
u'้ธฌ': u'้ธ',
u'้ธญ': u'้ดจ',
u'้ธฎ': u'้ด',
u'้ธฏ': u'้ดฆ',
u'้ธฐ': u'้ด',
u'้ธฑ': u'้ด',
u'้ธฒ': u'้ด',
u'้ธณ': u'้ด',
u'้ธด': u'้ทฝ',
u'้ธต': u'้ด',
u'้ธถ': u'้ทฅ',
u'้ธท': u'้ท',
u'้ธธ': u'้ดฏ',
u'้ธน': u'้ดฐ',
u'้ธบ': u'้ต',
u'้ธป': u'้ดด',
u'้ธผ': u'้ต',
u'้ธฝ': u'้ดฟ',
u'้ธพ': u'้ธ',
u'้ธฟ': u'้ดป',
u'้น': u'้ต',
u'้น': u'้ต',
u'้น': u'้ธ',
u'้น': u'้ต',
u'้น': u'้ต ',
u'้น
': u'้ต',
u'้น': u'้ต',
u'้น': u'้ทณ',
u'้น': u'้ต',
u'้น': u'้ตก',
u'้น': u'้ตฒ',
u'้น': u'้ถ',
u'้น': u'้ตช',
u'้น': u'้ตพ',
u'้น': u'้ตฏ',
u'้น': u'้ตฌ',
u'้น': u'้ตฎ',
u'้น': u'้ถ',
u'้น': u'้ถ',
u'้น': u'้ตท',
u'้น': u'้ทซ',
u'้น': u'้ถ',
u'้น': u'้ถก',
u'้น': u'้ถ',
u'้น': u'้ถป',
u'้น': u'้ถ',
u'้น': u'้ถฟ',
u'้น': u'้ถฅ',
u'้น': u'้ถฉ',
u'้น': u'้ท',
u'้น': u'้ท',
u'้น': u'้ถฒ',
u'้น ': u'้ถน',
u'้นก': u'้ถบ',
u'้นข': u'้ท',
u'้นฃ': u'้ถผ',
u'้นค': u'้ถด',
u'้นฅ': u'้ท',
u'้นฆ': u'้ธ',
u'้นง': u'้ท',
u'้นจ': u'้ท',
u'้นฉ': u'้ทฏ',
u'้นช': u'้ทฆ',
u'้นซ': u'้ทฒ',
u'้นฌ': u'้ทธ',
u'้นญ': u'้ทบ',
u'้นฎ': u'ได',
u'้นฏ': u'้ธ',
u'้นฐ': u'้ทน',
u'้นฑ': u'้ธ',
u'้นฒ': u'้ธ',
u'้นณ': u'้ธ',
u'้นด': u'้ธ',
u'้นพ': u'้นบ',
u'้บฆ': u'้บฅ',
u'้บธ': u'้บฉ',
u'้ป': u'้ป',
u'้ป': u'้ป',
u'้ปก': u'้ปถ',
u'้ปฉ': u'้ปท',
u'้ปช': u'้ปฒ',
u'้ปพ': u'้ปฝ',
u'้ผ': u'้ปฟ',
u'้ผ': u'้ผ',
u'้ผ': u'้',
u'้ผน': u'้ผด',
u'้ฝ': u'้ฝ',
u'้ฝ': u'้ฝ',
u'้ฝ': u'้ฝ',
u'้ฝฟ': u'้ฝ',
u'้พ': u'้ฝ',
u'้พ': u'้ฝ',
u'้พ': u'้ฝ',
u'้พ': u'้ฝ',
u'้พ': u'้ฝก',
u'้พ
': u'้ฝ',
u'้พ': u'้ฝ ',
u'้พ': u'้ฝ',
u'้พ': u'้ฝฆ',
u'้พ': u'้ฝฌ',
u'้พ': u'้ฝช',
u'้พ': u'้ฝฒ',
u'้พ': u'้ฝท',
u'้พ': u'้พ',
u'้พ': u'้พ',
u'้พ': u'้พ',
u'้พ': u'้พ',
u'๐ ฎถ': u'ๅฐ',
u'๐ก': u'ๅฃ',
u'๐ฆ': u'ไ',
u'๐จฐพ': u'้ท',
u'๐จฐฟ': u'้ณ',
u'๐จฑ': u'๐จฅ',
u'๐จฑ': u'้ ',
u'๐จฑ': u'้',
u'๐จฑ': u'้ฒ',
u'๐จฑ': u'้ฏ',
u'๐จฑ
': u'้',
u'๐จฑ': u'้ถ',
u'๐จฑ': u'้',
u'๐จฑ': u'้',
u'๐จฑ': u'๐จงฑ',
u'๐จฑ': u'้',
u'๐จฑ': u'้',
u'๐จฑ': u'้ฏ',
u'๐จฑ': u'้ฎ',
u'๐จฑ': u'้',
u'๐จฑ': u'๐จซ',
u'๐จฑ': u'้',
u'๐จฑ': u'้',
u'๐จฑ': u'้',
u'๐จฑ': u'๐จฎ',
u'๐จธ': u'้',
u'๐จธ': u'้',
u'๐ฉผ': u'ไช',
u'๐ฉฝ': u'๐ฉช',
u'๐ฉพ': u'๐ฉข',
u'๐ฉฟ': u'ไช',
u'๐ฉ': u'ไช',
u'๐ฉ': u'๐ฉฃ',
u'๐ฉ': u'้ก',
u'๐ฉ': u'ไซด',
u'๐ฉฅ': u'้ขฐ',
u'๐ฉฆ': u'๐ฉ',
u'๐ฉง': u'๐ฉก',
u'๐ฉจ': u'๐ฉน',
u'๐ฉฉ': u'๐ฉ',
u'๐ฉช': u'้ขท',
u'๐ฉซ': u'้ขพ',
u'๐ฉฌ': u'๐ฉบ',
u'๐ฉญ': u'๐ฉ',
u'๐ฉฎ': u'ไฌ',
u'๐ฉฏ': u'ไฌ',
u'๐ฉฐ': u'๐ฉ',
u'๐ฉ
': u'๐ฉ',
u'๐ฉ ': u'๐ฉฆ',
u'๐ฉ ': u'ไญ',
u'๐ฉ ': u'ไญ',
u'๐ฉ ': u'๐ฉ',
u'๐ฉ ': u'้คธ',
u'๐ฉงฆ': u'๐ฉกบ',
u'๐ฉงจ': u'้ง',
u'๐ฉงฉ': u'๐ฉค',
u'๐ฉงช': u'ไฎพ',
u'๐ฉงซ': u'้ง',
u'๐ฉงฌ': u'๐ฉขก',
u'๐ฉงญ': u'ไญฟ',
u'๐ฉงฎ': u'๐ฉขพ',
u'๐ฉงฏ': u'้ฉ',
u'๐ฉงฐ': u'ไฎ',
u'๐ฉงฑ': u'๐ฉฅ',
u'๐ฉงฒ': u'้งง',
u'๐ฉงณ': u'๐ฉขธ',
u'๐ฉงด': u'้งฉ',
u'๐ฉงต': u'๐ฉขด',
u'๐ฉงถ': u'๐ฉฃ',
u'๐ฉงบ': u'้งถ',
u'๐ฉงป': u'๐ฉฃต',
u'๐ฉงผ': u'๐ฉฃบ',
u'๐ฉงฟ': u'ไฎ ',
u'๐ฉจ': u'้จ',
u'๐ฉจ': u'ไฎ',
u'๐ฉจ': u'้จ',
u'๐ฉจ': u'้จช',
u'๐ฉจ
': u'๐ฉคธ',
u'๐ฉจ': u'๐ฉค',
u'๐ฉจ': u'้จ',
u'๐ฉจ': u'๐ฉคฒ',
u'๐ฉจ': u'้จ',
u'๐ฉจ': u'๐ฉฅ',
u'๐ฉจ': u'๐ฉฅ',
u'๐ฉจ': u'๐ฉฅ',
u'๐ฉจ': u'ไฎณ',
u'๐ฉจ': u'๐ฉง',
u'๐ฉฝน': u'้ญฅ',
u'๐ฉฝบ': u'๐ฉตฉ',
u'๐ฉฝป': u'๐ฉตน',
u'๐ฉฝผ': u'้ฏถ',
u'๐ฉฝฝ': u'๐ฉถฑ',
u'๐ฉฝพ': u'้ฎ',
u'๐ฉฝฟ': u'๐ฉถฐ',
u'๐ฉพ': u'้ฎ',
u'๐ฉพ': u'้ฏ',
u'๐ฉพ': u'้ฎธ',
u'๐ฉพ': u'๐ฉทฐ',
u'๐ฉพ
': u'๐ฉธ',
u'๐ฉพ': u'๐ฉธฆ',
u'๐ฉพ': u'้ฏฑ',
u'๐ฉพ': u'ไฑ',
u'๐ฉพ': u'ไฑฌ',
u'๐ฉพ': u'ไฑฐ',
u'๐ฉพ': u'้ฑ',
u'๐ฉพ': u'๐ฉฝ',
u'๐ช': u'ไฒฐ',
u'๐ช': u'้ณผ',
u'๐ช': u'๐ฉฟช',
u'๐ช
': u'๐ชฆ',
u'๐ช': u'้ดฒ',
u'๐ช': u'้ด',
u'๐ช': u'๐ช',
u'๐ช': u'้ทจ',
u'๐ช': u'๐ชพ',
u'๐ช': u'๐ช',
u'๐ช': u'้ต',
u'๐ช': u'๐ช',
u'๐ช': u'๐ช',
u'๐ช': u'๐ช',
u'๐ช': u'้ท',
u'๐ช': u'๐ช',
u'๐ช': u'๐ช',
u'๐ช': u'๐ชณ',
u'๐ช': u'ไดฌ',
u'๐ช': u'้บฒ',
u'๐ช': u'้บจ',
u'๐ช': u'ไดด',
u'๐ช': u'้บณ',
u'๐ช': u'๐ช',
u'๐ช': u'๐ชฏ',
u'๐ช': u'ๅ',
u'๐ชก': u'ๅน',
u'๐ชขฎ': u'ๅ',
u'๐ชจ': u'ใ',
u'๐ชจ': u'ๅฑฉ',
u'๐ชป': u'็ฝ',
u'๐ชพข': u'็',
u'๐ซก': u'้ด',
u'๐ซ': u'ไฌ',
u'๐ซจ': u'็ตบ',
u'๐ซธ': u'็บ',
u'๐ซ': u'่ฅ',
u'๐ซจ': u'่ฆผ',
u'๐ซ': u'่จ',
u'๐ซ': u'๐งฆง',
u'๐ซข': u'่ญ',
u'๐ซฐ': u'่ซฐ',
u'๐ซฒ': u'่ฌ',
u'๐ซ': u'่นป',
u'๐ซ': u'่ป',
u'๐ซ': u'่ฝฃ',
u'๐ซ': u'่ปจ',
u'๐ซ': u'่ผ',
u'๐ซ': u'่ผฎ',
u'๐ซง': u'้',
u'๐ซฉ': u'้ฆ',
u'๐ซ': u'้',
u'๐ซ ': u'้คฆ',
u'๐ซฆ': u'้ค',
u'๐ซง': u'้ค',
u'๐ซฎ': u'้คญ',
u'๐ซด': u'้ฅ',
u'๐ซ': u'้ง',
u'๐ซฃ': u'้งป',
u'๐ซค': u'้จ',
u'๐ซจ': u'้จ ',
u'๐ซ': u'้ฑฎ',
u'๐ซ': u'้ญ',
u'๐ซ': u'้ฎ',
u'๐ซ': u'้ฎฐ',
u'๐ซ': u'้ฐค',
u'๐ซ': u'้ฏ',
u'๐ซ': u'้ณท',
u'๐ซ': u'้ด',
u'๐ซข': u'้ธ',
u'๐ซถ': u'้ถ',
u'๐ซธ': u'้ถ',
u'๎ ญ': u'ๆฃก',
u'0ๅคๅช': u'0ๅค้ป',
u'0ๅคฉๅ': u'0ๅคฉๅพ',
u'0ๅช': u'0้ป',
u'0ไฝ': u'0้ค',
u'1ๅคฉๅ': u'1ๅคฉๅพ',
u'1ๅช': u'1้ป',
u'1ไฝ': u'1้ค',
u'2ๅคฉๅ': u'2ๅคฉๅพ',
u'2ๅช': u'2้ป',
u'2ไฝ': u'2้ค',
u'3ๅคฉๅ': u'3ๅคฉๅพ',
u'3ๅช': u'3้ป',
u'3ไฝ': u'3้ค',
u'4ๅคฉๅ': u'4ๅคฉๅพ',
u'4ๅช': u'4้ป',
u'4ไฝ': u'4้ค',
u'5ๅคฉๅ': u'5ๅคฉๅพ',
u'5ๅช': u'5้ป',
u'5ไฝ': u'5้ค',
u'6ๅคฉๅ': u'6ๅคฉๅพ',
u'6ๅช': u'6้ป',
u'6ไฝ': u'6้ค',
u'7ๅคฉๅ': u'7ๅคฉๅพ',
u'7ๅช': u'7้ป',
u'7ไฝ': u'7้ค',
u'8ๅคฉๅ': u'8ๅคฉๅพ',
u'8ๅช': u'8้ป',
u'8ไฝ': u'8้ค',
u'9ๅคฉๅ': u'9ๅคฉๅพ',
u'9ๅช': u'9้ป',
u'9ไฝ': u'9้ค',
u'ยท่': u'ยท่',
u'ใๅ
ๅถ': u'ใๅๅถ',
u'ใๅ
ๅถ': u'ใๅๅถ',
u'ใๅช': u'ใ้ป',
u'ใไฝ': u'ใ้ค',
u'ไธๅนฒไบๅ': u'ไธไนพไบๆทจ',
u'ไธไผไบบ': u'ไธไผไบบ',
u'ไธไผๅคด': u'ไธไผ้ ญ',
u'ไธไผ้ฃ': u'ไธไผ้ฃ',
u'ไธๅนถ': u'ไธไฝต',
u'ไธไธช': u'ไธๅ',
u'ไธไธชๅ': u'ไธๅๆบ',
u'ไธๅบๅ': u'ไธๅบๅ',
u'ไธๅบๅฃ': u'ไธๅบๅฃ',
u'ไธๅบ็': u'ไธๅบ็',
u'ไธๅบ็': u'ไธๅบ็',
u'ไธๅบ็ฅๅฑฑ': u'ไธๅบ็ฅๅฑฑ',
u'ไธๅบ้': u'ไธๅบ้',
u'ไธๅ': u'ไธๅ',
u'ไธๅๅช': u'ไธๅๅช',
u'ไธๅ้ฑ': u'ไธๅ้ข',
u'ไธๅฐ้': u'ไธๅฐ่ฃก',
u'ไธไผ': u'ไธๅคฅ',
u'ไธๅคฉๅ': u'ไธๅคฉๅพ',
u'ไธๅคฉ้': u'ไธๅคฉ้',
u'ไธๅนฒไบบ': u'ไธๅนฒไบบ',
u'ไธๅนฒๅฎถไธญ': u'ไธๅนฒๅฎถไธญ',
u'ไธๅนฒๅผๅ
': u'ไธๅนฒๅผๅ
',
u'ไธๅนฒๅผๅญ': u'ไธๅนฒๅผๅญ',
u'ไธๅนฒ้จไธ': u'ไธๅนฒ้จไธ',
u'ไธๅ': u'ไธๅผ',
u'ไธๅซๅคด': u'ไธๅฝ้ ญ',
u'ไธๆๆ': u'ไธๆๆ',
u'ไธๆ ็พ่ท': u'ไธๆจน็พ็ฉซ',
u'ไธๅ': u'ไธๆบ',
u'ไธไบไธคไธ': u'ไธ็ญๅ
ฉ้',
u'ไธ็ฉๅ
ไธ็ฉ': u'ไธ็ฉๅไธ็ฉ',
u'ไธ็ฎไบ็ถ': u'ไธ็ฎไบ็ถ',
u'ไธๆ': u'ไธ็ดฎ',
u'ไธๅฒ': u'ไธ่ก',
u'ไธ้
้ข': u'ไธ้้บต',
u'ไธๅช': u'ไธ้ป',
u'ไธ้ข้ฃ': u'ไธ้ข้ฃ',
u'ไธไฝ': u'ไธ้ค',
u'ไธๅๅ้ง': u'ไธ้ซฎๅ้',
u'ไธๅ่ๆฃ': u'ไธ้ฌจ่ๆฃ',
u'ไธๅบๅญ': u'ไธ้ฝฃๅญ',
u'ไธไธๅฝๅฝ': u'ไธไธ็ถ็ถ',
u'ไธไธ': u'ไธไธ',
u'ไธไธช': u'ไธๅ',
u'ไธๅบๅ': u'ไธๅบๅ',
u'ไธๅบๅฃ': u'ไธๅบๅฃ',
u'ไธๅบ็': u'ไธๅบ็',
u'ไธๅบ็': u'ไธๅบ็',
u'ไธๅบ็ฅๅฑฑ': u'ไธๅบ็ฅๅฑฑ',
u'ไธๅบ้': u'ไธๅบ้',
u'ไธๅ': u'ไธๅ',
u'ไธๅคฉๅ': u'ไธๅคฉๅพ',
u'ไธๆ
ๅ
ญๆฌฒ': u'ไธๆ
ๅ
ญๆ
พ',
u'ไธๆ': u'ไธ็ดฎ',
u'ไธๅช': u'ไธ้ป',
u'ไธไฝ': u'ไธ้ค',
u'ไธไฟ': u'ไธไฟ',
u'ไธๆ': u'ไธๆ',
u'ไธไธช': u'ไธๅ',
u'ไธๅบๅ': u'ไธๅบๅ',
u'ไธๅบๅฃ': u'ไธๅบๅฃ',
u'ไธๅบ็': u'ไธๅบ็',
u'ไธๅบ็': u'ไธๅบ็',
u'ไธๅบ็ฅๅฑฑ': u'ไธๅบ็ฅๅฑฑ',
u'ไธๅบ้': u'ไธๅบ้',
u'ไธๅคฉๅ': u'ไธๅคฉๅพ',
u'ไธๅพไธ่พ': u'ไธๅพตไธ่พ',
u'ไธๅ': u'ไธๆบ',
u'ไธๆ': u'ไธ็ดฎ',
u'ไธ็ปๅ': u'ไธ็ตฑๆ',
u'ไธ็ปๅๅฒ': u'ไธ็ตฑๆญทๅฒ',
u'ไธๅค': u'ไธ่ค',
u'ไธๅช': u'ไธ้ป',
u'ไธไฝ': u'ไธ้ค',
u'ไธๆขๅฑฑ': u'ไธๆขๅฑฑ',
u'ไธๆข': u'ไธๆจ',
u'ไธ็ญพๅ': u'ไธ็ฐฝๅ',
u'ไธ็ญพๅญ': u'ไธ็ฐฝๅญ',
u'ไธ็ญพๅ': u'ไธ็ฐฝๅฏซ',
u'ไธ็ญพๆถ': u'ไธ็ฐฝๆถ',
u'ไธ็ญพ': u'ไธ็ฑค',
u'ไธ่ฏ': u'ไธ่ฅ',
u'ไธ่ฏพ้': u'ไธ่ชฒ้',
u'ไธ้ข็ณ': u'ไธ้ข็ณ',
u'ไธไป่ทฏ': u'ไธๅด่ทฏ',
u'ไธไบ': u'ไธๆผ',
u'ไธๆข': u'ไธๆจ',
u'ไธๆณจ่งฃ': u'ไธๆณจ่งฃ',
u'ไธ็ญพๅ': u'ไธ็ฐฝๅ',
u'ไธ็ญพๅญ': u'ไธ็ฐฝๅญ',
u'ไธ็ญพๅ': u'ไธ็ฐฝๅฏซ',
u'ไธ็ญพๆถ': u'ไธ็ฐฝๆถ',
u'ไธ็ญพ': u'ไธ็ฑค',
u'ไธ่ฏ': u'ไธ่ฅ',
u'ไธ่ฏพ้': u'ไธ่ชฒ้',
u'ไธๅนฒไธๅ': u'ไธไนพไธๆทจ',
u'ไธๅ ': u'ไธไฝ',
u'ไธๅ
่ชๅถ': u'ไธๅ
่ชๅถ',
u'ไธๅไป': u'ไธๅไป',
u'ไธๅไฝ ': u'ไธๅไฝ ',
u'ไธๅๅฅน': u'ไธๅๅฅน',
u'ไธๅๅฎ': u'ไธๅๅฎ',
u'ไธๅๆ': u'ไธๅๆ',
u'ไธๅๆฒก': u'ไธๅๆฒ',
u'ไธๅ็ฟปๅฐ': u'ไธๅ็ฟปๅฐ',
u'ไธๅ่ฎธ': u'ไธๅ่จฑ',
u'ไธๅ่ฐ': u'ไธๅ่ชฐ',
u'ไธๅ
ๅถ': u'ไธๅๅถ',
u'ไธๅ ่ชๅถ': u'ไธๅ ่ชๅถ',
u'ไธๅ ๅถๅ': u'ไธๅ ๅถๅ',
u'ไธๅ ๅ': u'ไธๅ ๅ',
u'ไธๅ ๅๅถ': u'ไธๅ ๅๅถ',
u'ไธๅ ็ฎ': u'ไธๅ ็ฎ',
u'ไธๅฅฝๅนฒๆถ': u'ไธๅฅฝๅนฒๆถ',
u'ไธๅฅฝๅนฒ้ข': u'ไธๅฅฝๅนฒ้ ',
u'ไธๅฅฝๅนฒ้ ': u'ไธๅฅฝๅนฒ้ ',
u'ไธๅซๆฏไธ': u'ไธๅซๆฏ้',
u'ไธๅฏ่ๆ ': u'ไธๅฏ่ๆ
',
u'ไธๅนฒไบ': u'ไธๅนฒไบ',
u'ไธๅนฒไป': u'ไธๅนฒไป',
u'ไธๅนฒไผ': u'ไธๅนฒไผ',
u'ไธๅนฒไฝ ': u'ไธๅนฒไฝ ',
u'ไธๅนฒๅฅน': u'ไธๅนฒๅฅน',
u'ไธๅนฒๅฎ': u'ไธๅนฒๅฎ',
u'ไธๅนฒๆ': u'ไธๅนฒๆ',
u'ไธๅนฒๆพ': u'ไธๅนฒๆพ',
u'ไธๅนฒๆฐ': u'ไธๅนฒๆพ',
u'ไธๅนฒๆถ': u'ไธๅนฒๆถ',
u'ไธๅนฒ็ ': u'ไธๅนฒ็ ',
u'ไธๅนฒ็ฏ': u'ไธๅนฒ็ฏ',
u'ไธๅนฒ้ข': u'ไธๅนฒ้ ',
u'ไธๅนฒ้ ': u'ไธๅนฒ้ ',
u'ไธๅนฒ': u'ไธๅนน',
u'ไธๅ': u'ไธๅผ',
u'ไธ้': u'ไธๆก',
u'ไธๆ่': u'ไธๆ่ฝ',
u'ไธๆญๅ': u'ไธๆท็ผ',
u'ไธๆฏๅช': u'ไธๆฏๅช',
u'ไธๅ': u'ไธๆบ',
u'ไธๅ็กฎ': u'ไธๆบ็ขบ',
u'ไธ่ฐท': u'ไธ็ฉ',
u'ไธ่ฏ่ๆ': u'ไธ่ฅ่็',
u'ไธๆ': u'ไธ่จ',
u'ไธ่ดๆๆ': u'ไธ่ฒ ๆๆ',
u'ไธ้ๅๅบ': u'ไธ้ๅผๆ
ถ',
u'ไธไธ': u'ไธ้',
u'ไธ้ๅฃฐ': u'ไธ้่ฒ',
u'ไธ้้ข': u'ไธ้ฝ้ผ',
u'ไธ้ฃๅนฒ่
': u'ไธ้ฃไนพ่
',
u'ไธๆ': u'ไธ้ฌฅ',
u'ไธไธ': u'ไธไธ',
u'ไธๅฉๅญ': u'ไธๅฉๅญ',
u'ไธๅนด': u'ไธๅนด',
u'ไธๆฅ': u'ไธๆฅ',
u'ไธๆฆ': u'ไธๆฆ',
u'ไธๆถ': u'ไธๆ',
u'ไธๆ': u'ไธๆ',
u'ไธ่กจๅ': u'ไธ่กจๅ',
u'ไธ่ง': u'ไธ่ง',
u'ไธไบ': u'ไธๆผ',
u'ไธ็ฐ่ฐท': u'ไธ็ฐ่ฐท',
u'ไธ็ๆฏ': u'ไธ็็',
u'ไธ็้': u'ไธ็่ฃก',
u'ไธ็บช้': u'ไธ็ด้',
u'ไธ็บช้่กจ': u'ไธ็ด้้ถ',
u'ไธขไธ': u'ไธ้',
u'ๅนถไธๅ': u'ไธฆไธๅ',
u'ๅนถๅญ็': u'ไธฆๅญ่',
u'ๅนถๆฐๅ
ฅๆท': u'ไธฆๆฐๅ
ฅๆพฑ',
u'ๅนถๅๅจ': u'ไธฆ็ผๅ',
u'ๅนถๅๅฑ': u'ไธฆ็ผๅฑ',
u'ๅนถๅ็ฐ': u'ไธฆ็ผ็พ',
u'ๅนถๅ่กจ': u'ไธฆ็ผ่กจ',
u'ไธญๅฝๅฝ้
ไฟกๆๆ่ตๅ
ฌๅธ': u'ไธญๅๅ้ไฟกๆๆ่ณๅ
ฌๅธ',
u'ไธญๅ้': u'ไธญๅ้',
u'ไธญๅ้่กจ้ข': u'ไธญๅ้่กจ้ข',
u'ไธญๅ้่กจ': u'ไธญๅ้้ถ',
u'ไธญๅ้้ข': u'ไธญๅ้้ข',
u'ไธญไป': u'ไธญๅด',
u'ไธญๅฒณ': u'ไธญๅถฝ',
u'ไธญๅบๅญ': u'ไธญๅบๅญ',
u'ไธญๆ้': u'ไธญๆ่ฃก',
u'ไธญไบ': u'ไธญๆผ',
u'ไธญ็ญพ': u'ไธญ็ฑค',
u'ไธญ็พๅ่กจ': u'ไธญ็พ็ผ่กจ',
u'ไธญ่ฏ': u'ไธญ่ฅ',
u'ไธญ้ฃๅ': u'ไธญ้ขจๅพ',
u'ไธฐๅ': u'ไธฐๅ',
u'ไธฐไปช': u'ไธฐๅ',
u'ไธฐๅ': u'ไธฐๅ',
u'ไธฐๅฐ': u'ไธฐๅฐ',
u'ไธฐๅงฟ': u'ไธฐๅงฟ',
u'ไธฐๅฎน': u'ไธฐๅฎน',
u'ไธฐๅบฆ': u'ไธฐๅบฆ',
u'ไธฐๆ
': u'ไธฐๆ
',
u'ไธฐๆ ': u'ไธฐๆจ',
u'ไธฐๆจไธๅก': u'ไธฐๆจไธๅก',
u'ไธฐๆ ไธๅก': u'ไธฐๆจไธๅก',
u'ไธฐ็ฅ': u'ไธฐ็ฅ',
u'ไธฐ่ธ': u'ไธฐ่ธ',
u'ไธฐ้': u'ไธฐ้',
u'ไธฐ้ต': u'ไธฐ้ป',
u'ไธฐ้ป': u'ไธฐ้ป',
u'ไธธ่ฏ': u'ไธธ่ฅ',
u'ไธน่ฏ': u'ไธน่ฅ',
u'ไธปไป': u'ไธปๅ',
u'ไธปๅนฒ': u'ไธปๅนน',
u'ไธป้ๅทฎ': u'ไธป้ๅทฎ',
u'ไธป้ๆฒ็บฟ': u'ไธป้ๆฒ็ท',
u'ไนไนๅฐไธ': u'ไน้บผๅฐไธ',
u'ไนไธๅช': u'ไนไธๅช',
u'ไนไบๅช': u'ไนไบๅช',
u'ไนๅ
ซไนๅช': u'ไนๅ
ซไนๅช',
u'ไนๅพ': u'ไนๅพต',
u'ไนๆ': u'ไน่จ',
u'ไน้': u'ไน้',
u'ไนไฝ': u'ไน้ค',
u'ไนไธ': u'ไนไธ',
u'ไนไธไนไป': u'ไนไธไน่ฎ',
u'ไนไธช': u'ไนๅ',
u'ไนๅบๅ': u'ไนๅบๅ',
u'ไนๅบๅฃ': u'ไนๅบๅฃ',
u'ไนๅบ็': u'ไนๅบ็',
u'ไนๅบ็': u'ไนๅบ็',
u'ไนๅบ็ฅๅฑฑ': u'ไนๅบ็ฅๅฑฑ',
u'ไนๅบ้': u'ไนๅบ้',
u'ไนๅ': u'ไนๅ',
u'ไนๅคฉๅ': u'ไนๅคฉๅพ',
u'ไน่ฐท': u'ไน็ฉ',
u'ไนๆ': u'ไน็ดฎ',
u'ไนๅช': u'ไน้ป',
u'ไนไฝ': u'ไน้ค',
u'ไน้พ่กจ่ก': u'ไน้พ่กจ่ก',
u'ไนๅ
ๅถ': u'ไนๅๅถ',
u'ไนๆไบ่': u'ไนๆไบ่ฝ',
u'ๅนฒๅนฒ': u'ไนพไนพ',
u'ๅนฒๅนฒๅฟ็': u'ไนพไนพๅ
็',
u'ๅนฒๅนฒๅๅ': u'ไนพไนพๆทจๆทจ',
u'ๅนฒไบ': u'ไนพไบ',
u'ๅนฒไธชๅค': u'ไนพๅๅค ',
u'ๅนฒๅฟ': u'ไนพๅ
',
u'ๅนฒๅฐ': u'ไนพๅฐ',
u'ๅนฒๅท': u'ไนพๅท',
u'ๅนฒๅป็': u'ไนพๅป็',
u'ๅนฒๅฅๅฅ': u'ไนพๅๅ',
u'ๅนฒๅฆ': u'ไนพๅฆ',
u'ๅนฒๅ็ไธๅทด': u'ไนพๅ่ไธๅทด',
u'ๅนฒๅ': u'ไนพๅ',
u'ๅนฒๅณ': u'ไนพๅณ',
u'ๅนฒๅฝ': u'ไนพๅฝ',
u'ๅนฒๅฅ': u'ไนพๅฅ',
u'ๅนฒๅญ': u'ไนพๅญ',
u'ๅนฒๅฑ': u'ไนพๅฑ',
u'ๅนฒๅผ': u'ไนพๅผ',
u'ๅนฒไน': u'ไนพๅฌ',
u'ๅนฒๅ': u'ไนพๅ',
u'ๅนฒๅ': u'ไนพๅฆ',
u'ๅนฒๅ': u'ไนพๅ',
u'ๅนฒๅไป': u'ไนพๅไป',
u'ๅนฒๅๆดๅ': u'ไนพๅๆฝๆทจ',
u'ๅนฒๅฐ': u'ไนพๅฐ',
u'ๅนฒๅค': u'ไนพๅค',
u'ๅนฒๅ': u'ไนพๅกข',
u'ๅนฒๅฅณ': u'ไนพๅฅณ',
u'ๅนฒๅฅดๆ': u'ไนพๅฅดๆ',
u'ๅนฒๅฆน': u'ไนพๅฆน',
u'ๅนฒๅง': u'ไนพๅง',
u'ๅนฒๅจ': u'ไนพๅจ',
u'ๅนฒๅฆ': u'ไนพๅชฝ',
u'ๅนฒๅญ': u'ไนพๅญ',
u'ๅนฒๅญฃ': u'ไนพๅญฃ',
u'ๅนฒๅฐธ': u'ไนพๅฑ',
u'ๅนฒๅฑๆฉ': u'ไนพๅฑๆฉ',
u'ๅนฒๅทด': u'ไนพๅทด',
u'ๅนฒๅผ': u'ไนพๅผ',
u'ๅนฒๅผ': u'ไนพๅผ',
u'ๅนฒๆฅ': u'ไนพๆฅ',
u'ๅนฒๆง': u'ไนพๆง',
u'ๅนฒๆ้ท': u'ไนพๆ้ท',
u'ๅนฒๆ': u'ไนพๆ',
u'ๅนฒๆๅฐ': u'ไนพๆๅฐ',
u'ๅนฒๆไธ': u'ไนพๆไธ',
u'ๅนฒๆฆ': u'ไนพๆฆ',
u'ๅนฒๆฏๅ': u'ไนพๆฏๅ',
u'ๅนฒๆฏๆฏ': u'ไนพๆฏๆฏ',
u'ๅนฒๆฒๆขๅญไธๅๆฒน': u'ไนพๆฒๆขๅญไธ่ณฃๆฒน',
u'ๅนฒๆ': u'ไนพๆ',
u'ๅนฒๆฑ': u'ไนพๆฑ',
u'ๅนฒๆ': u'ไนพๆ',
u'ๅนฒๆ': u'ไนพๆ',
u'ๅนฒๆๆฒ': u'ไนพๆๆฒ',
u'ๅนฒๆฏ': u'ไนพๆฏ',
u'ๅนฒๆ': u'ไนพๆ',
u'ๅนฒๆฏ': u'ไนพๆฏ',
u'ๅนฒๆด': u'ไนพๆด',
u'ๅนฒๆด็็ซ': u'ไนพๆด็็ซ',
u'ๅนฒๆข
': u'ไนพๆข
',
u'ๅนฒๆญป': u'ไนพๆญป',
u'ๅนฒๆฑ ': u'ไนพๆฑ ',
u'ๅนฒๆฒก': u'ไนพๆฒ',
u'ๅนฒๆด': u'ไนพๆด',
u'ๅนฒๆถธ': u'ไนพๆถธ',
u'ๅนฒๅ': u'ไนพๆถผ',
u'ๅนฒๅ': u'ไนพๆทจ',
u'ๅนฒๆธ ': u'ไนพๆธ ',
u'ๅนฒๆธด': u'ไนพๆธด',
u'ๅนฒๆฒ': u'ไนพๆบ',
u'ๅนฒๆผ': u'ไนพๆผ',
u'ๅนฒๆถฉ': u'ไนพๆพ',
u'ๅนฒๆนฟ': u'ไนพๆฟ',
u'ๅนฒ็ฌ': u'ไนพ็ฌ',
u'ๅนฒ็ญ': u'ไนพ็ฑ',
u'ๅนฒ็ฑ': u'ไนพ็ฑ',
u'ๅนฒ็ฏ็': u'ไนพ็็',
u'ๅนฒ็ฅ': u'ไนพ็ฅ',
u'ๅนฒ็ธ': u'ไนพ็ธ',
u'ๅนฒ็น': u'ไนพ็น',
u'ๅนฒ็ฝ': u'ไนพ็ฝ',
u'ๅนฒ็': u'ไนพ็',
u'ๅนฒ็ๅ': u'ไนพ็ๅ',
u'ๅนฒ็ๅญ': u'ไนพ็ๅญ',
u'ๅนฒไบง': u'ไนพ็ข',
u'ๅนฒ็ฐ': u'ไนพ็ฐ',
u'ๅนฒ็ฅ': u'ไนพ็ฅ',
u'ๅนฒ็ฆ': u'ไนพ็ฆ',
u'ๅนฒ็ช': u'ไนพ็',
u'ๅนฒ็ฃ': u'ไนพ็ฌ',
u'ๅนฒ็พ': u'ไนพ็ฎ',
u'ๅนฒ็ฝๅฟ': u'ไนพ็ฝๅ
',
u'ๅนฒ็': u'ไนพ็',
u'ๅนฒ็ผ': u'ไนพ็ผ',
u'ๅนฒ็ช็ผ': u'ไนพ็ช็ผ',
u'ๅนฒ็คผ': u'ไนพ็ฆฎ',
u'ๅนฒ็จฟ': u'ไนพ็จฟ',
u'ๅนฒ็ฌ': u'ไนพ็ฌ',
u'ๅนฒ็ญ': u'ไนพ็ญ',
u'ๅนฒ็ฏพ็': u'ไนพ็ฏพ็',
u'ๅนฒ็ฒ': u'ไนพ็ฒ',
u'ๅนฒ็ฒฎ': u'ไนพ็ณง',
u'ๅนฒ็ป': u'ไนพ็ต',
u'ๅนฒไธ': u'ไนพ็ตฒ',
u'ๅนฒ็บฒ': u'ไนพ็ถฑ',
u'ๅนฒ็ปท': u'ไนพ็น',
u'ๅนฒ่': u'ไนพ่',
u'ๅนฒ่็': u'ไนพ่็',
u'ๅนฒ่ก': u'ไนพ่ก',
u'ๅนฒ่ฅ': u'ไนพ่ฅ',
u'ๅนฒ่': u'ไนพ่',
u'ๅนฒ่ฑ': u'ไนพ่ฑ',
u'ๅนฒๅ': u'ไนพ่ป',
u'ๅนฒ่': u'ไนพ่',
u'ๅนฒ่จ่
': u'ไนพ่จ่',
u'ๅนฒ่ถ้ฑ': u'ไนพ่ถ้ข',
u'ๅนฒ่': u'ไนพ่',
u'ๅนฒ่': u'ไนพ่',
u'ๅนฒ่ฝ': u'ไนพ่ฝ',
u'ๅนฒ็': u'ไนพ่',
u'ๅนฒๅง': u'ไนพ่',
u'ๅนฒ่ช': u'ไนพ่ช',
u'ๅนฒ่': u'ไนพ่',
u'ๅนฒๅท': u'ไนพ่',
u'ๅนฒ่กๆต': u'ไนพ่กๆผฟ',
u'ๅนฒ่กฃ': u'ไนพ่กฃ',
u'ๅนฒ่ฃ': u'ไนพ่ฃ',
u'ๅนฒไบฒ': u'ไนพ่ฆช',
u'ไนพ่ฑกๅ': u'ไนพ่ฑกๆ',
u'ไนพ่ฑกๆ': u'ไนพ่ฑกๆ',
u'ๅนฒ่ด': u'ไนพ่ฒ',
u'ๅนฒ่ดง': u'ไนพ่ฒจ',
u'ๅนฒ่บ': u'ไนพ่บ',
u'ๅนฒ้ผ': u'ไนพ้ผ',
u'ๅนฒ้
ช': u'ไนพ้
ช',
u'ๅนฒ้
ตๆฏ': u'ไนพ้
ตๆฏ',
u'ๅนฒ้': u'ไนพ้',
u'ๅนฒ้': u'ไนพ้',
u'ๅนฒ้': u'ไนพ้',
u'ๅนฒ้ฟๅฅถ': u'ไนพ้ฟๅฅถ',
u'ๅนฒ้': u'ไนพ้',
u'ๅนฒ้ท': u'ไนพ้ท',
u'ๅนฒ็ต': u'ไนพ้ป',
u'ๅนฒ้ไนฑ': u'ไนพ้ไบ',
u'ๅนฒ้ขก': u'ไนพ้ก',
u'ๅนฒๅฐ': u'ไนพ้ขฑ',
u'ๅนฒ้ฅญ': u'ไนพ้ฃฏ',
u'ๅนฒ้ฆ': u'ไนพ้คจ',
u'ๅนฒ็ณ': u'ไนพ้คฑ',
u'ๅนฒ้ฆ': u'ไนพ้คพ',
u'ๅนฒ้ฑผ': u'ไนพ้ญ',
u'ๅนฒ้ฒ': u'ไนพ้ฎฎ',
u'ๅนฒ้ข': u'ไนพ้บต',
u'ไนฑๅ': u'ไบ้ซฎ',
u'ไนฑๅ': u'ไบ้ฌจ',
u'ไนฑๅไธ่ฟๆฅ': u'ไบ้ฌจไธ้ไพ',
u'ไบๅ
ๅถ': u'ไบๅๅถ',
u'ไบๆ
ๅนฒ่': u'ไบๆ
ๅนฒ่',
u'ไบๆๆๅทง': u'ไบๆ้ฌฅๅทง',
u'ไบ่ฟน': u'ไบ่ฟน',
u'ไบ้ฝๅนฒ่': u'ไบ้ฝๅนฒ่',
u'ไบไธๆฃฑ็ป': u'ไบไธ็จ็ป',
u'ไบไธช': u'ไบๅ',
u'ไบๅบๅ': u'ไบๅบๅ',
u'ไบๅบๅฃ': u'ไบๅบๅฃ',
u'ไบๅบ็': u'ไบๅบ็',
u'ไบๅบ็': u'ไบๅบ็',
u'ไบๅบ็ฅๅฑฑ': u'ไบๅบ็ฅๅฑฑ',
u'ไบๅบ้': u'ไบๅบ้',
u'ไบๅ': u'ไบๅ',
u'ไบๅชๅพ': u'ไบๅชๅพ',
u'ไบๅคฉๅ': u'ไบๅคฉๅพ',
u'ไบไป': u'ไบๅด',
u'ไบ็ผถ้ๆ': u'ไบ็ผถ้ๆ',
u'ไบ่ๆฟ': u'ไบ่ๆฟ',
u'ไบ่็ธๆ': u'ไบ่็ธ้ฌฅ',
u'ไบ้ๅคด': u'ไบ้้ ญ',
u'ไบ้้ ญ': u'ไบ้้ ญ',
u'ไบๅช': u'ไบ้ป',
u'ไบไฝ': u'ไบ้ค',
u'ไบไธน': u'ไบไธน',
u'ไบไบ': u'ไบไบ',
u'ไบไปๆณฐ': u'ไบไปๆณฐ',
u'ไบไฝณๅ': u'ไบไฝณๅ',
u'ไบไผๅฝ': u'ไบๅๅ',
u'ไบๅๅ': u'ไบๅๅ',
u'ไบๅ
้ ': u'ไบๅ
้ ',
u'ไบๅ
่ฟ': u'ไบๅ
้ ',
u'ไบๅ
-่ญๅค็ธฃ': u'ไบๅ
-่ญๅค็ธฃ',
u'ไบๅ
-ๅ
ฐๅคๅฟ': u'ไบๅ
-่ญๅค็ธฃ',
u'ไบๅ
ๅ': u'ไบๅ
ๅ',
u'ไบๅ': u'ไบๅ',
u'ไบๅๅฅ': u'ไบๅๅฅ',
u'ไบๅ': u'ไบๅ',
u'ไบๅ่': u'ไบๅ่',
u'ไบๅ ๅ
': u'ไบๅ ๅ
',
u'ไบๅฐ็
': u'ไบๅฐ็
',
u'ไบๅฐ็': u'ไบๅฐ็
',
u'ไบๅณไปป': u'ไบๅณไปป',
u'ไบๅ': u'ไบๅ',
u'ไบๅๆตท': u'ไบๅๆตท',
u'ไบๅฝๆกข': u'ไบๅๆฅจ',
u'ไบๅๆฅจ': u'ไบๅๆฅจ',
u'ไบๅ': u'ไบๅ
',
u'ไบๅ
': u'ไบๅ
',
u'ไบๅคงๅฏถ': u'ไบๅคงๅฏถ',
u'ไบๅคงๅฎ': u'ไบๅคงๅฏถ',
u'ไบๅคฉไป': u'ไบๅคฉไป',
u'ไบๅฅๅบๆๅ
': u'ไบๅฅๅบซๆๅ
',
u'ไบๅฅๅบซๆๅ
': u'ไบๅฅๅบซๆๅ
',
u'ไบๅง': u'ไบๅง',
u'ไบๅจ': u'ไบๅจ',
u'ไบๅจ': u'ไบๅจ',
u'ไบๅญๅ': u'ไบๅญๅ',
u'ไบๅญๅ
ผ': u'ไบๅญๅ
ผ',
u'ไบๅญธๅฟ ': u'ไบๅญธๅฟ ',
u'ไบๅญฆๅฟ ': u'ไบๅญธๅฟ ',
u'ไบๅฎถๅ ก': u'ไบๅฎถๅ ก',
u'ไบๅฏ': u'ไบๅฏ',
u'ไบๅฐไผ': u'ไบๅฐๅ',
u'ไบๅฐๅ': u'ไบๅฐๅ',
u'ไบๅฐๅฝค': u'ไบๅฐๅฝค',
u'ไบๅฑฑ': u'ไบๅฑฑ',
u'ไบๅฑฑๅฝ': u'ไบๅฑฑๅ',
u'ไบๅฑฑๅ': u'ไบๅฑฑๅ',
u'ไบๅธฅ': u'ไบๅธฅ',
u'ไบๅธ
': u'ไบๅธฅ',
u'ไบๅนผ่ป': u'ไบๅนผ่ป',
u'ไบๅนผๅ': u'ไบๅนผ่ป',
u'ไบๅบท้': u'ไบๅบท้',
u'ไบๅปฃๆดฒ': u'ไบๅปฃๆดฒ',
u'ไบๅนฟๆดฒ': u'ไบๅปฃๆดฒ',
u'ไบๅผๆ': u'ไบๅผๆ',
u'ไบๅพๆฟ': u'ไบๅพๆฟ',
u'ไบไปๆฟ': u'ไบๅพๆฟ',
u'ไบๅพทๆตท': u'ไบๅพทๆตท',
u'ไบๅฟๅฎ': u'ไบๅฟๅฏง',
u'ไบๅฟๅฏง': u'ไบๅฟๅฏง',
u'ไบๆ': u'ไบๆ',
u'ไบๆ
่ก': u'ไบๆ
่ก',
u'ไบๆ
ง': u'ไบๆ
ง',
u'ไบๆ้พ': u'ไบๆ้พ',
u'ไบๆ้พ': u'ไบๆ้พ',
u'ไบๆฏ': u'ไบๆฏ',
u'ไบๆฏๆญฆ': u'ไบๆฏๆญฆ',
u'ไบๆ': u'ไบๆ',
u'ไบๆไธญ': u'ไบๆไธญ',
u'ไบๆ': u'ไบๆ',
u'ไบๆฏๅกๅพท': u'ไบๆฏๅกๅพท',
u'ไบๆฏ็บณๅฐๆฏ่ด้': u'ไบๆฏ็ด็พๆฏ่ฒ้',
u'ไบๆฏ็ด็พๆฏ่ฒ้': u'ไบๆฏ็ด็พๆฏ่ฒ้',
u'ไบๆฏ่พพๅฐ': u'ไบๆฏ้็พ',
u'ไบๆฏ้็พ': u'ไบๆฏ้็พ',
u'ไบๆๆถ': u'ไบๆๆฟค',
u'ไบๆๆฟค': u'ไบๆๆฟค',
u'ไบๆฏไน': u'ไบๆฏไน',
u'ไบๆจๆฅ ': u'ไบๆจๆฅ ',
u'ไบๆด': u'ไบๆด',
u'ไบๆๆณณ': u'ไบๆๆณณ',
u'ไบไผๆณณ': u'ไบๆๆณณ',
u'ไบๆ นไผ': u'ไบๆ นๅ',
u'ไบๆ นๅ': u'ไบๆ นๅ',
u'ไบๆ ผ': u'ไบๆ ผ',
u'ไบๆจ': u'ไบๆจ',
u'ไบๆ ๆด': u'ไบๆจนๆฝ',
u'ไบๆจนๆฝ': u'ไบๆจนๆฝ',
u'ไบๆฌฃๆบ': u'ไบๆฌฃๆบ',
u'ไบๆญฃๅ': u'ไบๆญฃๆ',
u'ไบๆญฃๆ': u'ไบๆญฃๆ',
u'ไบๆญฃๆ': u'ไบๆญฃๆ',
u'ไบๅฝ': u'ไบๆญธ',
u'ไบๆฐธๆณข': u'ไบๆฐธๆณข',
u'ไบๆฑ้': u'ไบๆฑ้',
u'ไบๆณข': u'ไบๆณข',
u'ไบๆดชๅบ': u'ไบๆดชๅ',
u'ไบๆดชๅ': u'ไบๆดชๅ',
u'ไบๆตฉๅจ': u'ไบๆตฉๅจ',
u'ไบๆตทๆด': u'ไบๆตทๆด',
u'ไบๆนๅ
ฐ': u'ไบๆน่ญ',
u'ไบๆน่ญ': u'ไบๆน่ญ',
u'ไบๆผข่ถ
': u'ไบๆผข่ถ
',
u'ไบๆฑ่ถ
': u'ไบๆผข่ถ
',
u'ไบๆณฝๅฐ': u'ไบๆพค็พ',
u'ไบๆพค็พ': u'ไบๆพค็พ',
u'ไบๆถ': u'ไบๆฟค',
u'ไบๆฟค': u'ไบๆฟค',
u'ไบ็พๅฒ': u'ไบ็พๅฒ',
u'ไบๅฐๅฒ': u'ไบ็พๅฒ',
u'ไบๅฐๆ น': u'ไบ็พๆ น',
u'ไบ็พๆ น': u'ไบ็พๆ น',
u'ไบๅฐ้ๅ
': u'ไบ็พ้ๅ
',
u'ไบ็พ้ๅ
': u'ไบ็พ้ๅ
',
u'ไบ็นๆฃฎ': u'ไบ็นๆฃฎ',
u'ไบ็็ซ': u'ไบ็็ซ',
u'ไบ็ฐ': u'ไบ็ฐ',
u'ไบ็ฆ': u'ไบ็ฆ',
u'ไบ็งๆ': u'ไบ็งๆ',
u'ไบ็ด ็ง': u'ไบ็ด ็ง',
u'ไบ็พไบบ': u'ไบ็พไบบ',
u'ไบ่ฅๆจ': u'ไบ่ฅๆจ',
u'ไบ่ญ้': u'ไบ่ญ้',
u'ไบ่ซ้': u'ไบ่ญ้',
u'ไบ่กก': u'ไบ่กก',
u'ไบ่ฅฟ็ฟฐ': u'ไบ่ฅฟ็ฟฐ',
u'ไบ่ฌ': u'ไบ่ฌ',
u'ไบ่ฐฆ': u'ไบ่ฌ',
u'ไบ่ฒ็พ': u'ไบ่ฒ็พ',
u'ไบ่ดๅฐ': u'ไบ่ฒ็พ',
u'ไบ่ต ': u'ไบ่ด',
u'ไบ่ด': u'ไบ่ด',
u'ไบ่ถ': u'ไบ่ถ',
u'ไบๅ': u'ไบ่ป',
u'ไบ่ป': u'ไบ่ป',
u'ไบ้ๆณ': u'ไบ้ๆณ',
u'ไบ่ฟไผ': u'ไบ้ ๅ',
u'ไบ้ ๅ': u'ไบ้ ๅ',
u'ไบ้ฝ็ธฃ': u'ไบ้ฝ็ธฃ',
u'ไบ้ฝๅฟ': u'ไบ้ฝ็ธฃ',
u'ไบ้ๅฏ': u'ไบ้ๅฏ',
u'ไบ้': u'ไบ้',
u'ไบ้ๆ': u'ไบ้ๆ',
u'ไบๅๆ': u'ไบ้ๆ',
u'ไบ้ๅฏฐ': u'ไบ้ๅฏฐ',
u'ไบ้็ฏ': u'ไบ้็ฐ',
u'ไบ้็ฐ': u'ไบ้็ฐ',
u'ไบ้': u'ไบ้',
u'ไบ้้': u'ไบ้้',
u'ไบ้ๆฏๅฑ่': u'ไบ้ๆฏๅฑ่',
u'ไบ้ฆๆฏๅฑ่ฑ': u'ไบ้ๆฏๅฑ่',
u'ไบ้ฃๆฟ': u'ไบ้ขจๆฟ',
u'ไบ้ขจๆฟ': u'ไบ้ขจๆฟ',
u'ไบ้ฃ': u'ไบ้ฃ',
u'ไบไฝๆฒๆ': u'ไบ้คๆฒๆ',
u'ไบๅคๆก': u'ไบ้ณณๆก',
u'ไบ้ณณๆก': u'ไบ้ณณๆก',
u'ไบๅค่ณ': u'ไบ้ณณ่ณ',
u'ไบ้ณณ่ณ': u'ไบ้ณณ่ณ',
u'ไบ้ปๅฅฅ': u'ไบ้ปๅฅง',
u'ไบ้ปๅฅง': u'ไบ้ปๅฅง',
u'ไบไน': u'ไบไน',
u'ไบไบ': u'ไบไบ',
u'ไบไฝ': u'ไบไฝ',
u'ไบไธบ': u'ไบ็บ',
u'ไบ็บ': u'ไบ็บ',
u'ไบ็ถ': u'ไบ็ถ',
u'ไบๅฐ': u'ไบ็พ',
u'ไบ๏ผ': u'ไบ๏ผ',
u'ไบไธช': u'ไบๅ',
u'ไบๅบๅ': u'ไบๅบๅ',
u'ไบๅบๅฃ': u'ไบๅบๅฃ',
u'ไบๅบ็': u'ไบๅบ็',
u'ไบๅบ็': u'ไบๅบ็',
u'ไบๅบ็ฅๅฑฑ': u'ไบๅบ็ฅๅฑฑ',
u'ไบๅบ้': u'ไบๅบ้',
u'ไบๅ': u'ไบๅ',
u'ไบๅคฉๅ': u'ไบๅคฉๅพ',
u'ไบๅฒณ': u'ไบๅถฝ',
u'ไบ่ฐท': u'ไบ็ฉ',
u'ไบๆ': u'ไบ็ดฎ',
u'ไบ่ก็ๅ
': u'ไบ่ก็ๅ',
u'ไบ่ฐท็ๅ่ก': u'ไบ่ฐท็ๅ่ก',
u'ไบ่ฐท็ๅ่ก': u'ไบ่ฐท็ๅ่ก',
u'ไบๅช': u'ไบ้ป',
u'ไบไฝ': u'ไบ้ค',
u'ไบๅบ': u'ไบ้ฝฃ',
u'ไบๅนฒๆง่ดฅ': u'ไบๆฆฆๆงๆ',
u'ไบ้': u'ไบ่ฃก',
u'ไบไบ': u'ไบๆผ',
u'ไบ็พๅฐผไบๅ': u'ไบ็พๅฐผไบๆ',
u'ไบคๆ': u'ไบค่จ',
u'ไบคๆธธ': u'ไบค้',
u'ไบคๅ': u'ไบค้ฌจ',
u'ไบฆไบ': u'ไบฆไบ',
u'ไบฆๅบไบฆ่ฐ': u'ไบฆ่ไบฆ่ซง',
u'ไบฎไธ': u'ไบฎ้',
u'ไบฎ้': u'ไบฎ้',
u'ไบบไบ': u'ไบบไบ',
u'ไบบๅๅ ': u'ไบบๅๅ ',
u'ไบบๅๅฑ': u'ไบบๅๅฑ',
u'ไบบๅๆ': u'ไบบๅๆฐ',
u'ไบบๅๆ': u'ไบบๅๆ',
u'ไบบๅๆฟ': u'ไบบๅๆฟ',
u'ไบบๅ็
ง': u'ไบบๅ็
ง',
u'ไบบๅ็': u'ไบบๅ็',
u'ไบบๅ็ฆ
': u'ไบบๅ็ฆช',
u'ไบบๅ่': u'ไบบๅ่',
u'ไบบๅไธ': u'ไบบๅ่',
u'ไบบๅ่ง': u'ไบบๅ่ฆ',
u'ไบบๅ่ง': u'ไบบๅ่ง',
u'ไบบๅ่ฐ': u'ไบบๅ่ฌ',
u'ไบบๅ่ฎฎ': u'ไบบๅ่ญฐ',
u'ไบบๅ่ต': u'ไบบๅ่ด',
u'ไบบๅ้': u'ไบบๅ้',
u'ไบบๅ้': u'ไบบๅ้ธ',
u'ไบบๅ้
': u'ไบบๅ้
',
u'ไบบๅ้
': u'ไบบๅ้ฑ',
u'ไบบๅฆ้ฃๅๅ
ฅๆฑไบ': u'ไบบๅฆ้ขจๅพๅ
ฅๆฑ้ฒ',
u'ไบบๆฌฒ': u'ไบบๆ
พ',
u'ไบบ็ฉๅฟ': u'ไบบ็ฉ่ช',
u'ไบบๅ': u'ไบบ่',
u'ไป้ฆ้ข': u'ไป้ฆ้บต',
u'ไปไน': u'ไป้บผ',
u'ไปไป': u'ไป่ฎ',
u'ไปๅ
ๅถ': u'ไปๅๅถ',
u'ไป้': u'ไป้',
u'ไปๆ': u'ไป่จ',
u'ไปๅ': u'ไปๅ',
u'ไป่ฏ': u'ไป่ฅ',
u'ไปฃ็ ่กจ': u'ไปฃ็ขผ่กจ',
u'ไปฃ่กจ': u'ไปฃ่กจ',
u'ไปคไบบๅๆ': u'ไปคไบบ้ซฎๆ',
u'ไปฅ่ชๅถ': u'ไปฅ่ชๅถ',
u'ไปฐ่ฏ': u'ไปฐ่ฅ',
u'ไปถ้': u'ไปถ้',
u'ไปปไฝ่กจๆผ': u'ไปปไฝ่กจๆผ',
u'ไปปไฝ่กจ็คบ': u'ไปปไฝ่กจ็คบ',
u'ไปปไฝ่กจ้': u'ไปปไฝ่กจ้',
u'ไปปไฝ่กจ่พพ': u'ไปปไฝ่กจ้',
u'ไปปไฝ่กจ': u'ไปปไฝ้ถ',
u'ไปปไฝ้': u'ไปปไฝ้',
u'ไปปไฝ้่กจ': u'ไปปไฝ้้ถ',
u'ไปปๆไบ': u'ไปปๆๆผ',
u'ไปปไบ': u'ไปปๆผ',
u'ไปฟๅถ': u'ไปฟ่ฃฝ',
u'ไผๅ': u'ไผๅ',
u'ไผไบๆนๅบ': u'ไผไบๆนๅบ',
u'ไผๅบ้ข': u'ไผๅบ้บต',
u'ไผๆฏๅ
ฐๆๅ': u'ไผๆฏ่ญๆๆ',
u'ไผๆฏๅ
ฐๆๅๅฒ': u'ไผๆฏ่ญๆๆญทๅฒ',
u'ไผๆฏๅ
ฐๅ': u'ไผๆฏ่ญๆ',
u'ไผๆฏๅ
ฐๅๅฒ': u'ไผๆฏ่ญๆญทๅฒ',
u'ไผ้': u'ไผ้ฌฑ',
u'ไผๅ ': u'ไผๅ ',
u'ไผ็ฝชๅๆฐ': u'ไผ็ฝชๅผๆฐ',
u'ไผๅพ': u'ไผๅพต',
u'ไผๅคด': u'ไผ้ ญ',
u'ไผดๆธธ': u'ไผด้',
u'ไผผไบ': u'ไผผๆผ',
u'ไฝไบ': u'ไฝไบ',
u'ๅธไบ': u'ไฝๆผ',
u'ๅธ้': u'ไฝ้',
u'ๅธ้ทใ': u'ไฝ้ทใ',
u'ๅธ้ทใ': u'ไฝ้ทใ',
u'ๅธ้ทๅฐ้': u'ไฝ้ทๅฐ้',
u'ๅธ้ท็': u'ไฝ้ท็',
u'ๅธ้ท่': u'ไฝ้ท่',
u'ๅธ้ท่ฐ': u'ไฝ้ท่ฆ',
u'ๅธ้ท้ๅบฆ': u'ไฝ้ท้ๅบฆ',
u'ๅธ้ท๏ผ': u'ไฝ้ท๏ผ',
u'ๅธ้ท๏ผ': u'ไฝ้ท๏ผ',
u'ไฝไบ': u'ไฝๆผ',
u'ไฝๅ': u'ไฝๆบ',
u'ไฝๆดผ': u'ไฝๆดผ',
u'ไฝๆ': u'ไฝ็ดฎ',
u'ๅ 0': u'ไฝ0',
u'ๅ 1': u'ไฝ1',
u'ๅ 2': u'ไฝ2',
u'ๅ 3': u'ไฝ3',
u'ๅ 4': u'ไฝ4',
u'ๅ 5': u'ไฝ5',
u'ๅ 6': u'ไฝ6',
u'ๅ 7': u'ไฝ7',
u'ๅ 8': u'ไฝ8',
u'ๅ 9': u'ไฝ9',
u'ๅ A': u'ไฝA',
u'ๅ B': u'ไฝB',
u'ๅ C': u'ไฝC',
u'ๅ D': u'ไฝD',
u'ๅ E': u'ไฝE',
u'ๅ F': u'ไฝF',
u'ๅ G': u'ไฝG',
u'ๅ H': u'ไฝH',
u'ๅ I': u'ไฝI',
u'ๅ J': u'ไฝJ',
u'ๅ K': u'ไฝK',
u'ๅ L': u'ไฝL',
u'ๅ M': u'ไฝM',
u'ๅ N': u'ไฝN',
u'ๅ O': u'ไฝO',
u'ๅ P': u'ไฝP',
u'ๅ Q': u'ไฝQ',
u'ๅ R': u'ไฝR',
u'ๅ S': u'ไฝS',
u'ๅ T': u'ไฝT',
u'ๅ U': u'ไฝU',
u'ๅ V': u'ไฝV',
u'ๅ W': u'ไฝW',
u'ๅ X': u'ไฝX',
u'ๅ Y': u'ไฝY',
u'ๅ Z': u'ไฝZ',
u'ๅ a': u'ไฝa',
u'ๅ b': u'ไฝb',
u'ๅ c': u'ไฝc',
u'ๅ d': u'ไฝd',
u'ๅ e': u'ไฝe',
u'ๅ f': u'ไฝf',
u'ๅ g': u'ไฝg',
u'ๅ h': u'ไฝh',
u'ๅ i': u'ไฝi',
u'ๅ j': u'ไฝj',
u'ๅ k': u'ไฝk',
u'ๅ l': u'ไฝl',
u'ๅ m': u'ไฝm',
u'ๅ n': u'ไฝn',
u'ๅ o': u'ไฝo',
u'ๅ p': u'ไฝp',
u'ๅ q': u'ไฝq',
u'ๅ r': u'ไฝr',
u'ๅ s': u'ไฝs',
u'ๅ t': u'ไฝt',
u'ๅ u': u'ไฝu',
u'ๅ v': u'ไฝv',
u'ๅ w': u'ไฝw',
u'ๅ x': u'ไฝx',
u'ๅ y': u'ไฝy',
u'ๅ z': u'ไฝz',
u'ๅ ใ': u'ไฝใ',
u'ๅ ไธ': u'ไฝไธ',
u'ๅ ไธ': u'ไฝไธ',
u'ๅ ไธ': u'ไฝไธ',
u'ๅ ไธ': u'ไฝไธ',
u'ๅ ไธ้ฃ': u'ไฝไธ้ขจ',
u'ๅ ไธ': u'ไฝไธ',
u'ๅ ไธ้ฃ': u'ไฝไธ้ขจ',
u'ๅ ไธๅ ': u'ไฝไธไฝ',
u'ๅ ไธ่ถณ': u'ไฝไธ่ถณ',
u'ๅ ไธ็': u'ไฝไธ็',
u'ๅ ไธญ': u'ไฝไธญ',
u'ๅ ไธป': u'ไฝไธป',
u'ๅ ไน': u'ไฝไน',
u'ๅ ไบ': u'ไฝไบ',
u'ๅ ไบ': u'ไฝไบ',
u'ๅ ไบ': u'ไฝไบ',
u'ๅ ไบบไพฟๅฎ': u'ไฝไบบไพฟๅฎ',
u'ๅ ไฝ': u'ไฝไฝ',
u'ๅ ไฝ': u'ไฝไฝ',
u'ๅ ๅ ': u'ไฝไฝ',
u'ๅ ไพฟๅฎ': u'ไฝไพฟๅฎ',
u'ๅ ไฟ': u'ไฝไฟ',
u'ๅ ไธช': u'ไฝๅ',
u'ๅ ไธชไฝ': u'ไฝๅไฝ',
u'ๅ ๅ่ฝฆ': u'ไฝๅ่ป',
u'ๅ ไบฟ': u'ไฝๅ',
u'ๅ ไผ': u'ไฝๅช',
u'ๅ ๅ
': u'ไฝๅ
',
u'ๅ ๅ
': u'ไฝๅ
',
u'ๅ ๅ
จ': u'ไฝๅ
จ',
u'ๅ ไธค': u'ไฝๅ
ฉ',
u'ๅ ๅ
ซ': u'ไฝๅ
ซ',
u'ๅ ๅ
ญ': u'ไฝๅ
ญ',
u'ๅ ๅ': u'ไฝๅ',
u'ๅ ๅฐ': u'ไฝๅฐ',
u'ๅ ๅ ': u'ไฝๅ ',
u'ๅ ๅฃ': u'ไฝๅฃ',
u'ๅ ๅ': u'ไฝๅ',
u'ๅ ๅ': u'ไฝๅ',
u'ๅ ๅ': u'ไฝๅ',
u'ๅ ๅ': u'ไฝๅ',
u'ๅ ๅ': u'ไฝๅ',
u'ๅ ๅฐ': u'ไฝๅฐ',
u'ๅ ๅป': u'ไฝๅป',
u'ๅ ๅ': u'ไฝๅ',
u'ๅ ๅฐ': u'ไฝๅฐ',
u'ๅ ๅบไนณ': u'ไฝๅบไนณ',
u'ๅ ๅซ': u'ไฝๅ',
u'ๅ ๅ': u'ไฝๅ',
u'ๅ ๅฝๅ
': u'ไฝๅๅ
ง',
u'ๅ ๅจ': u'ไฝๅจ',
u'ๅ ๅฐ': u'ไฝๅฐ',
u'ๅ ๅบ': u'ไฝๅ ด',
u'ๅ ๅ': u'ไฝๅฃ',
u'ๅ ๅค': u'ไฝๅค',
u'ๅ ๅคง': u'ไฝๅคง',
u'ๅ ๅฅฝ': u'ไฝๅฅฝ',
u'ๅ ๅฐ': u'ไฝๅฐ',
u'ๅ ๅฐ': u'ไฝๅฐ',
u'ๅ ๅฑ้จ': u'ไฝๅฑ้จ',
u'ๅ ๅฑ': u'ไฝๅฑ',
u'ๅ ๅฑฑ': u'ไฝๅฑฑ',
u'ๅ ๅธๅบ': u'ไฝๅธๅ ด',
u'ๅ ๅนณๅ': u'ไฝๅนณๅ',
u'ๅ ๅบ': u'ไฝๅบ',
u'ๅ ๅบง': u'ไฝๅบง',
u'ๅ ๅ': u'ไฝๅพ',
u'ๅ ๅพ': u'ไฝๅพ',
u'ๅ ๅพท': u'ไฝๅพท',
u'ๅ ๆ': u'ไฝๆ',
u'ๅ ๆฎ': u'ไฝๆ',
u'ๅ ๆดไฝ': u'ไฝๆด้ซ',
u'ๅ ๆฐ': u'ไฝๆฐ',
u'ๅ ๆ': u'ไฝๆ',
u'ๅ ๆๆฌฒ': u'ไฝๆๆ
พ',
u'ๅ ไธ': u'ไฝๆฑ',
u'ๅ ๆฅ': u'ไฝๆฅ',
u'ๅ ๆฌก': u'ไฝๆฌก',
u'ๅ ๆฏ': u'ไฝๆฏ',
u'ๅ ๆณ': u'ไฝๆณ',
u'ๅ ๆปก': u'ไฝๆปฟ',
u'ๅ ๆพณ': u'ไฝๆพณ',
u'ๅ ไธบ': u'ไฝ็บ',
u'ๅ ็': u'ไฝ็',
u'ๅ ็จ': u'ไฝ็จ',
u'ๅ ๆฏ': u'ไฝ็ข',
u'ๅ ็พ': u'ไฝ็พ',
u'ๅ ๅฐฝ': u'ไฝ็ก',
u'ๅ ็จณ': u'ไฝ็ฉฉ',
u'ๅ ็ฝ': u'ไฝ็ถฒ',
u'ๅ ็บฟ': u'ไฝ็ท',
u'ๅ ๆป': u'ไฝ็ธฝ',
u'ๅ ็ผบ': u'ไฝ็ผบ',
u'ๅ ็พ': u'ไฝ็พ',
u'ๅ ่': u'ไฝ่',
u'ๅ ่ณๅค': u'ไฝ่ณๅค',
u'ๅ ่ณๅฐ': u'ไฝ่ณๅฐ',
u'ๅ ่ฑ': u'ไฝ่ฑ',
u'ๅ ็': u'ไฝ่',
u'ๅ ่ก': u'ไฝ่ก',
u'ๅ ่': u'ไฝ่',
u'ๅ ่ฅฟ': u'ไฝ่ฅฟ',
u'ๅ ่ตๆบ': u'ไฝ่ณๆบ',
u'ๅ ่ตท': u'ไฝ่ตท',
u'ๅ ่ถ
่ฟ': u'ไฝ่ถ
้',
u'ๅ ่ฟ': u'ไฝ้',
u'ๅ ้': u'ไฝ้',
u'ๅ ้ถ': u'ไฝ้ถ',
u'ๅ ้ ': u'ไฝ้ ',
u'ๅ ้ข': u'ไฝ้ ',
u'ๅ ๅคด': u'ไฝ้ ญ',
u'ๅ ๅคด็ญน': u'ไฝ้ ญ็ฑ',
u'ๅ ้ฅญ': u'ไฝ้ฃฏ',
u'ๅ ้ฆ': u'ไฝ้ฆ',
u'ๅ ้ฉฌ': u'ไฝ้ฆฌ',
u'ๅ ้ซๆๅฟ': u'ไฝ้ซๆๅ
',
u'ๅ ๏ผ': u'ไฝ๏ผ',
u'ๅ ๏ผ': u'ไฝ๏ผ',
u'ๅ ๏ผ': u'ไฝ๏ผ',
u'ๅ ๏ผ': u'ไฝ๏ผ',
u'ๅ ๏ผ': u'ไฝ๏ผ',
u'ๅ ๏ผ': u'ไฝ๏ผ',
u'ๅ ๏ผ': u'ไฝ๏ผ',
u'ๅ ๏ผ': u'ไฝ๏ผ',
u'ๅ ๏ผ': u'ไฝ๏ผ',
u'ๅ ๏ผ': u'ไฝ๏ผ',
u'ๅ ๏ผก': u'ไฝ๏ผก',
u'ๅ ๏ผข': u'ไฝ๏ผข',
u'ๅ ๏ผฃ': u'ไฝ๏ผฃ',
u'ๅ ๏ผค': u'ไฝ๏ผค',
u'ๅ ๏ผฅ': u'ไฝ๏ผฅ',
u'ๅ ๏ผฆ': u'ไฝ๏ผฆ',
u'ๅ ๏ผง': u'ไฝ๏ผง',
u'ๅ ๏ผจ': u'ไฝ๏ผจ',
u'ๅ ๏ผฉ': u'ไฝ๏ผฉ',
u'ๅ ๏ผช': u'ไฝ๏ผช',
u'ๅ ๏ผซ': u'ไฝ๏ผซ',
u'ๅ ๏ผฌ': u'ไฝ๏ผฌ',
u'ๅ ๏ผญ': u'ไฝ๏ผญ',
u'ๅ ๏ผฎ': u'ไฝ๏ผฎ',
u'ๅ ๏ผฏ': u'ไฝ๏ผฏ',
u'ๅ ๏ผฐ': u'ไฝ๏ผฐ',
u'ๅ ๏ผฑ': u'ไฝ๏ผฑ',
u'ๅ ๏ผฒ': u'ไฝ๏ผฒ',
u'ๅ ๏ผณ': u'ไฝ๏ผณ',
u'ๅ ๏ผด': u'ไฝ๏ผด',
u'ๅ ๏ผต': u'ไฝ๏ผต',
u'ๅ ๏ผถ': u'ไฝ๏ผถ',
u'ๅ ๏ผท': u'ไฝ๏ผท',
u'ๅ ๏ผธ': u'ไฝ๏ผธ',
u'ๅ ๏ผน': u'ไฝ๏ผน',
u'ๅ ๏ผบ': u'ไฝ๏ผบ',
u'ๅ ๏ฝ': u'ไฝ๏ฝ',
u'ๅ ๏ฝ': u'ไฝ๏ฝ',
u'ๅ ๏ฝ': u'ไฝ๏ฝ',
u'ๅ ๏ฝ': u'ไฝ๏ฝ',
u'ๅ ๏ฝ
': u'ไฝ๏ฝ
',
u'ๅ ๏ฝ': u'ไฝ๏ฝ',
u'ๅ ๏ฝ': u'ไฝ๏ฝ',
u'ๅ ๏ฝ': u'ไฝ๏ฝ',
u'ๅ ๏ฝ': u'ไฝ๏ฝ',
u'ๅ ๏ฝ': u'ไฝ๏ฝ',
u'ๅ ๏ฝ': u'ไฝ๏ฝ',
u'ๅ ๏ฝ': u'ไฝ๏ฝ',
u'ๅ ๏ฝ': u'ไฝ๏ฝ',
u'ๅ ๏ฝ': u'ไฝ๏ฝ',
u'ๅ ๏ฝ': u'ไฝ๏ฝ',
u'ๅ ๏ฝ': u'ไฝ๏ฝ',
u'ๅ ๏ฝ': u'ไฝ๏ฝ',
u'ๅ ๏ฝ': u'ไฝ๏ฝ',
u'ๅ ๏ฝ': u'ไฝ๏ฝ',
u'ๅ ๏ฝ': u'ไฝ๏ฝ',
u'ๅ ๏ฝ': u'ไฝ๏ฝ',
u'ๅ ๏ฝ': u'ไฝ๏ฝ',
u'ๅ ๏ฝ': u'ไฝ๏ฝ',
u'ๅ ๏ฝ': u'ไฝ๏ฝ',
u'ๅ ๏ฝ': u'ไฝ๏ฝ',
u'ๅ ๏ฝ': u'ไฝ๏ฝ',
u'ไฝไธ่': u'ไฝไธๅ',
u'ไฝไธๅ': u'ไฝไธๅ',
u'ไฝๅ
ไธญ': u'ไฝๅ
ไธญ',
u'ไฝๅ
็': u'ไฝๅ
็',
u'ไฝๅง': u'ไฝๅง',
u'ไฝๅจๅพท': u'ไฝๅจๅพท',
u'ไฝๅญๆ': u'ไฝๅญๆ',
u'ไฝๆๆ': u'ไฝๆๆ',
u'ไฝ็ฝๆฃฑ่จ': u'ไฝ็พ
็จ่ฉ',
u'ไฝ้': u'ไฝ้',
u'ไฝๅ้': u'ไฝๅ่ฃก',
u'ไฝๅฅธ็ฏ็ง': u'ไฝๅงฆ็ฏ็ง',
u'ไฝๅ': u'ไฝๆบ',
u'ไฝๅบ': u'ไฝ่',
u'ไฝ ๅ
ๅถ': u'ไฝ ๅๅถ',
u'ไฝ ๆไบ่': u'ไฝ ๆไบ่ฝ',
u'ไฝ ๆๅญๅๆ': u'ไฝ ็บๅญ็ผๆ',
u'ไฝฃ้ๆถ็': u'ไฝฃ้ๆถ็',
u'ไฝฃ้่ดน็จ': u'ไฝฃ้่ฒป็จ',
u'ไฝณ่ด': u'ไฝณ่ด',
u'ไฝณ้้ฎ': u'ไฝณ้้ฎ',
u'ๅนถไธไธไบ': u'ไฝตไธไธไบ',
u'ๅนถๅ
ฅ': u'ไฝตๅ
ฅ',
u'ๅนถๅ
ผ': u'ไฝตๅ
ผ',
u'ๅนถๅฐ': u'ไฝตๅฐ',
u'ๅนถๅ': u'ไฝตๅ',
u'ๅนถๅ': u'ไฝตๅ',
u'ๅนถๅไธ': u'ไฝตๅไธ',
u'ๅนถๆข': u'ไฝตๆ',
u'ๅนถๆก': u'ไฝตๆก',
u'ๅนถๆต': u'ไฝตๆต',
u'ๅนถ็ซ': u'ไฝต็ซ',
u'ๅนถไธบไธๅฎถ': u'ไฝต็บไธๅฎถ',
u'ๅนถไธบไธไฝ': u'ไฝต็บไธ้ซ',
u'ๅนถไบง': u'ไฝต็ข',
u'ๅนถๅฝ': u'ไฝต็ถ',
u'ๅนถๅ ': u'ไฝต็',
u'ๅนถๅ': u'ไฝต็ผ',
u'ๅนถ็ง': u'ไฝต็ง',
u'ๅนถ็ฝ': u'ไฝต็ถฒ',
u'ๅนถ็บฟ': u'ไฝต็ท',
u'ๅนถ่ฉๅญ': u'ไฝต่ฉๅญ',
u'ๅนถ่ดญ': u'ไฝต่ณผ',
u'ๅนถ้ค': u'ไฝต้ค',
u'ๅนถ้ชจ': u'ไฝต้ชจ',
u'ไฝฟๅ
ถๆ': u'ไฝฟๅ
ถ้ฌฅ',
u'ๆฅไบ': u'ไพๆผ',
u'ๆฅๅค': u'ไพ่ค',
u'ไพไป': u'ไพๅ',
u'ไพๅถ': u'ไพ่ฃฝ',
u'ไพไพไธ่': u'ไพไพไธๆจ',
u'ไพๆ': u'ไพ่จ',
u'ไพตๅ ': u'ไพตไฝ',
u'ไพตๅนถ': u'ไพตไฝต',
u'ไพตๅ ๅฐ': u'ไพตๅ ๅฐ',
u'ไพตๅ ็ฝช': u'ไพตๅ ็ฝช',
u'ไพฟ่ฏ': u'ไพฟ่ฅ',
u'็ณปๆฐ': u'ไฟๆธ',
u'็ณปไธบ': u'ไฟ็บ',
u'ไฟๅ ': u'ไฟไฝ',
u'ไฟ้ฉๆ': u'ไฟ้ชๆ',
u'ไฟกๆ่ดธๆ': u'ไฟกๆ่ฒฟๆ',
u'ไฟกๆ': u'ไฟก่จ',
u'ไฟฎๆฐๆฅท': u'ไฟฎๆฐๆฅท',
u'ไฟฎ็ผ': u'ไฟฎ้',
u'ไฟฎ่กๅ': u'ไฟฎ้ฌๅ',
u'ไฟฏๅฒ': u'ไฟฏ่ก',
u'ไธชไบบ': u'ๅไบบ',
u'ไธช้': u'ๅ่ฃก',
u'ไธช้': u'ๅ้',
u'ไธช้่กจ': u'ๅ้้ถ',
u'ไปฌๅ
ๅถ': u'ๅๅๅถ',
u'ไปฌๆไบ่': u'ๅๆไบ่ฝ',
u'ๅ็ปทๅญฉๅฟ': u'ๅ็นๅญฉๅ
',
u'ๅนธๅ
': u'ๅๅ
',
u'ๅนธๅญ': u'ๅๅญ',
u'ๅนธๅนธ': u'ๅๅนธ',
u'ๅไธ': u'ๅ้',
u'ๅๅฌไบ่': u'ๅ่ฝๆผ่พ',
u'ๅฆๆธธ': u'ๅฆ้',
u'ๅ่ฏ': u'ๅ่ฅ',
u'ๅๆ': u'ๅ่จ',
u'ๅๅ': u'ๅ้ซฎ',
u'ๅๅนฒ': u'ๅไนพ',
u'ๅๅบ': u'ๅ่',
u'ๅๅๅฝๅฝ': u'ๅๅ็ถ็ถ',
u'ๅๅพ': u'ๅๅพต',
u'ๅๅถ': u'ๅ่ฃฝ',
u'ๅท้ธกไธ็': u'ๅท้ไธ่',
u'ไผช่ฏ': u'ๅฝ่ฅ',
u'ๅคๆณจ': u'ๅ่จป',
u'ๅฎถไผ': u'ๅขไผ',
u'ๅฎถไฟฑ': u'ๅขไฟฑ',
u'ๅฎถๅ
ท': u'ๅขๅ
ท',
u'ๅฌๅนถ': u'ๅฌไฝต',
u'ไฝฃไธญไฝผไฝผ': u'ๅญไธญไฝผไฝผ',
u'ไฝฃไบบ': u'ๅญไบบ',
u'ไฝฃไป': u'ๅญๅ',
u'ไฝฃๅ
ต': u'ๅญๅ
ต',
u'ไฝฃๅทฅ': u'ๅญๅทฅ',
u'ไฝฃๆ': u'ๅญๆถ',
u'ไฝฃไนฆ': u'ๅญๆธ',
u'ไฝฃ้': u'ๅญ้',
u'ๅฒ้ๆ้ช': u'ๅฒ้้ฌฅ้ช',
u'ไผ ไฝไบๅๅคชๅญ': u'ๅณไฝไบๅๅคชๅญ',
u'ไผ ไบ': u'ๅณๆผ',
u'ไผค็็ดฏ็ดฏ': u'ๅท็็บ็บ',
u'ๅป้ๅปๆฐ': u'ๅป่ฃกๅปๆฐฃ',
u'ๅพๅค': u'ๅพ่ค',
u'ไปไบบ': u'ๅไบบ',
u'ไปไฝฟ': u'ๅไฝฟ',
u'ไปไป': u'ๅๅ',
u'ไปๅฎ': u'ๅๅฎ',
u'ไปๅ': u'ๅๅ',
u'ไปๅบๆๆฉ': u'ๅๅบๆทๆฉ',
u'ไปๅคซ': u'ๅๅคซ',
u'ไปๅง': u'ๅๅง',
u'ไปๅฉข': u'ๅๅฉข',
u'ไปๅฆ': u'ๅๅฉฆ',
u'ไปๅฐ': u'ๅๅฐ',
u'ไปๅฐ': u'ๅๅฐ',
u'ไปๅฝน': u'ๅๅฝน',
u'ไปไป': u'ๅๅพ',
u'ไปๆ': u'ๅๆ',
u'ไปๆฌง': u'ๅๆญ',
u'ไป็จ': u'ๅ็จ',
u'ไป่ฝ็ฝข้ฉฝ': u'ๅ้็ฝท้ง',
u'ไพฅๅนธ': u'ๅฅๅ',
u'ๅฎไป': u'ๅฎๅ',
u'้ไธป': u'ๅฑไธป',
u'้ไบบ': u'ๅฑไบบ',
u'้ไฝฃ': u'ๅฑๅญ',
u'้ๅฐ': u'ๅฑๅฐ',
u'้ๅ': u'ๅฑๅก',
u'้ๅทฅ': u'ๅฑๅทฅ',
u'้็จ': u'ๅฑ็จ',
u'้ๅ': u'ๅฑ่พฒ',
u'ไปช่': u'ๅ็ฏ',
u'ไปช่กจ': u'ๅ้ถ',
u'ไบฟไธช': u'ๅๅ',
u'ไบฟๅคๅช': u'ๅๅค้ป',
u'ไบฟๅคฉๅ': u'ๅๅคฉๅพ',
u'ไบฟๅช': u'ๅ้ป',
u'ไบฟไฝ': u'ๅ้ค',
u'ไฟญไป': u'ๅๅ',
u'ไฟญๆด': u'ๅๆจธ',
u'ไฟญ็กฎไนๆ': u'ๅ็กฎไนๆ',
u'ๅ็ฅๆน้ฉๅ': u'ๅ็ฅๆน้ฉๆ',
u'ๅ็ฅๆน้ฉๅๅฒ': u'ๅ็ฅๆน้ฉๆญทๅฒ',
u'ๅ็ฅๅ': u'ๅ็ฅๆ',
u'ๅ็ฅๅๅฒ': u'ๅ็ฅๆญทๅฒ',
u'ๅฐฝๅฐฝ': u'ๅๅ',
u'ๅฐฝๅ
': u'ๅๅ
',
u'ๅฐฝๅ
ถๆๆ': u'ๅๅ
ถๆๆ',
u'ๅฐฝๅ': u'ๅๅ',
u'ๅฐฝๅฏ่ฝ': u'ๅๅฏ่ฝ',
u'ๅฐฝๅฟซ': u'ๅๅฟซ',
u'ๅฐฝๆฉ': u'ๅๆฉ',
u'ๅฐฝๆฏ': u'ๅๆฏ',
u'ๅฐฝ็ฎก': u'ๅ็ฎก',
u'ๅฐฝ้': u'ๅ้',
u'ไผไบ': u'ๅชๆผ',
u'ไผๆธธ': u'ๅช้',
u'ๅ
ๆฏ': u'ๅ
ๆฎ',
u'ๅ
ๅถ': u'ๅ
ๅ
',
u'ๅ
้ฅฅ': u'ๅ
้ฅ',
u'ๅ
ไธช': u'ๅ
ๅ',
u'ๅ
ไฝ': u'ๅ
้ค',
u'ๅถๅ': u'ๅ
ๅ',
u'ๅถๅจ': u'ๅ
ๅจ',
u'ๅถๅซ': u'ๅ
ๅซ',
u'ๅถๅทดๅทด': u'ๅ
ๅทดๅทด',
u'ๅถๅพ': u'ๅ
ๅพ',
u'ๅถๆ': u'ๅ
ๆ',
u'ๅถๆถ': u'ๅ
ๆก',
u'ๅถๆ': u'ๅ
ๆ',
u'ๅถๆก': u'ๅ
ๆก',
u'ๅถๆช': u'ๅ
ๆง',
u'ๅถๆจช': u'ๅ
ๆฉซ',
u'ๅถๆฎ': u'ๅ
ๆฎ',
u'ๅถๆฎ': u'ๅ
ๆฎ',
u'ๅถๆฎบ': u'ๅ
ๆฎบ',
u'ๅถๆ': u'ๅ
ๆฎบ',
u'ๅถ็ฏ': u'ๅ
็ฏ',
u'ๅถ็ ': u'ๅ
็ ',
u'ๅถ็': u'ๅ
็',
u'ๅถ็': u'ๅ
็',
u'ๅถ็ธ': u'ๅ
็ธ',
u'ๅถ้ฉ': u'ๅ
้ช',
u'ๅ
ๅ ': u'ๅ
ไฝ',
u'ๅ
้': u'ๅ
ๆก',
u'ๅ
่ด่ด': u'ๅ
็ทป็ทป',
u'ๅ
่ฏ': u'ๅ
่ฅ',
u'ๅ
ๅค': u'ๅ
่ค',
u'ๅ
ๅพ': u'ๅ
ๅพต',
u'ๅ
ๅ': u'ๅ
ๅ',
u'ๅ
ๅคชๅฐ': u'ๅ
ๅคชๅฐ',
u'ๅ
ๆ่ฑ': u'ๅ
ๆท่ฑ',
u'ๅ
่ฟ': u'ๅ
้ฒ',
u'ๅ
้
': u'ๅ
้
',
u'ๅ
้กน': u'ๅ
้
',
u'ๅ
ๅถ': u'ๅ
ง่ฃฝ',
u'ๅ
้ขๅ
': u'ๅ
ง้ขๅ
',
u'ๅ
้ขๅ
็': u'ๅ
ง้ขๅ
็',
u'ๅ
ๆ': u'ๅ
ง้ฌฅ',
u'ๅ
ๅ': u'ๅ
ง้ฌจ',
u'ๅ
จๅนฒ': u'ๅ
จไนพ',
u'ๅ
จ้ขๅ
ๅด': u'ๅ
จ้ขๅ
ๅ',
u'ๅ
จ้ขๅ
่ฃน': u'ๅ
จ้ขๅ
่ฃน',
u'ไธคไธช': u'ๅ
ฉๅ',
u'ไธคๅคฉๅ': u'ๅ
ฉๅคฉๅพ',
u'ไธคๅคฉๆ็ฝ': u'ๅ
ฉๅคฉๆ็ถฒ',
u'ไธคๆ': u'ๅ
ฉ็ดฎ',
u'ไธค่ๅ
ฑๆ': u'ๅ
ฉ่ๅ
ฑ้ฌฅ',
u'ไธคๅช': u'ๅ
ฉ้ป',
u'ไธคไฝ': u'ๅ
ฉ้ค',
u'ไธค้ผ ๆ็ฉด': u'ๅ
ฉ้ผ ้ฌฅ็ฉด',
u'ๅ
ซไธช': u'ๅ
ซๅ',
u'ๅ
ซๅบๅ': u'ๅ
ซๅบๅ',
u'ๅ
ซๅบๅฃ': u'ๅ
ซๅบๅฃ',
u'ๅ
ซๅบ็': u'ๅ
ซๅบ็',
u'ๅ
ซๅบ็': u'ๅ
ซๅบ็',
u'ๅ
ซๅบ็ฅๅฑฑ': u'ๅ
ซๅบ็ฅๅฑฑ',
u'ๅ
ซๅบ้': u'ๅ
ซๅบ้',
u'ๅ
ซๅคง่กๅ': u'ๅ
ซๅคง่กๅ',
u'ๅ
ซๅคฉๅ': u'ๅ
ซๅคฉๅพ',
u'ๅ
ซๅญ่ก': u'ๅ
ซๅญ้ฌ',
u'ๅ
ซๆ': u'ๅ
ซ็ดฎ',
u'ๅ
ซ่ก': u'ๅ
ซ่ก',
u'ๅ
ซๅช': u'ๅ
ซ้ป',
u'ๅ
ซไฝ': u'ๅ
ซ้ค',
u'ๅ
ฌไป้ข': u'ๅ
ฌไป้บต',
u'ๅ
ฌไป': u'ๅ
ฌๅ',
u'ๅ
ฌๅญไธ': u'ๅ
ฌๅญซไธ',
u'ๅ
ฌๅนฒ': u'ๅ
ฌๅนน',
u'ๅ
ฌๅ': u'ๅ
ฌๆ',
u'ๅ
ฌๅๅฒ': u'ๅ
ฌๆญทๅฒ',
u'ๅ
ฌๅ': u'ๅ
ฌ้',
u'ๅ
ฌไฝ': u'ๅ
ฌ้ค',
u'ๅ
ญไธช': u'ๅ
ญๅ',
u'ๅ
ญๅบๅ': u'ๅ
ญๅบๅ',
u'ๅ
ญๅบๅฃ': u'ๅ
ญๅบๅฃ',
u'ๅ
ญๅบ็': u'ๅ
ญๅบ็',
u'ๅ
ญๅบ็': u'ๅ
ญๅบ็',
u'ๅ
ญๅบ็ฅๅฑฑ': u'ๅ
ญๅบ็ฅๅฑฑ',
u'ๅ
ญๅบ้': u'ๅ
ญๅบ้',
u'ๅ
ญๅ': u'ๅ
ญๅ',
u'ๅ
ญๅคฉๅ': u'ๅ
ญๅคฉๅพ',
u'ๅ
ญ่ฐท': u'ๅ
ญ็ฉ',
u'ๅ
ญๆ': u'ๅ
ญ็ดฎ',
u'ๅ
ญๅฒ': u'ๅ
ญ่ก',
u'ๅ
ญๅช': u'ๅ
ญ้ป',
u'ๅ
ญไฝ': u'ๅ
ญ้ค',
u'ๅ
ญๅบ': u'ๅ
ญ้ฝฃ',
u'ๅ
ฑๅๅ': u'ๅ
ฑๅๆ',
u'ๅ
ฑๅๅๅฒ': u'ๅ
ฑๅๆญทๅฒ',
u'ๅ
ถไธๅช': u'ๅ
ถไธๅช',
u'ๅ
ถไบๅช': u'ๅ
ถไบๅช',
u'ๅ
ถๅ
ซไนๅช': u'ๅ
ถๅ
ซไนๅช',
u'ๅ
ถๆฌก่พๅฐ': u'ๅ
ถๆฌก่พๅฐ',
u'ๅ
ถไฝ': u'ๅ
ถ้ค',
u'ๅ
ธ่': u'ๅ
ธ็ฏ',
u'ๅ
ผๅนถ': u'ๅ
ผๅนถ',
u'ๅๆไป': u'ๅๆๅ',
u'ๅไฝ': u'ๅ้ค',
u'ๅคไป': u'ๅค่ฎ',
u'ๅฅ่': u'ๅฅๆฟ',
u'ๅฌๅคฉ้': u'ๅฌๅคฉ่ฃก',
u'ๅฌๅฑฑๅบ': u'ๅฌๅฑฑๅบ',
u'ๅฌๆฅ้': u'ๅฌๆฅ่ฃก',
u'ๅฌๆธธ': u'ๅฌ้',
u'ๅถๆธธ': u'ๅถ้',
u'ๅทๅบๅญ': u'ๅท่ๅญ',
u'ๅท้ข็ธ': u'ๅท้ข็ธ',
u'ๅท้ข': u'ๅท้บต',
u'ๅไธๅ': u'ๅไธๅ',
u'ๅไธๅไป': u'ๅไธๅไป',
u'ๅไธๅไฝ ': u'ๅไธๅไฝ ',
u'ๅไธๅๅฅน': u'ๅไธๅๅฅน',
u'ๅไธๅๅฎ': u'ๅไธๅๅฎ',
u'ๅไธๅๆ': u'ๅไธๅๆ',
u'ๅไธๅ่ฎธ': u'ๅไธๅ่จฑ',
u'ๅไธๅ่ฐ': u'ๅไธๅ่ชฐ',
u'ๅไฟๆค': u'ๅไฟ่ญท',
u'ๅไฟ้': u'ๅไฟ้',
u'ๅ่ๅ': u'ๅๆฟๅ',
u'ๅ็ผ': u'ๅ้',
u'ๅ ไธ': u'ๅ ไธ',
u'ๅ ๅ ': u'ๅ ๅ ',
u'ๅ ๅณ': u'ๅ ๅณ',
u'ๅ ๅญ': u'ๅ ๅญ',
u'ๅ ๆ': u'ๅ ๆ',
u'ๅ ๆ': u'ๅ ๆ',
u'ๅ ๆก': u'ๅ ๆก',
u'ๅ ๆค
': u'ๅ ๆค
',
u'ๅ ๆฆป': u'ๅ ๆฆป',
u'ๅ ๅ็ชๆ': u'ๅ ๆทจ็ชๆ',
u'ๅ ็ญต': u'ๅ ็ญต',
u'ๅ ไธ': u'ๅ ็ตฒ',
u'ๅ ้ขไธ': u'ๅ ้ขไธ',
u'ๅถๆๆก': u'ๅถๆฎบๆก',
u'ๅถ็ธๆฏ้ฒ': u'ๅถ็ธ็ข้ฒ',
u'ๅนๆด้': u'ๅนๆด่ฃก',
u'ๅบไนๅผไธ': u'ๅบไนๅผ้',
u'ๅบไน้ฒไธ': u'ๅบไน้ฒ้',
u'ๅบๅพๆถ': u'ๅบๅพๆถ',
u'ๅบไบ': u'ๅบๆผ',
u'ๅบ่ฐๅ็ญ': u'ๅบ่ฌๅ็ญ',
u'ๅบๆธธ': u'ๅบ้',
u'ๅบไธ': u'ๅบ้',
u'ๅบ้ค': u'ๅบ้',
u'ๅๅ ': u'ๅไฝ',
u'ๅๅซ่ด': u'ๅๅซ่ด',
u'ๅๅ้': u'ๅๅ้',
u'ๅๅค้': u'ๅๅค้',
u'ๅๅญ้': u'ๅๅญ้',
u'ๅๅธๅ': u'ๅๅธๅ',
u'ๅๅธๅพ': u'ๅๅธๅ',
u'ๅๅธไบ': u'ๅๅธๆผ',
u'ๅๆฃไบ': u'ๅๆฃๆผ',
u'ๅ้': u'ๅ้',
u'ๅไฝ': u'ๅ้ค',
u'ๅไธๆกจ': u'ๅไธๆงณ',
u'ๅไบไธไผ': u'ๅไบไธๆ',
u'ๅๆฅๅๅป': u'ๅไพๅๅป',
u'ๅๅฐๅฒธ': u'ๅๅฐๅฒธ',
u'ๅๅฐๆฑๅฟ': u'ๅๅฐๆฑๅฟ',
u'ๅๅพๆฅ': u'ๅๅพไพ',
u'ๅ็': u'ๅ่',
u'ๅ็่ตฐ': u'ๅ่่ตฐ',
u'ๅ้พ่': u'ๅ้พ่',
u'ๅคๆญๅ': u'ๅคๆท็ผ',
u'ๅซๆฅๅ้ธฟๆๅๅป': u'ๅฅๆฅๅ้ดป็บๅๅป',
u'ๅซ่ด': u'ๅฅ็ทป',
u'ๅซๅบ': u'ๅฅ่',
u'ๅซ็': u'ๅฅ่',
u'ๅซ่พ': u'ๅฅ้ข',
u'ๅฉๆฌฒ': u'ๅฉๆ
พ',
u'ๅฉไบ': u'ๅฉๆผ',
u'ๅฉๆฌฒ็ๅฟ': u'ๅฉๆฌฒ็ๅฟ',
u'ๅฎๆฅๅฎๅป': u'ๅฎไพๅฎๅป',
u'ๅฎ็': u'ๅฎ่',
u'ๅฎ่ตทๆฅ': u'ๅฎ่ตทไพ',
u'ๅฎ้ฃไธ้ชๅไพฟๅฎ': u'ๅฎ้ขจไธ้ชๅไพฟๅฎ',
u'ๅฎ่ก': u'ๅฎ้ฌ',
u'ๅถๅทๆบ': u'ๅถๅทๆฉ',
u'ๅถ็ญพ': u'ๅถ็ฑค',
u'ๅถ้': u'ๅถ้',
u'ๅบ็ปฃ': u'ๅบ็นก',
u'ๅปๅ': u'ๅปๅ',
u'ๅปๅ้': u'ๅปๅ้',
u'ๅปๅค้': u'ๅปๅค้',
u'ๅป้': u'ๅป้',
u'ๅๅ': u'ๅ้ซฎ',
u'ๅ่ก': u'ๅ้ฌ',
u'ๅ้กป': u'ๅ้ฌ',
u'ๅๅ': u'ๅ้ซฎ',
u'ๅ้ข': u'ๅ้บต',
u'ๅ
ๅถไธไบ': u'ๅๅถไธไบ',
u'ๅ
ๅถไธไฝ': u'ๅๅถไธไฝ',
u'ๅ
ๆฃ': u'ๅๆฃ',
u'ๅ
ๆ': u'ๅๆ',
u'ๅ
ๆ': u'ๅๆ',
u'ๅ
ๆญป': u'ๅๆญป',
u'ๅ
่': u'ๅ่',
u'ๅ่จไธ็ญๅ่ฏญ': u'ๅ่จไธ็ญๅพ่ช',
u'ๅ้ขๅบ': u'ๅ้ขๅบ',
u'ๅๅบ่ดง': u'ๅ่่ฒจ',
u'ๅๅนฒ': u'ๅไนพ',
u'ๅ้': u'ๅๅฑ',
u'ๅๆไธ่ฝฝ': u'ๅ็บไธ่ผ',
u'ๅฅๅถ': u'ๅ่ฃฝ',
u'ๅฉไฝ': u'ๅฉ้ค',
u'ๅชๅ
ถๅ': u'ๅชๅ
ถ้ซฎ',
u'ๅช็กไธนๅ็': u'ๅช็กไธนๅ็',
u'ๅชๅฝฉ': u'ๅช็ถต',
u'ๅชๅ': u'ๅช้ซฎ',
u'ๅฒ่': u'ๅฒๆจ',
u'ๅ่ท': u'ๅต็ฉซ',
u'ๅๅถ': u'ๅต่ฃฝ',
u'้ฒๅบ': u'ๅทๅบ',
u'้ฒๅ': u'ๅทๅ',
u'้ฒๅนณ': u'ๅทๅนณ',
u'้ฒ้ค': u'ๅท้ค',
u'้ฒๅคด': u'ๅท้ ญ',
u'ๅไธ': u'ๅไธ',
u'ๅไธ': u'ๅไธ',
u'ๅไธ': u'ๅไธ',
u'ๅไบ': u'ๅไบ',
u'ๅๅ
ฅ': u'ๅๅ
ฅ',
u'ๅๅบ': u'ๅๅบ',
u'ๅๅ': u'ๅๅ',
u'ๅๅฐ': u'ๅๅฐ',
u'ๅๅ': u'ๅๅ',
u'ๅๅป': u'ๅๅป',
u'ๅๅจ': u'ๅๅจ',
u'ๅๅฐ': u'ๅๅฐ',
u'ๅๅฎ': u'ๅๅฎ',
u'ๅๅพ': u'ๅๅพ',
u'ๅๆ': u'ๅๆ',
u'ๅๆ': u'ๅๆ',
u'ๅๆจ': u'ๅๆฅ',
u'ๅๆถไปฃ': u'ๅๆไปฃ',
u'ๅๆฌพ': u'ๅๆฌพ',
u'ๅๅฝ': u'ๅๆญธ',
u'ๅๆณ': u'ๅๆณ',
u'ๅๆธ
': u'ๅๆธ
',
u'ๅไธบ': u'ๅ็บ',
u'ๅ็': u'ๅ็',
u'ๅ็ ด': u'ๅ็ ด',
u'ๅ็บฟ': u'ๅ็ท',
u'ๅ่ถณ': u'ๅ่ถณ',
u'ๅ่ฟ': u'ๅ้',
u'ๅๅผ': u'ๅ้',
u'ๅง่ฏ': u'ๅ่ฅ',
u'ๅๅ
ๅบ': u'ๅๅ
่',
u'ๅๅ
ๅถ': u'ๅๅๅถ',
u'ๅๆผ': u'ๅๆ',
u'ๅๆผไผๆ': u'ๅๆผ็พๆต',
u'ๅๆฑๅ
ๅถ': u'ๅๆฑๅๅถ',
u'ๅไบไธๆธธ': u'ๅ็ญไธ้',
u'ๅ่ด': u'ๅ็ทป',
u'ๅ ๆฐข็ฒพๅถ': u'ๅ ๆฐซ็ฒพๅถ',
u'ๅ ่ฏ': u'ๅ ่ฅ',
u'ๅ ๆณจ': u'ๅ ่จป',
u'ๅฃไบ': u'ๅฃๆผ',
u'ๅฉไบ': u'ๅฉๆผ',
u'ๅซไฝ': u'ๅซ้ค',
u'ๅ้': u'ๅ้ฌฑ',
u'ๅจ่ก': u'ๅ่ฉ',
u'่ไบ': u'ๅๆผ',
u'ๅณๅๅฃซ่กจ': u'ๅๅๅฃซ้ถ',
u'ๅคไป': u'ๅคๅ',
u'ๅคๆด': u'ๅคๆจธ',
u'ๅ็ซ ': u'ๅณ็ซ ',
u'ๅบ่ฏ': u'ๅบ่ฅ',
u'ๅพๅนฒ': u'ๅพๅนน',
u'ๅพๅฟๆ่ง': u'ๅพๅฟ้ฌฅ่ง',
u'ๅพ้ญ่ก้ญ': u'ๅพ้ญ่ฉ้ญ',
u'ๅ
ๆฌ': u'ๅ
ๆฌ',
u'ๅ
ๅ': u'ๅ
ๆบ',
u'ๅ
่ฐท': u'ๅ
็ฉ',
u'ๅ
ๆ': u'ๅ
็ดฎ',
u'ๅ
ๅบ': u'ๅ
่',
u'ๅ็ณป': u'ๅ็นซ',
u'ๅๅฒณ': u'ๅๅถฝ',
u'ๅๅ็บฟ': u'ๅ่ฟด็ท',
u'ๅๅ้่ทฏ': u'ๅ่ฟด้ต่ทฏ',
u'ๅกๅค': u'ๅก่ค',
u'ๅชๅนฒ': u'ๅชๅนน',
u'ๅฟไบ': u'ๅฟๆผ',
u'ๅบๅ': u'ๅๅ',
u'ๅไธช': u'ๅๅ',
u'ๅๅบๅ': u'ๅๅบๅ',
u'ๅๅบๅฃ': u'ๅๅบๅฃ',
u'ๅๅบ็': u'ๅๅบ็',
u'ๅๅบ็': u'ๅๅบ็',
u'ๅๅบ็ฅๅฑฑ': u'ๅๅบ็ฅๅฑฑ',
u'ๅๅบ้': u'ๅๅบ้',
u'ๅๅ': u'ๅๅ',
u'ๅๅคๅช': u'ๅๅค้ป',
u'ๅๅคฉๅ': u'ๅๅคฉๅพ',
u'ๅๆ': u'ๅ็ดฎ',
u'ๅๅช': u'ๅ้ป',
u'ๅไฝ': u'ๅ้ค',
u'ๅๅบ': u'ๅ้ฝฃ',
u'ๅไธช': u'ๅๅ',
u'ๅๅชๅฏ': u'ๅๅชๅฏ',
u'ๅๅชๅค': u'ๅๅชๅค ',
u'ๅๅชๆ': u'ๅๅชๆ',
u'ๅๅช่ฝ': u'ๅๅช่ฝ',
u'ๅๅช่ถณๅค': u'ๅๅช่ถณๅค ',
u'ๅๅคๅช': u'ๅๅค้ป',
u'ๅๅคฉๅ': u'ๅๅคฉๅพ',
u'ๅๆ': u'ๅ็ดฎ',
u'ๅไธไธ็ผ': u'ๅ็ตฒ่ฌ็ธท',
u'ๅๅ็พๆ': u'ๅ่ฟด็พๆ',
u'ๅๅ็พ่ฝฌ': u'ๅ่ฟด็พ่ฝ',
u'ๅ้งไธๅ': u'ๅ้ไธ้ซฎ',
u'ๅๅช': u'ๅ้ป',
u'ๅไฝ': u'ๅ้ค',
u'ๅๅฎๅ่ดข': u'ๅๅฎ็ผ่ฒก',
u'ๅๅถๅ': u'ๅๅถๅ',
u'ๅๅชๅฏ': u'ๅๅชๅฏ',
u'ๅๅชๅค': u'ๅๅชๅค ',
u'ๅไบ': u'ๅๆผ',
u'ๅๅช': u'ๅ้ป',
u'ๅไบฌ้': u'ๅไบฌ้',
u'ๅไบฌ้่กจ': u'ๅไบฌ้้ถ',
u'ๅๅฎซ้': u'ๅๅฎฎ้',
u'ๅๅฑๆ้': u'ๅๅฑๆ้',
u'ๅๅฒณ': u'ๅๅถฝ',
u'ๅ็ญ': u'ๅ็ญ',
u'ๅๅ็บฟ': u'ๅ่ฟด็ท',
u'ๅๅ้่ทฏ': u'ๅ่ฟด้ต่ทฏ',
u'ๅๆธธ': u'ๅ้',
u'ๅๆฑ': u'ๅๅฝ',
u'ๅ้': u'ๅๆก',
u'ๅๅบ': u'ๅ่',
u'ๅๅบๅญ': u'ๅ่ๅญ',
u'ๅ ไบๅ': u'ๅ ไบๅ',
u'ๅ ไพฟๅฎ็ๆฏๅ': u'ๅ ไพฟๅฎ็ๆฏ็',
u'ๅ ๅ': u'ๅ ๅ',
u'ๅ ๅคๆฐ': u'ๅ ๅคๆธ',
u'ๅ ๆไบไธ้ช': u'ๅ ๆไบไธ้ฉ',
u'ๅ ๆๆ': u'ๅ ๆๆฌ',
u'ๅฐ็ดฏ็ปถ่ฅ': u'ๅฐ็บ็ถฌ่ฅ',
u'ๅฐๅถ': u'ๅฐ่ฃฝ',
u'ๅฑไบ': u'ๅฑๆผ',
u'ๅตไธ็ณๆ': u'ๅต่็ณ้ฌฅ',
u'ๅท้กป': u'ๅท้ฌ',
u'ๅ้จ': u'ๅ้จ',
u'ๅ่ชไบ็ซ': u'ๅ่ชๆผ็ซ',
u'ๅๅญ้': u'ๅๅญ้',
u'ๅ้': u'ๅ้',
u'ๅ็ฉไนๆ': u'ๅค็ฉไนๆ',
u'ๅๅ': u'ๅๅ',
u'ๅ่ไปทๅผ': u'ๅ่ๅนๅผ',
u'ๅไธ': u'ๅ่',
u'ๅไธไบบๅ': u'ๅ่ไบบๅก',
u'ๅไธๅถ': u'ๅ่ๅถ',
u'ๅไธๆ': u'ๅ่ๆ',
u'ๅไธ่
': u'ๅ่่
',
u'ๅ่งๅข': u'ๅ่งๅ',
u'ๅ่งๅขไฝ': u'ๅ่งๅ้ซ',
u'ๅ้
': u'ๅ้ฑ',
u'ๅๆด': u'ๅๆจธ',
u'ๅๅฒ': u'ๅ่ก',
u'ๅๅคๅถ': u'ๅ่ค่ฃฝ',
u'ๅๅค': u'ๅ่ฆ',
u'ๅ่ฆ': u'ๅ่ฆ',
u'ๅ่': u'ๅๆจ',
u'ๅๆ': u'ๅ่จ',
u'ๅฃๅนฒ': u'ๅฃไนพ',
u'ๅฃๅนฒๅ': u'ๅฃๅนฒๅ',
u'ๅฃๅนฒๆฟ': u'ๅฃๅนฒๆฟ',
u'ๅฃๅนฒๆถ': u'ๅฃๅนฒๆถ',
u'ๅฃๅนฒ็ฏ': u'ๅฃๅนฒ็ฏ',
u'ๅฃๅนฒ้ข': u'ๅฃๅนฒ้ ',
u'ๅฃ็ฅๅๅนฒ': u'ๅฃ็ฅๅไนพ',
u'ๅฃ่
นไนๆฌฒ': u'ๅฃ่
นไนๆ
พ',
u'ๅฃ้': u'ๅฃ่ฃก',
u'ๅฃ้': u'ๅฃ้',
u'ๅคไนฆไบ': u'ๅคๆธไบ',
u'ๅคๆธไบ': u'ๅคๆธไบ',
u'ๅคๆฏๅธ': u'ๅคๆฏ้นน',
u'ๅคๆด': u'ๅคๆจธ',
u'ๅค่ฏญไบ': u'ๅค่ชไบ',
u'ๅค่ชไบ': u'ๅค่ชไบ',
u'ๅค่ฟน': u'ๅค่ฟน',
u'ๅค้': u'ๅค้',
u'ๅค้่กจ': u'ๅค้้ถ',
u'ๅฆ่พ': u'ๅฆ้ข',
u'ๅฉ้': u'ๅฉ้',
u'ๅชๅ ': u'ๅชไฝ',
u'ๅชๅ ๅ': u'ๅชๅ ๅ',
u'ๅชๅ ๅ': u'ๅชๅ ๅ',
u'ๅชๅ ็ฅ้ฎๅ': u'ๅชๅ ็ฅๅๅ',
u'ๅชๅ ็ฎ': u'ๅชๅ ็ฎ',
u'ๅช้': u'ๅชๆก',
u'ๅชๅฒ': u'ๅช่ก',
u'ๅช่บซไธๅทฒ': u'ๅช่บซไธๅทฒ',
u'ๅช่บซไธๆ': u'ๅช่บซไธๆ',
u'ๅช่บซไธๆฒก': u'ๅช่บซไธๆฒ',
u'ๅช่บซไธๆ ': u'ๅช่บซไธ็ก',
u'ๅช่บซไธ็': u'ๅช่บซไธ็',
u'ๅช่บซไธ': u'ๅช่บซไธ',
u'ๅช่บซไปฝ': u'ๅช่บซไปฝ',
u'ๅช่บซๅ': u'ๅช่บซๅ',
u'ๅช่บซๅ': u'ๅช่บซๅ',
u'ๅช่บซๅญ': u'ๅช่บซๅญ',
u'ๅช่บซๅฝข': u'ๅช่บซๅฝข',
u'ๅช่บซๅฝฑ': u'ๅช่บซๅฝฑ',
u'ๅช่บซๅ': u'ๅช่บซๅพ',
u'ๅช่บซๅฟ': u'ๅช่บซๅฟ',
u'ๅช่บซๆ': u'ๅช่บซๆ',
u'ๅช่บซๆ': u'ๅช่บซๆ',
u'ๅช่บซๆฎต': u'ๅช่บซๆฎต',
u'ๅช่บซไธบ': u'ๅช่บซ็บ',
u'ๅช่บซ่พน': u'ๅช่บซ้',
u'ๅช่บซ้ฆ': u'ๅช่บซ้ฆ',
u'ๅช่บซไฝ': u'ๅช่บซ้ซ',
u'ๅช่บซ้ซ': u'ๅช่บซ้ซ',
u'ๅช้ๅฃฐ': u'ๅช้่ฒ',
u'ๅฎๅฎๅฝๅฝ': u'ๅฎๅฎๅนๅน',
u'ๅฎๅฝ': u'ๅฎๅน',
u'ๅฏไปฅๅ
ๅถ': u'ๅฏไปฅๅๅถ',
u'ๅฏ็ดงๅฏๆพ': u'ๅฏ็ทๅฏ้ฌ',
u'ๅฏ่ชๅถ': u'ๅฏ่ชๅถ',
u'ๅฐๅญๅฅณ': u'ๅฐๅญๅฅณ',
u'ๅฐๅญๅญ': u'ๅฐๅญๅญซ',
u'ๅฐๅธๆฏ': u'ๅฐๅธๆฏ',
u'ๅฐๅๅฒ': u'ๅฐๆญทๅฒ',
u'ๅฐ้': u'ๅฐ้',
u'ๅฐ้ขๅ': u'ๅฐ้ขๅ',
u'ๅฑๅค903': u'ๅฑๅค903',
u'ๅฑๅคMY903': u'ๅฑๅคMY903',
u'ๅฑๅคMy903': u'ๅฑๅคMy903',
u'ๅฑๅคๅฑๅฑๅค': u'ๅฑๅคๅฑๅฑๅค',
u'ๅฑๅคๅฑๅคๅฑๅคๅค': u'ๅฑๅคๅฑๅคๅฑๅคๅค',
u'ๅฑๅคๅค': u'ๅฑๅคๅค',
u'ๅฑๅคไนๅ': u'ๅฑๅคๆจๅฃ',
u'ๅฑๅคๆจๅฃ': u'ๅฑๅคๆจๅฃ',
u'ๅถ ๆญๅผ': u'ๅถ ๆญๅผ',
u'ๅถใๆญๅผ': u'ๅถใๆญๅผ',
u'ๅถๆญๅผ': u'ๅถๆญๅผ',
u'ๅถ้ณ': u'ๅถ้ณ',
u'ๅถ้ต': u'ๅถ้ป',
u'ๅๆฟๅ้ข': u'ๅๆฟๅ้บต',
u'ๅ็ไธๅฐฝ': u'ๅ่ไธ็ก',
u'ๅๅง': u'ๅ่',
u'ๅ่ฏ': u'ๅ่ฅ',
u'ๅ้ๆๅค': u'ๅ่ฃกๆๅค',
u'ๅ้็ฌๅค': u'ๅ่ฃก็ฌๅค',
u'ๅ่พฃ้ข': u'ๅ่พฃ้บต',
u'ๅ้่ฏ': u'ๅ้ฏ่ฅ',
u'ๅ่พ': u'ๅ้ข',
u'ๅ็ฑป้': u'ๅ้ก้',
u'ๅไผไบบ': u'ๅไผไบบ',
u'ๅๅนถ': u'ๅไฝต',
u'ๅไผ': u'ๅๅคฅ',
u'ๅๅบไธ': u'ๅๅบไธ',
u'ๅ้': u'ๅๆก',
u'ๅๅ': u'ๅๆ',
u'ๅๅๅฒ': u'ๅๆญทๅฒ',
u'ๅๅ': u'ๅๆบ',
u'ๅ็': u'ๅ่',
u'ๅ่่
': u'ๅ่่
',
u'ๅๅถๅบๅ': u'ๅๅถๆ
ถๅผ',
u'ๅๅธฆ่ฃค': u'ๅๅธถ่คฒ',
u'ๅๆ็': u'ๅๆ่',
u'ๅๆ': u'ๅๆ',
u'ๅ็': u'ๅ่',
u'ๅ่ฃค': u'ๅ่คฒ',
u'ๅ่ฃคๅธฆ': u'ๅ่คฒๅธถ',
u'ๅ้': u'ๅ้',
u'ๅไผ': u'ๅๅคฅ',
u'ๅไบ': u'ๅๆผ',
u'ๅไฝ': u'ๅ้ค',
u'ๅๅ ': u'ๅๅ ',
u'ๅๅ่ก': u'ๅๅ่ก',
u'ๅๅ': u'ๅๅ',
u'ๅๅฆ': u'ๅๅฆ',
u'ๅๅฎ่ทฏ': u'ๅๅฎ่ทฏ',
u'ๅๅนณ่ทฏ': u'ๅๅนณ่ทฏ',
u'ๅๅบง': u'ๅๅบง',
u'ๅๆตทๆนพ': u'ๅๆตท็ฃ',
u'ๅๆตท็ฃ': u'ๅๆตท็ฃ',
u'ๅ็จท': u'ๅ็จท',
u'ๅ็พฟ': u'ๅ็พฟ',
u'ๅ่ก': u'ๅ่ก',
u'ๅ่ง': u'ๅ่ง',
u'ๅไธฐ': u'ๅ่ฑ',
u'ๅ่ฑ': u'ๅ่ฑ',
u'ๅ้': u'ๅ้',
u'ๅ้ซฎๅบง': u'ๅ้ซฎๅบง',
u'ๅๅๅบง': u'ๅ้ซฎๅบง',
u'ๅๅบๆๅ': u'ๅๅบๆ้ซฎ',
u'ๅๅบๆกๅ': u'ๅๅบๆก้ซฎ',
u'ๅๅพๆฅ': u'ๅๅพไพ',
u'ๅๅพๅธธ': u'ๅๅพๅธธ',
u'ๅๅพๆฅ': u'ๅๅพๆฅ',
u'ๅๅพๆถ': u'ๅๅพๆ',
u'ๅ็': u'ๅ่',
u'ๅๅนถ': u'ๅไฝต',
u'ๅๆธธ': u'ๅ้',
u'ๅซ้ฝฟๆดๅ': u'ๅซ้ฝๆด้ซฎ',
u'ๅนๅนฒ': u'ๅนไนพ',
u'ๅนๅ': u'ๅน้ซฎ',
u'ๅน่ก': u'ๅน้ฌ',
u'ๅพไธบไน่ๆ้ฉฐ้ฉฑ': u'ๅพ็ฒไน็ฏๆ้ฆณ้ฉ
',
u'ๅๅ': u'ๅๅ',
u'ๅๅ': u'ๅๅ',
u'ๅๅๅปๅป': u'ๅๅๅปๅป',
u'ๅๅๆฃๆฃ': u'ๅๅๆๆ',
u'ๅๅๅ
ฝ': u'ๅๅ็ธ',
u'ๅๅ็ฌจ็ฌจ': u'ๅๅ็ฌจ็ฌจ',
u'ๅ่ด่ด': u'ๅ็ทป็ทป',
u'ๅ้ๅๆฐ': u'ๅ่ฃกๅๆฐฃ',
u'ๅจไธ': u'ๅจไธ',
u'ๅจไธ': u'ๅจไธ',
u'ๅจไบ': u'ๅจไบ',
u'ๅจไบ': u'ๅจไบ',
u'ๅจๅ
ญ': u'ๅจๅ
ญ',
u'ๅจๅ': u'ๅจๅ',
u'ๅจๅ': u'ๅจๆ',
u'ๅจๆฐไผฆ': u'ๅจๆฐๅซ',
u'ๅจๆฐๅซ': u'ๅจๆฐๅซ',
u'ๅจๅๅฒ': u'ๅจๆญทๅฒ',
u'ๅจๅบ็': u'ๅจ่็',
u'ๅจๆธธ': u'ๅจ้',
u'ๅผๅ': u'ๅผ็ฑฒ',
u'ๅฝไธญๆณจๅฎ': u'ๅฝไธญๆณจๅฎ',
u'ๅๅ
ๅถ': u'ๅๅๅถ',
u'ๅๅฅธ': u'ๅๅงฆ',
u'ๅๅพ': u'ๅๅพต',
u'ๅๅ้': u'ๅๅ้',
u'ๅฌๅงๅท้': u'ๅฌ่ๅท้',
u'ๅฏๅฝ': u'ๅฏๅน',
u'ๅณๅฝ่ฏ': u'ๅณๅฝ่ฅ',
u'ๅๅ': u'ๅๅผ',
u'ๅๆฝ': u'ๅ่ผ',
u'ๅๆฑ': u'ๅๅฝ',
u'ๅๅ ๅคง็ฌ': u'ๅๅ ๅคง็ฌ',
u'ๅๅฑฑๅบ': u'ๅกๅฑฑๅบ',
u'ๅช้': u'ๅช่ฃก',
u'ๅญ่': u'ๅญ้ซ',
u'ๅๅ': u'ๅๅผ',
u'ๅ่ต': u'ๅ่ฎ',
u'ๅๅนฒ': u'ๅไนพ',
u'ๅฏไธๅช': u'ๅฏไธๅช',
u'ๅฑๆธธ': u'ๅฑ้',
u'ๅพ้ข่ชๅนฒ': u'ๅพ้ข่ชไนพ',
u'ๅพไฝ': u'ๅพ้ค',
u'ๅๅ': u'ๅๆ',
u'ๅๅๅฒ': u'ๅๆญทๅฒ',
u'ๅทๅฝ': u'ๅทๅน',
u'ๅไบไธๅฃฐ': u'ๅไบไธ่ฒ',
u'ๅไบ': u'ๅๆผ',
u'ๅๅๅพ': u'ๅๅๅพ',
u'ๅๆฌข่กจ': u'ๅๆญก้ถ',
u'ๅๆฌข้': u'ๅๆญก้',
u'ๅๆฌข้่กจ': u'ๅๆญก้้ถ',
u'ๅๅนฒ': u'ๅไนพ',
u'ๅงๅ': u'ๅง้ฌจ',
u'ไธง้': u'ๅช้',
u'ไนๅฒณ': u'ๅฌๅถฝ',
u'ๅไบ': u'ๅฎไบ',
u'ๅๅไบ': u'ๅฎๅฎๆผ',
u'ๅๅนฒ': u'ๅฎๅนน',
u'ๅๆ็ฌๆ': u'ๅฎๆ็จ้ฌฅ',
u'ๅๅช': u'ๅฎ้ป',
u'ๅ่ฏ': u'ๅ่ฅ',
u'ๅๅ็่กจ': u'ๅๅ็้ถ',
u'ๅ่ฐท': u'ๅ็ฉ',
u'ๅ่ด': u'ๅ่ด',
u'ๅด้': u'ๅด่ฃก',
u'ๆถๅฟ': u'ๅๅฟ',
u'ๅ้ฝฟๆดๅ': u'ๅ้ฝๆด้ซฎ',
u'ๅทๆด': u'ๅดๆด',
u'ๅฝๅท': u'ๅนๅท',
u'ๅฝๅฝ': u'ๅนๅน',
u'ๅ่': u'ๅๅ',
u'ๅๅฏผ': u'ๅฎๅฐ',
u'ๅๅพ': u'ๅฎๅพ',
u'ๅๅบ': u'ๅฎๆ',
u'ๅ่ฟฉ': u'ๅฎ้',
u'ไธฅไบ': u'ๅดๆผ',
u'ไธฅไธๅ็ผ': u'ๅด็ตฒๅ็ธซ',
u'ๅผ่ฐท': u'ๅผ็ฉ',
u'ๅๅ่่': u'ๅๅๅๅ',
u'ๅ่': u'ๅๅ',
u'ๅฑๆ': u'ๅ่จ',
u'ๅไธช': u'ๅๅ',
u'ๅๅบๅ': u'ๅๅบๅ',
u'ๅๅบๅฃ': u'ๅๅบๅฃ',
u'ๅๅบๅพๆถ': u'ๅๅบๅพตๆถ',
u'ๅๅบ็': u'ๅๅบ็',
u'ๅๅบ็': u'ๅๅบ็',
u'ๅๅบ็ฅๅฑฑ': u'ๅๅบ็ฅๅฑฑ',
u'ๅๅบ้': u'ๅๅบ้',
u'ๅๅๅ': u'ๅๅๆ',
u'ๅๅๅๅฒ': u'ๅๅๆญทๅฒ',
u'ๅๅคฉๅ': u'ๅๅคฉๅพ',
u'ๅ่ไบๅ
ฅ': u'ๅๆจไบๅ
ฅ',
u'ๅ่ๅ
ญๅ
ฅ': u'ๅๆจๅ
ญๅ
ฅ',
u'ๅๆ': u'ๅ็ดฎ',
u'ๅๅช': u'ๅ้ป',
u'ๅ้ขๅ
': u'ๅ้ขๅ
',
u'ๅ้ข้': u'ๅ้ข้',
u'ๅไฝ': u'ๅ้ค',
u'ๅๅบ': u'ๅ้ฝฃ',
u'ๅ้': u'ๅๆก',
u'ๅๆๅ ้': u'ๅๆๅ ้',
u'ๅๅ': u'ๅๆ',
u'ๅๅๅฒ': u'ๅๆญทๅฒ',
u'ๅไธ': u'ๅ็ตฒ',
u'ๅ็': u'ๅ่',
u'ๅ่ก': u'ๅ่ฉ',
u'ๅๆธธ': u'ๅ้',
u'ๅ้ณ่กๆฐ': u'ๅ้ฝ่ฉๆฐฃ',
u'ๅ ไบ': u'ๅ ๆผ',
u'ๅฐๅฆ่ตทๆฅ': u'ๅฐๅฆ่ตทไพ',
u'ๅฐๅ
ฝไนๆ': u'ๅฐ็ธไน้ฌฅ',
u'ๅฐๅ
ฝ็นๆ': u'ๅฐ็ธ็ถ้ฌฅ',
u'ๅฐๆ': u'ๅฐ้ฌฅ',
u'ๅบๅพ': u'ๅบๅพต',
u'ๅฟไบ': u'ๅฟๆผ',
u'ๅๅ ': u'ๅไฝ',
u'ๅๅญ้': u'ๅๅญ่ฃก',
u'ๅๆข': u'ๅๆจ',
u'ๅ้': u'ๅ่ฃก',
u'ๅฝไนๆกขๅนฒ': u'ๅไนๆฅจๆฆฆ',
u'ๅฝไบ': u'ๅๆผ',
u'ๅฝๅ': u'ๅๆ',
u'ๅฝๅไปฃ': u'ๅๆญทไปฃ',
u'ๅฝๅไปป': u'ๅๆญทไปป',
u'ๅฝๅๅฒ': u'ๅๆญทๅฒ',
u'ๅฝๅๅฑ': u'ๅๆญทๅฑ',
u'ๅฝไป': u'ๅ่ฎ',
u'ๅญ้': u'ๅ่ฃก',
u'ๅญๆธธไผ': u'ๅ้ๆ',
u'ๅพ้': u'ๅ่ฃก',
u'ๅพ้ด': u'ๅ้',
u'ๅ้': u'ๅ่ฃก',
u'ๅๅถ': u'ๅ่ฃฝ',
u'ๅ้็ด ': u'ๅ้็ด ',
u'ๅจๅถๅ': u'ๅจๅถๅ',
u'ๅจๅ
ๅถ': u'ๅจๅๅถ',
u'ๅจไบ': u'ๅจๆผ',
u'ๅฐๅ ': u'ๅฐไฝ',
u'ๅฐๅ
ๅถ': u'ๅฐๅๅถ',
u'ๅฐๆนๅฟ': u'ๅฐๆนๅฟ',
u'ๅฐๅฟ': u'ๅฐ่ช',
u'ๅฐไธๅพท้ฝ': u'ๅฐ้ๅพท้ฝ',
u'ๅไบ': u'ๅๆผ',
u'ๅๅฆ้': u'ๅๅฆ้',
u'ๅๅบ': u'ๅ่',
u'ๅ้': u'ๅ้',
u'ๅ้': u'ๅ่ฃก',
u'ๅค่': u'ๅค็ฏ',
u'ๅฆ่ก': u'ๅฆ่ฉ',
u'ๅฆ่ก่ก': u'ๅฆ่ฉ่ฉ',
u'ๅฑ้': u'ๅฑ้ฌฑ',
u'ๅไบ': u'ๅๆผ',
u'ๅ่': u'ๅ็ฏ',
u'ๅๅ': u'ๅ้ซฎ',
u'ๅ่': u'ๅ็ฏ',
u'ๅๅๅ': u'ๅๅๆ',
u'ๅๅๅๅฒ': u'ๅๅๆญทๅฒ',
u'ๅๅ่ณๅ': u'ๅๅ่ฑๅ',
u'ๅ่ฃๅฒ': u'ๅๆฆฎ่ก',
u'ๅๅคดๅฏป่กจ': u'ๅ้ ญๅฐ้ถ',
u'ๅๅคดๅฏป้': u'ๅ้ ญๅฐ้',
u'ๅๅคดๅฏป้่กจ': u'ๅ้ ญๅฐ้้ถ',
u'ๅ้': u'ๅ่ฃก',
u'ๅ่ฃก็คพๆซๅขพๅฑ': u'ๅ่ฃ็คพๆซๅขพๅฑ',
u'ๅ้็คพๆๅฆๅฑ': u'ๅ่ฃ็คพๆซๅขพๅฑ',
u'ๅ่ฃ็คพๆซๅขพๅฑ': u'ๅ่ฃ็คพๆซๅขพๅฑ',
u'ๅบๅนฒ': u'ๅบๅนน',
u'ๅบไบ': u'ๅบๆผ',
u'ๅบๅ': u'ๅบๆบ',
u'ๅ่ด': u'ๅ
็ทป',
u'ๅ ๆท': u'ๅ ๆพฑ',
u'ๆถ็': u'ๅก่',
u'ๆถ่ฏ': u'ๅก่ฅ',
u'ๅก่ณ็้': u'ๅก่ณ็้',
u'ๅก่ฏ': u'ๅก่ฅ',
u'ๅขๅฟ้ญ': u'ๅขๅฟ้',
u'ๅขๅฟ': u'ๅข่ช',
u'ๅข่พ': u'ๅข้ข',
u'ๅขจๆฒ': u'ๅขจๆฒ',
u'ๅขจๆฒๆชๅนฒ': u'ๅขจ็ๆชไนพ',
u'ๅ ่่ฏ': u'ๅขฎ่่ฅ',
u'ๅฆๅค': u'ๅขพ่ค',
u'ๅฆ่พ': u'ๅขพ้ข',
u'ๅๆญไปทๆ ผ': u'ๅฃๆทๅนๆ ผ',
u'ๅๆญ่ตไบง': u'ๅฃๆท่ณ็ข',
u'ๅๆญ้ๅข': u'ๅฃๆท้ๅ',
u'ๅฃฎๆธธ': u'ๅฃฏ้',
u'ๅฃฎ้ข': u'ๅฃฏ้บต',
u'ๅฃน้': u'ๅฃน้ฌฑ',
u'ๅฃถ้': u'ๅฃบ่ฃก',
u'ๅฃธ่': u'ๅฃผ็ฏ',
u'ๅฏฟ้ข': u'ๅฃฝ้บต',
u'ๅคไบไน': u'ๅคไบๅฌ',
u'ๅคไบๅฌ': u'ๅคไบๅฌ',
u'ๅคๅคฉ้': u'ๅคๅคฉ่ฃก',
u'ๅคๆฅ้': u'ๅคๆฅ่ฃก',
u'ๅคๅ': u'ๅคๆ',
u'ๅคๅๅฒ': u'ๅคๆญทๅฒ',
u'ๅคๆธธ': u'ๅค้',
u'ๅคๅผบไธญๅนฒ': u'ๅคๅผทไธญไนพ',
u'ๅคๅถ': u'ๅค่ฃฝ',
u'ๅคๅ ': u'ๅคไฝ',
u'ๅคๅ': u'ๅคๅ',
u'ๅคๅๅช': u'ๅคๅๅช',
u'ๅคๅชๅฏ': u'ๅคๅชๅฏ',
u'ๅคๅชๅจ': u'ๅคๅชๅจ',
u'ๅคๅชๆฏ': u'ๅคๅชๆฏ',
u'ๅคๅชไผ': u'ๅคๅชๆ',
u'ๅคๅชๆ': u'ๅคๅชๆ',
u'ๅคๅช่ฝ': u'ๅคๅช่ฝ',
u'ๅคๅช้': u'ๅคๅช้',
u'ๅคๅคฉๅ': u'ๅคๅคฉๅพ',
u'ๅคไบ': u'ๅคๆผ',
u'ๅคๅฒ': u'ๅค่ก',
u'ๅคไธ': u'ๅค้',
u'ๅคๅช': u'ๅค้ป',
u'ๅคไฝ': u'ๅค้ค',
u'ๅคไน': u'ๅค้บผ',
u'ๅคๅ
่กจ': u'ๅคๅ
้ถ',
u'ๅค้': u'ๅค่ฃก',
u'ๅคๆธธ': u'ๅค้',
u'ๅคๅ
ๅถ': u'ๅค ๅๅถ',
u'ๆขฆๆไบไธๅ ': u'ๅคขๆไบไธๅ ',
u'ๆขฆ้': u'ๅคข่ฃก',
u'ๆขฆๆธธ': u'ๅคข้',
u'ไผไผด': u'ๅคฅไผด',
u'ไผๅ': u'ๅคฅๅ',
u'ไผๅ': u'ๅคฅๅ',
u'ไผไผ': u'ๅคฅ็พ',
u'ไผ่ฎก': u'ๅคฅ่จ',
u'ๅคงไธ': u'ๅคงไธ',
u'ๅคงไผๅฟ': u'ๅคงไผๅ
',
u'ๅคงๅชๅฏ': u'ๅคงๅชๅฏ',
u'ๅคงๅชๅจ': u'ๅคงๅชๅจ',
u'ๅคงๅชๆฏ': u'ๅคงๅชๆฏ',
u'ๅคงๅชไผ': u'ๅคงๅชๆ',
u'ๅคงๅชๆ': u'ๅคงๅชๆ',
u'ๅคงๅช่ฝ': u'ๅคงๅช่ฝ',
u'ๅคงๅช้': u'ๅคงๅช้',
u'ๅคงๅจๅ': u'ๅคงๅจๅ',
u'ๅคงๅ้': u'ๅคงๅ้',
u'ๅคงๅ้่กจ้ข': u'ๅคงๅ้่กจ้ข',
u'ๅคงๅ้่กจ': u'ๅคงๅ้้ถ',
u'ๅคงๅ้้ข': u'ๅคงๅ้้ข',
u'ๅคงไผ': u'ๅคงๅคฅ',
u'ๅคงๅนฒ': u'ๅคงๅนน',
u'ๅคงๆนๆถๅฐ': u'ๅคงๆนๆนงๅฐ',
u'ๅคงๆๅฟ': u'ๅคงๆบๅ
',
u'ๅคงๆๅ': u'ๅคงๆๆ',
u'ๅคงๆๅๅฒ': u'ๅคงๆๆญทๅฒ',
u'ๅคงๅ': u'ๅคงๆ',
u'ๅคงๆฌ้': u'ๅคงๆฌ้',
u'ๅคงๆฌ้ๆฒ': u'ๅคงๆฌ้ๆฒ',
u'ๅคงๅๅฒ': u'ๅคงๆญทๅฒ',
u'ๅคงๅ': u'ๅคง็',
u'ๅคง็
ๅๆ': u'ๅคง็
ๅ็',
u'ๅคง็ฎๅนฒ่ฟ': u'ๅคง็ฎไนพ้ฃ',
u'ๅคง็ฌจ้': u'ๅคง็ฌจ้',
u'ๅคง็ฌจ้ๆฒ': u'ๅคง็ฌจ้ๆฒ',
u'ๅคง่ก': u'ๅคง่ก',
u'ๅคง่กๅ': u'ๅคง่กๆ',
u'ๅคง่กๅๅฒ': u'ๅคง่กๆญทๅฒ',
u'ๅคง่จ้ๅคธ': u'ๅคง่จ้ๅคธ',
u'ๅคง่ต': u'ๅคง่ฎ',
u'ๅคงๅจๆ': u'ๅคง้ฑๆบ',
u'ๅคง้ๅ่': u'ๅคง้้ซฎ่',
u'ๅคง้ค': u'ๅคง้',
u'ๅคง้': u'ๅคง้',
u'ๅคงๅช': u'ๅคง้ป',
u'ๅคง้ฃๅ': u'ๅคง้ขจๅพ',
u'ๅคงๆฒ': u'ๅคง้บด',
u'ๅคฉๅนฒ็ฉ็ฅ': u'ๅคฉไนพ็ฉ็ฅ',
u'ๅคฉๅ
ๅฐๅฒ': u'ๅคฉๅ
ๅฐ่ก',
u'ๅคฉๅ': u'ๅคฉๅ',
u'ๅคฉๅๅฎซ': u'ๅคฉๅๅฎฎ',
u'ๅคฉๅฐๅฟ็ผ': u'ๅคฉๅฐๅฟ็ผ',
u'ๅคฉๅฐไธบ่': u'ๅคฉๅฐ็บ็ฏ',
u'ๅคฉๅนฒๅฐๆฏ': u'ๅคฉๅนฒๅฐๆฏ',
u'ๅคฉๆๅญฆ้': u'ๅคฉๆๅญธ้',
u'ๅคฉๆ้': u'ๅคฉๆ้',
u'ๅคฉๅ': u'ๅคฉๆ',
u'ๅคฉๅๅฒ': u'ๅคฉๆญทๅฒ',
u'ๅคฉ็ฟปๅฐ่ฆ': u'ๅคฉ็ฟปๅฐ่ฆ',
u'ๅคฉ่ฆๅฐ่ฝฝ': u'ๅคฉ่ฆๅฐ่ผ',
u'ๅคชไป': u'ๅคชๅ',
u'ๅคชๅๅ': u'ๅคชๅๆ',
u'ๅคชๅๅๅฒ': u'ๅคชๅๆญทๅฒ',
u'ๅคชๅ': u'ๅคชๅ',
u'ๅคฏๅนฒ': u'ๅคฏๅนน',
u'ๅคธไบบ': u'ๅคธไบบ',
u'ๅคธๅ
': u'ๅคธๅ
',
u'ๅคธๅคธๅ
ถ่ฐ': u'ๅคธๅคธๅ
ถ่ซ',
u'ๅคธๅงฃ': u'ๅคธๅงฃ',
u'ๅคธๅฎน': u'ๅคธๅฎน',
u'ๅคธๆฏ': u'ๅคธๆฏ',
u'ๅคธ็ถ': u'ๅคธ็ถ',
u'ๅคธ็น': u'ๅคธ็น',
u'ๅคธ่ฑ': u'ๅคธ่ซ',
u'ๅคธ่ฏ': u'ๅคธ่ช',
u'ๅคธ่ฏไธ็ป': u'ๅคธ่ชไธ็ถ',
u'ๅคธไธฝ': u'ๅคธ้บ',
u'ๅฅ่ฟน': u'ๅฅ่ฟน',
u'ๅฅไธ': u'ๅฅ้',
u'ๅฅๆ': u'ๅฅๆบ',
u'ๅฅฅๅ ': u'ๅฅงไฝ',
u'ๅคบๆ': u'ๅฅช้ฌฅ',
u'ๅฅๆ': u'ๅฅฎ้ฌฅ',
u'ๅฅณไธ': u'ๅฅณไธ',
u'ๅฅณไฝฃไบบ': u'ๅฅณไฝฃไบบ',
u'ๅฅณไฝฃ': u'ๅฅณๅญ',
u'ๅฅณไป': u'ๅฅณๅ',
u'ๅฅดไป': u'ๅฅดๅ',
u'ๅฅธๆทซๆณๆ ': u'ๅฅธๆทซๆๆ ',
u'ๅฅนๅ
ๅถ': u'ๅฅนๅๅถ',
u'ๅฅฝๅนฒ': u'ๅฅฝไนพ',
u'ๅฅฝๅฎถไผ': u'ๅฅฝๅขๅคฅ',
u'ๅฅฝๅๆ็ ': u'ๅฅฝๅ้ฌฅ็ ',
u'ๅฅฝๆๅคง': u'ๅฅฝๆๅคง',
u'ๅฅฝๆๅฎค': u'ๅฅฝๆๅฎค',
u'ๅฅฝๆ็ฌ ': u'ๅฅฝๆ็ฌ ',
u'ๅฅฝๆ็ฏท': u'ๅฅฝๆ็ฏท',
u'ๅฅฝๆ่': u'ๅฅฝๆ่ฝ',
u'ๅฅฝๆ่ฌ': u'ๅฅฝๆ่ฌ',
u'ๅฅฝไบ': u'ๅฅฝๆผ',
u'ๅฅฝๅ': u'ๅฅฝ็',
u'ๅฅฝๅฐ': u'ๅฅฝ็',
u'ๅฅฝ็ญพ': u'ๅฅฝ็ฑค',
u'ๅฅฝไธ': u'ๅฅฝ้',
u'ๅฅฝๆ': u'ๅฅฝ้ฌฅ',
u'ๅฆๆๅนฒ': u'ๅฆๆๅนน',
u'ๅฆ้ฅฅไผผๆธด': u'ๅฆ้ฅไผผๆธด',
u'ๅฆๅ': u'ๅฆๅ',
u'ๅฆ่ฏ': u'ๅฆ่ฅ',
u'ๅงไบ': u'ๅงๆผ',
u'ๅงๆ': u'ๅง่จ',
u'ๅงๆไนฆ': u'ๅง่จๆธ',
u'ๅงๆๆฐ': u'ๅงๆๆฐ',
u'ๅฅธๅคซ': u'ๅงฆๅคซ',
u'ๅฅธๅฆ': u'ๅงฆๅฉฆ',
u'ๅฅธๅฎ': u'ๅงฆๅฎ',
u'ๅฅธๆ
': u'ๅงฆๆ
',
u'ๅฅธๆ': u'ๅงฆๆฎบ',
u'ๅฅธๆฑก': u'ๅงฆๆฑ',
u'ๅฅธๆทซ': u'ๅงฆๆทซ',
u'ๅฅธ็พ': u'ๅงฆ็พ',
u'ๅฅธ็ป': u'ๅงฆ็ดฐ',
u'ๅฅธ้ช': u'ๅงฆ้ช',
u'ๅจๆฃฑ': u'ๅจ็จ',
u'ๅฉขไป': u'ๅฉขๅ',
u'ๅจฒๆ': u'ๅชงๆ',
u'ๅซ็ฅธไบ': u'ๅซ็ฆๆผ',
u'ๅซๅถ': u'ๅซๅ
',
u'ๅซๅฅฝ้ไธ': u'ๅซๅฅฝ้้',
u'ๅฌๆธธ': u'ๅฌ้',
u'ๅฌๅนธ': u'ๅฌๅ',
u'ๅฌดไฝ': u'ๅฌด้ค',
u'ๅญไนไธฐๅ
ฎ': u'ๅญไนไธฐๅ
ฎ',
u'ๅญไบ': u'ๅญไบ',
u'ๅญๆฑ': u'ๅญๅฝ',
u'ๅญ็ ่กจ': u'ๅญ็ขผ่กจ',
u'ๅญ้่ก้ด': u'ๅญ่ฃก่ก้',
u'ๅญๅไธไบๅ็พ': u'ๅญๅไธๆผๅ็พ',
u'ๅญๆ': u'ๅญๆบ',
u'ๅญไบ': u'ๅญๆผ',
u'ๅญคๅฏกไธ่ฐท': u'ๅญคๅฏกไธ็ฉ',
u'ๅญฆ้': u'ๅญธ่ฃก',
u'ๅฎๅฎๅฟ': u'ๅฎๅฎ่ช',
u'ๅฎไบ': u'ๅฎๆผ',
u'ๅฎๆฒ้่ทฏ': u'ๅฎ็้ต่ทฏ',
u'ๅฎ็ ่ฏ': u'ๅฎ็ ่ฅ',
u'ๅฎ่่ฏ': u'ๅฎ่่ฅ',
u'ๅฎๅจ้': u'ๅฎๅจ้',
u'ๅฎไธๆๅคงๅชๆ็ฎก': u'ๅฎไธๆๅคงๅชๆ็ฎก',
u'ๅฎๅฐไธบ้': u'ๅฎๅฐ็บๅฏ',
u'ๅฎๅ': u'ๅฎๆ',
u'ๅฎๅๅฒ': u'ๅฎๆญทๅฒ',
u'ๅฎๅบ': u'ๅฎ่',
u'ๅฎไบ': u'ๅฎๆผ',
u'ๅฎๅ': u'ๅฎๆบ',
u'ๅฎๅถ': u'ๅฎ่ฃฝ',
u'ๅฎไบ': u'ๅฎไบ',
u'ๅฎฃๆณ': u'ๅฎฃๆดฉ',
u'ๅฎฆๆธธ': u'ๅฎฆ้',
u'ๅฎซ้': u'ๅฎฎ่ฃก',
u'ๅฎณไบ': u'ๅฎณๆผ',
u'ๅฎดๆธธ': u'ๅฎด้',
u'ๅฎถไป': u'ๅฎถๅ',
u'ๅฎถๅ
ทๅค': u'ๅฎถๅ
ทๅ',
u'ๅฎถๅ
ทๆ': u'ๅฎถๅ
ทๆ',
u'ๅฎถๅ
ทๆจๅทฅ็ง': u'ๅฎถๅ
ทๆจๅทฅ็ง',
u'ๅฎถๅ
ท่ก': u'ๅฎถๅ
ท่ก',
u'ๅฎถๅ
ทไฝ': u'ๅฎถๅ
ท้ซ',
u'ๅฎถๅบ': u'ๅฎถ่',
u'ๅฎถ้': u'ๅฎถ่ฃก',
u'ๅฎถไธ': u'ๅฎถ้',
u'ๅฎนไบ': u'ๅฎนๆผ',
u'ๅฎน่': u'ๅฎน็ฏ',
u'ๅฎฟ่': u'ๅฎฟ่',
u'ๅฏๆๅจ': u'ๅฏๆๅจ',
u'ๅฏๆ': u'ๅฏ่จ',
u'ๅฏ่ด': u'ๅฏ็ทป',
u'ๅฏๅ': u'ๅฏๆบ',
u'ๅฏไป': u'ๅฏ่ฎ',
u'ๅฏไฝ': u'ๅฏ้ค',
u'ๅฏๅ้': u'ๅฏๅ่ฃก',
u'ๅฏๆ ': u'ๅฏๆ
',
u'ๅฏไบ': u'ๅฏๆผ',
u'ๅฏไบ': u'ๅฏๆผ',
u'ๅฏกๅ ': u'ๅฏกไฝ',
u'ๅฏกๆฌฒ': u'ๅฏกๆ
พ',
u'ๅฎๅนฒ': u'ๅฏฆๅนน',
u'ๅๅญๅฐ': u'ๅฏซๅญๆชฏ',
u'ๅฎฝๅฎฝๆพๆพ': u'ๅฏฌๅฏฌ้ฌ้ฌ',
u'ๅฎฝไบ': u'ๅฏฌๆผ',
u'ๅฎฝไฝ': u'ๅฏฌ้ค',
u'ๅฎฝๆพ': u'ๅฏฌ้ฌ',
u'ๅฏฎ้': u'ๅฏฎๅฏ',
u'ๅฎๅฑฑๅบ': u'ๅฏถๅฑฑๅบ',
u'ๅฏถๆ': u'ๅฏถๆ',
u'ๅฎๅ': u'ๅฏถๆ',
u'ๅฎๅๅฒ': u'ๅฏถๆญทๅฒ',
u'ๅฎๅบ': u'ๅฏถ่',
u'ๅฎ้ๅฎๆฐ': u'ๅฏถ่ฃกๅฏถๆฐฃ',
u'ๅฏธๅๅ้': u'ๅฏธ้ซฎๅ้',
u'ๅฏบ้': u'ๅฏบ้',
u'ๅฐๅ': u'ๅฐๅ',
u'ๅฐ้ข้': u'ๅฐ้ข่ฃก',
u'ๅฐ้': u'ๅฐ้ตฐ',
u'ๅฐๅ ': u'ๅฐไฝ',
u'ๅฐๅ ๅ': u'ๅฐๅ ๅ',
u'ไธๅๅพ': u'ๅฐๅๅพ',
u'ไธๆณจ': u'ๅฐ่จป',
u'ไธ่พ้': u'ๅฐ่ผฏ่ฃก',
u'ๅฏนๆ': u'ๅฐๆบ',
u'ๅฏนไบ': u'ๅฐๆผ',
u'ๅฏนๅ': u'ๅฐๆบ',
u'ๅฏนๅ่กจ': u'ๅฐๆบ้ถ',
u'ๅฏนๅ้': u'ๅฐๆบ้',
u'ๅฏนๅ้่กจ': u'ๅฐๆบ้้ถ',
u'ๅฏนๅๅๅจ': u'ๅฐ่ฏ็ผๅ',
u'ๅฏน่กจไธญ': u'ๅฐ่กจไธญ',
u'ๅฏน่กจๆฌ': u'ๅฐ่กจๆ',
u'ๅฏน่กจๆ': u'ๅฐ่กจๆ',
u'ๅฏน่กจๆผ': u'ๅฐ่กจๆผ',
u'ๅฏน่กจ็ฐ': u'ๅฐ่กจ็พ',
u'ๅฏน่กจ่พพ': u'ๅฐ่กจ้',
u'ๅฏน่กจ': u'ๅฐ้ถ',
u'ๅฏผๆธธ': u'ๅฐ้',
u'ๅฐไธ': u'ๅฐไธ',
u'ๅฐไปท': u'ๅฐไปท',
u'ๅฐไป': u'ๅฐๅ',
u'ๅฐๅ ': u'ๅฐๅ ',
u'ๅฐๅชๅฏ': u'ๅฐๅชๅฏ',
u'ๅฐๅชๅจ': u'ๅฐๅชๅจ',
u'ๅฐๅชๆฏ': u'ๅฐๅชๆฏ',
u'ๅฐๅชไผ': u'ๅฐๅชๆ',
u'ๅฐๅชๆ': u'ๅฐๅชๆ',
u'ๅฐๅช่ฝ': u'ๅฐๅช่ฝ',
u'ๅฐๅช้': u'ๅฐๅช้',
u'ๅฐๅจๅ': u'ๅฐๅจๅ',
u'ๅฐๅ้': u'ๅฐๅ้',
u'ๅฐๅ้่กจ้ข': u'ๅฐๅ้่กจ้ข',
u'ๅฐๅ้่กจ': u'ๅฐๅ้้ถ',
u'ๅฐๅ้้ข': u'ๅฐๅ้้ข',
u'ๅฐไผๅญ': u'ๅฐๅคฅๅญ',
u'ๅฐ็ฑณ้ข': u'ๅฐ็ฑณ้บต',
u'ๅฐๅช': u'ๅฐ้ป',
u'ๅฐๅ ': u'ๅฐไฝ',
u'ๅฐ้': u'ๅฐๆก',
u'ๅฐฑๅ
ๅถ': u'ๅฐฑๅๅถ',
u'ๅฐฑ่': u'ๅฐฑ็ฏ',
u'ๅฐฑ้': u'ๅฐฑ่ฃก',
u'ๅฐธไฝ็ด ้ค': u'ๅฐธไฝ็ด ้ค',
u'ๅฐธๅฉ': u'ๅฐธๅฉ',
u'ๅฐธๅฑ
ไฝๆฐ': u'ๅฐธๅฑ
้คๆฐฃ',
u'ๅฐธ็ฅ': u'ๅฐธ็ฅ',
u'ๅฐธ็ฆ': u'ๅฐธ็ฅฟ',
u'ๅฐธ่ฃ': u'ๅฐธ่ฃ',
u'ๅฐธ่ฐ': u'ๅฐธ่ซซ',
u'ๅฐธ้ญ็': u'ๅฐธ้ญ็',
u'ๅฐธ้ธ ': u'ๅฐธ้ณฉ',
u'ๅฑ้': u'ๅฑ่ฃก',
u'ๅฑ่กๅคงๅไบๅฟ': u'ๅฑ่กๅคงๅผไบๅฟ',
u'ๅฑๅญ้': u'ๅฑๅญ่ฃก',
u'ๅฑๆข': u'ๅฑๆจ',
u'ๅฑ้': u'ๅฑ่ฃก',
u'ๅฑ้ฃๅ': u'ๅฑ้ขจๅพ',
u'ๅฑไบ': u'ๅฑๆผ',
u'ๅฑก้กพๅฐไป': u'ๅฑข้กง็พๅ',
u'ๅฑไบ': u'ๅฑฌๆผ',
u'ๅฑๆ': u'ๅฑฌ่จ',
u'ๅฑฏๆ': u'ๅฑฏ็ดฎ',
u'ๅฑฏ้': u'ๅฑฏ่ฃก',
u'ๅฑฑๅดฉ้ๅบ': u'ๅฑฑๅดฉ้ๆ',
u'ๅฑฑๅฒณ': u'ๅฑฑๅถฝ',
u'ๅฑฑๆข': u'ๅฑฑๆจ',
u'ๅฑฑๆด้': u'ๅฑฑๆด่ฃก',
u'ๅฑฑๆฃฑ': u'ๅฑฑ็จ',
u'ๅฑฑ็พ่ก': u'ๅฑฑ็พ้ฌ',
u'ๅฑฑๅบ': u'ๅฑฑ่',
u'ๅฑฑ่ฏ': u'ๅฑฑ่ฅ',
u'ๅฑฑ้': u'ๅฑฑ่ฃก',
u'ๅฑฑ้ๆฐดๅค': u'ๅฑฑ้ๆฐด่ค',
u'ๅฒฑๅฒณ': u'ๅฒฑๅถฝ',
u'ๅณฐๅ': u'ๅณฐ่ฟด',
u'ๅณปๅฒญ': u'ๅณปๅฒญ',
u'ๆๅง': u'ๅดๅ',
u'ๆๅฑฑ': u'ๅดๅฑฑ',
u'ๆไป': u'ๅดๅด',
u'ๆไปๅฑฑ่': u'ๅดๅดๅฑฑ่',
u'ๆๆฒ': u'ๅดๆฒ',
u'ๆ่
': u'ๅด่
',
u'ๆ่': u'ๅด่',
u'ๆ่ฐ': u'ๅด่ชฟ',
u'ๅดๅนฟ': u'ๅดๅนฟ',
u'ไป่': u'ๅด่',
u'ๅถๆฃฑ': u'ๅถ็จ',
u'ๅฒณๅฒณ': u'ๅถฝๅถฝ',
u'ๅฒณ้บ': u'ๅถฝ้บ',
u'ๅท่ฐท': u'ๅท็ฉ',
u'ๅทกๅๅป็': u'ๅทกๅ้ซ็',
u'ๅทกๅ': u'ๅทก่ฟด',
u'ๅทกๆธธ': u'ๅทก้',
u'ๅทฅ่ด': u'ๅทฅ็ทป',
u'ๅทฆๅฒๅณ็ช': u'ๅทฆ่กๅณ็ช',
u'ๅทงๅฆๅไธๅพๆ ้ข้ฆ้ฅฆ': u'ๅทงๅฉฆๅไธๅพ็ก้บต้คบ้ฃฅ',
u'ๅทงๅนฒ': u'ๅทงๅนน',
u'ๅทงๅ': u'ๅทงๆ',
u'ๅทงๅๅฒ': u'ๅทงๆญทๅฒ',
u'ๅทฎไนๆฏซๅ': u'ๅทฎไนๆฏซๅ',
u'ๅทฎไนๆฏซๅ๏ผ่ฐฌไปฅๅ้': u'ๅทฎไนๆฏซ้๏ผ่ฌฌไปฅๅ้',
u'ๅทฎไบ': u'ๅทฎๆผ',
u'ๅทฑไธ': u'ๅทฑไธ',
u'ๅทฒๅ ': u'ๅทฒไฝ',
u'ๅทฒๅ ๅ': u'ๅทฒๅ ๅ',
u'ๅทฒๅ ็ฎ': u'ๅทฒๅ ็ฎ',
u'ๅทดๅฐๅนฒ': u'ๅทด็พๅนน',
u'ๅทท้': u'ๅทท่ฃก',
u'ๅธๅ ': u'ๅธไฝ',
u'ๅธๅ ็': u'ๅธไฝ็',
u'ๅธ้': u'ๅธ่ฃก',
u'ๅธ่ฐท': u'ๅธ็ฉ',
u'ๅธ่ฐท้ธ้': u'ๅธ็ฉ้ณฅ้',
u'ๅธๅบ': u'ๅธ่',
u'ๅธ่ฐท้ธ': u'ๅธ่ฐท้ณฅ',
u'ๅธไผฏๆฅๅ': u'ๅธไผฏไพๆ',
u'ๅธไผฏๆฅๅๅฒ': u'ๅธไผฏไพๆญทๅฒ',
u'ๅธๅญ': u'ๅธๅญ',
u'ๅธๅธ': u'ๅธๅธ',
u'ๅธ่': u'ๅธซ็ฏ',
u'ๅธญๅท': u'ๅธญๆฒ',
u'ๅธฆๅขๅๅ ': u'ๅธถๅๅๅ ',
u'ๅธฆๅพ': u'ๅธถๅพต',
u'ๅธฆๅไฟฎ่ก': u'ๅธถ้ซฎไฟฎ่ก',
u'ๅธฎไฝฃ': u'ๅนซๅญ',
u'ๅนฒ็ณป': u'ๅนฒไฟ',
u'ๅนฒ็ๆฅ': u'ๅนฒ่ๆฅ',
u'ๅนณๅนณๅฝๅฝ': u'ๅนณๅนณ็ถ็ถ',
u'ๅนณๆณๅบ': u'ๅนณๆณ่',
u'ๅนณๅ': u'ๅนณๆบ',
u'ๅนดไปฃ้': u'ๅนดไปฃ่ฃก',
u'ๅนดๅ': u'ๅนดๆ',
u'ๅนดๅๅฒ': u'ๅนดๆญทๅฒ',
u'ๅนด่ฐท': u'ๅนด็ฉ',
u'ๅนด้': u'ๅนด่ฃก',
u'ๅนถๅ': u'ๅนถๅ',
u'ๅนถๅ': u'ๅนถๅ',
u'ๅนถๅท': u'ๅนถๅท',
u'ๅนถๆฅ่้ฃ': u'ๅนถๆฅ่้ฃ',
u'ๅนถ่ก': u'ๅนถ่ก',
u'ๅนถ่ฟญ': u'ๅนถ่ฟญ',
u'ๅนธๅ
ไบ้พ': u'ๅนธๅ
ๆผ้ฃ',
u'ๅนธไบ': u'ๅนธๆผ',
u'ๅนธ่ฟ่ก': u'ๅนธ้้ฌ',
u'ๅนฒไธ': u'ๅนนไธ',
u'ๅนฒไธๅป': u'ๅนนไธๅป',
u'ๅนฒไธไบ': u'ๅนนไธไบ',
u'ๅนฒไธๆ': u'ๅนนไธๆ',
u'ๅนฒไบ': u'ๅนนไบ',
u'ๅนฒไบ': u'ๅนนไบ',
u'ๅนฒไบ': u'ๅนนไบ',
u'ๅนฒไบบ': u'ๅนนไบบ',
u'ๅนฒไปไน': u'ๅนนไป้บผ',
u'ๅนฒไธช': u'ๅนนๅ',
u'ๅนฒๅฒ': u'ๅนนๅ',
u'ๅนฒๅฒๅฒๅคฉ': u'ๅนนๅๆฒๅคฉ',
u'ๅนฒๅ': u'ๅนนๅ',
u'ๅนฒๅ': u'ๅนนๅก',
u'ๅนฒๅฅ': u'ๅนนๅฅ',
u'ๅนฒๅ': u'ๅนนๅ',
u'ๅนฒๅ': u'ๅนนๅ',
u'ๅนฒๅไบ': u'ๅนนๅฃไบ',
u'ๅนฒๅฎ': u'ๅนนๅฎ',
u'ๅนฒๅฎถ': u'ๅนนๅฎถ',
u'ๅนฒๅฐ': u'ๅนนๅฐ',
u'ๅนฒๅพ': u'ๅนนๅพ',
u'ๅนฒๆงๆฒน': u'ๅนนๆงๆฒน',
u'ๅนฒๆ': u'ๅนนๆ',
u'ๅนฒๆ': u'ๅนนๆ',
u'ๅนฒๆข': u'ๅนนๆข',
u'ๅนฒๆ ก': u'ๅนนๆ ก',
u'ๅนฒๆดป': u'ๅนนๆดป',
u'ๅนฒๆต': u'ๅนนๆต',
u'ๅนฒๆต': u'ๅนนๆฟ',
u'ๅนฒ่ฅ็': u'ๅนน็็',
u'ๅนฒ็ถไน่': u'ๅนน็ถไน่ ฑ',
u'ๅนฒ็ๆธฉๅบฆ': u'ๅนน็ๆบซๅบฆ',
u'ๅนฒ็ไน': u'ๅนน็้บผ',
u'ๅนฒ็ฅ': u'ๅนน็ฅ',
u'ๅนฒๅฝ': u'ๅนน็ถ',
u'ๅนฒ็ๅๅฝ': u'ๅนน็ๅ็ถ',
u'ๅนฒ็ป่': u'ๅนน็ดฐ่',
u'ๅนฒ็ดฐ่': u'ๅนน็ดฐ่',
u'ๅนฒ็บฟ': u'ๅนน็ท',
u'ๅนฒ็ป': u'ๅนน็ทด',
u'ๅนฒ็ผบ': u'ๅนน็ผบ',
u'ๅนฒ็พคๅ
ณ็ณป': u'ๅนน็พค้ไฟ',
u'ๅนฒ่': u'ๅนน่ ฑ',
u'ๅนฒ่ญฆ': u'ๅนน่ญฆ',
u'ๅนฒ่ตทๆฅ': u'ๅนน่ตทไพ',
u'ๅนฒ่ทฏ': u'ๅนน่ทฏ',
u'ๅนฒๅ': u'ๅนน่พฆ',
u'ๅนฒ่ฟไธ่ก': u'ๅนน้ไธ่ก',
u'ๅนฒ่ฟ็งไบ': u'ๅนน้็จฎไบ',
u'ๅนฒ้': u'ๅนน้',
u'ๅนฒ้จ': u'ๅนน้จ',
u'ๅนฒ้ฉๅฝ': u'ๅนน้ฉๅฝ',
u'ๅนฒๅคด': u'ๅนน้ ญ',
u'ๅนฒไน': u'ๅนน้บผ',
u'ๅ ๅ': u'ๅนพๅ',
u'ๅ ๅคฉๅ': u'ๅนพๅคฉๅพ',
u'ๅ ๅช': u'ๅนพ้ป',
u'ๅ ๅบ': u'ๅนพ้ฝฃ',
u'ๅนฟ้จ': u'ๅนฟ้จ',
u'ๅบ็จผไบบ': u'ๅบ็จผไบบ',
u'ๅบ็จผ้ข': u'ๅบ็จผ้ข',
u'ๅบ้': u'ๅบ่ฃก',
u'ๅบๅนฒๅฟ': u'ๅบๅนฒๅฟ',
u'ๅบๅนฒๆพ': u'ๅบๅนฒๆพ',
u'ๅบๅนฒๆฐ': u'ๅบๅนฒๆพ',
u'ๅบๅนฒๆฟ': u'ๅบๅนฒๆฟ',
u'ๅบๅนฒๆถ': u'ๅบๅนฒๆถ',
u'ๅบๅนฒ็ฏ': u'ๅบๅนฒ็ฏ',
u'ๅบๅนฒ้ ': u'ๅบๅนฒ้ ',
u'ๅบๅนฒ้ข': u'ๅบๅนฒ้ ',
u'ๅบๅนฒ': u'ๅบๅนน',
u'ๅบง้': u'ๅบง้',
u'ๅบทๅบๅคง้': u'ๅบทๅบๅคง้',
u'ๅบท้ๆฉ': u'ๅบทๆกๆฉ',
u'ๅบทๅบ': u'ๅบท่',
u'ๅจไฝ': u'ๅป้ค',
u'ๅฎๆ': u'ๅป้ฌฅ',
u'ๅบ้': u'ๅป่ฃก',
u'ๅบๅ': u'ๅปขๅ',
u'ๅปขๅ': u'ๅปขๅ',
u'ๅนฟๅพ': u'ๅปฃๅพต',
u'ๅนฟ่': u'ๅปฃๆจ',
u'ๅปบไบ': u'ๅปบๆผ',
u'ๅผๅนฒ': u'ๅผไนพ',
u'ๅผไธ': u'ๅผ้',
u'ๅผ่': u'ๅผ้ซ',
u'ๅผๆพ': u'ๅผ้ฌ',
u'ๅผ้ฌผๅ็ด': u'ๅผ้ฌผๅผ็ด',
u'ๅๅฟ้ๅฝ': u'ๅผๅ
้็ถ',
u'ๅๅท': u'ๅผๅท',
u'ๅๅ': u'ๅผๅ',
u'ๅๅค': u'ๅผๅค',
u'ๅๅคๅฏปๅนฝ': u'ๅผๅคๅฐๅนฝ',
u'ๅๅ': u'ๅผๅ',
u'ๅ้ฎ': u'ๅผๅ',
u'ๅๅ': u'ๅผๅ',
u'ๅไธง': u'ๅผๅช',
u'ๅไธง้ฎ็พ': u'ๅผๅชๅ็พ',
u'ๅๅญ': u'ๅผๅญ',
u'ๅๅบ': u'ๅผๅ ด',
u'ๅๅฅ ': u'ๅผๅฅ ',
u'ๅๅญ': u'ๅผๅญ',
u'ๅๅฎข': u'ๅผๅฎข',
u'ๅๅฎด': u'ๅผๅฎด',
u'ๅๅธฆ': u'ๅผๅธถ',
u'ๅๅฝฑ': u'ๅผๅฝฑ',
u'ๅๆ
ฐ': u'ๅผๆ
ฐ',
u'ๅๆฃ': u'ๅผๆฃ',
u'ๅๆท': u'ๅผๆท',
u'ๅๆท็ปทๆ': u'ๅผๆท็นๆ',
u'ๅๆ': u'ๅผๆ',
u'ๅๆ': u'ๅผๆ',
u'ๅๆ': u'ๅผๆ',
u'ๅๆ': u'ๅผๆ',
u'ๅไนฆ': u'ๅผๆธ',
u'ๅๆกฅ': u'ๅผๆฉ',
u'ๅๆญป': u'ๅผๆญป',
u'ๅๆญป้ฎๅญค': u'ๅผๆญปๅๅญค',
u'ๅๆญป้ฎ็พ': u'ๅผๆญปๅ็พ',
u'ๅๆฐ': u'ๅผๆฐ',
u'ๅๆฐไผ็ฝช': u'ๅผๆฐไผ็ฝช',
u'ๅ็ฅญ': u'ๅผ็ฅญ',
u'ๅ็บธ': u'ๅผ็ด',
u'ๅ่
ๅคงๆฆ': u'ๅผ่
ๅคงๆ
',
u'ๅ่
ฐๆ่ทจ': u'ๅผ่
ฐๆ่ทจ',
u'ๅ่ๅฟไบ': u'ๅผ่
ณๅ
ไบ',
u'ๅ่ๅญ': u'ๅผ่ๅญ',
u'ๅ่ฏ': u'ๅผ่ฉ',
u'ๅ่ฏก': u'ๅผ่ฉญ',
u'ๅ่ฏก็ๅฅ': u'ๅผ่ฉญ็ๅฅ',
u'ๅ่ฐ': u'ๅผ่ฌ',
u'ๅ่ดบ่ฟ้': u'ๅผ่ณ่ฟ้',
u'ๅๅคด': u'ๅผ้ ญ',
u'ๅ้ข': u'ๅผ้ ธ',
u'ๅ้นค': u'ๅผ้ถด',
u'ๅผๆ': u'ๅผ้ฌฅ',
u'ๅผๅ': u'ๅผๆ',
u'ๅผๅๅฒ': u'ๅผๆญทๅฒ',
u'ๅผฑไบ': u'ๅผฑๆผ',
u'ๅผฑๆฐดไธๅๅชๅไธ็ข': u'ๅผฑๆฐดไธๅๅชๅไธ็ข',
u'ๅผ ไธไธฐ': u'ๅผตไธไธฐ',
u'ๅผตไธไธฐ': u'ๅผตไธไธฐ',
u'ๅผ ๅ': u'ๅผตๅณ',
u'ๅผบๅ ': u'ๅผทไฝ',
u'ๅผบๅถไฝ็จ': u'ๅผทๅถไฝ็จ',
u'ๅผบๅฅธ': u'ๅผทๅงฆ',
u'ๅผบๅนฒ': u'ๅผทๅนน',
u'ๅผบไบ': u'ๅผทๆผ',
u'ๅซๅฃๆฐ': u'ๅฝๅฃๆฐฃ',
u'ๅซๅผบ': u'ๅฝๅผท',
u'ๅซๆญ': u'ๅฝๆญ',
u'ๅซๆ': u'ๅฝๆ',
u'ๅซๆฐ': u'ๅฝๆฐฃ',
u'ๅผนๅญๅฐ': u'ๅฝๅญๆชฏ',
u'ๅผน็ ๅฐ': u'ๅฝ็ ๆชฏ',
u'ๅผน่ฏ': u'ๅฝ่ฅ',
u'ๆฑๅ': u'ๅฝๅ',
u'ๆฑๆฅ': u'ๅฝๅ ฑ',
u'ๆฑๆด': u'ๅฝๆด',
u'ๆฑ็ฎ': u'ๅฝ็ฎ',
u'ๆฑ็ผ': u'ๅฝ็ทจ',
u'ๆฑ็บ': u'ๅฝ็บ',
u'ๆฑ่พ': u'ๅฝ่ผฏ',
u'ๆฑ้': u'ๅฝ้',
u'ๅฝขๅๅฝฑๅช': u'ๅฝขๅฎๅฝฑ้ป',
u'ๅฝขๅฝฑ็ธๅ': u'ๅฝขๅฝฑ็ธๅผ',
u'ๅฝขไบ': u'ๅฝขๆผ',
u'ๅฝฑๅ': u'ๅฝฑๅ',
u'ไปฟไฝ': u'ๅฝทๅฝฟ',
u'ๅฝนไบ': u'ๅฝนๆผ',
u'ๅฝผๆญคๅ
ๅถ': u'ๅฝผๆญคๅๅถ',
u'ๅพๆฅ็กไป': u'ๅพๆฅ็ก่ฎ',
u'ๅพ้': u'ๅพ่ฃก',
u'ๅพๅค': u'ๅพ่ค',
u'ๅพๅนฒ': u'ๅพไนพ',
u'ๅพๅถ': u'ๅพๅ
',
u'ๅพไธ': u'ๅพ้',
u'ๅพๅๅฟ': u'ๅพๆๅฟ',
u'ๅๅฐ': u'ๅพๅฐ',
u'ๅๅฐ่ๆฟ': u'ๅพๅฐ่ๆฟ',
u'ๅๅบ': u'ๅพๅบ',
u'ๅ้ขๅบ': u'ๅพ้ขๅบ',
u'ๅพๅนฒ': u'ๅพๅนน',
u'ๅพๆ็ฉบ่จ': u'ๅพ่จ็ฉบ่จ',
u'ๅพๅ
ๅถ': u'ๅพๅๅถ',
u'ไปไบ': u'ๅพๆผ',
u'ไป้ๅฐๅค': u'ๅพ่ฃกๅฐๅค',
u'ไป้ๅๅค': u'ๅพ่ฃกๅๅค',
u'ๅคๅง': u'ๅพฉๅง',
u'ๅพไบบ': u'ๅพตไบบ',
u'ๅพไปค': u'ๅพตไปค',
u'ๅพๅ ': u'ๅพตไฝ',
u'ๅพไฟก': u'ๅพตไฟก',
u'ๅพๅ': u'ๅพตๅ',
u'ๅพๅ
': u'ๅพตๅ
',
u'ๅพๅ
ต': u'ๅพตๅ
ต',
u'ๅพๅฐ': u'ๅพตๅฐ',
u'ๅพๅ': u'ๅพตๅ',
u'ๅพๅ': u'ๅพตๅ',
u'ๅพๅฌ': u'ๅพตๅฌ',
u'ๅพๅ่ดฃๅฎ': u'ๅพตๅ่ฒฌๅฏฆ',
u'ๅพๅ': u'ๅพตๅ',
u'ๅพๅ': u'ๅพตๅ',
u'ๅพๅฏ': u'ๅพตๅ',
u'ๅพๅฃซ': u'ๅพตๅฃซ',
u'ๅพๅฉ': u'ๅพตๅฉ',
u'ๅพๅฎ': u'ๅพตๅฏฆ',
u'ๅพๅบธ': u'ๅพตๅบธ',
u'ๅพๅผ': u'ๅพตๅผ',
u'ๅพๅพ': u'ๅพตๅพ',
u'ๅพๆช': u'ๅพตๆช',
u'ๅพๆ': u'ๅพตๆ',
u'ๅพๆ': u'ๅพตๆ',
u'ๅพๆถ': u'ๅพตๆถ',
u'ๅพๆ': u'ๅพตๆ',
u'ๅพๆ': u'ๅพตๆ',
u'ๅพๆฑ': u'ๅพตๆฑ',
u'ๅพ็ถ': u'ๅพต็',
u'ๅพ็จ': u'ๅพต็จ',
u'ๅพๅ': u'ๅพต็ผ',
u'ๅพ็จ': u'ๅพต็จ
',
u'ๅพ็จฟ': u'ๅพต็จฟ',
u'ๅพ็ญ': u'ๅพต็ญ',
u'ๅพ็ป': u'ๅพต็ต',
u'ๅพๅฃ': u'ๅพต่',
u'ๅพ่': u'ๅพต่',
u'ๅพ่ฎญ': u'ๅพต่จ',
u'ๅพ่ฏข': u'ๅพต่ฉข',
u'ๅพ่ฐ': u'ๅพต่ชฟ',
u'ๅพ่ฑก': u'ๅพต่ฑก',
u'ๅพ่ดญ': u'ๅพต่ณผ',
u'ๅพ่ฟน': u'ๅพต่ทก',
u'ๅพ่ฝฆ': u'ๅพต่ป',
u'ๅพ่พ': u'ๅพต่พ',
u'ๅพ้': u'ๅพต้',
u'ๅพ้': u'ๅพต้ธ',
u'ๅพ้': u'ๅพต้',
u'ๅพ้ฃๅฌ้จ': u'ๅพต้ขจๅฌ้จ',
u'ๅพ้ช': u'ๅพต้ฉ',
u'ๅพทๅ ': u'ๅพทไฝ',
u'ๅฟๆฟ': u'ๅฟๆฟ',
u'ๅฟไบ': u'ๅฟๆผ',
u'ๅฟ็': u'ๅฟ็',
u'ๅฟ็ปๅฆๅ': u'ๅฟ็ดฐๅฆ้ซฎ',
u'ๅฟ็ณปไธ': u'ๅฟ็นซไธ',
u'ๅฟ็ณปไธ': u'ๅฟ็นซไธ',
u'ๅฟ็ณปไธญ': u'ๅฟ็นซไธญ',
u'ๅฟ็ณปไน': u'ๅฟ็นซไน',
u'ๅฟ็ณปไบ': u'ๅฟ็นซไบ',
u'ๅฟ็ณปไบฌ': u'ๅฟ็นซไบฌ',
u'ๅฟ็ณปไบบ': u'ๅฟ็นซไบบ',
u'ๅฟ็ณปไป': u'ๅฟ็นซไป',
u'ๅฟ็ณปไผ': u'ๅฟ็นซไผ',
u'ๅฟ็ณปไฝ': u'ๅฟ็นซไฝ',
u'ๅฟ็ณปไฝ ': u'ๅฟ็นซไฝ ',
u'ๅฟ็ณปๅฅ': u'ๅฟ็นซๅฅ',
u'ๅฟ็ณปไผ ': u'ๅฟ็นซๅณ',
u'ๅฟ็ณปๅ
จ': u'ๅฟ็นซๅ
จ',
u'ๅฟ็ณปไธค': u'ๅฟ็นซๅ
ฉ',
u'ๅฟ็ณปๅ': u'ๅฟ็นซๅ',
u'ๅฟ็ณปๅ': u'ๅฟ็นซๅ',
u'ๅฟ็ณปๅจ': u'ๅฟ็นซๅ',
u'ๅฟ็ณปๅ': u'ๅฟ็นซๅ',
u'ๅฟ็ณปๅ': u'ๅฟ็นซๅ',
u'ๅฟ็ณปๅ': u'ๅฟ็นซๅ',
u'ๅฟ็ณปๅ': u'ๅฟ็นซๅ',
u'ๅฟ็ณปๅ': u'ๅฟ็นซๅ',
u'ๅฟ็ณปๅฐ': u'ๅฟ็นซๅฐ',
u'ๅฟ็ณปๅ': u'ๅฟ็นซๅ',
u'ๅฟ็ณปๅช': u'ๅฟ็นซๅช',
u'ๅฟ็ณปๅ': u'ๅฟ็นซๅ',
u'ๅฟ็ณปๅฑ': u'ๅฟ็นซๅ',
u'ๅฟ็ณปๅ': u'ๅฟ็นซๅ',
u'ๅฟ็ณปๅฐ': u'ๅฟ็นซๅฐ',
u'ๅฟ็ณปๅฝ': u'ๅฟ็นซๅ',
u'ๅฟ็ณปๅจ': u'ๅฟ็นซๅจ',
u'ๅฟ็ณปๅฐ': u'ๅฟ็นซๅฐ',
u'ๅฟ็ณปๅคง': u'ๅฟ็นซๅคง',
u'ๅฟ็ณปๅคฉ': u'ๅฟ็นซๅคฉ',
u'ๅฟ็ณปๅคซ': u'ๅฟ็นซๅคซ',
u'ๅฟ็ณปๅฅฅ': u'ๅฟ็นซๅฅง',
u'ๅฟ็ณปๅฅณ': u'ๅฟ็นซๅฅณ',
u'ๅฟ็ณปๅฅน': u'ๅฟ็นซๅฅน',
u'ๅฟ็ณปๅฆป': u'ๅฟ็นซๅฆป',
u'ๅฟ็ณปๅฆ': u'ๅฟ็นซๅฉฆ',
u'ๅฟ็ณปๅญ': u'ๅฟ็นซๅญ',
u'ๅฟ็ณปๅฎ': u'ๅฟ็นซๅฎ',
u'ๅฟ็ณปๅฎฃ': u'ๅฟ็นซๅฎฃ',
u'ๅฟ็ณปๅฎถ': u'ๅฟ็นซๅฎถ',
u'ๅฟ็ณปๅฏ': u'ๅฟ็นซๅฏ',
u'ๅฟ็ณปๅฐ': u'ๅฟ็นซๅฐ',
u'ๅฟ็ณปๅฑฑ': u'ๅฟ็นซๅฑฑ',
u'ๅฟ็ณปๅท': u'ๅฟ็นซๅท',
u'ๅฟ็ณปๅนผ': u'ๅฟ็นซๅนผ',
u'ๅฟ็ณปๅนฟ': u'ๅฟ็นซๅปฃ',
u'ๅฟ็ณปๅฝผ': u'ๅฟ็นซๅฝผ',
u'ๅฟ็ณปๅพท': u'ๅฟ็นซๅพท',
u'ๅฟ็ณปๆจ': u'ๅฟ็นซๆจ',
u'ๅฟ็ณปๆ
': u'ๅฟ็นซๆ
',
u'ๅฟ็ณปๆ': u'ๅฟ็นซๆ',
u'ๅฟ็ณปๆฉ': u'ๅฟ็นซๆฉ',
u'ๅฟ็ณปๆ
': u'ๅฟ็นซๆ
',
u'ๅฟ็ณปๆฐ': u'ๅฟ็นซๆฐ',
u'ๅฟ็ณปๆฅ': u'ๅฟ็นซๆฅ',
u'ๅฟ็ณปๆ': u'ๅฟ็นซๆ',
u'ๅฟ็ณปๆ': u'ๅฟ็นซๆ',
u'ๅฟ็ณปๆผ': u'ๅฟ็นซๆผ',
u'ๅฟ็ณปไธ': u'ๅฟ็นซๆฑ',
u'ๅฟ็ณปๆ': u'ๅฟ็นซๆ',
u'ๅฟ็ณปๆฏ': u'ๅฟ็นซๆฏ',
u'ๅฟ็ณปๆฐ': u'ๅฟ็นซๆฐ',
u'ๅฟ็ณปๆฑ': u'ๅฟ็นซๆฑ',
u'ๅฟ็ณปๆฑถ': u'ๅฟ็นซๆฑถ',
u'ๅฟ็ณปๆฒ': u'ๅฟ็นซๆฒ',
u'ๅฟ็ณปๆฒ': u'ๅฟ็นซๆฒ',
u'ๅฟ็ณปๆณฐ': u'ๅฟ็นซๆณฐ',
u'ๅฟ็ณปๆต': u'ๅฟ็นซๆต',
u'ๅฟ็ณปๆธฏ': u'ๅฟ็นซๆธฏ',
u'ๅฟ็ณปๆน': u'ๅฟ็นซๆน',
u'ๅฟ็ณปๆพณ': u'ๅฟ็นซๆพณ',
u'ๅฟ็ณป็พ': u'ๅฟ็นซ็ฝ',
u'ๅฟ็ณป็ถ': u'ๅฟ็นซ็ถ',
u'ๅฟ็ณป็': u'ๅฟ็นซ็',
u'ๅฟ็ณป็
': u'ๅฟ็นซ็
',
u'ๅฟ็ณป็พ': u'ๅฟ็นซ็พ',
u'ๅฟ็ณป็': u'ๅฟ็นซ็',
u'ๅฟ็ณปไผ': u'ๅฟ็นซ็พ',
u'ๅฟ็ณป็คพ': u'ๅฟ็นซ็คพ',
u'ๅฟ็ณป็ฅ': u'ๅฟ็นซ็ฅ',
u'ๅฟ็ณป็ฅ': u'ๅฟ็นซ็ฅ',
u'ๅฟ็ณป็บข': u'ๅฟ็นซ็ด
',
u'ๅฟ็ณป็พ': u'ๅฟ็นซ็พ',
u'ๅฟ็ณป็พค': u'ๅฟ็นซ็พค',
u'ๅฟ็ณป่': u'ๅฟ็นซ่',
u'ๅฟ็ณป่': u'ๅฟ็นซ่',
u'ๅฟ็ณป่ฑ': u'ๅฟ็นซ่ฑ',
u'ๅฟ็ณป่ถ': u'ๅฟ็นซ่ถ',
u'ๅฟ็ณปไธ': u'ๅฟ็นซ่ฌ',
u'ๅฟ็ณป็': u'ๅฟ็นซ่',
u'ๅฟ็ณปๅ
ฐ': u'ๅฟ็นซ่ญ',
u'ๅฟ็ณป่ฅฟ': u'ๅฟ็นซ่ฅฟ',
u'ๅฟ็ณป่ดซ': u'ๅฟ็นซ่ฒง',
u'ๅฟ็ณป่พ': u'ๅฟ็นซ่ผธ',
u'ๅฟ็ณป่ฟ': u'ๅฟ็นซ่ฟ',
u'ๅฟ็ณป่ฟ': u'ๅฟ็นซ้ ',
u'ๅฟ็ณป้': u'ๅฟ็นซ้ธ',
u'ๅฟ็ณป้': u'ๅฟ็นซ้',
u'ๅฟ็ณป้ฟ': u'ๅฟ็นซ้ท',
u'ๅฟ็ณป้ฎ': u'ๅฟ็นซ้ฎ',
u'ๅฟ็ณป้': u'ๅฟ็นซ้',
u'ๅฟ็ณป้': u'ๅฟ็นซ้',
u'ๅฟ็ณป้ฃ': u'ๅฟ็นซ้ขจ',
u'ๅฟ็ณป้ฆ': u'ๅฟ็นซ้ฆ',
u'ๅฟ็ณป้ซ': u'ๅฟ็นซ้ซ',
u'ๅฟ็ณป้บฆ': u'ๅฟ็นซ้บฅ',
u'ๅฟ็ณป้ป': u'ๅฟ็นซ้ป',
u'ๅฟ่': u'ๅฟ่',
u'ๅฟ่ก': u'ๅฟ่ฉ',
u'ๅฟ่ฏ': u'ๅฟ่ฅ',
u'ๅฟ้้ข': u'ๅฟ่ฃ้ข',
u'ๅฟ้': u'ๅฟ่ฃก',
u'ๅฟ้ฟๅ็ญ': u'ๅฟ้ท้ซฎ็ญ',
u'ๅฟไฝ': u'ๅฟ้ค',
u'ๅฟ
้กป': u'ๅฟ
้ ',
u'ๅฟๅนถ': u'ๅฟไฝต',
u'ๅฟ้': u'ๅฟ่ฃก',
u'ๅฟ้ๅท้ฒ': u'ๅฟ่ฃกๅท้',
u'ๅฟ ไบบไนๆ': u'ๅฟ ไบบไนๆ',
u'ๅฟ ไป': u'ๅฟ ๅ',
u'ๅฟ ไบ': u'ๅฟ ๆผ',
u'ๅฟซๅนฒ': u'ๅฟซไนพ',
u'ๅฟซๅ
ๅถ': u'ๅฟซๅๅถ',
u'ๅฟซๅฟซๅฝๅฝ': u'ๅฟซๅฟซ็ถ็ถ',
u'ๅฟซๅฒ': u'ๅฟซ่ก',
u'ๆไน': u'ๆ้บผ',
u'ๆไน็': u'ๆ้บผ่',
u'ๆไบ': u'ๆๆผ',
u'ๆๅๅฒๅ ': u'ๆ้ซฎ่กๅ ',
u'ๆๅฆๆณๆถ': u'ๆๅฆๆณๆนง',
u'ๆ ไบ': u'ๆ ๆผ',
u'ๆฅไบ': u'ๆฅๆผ',
u'ๆฅๅฒ่ไธ': u'ๆฅ่ก่ไธ',
u'ๆงๅพ': u'ๆงๅพต',
u'ๆงๆฌฒ': u'ๆงๆ
พ',
u'ๆช้ๆชๆฐ': u'ๆช่ฃกๆชๆฐฃ',
u'ๆซ้': u'ๆซ้ฌฑ',
u'ๆๆ ': u'ๆๆ
',
u'ๆ็ๆๆฐ': u'ๆ็ๆๆธ',
u'ๆ็่กไปทๆๆฐ': u'ๆ็่กๅนๆๆธ',
u'ๆ็้ถ่ก': u'ๆ็้่ก',
u'ๆไนไปทๅฌ': u'ๆไนไปทๅฌ',
u'ๆฏไบค็ปๆธธ': u'ๆฏไบค็ต้',
u'ๆฏ่ฐท': u'ๆฏ็ฉ',
u'ๆฐๆ': u'ๆฐ็บ',
u'ๆ่ฏ': u'ๆ่ฅ',
u'ๆ้': u'ๆ้ฌฑ',
u'ๆ ๆ ่ก่ก': u'ๆ ๆ ่ฉ่ฉ',
u'ๆ ่ก': u'ๆ ่ฉ',
u'ๆ ๆธธ': u'ๆ ้',
u'ๆจๅ
ๅถ': u'ๆจๅๅถ',
u'ๆฒ็ญ': u'ๆฒ็ญ',
u'ๆฒ้': u'ๆฒ้ฌฑ',
u'้ท็ๅคดๅฟๅนฒ': u'ๆถ่้ ญๅ
ๅนน',
u'ๆธๆ ': u'ๆธๆ
',
u'ๆ
ๆฌฒ': u'ๆ
ๆ
พ',
u'ๆๆด': u'ๆๆจธ',
u'ๆถ็ดไธๆญฃ': u'ๆก็ด้ๆญฃ',
u'ๆถๆ': u'ๆก้ฌฅ',
u'ๆณๅ
ๅถ': u'ๆณๅๅถ',
u'ๆดๆ ': u'ๆดๆ
',
u'ๆๅ ': u'ๆไฝ',
u'ๆๅ
ๅถ': u'ๆๅๅถ',
u'ๆๅคงๅฉ้ข': u'ๆๅคงๅฉ้บต',
u'ๆ้ข': u'ๆ้บต',
u'็ฑๅฐ': u'ๆ็',
u'ๆๅ่ฏ': u'ๆๅ่ฅ',
u'ๆไบ': u'ๆๆผ',
u'ๆฟๆด': u'ๆฟๆจธ',
u'ๆฟ่ๆญ': u'ๆฟ่ๆญ',
u'ๆ ๅฝ': u'ๆ
ๅฝ',
u'ๆ ๆ ': u'ๆ
ๆ
',
u'ๆ
้ๆ
ๅผ ': u'ๆ
่ฃกๆ
ๅผต',
u'ๅบๅ': u'ๆ
ถๅผ',
u'ๅบๅ': u'ๆ
ถๆ',
u'ๅบๅๅฒ': u'ๆ
ถๆญทๅฒ',
u'ๆฌฒไปคๆบๆ': u'ๆ
พไปคๆบๆ',
u'ๆฌฒๅฃ้พๅกซ': u'ๆ
พๅฃ้ฃๅกซ',
u'ๆฌฒๅฟต': u'ๆ
พๅฟต',
u'ๆฌฒๆ': u'ๆ
พๆ',
u'ๆฌฒๆตท': u'ๆ
พๆตท',
u'ๆฌฒ็ซ': u'ๆ
พ็ซ',
u'ๆฌฒ้': u'ๆ
พ้',
u'ๅฟง้': u'ๆ้ฌฑ',
u'ๅญๅ ': u'ๆๅ ',
u'ๅญๅ': u'ๆๅผ',
u'ๅญๆ': u'ๆๆบ',
u'ๅญๅ': u'ๆๆบ',
u'ๅญๅ': u'ๆ่',
u'ๅญๅ็': u'ๆ่่',
u'ๆณๆ': u'ๆ่จ',
u'ๆๆพ': u'ๆ้ฌ',
u'ๅบๅ
ๅถ': u'ๆๅๅถ',
u'ๅบๅพ': u'ๆๅพต',
u'ๅบ้': u'ๆ้',
u'ๆๆ ': u'ๆๆ
',
u'่ๆ': u'ๆๆ',
u'่่ๆๆ': u'ๆๆๆๆ',
u'่็ด': u'ๆ็ด',
u'ๆฉๅฟฟ็ชๆฌฒ': u'ๆฒๅฟฟ็ชๆฌฒ',
u'ๆ้': u'ๆท่ฃก',
u'ๆ่กจ': u'ๆท้ถ',
u'ๆ้': u'ๆท้',
u'ๆฌๆ': u'ๆธๆ',
u'ๆฌๆข': u'ๆธๆจ',
u'ๆฌ่ๆข': u'ๆธ่ๆจ',
u'ๆฌ้': u'ๆธ้',
u'ๆฟ่': u'ๆฟ็ฏ',
u'ๆๆไธ่': u'ๆๆไธๆจ',
u'ๆไบ': u'ๆๆผ',
u'ๆไบๆ': u'ๆๆผๆ',
u'ๆ่ฏ': u'ๆ่ฅ',
u'ๆๅ
ๅถ': u'ๆๅๅถ',
u'ๆฌ่ฐท': u'ๆฉ็ฉ',
u'ๆชๅ': u'ๆช้ซฎ',
u'ๆๅคฉๆๅฐ': u'ๆฐๅคฉ้ฌฅๅฐ',
u'ๆๆ ': u'ๆฐๆ
',
u'ๆๆ': u'ๆฐ้ฌฅ',
u'ๆๅฝฉๅจฑไบฒ': u'ๆฒ็ถตๅจ่ฆช',
u'ๆ้': u'ๆฒ่ฃก',
u'ๆด่กจ': u'ๆด้ถ',
u'ๆดๅๅซ้ฝฟ': u'ๆด้ซฎๅซ้ฝ',
u'ๆฟ้': u'ๆฟ่ฃก',
u'ๆไบ': u'ๆไบ',
u'ๆไบไบ': u'ๆไบไบ',
u'ๆๅ ': u'ๆไฝ',
u'ๆๅ ๅ': u'ๆๅ ๅ',
u'ๆๅ ๆ': u'ๆๅ ๆ',
u'ๆๅ ็ฎ': u'ๆๅ ็ฎ',
u'ๆๆ': u'ๆ่จ',
u'ๆๆ่ฐท็่ซ': u'ๆๆฌ็ฉ็่ฒ',
u'ๆๅกๆฒป่ซ': u'ๆๅกๆฒป่ซ',
u'ๆๅขๆฒป่ซ': u'ๆๅกๆฒป่ซ',
u'ๆๆ': u'ๆๆบ',
u'ๆ่กจๆ': u'ๆ่กจๆ
',
u'ๆ่กจๆ': u'ๆ่กจๆ',
u'ๆ่กจๅณ': u'ๆ่กจๆฑบ',
u'ๆ่กจๆผ': u'ๆ่กจๆผ',
u'ๆ่กจ็ฐ': u'ๆ่กจ็พ',
u'ๆ่กจ็คบ': u'ๆ่กจ็คบ',
u'ๆ่กจ่พพ': u'ๆ่กจ้',
u'ๆ่กจ้ฒ': u'ๆ่กจ้ฒ',
u'ๆ่กจ้ข': u'ๆ่กจ้ข',
u'ๆ้': u'ๆ่ฃก',
u'ๆ่กจ': u'ๆ้ถ',
u'ๆๆพ': u'ๆ้ฌ',
u'ๆๅ
ๅถ': u'ๆๅๅถ',
u'ๆๅนฒไผ': u'ๆๅนฒไผ',
u'ๆๅนฒๆ': u'ๆๅนฒๆ',
u'ๆๅนฒๆฐ': u'ๆๅนฒๆพ',
u'ๆๅนฒๆฟ': u'ๆๅนฒๆฟ',
u'ๆๅนฒๆถ': u'ๆๅนฒๆถ',
u'ๆๅนฒ้ข': u'ๆๅนฒ้ ',
u'ๆๅนฒ': u'ๆๅนน',
u'ๆๅฅฝๅบๅญ': u'ๆๅฅฝๅบๅญ',
u'ๆๅฅฝๆ น': u'ๆๅฅฝๆ น',
u'ๆไฝๆๅ': u'ๆไฝๆๅ',
u'ๆๆ': u'ๆๆ',
u'ๆๆ': u'ๆๆป',
u'ๆๅนฒๅ': u'ๆไนพๅฆ',
u'ๆๅนถ': u'ๆไฝต',
u'ๆๅบๅๅ
ฅ': u'ๆๅบๅผๅ
ฅ',
u'ๆๅก้': u'ๆๅก้',
u'ๆๅจ': u'ๆๅจ',
u'ๆๅนฒ': u'ๆๅนน',
u'ๆๆผ': u'ๆๆ',
u'ๆๆญๅ': u'ๆๆท็ผ',
u'ๆ่ฐท': u'ๆ็ฉ',
u'ๆ็้': u'ๆ่้',
u'ๆ่ทฏๅบๆฟ': u'ๆ่ทฏ่ๆฟ',
u'ๆ้': u'ๆ้',
u'ๆ้ฃๅ': u'ๆ้ขจๅพ',
u'ๆๆ': u'ๆ้ฌฅ',
u'ๆ็ฎกๅฝ': u'ๆ็ฎกๅ',
u'ๆๅคงๆข': u'ๆๅคงๆจ',
u'ๆๅพก': u'ๆ็ฆฆ',
u'ๆฏ้ข': u'ๆฏ้บต',
u'ๆถไฝๅฝ': u'ๆถ้คๅ',
u'ๆนๅ็': u'ๆนๅ็',
u'ๆนๅค': u'ๆน่ค',
u'ๆนๆณจ': u'ๆน่จป',
u'ๆนๆ': u'ๆน้ฌฅ',
u'ๆฟๅถ': u'ๆฟ่ฃฝ',
u'ๆๅถไฝ็จ': u'ๆๅถไฝ็จ',
u'ๆ้': u'ๆ้ฌฑ',
u'ๆๅฅธ': u'ๆๅงฆ',
u'ๆ่ฏ': u'ๆ่ฅ',
u'ๆๆ': u'ๆ้ฌฅ',
u'ๆ่ฏ': u'ๆ่ฅ',
u'ๆ็่ฏ': u'ๆ็่ฅ',
u'ๆๅพก': u'ๆ็ฆฆ',
u'ๆ่ฏ': u'ๆ่ฅ',
u'ๆๅๅพ': u'ๆๅๅพ',
u'ๆๅญๆ': u'ๆๅญๆฒ',
u'ๆๆๆฒๆฒณ': u'ๆๆๆฒๆฒณ',
u'ๆๅฒ': u'ๆ่ก',
u'ๆซๆฆ้ๅ
ฐ': u'ๆซๆฆๆก่ญ',
u'ๆซๅคดๆฃๅ': u'ๆซ้ ญๆฃ้ซฎ',
u'ๆซๅ': u'ๆซ้ซฎ',
u'ๆฑๆด่้ฟๅๅ
ฎ': u'ๆฑๆด่้ทๅๅ
ฎ',
u'ๆฑ็ด ๆๆด': u'ๆฑ็ด ๆทๆจธ',
u'ๆตๅพก': u'ๆต็ฆฆ',
u'ๆนๅนฒ': u'ๆนไนพ',
u'ๆฝๅ
ฌ็ญพ': u'ๆฝๅ
ฌ็ฑค',
u'ๆฝ็ญพ': u'ๆฝ็ฑค',
u'ๆฟๅ': u'ๆฟ้ซฎ',
u'ๆ้ๆ ๅฃฐ': u'ๆ้็ก่ฒ',
u'ๆไผ': u'ๆๅคฅ',
u'ๆ้กป': u'ๆ้ฌ',
u'ๆๅ
ๆฝๅฐๅพท้': u'ๆๅ
ๆฝ็พๅพท้',
u'ๆๆ': u'ๆๆ',
u'ๆ็บค': u'ๆ็ธด',
u'ๆ้ขไธ': u'ๆ้ขไธ',
u'ๆ้ขๅ
ท': u'ๆ้ขๅ
ท',
u'ๆ้ขๅ': u'ๆ้ขๅ',
u'ๆ้ขๅทพ': u'ๆ้ขๅทพ',
u'ๆ้ขๆ ': u'ๆ้ข็ก',
u'ๆ้ข็ฎ': u'ๆ้ข็ฎ',
u'ๆ้ข็ฝฉ': u'ๆ้ข็ฝฉ',
u'ๆ้ข่ฒ': u'ๆ้ข่ฒ',
u'ๆ้ข้จ': u'ๆ้ข้จ',
u'ๆ้ข': u'ๆ้บต',
u'ๆไบบไบ': u'ๆไบบๆผ',
u'ๆไบ': u'ๆๆผ',
u'ๆๆด': u'ๆๆจธ',
u'ๆๅ': u'ๆ้ซฎ',
u'ๆ้กป': u'ๆ้ฌ',
u'ๆๅซ': u'ๆๅฝ',
u'ๆไบ': u'ๆๆผ',
u'ๆไบ': u'ๆๆผ',
u'ๆๆด': u'ๆๆจธ',
u'ๆผๅด': u'ๆๅป',
u'ๆผๅฝ': u'ๆๅฝ',
u'ๆผ่': u'ๆๆจ',
u'ๆผๆญป': u'ๆๆญป',
u'ๆผ็ๅฐฝๆญป': u'ๆ็็กๆญป',
u'ๆผ็ป': u'ๆ็ต',
u'ๆผ่ๅฝ': u'ๆ่ๅฝ',
u'ๆผๆ': u'ๆ้ฌฅ',
u'ๆๆ': u'ๆ่จ',
u'ๆฌๅ': u'ๆฌ้ซฎ',
u'ๆญๅนฒ': u'ๆญไนพ',
u'ๆฎๆฎ': u'ๆฎๆฎ',
u'ๆผๆญปๆผๆดป': u'ๆผๆญปๆผๆดป',
u'ๆพๆฒ': u'ๆพ็',
u'ๆฟไธ่กจ': u'ๆฟไธ้ถ',
u'ๆฟไธ้': u'ๆฟไธ้',
u'ๆฟๅ': u'ๆฟๆบ',
u'ๆฟ็ ดไป': u'ๆฟ็ ดๅด',
u'ๆๅ': u'ๆๅ',
u'ๆๅพ': u'ๆๅ',
u'ๆๅธ
': u'ๆๅธฅ',
u'ๆๅฝฉ': u'ๆๅฝฉ',
u'ๆๅฟต': u'ๆๅฟต',
u'ๆๅท': u'ๆ่',
u'ๆ่ฝฆ': u'ๆ่ป',
u'ๆ้ข': u'ๆ้ข',
u'ๆๆๅ่': u'ๆๆๅ่
ณ',
u'ๆๆ': u'ๆ้ฌฅ',
u'ๆๅคงๆข': u'ๆๅคงๆจ',
u'ๆๆ': u'ๆ้ฌฅ',
u'ๆฏ่ก': u'ๆฏ่ฉ',
u'ๆๆ': u'ๆ็ดฎ',
u'ๆๅฅธๅพ': u'ๆๅฅธๅพ',
u'ๆๅฅธ็ป': u'ๆๅฅธ็ดฐ',
u'ๆๅฅธ่ดผ': u'ๆๅฅธ่ณ',
u'ๆๅฅธๅ
': u'ๆๅฅธ้ปจ',
u'ๆๅฅธ': u'ๆๅงฆ',
u'ๆๅ': u'ๆ้ซฎ',
u'ๆๅพก': u'ๆ็ฆฆ',
u'ๆ้ขไบบ': u'ๆ้บตไบบ',
u'่ไธๅพ': u'ๆจไธๅพ',
u'่ๅบ': u'ๆจๅบ',
u'่ๅป': u'ๆจๅป',
u'่ๅฝ': u'ๆจๅฝ',
u'่ๅ ': u'ๆจๅขฎ',
u'่ๅฎๅฐฑๅฑ': u'ๆจๅฎๅฐฑๅฑ',
u'่ๅฎ': u'ๆจๅฏฆ',
u'่ๅทฑไปไบบ': u'ๆจๅทฑๅพไบบ',
u'่ๅทฑๆไบบ': u'ๆจๅทฑๆไบบ',
u'่ๅทฑไธบไบบ': u'ๆจๅทฑ็บไบบ',
u'่ๅทฑไธบๅ
ฌ': u'ๆจๅทฑ็บๅ
ฌ',
u'่ๅทฑไธบๅฝ': u'ๆจๅทฑ็บๅ',
u'่ๅพ': u'ๆจๅพ',
u'่ๆๅ
ถ่ฐ': u'ๆจๆๅ
ถ่ชฐ',
u'่ๆฌ้ๆซ': u'ๆจๆฌ้ๆซ',
u'่ๅผ': u'ๆจๆฃ',
u'่ๆญปๅฟ็': u'ๆจๆญปๅฟ็',
u'่็': u'ๆจ็',
u'่็ญๅ้ฟ': u'ๆจ็ญๅ้ท',
u'่่บซ': u'ๆจ่บซ',
u'่่ฝฆไฟๅธ
': u'ๆจ่ปไฟๅธฅ',
u'่่ฟๆฑ่ฟ': u'ๆจ่ฟๆฑ้ ',
u'ๅทไฝ': u'ๆฒไฝ',
u'ๅทๆฅ': u'ๆฒไพ',
u'ๅทๅฟ': u'ๆฒๅ
',
u'ๅทๅ
ฅ': u'ๆฒๅ
ฅ',
u'ๅทๅจ': u'ๆฒๅ',
u'ๅทๅป': u'ๆฒๅป',
u'ๅทๅพ': u'ๆฒๅ',
u'ๅทๅ้ๆฅ': u'ๆฒๅ้ไพ',
u'ๅทๅฐบ': u'ๆฒๅฐบ',
u'ๅทๅฟ่': u'ๆฒๅฟ่',
u'ๅทๆ': u'ๆฒๆ',
u'ๅทๆฒ': u'ๆฒๆฒ',
u'ๅทๆฌพ': u'ๆฒๆฌพ',
u'ๅทๆฏ': u'ๆฒๆฏ',
u'ๅท็': u'ๆฒ็
',
u'ๅท็ญ': u'ๆฒ็ญ',
u'ๅทๅธ': u'ๆฒ็ฐพ',
u'ๅท็บธ': u'ๆฒ็ด',
u'ๅท็ผฉ': u'ๆฒ็ธฎ',
u'ๅท่': u'ๆฒ่',
u'ๅท่็': u'ๆฒ่่',
u'ๅท่ธ': u'ๆฒ่ธ',
u'ๅท่ข': u'ๆฒ่ข',
u'ๅท่ตฐ': u'ๆฒ่ตฐ',
u'ๅท่ตท': u'ๆฒ่ตท',
u'ๅท่ฝด': u'ๆฒ่ปธ',
u'ๅท้': u'ๆฒ้',
u'ๅท้บ็': u'ๆฒ้ช่',
u'ๅทไบ': u'ๆฒ้ฒ',
u'ๅท้ฃ': u'ๆฒ้ขจ',
u'ๅทๅ': u'ๆฒ้ซฎ',
u'ๆต้ข': u'ๆต้บต',
u'ๆถ็ผ': u'ๆถ้',
u'ๆซ่ก': u'ๆ่ฉ',
u'ๆๆ': u'ๆๆ',
u'ๆ้ชจ้ข': u'ๆ้ชจ้บต',
u'ๆๅธ': u'ๆๅธ',
u'ๆๅ': u'ๆๆ',
u'ๆ้ฉ': u'ๆ้',
u'ๆ้': u'ๆ้',
u'้ไธ': u'ๆกไธ',
u'้ไผ': u'ๆกไผ',
u'้ไฝ': u'ๆกไฝ',
u'้ไฟก': u'ๆกไฟก',
u'้ๅ
': u'ๆกๅ
',
u'้ๅฐ': u'ๆกๅฐ',
u'้ๅถ': u'ๆกๅถ',
u'้ๅบ': u'ๆกๅ',
u'้ๅป': u'ๆกๅป',
u'้ๅ': u'ๆกๅ',
u'้ๅ': u'ๆกๅ',
u'้ๅจ': u'ๆกๅจ',
u'้ๅฅฝ': u'ๆกๅฅฝ',
u'้ๅพ': u'ๆกๅพ',
u'้ๆพ': u'ๆกๆพ',
u'้ๆ': u'ๆกๆ',
u'้ๆ': u'ๆกๆ',
u'้ๆ': u'ๆกๆ',
u'้ๆญ': u'ๆกๆญ',
u'้ๆฉ': u'ๆกๆ',
u'้ๆท': u'ๆกๆท',
u'้ๆถ': u'ๆกๆถ',
u'้ๆ': u'ๆกๆ',
u'้ๆ': u'ๆกๆ',
u'้ๆก': u'ๆกๆก',
u'้ๆ ท': u'ๆกๆจฃ',
u'้ๆจตไบบ': u'ๆกๆจตไบบ',
u'้ๆ ็ง': u'ๆกๆจน็จฎ',
u'้ๆฐ': u'ๆกๆฐฃ',
u'้ๆฒน': u'ๆกๆฒน',
u'้ไธบ': u'ๆก็บ',
u'้็
ค': u'ๆก็
ค',
u'้่ท': u'ๆก็ฒ',
u'้็': u'ๆก็ต',
u'้็ ': u'ๆก็ ',
u'้็ๆๅฒ': u'ๆก็ๆๅฒ',
u'้็จ': u'ๆก็จ',
u'้็': u'ๆก็',
u'้็ณ': u'ๆก็ณ',
u'้็ ๅบ': u'ๆก็ ๅ ด',
u'้็ฟ': u'ๆก็คฆ',
u'้็ง': u'ๆก็จฎ',
u'้็ฉบๅบ': u'ๆก็ฉบๅ',
u'้็ฉบ้็ฉ': u'ๆก็ฉบๆก็ฉ',
u'้็ด': u'ๆก็ด',
u'้็บณ': u'ๆก็ด',
u'้็ป': u'ๆก็ตฆ',
u'้่ฑ': u'ๆก่ฑ',
u'้่นไบบ': u'ๆก่นไบบ',
u'้่ถ': u'ๆก่ถ',
u'้่': u'ๆก่',
u'้่ฒ': u'ๆก่ฎ',
u'้่': u'ๆก่',
u'้่ช': u'ๆก่ช',
u'้่ฏ': u'ๆก่ฅ',
u'้่ก': u'ๆก่ก',
u'้่กฅ': u'ๆก่ฃ',
u'้่ฎฟ': u'ๆก่จช',
u'้่ฏ': u'ๆก่ญ',
u'้ไนฐ': u'ๆก่ฒท',
u'้่ดญ': u'ๆก่ณผ',
u'้ๅ': u'ๆก่พฆ',
u'้่ฟ': u'ๆก้',
u'้่ฟ': u'ๆก้',
u'้้': u'ๆก้ธ',
u'้้': u'ๆก้',
u'้ๅฝ': u'ๆก้',
u'้้': u'ๆก้ต',
u'้้': u'ๆก้',
u'้้ฃ': u'ๆก้ขจ',
u'้้ฃ้ฎไฟ': u'ๆก้ขจๅไฟ',
u'้้ฃ': u'ๆก้ฃ',
u'้็': u'ๆก้นฝ',
u'ๆฃ็ญพ': u'ๆฃ็ฑค',
u'ๆฅ็่ฏด': u'ๆฅ่่ชช',
u'ๆงๅถ': u'ๆงๅถ',
u'ๆจๆ
ๅ็': u'ๆจๆ
ๆบ็',
u'ๆจๆไน่ฏ': u'ๆจๆไน่ฉ',
u'ๆจ่ไบ้': u'ๆจ่ๆผ้ธ',
u'ๆจๆ': u'ๆจ่จ',
u'ๆๅญๅนฒ': u'ๆๅญไนพ',
u'ๆๅฟๅ่': u'ๆๅฟๅผ่ฝ',
u'ๆๆฉๅคชๅไนฆ': u'ๆๆฉๅคชๅพๆธ',
u'ๆไบ': u'ๆๆผ',
u'ๆข็ญพ': u'ๆ็ฑค',
u'ๆข่ฏ': u'ๆ่ฅ',
u'ๆขๅช': u'ๆ้ป',
u'ๆขๅ': u'ๆ้ซฎ',
u'ๆกๅ': u'ๆก้ซฎ',
u'ๆฉๅนฒ': u'ๆฉไนพ',
u'ๆช้': u'ๆชๆก',
u'ๆชๅ': u'ๆช้ซฎ',
u'ๆช้กป': u'ๆช้ฌ',
u'ๆญไธ': u'ๆญ้',
u'ๆฅๆ่กจ': u'ๆฎๆ่กจ',
u'ๆฅๆ': u'ๆฎๆ',
u'ๆ้ข': u'ๆ้บต',
u'ๆไบ': u'ๆๆผ',
u'ๆๆ': u'ๆ้ฌฅ',
u'ๆๆ่ก่ก': u'ๆๆ่ฉ่ฉ',
u'ๆ่ก': u'ๆ่ฉ',
u'ๆฃ้ฌผๅ็ฝ': u'ๆ้ฌผๅผ็ฝ',
u'ๆค่ฎๆ่': u'ๆค่ฎๆ่',
u'ๆฌๆ': u'ๆฌ้ฌฅ',
u'ๆญๅนฒ้บ': u'ๆญไนพ้ช',
u'ๆญไผ': u'ๆญๅคฅ',
u'ๆขๅ ': u'ๆถไฝ',
u'ๆฝ่ฏ': u'ๆฝ่ฅ',
u'ๆงๅ่ทไธ': u'ๆงๅ
็ฒ้',
u'ๆญ้': u'ๆญๆก',
u'ๆธๆฃฑ': u'ๆธ็จ',
u'ๆธ้': u'ๆธ้',
u'ๆๅ': u'ๆบๅ',
u'ๆๅฅ': u'ๆบๅฅ',
u'ๆๅญ': u'ๆบๅญ',
u'ๆๅฐบ': u'ๆบๅฐบ',
u'ๆๆ': u'ๆบๆ',
u'ๆๆขฏ': u'ๆบๆขฏ',
u'ๆๆค
': u'ๆบๆค
',
u'ๆๅ ': u'ๆบ็',
u'ๆ็': u'ๆบ็',
u'ๆ็ฏท': u'ๆบ็ฏท',
u'ๆ็บธ': u'ๆบ็ด',
u'ๆ่ฃ': u'ๆบ่ฃ',
u'ๆๅ': u'ๆๅผ',
u'ๆๅนฒ': u'ๆไนพ',
u'ๆ้ข': u'ๆ้บต',
u'ๆ้กป': u'ๆ้ฌ',
u'ๆ็ๅฐ': u'ๆ็ๆชฏ',
u'ๆ้': u'ๆ้',
u'ๆ้ตๅฒๅ': u'ๆ้ฃ่ก่ป',
u'ๆคๅนถ': u'ๆคไฝต',
u'ๆจ่ฐท': u'ๆฅ็ฉ',
u'ๆฉๆ': u'ๆฉ้ฌฅ',
u'ๆญไบ': u'ๆญๆผ',
u'ๆๅฌ': u'ๆฒ้ผ',
u'ๆๅฌๅฌ': u'ๆฒ้ผ้ผ',
u'ๆ้ข': u'ๆ้บต',
u'ๅปๆ': u'ๆๆ',
u'ๅป้': u'ๆ้',
u'ๆไฝ้': u'ๆไฝ้',
u'ๆ
ไป้ข': u'ๆไป้บต',
u'ๆ
ๆ
้ข': u'ๆๆ้บต',
u'ๆ
็': u'ๆ่',
u'ๆ
่ด็': u'ๆ่ฒ ่',
u'ๆๅ': u'ๆๅ',
u'ๆฎไบ': u'ๆไบ',
u'ๆฎๅนฒ่็ชฅไบๅบ': u'ๆๆฆฆ่็ชบไบๅบ',
u'ๆขๅ': u'ๆข้ซฎ',
u'ๆฆๅนฒ': u'ๆฆไนพ',
u'ๆฆๅนฒๅ': u'ๆฆไนพๆทจ',
u'ๆฆ่ฏ': u'ๆฆ่ฅ',
u'ๆงๅนฒ': u'ๆฐไนพ',
u'ๆ้': u'ๆบ้',
u'ๆๅถ': u'ๆ่ฃฝ',
u'ๆฏๅนฒ': u'ๆฏๅนน',
u'ๆฏๆ': u'ๆฏๆ',
u'ๆถ่ท': u'ๆถ็ฉซ',
u'ๆนๅพ': u'ๆนๅพต',
u'ๆปๅ ': u'ๆปไฝ',
u'ๆพ่ๆฃ': u'ๆพๆๆ',
u'ๆพ่ก': u'ๆพ่ฉ',
u'ๆพๆพ': u'ๆพ้ฌ',
u'ๆ
ไบ้': u'ๆ
ไบ่ฃก',
u'ๆ
ไบ': u'ๆ
ไบ',
u'ๆไบ': u'ๆๆผ',
u'ๆ่ฏ': u'ๆ่ฅ',
u'่ดฅไบ': u'ๆๆผ',
u'ๅ่ฏด็': u'ๆ่ชช่',
u'ๆๅญฆ้': u'ๆๅญธ้',
u'ๆไบ': u'ๆๆผ',
u'ๆ่': u'ๆ็ฏ',
u'ๆขๅนฒ': u'ๆขๅนน',
u'ๆขๆ
ๆฌฒ': u'ๆขๆ
ๆฌฒ',
u'ๆขๆไบ่': u'ๆขๆไบ่ฝ',
u'ๆฃไผ': u'ๆฃๅคฅ',
u'ๆฃไบ': u'ๆฃๆผ',
u'ๆฃ่ก': u'ๆฃ่ฉ',
u'ๆฆๆด': u'ๆฆๆจธ',
u'ๆฌๆฝ': u'ๆฌ่ผ',
u'ๆฒๆ': u'ๆฒๆ',
u'ๆฒ้': u'ๆฒ้',
u'ๆดๅบ': u'ๆด่',
u'ๆดๅช': u'ๆด้ป',
u'ๆด้ฃๅ': u'ๆด้ขจๅพ',
u'ๆดๅ็จๅ': u'ๆด้ซฎ็จๅ',
u'ๆๅฟพๅไป': u'ๆตๆพๅ่ฎ',
u'ๆท่ฏ': u'ๆท่ฅ',
u'ๆฐๅคฉๅ': u'ๆธๅคฉๅพ',
u'ๆฐๅญ้': u'ๆธๅญ้',
u'ๆฐๅญ้่กจ': u'ๆธๅญ้้ถ',
u'ๆฐ็ฝชๅนถ็ฝ': u'ๆธ็ฝชไฝต็ฝฐ',
u'ๆฐไธ่็กฎ': u'ๆธ่่็กฎ',
u'ๆไธ': u'ๆไธ',
u'ๆๆฑๆฅ': u'ๆๅฏๅ ฑ',
u'ๆๅพๆ': u'ๆๅพตๆ',
u'ๆๆๆณๆถ': u'ๆๆๆณๆนง',
u'ๆ้้้': u'ๆ้้้',
u'ๆ่ฝฌๅๆจช': u'ๆ่ฝๅๆฉซ',
u'ๆซ้ไธบๆด': u'ๆซ้็บๆจธ',
u'ๆฐๅ': u'ๆฐๆ',
u'ๆฐๅๅฒ': u'ๆฐๆญทๅฒ',
u'ๆฐๆ': u'ๆฐ็ดฎ',
u'ๆฐๅบ': u'ๆฐ่',
u'ๆฐๅบๅธ': u'ๆฐ่ๅธ',
u'ๆฒ้ไธบๆด': u'ๆฒ้็บๆจธ',
u'ๆญๅ': u'ๆท้ซฎ',
u'ๆญๅๆ่บซ': u'ๆท้ซฎๆ่บซ',
u'ๆนไพฟ้ข': u'ๆนไพฟ้บต',
u'ๆนๅ ': u'ๆนๅ ',
u'ๆนๅๅพ': u'ๆนๅๅพ',
u'ๆนๅฟ': u'ๆน่ช',
u'ๆน้ข': u'ๆน้ข',
u'ไบ0': u'ๆผ0',
u'ไบ1': u'ๆผ1',
u'ไบ2': u'ๆผ2',
u'ไบ3': u'ๆผ3',
u'ไบ4': u'ๆผ4',
u'ไบ5': u'ๆผ5',
u'ไบ6': u'ๆผ6',
u'ไบ7': u'ๆผ7',
u'ไบ8': u'ๆผ8',
u'ไบ9': u'ๆผ9',
u'ไบไธ': u'ๆผไธ',
u'ไบไธๅฝน': u'ๆผไธๅฝน',
u'ไบไธ': u'ๆผไธ',
u'ไบไธ': u'ๆผไธ',
u'ไบไธ': u'ๆผไธ',
u'ไบไน': u'ๆผไน',
u'ไบไน': u'ๆผไน',
u'ไบไน': u'ๆผไน',
u'ไบไบ': u'ๆผไบ',
u'ไบไบ': u'ๆผไบ',
u'ไบไบ': u'ๆผไบ',
u'ไบไบบ': u'ๆผไบบ',
u'ไบไป': u'ๆผไป',
u'ไบไป': u'ๆผไป',
u'ไบไผ': u'ๆผไผ',
u'ไบไฝ': u'ๆผไฝ',
u'ไบไฝ ': u'ๆผไฝ ',
u'ไบๅ
ซ': u'ๆผๅ
ซ',
u'ไบๅ
ญ': u'ๆผๅ
ญ',
u'ไบๅ
ๅถ': u'ๆผๅๅถ',
u'ไบๅ': u'ๆผๅ',
u'ไบๅฃ': u'ๆผๅฃ',
u'ไบๅค': u'ๆผๅค',
u'ไบๅ': u'ๆผๅ',
u'ไบๅ': u'ๆผๅ',
u'ไบๅผๅๅ': u'ๆผๅผๅๅ',
u'ไบๅ': u'ๆผๅ',
u'ไบๅฝ': u'ๆผๅ',
u'ไบๅ': u'ๆผๅ',
u'ไบๅ': u'ๆผๅ',
u'ไบๅคซ็ฝ': u'ๆผๅคซ็พ
',
u'ๆผๅคซ็ฝ': u'ๆผๅคซ็พ
',
u'ๆผๅคซ็พ
': u'ๆผๅคซ็พ
',
u'ไบๅฅน': u'ๆผๅฅน',
u'ไบๅฅฝ': u'ๆผๅฅฝ',
u'ไบๅง': u'ๆผๅง',
u'ๆผๅง': u'ๆผๅง',
u'ไบๅฎ': u'ๆผๅฎ',
u'ไบๅฎถ': u'ๆผๅฎถ',
u'ไบๅฏ': u'ๆผๅฏ',
u'ไบๅทฎ': u'ๆผๅทฎ',
u'ไบๅทฑ': u'ๆผๅทฑ',
u'ไบๅธ': u'ๆผๅธ',
u'ไบๅน': u'ๆผๅน',
u'ไบๅผฑ': u'ๆผๅผฑ',
u'ไบๅผบ': u'ๆผๅผท',
u'ไบๅ': u'ๆผๅพ',
u'ไบๅพ': u'ๆผๅพต',
u'ไบๅฟ': u'ๆผๅฟ',
u'ไบๆ': u'ๆผๆท',
u'ไบๆ': u'ๆผๆ',
u'ไบๆ': u'ๆผๆฒ',
u'ไบๆ': u'ๆผๆ',
u'ไบๆฏ': u'ๆผๆฏ',
u'ไบๆฏ': u'ๆผๆฏ',
u'ไบๆฏไน': u'ๆผๆฏไน',
u'ไบๆถ': u'ๆผๆ',
u'ไบๆขจๅ': u'ๆผๆขจ่ฏ',
u'ๆผๆขจ่ฏ': u'ๆผๆขจ่ฏ',
u'ไบไน': u'ๆผๆจ',
u'ไบๆญค': u'ๆผๆญค',
u'ๆผๆฐ': u'ๆผๆฐ',
u'ไบๆฐ': u'ๆผๆฐ',
u'ไบๆฐด': u'ๆผๆฐด',
u'ไบๆณ': u'ๆผๆณ',
u'ไบๆฝๅฟ': u'ๆผๆฝ็ธฃ',
u'ไบ็ซ': u'ๆผ็ซ',
u'ไบ็': u'ๆผ็',
u'ไบๅข': u'ๆผ็',
u'ไบ็ฉ': u'ๆผ็ฉ',
u'ไบๆฏ': u'ๆผ็ข',
u'ไบๅฐฝ': u'ๆผ็ก',
u'ไบ็ฒ': u'ๆผ็ฒ',
u'ไบ็ฅ': u'ๆผ็ฅ',
u'ไบ็ฉ': u'ๆผ็ฉ',
u'ไบ็ป': u'ๆผ็ต',
u'ไบ็พ': u'ๆผ็พ',
u'ไบ่ฒ': u'ๆผ่ฒ',
u'ไบ่': u'ๆผ่',
u'ไบ่': u'ๆผ่',
u'ไบ่ก': u'ๆผ่ก',
u'ไบ่กท': u'ๆผ่กท',
u'ไบ่ฏฅ': u'ๆผ่ฉฒ',
u'ไบๅ': u'ๆผ่พฒ',
u'ไบ้': u'ๆผ้',
u'ไบ่ฟ': u'ๆผ้',
u'ไบ้': u'ๆผ้',
u'ไบไธ': u'ๆผ้',
u'ไบ้': u'ๆผ้',
u'ไบ้': u'ๆผ้ธ',
u'ไบ๏ผ': u'ๆผ๏ผ',
u'ไบ๏ผ': u'ๆผ๏ผ',
u'ไบ๏ผ': u'ๆผ๏ผ',
u'ไบ๏ผ': u'ๆผ๏ผ',
u'ไบ๏ผ': u'ๆผ๏ผ',
u'ไบ๏ผ': u'ๆผ๏ผ',
u'ไบ๏ผ': u'ๆผ๏ผ',
u'ไบ๏ผ': u'ๆผ๏ผ',
u'ไบ๏ผ': u'ๆผ๏ผ',
u'ไบ๏ผ': u'ๆผ๏ผ',
u'ๆฝ่': u'ๆฝๆจ',
u'ๆฝไบ': u'ๆฝๆผ',
u'ๆฝ่ไน้': u'ๆฝ่ไน้',
u'ๆฝ่ฏ': u'ๆฝ่ฅ',
u'ๆๅพๅๅผ': u'ๆๅพตๅๅผ',
u'ๆๆณจ': u'ๆ่จป',
u'ๆ
ๆธธ': u'ๆ
้',
u'ๆๅนฒ่ฝฌๅค': u'ๆไนพ่ฝๅค',
u'ๆ็ป็': u'ๆ็น่',
u'ๆๅ': u'ๆ่ฟด',
u'ๆ้': u'ๆ่ฃก',
u'ๆๆ': u'ๆๆ',
u'ๆฅๅ ': u'ๆฅไฝ',
u'ๆฅๅญ้': u'ๆฅๅญ่ฃก',
u'ๆฅๆ': u'ๆฅๆ',
u'ๆฅๅ': u'ๆฅๆ',
u'ๆฅๅๅฒ': u'ๆฅๆญทๅฒ',
u'ๆฅๅฟ': u'ๆฅ่ช',
u'ๆฉไบ': u'ๆฉๆผ',
u'ๆฑๅนฒ': u'ๆฑไนพ',
u'ๆไปๅฑฑ': u'ๆๅดๅฑฑ',
u'ๅๅนณ': u'ๆๅนณ',
u'ๅ้ณ': u'ๆ้ฝ',
u'ๆๅคฉไธๅ': u'ๆๅคฉไธๅผ',
u'ๆๅพ': u'ๆๅพต',
u'ๆ็ฎๅผ ่': u'ๆ็ฎๅผต่',
u'ๆ็ชๅๅ ': u'ๆ็ชๆทจๅ ',
u'ๆ่': u'ๆ็ฏ',
u'ๆ้': u'ๆ่ฃก',
u'ๆๅ
ๅถ': u'ๆๅๅถ',
u'ๆไบ': u'ๆๆผ',
u'ๆๅทดๅ
': u'ๆๅทดๅ
',
u'ๆๅ': u'ๆๆ',
u'ๆๆๅ': u'ๆๆๅพ',
u'ๆๅๅฒ': u'ๆๆญทๅฒ',
u'ๆ่พฐ่กจ': u'ๆ่พฐ้ถ',
u'ๆฅๅ้': u'ๆฅๅ่ฃก',
u'ๆฅๅคฉ้': u'ๆฅๅคฉ่ฃก',
u'ๆฅๆฅ้': u'ๆฅๆฅ่ฃก',
u'ๆฅ่ฏ': u'ๆฅ่ฅ',
u'ๆฅๆธธ': u'ๆฅ้',
u'ๆฅ้ฆๆๅญฆ': u'ๆฅ้ฆ้ฌฅๅญธ',
u'ๆถ้': u'ๆ้',
u'ๆถ้ด้': u'ๆ้่ฃก',
u'ๆ่ก': u'ๆ่ฉ',
u'ๆๅ': u'ๆ้',
u'ๆๅนฒ': u'ๆไนพ',
u'ๆไผค': u'ๆๅท',
u'ๆๅพ': u'ๆๅ',
u'ๆๅพ็บธ': u'ๆๅ็ด',
u'ๆๆ': u'ๆๆ',
u'ๆๆ': u'ๆๆ',
u'ๆ็': u'ๆ็
',
u'ๆ็ง': u'ๆ็จฎ',
u'ๆ่กฃ': u'ๆ่กฃ',
u'ๆ้ป': u'ๆ้ป',
u'ๆไบ': u'ๆๆผ',
u'ๆ้': u'ๆ้',
u'ๆๅ': u'ๆ้ซฎ',
u'ๆจ้': u'ๆจ้',
u'ๆฎๅฌๅฌ': u'ๆฎ้ผ้ผ',
u'ๆฏ่ด': u'ๆฏ็ทป',
u'ๆพๅนฒ': u'ๆพไนพ',
u'ๆ่น่ฏ': u'ๆ่น่ฅ',
u'ๆ่ฝฆ่ฏ': u'ๆ่ป่ฅ',
u'ๆๅ้': u'ๆๅ่ฃก',
u'ๆๅฐ้': u'ๆๅฐ่ฃก',
u'ๆๆฒ้': u'ๆๆบ่ฃก',
u'ๆ้': u'ๆ่ฃก',
u'ๆๆ': u'ๆ้ฌฅ',
u'็
ๆธธ': u'ๆข้',
u'ๆดๆๆจชๅพ': u'ๆดๆๆฉซๅพต',
u'ๆดๆ': u'ๆดๆ',
u'ๅๅ
': u'ๆๅ
',
u'ๅๅฝ': u'ๆๅฝ',
u'ๅๅง': u'ๆๅง',
u'ๅๅฎค': u'ๆๅฎค',
u'ๅๅฐพ': u'ๆๅฐพ',
u'ๅๆฐ': u'ๆๆธ',
u'ๅๆฅ': u'ๆๆฅ',
u'ๅไนฆ': u'ๆๆธ',
u'ๅๆฌ': u'ๆๆฌ',
u'ๅๆณ': u'ๆๆณ',
u'ๅ็ฑ': u'ๆ็',
u'ๅ็บช': u'ๆ็ด',
u'ๅ่ฑก': u'ๆ่ฑก',
u'ๆๆ': u'ๆๆ',
u'ๆ่ฐท': u'ๆฌ็ฉ',
u'ๆฐไบ': u'ๆฐไบ',
u'ๆดไป้พๆฐ': u'ๆดๅ้ฃๆธ',
u'ๆด็ญพ': u'ๆด็ฑค',
u'ๆด้': u'ๆด้',
u'ไนฆๅๅญ': u'ๆธ็ๅญ',
u'ไนฆ็ญพ': u'ๆธ็ฑค',
u'ๆผ่ฐทไบบ': u'ๆผ่ฐทไบบ',
u'ๆพๆด': u'ๆพๆจธ',
u'ๆๅค': u'ๆๅค',
u'ไผไธ็ญพ็ฝฒ': u'ๆไธ็ฐฝ็ฝฒ',
u'ไผไธ็ญพ่ฎข': u'ๆไธ็ฐฝ่จ',
u'ไผๅ ': u'ๆไฝ',
u'ไผๅ ๅ': u'ๆๅ ๅ',
u'ไผๅนฒๆฐ': u'ๆๅนฒๆพ',
u'ๆๅนฒๆพ': u'ๆๅนฒๆพ',
u'ไผๅนฒ': u'ๆๅนน',
u'ไผๅ': u'ๆๅผ',
u'ไผ้': u'ๆ่ฃก',
u'ๆๅ': u'ๆๆ',
u'ๆๅๅฒ': u'ๆๆญทๅฒ',
u'ๆ็ฆปไบๆฏ': u'ๆ้ขๆผ็ข',
u'ๆ้ข': u'ๆ้ข',
u'ๆไธฝไบ็ฎ': u'ๆ้บๆผ็ฎ',
u'ๆไบไนๆ ่': u'ๆไบไน็ก็ฏ',
u'ๆไป': u'ๆๅ',
u'ๆๅชไธ': u'ๆๅชไธ',
u'ๆๅชๅ
': u'ๆๅชๅ
',
u'ๆๅชๅฎน': u'ๆๅชๅฎน',
u'ๆๅชๆก': u'ๆๅชๆก',
u'ๆๅช้': u'ๆๅชๆก',
u'ๆๅชๆฏ': u'ๆๅชๆฏ',
u'ๆๅช็จ': u'ๆๅช็จ',
u'ๆๅค่ต': u'ๆๅค ่ฎ',
u'ๆๅพไผ': u'ๆๅพไผ',
u'ๆๅพๆ': u'ๆๅพๆฐ',
u'ๆๅพๆ': u'ๆๅพๆ',
u'ๆๅพ่ฎจ': u'ๆๅพ่จ',
u'ๆๅพ': u'ๆๅพต',
u'ๆๆ่ก': u'ๆๆ่ก',
u'ๆๆ ๅท': u'ๆๆ ๅท',
u'ๆๅ': u'ๆๆบ',
u'ๆๆฃฑๆ่ง': u'ๆ็จๆ่ง',
u'ๆๅช': u'ๆ้ป',
u'ๆไฝ': u'ๆ้ค',
u'ๆๅๅคด้ๅฏบ': u'ๆ้ซฎ้ ญ้ๅฏบ',
u'ๆไบ': u'ๆๆผ',
u'ๆ่ฏ': u'ๆ่ฅ',
u'ๆไบๆ': u'ๆไบๆ',
u'ๆๅ็ณ': u'ๆๅ็ณ',
u'ๆ็่กจ': u'ๆ่้ถ',
u'ๆ็้': u'ๆ่้',
u'ๆ็้่กจ': u'ๆ่้้ถ',
u'ๆไนพๅคๆ': u'ๆไนพๅคๆ',
u'ๆ้': u'ๆ้',
u'ๆฆ่ง': u'ๆฆๆง',
u'่่ง': u'ๆฆๆง',
u'ๆจๅถๆๆ': u'ๆจๅถๆฒ็ดฎ',
u'ๆจๆ': u'ๆจๆ',
u'ๆจๆๅนฒ้ฆ': u'ๆจๆไนพ้คพ',
u'ๆจๆข': u'ๆจๆจ',
u'ๆจๅถ': u'ๆจ่ฃฝ',
u'ๆจ้': u'ๆจ้',
u'ๆชๅนฒ': u'ๆชไนพ',
u'ๆซ่ฏ': u'ๆซ่ฅ',
u'ๆฌๅพ': u'ๆฌๅพต',
u'ๆฏ่ตค': u'ๆฎ่ตค',
u'ๆฑไป่ก': u'ๆฑๅด่ก',
u'ๆฑๅบไฝ': u'ๆฑๆ
ถ้ค',
u'ๆฑ็ๅฎๅ': u'ๆฑ็ๅฎๆ',
u'ๆฑ็ๅฎๅๅฒ': u'ๆฑ็ๅฎๆญทๅฒ',
u'ๆๅญ': u'ๆๅญ',
u'ๆ้ฃๆฐ': u'ๆ้ฃๆฐ',
u'ๆ่ฟๆฐ': u'ๆ้ฃๆฐ',
u'ๆๅนฒ': u'ๆๅนน',
u'ๆๅญ้': u'ๆๅญ่ฃก',
u'ๆๅบ': u'ๆ่',
u'ๆ่ฝๅ': u'ๆ่ฝ็ผ',
u'ๆ้': u'ๆ่ฃก',
u'ๆ่ๅฟ้': u'ๆ่่ช้',
u'ๆๅฎๆ ๅพ': u'ๆๅฎ็กๅพต',
u'ๆๅ': u'ๆ้ซฎ',
u'ๆฏๅนฒ': u'ๆฏไนพ',
u'ๆฏ้ข': u'ๆฏ้บต',
u'ๆฐไผฆ': u'ๆฐๅซ',
u'ๆฐ็น': u'ๆฐ็น',
u'ไธๅจ้': u'ๆฑๅจ้',
u'ไธๅฒณ': u'ๆฑๅถฝ',
u'ไธๅฒ่ฅฟ็ช': u'ๆฑ่ก่ฅฟ็ช',
u'ไธๆธธ': u'ๆฑ้',
u'ๆพๅฑฑๅบ': u'ๆพๅฑฑๅบ',
u'ๆฟ็่ธ': u'ๆฟ่่',
u'ๆฟ่ก': u'ๆฟ่ฉ',
u'ๆๅฎๅฒณ': u'ๆๅฎๅถฝ',
u'ๆ้ๆน': u'ๆ้ๆน',
u'ๆ้': u'ๆ้',
u'ๆๅนฒ': u'ๆไนพ',
u'ๆๅญๅนฒ': u'ๆๅญไนพ',
u'ๆไธๅพๅคงไบๅนฒ': u'ๆไธๅพๅคงๆผๆฆฆ',
u'ๆๅนฒ': u'ๆๅนน',
u'ๆฏๅนฒ': u'ๆฏไนพ',
u'ๅฐๅ': u'ๆฑๆ',
u'ๆถ้': u'ๆถ้',
u'ๆๅช': u'ๆ้ป',
u'ๆๆไบ': u'ๆๆๆผ',
u'ๆๆฎฟๅ': u'ๆๆฎฟๅ',
u'ๆๅ': u'ๆ้ซฎ',
u'ๆไธ': u'ๆไธ',
u'ๆๅญ': u'ๆๅญ',
u'ๆๆณ': u'ๆๆณ',
u'ๆฑๆข': u'ๆฑๆจ',
u'ๆณ่ฏๅพ': u'ๆณ่ฉๅพต',
u'ๆ ๆ ็็': u'ๆ ๆ ็็',
u'ๆ กๅ': u'ๆ กๆบ',
u'ๆ กไป': u'ๆ ก่ฎ',
u'ๆ ธๅ็': u'ๆ ธๅ็',
u'ๆ ผไบ': u'ๆ ผๆผ',
u'ๆ ผ่': u'ๆ ผ็ฏ',
u'ๆ ผ้ๅ': u'ๆ ผ้ๆ',
u'ๆ ผ้้ซๅฉๅ': u'ๆ ผ้้ซๅฉๆ',
u'ๆ ผๆ': u'ๆ ผ้ฌฅ',
u'ๆกๅๅนฒ': u'ๆกๅไนพ',
u'ๆก
ๆ': u'ๆก
ๆ',
u'ๆกๅ ': u'ๆกๅ ',
u'ๆกๅ': u'ๆกๆ',
u'ๆกๅๅฒ': u'ๆกๆญทๅฒ',
u'ๆกๅนฒ': u'ๆกไนพ',
u'ๆขไธๅๅญ': u'ๆขไธๅๅญ',
u'ๆกๅนฒ': u'ๆขๅนน',
u'ๆขจๅนฒ': u'ๆขจไนพ',
u'ๆขฏๅฒ': u'ๆขฏ่ก',
u'ๆขฐ็ณป': u'ๆขฐ็นซ',
u'ๆขฐๆ': u'ๆขฐ้ฌฅ',
u'ๅผ่': u'ๆฃๆจ',
u'ๆฃๅถ': u'ๆฃ่ฃฝ',
u'ๆฃๅญ้ข': u'ๆฃๅญ้บต',
u'ๆฃๅบ': u'ๆฃ่',
u'ๆ ๆข': u'ๆฃๆจ',
u'ๆฃซๆด': u'ๆฃซๆจธ',
u'ๆฃฎๆ้': u'ๆฃฎๆ่ฃก',
u'ๆฃบๆ้': u'ๆฃบๆ่ฃก',
u'ๆคๅ': u'ๆค้ซฎ',
u'ๆคฐๆฃๅนฒ': u'ๆคฐๆฃไนพ',
u'ๆฅๅบ้ฎ้ผ': u'ๆฅ่ๅ้ผ',
u'ๆฅๅบ็': u'ๆฅ่็',
u'ๆฅๅบ็ป็ผจ': u'ๆฅ่็ต็บ',
u'ๆกขๅนฒ': u'ๆฅจๅนน',
u'ไธไฝ': u'ๆฅญ้ค',
u'ๆฆจๅนฒ': u'ๆฆจไนพ',
u'ๆ ๆ': u'ๆงๆกฟ',
u'ไนๅจ้': u'ๆจๅจ้',
u'ๆจไบๆ': u'ๆจๆผๆ',
u'ๆขไธ': u'ๆจไธ',
u'ๆขๆฑ': u'ๆจๆฑ',
u'ๆ ๆ': u'ๆจๆ',
u'ๆ ๆ ่ด่ด': u'ๆจๆจ่ด่ด',
u'ๆ ๅ': u'ๆจๆบ',
u'ๆ ็ญพ': u'ๆจ็ฑค',
u'ๆ ่ด': u'ๆจ็ทป',
u'ๆ ๆณจ': u'ๆจ่จป',
u'ๆ ๅฟ': u'ๆจ่ช',
u'ๆจกๆฃฑ': u'ๆจก็จ',
u'ๆจก่': u'ๆจก็ฏ',
u'ๆจก่ๆฃๆฃๅ ': u'ๆจก่ๆฃๆฃๅ ',
u'ๆจกๅถ': u'ๆจก่ฃฝ',
u'ๆ ท่': u'ๆจฃ็ฏ',
u'ๆจต้': u'ๆจตๆก',
u'ๆดไฟฎๆฏ': u'ๆจธไฟฎๆฏ',
u'ๆดๅ': u'ๆจธๅ',
u'ๆดๅญฆ': u'ๆจธๅญธ',
u'ๆดๅฎ': u'ๆจธๅฏฆ',
u'ๆดๅฟตไป': u'ๆจธๅฟตไป',
u'ๆดๆ': u'ๆจธๆ',
u'ๆดๆจ': u'ๆจธๆจ',
u'ๆด็ถ': u'ๆจธ็ถ',
u'ๆด็ด': u'ๆจธ็ด',
u'ๆด็ด ': u'ๆจธ็ด ',
u'ๆด่ฎท': u'ๆจธ่จฅ',
u'ๆด่ดจ': u'ๆจธ่ณช',
u'ๆด้': u'ๆจธ้',
u'ๆด้': u'ๆจธ้',
u'ๆด้': u'ๆจธ้',
u'ๆด้': u'ๆจธ้',
u'ๆด้': u'ๆจธ้',
u'ๆด้ฉฌ': u'ๆจธ้ฆฌ',
u'ๆด้ฒ': u'ๆจธ้ญฏ',
u'ๆ ๅนฒ': u'ๆจนๆฆฆ',
u'ๆ ๆข': u'ๆจนๆจ',
u'ๆกฅๆข': u'ๆฉๆจ',
u'ๆฉๆขฐ็ณป': u'ๆฉๆขฐ็ณป',
u'ๆบๆขฐ็ณป': u'ๆฉๆขฐ็ณป',
u'ๆบๆขฐ่กจ': u'ๆฉๆขฐ้ถ',
u'ๆบๆขฐ้': u'ๆฉๆขฐ้',
u'ๆบๆขฐ้่กจ': u'ๆฉๆขฐ้้ถ',
u'ๆบ็ปฃ': u'ๆฉ็นก',
u'ๆจชๅพๆดๆ': u'ๆฉซๅพตๆดๆ',
u'ๆจชๆ': u'ๆฉซๆ',
u'ๆจชๆข': u'ๆฉซๆจ',
u'ๆจชๅฒ': u'ๆฉซ่ก',
u'ๅฐๅญ': u'ๆชฏๅญ',
u'ๅฐๅธ': u'ๆชฏๅธ',
u'ๅฐ็ฏ': u'ๆชฏ็',
u'ๅฐ็': u'ๆชฏ็',
u'ๅฐ้ข': u'ๆชฏ้ข',
u'ๆๅฐ': u'ๆซๆชฏ',
u'ๆ ๅๅทฅ': u'ๆซ้ซฎๅทฅ',
u'ๆ ๆ': u'ๆฌๆ',
u'ๆฌฒๆตท้พๅกซ': u'ๆฌฒๆตท้ฃๅกซ',
u'ๆฌบ่': u'ๆฌบ็',
u'ๆญๅ': u'ๆญๅ',
u'ๆญ้': u'ๆญ้',
u'ๆฌงๆธธ': u'ๆญ้',
u'ๆญขๅณ่ฏ': u'ๆญขๅณ่ฅ',
u'ๆญขไบ': u'ๆญขๆผ',
u'ๆญข็่ฏ': u'ๆญข็่ฅ',
u'ๆญข่ก่ฏ': u'ๆญข่ก่ฅ',
u'ๆญฃๅจๅฑๅค': u'ๆญฃๅจๅฑๅค',
u'ๆญฃๅฎๅบ': u'ๆญฃๅฎๅบ',
u'ๆญฃๅฝ็': u'ๆญฃ็ถ่',
u'ๆญฆไธ': u'ๆญฆไธ',
u'ๆญฆๅ': u'ๆญฆๅ',
u'ๆญฆๆ': u'ๆญฆ้ฌฅ',
u'ๅฒ่ฟไบๆฎ': u'ๆญฒ่ฟไบๆฎ',
u'ๅๅฒ้': u'ๆญทๅฒ่ฃก',
u'ๅฝๅนถ': u'ๆญธไฝต',
u'ๅฝไบ': u'ๆญธๆผ',
u'ๅฝไฝ': u'ๆญธ้ค',
u'ๆญนๆ': u'ๆญน้ฌฅ',
u'ๆญปไบ': u'ๆญปๆผ',
u'ๆญป่กๅ': u'ๆญป่กๅ',
u'ๆญป้ๆฑ็': u'ๆญป่ฃกๆฑ็',
u'ๆญป้้็': u'ๆญป่ฃก้็',
u'ๆฎ่ฐท': u'ๆฎ็ฉ',
u'ๆฎ่ด': u'ๆฎ่ด',
u'ๆฎไฝ': u'ๆฎ้ค',
u'ๅตๅฐธ': u'ๆฎญๅฑ',
u'ๆฎทๅธ็ๆ': u'ๆฎทๅธซ็้ฌฅ',
u'ๆ่ซ่ฏ': u'ๆฎบ่ฒ่ฅ',
u'ๅฃณ้': u'ๆฎผ่ฃก',
u'ๆฎฟ้่ช้ธฃ': u'ๆฎฟ้่ช้ณด',
u'ๆฏไบ': u'ๆฏๆผ',
u'ๆฏ้ไธบ้': u'ๆฏ้็บ้ธ',
u'ๆฎดๆ': u'ๆฏ้ฌฅ',
u'ๆฏๅ': u'ๆฏๅ',
u'ๆฏ่': u'ๆฏ็ฏ',
u'ๆฏไธ': u'ๆฏ้',
u'ๆฏๆฏๅช': u'ๆฏๆฏๅช',
u'ๆฏๅช': u'ๆฏ้ป',
u'ๆฏ่ฏ': u'ๆฏ่ฅ',
u'ๆฏๅ': u'ๆฏๅ',
u'ๆฏๅ': u'ๆฏๅ',
u'ๆฏๅง': u'ๆฏ่',
u'ๆฏๅ': u'ๆฏ้ซฎ',
u'ๆฏซๅ': u'ๆฏซ้',
u'ๆฏซๅ': u'ๆฏซ้ซฎ',
u'ๆฐๅฒๆ็': u'ๆฐฃๆฒๆ็',
u'ๆฐ้': u'ๆฐฃ้ฌฑ',
u'ๆฐค้': u'ๆฐค้ฌฑ',
u'ๆฐดๆฅๆฑค้ๅป': u'ๆฐดไพๆนฏ่ฃกๅป',
u'ๆฐดๅ': u'ๆฐดๆบ',
u'ๆฐด้': u'ๆฐด่ฃก',
u'ๆฐด้้': u'ๆฐด้้',
u'ๆฐด้ไนก': u'ๆฐด้้',
u'ๆฐธๅ': u'ๆฐธๆ',
u'ๆฐธๅๅฒ': u'ๆฐธๆญทๅฒ',
u'ๆฐธๅฟไธๅฟ': u'ๆฐธ่ชไธๅฟ',
u'ๆฑ็ฅๆฌฒ': u'ๆฑ็ฅๆ
พ',
u'ๆฑ็ญพ': u'ๆฑ็ฑค',
u'ๆฑ้ไบ็ฒ': u'ๆฑ้ๆผ็ฒ',
u'ๆฑ ้': u'ๆฑ ่ฃก',
u'ๆฑก่': u'ๆฑก่ก',
u'ๆฑฒไบ': u'ๆฑฒๆผ',
u'ๅณๆ': u'ๆฑบ้ฌฅ',
u'ๆฒๆท': u'ๆฒๆพฑ',
u'ๆฒ็': u'ๆฒ่',
u'ๆฒ้': u'ๆฒ้ฌฑ',
u'ๆฒๆท': u'ๆฒๆพฑ',
u'ๆฒ้': u'ๆฒ้ฌฑ',
u'ๆฒกๅนฒๆฒกๅ': u'ๆฒไนพๆฒๆทจ',
u'ๆฒกไบๅนฒ': u'ๆฒไบๅนน',
u'ๆฒกๅนฒ': u'ๆฒๅนน',
u'ๆฒกๆ่ณ': u'ๆฒๆบ่ณ',
u'ๆฒกๆขขๅนฒ': u'ๆฒๆขขๅนน',
u'ๆฒกๆ ท่': u'ๆฒๆจฃ็ฏ',
u'ๆฒกๅ': u'ๆฒๆบ',
u'ๆฒก่ฏ': u'ๆฒ่ฅ',
u'ๅฒๅ ๅๆ': u'ๆฒๅ ้ซฎๆ',
u'ๆฒ้ๆท้': u'ๆฒ่ฃกๆท้',
u'ๆฒณๅฒณ': u'ๆฒณๅถฝ',
u'ๆฒณๆตๆฑ้': u'ๆฒณๆตๅฏ้',
u'ๆฒณ้': u'ๆฒณ่ฃก',
u'ๆฒนๆ': u'ๆฒน้ฌฅ',
u'ๆฒน้ข': u'ๆฒน้บต',
u'ๆฒปๆ': u'ๆฒป็',
u'ๆฒฟๆบฏ': u'ๆฒฟๆณ',
u'ๆณๅ ': u'ๆณไฝ',
u'ๆณ่ชๅถ': u'ๆณ่ชๅถ',
u'ๆณๆธธ': u'ๆณ้',
u'ๆณกๅถ': u'ๆณก่ฃฝ',
u'ๆณก้ข': u'ๆณก้บต',
u'ๆณขๆฃฑ่': u'ๆณข็จ่',
u'ๆณขๅ่ป': u'ๆณข้ซฎ่ป',
u'ๆณฅไบ': u'ๆณฅๆผ',
u'ๆณจไบ': u'ๆณจไบ',
u'ๆณจ้': u'ๆณจ้',
u'ๆณฐๅฑฑๆขๆจ': u'ๆณฐๅฑฑๆขๆจ',
u'ๆณฑ้': u'ๆณฑ้ฌฑ',
u'ๆณณๆฐ้': u'ๆณณๆฐฃ้',
u'ๆดๆธธ': u'ๆด้',
u'ๆดๅฎถ': u'ๆดๅฎถ',
u'ๆดๆซ': u'ๆดๆ',
u'ๆดๆฐด': u'ๆดๆฐด',
u'ๆดๆด': u'ๆดๆด',
u'ๆดๆท
': u'ๆดๆท
',
u'ๆดๆถค': u'ๆดๆป',
u'ๆดๆฟฏ': u'ๆดๆฟฏ',
u'ๆด็ถ': u'ๆด็ถ',
u'ๆด่ฑ': u'ๆด่ซ',
u'ๆด็ผ': u'ๆด้',
u'ๆด็ป': u'ๆด้',
u'ๆดๅ': u'ๆด้ซฎ',
u'ๆด้ไธๅบ': u'ๆด้ๆฑๆ',
u'ๆณๆฌฒ': u'ๆดฉๆ
พ',
u'ๆดช่': u'ๆดช็ฏ',
u'ๆดช้': u'ๆดช้',
u'ๆดช้': u'ๆดช้',
u'ๆฑนๆถ': u'ๆดถๆนง',
u'ๆดพๅขๅๅ ': u'ๆดพๅๅๅ ',
u'ๆตๅพ': u'ๆตๅพต',
u'ๆตไบ': u'ๆตๆผ',
u'ๆต่ก': u'ๆต่ฉ',
u'ๆต้ฃไฝไฟ': u'ๆต้ขจ้คไฟ',
u'ๆต้ฃไฝ้ต': u'ๆต้ขจ้ค้ป',
u'ๆตฉๆตฉ่ก่ก': u'ๆตฉๆตฉ่ฉ่ฉ',
u'ๆตฉ่ก': u'ๆตฉ่ฉ',
u'ๆตช็ด่กจ': u'ๆตช็ด้ถ',
u'ๆตช่ก': u'ๆตช่ฉ',
u'ๆตชๆธธ': u'ๆตช้',
u'ๆตฎไบ': u'ๆตฎๆผ',
u'ๆตฎ่ก': u'ๆตฎ่ฉ',
u'ๆตฎๅคธ': u'ๆตฎ่ช',
u'ๆตฎๆพ': u'ๆตฎ้ฌ',
u'ๆตทไธๅธ้ท': u'ๆตทไธไฝ้ท',
u'ๆตทๅนฒ': u'ๆตทไนพ',
u'ๆตทๆนพๅธ้ท': u'ๆตท็ฃไฝ้ท',
u'ๆถๅๅฆฎ': u'ๆถๅๅฆฎ',
u'ๆถๅค': u'ๆถๅค',
u'ๆถๅฃฏๅณ': u'ๆถๅฃฏๅณ',
u'ๆถๅฃฎๅ': u'ๆถๅฃฏๅณ',
u'ๆถๅคฉ็ธ': u'ๆถๅคฉ็ธ',
u'ๆถๅง': u'ๆถๅง',
u'ๆถๅบ็': u'ๆถๅบ็',
u'ๆถๆๆ': u'ๆถๆๆ',
u'ๆถๆๆ': u'ๆถๆๆ',
u'ๆถๆพคๆฐ': u'ๆถๆพคๆฐ',
u'ๆถๆณฝๆฐ': u'ๆถๆพคๆฐ',
u'ๆถ็ป็
': u'ๆถ็ดน็
',
u'ๆถ็พฝๅฟ': u'ๆถ็พฝๅฟ',
u'ๆถ่ฌน็ณ': u'ๆถ่ฌน็ณ',
u'ๆถ่ฐจ็ณ': u'ๆถ่ฌน็ณ',
u'ๆถ้ขๅนด': u'ๆถ้ขๅนด',
u'ๆถ้ๅฒ': u'ๆถ้ๅฒ',
u'ๆถ้ทๆ': u'ๆถ้ทๆ',
u'ๆถ้ฟๆ': u'ๆถ้ทๆ',
u'ๆถ้ธฟ้ฆ': u'ๆถ้ดปๆฌฝ',
u'ๆถ้ดปๆฌฝ': u'ๆถ้ดปๆฌฝ',
u'ๆถ็่ฏ': u'ๆถ็่ฅ',
u'ๆถ่ฟ่ฏ': u'ๆถ่
ซ่ฅ',
u'ๆถฒๆถ่กจ': u'ๆถฒๆถ้ถ',
u'ๆถณ่': u'ๆถณๆฟ',
u'ๆถธๅนฒ': u'ๆถธไนพ',
u'ๅ้ข': u'ๆถผ้บต',
u'ๆทไฝๅ': u'ๆท้คๅ',
u'ๆท่': u'ๆท็ฏ',
u'ๆณชๅนฒ': u'ๆทไนพ',
u'ๆณชๅฆๆณๆถ': u'ๆทๅฆๆณๆนง',
u'ๆทกไบ': u'ๆทกๆผ',
u'ๆทก่่': u'ๆทกๆฟๆฟ',
u'ๆทกๆฑ': u'ๆทก็ก',
u'ๅไฝ': u'ๆทจ้ค',
u'ๅๅ': u'ๆทจ้ซฎ',
u'ๆทซๆฌฒ': u'ๆทซๆ
พ',
u'ๆทซ่ก': u'ๆทซ่ฉ',
u'ๆทฌ็ผ': u'ๆทฌ้',
u'ๆทฑๅฑฑไฝๅค้': u'ๆทฑๅฑฑไฝ่้',
u'ๆทฑๆธ้': u'ๆทฑๆทต่ฃก',
u'ๆทณไบ': u'ๆทณไบ',
u'ๆทณๆด': u'ๆทณๆจธ',
u'ๆธๆทณๅฒณๅณ': u'ๆทตๆทณๅถฝๅณ',
u'ๆต
ๆท': u'ๆทบๆพฑ',
u'ๆธ
ๅฟๅฏกๆฌฒ': u'ๆธ
ๅฟๅฏกๆฌฒ',
u'ๆธ
ๆฑคๆ้ข': u'ๆธ
ๆนฏๆ้บต',
u'ๅ่ฅ่ฏ': u'ๆธ่ฅ่ฅ',
u'ๆธ ๅฒ': u'ๆธ ่ก',
u'ๆธฏๅถ': u'ๆธฏ่ฃฝ',
u'ๆตๆด': u'ๆธพๆจธ',
u'ๆตไธช': u'ๆธพ็ฎ',
u'ๅๅ็': u'ๆนๅ่',
u'ๆน้': u'ๆน่ฃก',
u'ๆน็ปฃ': u'ๆน็นก',
u'ๆน็ดฏ': u'ๆน็บ',
u'ๆนๆฝฆ็่น': u'ๆนๆฝฆ็่น',
u'ๆถไธ': u'ๆนงไธ',
u'ๆถๆฅ': u'ๆนงไพ',
u'ๆถๅ
ฅ': u'ๆนงๅ
ฅ',
u'ๆถๅบ': u'ๆนงๅบ',
u'ๆถๅ': u'ๆนงๅ',
u'ๆถๆณ': u'ๆนงๆณ',
u'ๆถ็ฐ': u'ๆนง็พ',
u'ๆถ่ตท': u'ๆนง่ตท',
u'ๆถ่ฟ': u'ๆนง้ฒ',
u'ๆนฎ้': u'ๆนฎ้ฌฑ',
u'ๆฑคไธ้ข': u'ๆนฏไธ้บต',
u'ๆฑคๅข': u'ๆนฏ็ณฐ',
u'ๆฑค่ฏ': u'ๆนฏ่ฅ',
u'ๆฑค้ข': u'ๆนฏ้บต',
u'ๆบไบ': u'ๆบๆผ',
u'ๅไธๅ': u'ๆบไธๆบ',
u'ๅไพ': u'ๆบไพ',
u'ๅไฟ': u'ๆบไฟ',
u'ๅๅค': u'ๆบๅ',
u'ๅๅฟ': u'ๆบๅ
',
u'ๅๅๅญ': u'ๆบๅๅญ',
u'ๅๅ': u'ๆบๅ',
u'ๅๅถๅฐ': u'ๆบๅถ็พ',
u'ๅๅฎ': u'ๆบๅฎ',
u'ๅๅนณๅ': u'ๆบๅนณๅ',
u'ๅๅบฆ': u'ๆบๅบฆ',
u'ๅๅผ': u'ๆบๅผ',
u'ๅๆฟ็ฃ': u'ๆบๆฟ็ฃ',
u'ๅๆฎ': u'ๆบๆ',
u'ๅๆ': u'ๆบๆฌ',
u'ๅๆฐๅจ': u'ๆบๆฐๅจ',
u'ๅๆฐ้': u'ๆบๆฐ้',
u'ๅๆ': u'ๆบๆ',
u'ๅๆฏ': u'ๆบๆฏ',
u'ๅๆถ': u'ๆบๆ',
u'ๅไผ': u'ๆบๆ',
u'ๅๅณ่ต': u'ๆบๆฑบ่ณฝ',
u'ๅ็': u'ๆบ็',
u'ๅ็กฎ': u'ๆบ็ขบ',
u'ๅ็บฟ': u'ๆบ็ท',
u'ๅ็ปณ': u'ๆบ็นฉ',
u'ๅ่ฏ': u'ๆบ่ฉฑ',
u'ๅ่ฐฑ': u'ๆบ่ญ',
u'ๅ่ดงๅธ': u'ๆบ่ฒจๅนฃ',
u'ๅๅคด': u'ๆบ้ ญ',
u'ๅ็น': u'ๆบ้ป',
u'ๆบ่': u'ๆบๆฟ',
u'ๆบขไบ': u'ๆบขๆผ',
u'ๆบฒ้ข': u'ๆบฒ้บต',
u'ๆบบไบ': u'ๆบบๆผ',
u'ๆป้': u'ๆป้ฌฑ',
u'ๆปๅ': u'ๆป่',
u'ๆฑไธฐ': u'ๆป่ฑ',
u'ๅคๅณ': u'ๆปทๅณ',
u'ๅคๆฐด': u'ๆปทๆฐด',
u'ๅคๆฑ': u'ๆปทๆฑ',
u'ๅคๆน': u'ๆปทๆน',
u'ๅค่': u'ๆปท่',
u'ๅค่': u'ๆปท่',
u'ๅค่': u'ๆปท่',
u'ๅคๅถ': u'ๆปท่ฃฝ',
u'ๅค้ธก': u'ๆปท้',
u'ๅค้ข': u'ๆปท้บต',
u'ๆปกๆผ่ชๅฐฝ': u'ๆปฟๆ่ช็ก',
u'ๆปกๆปกๅฝๅฝ': u'ๆปฟๆปฟ็ถ็ถ',
u'ๆปกๅคดๆดๅ': u'ๆปฟ้ ญๆด้ซฎ',
u'ๆผ่ก': u'ๆผ่ฉ',
u'ๆผๆฝ': u'ๆผ่ผ',
u'ๆฒค้': u'ๆผ้ฌฑ',
u'ๆฑๅผฅ็ป้': u'ๆผขๅฝ็ป้',
u'ๆฑๅผฅ็ป้่กจๅ
ฌๅธ': u'ๆผขๅฝ็ป้้ถๅ
ฌๅธ',
u'ๆผซๆธธ': u'ๆผซ้',
u'ๆฝๆ่ฏ้': u'ๆฝๆ่ญ่ฃก',
u'ๆฝๆฐด่กจ': u'ๆฝๆฐด้ถ',
u'ๆฝๆฐด้': u'ๆฝๆฐด้',
u'ๆฝๆฐด้่กจ': u'ๆฝๆฐด้้ถ',
u'ๆฝญ้': u'ๆฝญ่ฃก',
u'ๆฝฎๆถ': u'ๆฝฎๆนง',
u'ๆบไบ': u'ๆฝฐๆผ',
u'ๆพๆพน็ฒพ่ด': u'ๆพๆพน็ฒพ่ด',
u'ๆพ่': u'ๆพๆฟ',
u'ๆณฝๆธๆผ่ไธ้': u'ๆพคๆปฒ็่ไธ้',
u'ๆทไนไธ่ไนๅฐ': u'ๆพฑไนไธ่ไนๅฐ',
u'ๆทๅ็': u'ๆพฑๅ็',
u'ๆทๅฑฑ': u'ๆพฑๅฑฑ',
u'ๆทๆท': u'ๆพฑๆพฑ',
u'ๆท็งฏ': u'ๆพฑ็ฉ',
u'ๆท็ฒ': u'ๆพฑ็ฒ',
u'ๆท่งฃ็ฉ': u'ๆพฑ่งฃ็ฉ',
u'ๆท่ฐไนๆป': u'ๆพฑ่ฌไนๆป',
u'ๆพนๅฐ': u'ๆพน่บ',
u'ๆพน่ก': u'ๆพน่ฉ',
u'ๆฟ่ก': u'ๆฟ่ฉ',
u'ๆตๅ': u'ๆฟ้ซฎ',
u'่ๆฑ': u'ๆฟๆฑ',
u'่่็ป้จ': u'ๆฟๆฟ็ดฐ้จ',
u'่้พ': u'ๆฟ้ง',
u'่ๆพ้จ': u'ๆฟ้ฌ้จ',
u'่้ธฟ': u'ๆฟ้ดป',
u'ๆณป่ฏ': u'็่ฅ',
u'ๆฒๅ็บฟ': u'็ๅ็ท',
u'ๆฒๅฑฑ็บฟ': u'็ๅฑฑ็ท',
u'ๆฒๅท': u'็ๅท',
u'ๆฒๆฐด': u'็ๆฐด',
u'ๆฒๆฒณ': u'็ๆฒณ',
u'ๆฒๆตท': u'็ๆตท',
u'ๆฒๆตท้่ทฏ': u'็ๆตท้ต่ทฏ',
u'ๆฒ้ณ': u'็้ฝ',
u'ๆฝๆด': u'็ๆด',
u'ๅผฅๅฑฑ้้': u'็ฐๅฑฑ้้',
u'ๅผฅๆผซ': u'็ฐๆผซ',
u'ๅผฅๆผซ็': u'็ฐๆผซ่',
u'ๅผฅๅผฅ': u'็ฐ็ฐ',
u'็่ฏ': u'็่ฅ',
u'ๆผๆฐด': u'็ๆฐด',
u'ๆผๆฑ': u'็ๆฑ',
u'ๆผๆน': u'็ๆน',
u'ๆผ็ถ': u'็็ถ',
u'ๆปฉๆถ': u'็ๆถ',
u'็ซๅนถ้': u'็ซไธฆ้',
u'็ซๅนถ': u'็ซไฝต',
u'็ซๆผ': u'็ซๆ',
u'็ซๆๅญ': u'็ซๆบๅญ',
u'็ซ็ฎญๅธ้ท': u'็ซ็ฎญไฝ้ท',
u'็ซ็ญพ': u'็ซ็ฑค',
u'็ซ่ฏ': u'็ซ่ฅ',
u'็ฐ่': u'็ฐๆฟ',
u'็ฐ่่': u'็ฐๆฟๆฟ',
u'็้ข': u'็้บต',
u'็้ข': u'็้บต',
u'็ฎๅถ': u'็ฎ่ฃฝ',
u'็ธ่ฏ': u'็ธ่ฅ',
u'็ธ้
ฑ้ข': u'็ธ้ฌ้บต',
u'ไธบๅ': u'็บๆบ',
u'ไธบ็': u'็บ่',
u'ไนๅ': u'็้ซฎ',
u'ไน้พ้ข': u'็้พ้บต',
u'็ๅนฒ': u'็ไนพ',
u'็ๅถ': u'็่ฃฝ',
u'็คๅนฒ': u'็คไนพ',
u'็คๆ': u'็คๆ',
u'็ๅนฒ': u'็ไนพ',
u'ๆ ๅพไธไฟก': u'็กๅพตไธไฟก',
u'ๆ ไธๆธธๆฐ': u'็กๆฅญๆธธๆฐ',
u'ๆ ๆขๆฅผ็': u'็กๆจๆจ่',
u'ๆ ๆณๅ
ๅถ': u'็กๆณๅๅถ',
u'ๆ ่ฏๅฏๆ': u'็ก่ฅๅฏๆ',
u'็ก่จไธไป': u'็ก่จไธ่ฎ',
u'ๆ ไฝ': u'็ก้ค',
u'็ถ่บซๆญปๆๆฐๆ่ณ': u'็ถ่บซๆญป็บๆธๆ่ณ',
u'็ผ่ฏ': u'็
่ฅ',
u'็ผๅถ': u'็
่ฃฝ',
u'็
่ฏ': u'็
่ฅ',
u'็
้ข': u'็
้บต',
u'็ๅท': u'็
ๆฒ',
u'็ๆไธ': u'็
ๆ็ตฒ',
u'็
งๅ ': u'็
งไฝ',
u'็
งๅ
ฅ็ญพ': u'็
งๅ
ฅ็ฑค',
u'็
งๅ': u'็
งๆบ',
u'็
ง็ธๅนฒ็': u'็
ง็ธไนพ็',
u'็
จๅนฒ': u'็
จไนพ',
u'็
ฎ้ข': u'็
ฎ้บต',
u'่ง้': u'็้ฌฑ',
u'็ฌ่ฏ': u'็ฌ่ฅ',
u'็่ฏ': u'็่ฅ',
u'็ๅ': u'็้ซฎ',
u'็งๅนฒ': u'็ไนพ',
u'็ๅ ': u'็ๅ ',
u'็ๅทขไบๅน': u'็ๅทขๆผๅน',
u'็็ไบ้ฃ': u'็็ไบ้ฃ',
u'็ๆธธ': u'็้',
u'็ซไธไธชๅ': u'็ไธๅ้ซฎ',
u'็ซไธๆฌกๅ': u'็ไธๆฌก้ซฎ',
u'็ซไธชๅ': u'็ๅ้ซฎ',
u'็ซๅฎๅ': u'็ๅฎ้ซฎ',
u'็ซๆฌกๅ': u'็ๆฌก้ซฎ',
u'็ซๅ': u'็้ซฎ',
u'็ซ้ข': u'็้บต',
u'่ฅๅนฒ': u'็ๅนน',
u'็ฌไฝ': u'็ผ้ค',
u'ไบๅฅๆๅฆ': u'็ญๅฅ้ฌฅๅฆ',
u'ไบๅฅๆๅผ': u'็ญๅฅ้ฌฅ็ฐ',
u'ไบๅฅๆ่ณ': u'็ญๅฅ้ฌฅ่ฑ',
u'ไบๅฆๆๅฅ': u'็ญๅฆ้ฌฅๅฅ',
u'ไบๅฆๆ่ณ': u'็ญๅฆ้ฌฅ่ฑ',
u'ไบ็บขๆ็ดซ': u'็ญ็ด
้ฌฅ็ดซ',
u'ไบๆ': u'็ญ้ฌฅ',
u'็ฐๅฎ็ฅฅๅ': u'็ฐๅฎ็ฅฅๅค',
u'็ฝ่ก': u'็ฝ่ฉ',
u'ๅฐๅฌๅ': u'็พๅฌ้',
u'ๅข้': u'็่ฃก',
u'็่จๅช่ฏญ': u'็่จ้ป่ช',
u'็็ญพ': u'็็ฑค',
u'็่้ข': u'็่้บต',
u'็ๅช': u'็้ป',
u'็ฉๆฌฒ': u'็ฉๆ
พ',
u'็นๅซ่ด': u'็นๅซ่ด',
u'็นๅถไฝ': u'็นๅถไฝ',
u'็นๅถๅฎ': u'็นๅถๅฎ',
u'็นๅถๆญข': u'็นๅถๆญข',
u'็นๅถ่ฎข': u'็นๅถ่จ',
u'็นๅพ': u'็นๅพต',
u'็นๆ่ฏ': u'็นๆ่ฅ',
u'็นๅถ': u'็น่ฃฝ',
u'็ตไธๅ': u'็ฝไธ้ซฎ',
u'็ตๆ': u'็ฝๆ',
u'็ต็ณป': u'็ฝ็นซ',
u'่ฆ็กฎ': u'็็กฎ',
u'็ๅ ': u'็ไฝ',
u'็ๅนถๆฝฎ': u'็ไฝตๆฝฎ',
u'็ไบ': u'็ๆผ',
u'็ๅ่ๅจ': u'็่่ๅจ',
u'็ไบ': u'็ๆผ',
u'็ๅฒ': u'็่ก',
u'็ไธๅไบ': u'็ไธๅไบ',
u'็นๅฆ่กจ': u'็ถๅฆ้ถ',
u'็นๅฆ้': u'็ถๅฆ้',
u'็นๅฆ้่กจ': u'็ถๅฆ้้ถ',
u'ๅไธฒไบ็ฎ': u'็ไธฒไบ็ฎ',
u'ๅไบ': u'็ไบ',
u'ๅไบบ': u'็ไบบ',
u'ๅๅญ': u'็ๅญ',
u'ๅๆง': u'็ๆง',
u'ๅๆณ': u'็ๆณ',
u'ๅๆจๅ': u'็ๆจ็',
u'ๅๆ น': u'็ๆ น',
u'ๅๆฐ': u'็ๆฐฃ',
u'ๅๆป': u'็ๆปฏ',
u'ๅๅ': u'็็',
u'ๅ็ด': u'็็ด',
u'ๅ็ฃ': u'็็ฃ',
u'ๅ็ญ': u'็็ญ',
u'ๅ่': u'็่
ฆ',
u'ๅ็': u'็่',
u'ๅ่ฏ': u'็่ฉฑ',
u'ๅๅคด': u'็้ ญ',
u'็ฑ้': u'็่ฃก',
u'ๅฅๆฏ': u'็็',
u'็ฌๅ ': u'็จไฝ',
u'็ฌๅ ้ณๅคด': u'็จไฝ้ฐฒ้ ญ',
u'็ฌ่พ่นๅพ': u'็จ้ข่นๅพ',
u'่ทๅชๅ
ถไธ': u'็ฒๅชๅ
ถ้',
u'ๅ
ฝๆฌฒ': u'็ธๆ
พ',
u'็ฎไธ': u'็ป้',
u'็ๅขๅๅ ': u'็ๅๅๅ ',
u'็ๅ': u'็ๆ',
u'็ๅๅฒ': u'็ๆญทๅฒ',
u'็ไพฏๅ': u'็ไพฏๅ',
u'็ๅ': u'็ๅ',
u'็ๅบ': u'็่',
u'็ไฝ้ฑผ': u'็้ค้ญ',
u'็่ดๅผ้ฆ': u'็่ด็ฐ้ฅ',
u'็ญ้': u'็ญ่ฃก',
u'็ฐไบ': u'็พๆผ',
u'็ๆ': u'็ๆ',
u'็ไธไธชๅ': u'็ไธๅ้ซฎ',
u'็ไธๆฌกๅ': u'็ไธๆฌก้ซฎ',
u'็ไธชๅ': u'็ๅ้ซฎ',
u'็ๅฎๅ': u'็ๅฎ้ซฎ',
u'็ๆฌกๅ': u'็ๆฌก้ซฎ',
u'็ๅ': u'็้ซฎ',
u'็ด้': u'็ด้',
u'็ๅพ': u'็ๅพต',
u'็ถ็ญพ': u'็ค็ฑค',
u'็ฏๆธธ': u'็ฐ้',
u'็ฎๅฎ': u'็ๅฎ',
u'็ไบ': u'็ๆผ',
u'็ไน': u'็้บผ',
u'็ๆฐด้ข': u'็ๆฐด้บต',
u'็้ข้
ฑ': u'็้บต้ฌ',
u'็ๅ้ข': u'็ๅ้บต',
u'็ไบ': u'็ๆผ',
u'็ๆฎๆดๆธธ': u'็ๆฎๆดๆธธ',
u'็็ฉ้': u'็็ฉ้',
u'็ๅ็': u'็็ผ็',
u'็ๅๅ': u'็่ฏ้ซฎ',
u'็ๅง': u'็่',
u'็้': u'็้ฝ',
u'็ๅ': u'็้ซฎ',
u'ไบงๅตๆดๆธธ': u'็ขๅตๆดๆธธ',
u'็จ่ฏ': u'็จ่ฅ',
u'็ฉๅ': u'็ฉ้ซฎ',
u'็ฐ่ฐท': u'็ฐ็ฉ',
u'็ฐๅบ': u'็ฐ่',
u'็ฐ้': u'็ฐ่ฃก',
u'็ฑไฝ': u'็ฑไฝ',
u'็ฑไบ': u'็ฑๆผ',
u'็ฑ่กจๅ้': u'็ฑ่กจๅ่ฃก',
u'็ทไฝฃไบบ': u'็ทไฝฃไบบ',
u'็ทไป': u'็ทๅ',
u'็ท็จ่กจ': u'็ท็จ้ถ',
u'็ไบ': u'็ๆผ',
u'็ๅ': u'็้ซฎ',
u'ๆฏไบ': u'็ขๆผ',
u'ๆฏไธไบ': u'็ขๆฅญๆผ',
u'ๆฏ็ๅๅฑ': u'็ข็็ผๅฑ',
u'็ป็': u'็ซ่',
u'ๅฝๅฎถๆ็ฅๆด็ฑณไปท': u'็ถๅฎถ็บ็ฅๆด็ฑณๅน',
u'ๅฝๅ': u'็ถๆบ',
u'ๅฝๅฝไธไธ': u'็ถ็ถไธไธ',
u'ๅฝ็': u'็ถ่',
u'็ๆพ': u'็้ฌ',
u'็็ณป': u'็ไฟ',
u'็ๅถ': u'็ๅ
',
u'็ฒไบ': u'็ฒๆผ',
u'็ฒๅฐ': u'็ฒ็',
u'็
ๅพ': u'็
ๅพต',
u'็
ๆ': u'็
็',
u'็
ไฝ': u'็
้ค',
u'็ๅ็พค': u'็ๅ็พค',
u'็ๆ': u'็็',
u'็็น': u'็็น',
u'็็': u'็็',
u'็่ฟน': u'็่ฟน',
u'ๆๅ': u'็ๅ',
u'็ๅ': u'็ฅๅ',
u'็็ถ': u'็ฅ็',
u'็็ป': u'็ฅ็ต',
u'็ธไธ': u'็ธไธ',
u'ๅๅนฒ': u'็ผไนพ',
u'ๅๆฑ่ฏ': u'็ผๆฑ่ฅ',
u'ๅๅ': u'็ผ็',
u'ๅ่': u'็ผ็',
u'ๅ็ญพ': u'็ผ็ฑค',
u'ๅๅบ': u'็ผ่',
u'ๅ็': u'็ผ่',
u'ๅ่กจ': u'็ผ่กจ',
u'็ผ่กจ': u'็ผ่กจ',
u'ๅๆพ': u'็ผ้ฌ',
u'ๅ้ข': u'็ผ้บต',
u'็ฝๅนฒ': u'็ฝไนพ',
u'็ฝๅ
ๆฃ่ฏ': u'็ฝๅ
ๆฃ่ฅ',
u'็ฝๅนฒๅฟ': u'็ฝๅนฒๅ
',
u'็ฝๆฏ': u'็ฝๆฎ',
u'็ฝๆด': u'็ฝๆจธ',
u'็ฝๅ้ข็ฎ': u'็ฝๆทจ้ข็ฎ',
u'็ฝๅๅ
ถไบ': u'็ฝ็ผๅ
ถไบ',
u'็ฝ็ฎๆพ': u'็ฝ็ฎๆพ',
u'็ฝ็ฒ้ข': u'็ฝ็ฒ้บต',
u'็ฝ้้็บข': u'็ฝ่ฃก้็ด
',
u'็ฝๅ': u'็ฝ้ซฎ',
u'็ฝ่ก': u'็ฝ้ฌ',
u'็ฝ้': u'็ฝ้ปด',
u'็พไธช': u'็พๅ',
u'็พๅชๅฏ': u'็พๅชๅฏ',
u'็พๅชๅค': u'็พๅชๅค ',
u'็พๅชๆ': u'็พๅชๆ',
u'็พๅช่ถณๅค': u'็พๅช่ถณๅค ',
u'็พๅคๅช': u'็พๅค้ป',
u'็พๅคฉๅ': u'็พๅคฉๅพ',
u'็พๆๅไธ': u'็พๆๅ้',
u'็พ็ง้': u'็พ็ง่ฃก',
u'็พ่ฐท': u'็พ็ฉ',
u'็พๆ': u'็พ็ดฎ',
u'็พ่ฑๅ': u'็พ่ฑๆ',
u'็พ่ฑๅๅฒ': u'็พ่ฑๆญทๅฒ',
u'็พ่ฏไน้ฟ': u'็พ่ฅไน้ท',
u'็พ็ผ': u'็พ้',
u'็พๅช': u'็พ้ป',
u'็พไฝ': u'็พ้ค',
u'็ๅ
ๅถ': u'็ๅๅถ',
u'็้': u'็้',
u'็้่กจ': u'็้้ถ',
u'็ๅฏไฝๆท': u'็ๅฏไฝๆพฑ',
u'็ๅ': u'็ๆบ',
u'็ๅ': u'็ๅ',
u'็ๅ': u'็ๆ',
u'็ๆๅ': u'็ๆฅตๆ',
u'็ๆๅๅฒ': u'็ๆฅตๆญทๅฒ',
u'็ๅๅฒ': u'็ๆญทๅฒ',
u'็ๅบ': u'็่',
u'็ๅ': u'็้ซฎ',
u'็ฎๅถๆ': u'็ฎๅถๆ',
u'็ฎ้ๆฅ็ง': u'็ฎ่ฃกๆฅ็ง',
u'็ฎ้้ณ็ง': u'็ฎ่ฃก้ฝ็ง',
u'็ฎๅถ': u'็ฎ่ฃฝ',
u'็ฎๆพ': u'็ฎ้ฌ',
u'็ฑๅซ': u'็บๅฝ',
u'็ฑๆ': u'็บๆบ',
u'็ๅ': u'็ๅผ',
u'็ไฝ': u'็้ค',
u'็ไบ': u'็ๆผ',
u'็้': u'็่ฃก',
u'็่ต': u'็่ฎ',
u'็้': u'็ๆก',
u'็้': u'็้',
u'ๅฐฝ้ๅ
ๅถ': u'็ก้ๅๅถ',
u'็ๅถ': u'็ฃ่ฃฝ',
u'็้': u'็ค่ฃก',
u'็ๅ': u'็ค่ฟด',
u'ๅขๆฃฑไผฝ': u'็ง็จไผฝ',
u'็ฒๅนฒ': u'็ฒๅนน',
u'็ดๆฅๅไธ': u'็ดๆฅๅไธ',
u'็ดไบ': u'็ดๆผ',
u'็ดๅฒ': u'็ด่ก',
u'็ธๅนถ': u'็ธไฝต',
u'็ธๅ
ๅถ': u'็ธๅ
ๅถ',
u'็ธๅ
ๆ': u'็ธๅ
ๆ',
u'็ธๅ
': u'็ธๅ',
u'็ธๅนฒ': u'็ธๅนฒ',
u'็ธไบ': u'็ธๆผ',
u'็ธๅฒ': u'็ธ่ก',
u'็ธๆ': u'็ธ้ฌฅ',
u'็ไธ่กจ': u'็ไธ้ถ',
u'็ไธ้': u'็ไธ้',
u'็ๅ': u'็ๆบ',
u'็็่กจ': u'็่้ถ',
u'็็้': u'็่้',
u'็็้่กจ': u'็่้้ถ',
u'็่กจ้ข': u'็่กจ้ข',
u'็่กจ': u'็้ถ',
u'็้': u'็้',
u'็ๅถ': u'็ๅ
',
u'็ไธช': u'็็ฎ',
u'็ผๅนฒ': u'็ผไนพ',
u'็ผๅธ': u'็ผๅธ',
u'็ผ็ถ้': u'็ผ็ถ่ฃก',
u'็ผ็้': u'็ผ็่ฃก',
u'็ผ่ฏ': u'็ผ่ฅ',
u'็ผ้': u'็ผ่ฃก',
u'ๅฐไน': u'็ไน',
u'ๅฐๅฆ': u'็ๅฆ',
u'ๅฐ่ง': u'็่ฆบ',
u'็ก็ไบ': u'็ก่ไบ',
u'็กๆธธ็
': u'็ก้็
',
u'็ๅ': u'็ๆบ',
u'็
ไธ่กจ': u'็
ไธ้ถ',
u'็
ไธ้': u'็
ไธ้',
u'็ง็่กจ': u'็ง่้ถ',
u'็ง็้': u'็ง่้',
u'็ง็้่กจ': u'็ง่้้ถ',
u'ไบๆ': u'็ญๆ',
u'ไบ็ถ': u'็ญ็ถ',
u'ไบ่ฅๆๆ': u'็ญ่ฅๆๆ',
u'็ณ่': u'็ณ็',
u'่ไบ': u'็ไบ',
u'่ๆงๆ ็ฅ': u'็ๆง็ก็ฅ',
u'่ๆทท': u'็ๆทท',
u'่็': u'็็',
u'่็ฌ': u'็็',
u'่่ฉ': u'็่ต',
u'่็': u'็่',
u'่็้
ๅฟ': u'็่้ๅ
',
u'่ๅคด่ฝฌ': u'็้ ญ่ฝ',
u'่้ช': u'็้จ',
u'็ฉๆ': u'็่จ',
u'็ๅบ': u'็่',
u'็ญๅ ': u'็ญๅ ',
u'็ญไบ': u'็ญๆผ',
u'็ญๅ': u'็ญ้ซฎ',
u'็ฎๅ ': u'็ฎๅ ',
u'็ณๅ ': u'็ณๅ ',
u'็ณๅฎถๅบ': u'็ณๅฎถ่',
u'็ณๆข': u'็ณๆจ',
u'็ณ่ฑ่กจ': u'็ณ่ฑ้ถ',
u'็ณ่ฑ้': u'็ณ่ฑ้',
u'็ณ่ฑ้่กจ': u'็ณ่ฑ้้ถ',
u'็ณ่ผ': u'็ณ่ด',
u'็ณ้ไนณ': u'็ณ้ไนณ',
u'็ฝ่ฐท': u'็ฝ่ฐท',
u'็ ๅถ': u'็ ่ฃฝ',
u'็ ฐๅฝ': u'็ ฐๅน',
u'ๆฑๅ็้ฝฟ': u'็กๅ็้ฝ',
u'ๆฑๆน': u'็กๆน',
u'ๆฑ็ ': u'็ก็ ',
u'ๆฑ็ฌ': u'็ก็ญ',
u'ๆฑ็บข่ฒ': u'็ก็ด
่ฒ',
u'ๆฑ่ฒ': u'็ก่ฒ',
u'ๆฑ่ฐ': u'็ก่ซญ',
u'็กฌๅนฒ': u'็กฌๅนน',
u'็กฎ็ ': u'็กฎ็ ',
u'็ขๅฟ': u'็ข่ช',
u'็ขฐ้': u'็ขฐ้',
u'็ ่กจ': u'็ขผ้ถ',
u'็ฃๅถ': u'็ฃ่ฃฝ',
u'็ฃจๅถ': u'็ฃจ่ฃฝ',
u'็ฃจ็ผ': u'็ฃจ้',
u'็ฃฌ้': u'็ฃฌ้',
u'็ก็กฎ': u'็ฃฝ็กฎ',
u'็ข้พ็
งๅ': u'็ค้ฃ็
งๅ',
u'็ ป่ฐทๆบ': u'็คฑ็ฉๆฉ',
u'็คบ่': u'็คบ็ฏ',
u'็คพ้': u'็คพ่ฃก',
u'็ฅ่ต': u'็ฅ่ฎ',
u'็ฅๅ': u'็ฅ้ซฎ',
u'็ฅ่ผ้ๅ': u'็ฅ่ผ้ฌฑๅฃ',
u'็ฅๆธธ': u'็ฅ้',
u'็ฅ้ๅ': u'็ฅ้ๅ',
u'็ฅ้': u'็ฅ้ตฐ',
u'็ฅจๅบ': u'็ฅจ่',
u'็ฅญๅ': u'็ฅญๅผ',
u'็ฅญๅๆ': u'็ฅญๅผๆ',
u'็ฆๆฌฒ': u'็ฆๆ
พ',
u'็ฆๆฌฒไธปไน': u'็ฆๆฌฒไธป็พฉ',
u'็ฆ่ฏ': u'็ฆ่ฅ',
u'็ฅธไบ': u'็ฆๆผ',
u'ๅพกไพฎ': u'็ฆฆไพฎ',
u'ๅพกๅฏ': u'็ฆฆๅฏ',
u'ๅพกๅฏ': u'็ฆฆๅฏ',
u'ๅพกๆ': u'็ฆฆๆต',
u'็คผ่ต': u'็ฆฎ่ฎ',
u'็ฆนไฝ็ฒฎ': u'็ฆน้ค็ณง',
u'็ฆพ่ฐท': u'็ฆพ็ฉ',
u'็งๅฆไนๅ': u'็ฆฟๅฆไน้ซฎ',
u'็งๅ': u'็ฆฟ้ซฎ',
u'็งๅ': u'็ง้ซฎ',
u'็งไธ้': u'็งไธ่ฃก',
u'็งๆฌฒ': u'็งๆ
พ',
u'็งๆ': u'็ง้ฌฅ',
u'็งๅ้': u'็งๅ่ฃก',
u'็งๅคฉ้': u'็งๅคฉ่ฃก',
u'็งๆฅ้': u'็งๆฅ่ฃก',
u'็ง่ฃค': u'็ง่คฒ',
u'็งๆธธ': u'็ง้',
u'็ง้ดๅ
ฅไบๅนฒ': u'็ง้ฐๅ
ฅไบๅนน',
u'็งๅ': u'็ง้ซฎ',
u'็งๅธไธญ': u'็งๅธซไธญ',
u'็งๅธ้': u'็งๅธซ้',
u'็งๆพ': u'็งๆพ',
u'็งๆ': u'็งๆ',
u'็ง่': u'็ง็ฏ',
u'็ง่กจๆ': u'็ง่กจๆ',
u'็ง่กจ็คบ': u'็ง่กจ็คบ',
u'็ง่กจ': u'็ง้ถ',
u'็ง้': u'็ง้',
u'็งป็ฅธไบ': u'็งป็ฆๆผ',
u'็จๆพ': u'็จ้ฌ',
u'ๆฃฑๅฐ': u'็จๅฐ',
u'ๆฃฑๅญ': u'็จๅญ',
u'ๆฃฑๅฑ': u'็จๅฑค',
u'ๆฃฑๆฑ': u'็จๆฑ',
u'ๆฃฑ็ป': u'็จ็ป',
u'ๆฃฑๆฃฑ': u'็จ็จ',
u'ๆฃฑ็ญ็ป': u'็จ็ญ็ป',
u'ๆฃฑ็บฟ': u'็จ็ท',
u'ๆฃฑ็ผ': u'็จ็ธซ',
u'ๆฃฑ่ง': u'็จ่ง',
u'ๆฃฑ้ฅ': u'็จ้',
u'ๆฃฑ้': u'็จ้ก',
u'ๆฃฑไฝ': u'็จ้ซ',
u'็ง่ฐท': u'็จฎ็ฉ',
u'็งฐ่ต': u'็จฑ่ฎ',
u'็จป่ฐท': u'็จป็ฉ',
u'็จฝๅพ': u'็จฝๅพต',
u'่ฐทไบบ': u'็ฉไบบ',
u'่ฐทไฟๅฎถๅ': u'็ฉไฟๅฎถๅ',
u'่ฐทไป': u'็ฉๅ',
u'่ฐทๅญ': u'็ฉๅญ',
u'่ฐทๅบ': u'็ฉๅ ด',
u'่ฐทๅญ': u'็ฉๅญ',
u'่ฐทๆฅ': u'็ฉๆฅ',
u'่ฐทๆฆ': u'็ฉๆฆ',
u'่ฐทๆข': u'็ฉๆข',
u'่ฐทๅฃณ': u'็ฉๆฎผ',
u'่ฐท็ฉ': u'็ฉ็ฉ',
u'่ฐท็ฎ': u'็ฉ็ฎ',
u'่ฐท็ฅ': u'็ฉ็ฅ',
u'่ฐท่ฐท': u'็ฉ็ฉ',
u'่ฐท็ฑณ': u'็ฉ็ฑณ',
u'่ฐท็ฒ': u'็ฉ็ฒ',
u'่ฐท่ฑ': u'็ฉ่',
u'่ฐท่': u'็ฉ่',
u'่ฐท่': u'็ฉ่',
u'่ฐท่ดต้ฅฟๅ': u'็ฉ่ฒด้ค่พฒ',
u'่ฐท่ดฑไผคๅ': u'็ฉ่ณคๅท่พฒ',
u'่ฐท้': u'็ฉ้',
u'่ฐท้จ': u'็ฉ้จ',
u'่ฐท็ฑป': u'็ฉ้ก',
u'่ฐท้ฃ': u'็ฉ้ฃ',
u'็ฉ็ฝ้ปๅพทๅ': u'็ฉ็ฝ้ปๅพทๆ',
u'็ฉ็ฝ้ปๅพทๅๅฒ': u'็ฉ็ฝ้ปๅพทๆญทๅฒ',
u'็งฏๆๅไธ': u'็ฉๆๅไธ',
u'็งฏๆๅๅ ': u'็ฉๆๅๅ ',
u'็งฏๆท': u'็ฉๆพฑ',
u'็งฏ่ฐท': u'็ฉ็ฉ',
u'็งฏ่ฐท้ฒ้ฅฅ': u'็ฉ็ฉ้ฒ้ฅ',
u'็งฏ้': u'็ฉ้ฌฑ',
u'็จณๅ ': u'็ฉฉไฝ',
u'็จณๆ': u'็ฉฉ็ดฎ',
u'็ฉบไธญๅธ้ท': u'็ฉบไธญไฝ้ท',
u'็ฉบๆๅธ้ท': u'็ฉบๆไฝ้ท',
u'็ฉบ่': u'็ฉบๆฟ',
u'็ฉบ่ก': u'็ฉบ่ฉ',
u'็ฉบ่ก่ก': u'็ฉบ่ฉ่ฉ',
u'็ฉบ่ฐทๅ้ณ': u'็ฉบ่ฐทๅ้ณ',
u'็ฉบ้': u'็ฉบ้',
u'็ฉบไฝ': u'็ฉบ้ค',
u'็ชๆฌฒ': u'็ชๆ
พ',
u'็ชๅฐไธ': u'็ชๅฐไธ',
u'็ชๅธ': u'็ชๅธ',
u'็ชๆๅ ไบฎ': u'็ชๆๅ ไบฎ',
u'็ชๆๅ ๅ': u'็ชๆๅ ๆทจ',
u'็ชๅฐ': u'็ชๆชฏ',
u'็ช้': u'็ชฉ่ฃก',
u'็ฉทไบ': u'็ชฎๆผ',
u'็ฉท่ฟฝไธ่': u'็ชฎ่ฟฝไธๆจ',
u'็ฉทๅ': u'็ชฎ้ซฎ',
u'็ช้ๆฉ่ณ': u'็ซ้ๆฉ่ณ',
u'็ซไบ': u'็ซๆผ',
u'็ซ่': u'็ซ็ฏ',
u'็ซๅนฒๅฒธๅฟ': u'็ซไนพๅฒธๅ
',
u'็ซฅไป': u'็ซฅๅ',
u'็ซฏๅบ': u'็ซฏ่',
u'็ซๆ': u'็ซถ้ฌฅ',
u'็ซนๅ ': u'็ซนๅ ',
u'็ซนๆไนๆธธ': u'็ซนๆไน้',
u'็ซน็ญพ': u'็ซน็ฑค',
u'็ฌ้่ๅ': u'็ฌ่ฃก่ๅ',
u'็ฌจ็ฌจๅๅ': u'็ฌจ็ฌจๅๅ',
u'็ฌฌๅๅบๅฑ': u'็ฌฌๅๅบๅฑ',
u'็ฌๅ': u'็ญๅ',
u'็ฌ็งๅขจๅนฒ': u'็ญ็ฆฟๅขจไนพ',
u'็ญไบ': u'็ญๆผ',
u'็ฌๅนฒ': u'็ญไนพ',
u'็ญๅ': u'็ญๅ',
u'็ญๅ': u'็ญๅ',
u'็ญๅท': u'็ญๅท',
u'็ญๅพ': u'็ญๅพ',
u'็ญๅ': u'็ญๅพ',
u'็ญๆณข': u'็ญๆณข',
u'็ญ็ดซ': u'็ญ็ดซ',
u'็ญ่ฅ': u'็ญ่ฅ',
u'็ญ่ฅฟ': u'็ญ่ฅฟ',
u'็ญ้ฆ': u'็ญ้ฆ',
u'็ญ้ฝ': u'็ญ้ฝ',
u'็ญ้ณ': u'็ญ้ฝ',
u'็ญๅค': u'็ญ่ฆ',
u'็ญ่ฆ': u'็ญ่ฆ',
u'็ญๅ': u'็ญๅ',
u'็ญตๅ ': u'็ญตๅ ',
u'ไธชไธญๅๅ ': u'็ฎไธญๅๅ ',
u'ไธชไธญๅฅฅๅฆ': u'็ฎไธญๅฅงๅฆ',
u'ไธชไธญๅฅฅ็ง': u'็ฎไธญๅฅง็ง',
u'ไธชไธญๅฅฝๆ': u'็ฎไธญๅฅฝๆ',
u'ไธชไธญๅผบๆ': u'็ฎไธญๅผทๆ',
u'ไธชไธญๆถๆฏ': u'็ฎไธญๆถๆฏ',
u'ไธชไธญๆปๅณ': u'็ฎไธญๆปๅณ',
u'ไธชไธญ็ๆบ': u'็ฎไธญ็ๆฉ',
u'ไธชไธญ็็ฑ': u'็ฎไธญ็็ฑ',
u'ไธชไธญ่ฎฏๆฏ': u'็ฎไธญ่จๆฏ',
u'ไธชไธญ่ต่ฎฏ': u'็ฎไธญ่ณ่จ',
u'ไธชไธญ้ซๆ': u'็ฎไธญ้ซๆ',
u'ไธชๆง': u'็ฎ่',
u'็ฎๅ': u'็ฎๆ',
u'็ฎๅๅฒ': u'็ฎๆญทๅฒ',
u'็ฎๅ': u'็ฎๆบ',
u'็ฎๅ': u'็ฎ้ซฎ',
u'็ฎกไบบๅ่ๅฟไบ': u'็ฎกไบบๅผ่
ณๅ
ไบ',
u'็ฎกๅถๆณ': u'็ฎกๅถๆณ',
u'็ฎกๅนฒ': u'็ฎกๅนน',
u'่ๆฌฒ': u'็ฏๆ
พ',
u'่ไฝ': u'็ฏ้ค',
u'่ไพ': u'็ฏไพ',
u'่ๅด': u'็ฏๅ',
u'่ๅญ': u'็ฏๅญ',
u'่ๅผ': u'็ฏๅผ',
u'่ๆงๅฝขๅ': u'็ฏๆงๅฝข่ฎ',
u'่ๆ': u'็ฏๆ',
u'่ๆฌ': u'็ฏๆฌ',
u'่็ด': u'็ฏ็',
u'่้': u'็ฏ้',
u'็ฎๅนถ': u'็ฐกไฝต',
u'็ฎๆด': u'็ฐกๆจธ',
u'็ฐธ่ก': u'็ฐธ่ฉ',
u'็ญพ็': u'็ฐฝ่',
u'็ญนๅ': u'็ฑๅ',
u'็ญพๅน': u'็ฑคๅน',
u'็ญพๆผ': u'็ฑคๆผ',
u'็ญพๆก': u'็ฑคๆข',
u'็ญพ่ฏ': u'็ฑค่ฉฉ',
u'ๅๅคฉ': u'็ฑฒๅคฉ',
u'ๅๆฑ': u'็ฑฒๆฑ',
u'ๅ่ฏท': u'็ฑฒ่ซ',
u'็ฑณ่ฐท': u'็ฑณ็ฉ',
u'็ฒๆณ็ปฃ่
ฟ': u'็ฒๆณ็นก่
ฟ',
u'็ฒ็ญพๅญ': u'็ฒ็ฑคๅญ',
u'็ฒๅถ': u'็ฒ่ฃฝ',
u'็ฒพๅถไผ': u'็ฒพๅถไผ',
u'็ฒพๅถไฝ': u'็ฒพๅถไฝ',
u'็ฒพๅถๆ': u'็ฒพๅถๆ',
u'็ฒพๅนฒ': u'็ฒพๅนน',
u'็ฒพไบ': u'็ฒพๆผ',
u'็ฒพๅ': u'็ฒพๆบ',
u'็ฒพ่ด': u'็ฒพ็ทป',
u'็ฒพๅถ': u'็ฒพ่ฃฝ',
u'็ฒพ็ผ': u'็ฒพ้',
u'็ฒพ่พ': u'็ฒพ้ข',
u'็ฒพๆพ': u'็ฒพ้ฌ',
u'็ณ้็ณๆถ': u'็ณ่ฃก็ณๅก',
u'็ณๅนฒ': u'็ณไนพ',
u'็ฒช็งฝ่้ข': u'็ณ็ฉข่ก้ข',
u'ๅขๅญ': u'็ณฐๅญ',
u'็ณปๅ้': u'็ณปๅ่ฃก',
u'็ณป็': u'็ณป่',
u'็ณป้': u'็ณป่ฃก',
u'็บชๅ': u'็ดๆ',
u'็บชๅๅฒ': u'็ดๆญทๅฒ',
u'็บฆๅ ': u'็ดไฝ',
u'็บข็ปณ็ณป่ถณ': u'็ด
็นฉ็นซ่ถณ',
u'็บข้': u'็ด
้',
u'็บข้็ด ': u'็ด
้็ด ',
u'็บขๅ': u'็ด
้ซฎ',
u'็บกๅ': u'็ด่ฟด',
u'็บกไฝ': u'็ด้ค',
u'็บก้': u'็ด้ฌฑ',
u'็บณๅพ': u'็ดๅพต',
u'็บฏๆด': u'็ดๆจธ',
u'็บธๆ': u'็ด็ดฎ',
u'็ด ๆด': u'็ด ๆจธ',
u'็ด ๅ': u'็ด ้ซฎ',
u'็ด ้ข': u'็ด ้บต',
u'็ดข้ฉฌ้': u'็ดข้ฆฌ้',
u'็ดข้ฆฌ้': u'็ดข้ฆฌ้',
u'็ดข้ข': u'็ดข้บต',
u'็ดซๅง': u'็ดซ่',
u'ๆไธ': u'็ดฎไธ',
u'ๆไธ': u'็ดฎไธ',
u'ๆๅฎ': u'็ดฎๅฎ',
u'ๆๅฅฝ': u'็ดฎๅฅฝ',
u'ๆๅฎ': u'็ดฎๅฏฆ',
u'ๆๅฏจ': u'็ดฎๅฏจ',
u'ๆๅธฆๅญ': u'็ดฎๅธถๅญ',
u'ๆๆ': u'็ดฎๆ',
u'ๆๆ น': u'็ดฎๆ น',
u'ๆ่ฅ': u'็ดฎ็',
u'ๆ็ดง': u'็ดฎ็ท',
u'ๆ่': u'็ดฎ่
ณ',
u'ๆ่ฃน': u'็ดฎ่ฃน',
u'ๆ่ฏ': u'็ดฎ่ฉ',
u'ๆ่ตท': u'็ดฎ่ตท',
u'ๆ้': u'็ดฎ้ต',
u'็ปไธๅฎนๅ': u'็ดฐไธๅฎน้ซฎ',
u'็ปๅฆๅ': u'็ดฐๅฆ้ซฎ',
u'็ป่ด': u'็ดฐ็ทป',
u'็ป็ผ': u'็ดฐ้',
u'็ปไบ': u'็ตๆผ',
u'็ป้': u'็ต่ฃก',
u'็ปไผดๅๆธธ': u'็ตไผดๅ้',
u'็ปไผ': u'็ตๅคฅ',
u'็ปๆ': u'็ต็ดฎ',
u'็ปๅฝฉ': u'็ต็ถต',
u'็ปไฝ': u'็ต้ค',
u'็ปๅ': u'็ต้ซฎ',
u'็ปๅฏนๅ็
ง': u'็ตๅฐๅ็
ง',
u'็ปไบ': u'็ตๆผ',
u'็ปๅนฒ': u'็ตไนพ',
u'็ป่
ฎ่ก': u'็ตก่
ฎ้ฌ',
u'็ปๆๅนฒ่': u'็ตฆๆๅนฒ่',
u'็ปไบ': u'็ตฆๆผ',
u'ไธๆฅ็บฟๅป': u'็ตฒไพ็ทๅป',
u'ไธๅธ': u'็ตฒๅธ',
u'ไธๆฉๅๆจ': u'็ตฒๆฉ้ซฎๆจ',
u'ไธๆฟ': u'็ตฒๆฟ',
u'ไธ็ๅธ': u'็ตฒ็ๅธ',
u'ไธ็ปๅธ': u'็ตฒ็ตจๅธ',
u'ไธ็บฟ': u'็ตฒ็ท',
u'ไธ็ปๅ': u'็ตฒ็นๅป ',
u'ไธ่ซ': u'็ตฒ่ฒ',
u'ไธๅ': u'็ตฒ้ซฎ',
u'็ปๆ': u'็ถ็ดฎ',
u'็ถๆ': u'็ถ็ดฎ',
u'็ปๆไบ': u'็ถๆไบ',
u'็ถๆไบ': u'็ถๆไบ',
u'็ปฟๅ': u'็ถ ้ซฎ',
u'็ปธ็ผๅบ': u'็ถข็ท่',
u'็ปด็ณป': u'็ถญ็นซ',
u'็ปพๅ': u'็ถฐ้ซฎ',
u'็ฝ้': u'็ถฒ่ฃก',
u'็ฝๅฟ': u'็ถฒ่ช',
u'ๅฝฉๅธฆ': u'็ถตๅธถ',
u'ๅฝฉๆ': u'็ถตๆ',
u'ๅฝฉๆฅผ': u'็ถตๆจ',
u'ๅฝฉ็ๆฅผ': u'็ถต็ๆจ',
u'ๅฝฉ็': u'็ถต็',
u'ๅฝฉ็ปธ': u'็ถต็ถข',
u'ๅฝฉ็บฟ': u'็ถต็ท',
u'ๅฝฉ่น': u'็ถต่น',
u'ๅฝฉ่กฃ': u'็ถต่กฃ',
u'็ดง่ด': u'็ท็ทป',
u'็ดง็ปท': u'็ท็น',
u'็ดง็ปท็ปท': u'็ท็น็น',
u'็ดง็ปท็': u'็ท็น่',
u'็ดง่ฟฝไธ่': u'็ท่ฟฝไธๆจ',
u'็ปชไฝ': u'็ท้ค',
u'็ทๅถ': u'็ทๅ
',
u'็ผๅถ': u'็ทๅ
',
u'็ผไฝ': u'็ทจไฝ',
u'็ผๅถๆณ': u'็ทจๅถๆณ',
u'็ผ้': u'็ทจๆก',
u'็ผ็ ่กจ': u'็ทจ็ขผ่กจ',
u'็ผๅถ': u'็ทจ่ฃฝ',
u'็ผ้': u'็ทจ้',
u'็ผๅ': u'็ทจ้ซฎ',
u'็ผๅพ': u'็ทฉๅพต',
u'็ผๅฒ': u'็ทฉ่ก',
u'่ดๅฏ': u'็ทปๅฏ',
u'่ฆๅ': u'็ธ่ฟด',
u'็ผ่ด': u'็ธ็ทป',
u'ๅฟ้': u'็ธฃ่ฃก',
u'ๅฟๅฟ': u'็ธฃ่ช',
u'็ผ้': u'็ธซ่ฃก',
u'็ผๅถ': u'็ธซ่ฃฝ',
u'็ผฉๆ ': u'็ธฎๆ
',
u'็บตๆฌฒ': u'็ธฑๆ
พ',
u'็บคๅคซ': u'็ธดๅคซ',
u'็บคๆ': u'็ธดๆ',
u'ๆป่ฃๅถ': u'็ธฝ่ฃๅถ',
u'็นๅค': u'็น่ค',
u'็น้': u'็น้',
u'็ปทไฝ': u'็นไฝ',
u'็ปทๅญ': u'็นๅญ',
u'็ปทๅธฆ': u'็นๅธถ',
u'็ปทๆๅๆท': u'็นๆๅผๆท',
u'็ปท็ดง': u'็น็ท',
u'็ปท่ธ': u'็น่',
u'็ปท็': u'็น่',
u'็ปท็่ธ': u'็น่่',
u'็ปท็่ธๅฟ': u'็น่่ๅ
',
u'็ปทๅผ': u'็น้',
u'็ฉๅธ้ฃไบๅนฒ': u'็นๅน้ฃไบๅนน',
u'็ปๆข': u'็นๆจ',
u'็ปฃๅ': u'็นกๅ',
u'็ปฃๅฃ': u'็นกๅฃ',
u'็ปฃๅพ': u'็นกๅพ',
u'็ปฃๆท': u'็นกๆถ',
u'็ปฃๆฟ': u'็นกๆฟ',
u'็ปฃๆฏฏ': u'็นกๆฏฏ',
u'็ปฃ็': u'็นก็',
u'็ปฃ็': u'็นก็',
u'็ปฃ่ฑ': u'็นก่ฑ',
u'็ปฃ่กฃ': u'็นก่กฃ',
u'็ปฃ่ตท': u'็นก่ตท',
u'็ปฃ้': u'็นก้ฃ',
u'็ปฃ้': u'็นก้',
u'็ปๅถ': u'็นช่ฃฝ',
u'็ณปไธ': u'็นซไธ',
u'็ณปไธ': u'็นซไธ',
u'็ณปๅฐ': u'็นซๅฐ',
u'็ณปๅ': u'็นซๅ',
u'็ณปๅฟ': u'็นซๅฟ',
u'็ณปๅฟต': u'็นซๅฟต',
u'็ณปๆ': u'็นซๆท',
u'็ณปๆ': u'็นซๆ',
u'็ณปไบ': u'็นซๆผ',
u'็ณปไบไธๅ': u'็นซๆผไธ้ซฎ',
u'็ณป็ป': u'็นซ็ต',
u'็ณป็ดง': u'็นซ็ท',
u'็ณป็ปณ': u'็นซ็นฉ',
u'็ณป็ดฏ': u'็นซ็บ',
u'็ณป่พ': u'็นซ่พญ',
u'็ณป้ฃๆๅฝฑ': u'็นซ้ขจๆๅฝฑ',
u'็ดฏๅ': u'็บๅ',
u'็ดฏๅ ': u'็บๅ ',
u'็ดฏ็ฆ็ป็ปณ': u'็บ็ฆ็ต็นฉ',
u'็ดฏ็ป': u'็บ็ดฒ',
u'็ดฏ่ฃ': u'็บ่ฃ',
u'็ผ ๆ': u'็บ้ฌฅ',
u'ๆๅ': u'็บๅ',
u'ๆๅฏๅฎน้ขๅไบไฝ': u'็บๅฏๅฎน้กๅไบ้ค',
u'ๆๅพไธคๅนด': u'็บๅพๅฉๅนด',
u'ๆๆญค': u'็บๆญค',
u'ๅๅญ': u'็ฝๅญ',
u'ๅๅ็ฝ็ฝ': u'็ฝ็ฝ็ฝ็ฝ',
u'ๅ้จ': u'็ฝ้จ',
u'็ฝฎไบ': u'็ฝฎๆผ',
u'็ฝฎ่จๆ่': u'็ฝฎ่จๆ็ฏ',
u'้ช็': u'็ฝต่',
u'็ฝขไบ': u'็ฝทๆผ',
u'็พ็ณป': u'็พ็นซ',
u'็พๅ ': u'็พไฝ',
u'็พไป': u'็พๅด',
u'็พไบ': u'็พๆผ',
u'็พๅถ': u'็พ่ฃฝ',
u'็พไธ': u'็พ้',
u'็พๅ': u'็พ้ซฎ',
u'็พคไธ': u'็พค้',
u'็พกไฝ': u'็พจ้ค',
u'ไนๅ ': u'็พฉไฝ',
u'ไนไป': u'็พฉๅ',
u'ไนๅบ': u'็พฉ่',
u'็ฟ่พ': u'็ฟ้ข',
u'็ฟฑๆธธ': u'็ฟฑ้',
u'็ฟปๆถ': u'็ฟปๆนง',
u'็ฟปไบ่ฆ้จ': u'็ฟป้ฒ่ฆ้จ',
u'็ฟปๆพ': u'็ฟป้ฌ',
u'่ๅนฒ': u'่ไนพ',
u'่ไป': u'่ๅ',
u'่ๅนฒ้จ': u'่ๅนน้จ',
u'่่': u'่ๆ',
u'่ไบ': u'่ๆผ',
u'่็ท้': u'่็บ้',
u'่ๅบ': u'่่',
u'่ๅง': u'่่',
u'่ๆฟ': u'่้',
u'่้ข็ฎ': u'่้ข็ฎ',
u'่ๅพ': u'่ๅพต',
u'่ๅ
ๅถ': u'่ๅๅถ',
u'่ๆ': u'่้ฌฅ',
u'่ไฝฃ': u'่ๅญ',
u'่่ท': u'่็ฉซ',
u'่ณไฝ': u'่ณ้ค',
u'่ฟไบ': u'่ฟๆผ',
u'่ๆๅฟๅผ': u'่้ฝๅฟ็ฐ',
u'่้': u'่ๅฑ',
u'้ป้ฃๅ': u'่้ขจๅพ',
u'่็ณป': u'่ฏ็นซ',
u'ๅฌไบ': u'่ฝๆผ',
u'่ๅนฒ': u'่ไนพ',
u'่ๆฌฒ': u'่ๆ
พ',
u'่ไธ้ข': u'่็ตฒ้บต',
u'่็พน้ข': u'่็พน้บต',
u'่ๆพ': u'่้ฌ',
u'่้': u'่่ฃก',
u'่่': u'่่',
u'่้': u'่้ฌฑ',
u'่กๆ ': u'่กๆ
',
u'่ฅ็ญๆน่จ': u'่ฅ็ญๆน่จ',
u'่ด้ฆ': u'่ด้ฅ',
u'่บ่': u'่บ่',
u'่่ฏ': u'่่ฅ',
u'่้': u'่่ฃก',
u'่ๅ็': u'่ๅ่',
u'่ๅฐ้': u'่ๅฐ่ฃก',
u'่ๅ': u'่้ซฎ',
u'่่ฝ': u'่่ฝ',
u'่้ฎ': u'่้ต',
u'่กไบ': u'่กไบ',
u'่กๅญๆ': u'่กๅญๆ',
u'่กๆดๅฎ': u'่กๆจธๅฎ',
u'่ก้่กๆถ': u'่ก่ฃก่กๅก',
u'่ฝๅ
ๅถ': u'่ฝๅๅถ',
u'่ฝๅนฒไผ': u'่ฝๅนฒไผ',
u'่ฝๅนฒๆ': u'่ฝๅนฒๆ',
u'่ฝๅนฒๆฐ': u'่ฝๅนฒๆพ',
u'่ฝๅนฒๆฟ': u'่ฝๅนฒๆฟ',
u'่ฝๅนฒๆถ': u'่ฝๅนฒๆถ',
u'่ฝๅนฒ้ข': u'่ฝๅนฒ้ ',
u'่ฝๅนฒ': u'่ฝๅนน',
u'่ฝ่ชๅถ': u'่ฝ่ชๅถ',
u'่ๅฒ': u'่่ก',
u'่ๆข่': u'่ๆข่',
u'่ๆข้ชจ': u'่ๆข้ชจ',
u'่ๆข': u'่ๆจ',
u'่ฑ่ฐทๆบ': u'่ซ็ฉๆฉ',
u'่ฑๅ': u'่ซ้ซฎ',
u'่พ่': u'่พ่',
u'่ไนไปฅไธบ้ฅต': u'่ไนไปฅ็บ้ค',
u'่ๅณ': u'่ๅณ',
u'่ๆฏ': u'่ๆฏ',
u'่็ฌ': u'่็ญ',
u'่พ่': u'่่',
u'่ๅนฒ': u'่ไนพ',
u'่ไฝ': u'่้ค',
u'่่กจ': u'่้ถ',
u'่ๅญ้': u'่ฆๅญ่ฃก',
u'่ๅนฒ': u'่ฆๅนน',
u'่ฐ้': u'่ฐ่ฃก',
u'่ๆณจ': u'่ณ่จป',
u'่็ผ': u'่ณ้',
u'่่ฏ': u'่่ฅ',
u'่คๅ': u'่้ซฎ',
u'่ถๅท': u'่ ๆฒ',
u'่จๆพ': u'่จ้ฌ',
u'่ฃไป': u'่ฃๅ',
u'ๅงๆธธ': u'่ฅ้',
u'่ง่ฐทไบก็พ': u'่ง็ฉไบก็พ',
u'ไธดๆฝผๆๅฎ': u'่จๆฝผ้ฌฅๅฏถ',
u'่ชๅถไธไธ': u'่ชๅถไธไธ',
u'่ชๅถไธๆฅ': u'่ชๅถไธไพ',
u'่ชๅถไธ': u'่ชๅถไธ',
u'่ชๅถไนๅ': u'่ชๅถไนๅ',
u'่ชๅถไน่ฝ': u'่ชๅถไน่ฝ',
u'่ชๅถไป': u'่ชๅถไป',
u'่ชๅถไผ': u'่ชๅถไผ',
u'่ชๅถไฝ ': u'่ชๅถไฝ ',
u'่ชๅถๅ': u'่ชๅถๅ',
u'่ชๅถๅฐ': u'่ชๅถๅฐ',
u'่ชๅถๅฅน': u'่ชๅถๅฅน',
u'่ชๅถๆ': u'่ชๅถๆ',
u'่ชๅถๆ': u'่ชๅถๆ',
u'่ชๅถๆ': u'่ชๅถๆ',
u'่ชๅถ็่ฝ': u'่ชๅถ็่ฝ',
u'่ชๅถ่ฝๅ': u'่ชๅถ่ฝๅ',
u'่ชไบ': u'่ชๆผ',
u'่ชๅถ': u'่ช่ฃฝ',
u'่ช่ง่ชๆฟ': u'่ช่ฆบ่ชๆฟ',
u'่ณๅค': u'่ณๅค',
u'่ณไบ': u'่ณๆผ',
u'่ดไบ': u'่ดๆผ',
u'่ปไบ': u'่ปๆผ',
u'่่ฐท': u'่็ฉ',
u'ไธๅๅถ': u'่ๅๅถ',
u'ๅด่ด': u'่็ทป',
u'ไธพๆ่กจ': u'่ๆ่กจ',
u'ไธพๆ่กจๅณ': u'่ๆ่กจๆฑบ',
u'ๆงๅบ': u'่ๅบ',
u'ๆงๅ': u'่ๆ',
u'ๆงๅๅฒ': u'่ๆญทๅฒ',
u'ๆง่ฏ': u'่่ฅ',
u'ๆงๆธธ': u'่้',
u'ๆง่กจ': u'่้ถ',
u'ๆง้': u'่้',
u'ๆง้่กจ': u'่้้ถ',
u'่ๅนฒๅ็ฆ': u'่ไนพๅ็ฆ',
u'่ๅท': u'่ๆฒ',
u'่ชๆตทๅ': u'่ชๆตทๆ',
u'่ชๆตทๅๅฒ': u'่ชๆตทๆญทๅฒ',
u'่นๅชๅพ': u'่นๅชๅพ',
u'่นๅชๆ': u'่นๅชๆ',
u'่นๅช่ฝ': u'่นๅช่ฝ',
u'่น้': u'่น้',
u'่นๅช': u'่น้ป',
u'่ฐๅช': u'่ฆ้ป',
u'่ฏ่ฏ': u'่ฏ่ฅ',
u'่ฒๆฌฒ': u'่ฒๆ
พ',
u'่ทๅ': u'่ทๅ',
u'่ณๅ': u'่ทๅ',
u'่ธๆจไธฐไธฐ': u'่ธๆจไธฐไธฐ',
u'่่ฏ': u'่่ฅ',
u'่ๆๅนฒ': u'่ๆไนพ',
u'่ฑๆณ็ปฃ่ฟ': u'่ฑๆณ็นก่ฟ',
u'่ฑๅท': u'่ฑๆฒ',
u'่ฑ็้': u'่ฑ็่ฃก',
u'่ฑๅบต่ฏ้': u'่ฑ่ด่ฉ้ธ',
u'่ฑ่ฏ': u'่ฑ่ฅ',
u'่ฑ้': u'่ฑ้',
u'่ฑ้ฉฌๅๅด': u'่ฑ้ฆฌๅผๅด',
u'่ฑๅ': u'่ฑ้ฌจ',
u'่้': u'่่ฃก',
u'่ฅๅนฒ': u'่ฅๅนฒ',
u'่ฆๅนฒ': u'่ฆๅนน',
u'่ฆ่ฏ': u'่ฆ่ฅ',
u'่ฆ้': u'่ฆ่ฃก',
u'่ฆๆ': u'่ฆ้ฌฅ',
u'่้บป': u'่ง้บป',
u'่ฑๅ ': u'่ฑไฝ',
u'่น่ฆ': u'่น็ธ',
u'่้ฝๆท': u'่้ฝๆพฑ',
u'่ๆๅ': u'่ๆๅ',
u'่ๆๆญฃๅฌ': u'่ๆๆญฃๅฌ',
u'่ๆ็พ': u'่ๆ็พ',
u'่ๆๆพ': u'่ๆ็พ',
u'่ๆ็ง': u'่ๆ็ง',
u'่ๆ็จ': u'่ๆ็จ',
u'่ๆ่ณ': u'่ๆ่ณ',
u'่ๆ่ค': u'่ๆ่ค',
u'่ๆ่': u'่ๆ่',
u'่็ปๅ ก': u'่็ปๅ ก',
u'่ถๅ ': u'่ถๅ ',
u'่ถๅบ': u'่ถ่',
u'่ถไฝ': u'่ถ้ค',
u'่ถ้ข': u'่ถ้บต',
u'่ไธ้': u'่ๅข่ฃก',
u'่ๅนฟ': u'่ๅนฟ',
u'่่': u'่่',
u'่่ฏ': u'่่ฅ',
u'่ๅฑ
': u'่ๅฑ
',
u'่่ป': u'่่ป',
u'่้ฅฅ': u'่้ฅ',
u'่ท่ฑๆท': u'่ท่ฑๆพฑ',
u'ๅบไธ': u'่ไธ',
u'ๅบไธป': u'่ไธป',
u'ๅบๅจ': u'่ๅจ',
u'ๅบๅ': u'่ๅก',
u'ๅบไธฅ': u'่ๅด',
u'ๅบๅญ': u'่ๅ',
u'ๅบๅฃซ้กฟ้': u'่ๅฃซ้ ้',
u'ๅบๅญ': u'่ๅญ',
u'ๅบๅฎข': u'่ๅฎข',
u'ๅบๅฎถ': u'่ๅฎถ',
u'ๅบๆท': u'่ๆถ',
u'ๅบๆฟ': u'่ๆฟ',
u'ๅบๆฌ': u'่ๆฌ',
u'ๅบ็ฐ': u'่็ฐ',
u'ๅบ็จผ': u'่็จผ',
u'ๅบ่่ถๅ': u'่่่ถๅ',
u'ๅบ้': u'่่ฃก',
u'ๅบ่ฏญ': u'่่ช',
u'ๅบๅ': u'่่พฒ',
u'ๅบ้': u'่้',
u'ๅบ้ข': u'่้ข',
u'ๅบ้ช': u'่้จท',
u'่ๅนฒ': u'่ๅนน',
u'่ฝ่ก': u'่ฝ่ฉ',
u'่ไธไฝ': u'่็ตฒ้ซ',
u'่ๅนฒ': u'่ไนพ',
u'่่ด': u'่่ด',
u'่ ๆฃฑ่': u'่ ็จ่',
u'่ ่ๅนฒ': u'่ ่ฟไนพ',
u'ๅไธฅ้': u'่ฏๅด้',
u'ๅๅ': u'่ฏ้ซฎ',
u'ไธไธๅช': u'่ฌไธๅช',
u'ไธไธช': u'่ฌๅ',
u'ไธๅคๅช': u'่ฌๅค้ป',
u'ไธๅคฉๅ': u'่ฌๅคฉๅพ',
u'ไธๅนดๅ่กจ': u'่ฌๅนดๆ้ถ',
u'ไธๅ': u'่ฌๆ',
u'ไธๅๅฒ': u'่ฌๆญทๅฒ',
u'ไธ็ญพๆๆถ': u'่ฌ็ฑคๆๆถ',
u'ไธๆ': u'่ฌ็ดฎ',
u'ไธ่ฑก': u'่ฌ่ฑก',
u'ไธๅช': u'่ฌ้ป',
u'ไธไฝ': u'่ฌ้ค',
u'่ฝ่
ฎ่ก': u'่ฝ่
ฎ้ฌ',
u'่ฝๅ': u'่ฝ้ซฎ',
u'ๅถๅถ็น': u'่ๅถ็น',
u'็ๅฟ': u'่ๅ',
u'็ๅๅถ': u'่ๅๅถ',
u'็ไนฆ็ซ่ฏด': u'่ๆธ็ซ่ชช',
u'็่ฒ่ฝฏไฝ': u'่่ฒ่ป้ซ',
u'็้ๆๅบ': u'่้ๆๅบ',
u'็ๅฝ': u'่้',
u'็ๅฝ่งๅ': u'่้่ฆๅ',
u'่กๅ ': u'่กไฝ',
u'่ก่ๅนฒ': u'่ก่ไนพ',
u'่ฃๆฐๅฐๅ': u'่ฃๆฐๅฐ้ซฎ',
u'่ซ่ฆ้ๅ็ไน่ฏ': u'่ซ่่ฃก่ณฃ็้บผ่ฅ',
u'่ๆฑ่ฏ': u'่ๆฑ่ฅ',
u'่ๅบ': u'่่',
u'่้พ้ฒ': u'่้ง้ฒ',
u'่ๅ': u'่้ซฎ',
u'่ๆฏ': u'่ผๆฎ',
u'่ๅ': u'่ผ้ซฎ',
u'่้': u'่ผ้ฌฑ',
u'่ๅ': u'่้ซฎ',
u'่่ก': u'่้ฌ',
u'่้กป': u'่้ฌ',
u'่้': u'่้ฌฑ',
u'่ฌ่ฌๆพๆพ': u'่ฌ่ฌ้ฌ้ฌ',
u'่ฌๅ': u'่ฌ้ซฎ',
u'่ฌๆพ': u'่ฌ้ฌ',
u'ๅ็ปฅ': u'่็ถ',
u'่ฑ้': u'่ฅ้ฌฑ',
u'่้บฆ้ข': u'่้บฅ้บต',
u'่กๆฅ่กๅป': u'่ฉไพ่ฉๅป',
u'่กๅฅณ': u'่ฉๅฅณ',
u'่กๅฆ': u'่ฉๅฉฆ',
u'่กๅฏ': u'่ฉๅฏ',
u'่กๅนณ': u'่ฉๅนณ',
u'่กๆฐๅ่ ': u'่ฉๆฐฃ่ฟด่
ธ',
u'่กๆถค': u'่ฉๆป',
u'่กๆผพ': u'่ฉๆผพ',
u'่ก็ถ': u'่ฉ็ถ',
u'่กไบง': u'่ฉ็ข',
u'่ก่': u'่ฉ่',
u'่ก่น': u'่ฉ่น',
u'่ก่ก': u'่ฉ่ฉ',
u'่งๅ': u'่ญ่',
u'่ๅนธ': u'่ๅ',
u'่ๅนฒ': u'่ๅนน',
u'ๅงๆฏ่็่พฃ': u'่ๆฏ่็่พฃ',
u'ๅงๆซ': u'่ๆซ',
u'ๅงๆก': u'่ๆก',
u'ๅงๆฏ': u'่ๆฏ',
u'ๅงๆฑ': u'่ๆฑ',
u'ๅงๆฑค': u'่ๆนฏ',
u'ๅง็': u'่็',
u'ๅง็ณ': u'่็ณ',
u'ๅงไธ': u'่็ตฒ',
u'ๅง่่พฃ': u'่่่พฃ',
u'ๅง่ถ': u'่่ถ',
u'ๅง่': u'่่',
u'ๅง้ฅผ': u'่้ค
',
u'ๅง้ป': u'่้ป',
u'่ๅ': u'่้ซฎ',
u'่ๅ': u'่่',
u'่งๆด': u'่ดๆด',
u'่ด็ฏ': u'่ด็ฏ',
u'่ง็ฏ': u'่ด็ฏ',
u'ๅไปฅ': u'่ไปฅ',
u'ๅๅฉ': u'่ๅฉ',
u'ๅๅฏๅ
ต': u'่ๅฏๅ
ต',
u'ๅๆ': u'่ๆ',
u'ๅๆบ': u'่ๆฉ',
u'ๅๆญค': u'่ๆญค',
u'ๅ็ฑ': u'่็ฑ',
u'ๅ็ฎธไปฃ็ญน': u'่็ฎธไปฃ็ฑ',
u'ๅ็': u'่่',
u'ๅ่ต': u'่่ณ',
u'่ๆท': u'่ๆพฑ',
u'่ไบ': u'่ๆผ',
u'่ๅ': u'่ๆ',
u'่ๅๅฒ': u'่ๆญทๅฒ',
u'่่ๆญๅฟ': u'่็ๆญๅ
',
u'่คๅถ': u'่ค่ฃฝ',
u'่ฏไธธ': u'่ฅไธธ',
u'่ฏๅธ': u'่ฅๅธ',
u'่ฏๅฐๅฝ้ค': u'่ฅๅฐๅฝ้ค',
u'่ฏๅฐ็้ค': u'่ฅๅฐ็้ค',
u'่ฏๅ': u'่ฅๅ',
u'่ฏๅ': u'่ฅๅ',
u'่ฏๅ': u'่ฅๅ',
u'่ฏๅ': u'่ฅๅ',
u'่ฏๅณ': u'่ฅๅณ',
u'่ฏๅ': u'่ฅๅ',
u'่ฏๅ': u'่ฅๅ',
u'่ฏๅ': u'่ฅๅฎ',
u'่ฏๅฉ': u'่ฅๅฉ',
u'่ฏๅญฆ': u'่ฅๅญธ',
u'่ฏๅฎณ': u'่ฅๅฎณ',
u'่ฏไธ': u'่ฅๅฐ',
u'่ฏๅฑ': u'่ฅๅฑ',
u'่ฏๅธ': u'่ฅๅธซ',
u'่ฏๅบ': u'่ฅๅบ',
u'่ฏๅ': u'่ฅๅป ',
u'่ฏๅผ': u'่ฅๅผ',
u'่ฏๆง': u'่ฅๆง',
u'่ฏๆฟ': u'่ฅๆฟ',
u'่ฏๆ': u'่ฅๆ',
u'่ฏๆน': u'่ฅๆน',
u'่ฏๆ': u'่ฅๆ',
u'่ฏๆฃ': u'่ฅๆฃ',
u'่ฏๆฃๅฑ': u'่ฅๆชขๅฑ',
u'่ฏๆฐด': u'่ฅๆฐด',
u'่ฏๆฒน': u'่ฅๆฒน',
u'่ฏๆถฒ': u'่ฅๆถฒ',
u'่ฏๆธฃ': u'่ฅๆธฃ',
u'่ฏ็': u'่ฅ็',
u'่ฏ็ฉ': u'่ฅ็ฉ',
u'่ฏ็': u'่ฅ็',
u'่ฏ็': u'่ฅ็',
u'่ฏ็ถ': u'่ฅ็ถ',
u'่ฏ็จ': u'่ฅ็จ',
u'่ฏ็': u'่ฅ็',
u'่ฏ็': u'่ฅ็',
u'่ฏ็ณ': u'่ฅ็ณ',
u'่ฏ็ง': u'่ฅ็ง',
u'่ฏ็ฎฑ': u'่ฅ็ฎฑ',
u'่ฏ็ญพ': u'่ฅ็ฑค',
u'่ฏ็ฒ': u'่ฅ็ฒ',
u'่ฏ็ณ': u'่ฅ็ณ',
u'่ฏ็บฟ': u'่ฅ็ท',
u'่ฏ็ฝ': u'่ฅ็ฝ',
u'่ฏ่': u'่ฅ่',
u'่ฏ่': u'่ฅ่',
u'่ฏ่ถ': u'่ฅ่ถ',
u'่ฏ่': u'่ฅ่',
u'่ฏ่ก': u'่ฅ่ก',
u'่ฏ่ดฉ': u'่ฅ่ฒฉ',
u'่ฏ่ดน': u'่ฅ่ฒป',
u'่ฏ้': u'่ฅ้',
u'่ฏๅปๅญฆ็ณป': u'่ฅ้ซๅญธ็ณป',
u'่ฏ้': u'่ฅ้',
u'่ฏ้': u'่ฅ้',
u'่ฏ้บ': u'่ฅ้ช',
u'่ฏๅคด': u'่ฅ้ ญ',
u'่ฏ้ฅต': u'่ฅ้ค',
u'่ฏ้ขๅฟ': u'่ฅ้บตๅ',
u'่ๆ': u'่ๅด',
u'่ดๅซ็': u'่ๅซ่',
u'่ดๆถต็': u'่ๆถต่',
u'่นๆๅนฒ': u'่ๆไนพ',
u'่ๅ': u'่ฟ่',
u'่ๅๅนฒ': u'่ฟ่ไนพ',
u'่้กป': u'่้ฌ',
u'่ๆ': u'่้ฌฅ',
u'ๅทๅฟ': u'่่ช',
u'่ซ้จ': u'่ซ้จ',
u'่ๅจ็ๆ': u'่ๅ็้ฌฅ',
u'่ๅๅฅณๅฆ': u'่้ซฎๅฅณๅฆ',
u'่่ซ่ฏ': u'่่ฒ่ฅ',
u'่ๅ': u'่ๅ',
u'่ๆถ': u'่ๆนง',
u'่ๅ': u'่ๆบ',
u'่้่ฐๆฒน': u'่่ฃก่ชฟๆฒน',
u'่กๆ': u'่กๆ',
u'่ก็ฅญ': u'่ก็ฅญ',
u'่่่ซ่ซ': u'่่่ซ่ซ',
u'่่ฐฎ': u'่่ญ',
u'่ฎ่จ็ธๅ': u'่ฃ่จ็ธๅผ',
u'่ๅนฒ': u'่ถไนพ',
u'่ๅ': u'่ปๅ',
u'่ปๅ': u'่ปๅ',
u'่ ๅนฒ': u'่ ๅนน',
u'่ฎๅนฒ': u'่ ปๅนน',
u'่กๆผ': u'่กๆ',
u'่กไฝ': u'่ก้ค',
u'่กไบๅ': u'่กไบๆ',
u'่กไบๅๅฒ': u'่กไบๆญทๅฒ',
u'่กๅถ': u'่กๅ',
u'่กๅถๅ': u'่กๅๅ',
u'่กๅถๅพ': u'่กๅๅพ',
u'่กไบ': u'่กๆผ',
u'่ก็พ้่ๅไบไนๅ': u'่ก็พ้่ๅๆผไนๅ',
u'่กๅ': u'่ก่ก',
u'ๅซๆ้': u'่กๆ้',
u'ๅฒไธ': u'่กไธ',
u'ๅฒไธ': u'่กไธ',
u'ๅฒๆฅ': u'่กไพ',
u'ๅฒๅ': u'่กๅ',
u'ๅฒๅ ': u'่กๅ ',
u'ๅฒๅบ': u'่กๅบ',
u'ๅฒๅฐ': u'่กๅฐ',
u'ๅฒๅบ': u'่กๅบ',
u'ๅฒๅ
': u'่กๅ',
u'ๅฒๅ': u'่กๅ',
u'ๅฒๅฒ': u'่กๅ',
u'ๅฒๅจ': u'่กๅ',
u'ๅฒๅป': u'่กๅป',
u'ๅฒๅฃ': u'่กๅฃ',
u'ๅฒๅฎ': u'่กๅฎ',
u'ๅฒๅ ': u'่กๅ ',
u'ๅฒๅ้ท้ต': u'่กๅ
้ท้ฃ',
u'ๅฒๅ': u'่กๅฃ',
u'ๅฒๅคฉ': u'่กๅคฉ',
u'ๅฒๅทๆๅบ': u'่กๅทๆๅบ',
u'ๅฒๅฟ': u'่กๅฟ',
u'ๅฒๆ': u'่กๆ',
u'ๅฒๆ': u'่กๆ',
u'ๅฒๅป': u'่กๆ',
u'ๅฒๆฃ': u'่กๆฃ',
u'ๅฒๆ': u'่กๆฎบ',
u'ๅฒๅณ': u'่กๆฑบ',
u'ๅฒๆณข': u'่กๆณข',
u'ๅฒๆตช': u'่กๆตช',
u'ๅฒๆฟ': u'่กๆฟ',
u'ๅฒ็ถ': u'่ก็ถ',
u'ๅฒ็น': u'่ก็น',
u'ๅฒ็ ด': u'่ก็ ด',
u'ๅฒ็จ': u'่ก็จ',
u'ๅฒ็ช': u'่ก็ช',
u'ๅฒ็บฟ': u'่ก็ท',
u'ๅฒ็': u'่ก่',
u'ๅฒ่ฆ': u'่ก่ฆ',
u'ๅฒ่ตท': u'่ก่ตท',
u'ๅฒ่ฝฆ': u'่ก่ป',
u'ๅฒ่ฟ': u'่ก้ฒ',
u'ๅฒ่ฟ': u'่ก้',
u'ๅฒ้': u'่ก้',
u'ๅฒ้': u'่ก้',
u'ๅฒ้ท': u'่ก้ท',
u'ๅฒๅคด้ต': u'่ก้ ญ้ฃ',
u'ๅฒ้ฃ': u'่ก้ขจ',
u'่กฃ็ปฃๆผ่ก': u'่กฃ็นกๆ่ก',
u'่กจๅพ': u'่กจๅพต',
u'่กจ้': u'่กจ่ฃก',
u'่กจ้ข': u'่กจ้ข',
u'่กทไบ': u'่กทๆผ',
u'่ข้': u'่ข่ฃก',
u'่ข่กจ': u'่ข้ถ',
u'่ข้': u'่ข่ฃก',
u'่ขซ้': u'่ขซ่ฃก',
u'่ขซๅค': u'่ขซ่ค',
u'่ขซ่ฆ็': u'่ขซ่ฆ่',
u'่ขซๅไฝฏ็': u'่ขซ้ซฎไฝฏ็',
u'่ขซๅๅ
ฅๅฑฑ': u'่ขซ้ซฎๅ
ฅๅฑฑ',
u'่ขซๅๅทฆ่กฝ': u'่ขซ้ซฎๅทฆ่กฝ',
u'่ขซๅ็ผจๅ ': u'่ขซ้ซฎ็บๅ ',
u'่ขซๅ้ณ็': u'่ขซ้ซฎ้ฝ็',
u'่ฃๅนถ': u'่ฃไฝต',
u'่ฃๅถ': u'่ฃ่ฃฝ',
u'้ๆ': u'่ฃๆ',
u'้ๆตท': u'่ฃๆตท',
u'่กฅไบ': u'่ฃๆผ',
u'่กฅ่ฏ': u'่ฃ่ฅ',
u'่กฅ่ก่ฏ': u'่ฃ่ก่ฅ',
u'่กฅๆณจ': u'่ฃ่จป',
u'่ฃ
ๆ': u'่ฃๆบ',
u'้ๅพๅค่ฟ': u'่ฃกๅพๅค้ฃ',
u'้ๅค': u'่ฃกๅค',
u'้ๅฑ': u'่ฃกๅฑ',
u'้ๅฑ': u'่ฃกๅฑค',
u'้ๅธ': u'่ฃกๅธ',
u'้ๅธฆ': u'่ฃกๅธถ',
u'้ๅผฆ': u'่ฃกๅผฆ',
u'้ๅบๅคๅ': u'่ฃกๆๅคๅ',
u'้่': u'่ฃก่',
u'้่กฃ': u'่ฃก่กฃ',
u'้้ๅคๅฝ': u'่ฃก้ๅคๅ',
u'้้ๅคๆ': u'่ฃก้ๅคๆต',
u'้่พน': u'่ฃก้',
u'้้ด': u'่ฃก้',
u'้้ข': u'่ฃก้ข',
u'้้ขๅ
': u'่ฃก้ขๅ
',
u'้ๅคด': u'่ฃก้ ญ',
u'ๅถไปถ': u'่ฃฝไปถ',
u'ๅถไฝ': u'่ฃฝไฝ',
u'ๅถๅ': u'่ฃฝๅ',
u'ๅถๅค': u'่ฃฝๅ',
u'ๅถๅฐ': u'่ฃฝๅฐ',
u'ๅถๅท': u'่ฃฝๅท',
u'ๅถๅ': u'่ฃฝๅ',
u'ๅถๅ': u'่ฃฝๅ',
u'ๅถๅ': u'่ฃฝๅ',
u'ๅถๅพ': u'่ฃฝๅ',
u'ๅถๅพ': u'่ฃฝๅพ',
u'ๅถๆ': u'่ฃฝๆ',
u'ๅถๆณ': u'่ฃฝๆณ',
u'ๅถๆต': u'่ฃฝๆผฟ',
u'ๅถไธบ': u'่ฃฝ็บ',
u'ๅถ็': u'่ฃฝ็',
u'ๅถ็': u'่ฃฝ็',
u'ๅถ็จ': u'่ฃฝ็จ',
u'ๅถ็ณ': u'่ฃฝ็ณ',
u'ๅถ็บธ': u'่ฃฝ็ด',
u'ๅถ่ฏ': u'่ฃฝ่ฅ',
u'ๅถ่กจ': u'่ฃฝ่กจ',
u'ๅถ้ ': u'่ฃฝ้ ',
u'ๅถ้ฉ': u'่ฃฝ้ฉ',
u'ๅถ้': u'่ฃฝ้',
u'ๅถ็': u'่ฃฝ้นฝ',
u'ๅคไปๅนดๅฆ': u'่คไปๅนดๅฆ',
u'ๅคไปฅ็พไธ': u'่คไปฅ็พ่ฌ',
u'ๅคไฝ': u'่คไฝ',
u'ๅคไฟก': u'่คไฟก',
u'ๅคๅ
้ณ': u'่คๅ
้ณ',
u'ๅคๅฝๆฐ': u'่คๅฝๆธ',
u'ๅคๅๆฐ': u'่คๅๆธ',
u'ๅคๅๆ': u'่คๅๆ',
u'ๅคๅ่งฃ': u'่คๅ่งฃ',
u'ๅคๅ': u'่คๅ',
u'ๅคๅฉ': u'่คๅฉ',
u'ๅคๅฐ': u'่คๅฐ',
u'ๅคๅฅ': u'่คๅฅ',
u'ๅคๅ': u'่คๅ',
u'ๅคๅ': u'่คๅ',
u'ๅคๅ': u'่คๅก',
u'ๅคๅฃ': u'่คๅฃ',
u'ๅคๅฃฎ': u'่คๅฃฏ',
u'ๅคๅง': u'่คๅง',
u'ๅคๅญ้ฎ': u'่คๅญ้ต',
u'ๅคๅฎก': u'่คๅฏฉ',
u'ๅคๅ': u'่คๅฏซ',
u'ๅคๅฏนๆฐ': u'่คๅฐๆธ',
u'ๅคๅนณ้ข': u'่คๅนณ้ข',
u'ๅคๅผ': u'่คๅผ',
u'ๅคๅค': u'่คๅพฉ',
u'ๅคๆฐ': u'่คๆธ',
u'ๅคๆฌ': u'่คๆฌ',
u'ๅคๆฅ': u'่คๆฅ',
u'ๅคๆ ธ': u'่คๆ ธ',
u'ๅคๆฃ': u'่คๆชข',
u'ๅคๆฌก': u'่คๆฌก',
u'ๅคๆฏ': u'่คๆฏ',
u'ๅคๅณ': u'่คๆฑบ',
u'ๅคๆต': u'่คๆต',
u'ๅคๆต': u'่คๆธฌ',
u'ๅคไบฉ็': u'่ค็็',
u'ๅคๅ': u'่ค็ผ',
u'ๅค็ฎ': u'่ค็ฎ',
u'ๅค็ผ': u'่ค็ผ',
u'ๅค็ง': u'่ค็จฎ',
u'ๅค็บฟ': u'่ค็ท',
u'ๅคไน ': u'่ค็ฟ',
u'ๅค่ฒ': u'่ค่ฒ',
u'ๅคๅถ': u'่ค่',
u'ๅคๅถ': u'่ค่ฃฝ',
u'ๅค่ฏ': u'่ค่จบ',
u'ๅค่ฏ': u'่ค่ฉ',
u'ๅค่ฏ': u'่ค่ฉ',
u'ๅค่ฏ': u'่ค่ฉฆ',
u'ๅค่ฏพ': u'่ค่ชฒ',
u'ๅค่ฎฎ': u'่ค่ญฐ',
u'ๅคๅๅฝๆฐ': u'่ค่ฎๅฝๆธ',
u'ๅค่ต': u'่ค่ณฝ',
u'ๅค่พ
้ณ': u'่ค่ผ้ณ',
u'ๅค่ฟฐ': u'่ค่ฟฐ',
u'ๅค้': u'่ค้ธ',
u'ๅค้ฑ': u'่ค้ข',
u'ๅค้
': u'่ค้ฑ',
u'ๅคๆ': u'่ค้',
u'ๅค็ต': u'่ค้ป',
u'ๅค้ณ': u'่ค้ณ',
u'ๅค้ต': u'่ค้ป',
u'่ค่ต': u'่ค่ฎ',
u'่กฌ้': u'่ฅฏ่ฃก',
u'่ฅฟๅ ': u'่ฅฟไฝ',
u'่ฅฟๅจ้': u'่ฅฟๅจ้',
u'่ฅฟๅฒณ': u'่ฅฟๅถฝ',
u'่ฅฟๆ': u'่ฅฟๆ',
u'่ฅฟๅ': u'่ฅฟๆ',
u'่ฅฟๅๅฒ': u'่ฅฟๆญทๅฒ',
u'่ฅฟ็ฑณ่ฐท': u'่ฅฟ็ฑณ่ฐท',
u'่ฅฟ่ฏ': u'่ฅฟ่ฅ',
u'่ฅฟ่ฐท็ฑณ': u'่ฅฟ่ฐท็ฑณ',
u'่ฅฟๆธธ': u'่ฅฟ้',
u'่ฆๅ ': u'่ฆไฝ',
u'่ฆๅ
ๅถ': u'่ฆๅๅถ',
u'่ฆๅ ๅ': u'่ฆๅ ๅ',
u'่ฆ่ชๅถ': u'่ฆ่ชๅถ',
u'่ฆๅฒ': u'่ฆ่ก',
u'่ฆไน': u'่ฆ้บผ',
u'่ฆไบก': u'่ฆไบก',
u'่ฆๅฝ': u'่ฆๅฝ',
u'่ฆๅทขไนไธๆ ๅฎๅต': u'่ฆๅทขไนไธ็กๅฎๅต',
u'่ฆๆฐด้พๆถ': u'่ฆๆฐด้ฃๆถ',
u'่ฆๆฒก': u'่ฆๆฒ',
u'่ฆ็': u'่ฆ่',
u'่ฆ็': u'่ฆ่',
u'่ฆ็็': u'่ฆ่่',
u'่ฆ่พ': u'่ฆ่ฝ',
u'่ฆ้จ็ฟปไบ': u'่ฆ้จ็ฟป้ฒ',
u'่งไบ': u'่ฆๆผ',
u'่งๆฃฑ่ง่ง': u'่ฆ็จ่ฆ่ง',
u'่ง็ด ๆฑๆด': u'่ฆ็ด ๆฑๆจธ',
u'่ง้ไธๆ': u'่ฆ้ไธๆ',
u'่งๅ': u'่ฆๅ',
u'่ง่': u'่ฆ็ฏ',
u'่ฆๅฆๅฏไป': u'่ฆๅฆๅฏ่ฎ',
u'่งไบ': u'่ฆๆผ',
u'่ง้': u'่งๆก',
u'่ง่ฝๅ': u'่ง่ฝ็ผ',
u'่ง่ฝ้': u'่ง่ฝ่ฃก',
u'่งๆฃฑ': u'่ง็จ',
u'่งฃ้': u'่งฃๅฑ',
u'่งฃ็่ฏ': u'่งฃ็่ฅ',
u'่งฃ่ฏ': u'่งฃ่ฅ',
u'่งฃ้ไป้กป็ณป้ไบบ': u'่งฃ้ดไป้ ็นซ้ดไบบ',
u'่งฃ้่ฟ้กป็ณป้ไบบ': u'่งฃ้ด้้ ็นซ้ดไบบ',
u'่งฃๅไฝฏ็': u'่งฃ้ซฎไฝฏ็',
u'่งฆ้กป': u'่งธ้ฌ',
u'่จไบ': u'่จไบ',
u'่จๅคง่ๅคธ': u'่จๅคง่ๅคธ',
u'่จ่พฉ่็กฎ': u'่จ่พฏ่็กฎ',
u'่ฎขๅถ': u'่จ่ฃฝ',
u'่ฎกๅ': u'่จๅ',
u'่ฎกๆถ่กจ': u'่จๆ้ถ',
u'ๆไบ': u'่จไบ',
u'ๆไบ': u'่จไบ',
u'ๆไบค': u'่จไบค',
u'ๆไบบ': u'่จไบบ',
u'ๆไป': u'่จไป',
u'ๆๅฟๆ': u'่จๅ
ๆ',
u'ๆๅค่ฎฝไป': u'่จๅค่ซทไป',
u'ๆๅ': u'่จๅ',
u'ๆๅฝ': u'่จๅฝ',
u'ๆๅ': u'่จๅ',
u'ๆๆขฆ': u'่จๅคข',
u'ๆๅคง': u'่จๅคง',
u'ๆๅญค': u'่จๅญค',
u'ๆๅบ': u'่จๅบ',
u'ๆๆ
': u'่จๆ
',
u'ๆ็พ': u'่จ็พ',
u'ๆ็
': u'่จ็
',
u'ๆ็ฎก': u'่จ็ฎก',
u'ๆ่จ': u'่จ่จ',
u'ๆ่ฏ': u'่จ่ฉ',
u'ๆไนฐ': u'่จ่ฒท',
u'ๆๅ': u'่จ่ณฃ',
u'ๆ่บซ': u'่จ่บซ',
u'ๆ่พ': u'่จ่พญ',
u'ๆ่ฟ': u'่จ้',
u'ๆ่ฟ': u'่จ้',
u'ๆ้': u'่จ้',
u'่ฎธๆฟ่ตท็ป': u'่จฑๆฟ่ตท็ถ',
u'่ฏ่ฏด็': u'่จด่ชช่',
u'ๆณจไธ': u'่จปไธ',
u'ๆณจๅ': u'่จปๅ',
u'ๆณจๅคฑ': u'่จปๅคฑ',
u'ๆณจๅฎ': u'่จปๅฎ',
u'ๆณจๆ': u'่จปๆ',
u'ๆณจๆ ': u'่จปๆจ',
u'ๆณจ็ๅจๅจ': u'่จป็ๅจๅจ',
u'ๆณจ็': u'่จป็',
u'ๆณจ่': u'่จป่
ณ',
u'ๆณจ่งฃ': u'่จป่งฃ',
u'ๆณจ่ฎฐ': u'่จป่จ',
u'ๆณจ่ฏ': u'่จป่ญฏ',
u'ๆณจ้': u'่จป้ท',
u'ๆณจ๏ผ': u'่จป๏ผ',
u'่ฏๆญๅ': u'่ฉๆท็ผ',
u'่ฏๆณจ': u'่ฉ่จป',
u'่ฏๅนฒ': u'่ฉๅนน',
u'่ฏๆฑ': u'่ฉๅฝ',
u'่ฏไฝ': u'่ฉ้ค',
u'่ฏขไบ': u'่ฉขๆผ',
u'่ฏขไบๅ่': u'่ฉขๆผ่ป่',
u'่ฏ่ฏ': u'่ฉฆ่ฅ',
u'่ฏๅถ': u'่ฉฆ่ฃฝ',
u'่ฏไบ': u'่ฉฉไบ',
u'่ฉฉไบ': u'่ฉฉไบ',
u'่ฏ่ต': u'่ฉฉ่ฎ',
u'่ฏ้': u'่ฉฉ้',
u'่ฏไฝ': u'่ฉฉ้ค',
u'่ฏ้ๆ่ฏ': u'่ฉฑ่ฃกๆ่ฉฑ',
u'่ฏฅ้': u'่ฉฒ้',
u'่ฏฆๅพๅๅผ': u'่ฉณๅพตๅๅผ',
u'่ฏฆๆณจ': u'่ฉณ่จป',
u'่ฏ่ต': u'่ช่ฎ',
u'ๅคธๅคๆ้ก': u'่ชๅค้ฌฅ้ก',
u'ๅคธ่ฝๆๆบ': u'่ช่ฝ้ฌฅๆบ',
u'ๅคธ่ต': u'่ช่ฎ',
u'ๅฟๅ': u'่ชๅ',
u'ๅฟๅ': u'่ชๅ',
u'ๅฟๅบ': u'่ชๆ
ถ',
u'ๅฟๅผ': u'่ช็ฐ',
u'่ฎคๅ': u'่ชๆบ',
u'่ฏฑๅฅธ': u'่ชๅงฆ',
u'่ฏญไบ': u'่ชไบ',
u'่ฏญๆฑ': u'่ชๅฝ',
u'่ฏญๆไบ': u'่ชๆไบ',
u'่ชๆไบ': u'่ชๆไบ',
u'่ฏๅพ': u'่ช ๅพต',
u'่ฏๆด': u'่ช ๆจธ',
u'่ฏฌ่': u'่ชฃ่ก',
u'่ฏด็': u'่ชช่',
u'่ฐๅนฒ็': u'่ชฐๅนน็',
u'่ฏพๅพ': u'่ชฒๅพต',
u'่ฏพไฝ': u'่ชฒ้ค',
u'่ฐๅ': u'่ชฟๆบ',
u'่ฐๅถ': u'่ชฟ่ฃฝ',
u'่ฐ่กจ': u'่ชฟ้ถ',
u'่ฐ้่กจ': u'่ชฟ้้ถ',
u'่ฐๅพ': u'่ซๅพต',
u'่ฏทๅ้': u'่ซๅ้ฑ',
u'่ฏทๅๅฅ็ฎ': u'่ซๅๅฅ็',
u'่ฏทๆ': u'่ซ่จ',
u'ๅจ่ฏข': u'่ซฎ่ฉข',
u'่ฏธไฝ': u'่ซธ้ค',
u'่ฐๅนฒ': u'่ฌๅนน',
u'่ฐข็ปๅ่ง': u'่ฌ็ตๅ่ง',
u'่ฐฌ้่ๅฃฐ': u'่ฌฌๆก่่ฒ',
u'่ฐฌ่ต': u'่ฌฌ่ฎ',
u'่ฌทไธ': u'่ฌท้',
u'่ฐจไบๅฟ': u'่ฌนๆผๅฟ',
u'่ญฆไธ้': u'่ญฆไธ้',
u'่ญฆๆฅ้': u'่ญฆๅ ฑ้',
u'่ญฆ็คบ้': u'่ญฆ็คบ้',
u'่ญฆ้': u'่ญฆ้',
u'่ฏๆณจ': u'่ญฏ่จป',
u'ๆคๅ': u'่ญท้ซฎ',
u'ๅๅพ': u'่ฎๅพต',
u'ๅไธ': u'่ฎ้',
u'ๅ่': u'่ฎ้ซ',
u'ๅ้ซ': u'่ฎ้ซ',
u'ไปๅ': u'่ฎๅ',
u'ไปๅคท': u'่ฎๅคท',
u'ไปๆ ก': u'่ฎๆ ก',
u'ไปๆญฃ': u'่ฎๆญฃ',
u'ไป้': u'่ฎ้',
u'่ตไธ็ปๅฃ': u'่ฎไธ็ตๅฃ',
u'่ตไฝฉ': u'่ฎไฝฉ',
u'่ตๅ': u'่ฎๅ',
u'่ตๅนไธๅทฒ': u'่ฎๅไธๅทฒ',
u'่ตๆฌ': u'่ฎๆ',
u'่ตไน': u'่ฎๆจ',
u'่ตๆญ': u'่ฎๆญ',
u'่ตๅน': u'่ฎๆญ',
u'่ต็พ': u'่ฎ็พ',
u'่ต็พก': u'่ฎ็พจ',
u'่ต่ฎธ': u'่ฎ่จฑ',
u'่ต่ฏ': u'่ฎ่ฉ',
u'่ต่ช': u'่ฎ่ญฝ',
u'่ต่ต': u'่ฎ่ณ',
u'่ต่พ': u'่ฎ่พญ',
u'่ต้ข': u'่ฎ้ ',
u'่ฑๅนฒ': u'่ฑไนพ',
u'่ฑ่
ๅนฒ': u'่ฑ่
ไนพ',
u'็ซ็': u'่ฑ่',
u'็ซ่ตท่ๆข': u'่ฑ่ตท่ๆข',
u'ไธฐๆปจ': u'่ฑๆฟฑ',
u'ไธฐๆปจไนก': u'่ฑๆฟฑ้',
u'่ฑกๅพ': u'่ฑกๅพต',
u'่ฑกๅพ็': u'่ฑกๅพต่',
u'่ดๅบ็ดฏ็ดฏ': u'่ฒ ๅต็บ็บ',
u'่ดชๆฌฒ': u'่ฒชๆ
พ',
u'่ดตไปท': u'่ฒดไปท',
u'่ดตๅนฒ': u'่ฒดๅนน',
u'่ดตๅพ': u'่ฒดๅพต',
u'่ฒทๅถ': u'่ฒทๅ',
u'ไนฐๅถ': u'่ฒทๅ',
u'ไนฐๆญๅ': u'่ฒทๆท็ผ',
u'่ดนๅ ': u'่ฒปไฝ',
u'่ดป่': u'่ฒฝ็ฏ',
u'่ต้ๅ ็จ': u'่ณ้ๅ ็จ',
u'่ดพๅ': u'่ณๅ',
u'่ณๅ': u'่ณๅ',
u'่ต้ฅฅ': u'่ณ้ฅ',
u'่ต่ต': u'่ณ่ฎ',
u'่ดคๅ': u'่ณขๅ',
u'่ณขๅ': u'่ณขๅ',
u'ๅๆญๅ': u'่ณฃๆท็ผ',
u'ๅๅ': u'่ณฃ็',
u'่ดจๆด': u'่ณชๆจธ',
u'่ตๅฐ': u'่ณญๆชฏ',
u'่ตๆ': u'่ณญ้ฌฅ',
u'่ณธไฝ': u'่ณธ้ค',
u'่ดญๅนถ': u'่ณผไฝต',
u'่ดญไนฐๆฌฒ': u'่ณผ่ฒทๆ
พ',
u'่ตขไฝ': u'่ด้ค',
u'่ตคๆฏ': u'่ตคๆฎ',
u'่ตค็ปณ็ณป่ถณ': u'่ตค็นฉ็นซ่ถณ',
u'่ตค้็ด ': u'่ตค้็ด ',
u'่ตฐๅ่ทฏ': u'่ตฐๅ่ทฏ',
u'่ตทๅค': u'่ตท่ค',
u'่ตทๅ': u'่ตท้ฌจ',
u'่ถ็บงๆฏ': u'่ถ็ด็',
u'่ตถๅถ': u'่ถ่ฃฝ',
u'่ตถ้ขๆฃ': u'่ถ้บตๆฃ',
u'่ตตๆฒปๅ': u'่ถๆฒปๅณ',
u'่ตตๅบ': u'่ถ่',
u'่ถฑๅนฒ': u'่ถฒๅนน',
u'่ถณไบ': u'่ถณๆผ',
u'่ทๆ': u'่ทๆ',
u'่ท่ก': u'่ท่ฉ',
u'่ทฏ็ญพ': u'่ทฏ็ฑค',
u'่ทณๆขๅฐไธ': u'่ทณๆจๅฐไธ',
u'่ทณ่ก': u'่ทณ่ฉ',
u'่ทณ่กจ': u'่ทณ้ถ',
u'่นชไบ': u'่นชๆผ',
u'่นญๆฃฑๅญ': u'่นญ็จๅญ',
u'่บ้': u'่บ้ฌฑ',
u'่บซไบ': u'่บซๆผ',
u'่บซไฝๅ่ค': u'่บซ้ซ้ซฎ่',
u'่บฏๅนฒ': u'่ปๅนน',
u'่ฝฆๅบ้': u'่ปๅบซ่ฃก',
u'่ฝฆ็ซ้': u'่ป็ซ่ฃก',
u'่ฝฆ้': u'่ป่ฃก',
u'่ฝจ่': u'่ป็ฏ',
u'ๅ้ๅ
ๅถ': u'่ป้ๅๅถ',
u'่ฝฉ่พ': u'่ป้ข',
u'่พไบ': u'่ผๆผ',
u'ๆฝๆฒ': u'่ผๆฒ',
u'ๆฝๆญ': u'่ผๆญ',
u'ๆฝ่ฏ': u'่ผ่ฏ',
u'ๆฝ่': u'่ผ่ฏ',
u'ๆฝ่ฉ': u'่ผ่ฉ',
u'ๆฝ่ฏ': u'่ผ่ฉ',
u'ๆฝ่ฏ': u'่ผ่ฉฉ',
u'ๆฝ่ฉฉ': u'่ผ่ฉฉ',
u'่ฝปไบ': u'่ผๆผ',
u'่ฝป่ฝปๆพๆพ': u'่ผ่ผ้ฌ้ฌ',
u'่ฝปๆพ': u'่ผ้ฌ',
u'่ฝฎๅฅธ': u'่ผชๅงฆ',
u'่ฝฎๅ': u'่ผช่ฟด',
u'่ฝฌๅๅพ': u'่ฝๅๅพ',
u'่ฝฌๅฐ': u'่ฝๆชฏ',
u'่ฝฌๆ': u'่ฝ่จ',
u'่ฝฌๆๅ้': u'่ฝ้ฌฅๅ้',
u'่พไธ': u'่พไธ',
u'่พ่ฐท': u'่พ็ฉ',
u'ๅๅ
ฌๅฐ': u'่พฆๅ
ฌๆชฏ',
u'่พๆฑ': u'่พญๅฝ',
u'่พซๅ': u'่พฎ้ซฎ',
u'่พฉๆ': u'่พฏ้ฌฅ',
u'ๅๅ': u'่พฒๆ',
u'ๅๅๅฒ': u'่พฒๆญทๅฒ',
u'ๅๆฐๅ': u'่พฒๆฐๆ',
u'ๅๆฐๅๅฒ': u'่พฒๆฐๆญทๅฒ',
u'ๅๅบ': u'่พฒ่',
u'ๅ่ฏ': u'่พฒ่ฅ',
u'่ฟๅ': u'่ฟ่ฟด',
u'่ฟๆฅ็กไป': u'่ฟๆฅ็ก่ฎ',
u'่ฟๆฅ้': u'่ฟๆฅ่ฃก',
u'่ฟๆด': u'่ฟๆจธ',
u'่ฟฅ็ถๅๅผ': u'่ฟฅ็ถ่ฟด็ฐ',
u'่ฟซไบ': u'่ฟซๆผ',
u'ๅๅ่ฟ็ง': u'่ฟดๅ่ฟ็ง',
u'ๅๅ': u'่ฟดๅ',
u'ๅๅ': u'่ฟดๅ',
u'ๅๅป': u'่ฟดๅป',
u'ๅๅฝขๅคน': u'่ฟดๅฝขๅคพ',
u'ๅๆ': u'่ฟดๆ',
u'ๅๆ': u'่ฟดๆ',
u'ๅๆต': u'่ฟดๆต',
u'ๅ็ฏ': u'่ฟด็ฐ',
u'ๅ็บน้': u'่ฟด็ด้',
u'ๅ็ป': u'่ฟด็น',
u'ๅ็ฟ': u'่ฟด็ฟ',
u'ๅ่ ': u'่ฟด่
ธ',
u'ๅ่ฏต': u'่ฟด่ชฆ',
u'ๅ่ทฏ': u'่ฟด่ทฏ',
u'ๅ่ฝฌ': u'่ฟด่ฝ',
u'ๅ้ๆง': u'่ฟด้ๆง',
u'ๅ้ฟ': u'่ฟด้ฟ',
u'ๅ้ฎ': u'่ฟด้พ',
u'ๅ้ณ': u'่ฟด้ณ',
u'ๅๅ': u'่ฟด้ฟ',
u'ๅ้ฃ': u'่ฟด้ขจ',
u'่ฟทๅนป่ฏ': u'่ฟทๅนป่ฅ',
u'่ฟทไบ': u'่ฟทๆผ',
u'่ฟท่': u'่ฟทๆฟ',
u'่ฟท่ฏ': u'่ฟท่ฅ',
u'่ฟท้ญ่ฏ': u'่ฟท้ญ่ฅ',
u'่ฟฝๅถ': u'่ฟฝๅ
',
u'้ไผ': u'้ๅคฅ',
u'้็ง่ฏ': u'้็่ฅ',
u'้่ไบๅฏ': u'้่ๆผๅฏ',
u'้้': u'้้',
u'้้ๅ': u'้้ๅ',
u'้้ฃๅ': u'้้ขจๅพ',
u'้ๅ': u'้้ซฎ',
u'้้ฅๆธธ': u'้้้',
u'้่พ': u'้้ข',
u'่ฟๅชไธ': u'้ๅชไธ',
u'่ฟๅชๅ
': u'้ๅชๅ
',
u'่ฟๅชๅฎน': u'้ๅชๅฎน',
u'่ฟๅช้': u'้ๅชๆก',
u'่ฟๅชๆฏ': u'้ๅชๆฏ',
u'่ฟๅช็จ': u'้ๅช็จ',
u'่ฟไผไบบ': u'้ๅคฅไบบ',
u'่ฟ้': u'้่ฃก',
u'่ฟ้': u'้้',
u'่ฟๅช': u'้้ป',
u'่ฟไน': u'้้บผ',
u'่ฟไน็': u'้้บผ่',
u'้ๅฅธ': u'้ๅงฆ',
u'้ๅฟ้ข': u'้ๅฟ้บต',
u'้ไบ': u'้ๆผ',
u'้ๅ': u'้ๆ',
u'้ๅๅฒ': u'้ๆญทๅฒ',
u'้ๅบ': u'้่',
u'้ๅถ้ฌฅ็ ': u'้ๅ้ฌฅ็ ',
u'้ๅถๆ็ ': u'้ๅ้ฌฅ็ ',
u'้ ้': u'้ ้',
u'้ ้่กจ': u'้ ้้ถ',
u'้ ๆฒ': u'้ ้บฏ',
u'่ฟไธๅนถๅ': u'้ฃไธไฝตๅ',
u'่ฟๅ ': u'้ฃไฝ',
u'่ฟ้': u'้ฃๆก',
u'่ฟ็ณป': u'้ฃ็นซ',
u'่ฟๅบ': u'้ฃ่',
u'ๅจๆธธไธ็': u'้ฑ้ไธ็',
u'่ฟๅ ': u'้ฒไฝ',
u'้ผๅนถ': u'้ผไฝต',
u'้้ฃๅ': u'้้ขจๅพ',
u'ๆธธไบ': u'้ไบ',
u'ๆธธไบบ': u'้ไบบ',
u'ๆธธไป': u'้ไป',
u'ๆธธไผด': u'้ไผด',
u'ๆธธไพ ': u'้ไฟ ',
u'ๆธธๅถ': u'้ๅถ',
u'ๆธธๅๆไฝ': u'้ๅๆ้ค',
u'ๆธธๅจ': u'้ๅ',
u'ๆธธๅญ': u'้ๅ',
u'ๆธธๅญ': u'้ๅญ',
u'ๆธธๅญฆ': u'้ๅญธ',
u'ๆธธๅฎข': u'้ๅฎข',
u'ๆธธๅฎฆ': u'้ๅฎฆ',
u'ๆธธๅฑฑ็ฉๆฐด': u'้ๅฑฑ็ฉๆฐด',
u'ๆธธๅฟ
ๆๆน': u'้ๅฟ
ๆๆน',
u'ๆธธๆฉ': u'้ๆฉ',
u'ๆธธๆ': u'้ๆฒ',
u'ๆธธๆๅฅฝ้ฒ': u'้ๆๅฅฝ้',
u'ๆธธๆน': u'้ๆน',
u'ๆธธๆ': u'้ๆ',
u'ๆธธไน': u'้ๆจ',
u'ๆธธๆ ๅกๅฐบ': u'้ๆจๅกๅฐบ',
u'ๆธธๅ': u'้ๆญท',
u'ๆธธๆฐ': u'้ๆฐ',
u'ๆธธๆฒณ': u'้ๆฒณ',
u'ๆธธ็': u'้็ต',
u'ๆธธ็ฉ': u'้็ฉ',
u'ๆธธ่ก': u'้็ช',
u'ๆธธ็ฎ้ชๆ': u'้็ฎ้จๆท',
u'ๆธธ็จ': u'้็จ',
u'ๆธธไธ': u'้็ตฒ',
u'ๆธธๅ
ด': u'้่',
u'ๆธธ่น': u'้่น',
u'ๆธธ่': u'้่',
u'ๆธธ่กไธๅฝ': u'้่ฉไธๆญธ',
u'ๆธธ่บ': u'้่',
u'ๆธธ่ก': u'้่ก',
u'ๆธธ่ก': u'้่ก',
u'ๆธธ่ง': u'้่ฆฝ',
u'ๆธธ่ฎฐ': u'้่จ',
u'ๆธธ่ฏด': u'้่ชช',
u'ๆธธ่ต': u'้่ณ',
u'ๆธธ่ตฐ': u'้่ตฐ',
u'ๆธธ่ธช': u'้่นค',
u'ๆธธ้': u'้้',
u'ๆธธ้': u'้้ฏ',
u'ๆธธ็ฆป': u'้้ข',
u'ๆธธ้ชๅ
ต': u'้้จๅ
ต',
u'ๆธธ้ญ': u'้้ญ',
u'่ฟไบ': u'้ๆผ',
u'่ฟๆ': u'้ๆ',
u'่ฟๆฐด้ข': u'้ๆฐด้บต',
u'้่': u'้็ฏ',
u'้ไบ': u'้ๆผ',
u'้ๅ': u'้่ฟด',
u'่ฟๅฟๆ่ณ': u'้ ็ธฃ็บ่ณ',
u'่ฟๆธธ': u'้ ้',
u'้จๆธธ': u'้จ้',
u'้ฎไธ': u'้ฎ้',
u'่ฟไบ': u'้ทๆผ',
u'้ๆ่กจๆ': u'้ธๆ่กจๆ',
u'้ๆ่กจๅณ': u'้ธๆ่กจๆฑบ',
u'้ๆ่กจ็ฐ': u'้ธๆ่กจ็พ',
u'้ๆ่กจ็คบ': u'้ธๆ่กจ็คบ',
u'้ๆ่กจ่พพ': u'้ธๆ่กจ้',
u'้ไผ ้': u'้บๅณ้',
u'้่': u'้บ็ฏ',
u'้่ฟน': u'้บ่ฟน',
u'่พฝๆฒ': u'้ผ็',
u'้ฟๅญ่ฏ': u'้ฟๅญ่ฅ',
u'้ๅคฉไนๅนธ': u'้ๅคฉไนๅ',
u'่ฟๅ ': u'้ไฝ',
u'่ฟ้': u'้ๆก',
u'่ฟๅฒ': u'้่ก',
u'้้้้ข': u'้่ฃก้้ข',
u'้ฃๅชๆฏ': u'้ฃๅชๆฏ',
u'้ฃๅชๆ': u'้ฃๅชๆ',
u'้ฃๅท': u'้ฃๆฒ',
u'้ฃ้': u'้ฃ่ฃก',
u'้ฃๅช': u'้ฃ้ป',
u'้ฃไน': u'้ฃ้บผ',
u'้ฃไน็': u'้ฃ้บผ่',
u'้ๆด': u'้ๆจธ',
u'้้่ฒ่ฒ': u'้้่ฒ่ฒ',
u'้ๆธธ': u'้้',
u'้้': u'้้',
u'้จ่ฝๅ': u'้จ่ฝ็ผ',
u'้ฝไบ': u'้ฝๆผ',
u'ไนกๆฟ': u'้ๆฟ',
u'้ญๅฑไบ': u'้ญๅฑไบ',
u'้ๅฏไบ': u'้ญๅฑไบ',
u'้ๅบๅฌ': u'้ญ่ๅฌ',
u'้ๅถ้ฅฒๆ': u'้ๅถ้ฃผๆ',
u'้ๅ็': u'้ๅ่',
u'้ๆฐดๅนฒ็ฎก': u'้ๆฐดๅนน็ฎก',
u'้่ฏ': u'้่ฅ',
u'้ๅถ': u'้่ฃฝ',
u'้ๅธ': u'้ๅธ',
u'้ๅ': u'้็ฝ',
u'้่ด': u'้่ด',
u'้่ฏ': u'้่ฅ',
u'้้ดๆฒ่': u'้้ด้บดๆซฑ',
u'้ๆฒ': u'้้บด',
u'้ฅๆพ': u'้ฅ้ฌ',
u'้ๆด': u'้ๆจธ',
u'้ไบ': u'้ๆผ',
u'้ๅ': u'้็ฝ',
u'ไธไธซๅคด': u'้ไธซ้ ญ',
u'ไธไบ': u'้ไบ',
u'ไธไบบ': u'้ไบบ',
u'ไธไพช': u'้ๅ',
u'ไธๅ
ซๆช': u'้ๅ
ซๆช',
u'ไธๅๅ': u'้ๅๅ',
u'ไธๅง': u'้ๅ',
u'ไธๅ': u'้ๅ',
u'ไธๅฒ': u'้ๅฒ',
u'ไธๅ': u'้ๅ',
u'ไธๅค': u'้ๅ',
u'ไธๅฐ': u'้ๅฐ',
u'ไธๅคท': u'้ๅคท',
u'ไธๅฅณ': u'้ๅฅณ',
u'ไธๅฅณๆ้ขฆ': u'้ๅฅณๆ้กฐ',
u'ไธๅฅดๅฟ': u'้ๅฅดๅ
',
u'ไธๅฆ': u'้ๅฉฆ',
u'ไธๅชณ': u'้ๅชณ',
u'ไธๅชณๅฆ': u'้ๅชณๅฉฆ',
u'ไธๅฐ้ธญ': u'้ๅฐ้ดจ',
u'ไธๅทดๆช': u'้ๅทดๆช',
u'ไธๅพ': u'้ๅพ',
u'ไธๆถ': u'้ๆก',
u'ไธๆ': u'้ๆ
',
u'ไธๆฏไบ': u'้ๆไบ',
u'ไธไบ': u'้ๆผ',
u'ไธๆซ': u'้ๆซ',
u'ไธๆ ท': u'้ๆจฃ',
u'ไธๆญป': u'้ๆญป',
u'ไธๆฏ': u'้ๆฏ',
u'ไธๆฒฎ': u'้ๆฒฎ',
u'ไธ็ท': u'้็ท',
u'ไธ้ป': u'้่',
u'ไธๅฃฐ': u'้่ฒ',
u'ไธๅฃฐ่ฟๆญ': u'้่ฒ้ ๆญ',
u'ไธ่ธ': u'้่',
u'ไธ่': u'้่',
u'ไธ่ก': u'้่ก',
u'ไธ่จ': u'้่จ',
u'ไธ่ฏ': u'้่ฉ',
u'ไธ่ฏ': u'้่ฉฑ',
u'ไธ่ฏญ': u'้่ช',
u'ไธ่ดผ็': u'้่ณ็',
u'ไธ่พ': u'้่พญ',
u'ไธ่พฑ': u'้่พฑ',
u'ไธ้': u'้้',
u'ไธไธ': u'้้',
u'ไธ้': u'้้',
u'ไธๆ': u'้้',
u'ไธๅคดๆช่ธ': u'้้ ญๆช่',
u'ไธ็ฑป': u'้้ก',
u'้้ฟ็': u'้้่',
u'ๅป่ฏ': u'้ซ่ฅ',
u'ๅป้ข้': u'้ซ้ข่ฃก',
u'้
ฟๅถ': u'้่ฃฝ',
u'่ก
้': u'้้',
u'้็ณไนๅฝน': u'้็ณไนๅฝน',
u'้็ณไนๆ': u'้็ณไนๆฐ',
u'้็ณไนๆฐ': u'้็ณไนๆฐ',
u'้็ณ็ฃฏ': u'้็ณ็ฃฏ',
u'้็ณ็ถ': u'้็ณ็ฃฏ',
u'้่ฏ': u'้่ฅ',
u'้็จ่กจ': u'้็จ้ถ',
u'้ๅ': u'้ๅ',
u'้ๅ': u'้ๅ',
u'้ๆ': u'้ๆบ',
u'้ไบ': u'้ๆผ',
u'้็ฝ้ข': u'้็พ
้บต',
u'้ๅถ': u'้่ฃฝ',
u'้ๅค': u'้่ค',
u'้ๆ': u'้่จ',
u'้ๆธธ': u'้้',
u'้้ค': u'้้',
u'้ๅง': u'้่',
u'้ๆธธ': u'้้',
u'ๅๅบ': u'้ๅบ',
u'ๅๅ': u'้ๅ',
u'ๅๅฎ': u'้ๅฎ',
u'ๅๆญฃ': u'้ๆญฃ',
u'ๅๆธ
': u'้ๆธ
',
u'ๅ่ฎข': u'้่จ',
u'้ไปๅง': u'้ๅๅง',
u'้ไปๆบช': u'้ๅดๆบช',
u'้ๅธ้': u'้ๅธ้',
u'้่': u'้็ฏ',
u'้่กจๆ': u'้่กจๆ',
u'้่กจๆ': u'้่กจๆ',
u'้่กจๆฌ': u'้่กจๆ',
u'้่กจๆ': u'้่กจๆ',
u'้่กจๆผ': u'้่กจๆผ',
u'้่กจ็ฐ': u'้่กจ็พ',
u'้่กจ็คบ': u'้่กจ็คบ',
u'้่กจ่พพ': u'้่กจ้',
u'้่กจ้ฒ': u'้่กจ้ฒ',
u'้่กจ้ข': u'้่กจ้ข',
u'้่ฃ
็้': u'้่ฃ็่ฃก',
u'้่กจ': u'้้ถ',
u'้้': u'้้',
u'้้ฉฌไป้': u'้้ฆฌๅด้',
u'้ๅ': u'้้ซฎ',
u'้้ค': u'้้',
u'้ฉๅฟๆ่ง': u'้ๅฟ้ฌฅ่ง',
u'้ถๆฑ': u'้็ก',
u'้ถๅ': u'้้ซฎ',
u'้่': u'้็ฏ',
u'้ๅถ': u'้่ฃฝ',
u'้้': u'้้',
u'้ฏ้': u'้ซ้',
u'้ๅถ': u'้่ฃฝ',
u'้บ้ฆๅ็ปฃ': u'้ช้ฆๅ็นก',
u'้ขไน็ผ้ๆฏๅธ': u'้ผไน้้่กๅธซ',
u'้ขๆข': u'้ผๆจ',
u'้ขๅถ': u'้ผ่ฃฝ',
u'ๅฝ็': u'้่',
u'ๅฝๅถ': u'้่ฃฝ',
u'้ค็ผ': u'้้',
u'้ฑ่ฐท': u'้ข็ฉ',
u'้ฑ่': u'้ข็ฏ',
u'้ฑๅบ': u'้ข่',
u'้ฆ็ปฃ่ฑๅญ': u'้ฆ็ถ่ฑๅ',
u'้ฆ็ปฃ': u'้ฆ็นก',
u'่กจๅ': u'้ถๅ',
u'่กจๅ ': u'้ถๅ ',
u'่กจๅธฆ': u'้ถๅธถ',
u'่กจๅบ': u'้ถๅบ',
u'่กจๅ': u'้ถๅป ',
u'่กจๅฟซ': u'้ถๅฟซ',
u'่กจๆ
ข': u'้ถๆ
ข',
u'่กจๆฟ': u'้ถๆฟ',
u'่กจๅฃณ': u'้ถๆฎผ',
u'่กจ็': u'้ถ็',
u'่กจ็ๅๅ': u'้ถ็ๅๅ',
u'่กจ็ๅๅฒ': u'้ถ็ๆญทๅฒ',
u'่กจ็': u'้ถ็ค',
u'่กจ่ๅญ': u'้ถ่ๅญ',
u'่กจ่ก': u'้ถ่ก',
u'่กจ่ฝฌ': u'้ถ่ฝ',
u'่กจ้': u'้ถ้',
u'่กจ้': u'้ถ้',
u'่กจ้พ': u'้ถ้',
u'็ผๅถ': u'้ๅถ',
u'็ผๅฅ': u'้ๅฅ',
u'็ผๅญ': u'้ๅญ',
u'็ผๅธ': u'้ๅธซ',
u'็ผๅบฆ': u'้ๅบฆ',
u'็ผๅฝข': u'้ๅฝข',
u'็ผๆฐ': u'้ๆฐฃ',
u'็ผๆฑ': u'้ๆฑ',
u'็ผ็ณ': u'้็ณ',
u'็ผ่ดซ': u'้่ฒง',
u'็ผ้ๆฏ': u'้้่ก',
u'็ผ้ข': u'้้ผ',
u'้
ๅบ': u'้่',
u'้ป็ผๅบ': u'้้ๅบ',
u'้ฒ่ไธ่': u'้ฅ่ไธๆจ',
u'้ฐไป': u'้ๅ',
u'้คๅฟ': u'้ๅ
',
u'้คๅญ': u'้ๅญ',
u'้คๅคด': u'้้ ญ',
u'้็
': u'้ฝ็
',
u'้่': u'้ฝ่',
u'้่': u'้ฝ่',
u'้ไธ': u'้ไธ',
u'้ไธ': u'้ไธ',
u'้ไธ': u'้ไธ',
u'้ไธๆฃไธ้ธฃ': u'้ไธๆฃไธ้ณด',
u'้ไธๆไธ้ธฃ': u'้ไธๆไธ้ณด',
u'้ไธๆฒไธๅ': u'้ไธๆฒไธ้ฟ',
u'้ไธ็ฉบๅๅ': u'้ไธ็ฉบๅๅ',
u'้ไนณๆด': u'้ไนณๆด',
u'้ไนณ็ณ': u'้ไนณ็ณ',
u'้ๅ': u'้ๅ',
u'้ๅ ': u'้ๅ ',
u'้ๅฃ': u'้ๅฃ',
u'้ๅจๅฏบ้': u'้ๅจๅฏบ่ฃก',
u'้ๅก': u'้ๅก',
u'้ๅฃ': u'้ๅฃ',
u'้ๅคช': u'้ๅคช',
u'้ๅฅฝ': u'้ๅฅฝ',
u'้ๅฑฑ': u'้ๅฑฑ',
u'้ๅทฆๅณ': u'้ๅทฆๅณ',
u'้ๅทฎ': u'้ๅทฎ',
u'้ๅบง': u'้ๅบง',
u'้ๅฝข': u'้ๅฝข',
u'้ๅฝข่ซ': u'้ๅฝข่ฒ',
u'้ๅพ': u'้ๅพ',
u'้ๅฟซ': u'้ๅฟซ',
u'้ๆ': u'้ๆ',
u'้ๆ
ข': u'้ๆ
ข',
u'้ๆ': u'้ๆบ',
u'้ๆฒ': u'้ๆฒ',
u'้ๆ': u'้ๆ',
u'้ๆฅผ': u'้ๆจ',
u'้ๆจก': u'้ๆจก',
u'้ๆฒก': u'้ๆฒ',
u'้ๆผ': u'้ๆผ',
u'้็': u'้็',
u'้็ด': u'้็ด',
u'้ๅ้ณ': u'้็ผ้ณ',
u'้็': u'้็',
u'้็': u'้็ค',
u'้็ธ': u'้็ธ',
u'้็ฃฌ': u'้็ฃฌ',
u'้็บฝ': u'้็ด',
u'้็ฝฉ': u'้็ฝฉ',
u'้ๅฃฐ': u'้่ฒ',
u'้่
ฐ': u'้่
ฐ',
u'้่บ': u'้่บ',
u'้่ก': u'้่ก',
u'้่กจ้ข': u'้่กจ้ข',
u'้่ขซ': u'้่ขซ',
u'้่ฐ': u'้่ชฟ',
u'้่บซ': u'้่บซ',
u'้้': u'้้',
u'้่กจ': u'้้ถ',
u'้่กจๅ': u'้้ถๅ',
u'้่กจๅฟซ': u'้้ถๅฟซ',
u'้่กจๆ
ข': u'้้ถๆ
ข',
u'้่กจๅๅฒ': u'้้ถๆญทๅฒ',
u'้่กจ็': u'้้ถ็',
u'้่กจ็': u'้้ถ็',
u'้่กจ็ๅๅฒ': u'้้ถ็ๆญทๅฒ',
u'้่กจ็': u'้้ถ็ค',
u'้่กจ่ก': u'้้ถ่ก',
u'้่กจ้': u'้้ถ้',
u'้ๅ
ณ': u'้้',
u'้้ๅ': u'้้ณๅ',
u'้้ข': u'้้ข',
u'้ๅ': u'้้ฟ',
u'้้กถ': u'้้ ',
u'้ๅคด': u'้้ ญ',
u'้ไฝ': u'้้ซ',
u'้้ธฃ': u'้้ณด',
u'้็น': u'้้ป',
u'้้ผ': u'้้ผ',
u'้้ผ': u'้้ผ',
u'้ๆ': u'้ตๆ',
u'้ๆ ๆ': u'้ตๆฌๆ',
u'้้ค': u'้ต้',
u'้้': u'้ต้ฝ',
u'้้': u'้ต้',
u'้ธ้': u'้้',
u'้ดไบ': u'้ๆผ',
u'้ฟๅ ': u'้ทๅ ',
u'้ฟไบ': u'้ทๆผ',
u'้ฟๅ': u'้ทๆ',
u'้ฟๅๅฒ': u'้ทๆญทๅฒ',
u'้ฟ็่ฏ': u'้ท็่ฅ',
u'้ฟ่ก': u'้ท้ฌ',
u'้จๅธ': u'้ๅธ',
u'้จๅๅฟ': u'้ๅผๅ
',
u'้จ้': u'้่ฃก',
u'้ซๆ็คผ': u'้ๆท็ฆฎ',
u'ๅผๅ': u'้ๅผ',
u'ๅผๅพ': u'้ๅพต',
u'ๅผ้': u'้ๆก',
u'ๅผๅ': u'้็ผ',
u'ๅผ่ฏ': u'้่ฅ',
u'ๅผ่พ': u'้้ข',
u'ๅผๅ': u'้้ฌจ',
u'้ฒๆ
้ธ่ด': u'้ๆ
้ธ็ทป',
u'้ฒ่ก': u'้่ฉ',
u'้ฒๆธธ': u'้้',
u'้ดไธๅฎนๅ': u'้ไธๅฎน้ซฎ',
u'้ต้ๅฐ': u'้ๆก็พ',
u'ๅๅบ': u'้คๅบ',
u'้บ่': u'้จ็ฏ',
u'้่': u'้ซ็ฏ',
u'้ฏ่ก': u'้่ฉ',
u'้ฏ็ผ': u'้้',
u'ๅณ็ณป': u'้ไฟ',
u'ๅณ็ณป็': u'้ไฟ่',
u'ๅณๅผไธๆ็กฎ': u'้ๅผ่ๆ็กฎ',
u'ๅณไบ': u'้ๆผ',
u'่พไฝ': u'้ขไฝ',
u'่พไฝ': u'้ขไฝ',
u'่พๅ': u'้ขๅ',
u'่พๅ': u'้ขๅ',
u'่พๅฐ': u'้ขๅฐ',
u'่พๅฎค': u'้ขๅฎค',
u'่พๅปบ': u'้ขๅปบ',
u'่พไธบ': u'้ข็บ',
u'่พ็ฐ': u'้ข็ฐ',
u'่พ็ญ': u'้ข็ฏ',
u'่พ่ฐฃ': u'้ข่ฌ ',
u'่พ่พ': u'้ข่พ',
u'่พ้ชไปฅๅพ': u'้ข้ชไปฅๅพ',
u'้ฒๆ': u'้ฒๆ',
u'้ฒๆฐด่กจ': u'้ฒๆฐด้ถ',
u'้ฒๅพก': u'้ฒ็ฆฆ',
u'้ฒ่': u'้ฒ็ฏ',
u'้ฒ้': u'้ฒ้ฝ',
u'้ฒๅฐ': u'้ฒ้ขฑ',
u'้ปไบ': u'้ปๆผ',
u'้ฟๅ็': u'้ฟๅ็',
u'้ฟๆฏๅพ้ไบๆฏ': u'้ฟๆฏๅ้ไบๆฏ',
u'้ฟๅ': u'้ฟ็',
u'้ไบ': u'้ๆผ',
u'้ๆณจ': u'้่จป',
u'้ๅ่ฏ': u'้ๅฃ่ฅ',
u'้ๅถ': u'้ๅถ',
u'ๅๅฎ': u'้ๅฎ',
u'้ค่ญ่ฏ': u'้ค่ญ่ฅ',
u'้ชๅ': u'้ชๅผ',
u'้ดๅนฒ': u'้ฐไนพ',
u'้ดๅ': u'้ฐๆ',
u'้ดๅๅฒ': u'้ฐๆญทๅฒ',
u'้ดๆฒ้็ฟป่น': u'้ฐๆบ่ฃก็ฟป่น',
u'้ด้': u'้ฐ้ฌฑ',
u'้็ผ': u'้ณ้',
u'้ๆธธ': u'้ธ้',
u'้ณๆฅ้ข': u'้ฝๆฅ้บต',
u'้ณๅ': u'้ฝๆ',
u'้ณๅๅฒ': u'้ฝๆญทๅฒ',
u'้ๅ่ฎธ': u'้ๅ่จฑ',
u'้ๅ': u'้ๆบ',
u'้ไบ': u'้จๆผ',
u'้ๅ ': u'้ฑไฝ',
u'้ๅ ': u'้ฑๅ ',
u'้ไบ': u'้ฑๆผ',
u'ๅชๅญ': u'้ปๅญ',
u'ๅชๅฝฑ': u'้ปๅฝฑ',
u'ๅชๆ้ฎๅคฉ': u'้ปๆ้ฎๅคฉ',
u'ๅช็ผ': u'้ป็ผ',
u'ๅช่จ็่ฏญ': u'้ป่จ็่ช',
u'ๅช่บซ': u'้ป่บซ',
u'้ๆๆ': u'้ๆๆ',
u'้
่': u'้
็ฏ',
u'้
่ด': u'้
็ทป',
u'้ไบ': u'้ๆผ',
u'้ๆธธๆณ': u'้้ๆณ',
u'้ๆข็ปๆ ': u'้ๆจ็ซๆฃ',
u'ๅๆๅฐ': u'้ๆๅฐ',
u'ๅๆ': u'้ๆบ',
u'ๅ่็ฑป': u'้่้ก',
u'ๅ้': u'้้ตฐ',
u'ๆๅ้ขๅฟ': u'้ๅ้บตๅ
',
u'ๆๅฟ': u'้่ช',
u'ๆ้ข': u'้้บต',
u'้ธกๅต้น
ๆ': u'้ๅต้ต้ฌฅ',
u'้ธกๅฅธ': u'้ๅงฆ',
u'้ธกไบ้น
ๆ': u'้็ญ้ต้ฌฅ',
u'้ธกไธ': u'้็ตฒ',
u'้ธกไธ้ข': u'้็ตฒ้บต',
u'้ธก่
ฟ้ข': u'้่
ฟ้บต',
u'้ธก่้ๆ้ชจๅคด': u'้่่ฃกๆ้ชจ้ ญ',
u'้ธกๅช': u'้้ป',
u'็ฆปไบ': u'้ขๆผ',
u'้พ่': u'้ฃๆจ',
u'้พไบ': u'้ฃๆผ',
u'้ช็ช่คๅ ': u'้ช็ช่ขๅ ',
u'้ช้': u'้ช่ฃก',
u'้ช้็บข': u'้ช่ฃก็ด
',
u'้ช้่ป': u'้ช่ฃก่ป',
u'ไบๅ็ฝ่ฏ': u'้ฒๅ็ฝ่ฅ',
u'ไบ็ฌไธ็ญพ': u'้ฒ็ฌไธ็ฑค',
u'ไบๆธธ': u'้ฒ้',
u'ไบ้กป': u'้ฒ้ฌ',
u'้ถไธช': u'้ถๅ',
u'้ถๅคๅช': u'้ถๅค้ป',
u'้ถๅคฉๅ': u'้ถๅคฉๅพ',
u'้ถๅช': u'้ถ้ป',
u'้ถไฝ': u'้ถ้ค',
u'็ตๅญ่กจๆ ผ': u'้ปๅญ่กจๆ ผ',
u'็ตๅญ่กจ': u'้ปๅญ้ถ',
u'็ตๅญ้': u'้ปๅญ้',
u'็ตๅญ้่กจ': u'้ปๅญ้้ถ',
u'็ตๆ': u'้ปๆ',
u'็ต็ ่กจ': u'้ป็ขผ่กจ',
u'็ต็บฟๆ': u'้ป็ทๆ',
u'็ตๅฒ': u'้ป่ก',
u'็ต่กจ': u'้ป้ถ',
u'็ต้': u'้ป้',
u'้ๆ ': u'้ๆ
',
u'้่ก': u'้่ฉ',
u'้พ้': u'้ง่ฃก',
u'้ฒไธ': u'้ฒ้',
u'้ธๅ ': u'้ธไฝ',
u'้่': u'้ฝ็ฏ',
u'็ต่ฏ': u'้่ฅ',
u'้ๅฑฑไธๅ': u'้ๅฑฑไธ้ซฎ',
u'้่น': u'้่น',
u'้่นๆ': u'้่ๆ',
u'้่ๅๅฎข': u'้่
ๅผๅฎข',
u'้้็ด ': u'้้็ด ',
u'้้': u'้้ปด',
u'้ๅ ไธๅฏ': u'้ไฝไธๅฏ',
u'้ขๅไฝ': u'้ขๅไฝ',
u'้ขๅๅซ': u'้ขๅๅซ',
u'้ขๅๅด': u'้ขๅๅ',
u'้ขๅๅฎน': u'้ขๅๅฎน',
u'้ขๅๅบ': u'้ขๅๅบ',
u'้ขๅๅข': u'้ขๅๅป',
u'้ขๅๆ': u'้ขๅๆ',
u'้ขๅๆฌ': u'้ขๅๆฌ',
u'้ขๅๆฝ': u'้ขๅๆฌ',
u'้ขๅๆถต': u'้ขๅๆถต',
u'้ขๅ็ฎก': u'้ขๅ็ฎก',
u'้ขๅๆ': u'้ขๅ็ดฎ',
u'้ขๅ็ฝ': u'้ขๅ็พ',
u'้ขๅ็': u'้ขๅ่',
u'้ขๅ่': u'้ขๅ่',
u'้ขๅ่ฃ': u'้ขๅ่ฃ',
u'้ขๅ่ฃน': u'้ขๅ่ฃน',
u'้ขๅ่ตท': u'้ขๅ่ตท',
u'้ขๅๅ': u'้ขๅ่พฆ',
u'้ขๅบ่': u'้ขๅบ่',
u'้ขๆ็': u'้ขๆ่',
u'้ขๆก็ฎ': u'้ขๆข็ฎ',
u'้ขๆข็ฎ': u'้ขๆข็ฎ',
u'้ข็ฒ็ข': u'้ข็ฒ็ข',
u'้ข็ฒ็บข': u'้ข็ฒ็ด
',
u'้ขไธด็': u'้ข่จ่',
u'้ข้ฃ้ฅญ': u'้ข้ฃ้ฃฏ',
u'้ข้ฃ้ข': u'้ข้ฃ้บต',
u'้้': u'้่ฃก',
u'้ฃๅถ': u'้ฃ่ฃฝ',
u'็งๅ': u'้ฆ้',
u'้ญ่พๅ
ฅ้': u'้ญ่พๅ
ฅ่ฃก',
u'้ฆๅบ': u'้่',
u'้ฉๅฝๅถ': u'้ๅ่ฃฝ',
u'้ฉๅถ': u'้่ฃฝ',
u'้ณๅ': u'้ณๆบ',
u'้ณๅฃฐๅฆ้': u'้ณ่ฒๅฆ้',
u'้ถๅฑฑๅฒ': u'้ถๅฑฑๆฒ',
u'ๅ้': u'้ฟ้',
u'้ ้ข': u'้ ้ข',
u'้กต้ข': u'้ ้ข',
u'้ ๅค': u'้ ๅค',
u'้กถๅค': u'้ ๅค',
u'้กนๅบ': u'้
่',
u'้กบไบ': u'้ ๆผ',
u'้กบ้ๅ': u'้ ้ๅ',
u'้กบ้ฃๅ': u'้ ้ขจๅพ',
u'้กปๆ นๆฎ': u'้ ๆ นๆ',
u'้ข็ณป': u'้ ็นซ',
u'้ข่ต': u'้ ่ฎ',
u'้ขๅถ': u'้ ่ฃฝ',
u'้ขๅ้': u'้ ๅ่ฃก',
u'้ข่ขๆฌฒ': u'้ ่ขๆ
พ',
u'ๅคดๅทพๅๅจๆฐด้': u'้ ญๅทพๅผๅจๆฐด่ฃก',
u'ๅคด้': u'้ ญ่ฃก',
u'ๅคดๅ': u'้ ญ้ซฎ',
u'้ข้กป': u'้ ฐ้ฌ',
u'้ข็ญพ': u'้ก็ฑค',
u'้ขๅพ': u'้กๅพต',
u'้ขๆ็ฅๅ': u'้กๆ็ฅๆ',
u'้ขๆ็ฅๅๅฒ': u'้กๆ็ฅๆญทๅฒ',
u'้ข่': u'้ก็ฏ',
u'้ข ๅนฒๅๅค': u'้กไนพๅๅค',
u'้ข ่ฆ': u'้ก่ฆ',
u'้ข ้ข ไปไป': u'้ก้กไปไป',
u'้ขคๆ ': u'้กซๆ
',
u'ๆพ็คบ่กจ': u'้กฏ็คบ้ถ',
u'ๆพ็คบ้': u'้กฏ็คบ้',
u'ๆพ็คบ้่กจ': u'้กฏ็คบ้้ถ',
u'ๆพ่ๆ ๅฟ': u'้กฏ่ๆจๅฟ',
u'้ฃๅนฒ': u'้ขจไนพ',
u'้ฃๅ': u'้ขจๅ',
u'้ฃๅๅฟ': u'้ขจๅ่ช',
u'้ฃๅ๏ผ': u'้ขจๅพ๏ผ',
u'้ฃๅทๆฎไบ': u'้ขจๆฒๆฎ้ฒ',
u'้ฃ็ฉๅฟ': u'้ขจ็ฉ่ช',
u'้ฃ่': u'้ขจ็ฏ',
u'้ฃ้': u'้ขจ่ฃก',
u'้ฃ่ตทไบๆถ': u'้ขจ่ตท้ฒๆนง',
u'้ฃ้': u'้ขจ้',
u'้ขจ้': u'้ขจ้',
u'ๅฐ้ฃ': u'้ขฑ้ขจ',
u'ๅฐ้ฃๅ': u'้ขฑ้ขจๅพ',
u'ๅฎไบ': u'้ขณไบ',
u'ๅฎๅ': u'้ขณๅ',
u'ๅฎๅป': u'้ขณๅป',
u'ๅฎๅพ': u'้ขณๅพ',
u'ๅฎ่ตฐ': u'้ขณ่ตฐ',
u'ๅฎ่ตท': u'้ขณ่ตท',
u'ๅฎ้ช': u'้ขณ้ช',
u'ๅฎ้ฃ': u'้ขณ้ขจ',
u'ๅฎ้ฃๅ': u'้ขณ้ขจๅพ',
u'้ฃ่ก': u'้ฃ่ฉ',
u'้ฃๆธธ': u'้ฃ้',
u'้ฃ้ฃ่ก่ก': u'้ฃ้ฃ่ฉ่ฉ',
u'้ฃๆ': u'้ฃ็ดฎ',
u'้ฃๅๆฝ็ฒ': u'้ฃ่ป่ผ็ฒ',
u'้ฃ่ก้': u'้ฃ่ก้',
u'้ฃๆฌฒ': u'้ฃๆ
พ',
u'้ฃๆฌฒไธๆฏ': u'้ฃๆฌฒไธๆฏ',
u'้ฃ้ไน่น': u'้ฃ้ไน่น',
u'้ฃ้ข': u'้ฃ้บต',
u'้ฅญๅ้': u'้ฃฏๅพ้',
u'้ฅญๅข': u'้ฃฏ็ณฐ',
u'้ฅญๅบ': u'้ฃฏ่',
u'้ฅฒๅ': u'้ฃผ้คต',
u'้ฅผๅนฒ': u'้ค
ไนพ',
u'้ฆไฝ': u'้ค้ค',
u'ไฝ0': u'้ค0',
u'ไฝ1': u'้ค1',
u'ไฝ2': u'้ค2',
u'ไฝ3': u'้ค3',
u'ไฝ4': u'้ค4',
u'ไฝ5': u'้ค5',
u'ไฝ6': u'้ค6',
u'ไฝ7': u'้ค7',
u'ไฝ8': u'้ค8',
u'ไฝ9': u'้ค9',
u'ไฝใ': u'้คใ',
u'ไฝไธ': u'้คไธ',
u'ไฝไธ': u'้คไธ',
u'ไฝไธ': u'้คไธ',
u'ไฝไธ': u'้คไธ',
u'ไฝไน': u'้คไน',
u'ไฝไบ': u'้คไบ',
u'ไฝไบ': u'้คไบ',
u'ไฝไบ': u'้คไบ',
u'ไฝไบบ': u'้คไบบ',
u'ไฝไฟ': u'้คไฟ',
u'ไฝๅ': u'้คๅ',
u'ไฝๅ': u'้คๅ',
u'ไฝๅ': u'้คๅ',
u'ไฝๅซ': u'้คๅซ',
u'ไฝๅญ': u'้คๅญ',
u'ไฝๅ': u'้คๅ',
u'ไฝๅ': u'้คๅ',
u'ไฝๅฉ': u'้คๅฉ',
u'ไฝๅฒ': u'้คๅฒ',
u'ไฝๅ': u'้คๅ',
u'ไฝๅ': u'้คๅ',
u'ไฝๅ': u'้คๅ',
u'ไฝๅณ': u'้คๅณ',
u'ไฝๅ': u'้คๅ',
u'ไฝๅ': u'้คๅ',
u'ไฝๅฐ': u'้คๅฐ',
u'ไฝๅขจ': u'้คๅขจ',
u'ไฝๅค': u'้คๅค',
u'ไฝๅฆ': u'้คๅฆ',
u'ไฝๅง': u'้คๅง',
u'ไฝๅจ': u'้คๅจ',
u'ไฝๅญ': u'้คๅญ',
u'ไฝๅญ': u'้คๅญ',
u'ไฝๅญฝ': u'้คๅญฝ',
u'ไฝๅผฆ': u'้คๅผฆ',
u'ไฝๆ': u'้คๆ',
u'ไฝๆธ': u'้คๆธ',
u'ไฝๅบ': u'้คๆ
ถ',
u'ไฝๆฐ': u'้คๆธ',
u'ไฝๆ': u'้คๆ',
u'ไฝๆ ': u'้คๆ ',
u'ไฝๆ': u'้คๆ',
u'ไฝๆ': u'้คๆ',
u'ไฝๆญ': u'้คๆญ',
u'ไฝๆฏ': u'้คๆฏ',
u'ไฝๆก': u'้คๆก',
u'ไฝๆกถ': u'้คๆกถ',
u'ไฝไธ': u'้คๆฅญ',
u'ไฝๆฌพ': u'้คๆฌพ',
u'ไฝๆญฅ': u'้คๆญฅ',
u'ไฝๆฎ': u'้คๆฎ',
u'ไฝๆฏ': u'้คๆฏ',
u'ไฝๆฐ': u'้คๆฐฃ',
u'ไฝๆณข': u'้คๆณข',
u'ไฝๆณข่กๆผพ': u'้คๆณข็ชๆผพ',
u'ไฝๆธฉ': u'้คๆบซ',
u'ไฝๆณฝ': u'้คๆพค',
u'ไฝๆฒฅ': u'้ค็',
u'ไฝ็': u'้ค็',
u'ไฝ็ญ': u'้ค็ฑ',
u'ไฝ็ฌ': u'้ค็ผ',
u'ไฝ็': u'้ค็',
u'ไฝ็': u'้ค็',
u'ไฝไผ': u'้ค็พ',
u'ไฝ็ช': u'้ค็ซ
',
u'ไฝ็ฒฎ': u'้ค็ณง',
u'ไฝ็ปช': u'้ค็ท',
u'ไฝ็ผบ': u'้ค็ผบ',
u'ไฝ็ฝช': u'้ค็ฝช',
u'ไฝ็พก': u'้ค็พจ',
u'ไฝๅฃฐ': u'้ค่ฒ',
u'ไฝ่': u'้ค่',
u'ไฝๅ
ด': u'้ค่',
u'ไฝ่': u'้ค่',
u'ไฝ่ซ': u'้ค่ญ',
u'ไฝ่ฃ': u'้ค่ฃ',
u'ไฝ่ง': u'้ค่ง',
u'ไฝ่ฎบ': u'้ค่ซ',
u'ไฝ่ดฃ': u'้ค่ฒฌ',
u'ไฝ่ฒพ': u'้ค่ฒพ',
u'ไฝ่พ': u'้ค่ผ',
u'ไฝ่พ': u'้ค่พ',
u'ไฝ้
ฒ': u'้ค้
ฒ',
u'ไฝ้ฐ': u'้ค้',
u'ไฝ้ฒ': u'้ค้',
u'ไฝ้ถ': u'้ค้ถ',
u'ไฝ้': u'้ค้',
u'ไฝ้': u'้ค้',
u'ไฝ้ณ': u'้ค้ณ',
u'ไฝ้ณ็ปๆข': u'้ค้ณ็นๆข',
u'ไฝ้ต': u'้ค้ป',
u'ไฝๅ': u'้ค้ฟ',
u'ไฝ้ข': u'้ค้ก',
u'ไฝ้ฃ': u'้ค้ขจ',
u'ไฝ้ฃ': u'้ค้ฃ',
u'ไฝๅ
': u'้ค้ปจ',
u'ไฝ๏ผ': u'้ค๏ผ',
u'ไฝ๏ผ': u'้ค๏ผ',
u'ไฝ๏ผ': u'้ค๏ผ',
u'ไฝ๏ผ': u'้ค๏ผ',
u'ไฝ๏ผ': u'้ค๏ผ',
u'ไฝ๏ผ': u'้ค๏ผ',
u'ไฝ๏ผ': u'้ค๏ผ',
u'ไฝ๏ผ': u'้ค๏ผ',
u'ไฝ๏ผ': u'้ค๏ผ',
u'ไฝ๏ผ': u'้ค๏ผ',
u'้ฆ้ฅจ้ข': u'้ค้ฃฉ้บต',
u'้ฆ่ฐท': u'้คจ็ฉ',
u'้ฆ้': u'้คจ่ฃก',
u'ๅไนณ': u'้คตไนณ',
u'ๅไบ': u'้คตไบ',
u'ๅๅฅถ': u'้คตๅฅถ',
u'ๅ็ป': u'้คต็ตฆ',
u'ๅ็พ': u'้คต็พ',
u'ๅ็ช': u'้คต่ฑฌ',
u'ๅ่ฟ': u'้คต้',
u'ๅ้ธก': u'้คต้',
u'ๅ้ฃ': u'้คต้ฃ',
u'ๅ้ฅฑ': u'้คต้ฃฝ',
u'ๅๅ
ป': u'้คต้ค',
u'ๅ้ฉด': u'้คต้ฉข',
u'ๅ้ฑผ': u'้คต้ญ',
u'ๅ้ธญ': u'้คต้ดจ',
u'ๅ้น
': u'้คต้ต',
u'้ฅฅๅฏ': u'้ฅๅฏ',
u'้ฅฅๆฐ': u'้ฅๆฐ',
u'้ฅฅๆธด': u'้ฅๆธด',
u'้ฅฅๆบบ': u'้ฅๆบบ',
u'้ฅฅ่': u'้ฅ่',
u'้ฅฅ้ฅฑ': u'้ฅ้ฃฝ',
u'้ฅฅ้ฆ': u'้ฅ้ฅ',
u'้ฆๅฝๅถๅฒ': u'้ฆ็ถๅถ่ก',
u'้ฆๅ': u'้ฆ็ผ',
u'้ฆๅช': u'้ฆ้ป',
u'้ฆๅนฒ': u'้ฆไนพ',
u'้ฆๅฑฑๅบ': u'้ฆๅฑฑๅบ',
u'้ฉฌๅนฒ': u'้ฆฌไนพ',
u'้ฉฌๅ ๅฑฑ': u'้ฆฌๅ ๅฑฑ',
u'้ฆฌๅ ๅฑฑ': u'้ฆฌๅ ๅฑฑ',
u'้ฉฌๆ': u'้ฆฌๆ',
u'้ฆฌๆ ผ้ๅธ': u'้ฆฌๆ ผ้ๅธ',
u'้ฉฌๆ ผ้ๅธ': u'้ฆฌๆ ผ้ๅธ',
u'้ฉฌ่กจ': u'้ฆฌ้ถ',
u'้ฉปๆ': u'้ง็ดฎ',
u'้ช่ก': u'้ง่ฉ',
u'่
พๅฒ': u'้จฐ่ก',
u'ๆ่ต': u'้ฉ่ฎ',
u'ๆ้': u'้ฉ้',
u'้ชจๅญ้': u'้ชจๅญ่ฃก',
u'้ชจๅนฒ': u'้ชจๅนน',
u'้ชจ็ฐๅ': u'้ชจ็ฐ็ฝ',
u'้ชจๅ': u'้ชจ็ฝ',
u'้ชจๅคด้ๆฃๅบๆฅ็้ฑๆๅๅพ่': u'้ชจ้ ญ่ฃกๆๅบไพ็้ข็บๅๅพ่',
u'่ฎ่ฎ่่': u'้ชฏ้ชฏ้ซ้ซ',
u'่ฎ่': u'้ชฏ้ซ',
u'่ไนฑ': u'้ซไบ',
u'่ไบ': u'้ซไบ',
u'่ๅฎๅฎ': u'้ซๅฎๅฎ',
u'่ๅญ': u'้ซๅญ',
u'่ๅพ': u'้ซๅพ',
u'่ๅฟ': u'้ซๅฟ',
u'่ไธ่ฅฟ': u'้ซๆฑ่ฅฟ',
u'่ๆฐด': u'้ซๆฐด',
u'่็': u'้ซ็',
u'่่ฏ': u'้ซ่ฉ',
u'่่ฏ': u'้ซ่ฉฑ',
u'่้ฑ': u'้ซ้ข',
u'่ๅ': u'้ซ้ซฎ',
u'ไฝ่': u'้ซ็ฏ',
u'ไฝ็ณป': u'้ซ็ณป',
u'้ซๅ ': u'้ซๅ ',
u'้ซๅนฒๆฐ': u'้ซๅนฒๆพ',
u'้ซๅนฒ้ข': u'้ซๅนฒ้ ',
u'้ซๅนฒ': u'้ซๅนน',
u'้ซๅบฆ่ชๅถ': u'้ซๅบฆ่ชๅถ',
u'้ซๆธ
ๆฟ': u'้ซๆธ
ๆฟ',
u'้ซกๅ': u'้ซก้ซฎ',
u'้ซญ่ก': u'้ซญ้ฌ',
u'้ซญ้กป': u'้ซญ้ฌ',
u'ๅไธๆๅ ': u'้ซฎไธๆๅ ',
u'ๅไธๅฒๅ ': u'้ซฎไธๆฒๅ ',
u'ๅไนณ': u'้ซฎไนณ',
u'ๅๅ
ๅฏ้ด': u'้ซฎๅ
ๅฏ้',
u'ๅๅช': u'้ซฎๅช',
u'ๅๅ': u'้ซฎๅ',
u'ๅๅคน': u'้ซฎๅคพ',
u'ๅๅฆป': u'้ซฎๅฆป',
u'ๅๅง': u'้ซฎๅง',
u'ๅๅฑ': u'้ซฎๅฑ',
u'ๅๅทฒ้็ฝ': u'้ซฎๅทฒ้็ฝ',
u'ๅๅธฆ': u'้ซฎๅธถ',
u'ๅๅป': u'้ซฎๅป',
u'ๅๅผ': u'้ซฎๅผ',
u'ๅๅผๅ้ง': u'้ซฎๅผๅ้',
u'ๅๆ': u'้ซฎๆ',
u'ๅๅท': u'้ซฎๆฒ',
u'ๅๆ น': u'้ซฎๆ น',
u'ๅๆฒน': u'้ซฎๆฒน',
u'ๅๆผ': u'้ซฎๆผ',
u'ๅไธบ่กไนๆฌ': u'้ซฎ็บ่กไนๆฌ',
u'ๅ็ถ': u'้ซฎ็',
u'ๅ็ฃ': u'้ซฎ็ฌ',
u'ๅ็ญๅฟ้ฟ': u'้ซฎ็ญๅฟ้ท',
u'ๅ็ฆ': u'้ซฎ็ฆ',
u'ๅ็ฌบ': u'้ซฎ็ฎ',
u'ๅ็บฑ': u'้ซฎ็ด',
u'ๅ็ป': u'้ซฎ็ต',
u'ๅไธ': u'้ซฎ็ตฒ',
u'ๅ็ฝ': u'้ซฎ็ถฒ',
u'ๅ่': u'้ซฎ่
ณ',
u'ๅ่ค': u'้ซฎ่',
u'ๅ่ถ': u'้ซฎ่ ',
u'ๅ่': u'้ซฎ่',
u'ๅ่ก': u'้ซฎ่ ',
u'ๅ่ธๅฒๅ ': u'้ซฎ่ธๆฒๅ ',
u'ๅ่พซ': u'้ซฎ่พฎ',
u'ๅ้': u'้ซฎ้',
u'ๅ้': u'้ซฎ้ต',
u'ๅ้ฟ': u'้ซฎ้ท',
u'ๅ้
': u'้ซฎ้',
u'ๅ้': u'้ซฎ้',
u'ๅ้': u'้ซฎ้',
u'ๅ้ฅฐ': u'้ซฎ้ฃพ',
u'ๅ้ซป': u'้ซฎ้ซป',
u'ๅ้ฌ': u'้ซฎ้ฌข',
u'้ซฏ่ก': u'้ซฏ้ฌ',
u'้ซผๆพ': u'้ซผ้ฌ',
u'้ฌ
ๆพ': u'้ฌ
้ฌ',
u'ๆพไธๅฃๆฐ': u'้ฌไธๅฃๆฐฃ',
u'ๆพไบ': u'้ฌไบ',
u'ๆพไบ': u'้ฌไบ',
u'ๆพๅ
้ณ': u'้ฌๅ
้ณ',
u'ๆพๅฒ': u'้ฌๅ',
u'ๆพๅจ': u'้ฌๅ',
u'ๆพๅฃ': u'้ฌๅฃ',
u'ๆพๅ': u'้ฌๅ',
u'ๆพๅ': u'้ฌๅ',
u'ๆพๅฎฝ': u'้ฌๅฏฌ',
u'ๆพๅผ': u'้ฌๅผ',
u'ๆพๅฟซ': u'้ฌๅฟซ',
u'ๆพๆ': u'้ฌๆ',
u'ๆพๆ': u'้ฌๆ',
u'ๆพๆ': u'้ฌๆ',
u'ๆพๆฃ': u'้ฌๆฃ',
u'ๆพๆ': u'้ฌๆ',
u'ๆพๆฐ': u'้ฌๆฐฃ',
u'ๆพๆตฎ': u'้ฌๆตฎ',
u'ๆพ็ป': u'้ฌ็ถ',
u'ๆพ็ดง': u'้ฌ็ท',
u'ๆพ็ผ': u'้ฌ็ทฉ',
u'ๆพ่': u'้ฌ่',
u'ๆพ่ฑ': u'้ฌ่ซ',
u'ๆพ่': u'้ฌ่',
u'ๆพ่ตท': u'้ฌ่ตท',
u'ๆพ่ฝฏ': u'้ฌ่ป',
u'ๆพ้': u'้ฌ้',
u'ๆพๅผ': u'้ฌ้',
u'ๆพ้ฅผ': u'้ฌ้ค
',
u'ๆพๆพ': u'้ฌ้ฌ',
u'้ฌๅ': u'้ฌ้ซฎ',
u'่กๅญ': u'้ฌๅญ',
u'่กๆขข': u'้ฌๆขข',
u'่กๆธฃ': u'้ฌๆธฃ',
u'่ก้ซญ': u'้ฌ้ซญ',
u'่ก้ซฏ': u'้ฌ้ซฏ',
u'่ก้กป': u'้ฌ้ฌ',
u'้ฌๅ': u'้ฌ้ซฎ',
u'้กปๆ น': u'้ฌๆ น',
u'้กปๆฏ': u'้ฌๆฏ',
u'้กป็': u'้ฌ็',
u'้กป็': u'้ฌ็',
u'้กปๅ': u'้ฌ้ซฎ',
u'้กป่ก': u'้ฌ้ฌ',
u'้กป้กป': u'้ฌ้ฌ',
u'้กป้ฒจ': u'้ฌ้ฏ',
u'้กป้ฒธ': u'้ฌ้ฏจ',
u'้ฌๅ': u'้ฌข้ซฎ',
u'ๆไธ': u'้ฌฅไธ',
u'ๆไธ่ฟ': u'้ฌฅไธ้',
u'ๆไบ': u'้ฌฅไบ',
u'ๆๆฅๆๅป': u'้ฌฅไพ้ฌฅๅป',
u'ๆๅ': u'้ฌฅๅ',
u'ๆๅๅญ': u'้ฌฅๅๅญ',
u'ๆๅ': u'้ฌฅๅ',
u'ๆๅฒ': u'้ฌฅๅ',
u'ๆ่': u'้ฌฅๅ',
u'ๆๅฃ': u'้ฌฅๅฃ',
u'ๆๅ': u'้ฌฅๅ',
u'ๆๅด': u'้ฌฅๅด',
u'ๆๅฐไธป': u'้ฌฅๅฐไธป',
u'ๆๅฃซ': u'้ฌฅๅฃซ',
u'ๆๅฏ': u'้ฌฅๅฏ',
u'ๆๅทง': u'้ฌฅๅทง',
u'ๆๅนๅญ': u'้ฌฅๅนๅญ',
u'ๆๅผ': u'้ฌฅๅผ',
u'ๆๅผ': u'้ฌฅๅผ',
u'ๆๅซๆฐ': u'้ฌฅๅฝๆฐฃ',
u'ๆๅฝฉ': u'้ฌฅๅฝฉ',
u'ๆๅฟ็ผ': u'้ฌฅๅฟ็ผ',
u'ๆๅฟ': u'้ฌฅๅฟ',
u'ๆ้ท': u'้ฌฅๆถ',
u'ๆๆ': u'้ฌฅๆ',
u'ๆๆ': u'้ฌฅๆ',
u'ๆๆนๆน': u'้ฌฅๆนๆน',
u'ๆๆ': u'้ฌฅๆ',
u'ๆๆ': u'้ฌฅๆ',
u'ๆๆบ': u'้ฌฅๆบ',
u'ๆๆด': u'้ฌฅๆด',
u'ๆๆญฆ': u'้ฌฅๆญฆ',
u'ๆๆฎด': u'้ฌฅๆฏ',
u'ๆๆฐ': u'้ฌฅๆฐฃ',
u'ๆๆณ': u'้ฌฅๆณ',
u'ๆไบ': u'้ฌฅ็ญ',
u'ๆไบๆๅ': u'้ฌฅ็ญ้ฌฅๅ',
u'ๆ็': u'้ฌฅ็',
u'ๆ็ๆ้ฝฟ': u'้ฌฅ็ๆ้ฝ',
u'ๆ็ๆ้ฝฟ': u'้ฌฅ็้ฌฅ้ฝ',
u'ๆ็': u'้ฌฅ็',
u'ๆ็ๅฐ': u'้ฌฅ็่บ',
u'ๆ็ฌ': u'้ฌฅ็ฌ',
u'ๆ็ ': u'้ฌฅ็ ',
u'ๆๅ ': u'้ฌฅ็',
u'ๆ็พ่': u'้ฌฅ็พ่',
u'ๆ็ผ': u'้ฌฅ็ผ',
u'ๆ็งๆนไฟฎ': u'้ฌฅ็งๆนไฟฎ',
u'ๆ่้ธๅ
ต': u'้ฌฅ่้ๅ
ต',
u'ๆ่้ธ้ฅ': u'้ฌฅ่้้',
u'ๆ่': u'้ฌฅ่
ณ',
u'ๆ่ฐ': u'้ฌฅ่ฆ',
u'ๆ่ถ': u'้ฌฅ่ถ',
u'ๆ่': u'้ฌฅ่',
u'ๆๅถๅฟ': u'้ฌฅ่ๅ
',
u'ๆๅถๅญ': u'้ฌฅ่ๅญ',
u'ๆ็': u'้ฌฅ่',
u'ๆ่่': u'้ฌฅ่่',
u'ๆ่ฏ': u'้ฌฅ่ฉฑ',
u'ๆ่ณ': u'้ฌฅ่ฑ',
u'ๆ่ตท': u'้ฌฅ่ตท',
u'ๆ่ถฃ': u'้ฌฅ่ถฃ',
u'ๆ้ฒๆฐ': u'้ฌฅ้ๆฐฃ',
u'ๆ้ธก': u'้ฌฅ้',
u'ๆ้ช็บข': u'้ฌฅ้ช็ด
',
u'ๆๅคด': u'้ฌฅ้ ญ',
u'ๆ้ฃ': u'้ฌฅ้ขจ',
u'ๆ้ฅค': u'้ฌฅ้ฃฃ',
u'ๆๆ': u'้ฌฅ้ฌฅ',
u'ๆๅ': u'้ฌฅ้ฌจ',
u'ๆ้ฑผ': u'้ฌฅ้ญ',
u'ๆ้ธญ': u'้ฌฅ้ดจ',
u'ๆ้น้น': u'้ฌฅ้ตช้ถ',
u'ๆไธฝ': u'้ฌฅ้บ',
u'้น็็ฉๅฟ': u'้ฌง่็ฉๅ',
u'้น่กจ': u'้ฌง้ถ',
u'้น้': u'้ฌง้',
u'ๅๅจ': u'้ฌจๅ',
u'ๅๅ ': u'้ฌจๅ ',
u'ๅ็ฌ': u'้ฌจ็ฌ',
u'้ไผ': u'้ฌฑไผ',
u'้ๅ': u'้ฌฑๅ',
u'้ๅ': u'้ฌฑๅ',
u'้ๅ': u'้ฌฑๅ',
u'้ๅ ไธๅถ': u'้ฌฑๅ ไธๅถ',
u'้ๅก': u'้ฌฑๅก',
u'้ๅ': u'้ฌฑๅฃ',
u'้ๅพ': u'้ฌฑๅพ',
u'้ๆ': u'้ฌฑๆ',
u'้้ท': u'้ฌฑๆถ',
u'้ๆค': u'้ฌฑๆค',
u'้ๆ': u'้ฌฑๆ',
u'้ๆน': u'้ฌฑๆน',
u'้ๆ': u'้ฌฑๆ',
u'้ๆฐ': u'้ฌฑๆฐฃ',
u'้ๆฑ': u'้ฌฑๆฑ',
u'้ๆฒๆฒ': u'้ฌฑๆฒๆฒ',
u'้ๆณฑ': u'้ฌฑๆณฑ',
u'้็ซ': u'้ฌฑ็ซ',
u'้็ญ': u'้ฌฑ็ฑ',
u'้็ ': u'้ฌฑ็ ',
u'้็': u'้ฌฑ็',
u'้็งฏ': u'้ฌฑ็ฉ',
u'้็บก': u'้ฌฑ็ด',
u'้็ป': u'้ฌฑ็ต',
u'้่ธ': u'้ฌฑ่ธ',
u'้่': u'้ฌฑ่',
u'้่ก': u'้ฌฑ่ก',
u'้้': u'้ฌฑ้',
u'้้': u'้ฌฑ้',
u'้้': u'้ฌฑ้',
u'้้ญ': u'้ฌฑ้',
u'้้ถ': u'้ฌฑ้ถ',
u'้้ไธๅนณ': u'้ฌฑ้ฌฑไธๅนณ',
u'้้ไธไน': u'้ฌฑ้ฌฑไธๆจ',
u'้้ๅฏกๆฌข': u'้ฌฑ้ฌฑๅฏกๆญก',
u'้้่็ป': u'้ฌฑ้ฌฑ่็ต',
u'้้่ฑ่ฑ': u'้ฌฑ้ฌฑ่ฅ่ฅ',
u'้้ป': u'้ฌฑ้ป',
u'้ฌผ่ฐทๅญ': u'้ฌผ่ฐทๅญ',
u'้ญ็ตๆขฆ็ณป': u'้ญ็ฝๅคข็นซ',
u'้ญๅพ': u'้ญๅพต',
u'้ญ่กจ': u'้ญ้ถ',
u'้ฑผๅนฒ': u'้ญไนพ',
u'้ฑผๆพ': u'้ญ้ฌ',
u'้ฒธ้กป': u'้ฏจ้ฌ',
u'้ฒ้ฑผ': u'้ฏฐ้ญ',
u'้ธ ๅ ้นๅทข': u'้ณฉไฝ้ตฒๅทข',
u'ๅคๅฐไบ้ฃ': u'้ณณๅฐไบ้ฃ',
u'ๅคๆขจๅนฒ': u'้ณณๆขจไนพ',
u'้ธฃ้': u'้ณด้',
u'้ธฟๆก็ธๅบ': u'้ดปๆก็ธ่',
u'้ธฟ่': u'้ดป็ฏ',
u'้ธฟ็ฏๅทจๅถ': u'้ดป็ฏๅทจ่ฃฝ',
u'้น
ๅ': u'้ตๆบ',
u'้นๅ': u'้ต ้ซฎ',
u'้ๅฟ้็ช': u'้ตฐๅฟ้็ช',
u'้ๆ': u'้ตฐๆ',
u'้็ฟ': u'้ตฐ็ฟ',
u'้้น': u'้ตฐ้ถ',
u'้นคๅ': u'้ถดๅผ',
u'้นคๅ': u'้ถด้ซฎ',
u'้นฐ้': u'้นฐ้ตฐ',
u'ๅธๅณ': u'้นนๅณ',
u'ๅธๅดๆทก่': u'้นนๅดๆทก่',
u'ๅธๅ': u'้นนๅ',
u'ๅธๅบฆ': u'้นนๅบฆ',
u'ๅธๅพ': u'้นนๅพ',
u'ๅธๆน': u'้นนๆน',
u'ๅธๆฐด': u'้นนๆฐด',
u'ๅธๆดพ': u'้นนๆดพ',
u'ๅธๆตท': u'้นนๆตท',
u'ๅธๆทก': u'้นนๆทก',
u'ๅธๆน': u'้นนๆน',
u'ๅธๆฑค': u'้นนๆนฏ',
u'ๅธๆฝ': u'้นนๆฝ',
u'ๅธ็': u'้นน็',
u'ๅธ็ฒฅ': u'้นน็ฒฅ',
u'ๅธ่': u'้นน่',
u'ๅธ่': u'้นน่',
u'ๅธ่ๅนฒ': u'้นน่ไนพ',
u'ๅธ่': u'้นน่',
u'ๅธ็ช่': u'้นน่ฑฌ่',
u'ๅธ็ฑป': u'้นน้ก',
u'ๅธ้ฃ': u'้นน้ฃ',
u'ๅธ้ฑผ': u'้นน้ญ',
u'ๅธ้ธญ่': u'้นน้ดจ่',
u'ๅธๅค': u'้นน้นต',
u'ๅธๅธ': u'้นน้นน',
u'็ๆๆไนๅธ': u'้นฝๆๆ้บผ้นน',
u'็ๅค': u'้นฝๆปท',
u'็ไฝ': u'้นฝ้ค',
u'ไธฝไบ': u'้บๆผ',
u'ๆฒๅฐ': u'้บดๅกต',
u'ๆฒ่': u'้บดๆซฑ',
u'ๆฒ็': u'้บด็',
u'ๆฒ็งๆ': u'้บด็งๆ',
u'ๆฒ่': u'้บด่',
u'ๆฒ่ฝฆ': u'้บด่ป',
u'ๆฒ้ๅฃซ': u'้บด้ๅฃซ',
u'ๆฒ้ฑ': u'้บด้ข',
u'ๆฒ้ข': u'้บด้ข',
u'ๆฒ้': u'้บด้ปด',
u'้ขไบบๅฟ': u'้บตไบบๅ',
u'้ขไปท': u'้บตๅน',
u'้ขๅ': u'้บตๅ',
u'้ขๅ': u'้บตๅ',
u'้ขๅฏๅฟ': u'้บตๅฏๅ',
u'้ขๅก': u'้บตๅก',
u'้ขๅบ': u'้บตๅบ',
u'้ขๅ': u'้บตๅป ',
u'้ขๆ': u'้บตๆค',
u'้ขๆ': u'้บตๆ',
u'้ขๆก': u'้บตๆข',
u'้ขๆฑค': u'้บตๆนฏ',
u'้ขๆต': u'้บตๆผฟ',
u'้ข็ฐ': u'้บต็ฐ',
u'้ข็็ฉ': u'้บต็็ฉ',
u'้ข็ฎ': u'้บต็ฎ',
u'้ข็ ๅฟ': u'้บต็ขผๅ
',
u'้ข็ญ': u'้บต็ญ',
u'้ข็ฒ': u'้บต็ฒ',
u'้ข็ณ': u'้บต็ณ',
u'้ขๅข': u'้บต็ณฐ',
u'้ข็บฟ': u'้บต็ท',
u'้ข็ผธ': u'้บต็ผธ',
u'้ข่ถ': u'้บต่ถ',
u'้ข้ฃ': u'้บต้ฃ',
u'้ข้ฅบ': u'้บต้ค',
u'้ข้ฅผ': u'้บต้ค
',
u'้ข้ฆ': u'้บต้คจ',
u'้บป่ฏ': u'้บป่ฅ',
u'้บป้่ฏ': u'้บป้่ฅ',
u'้บป้
ฑ้ข': u'้บป้ฌ้บต',
u'้ปๅนฒ้ป็ฆ': u'้ปไนพ้ป็ฆ',
u'้ปๅ': u'้ปๆ',
u'้ปๆฒ้': u'้ปๆฒ้',
u'้ปๅๅฒ': u'้ปๆญทๅฒ',
u'้ป้่กจ': u'้ป้่กจ',
u'้ป้บ็ญ': u'้ป้บ็ญ',
u'้ป้ฐ็ญ': u'้ป้บ็ญ',
u'้ป้': u'้ป้',
u'้ปๅ': u'้ป้ซฎ',
u'้ปๆฒๆฏ็ด ': u'้ป้บดๆฏ็ด ',
u'้ปๅฅดๅๅคฉๅฝ': u'้ปๅฅด็ฑฒๅคฉ้',
u'้ปๅ': u'้ป้ซฎ',
u'็นๅ้': u'้ปๅ้',
u'็นๅค้': u'้ปๅค้',
u'็น้': u'้ป่ฃก',
u'็น้': u'้ป้',
u'้ๆฏ': u'้ปดๆฏ',
u'้็ด ': u'้ปด็ด ',
u'้่': u'้ปด่',
u'้้ป': u'้ปด้ป',
u'้้ปง': u'้ปด้ปง',
u'้ผ้': u'้ผ่ฃก',
u'ๅฌๅฌ้ผ': u'้ผ้ผ้ผ',
u'้ผ ่ฏ': u'้ผ ่ฅ',
u'้ผ ๆฒ่': u'้ผ ้บด่',
u'้ผปๆขๅฟ': u'้ผปๆขๅ
',
u'้ผปๆข': u'้ผปๆจ',
u'้ผปๅ': u'้ผปๆบ',
u'้ฝ็่็': u'้ฝ็ๆจ็',
u'้ฝๅบ': u'้ฝ่',
u'้ฝฟๅฑๅ็ง': u'้ฝๅฑ้ซฎ็ง',
u'้ฝฟ่ฝๅ็ฝ': u'้ฝ่ฝ้ซฎ็ฝ',
u'้ฝฟๅ': u'้ฝ้ซฎ',
u'ๅบๅฟ': u'้ฝฃๅ
',
u'ๅบๅง': u'้ฝฃๅ',
u'ๅบๅจ็ป': u'้ฝฃๅ็ซ',
u'ๅบๅก้': u'้ฝฃๅก้',
u'ๅบๆ': u'้ฝฃๆฒ',
u'ๅบ่็ฎ': u'้ฝฃ็ฏ็ฎ',
u'ๅบ็ตๅฝฑ': u'้ฝฃ้ปๅฝฑ',
u'ๅบ็ต่ง': u'้ฝฃ้ป่ฆ',
u'้พๅท': u'้พๆฒ',
u'้พ็ผๅนฒ': u'้พ็ผไนพ',
u'้พ้กป': u'้พ้ฌ',
u'้พๆ่ไผค': u'้พ้ฌฅ่ๅท',
u'้พๅฑฑๅบ': u'้พๅฑฑๅบ',
u'๏ผๅๅถ': u'๏ผๅๅถ',
u'๏ผๅๅถ': u'๏ผๅๅถ',
u'๏ผๅคๅช': u'๏ผๅค้ป',
u'๏ผๅคฉๅ': u'๏ผๅคฉๅพ',
u'๏ผๅช': u'๏ผ้ป',
u'๏ผไฝ': u'๏ผ้ค',
u'๏ผๅคฉๅ': u'๏ผๅคฉๅพ',
u'๏ผๅช': u'๏ผ้ป',
u'๏ผไฝ': u'๏ผ้ค',
u'๏ผๅคฉๅ': u'๏ผๅคฉๅพ',
u'๏ผๅช': u'๏ผ้ป',
u'๏ผไฝ': u'๏ผ้ค',
u'๏ผๅคฉๅ': u'๏ผๅคฉๅพ',
u'๏ผๅช': u'๏ผ้ป',
u'๏ผไฝ': u'๏ผ้ค',
u'๏ผๅคฉๅ': u'๏ผๅคฉๅพ',
u'๏ผๅช': u'๏ผ้ป',
u'๏ผไฝ': u'๏ผ้ค',
u'๏ผๅคฉๅ': u'๏ผๅคฉๅพ',
u'๏ผๅช': u'๏ผ้ป',
u'๏ผไฝ': u'๏ผ้ค',
u'๏ผๅคฉๅ': u'๏ผๅคฉๅพ',
u'๏ผๅช': u'๏ผ้ป',
u'๏ผไฝ': u'๏ผ้ค',
u'๏ผๅคฉๅ': u'๏ผๅคฉๅพ',
u'๏ผๅช': u'๏ผ้ป',
u'๏ผไฝ': u'๏ผ้ค',
u'๏ผๅคฉๅ': u'๏ผๅคฉๅพ',
u'๏ผๅช': u'๏ผ้ป',
u'๏ผไฝ': u'๏ผ้ค',
u'๏ผๅคฉๅ': u'๏ผๅคฉๅพ',
u'๏ผๅช': u'๏ผ้ป',
u'๏ผไฝ': u'๏ผ้ค',
u'๏ผๅๅถ': u'๏ผๅๅถ',
u'๏ผๅๅถ': u'๏ผๅๅถ',
u'๏ผๅๅถ': u'๏ผๅๅถ',
}
# end of file: langconv/defaulttables/zh_hant.py (AdvancedLangConv-0.01)
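
# ---------------------------------------------------------------------------
# Minimal usage sketch (not part of the original package).  A plausible way
# to apply a mapping table like `convtable` above is greedy longest-match
# replacement; `convert` below is a hypothetical helper written here for
# illustration only, not an API this package is known to provide.
# ---------------------------------------------------------------------------
def convert(text, table):
    """Rewrite text using table, always preferring the longest matching key."""
    if not table:
        return text
    max_len = max(len(key) for key in table)   # longest key bounds the lookahead
    out, i = [], 0
    while i < len(text):
        # Try the widest window first, shrinking until a rule matches.
        for width in range(min(max_len, len(text) - i), 0, -1):
            chunk = text[i:i + width]
            if chunk in table:
                out.append(table[chunk])
                i += width
                break
        else:
            out.append(text[i])                # no rule matched: copy one char
            i += 1
    return ''.join(out)


# file: langconv/defaulttables/zh_cn.py (AdvancedLangConv-0.01)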
from zh_hans import convtable as oldtable
convtable = oldtable.copy()
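# Layering pattern: this module starts from the shared zh_hans base table
# (imported above), then overlays region-specific entries via the update()
# call below.  Because later entries win in dict.update(), any key repeated
# here overrides the inherited base mapping; untouched keys fall through.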
convtable.update({
u'16้ฒไฝ': u'16่ฟไฝ',
u'16้ฒไฝๅถ': u'16่ฟไฝๅถ',
u'ใ': u'โ',
u'ใ': u'โ',
u'ใ': u'โ',
u'ใ': u'โ',
u'่ฌๆ': u'ไธๅ',
u'ไธๆฅต้ซ': u'ไธๆ็ฎก',
u'ไธๆฅต็ฎก': u'ไธๆ็ฎก',
u'ไธฒๅๅ ้ๅจ': u'ไธฒๅๅ ้ๅจ',
u'ไธฒๅ': u'ไธฒ่ก',
u'็่ฒๅฅๅ': u'ไนๅนๅซๅๆฏๅฆ',
u'่้': u'ไน้จ',
u'่ๅฃซ': u'ไนพ้
ช',
u'ไบๆฅต็ฎก': u'ไบๆ็ฎก',
u'ไบๆฅต้ซ': u'ไบๆ็ฎก',
u'ไบ้ฒไฝๅถ': u'ไบ่ฟไฝๅถ',
u'ไบ้ฒไฝ': u'ไบ่ฟๅถ',
u'็ถฒ้็ถฒ่ทฏ': u'ไบ่็ฝ',
u'ไบ่ฏ็ถฒ': u'ไบ่็ฝ',
u'ไบๅๅผ': u'ไบคไบๅผ',
u'ไบบๅทฅๆบๆ
ง': u'ไบบๅทฅๆบ่ฝ',
u'็้บฝ': u'ไปไน',
u'็้บผ': u'ไปไน',
u'ไนๅคช็ถฒ': u'ไปฅๅคช็ฝ',
u'่ช็ฑ็': u'ไปปๆ็',
u'ๅชๅ
้ ๅบ': u'ไผๅ
็บง',
u'ๆๆธฌ': u'ไผ ๆ',
u'ไผฏๅฉ่ฒ': u'ไผฏๅฉๅน',
u'่ฒ้ๆฏ': u'ไผฏๅฉๅน',
u'้ป้ฃๅ': u'ไฝๅพ',
u'็ถญๅพท่ง': u'ไฝๅพ่ง',
u'ๅธธๅผ': u'ไพ็จ',
u'ไพๅธ็ด': u'ไพ็ฝ็บช',
u'ๆตท็': u'ไพฏ่ตๅ ',
u'ๆๅธถๅ': u'ไพฟๆบๅผ',
u'่ณ่จ็่ซ': u'ไฟกๆฏ่ฎบ',
u'ๆฏ้ณ': u'ๅ้ณ',
u'ๆธธๆจ': u'ๅๆ ',
u'ๅ็ข': u'ๅ็',
u'ๅ็ขๆฉ': u'ๅ้ฉฑ',
u'ๆฏๆ้ ': u'ๅๆ้กฟ',
u'ๅ็พๅ่ฅฟไบ': u'ๅ็ฝๅฐไบ',
u'้ฒ็': u'ๅฅ็',
u'ๅจๅฝข': u'ๅจ่ง',
u'ๅซ้ฒไฝๅถ': u'ๅซ่ฟไฝๅถ',
u'ๅซ้ฒไฝ': u'ๅซ่ฟๅถ',
u'ๅฌ่ป': u'ๅฌๅฑๆฑฝ่ฝฆ',
u'ๅฌ่ปไธๆธ': u'ๅฌ่ฝฆไธไนฆ',
u'ๅญ้ฒไฝๅถ': u'ๅญ่ฟไฝๅถ',
u'ๅญ้ฒไฝ': u'ๅญ่ฟๅถ',
u'่จๆถ้ซ': u'ๅๅญ',
u'็ๆฏไบ': u'ๅๆฏไบ',
u'้ฒๅฏซ': u'ๅไฟๆค',
u'ๅท่': u'ๅ่',
u'ๅท็ค': u'ๅ่',
u'ๅนพๅงไบๆฏ็ดข': u'ๅ ๅไบๆฏ็ป',
u'ๆขต่ฐท': u'ๅก้ซ',
u'่จ็จ่ป': u'ๅบ็ง่ฝฆ',
u'ๅๆฃๅผ': u'ๅๅธๅผ',
u'่งฃๆๅบฆ': u'ๅ่พจ็',
u'ๅๆฏๆฆๆฏ็ป': u'ๅๆฏๆฆๅฃซ็ป',
u'่ณดๆฏ็ไบ': u'ๅฉๆฏ้ไบ',
u'่ฟฆ็ด': u'ๅ ็บณ',
u'ๅ ๅฝญ': u'ๅ ่ฌ',
u'่ผๅ
ฅ': u'ๅ ่ฝฝ',
u'ๅ้ฒไฝๅถ': u'ๅ่ฟไฝๅถ',
u'ๅ้ฒไฝ': u'ๅ่ฟๅถ',
u'ๅๅฝข': u'ๅ่ง',
u'ๅไน่ก': u'ๅไน่ก',
u'ๆณขๆญ้ฃ': u'ๅ่จ็ฆ็บณ',
u'็งๅฎ้': u'ๅขๆบ่พพ',
u'่ก็': u'ๅซ็',
u'่ก็': u'ๅซ็',
u'็ๅฐ้ฆฌๆ': u'ๅฑๅฐ้ฉฌๆ',
u'ๅ็ๅค': u'ๅ็ๅคๅฐ',
u'ๅ็ๅค็พ': u'ๅ็ๅคๅฐ',
u'ๅ็ๅคๅฐ': u'ๅ็ๅคๅฐ',
u'ๅๅฉๅไบ': u'ๅ็ซ็น้ไบ',
u'่ฎๆธ': u'ๅ้',
u'ๆ็': u'ๅฐ็',
u'ๆก็': u'ๅฐ็',
u'ๅๅธๅฐ': u'ๅๅธๆ',
u'ๅ่ฉๅ
': u'ๅ่จๅ
ๆฏๅฆ',
u'ๅฅๆฏๅคง้ปๅ ': u'ๅฅๆฏ่พพ้ปๅ ',
u'้่จ': u'ๅชๅฃฐ',
u'ๅ ๆธ': u'ๅ ๅญ',
u'ๅ็ฆ้ญฏ': u'ๅพ็ฆๅข',
u'ๅๅบซๆผ': u'ๅๅบๆผๆฏๅฆ',
u'่้ฒ่ฅฟไบ': u'ๅฃๅข่ฅฟไบ',
u'่ๅๆฏ็ดๅๆฏ': u'ๅฃๅบ่จๅๅฐผ็ปดๆฏ',
u'่ๅ
้ๆฏๅค็ฆๅๅฐผ็ถญๆฏ': u'ๅฃๅบ่จๅๅฐผ็ปดๆฏ',
u'่ๆๆฃฎๅๆ ผ็้ฃไธ': u'ๅฃๆๆฃฎ็นๅๆ ผๆ็บณไธๆฏ',
u'่้ฆฌๅฉ่ซพ': u'ๅฃ้ฉฌๅ่ฏบ',
u'่ไบ้ฃ': u'ๅญไบ้ฃ',
u'ๅฆๅฐๅฐผไบ': u'ๅฆๆกๅฐผไบ',
u'่กฃ็ดขๆฏไบ': u'ๅๅกไฟๆฏไบ',
u'่กฃ็ดขๅนไบ': u'ๅๅกไฟๆฏไบ',
u'ๅ่ฝ่ฎๆธๅ็จฑ': u'ๅๅ',
u'ๅ้ๅทดๆฏ': u'ๅบ้ๅทดๆฏ',
u'ๅกๅๅ
': u'ๅกๅๅ
ๆฏๅฆ',
u'ๅกๆๅฉๆ': u'ๅกๆๅฉๆ',
u'ๅกๆฎๅๆฏ': u'ๅกๆตฆ่ทฏๆฏ',
u'ๅกๅธญ็พ': u'ๅก่ๅฐ',
u'้ณๆๅก': u'ๅฃฐๅก',
u'ๅค็ฑณๅฐผๅ
': u'ๅค็ฑณๅฐผๅ ๅฝ',
u'ๅคๅญฆ': u'ๅคๆ ก',
u'็ฆๅฃซ': u'ๅคงไผ',
u'็ฆๆฏ': u'ๅคงไผ',
u'ๅคง่ก็ขงๅธ': u'ๅคงๅซยท่ดๅ
ๆฑๅง',
u'้ ญๆง': u'ๅคด็',
u'่ณๅฃซ': u'ๅฅ้ฉฐ',
u'ๅนณๆฒป': u'ๅฅ้ฉฐ',
u'ๅฟๅป': u'ๅฅถๆฒน',
u'ๅญๅไผ': u'ๅญๅไผ',
u'ๅญๅๆ': u'ๅญๅไผ',
u'ๅญๅๆฟ': u'ๅญๅๆต',
u'ๅญๅๆต': u'ๅญๅๆต',
u'ๅญๅๅคงๅฐ': u'ๅญๅท',
u'ๅญๅๆช': u'ๅญๅบ',
u'ๆฌไฝ': u'ๅญๆฎต',
u'ๅญๅ': u'ๅญ็ฌฆ',
u'ๅญ็ฏ': u'ๅญ่',
u'ไฝๅ็ต': u'ๅญ่',
u'ๅญๆช': u'ๅญ็',
u'ๅฎๅฐๅกๅๅทดๅธ้': u'ๅฎๆ็ๅๅทดๅธ่พพ',
u'ๅทจ้': u'ๅฎ',
u'ๅฏฌ้ ป': u'ๅฎฝๅธฆ',
u'ๅฎๅ': u'ๅฏปๅ',
u'ๅฅๅๅฉไบ': u'ๅฐผๆฅๅฉไบ',
u'ๅฐผๆฅๅฉไบ': u'ๅฐผๆฅๅฉไบ',
u'ๅฐผๆฅๅฉไบ': u'ๅฐผๆฅๅฉไบ',
u'ๅฐผๆฅ็พ': u'ๅฐผๆฅๅฐ',
u'ๅฐผๆฅๅฐ': u'ๅฐผๆฅๅฐ',
u'็ซ ็ฏ้่จป': u'ๅฐพๆณจ',
u'ๅๅ็ถฒ': u'ๅฑๅ็ฝ',
u'้
่ณ': u'ๅทจๅ',
u'ๅทด่ฒๅค': u'ๅทดๅทดๅคๆฏ',
u'ๅทดๅธไบ็ดๅนพๅ
งไบ': u'ๅทดๅธไบๆฐๅ ๅ
ไบ',
u'ๅธๅธ': u'ๅธไป',
u'ๅธๆฎ': u'ๅธไป',
u'ๅธๅบ็ดๆณ็ดข': u'ๅธๅบ็บณๆณ็ดข',
u'ๅธๅ็ดๆณ็ดข': u'ๅธๅบ็บณๆณ็ดข',
u'ๅธๅธไบ': u'ๅธๅธไบ',
u'ๅธๅธไบ': u'ๅธๅธไบ',
u'่ฒ้ๅฐ': u'ๅธ้่ฟช',
u'ๅธ็นๆ': u'ๅธ็นๅ',
u'ๅธ็': u'ๅธๅณ',
u'ๅนณๆฒปไนไนฑ': u'ๅนณๆฒปไนไนฑ',
u'ๅนณๆฒปไนไบ': u'ๅนณๆฒปไนไนฑ',
u'้ๅๆญฅ': u'ๅผๆญฅ',
u'่ฟดๅ': u'ๅพช็ฏ',
u'ๅฟซ้่จๆถ้ซ': u'ๅฟซ้ชๅญๅจๅจ',
u'ๅฏๆตๆ': u'ๆป็บฟ',
u'็พฉๅคงๅฉ': u'ๆๅคงๅฉ',
u'้ปๅฎๅจ': u'ๆดๅฎๅจ',
u'ๅฑไปท': u'ๆฟไปท',
u'็ดข็พ้็พคๅณถ': u'ๆ็ฝ้จ็พคๅฒ',
u'ๆๅฐ': u'ๆๅฐ',
u'ๅๅฐ': u'ๆๅฐ',
u'ๅฐ่กจๆฉ': u'ๆๅฐๆบ',
u'ๆๅฐๆฉ': u'ๆๅฐๆบ',
u'ๅฐ้': u'ๆ้จ',
u'ๆ็ๅจ': u'ๆซ็ไปช',
u'ๆฌๅผง': u'ๆฌๅท',
u'ๆฟ็ ดๅด': u'ๆฟ็ ดไป',
u'็ฉๆถ': u'ๆท่ฑน',
u'ไป้ข': u'ๆฅๅฃ',
u'ๆงๅถ้': u'ๆงไปถ',
u'่ณๆๅบซ': u'ๆฐๆฎๅบ',
u'ๆฑถ่': u'ๆ่ฑ',
u'ๅฒ็ฆๆฟ่ญ': u'ๆฏๅจๅฃซๅฐ',
u'ๆฏๆด็ถญๅฐผไบ': u'ๆฏๆดๆๅฐผไบ',
u'็ด่ฅฟ่ญ': u'ๆฐ่ฅฟๅฐ',
u'ๅณ้ฃ้บต': u'ๆนไพฟ้ข',
u'ๅฟซ้้ข': u'ๆนไพฟ้ข',
u'ๆณก้บต': u'ๆนไพฟ้ข',
u'้้ฃ้บต': u'ๆนไพฟ้ข',
u'ไผบๆๅจ': u'ๆๅกๅจ',
u'ๆฉๆขฐไบบ': u'ๆบๅจไบบ',
u'ๆฉๅจไบบ': u'ๆบๅจไบบ',
u'่จฑๅฏๆฌ': u'ๆ้',
u'ๅฏถ็
': u'ๆ ๅฟ',
u'ๆ ผ็้ฃ้': u'ๆ ผๆ็บณ่พพ',
u'ๆฆดๆงค': u'ๆฆด่ฒ',
u'ๆฆดๆขฟ': u'ๆฆด่ฒ',
u'่
ๅฉๅกๅฐผไบ': u'ๆฏ้ๅกๅฐผไบ',
u'ๆฏ้่ฃๆฏ': u'ๆฏ้ๆฑๆฏ',
u'ๆจก้่ฅฟๆฏ': u'ๆฏ้ๆฑๆฏ',
u'ๅไน': u'ๆฐไน',
u'ไธญๆจ': u'ๆฐไน',
u'ๆฐธๆ': u'ๆฐธๅ',
u'ๆฒๅฐ้ฟๆไผฏ': u'ๆฒ็น้ฟๆไผฏ',
u'ๆฒ็ๅฐ้ฟๆไผฏ': u'ๆฒ็น้ฟๆไผฏ',
u'ๆณขๅฃซๅฐผไบ่ตซๅกๅฅ็ถญ็ด': u'ๆณขๆฏๅฐผไบๅ้ปๅกๅฅ็ปด้ฃ',
u'่พๅทดๅจ': u'ๆดฅๅทดๅธ้ฆ',
u'ๅฎ้ฝๆๆฏ': u'ๆดช้ฝๆๆฏ',
u'ๆปฟ16้ฒไฝ': u'ๆปก16่ฟไฝ',
u'ๆปฟไบ้ฒไฝ': u'ๆปกไบ่ฟไฝ',
u'ๆปฟๅซ้ฒไฝ': u'ๆปกๅซ่ฟไฝ',
u'ๆปฟๅญ้ฒไฝ': u'ๆปกๅญ่ฟไฝ',
u'ๆปฟๅๅญ้ฒไฝ': u'ๆปกๅๅญ่ฟไฝ',
u'ๆปฟๅ้ฒไฝ': u'ๆปกๅ่ฟไฝ',
u'่็ซ้': u'็ซ้
็ๅธฝ',
u'ๅ้้ๆ่ฒๅฅ': u'็น็ซๅฐผ่พพๅๆๅทดๅฅ',
u'็้ป': u'็ฌๅช',
u'ๅกไฝฉ้
่': u'็ๅฆฎๅผยทๅกๆฎ้ไบ่',
u'่ซพ้ญฏ': u'็้ฒ',
u'่ฌ้ฃๆ': u'็ฆๅช้ฟๅพ',
u'ๆบซ็ดๅ': u'็ฆๅช้ฟๅพ',
u'็ข็': u'็็',
u'็ญ่จ': u'็ญไฟก',
u'็ฐก่จ': u'็ญไฟก',
u'็ฝๅฐ': u'็ฝๅฐ',
u'็ฝๅกต': u'็ฝๅฐ',
u'็ฝ่บ': u'็ฝ่บ',
u'็ฝ้ข': u'็ฝ้ข',
u'็ฝ้ผ': u'็ฝ้ข',
u'็ฝ': u'็ก',
u'็ฝ็': u'็ก็',
u'็ฝ่ฐท': u'็ก่ฐท',
u'็กฌ้ซ': u'็กฌไปถ',
u'็กฌ็ข': u'็กฌ็',
u'็ฃ็ข': u'็ฃ็',
u'็ฃ่ป': u'็ฃ้',
u'่ๆฉ': u'็งๆฉ็ฝ',
u'่ฑก็ๆตทๅฒธ': u'็ง็น่ฟช็ฆ',
u'่กๅ้ป่ฉฑ': u'็งปๅจ็ต่ฏ',
u'ๆตๅ้ป่ฉฑ': u'็งปๅจ็ต่ฏ',
u'็จๅผๆงๅถ': u'็จๆง',
u'็ชๅฐผ่ฅฟไบ': u'็ชๅฐผๆฏ',
u'่ฐๆ': u'็ฌๆ',
u'็ญๆผ': u'็ญไบ',
u'้็ฎๅ
': u'็ฎๅญ',
u'ๆผ็ฎๆณ': u'็ฎๆณ',
u'้ก้ฒ็': u'็ฒๅ
ฅ็',
u'็ดข้ฆฌๅฉไบ': u'็ดข้ฉฌ้',
u'็ถฒ่ทฏ': u'็ฝ็ป',
u'็ถฒ็ตก': u'็ฝ็ป',
u'ๅฏฎๅ': u'่ๆ',
u'่ฏ้
': u'่ฏๅฐผไบ',
u'่ฏไบ': u'่ฏๅฐผไบ',
u'่ช็ฑ็ๅ': u'่ช็ฑ็ๅ',
u'่ช็ฑ็ๅก': u'่ช็ฑ็ๅ',
u'ๅฎ่ป': u'่ช่ก่ฝฆ',
u'ๅคช็ฉบๆขญ': u'่ชๅคฉ้ฃๆบ',
u'็ฉฟๆขญๆฉ': u'่ชๅคฉ้ฃๆบ',
u'็ฏๆถ': u'่ๆฅ',
u'ๆถๅ': u'่ฏ็',
u'ๆถ็': u'่ฏ็',
u'่ๅฉๅ': u'่้ๅ',
u'ๅฃซๅคๅคๆขจ': u'่่',
u'่ซไธๆฏๅ
': u'่ซๆกๆฏๅ
',
u'่ณด็ดขๆ': u'่ฑ็ดขๆ',
u'่พญๅฝ': u'่ฏๆฑ',
u'็่ช': u'่ฏ็ป',
u'่ชฟๅถ่งฃ่ชฟๅจ': u'่ฐๅถ่งฃ่ฐๅจ',
u'ๆธๆๆฉ': u'่ฐๅถ่งฃ่ฐๅจ',
u'่ฒๅ': u'่ดๅฎ',
u'ๅฐๆฏไบ': u'่ตๆฏไบ',
u'็ป็ดง่ทณ': u'่นฆๆ่ทณ',
u'็ฌจ่ฑฌ่ทณ': u'่นฆๆ่ทณ',
u'่ป้ซ': u'่ฝฏไปถ',
u'่ปไปถ': u'่ฝฏไปถ',
u'่ป็ขๆฉ': u'่ฝฏ้ฉฑ',
u'็ฑณ้ซๅฅง้ฒ': u'่ฟๅๅฐยทๆฌงๆ',
u'่้บฅๅ ': u'่ฟๅๅฐยท่้ฉฌ่ตซ',
u'้ ็จๆงๅถ': u'่ฟ็จๆงๅถ',
u'่ฟ็จๆงๅถ': u'่ฟ็จๆงๅถ',
u'ไบๅกๆ็ถ': u'้ฟๅกๆ็',
u'้ฟๆไผฏ่ฏๅๅคงๅฌๅ': u'้ฟๆไผฏ่ๅ้้ฟๅฝ',
u'ๆฃ้ฑ': u'้ถ้ฑ',
u'ๅ้': u'้ฉๅฝ',
u'้ฆฌ็พๅฐๅคซ': u'้ฉฌๅฐไปฃๅคซ',
u'ๆฒ่ฌ': u'้ฉฌๆ็นยท่จ่ฌ',
u'้ฆฌ็พไป': u'้ฉฌ่ณไป',
u'่ฌไบๅพ': u'้ฉฌ่ช่พพ',
u'้ฆฌๅฉๅฑๅๅ': u'้ฉฌ้ๅฑๅๅฝ',
u'้ ่จญ': u'้ป่ฎค',
u'ๆป้ผ ': u'้ผ ๆ ',
})
# end of file: langconv/defaulttables/zh_cn.py (AdvancedLangConv-0.01)
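
# A tiny, self-contained illustration (my own sketch, not package code) of
# the copy-then-update layering these table modules rely on: keys repeated
# in update() override the inherited base, and untouched keys fall through.
_base = {u'alpha': u'A', u'beta': u'B'}
_layer = _base.copy()
_layer.update({u'beta': u'B-regional'})        # the regional override wins
assert _layer == {u'alpha': u'A', u'beta': u'B-regional'}

# next file (filename not shown in this excerpt; it layers on zh_hant):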
from zh_hant import convtable as oldtable
convtable = oldtable.copy()
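# Unlike zh_cn above, which copies the simplified zh_hans base, this table
# copies the traditional zh_hant base and overlays regional word choices in
# the update() below; judging by the renderings it contains, it is plausibly
# the Hong Kong table, though its filename is not shown in this excerpt.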
convtable.update({
u'โ': u'ใ',
u'โ': u'ใ',
u'โ': u'ใ',
u'โ': u'ใ',
u'ไธๆฅต้ซ': u'ไธๆฅต็ฎก',
u'ไธ่็่ทก': u'ไธ็็่ทก',
u'ไธ่้้': u'ไธ็้้',
u'ไธ็่ฃก': u'ไธ็่ฃ',
u'ไธ็้': u'ไธ็่ฃ',
u'ไธญๆ้': u'ไธญๆ่ฃ',
u'ไธญๆ่ฃก': u'ไธญๆ่ฃ',
u'ๆฐไน': u'ไธญๆจ',
u'ๅไน': u'ไธญๆจ',
u'ๆฅๅพท': u'ไนๅพ',
u'ไน่': u'ไน็',
u'ไน่ไฝ': u'ไน่ไฝ',
u'ไน่ๅ': u'ไน่ๅ',
u'ไน่ๆธ': u'ไน่ๆธ',
u'ไน่็จฑ': u'ไน่็จฑ',
u'ไน่่
': u'ไน่่
',
u'ไน่่ฟฐ': u'ไน่่ฟฐ',
u'ไน่้': u'ไน่้',
u'่้': u'ไน้',
u'ไบๆฅต้ซ': u'ไบๆฅต็ฎก',
u'็ถฒ้็ถฒ่ทฏ': u'ไบ่ฏ็ถฒ',
u'ๅ ็น็ฝ': u'ไบ่ฏ็ถฒ',
u'ไบฎ่': u'ไบฎ็',
u'ไบฎ่ไฝ': u'ไบฎ่ไฝ',
u'ไบฎ่ๅ': u'ไบฎ่ๅ',
u'ไบฎ่ๆธ': u'ไบฎ่ๆธ',
u'ไบฎ่็จฑ': u'ไบฎ่็จฑ',
u'ไบฎ่่
': u'ไบฎ่่
',
u'ไบฎ่่ฟฐ': u'ไบฎ่่ฟฐ',
u'ไบฎ่้': u'ไบฎ่้',
u'ไบบๅทฅๆบๆ
ง': u'ไบบๅทฅๆบ่ฝ',
u'ไป่': u'ไป็',
u'ไป่ไฝ': u'ไป่ไฝ',
u'ไป่ๅ': u'ไป่ๅ',
u'ไป่ๆธ': u'ไป่ๆธ',
u'ไป่็จฑ': u'ไป่็จฑ',
u'ไป่่
': u'ไป่่
',
u'ไป่่ฟฐ': u'ไป่่ฟฐ',
u'ไป่้': u'ไป่้',
u'ไปฃ่กจ่': u'ไปฃ่กจ็',
u'ไปฃ่กจ่ไฝ': u'ไปฃ่กจ่ไฝ',
u'ไปฃ่กจ่ๅ': u'ไปฃ่กจ่ๅ',
u'ไปฃ่กจ่ๆธ': u'ไปฃ่กจ่ๆธ',
u'ไปฃ่กจ่็จฑ': u'ไปฃ่กจ่็จฑ',
u'ไปฃ่กจ่่
': u'ไปฃ่กจ่่
',
u'ไปฃ่กจ่่ฟฐ': u'ไปฃ่กจ่่ฟฐ',
u'ไปฃ่กจ่้': u'ไปฃ่กจ่้',
u'่ฒ้ๆฏ': u'ไผฏๅฉ่ฒ',
u'ไผด่': u'ไผด็',
u'ไผด่ไฝ': u'ไผด่ไฝ',
u'ไผด่ๅ': u'ไผด่ๅ',
u'ไผด่ๆธ': u'ไผด่ๆธ',
u'ไผด่็จฑ': u'ไผด่็จฑ',
u'ไผด่่
': u'ไผด่่
',
u'ไผด่่ฟฐ': u'ไผด่่ฟฐ',
u'ไผด่้': u'ไผด่้',
u'ๅญ็ฏ': u'ไฝๅ็ต',
u'ๅญ่': u'ไฝๅ็ต',
u'ไฝ่': u'ไฝ็',
u'ไฝ่ไฝ': u'ไฝ่ไฝ',
u'ไฝ่ๅ': u'ไฝ่ๅ',
u'ไฝ่ๆธ': u'ไฝ่ๆธ',
u'ไฝ่็จฑ': u'ไฝ่็จฑ',
u'ไฝ่่
': u'ไฝ่่
',
u'ไฝ่่ฟฐ': u'ไฝ่่ฟฐ',
u'ไฝ่้': u'ไฝ่้',
u'ไฝ่': u'ไฝ็',
u'ไฝ่ไฝ': u'ไฝ่ไฝ',
u'ไฝ่ๅ': u'ไฝ่ๅ',
u'ไฝ่ๆธ': u'ไฝ่ๆธ',
u'ไฝ่็จฑ': u'ไฝ่็จฑ',
u'ไฝ่่
': u'ไฝ่่
',
u'ไฝ่่ฟฐ': u'ไฝ่่ฟฐ',
u'ไฝ่้': u'ไฝ่้',
u'็ถญๅพท่ง': u'ไฝๅพ่ง',
u'ไฝๅ่ฃก': u'ไฝๅ่ฃ',
u'ไฝๅ้': u'ไฝๅ่ฃ',
u'ไพ่': u'ไพ็',
u'ไพ่ไฝ': u'ไพ่ไฝ',
u'ไพ่ๅ': u'ไพ่ๅ',
u'ไพ่ๆธ': u'ไพ่ๆธ',
u'ไพ่็จฑ': u'ไพ่็จฑ',
u'ไพ่่
': u'ไพ่่
',
u'ไพ่่ฟฐ': u'ไพ่่ฟฐ',
u'ไพ่้': u'ไพ่้',
u'ๆตท็': u'ไพฏ่ณฝๅ ',
u'ไฟ้่': u'ไฟ้็',
u'ไฟ้่ไฝ': u'ไฟ้่ไฝ',
u'ไฟ้่ๅ': u'ไฟ้่ๅ',
u'ไฟ้่ๆธ': u'ไฟ้่ๆธ',
u'ไฟ้่็จฑ': u'ไฟ้่็จฑ',
u'ไฟ้่่
': u'ไฟ้่่
',
u'ไฟ้่่ฟฐ': u'ไฟ้่่ฟฐ',
u'ไฟ้่้': u'ไฟ้่้',
u'ไฟก่': u'ไฟก็',
u'ไฟก่ไฝ': u'ไฟก่ไฝ',
u'ไฟก่ๅ': u'ไฟก่ๅ',
u'ไฟก่ๆธ': u'ไฟก่ๆธ',
u'ไฟก่็จฑ': u'ไฟก่็จฑ',
u'ไฟก่่
': u'ไฟก่่
',
u'ไฟก่่ฟฐ': u'ไฟก่่ฟฐ',
u'ไฟก่้': u'ไฟก่้',
u'ๅ่': u'ๅ็',
u'ๅ่ไฝ': u'ๅ่ไฝ',
u'ๅ่ๅ': u'ๅ่ๅ',
u'ๅ่ๆธ': u'ๅ่ๆธ',
u'ๅ่็จฑ': u'ๅ่็จฑ',
u'ๅ่่
': u'ๅ่่
',
u'ๅ่่ฟฐ': u'ๅ่่ฟฐ',
u'ๅ่้': u'ๅ่้',
u'ๅ่': u'ๅ็',
u'ๅ่ไฝ': u'ๅ่ไฝ',
u'ๅ่ๅ': u'ๅ่ๅ',
u'ๅ่ๆธ': u'ๅ่ๆธ',
u'ๅ่็จฑ': u'ๅ่็จฑ',
u'ๅ่่
': u'ๅ่่
',
u'ๅ่่ฟฐ': u'ๅ่่ฟฐ',
u'ๅ่้': u'ๅ่้',
u'ๅ่': u'ๅ็',
u'ๅ่ไฝ': u'ๅ่ไฝ',
u'ๅ่ๅ': u'ๅ่ๅ',
u'ๅ่ๆธ': u'ๅ่ๆธ',
u'ๅ่็จฑ': u'ๅ่็จฑ',
u'ๅ่่
': u'ๅ่่
',
u'ๅ่่ฟฐ': u'ๅ่่ฟฐ',
u'ๅ่้': u'ๅ่้',
u'ๅด่': u'ๅด็',
u'ๅด่ไฝ': u'ๅด่ไฝ',
u'ๅด่ๅ': u'ๅด่ๅ',
u'ๅด่ๆธ': u'ๅด่ๆธ',
u'ๅด่็จฑ': u'ๅด่็จฑ',
u'ๅด่่
': u'ๅด่่
',
u'ๅด่่ฟฐ': u'ๅด่่ฟฐ',
u'ๅด่้': u'ๅด่้',
u'ๅท่': u'ๅท็',
u'ๅท่ไฝ': u'ๅท่ไฝ',
u'ๅท่ๅ': u'ๅท่ๅ',
u'ๅท่ๆธ': u'ๅท่ๆธ',
u'ๅท่็จฑ': u'ๅท่็จฑ',
u'ๅท่่
': u'ๅท่่
',
u'ๅท่่ฟฐ': u'ๅท่่ฟฐ',
u'ๅท่้': u'ๅท่้',
u'ๅ่': u'ๅ็',
u'ๅ่ไฝ': u'ๅ่ไฝ',
u'ๅ่ๅ': u'ๅ่ๅ',
u'ๅ่ๆธ': u'ๅ่ๆธ',
u'ๅ่็จฑ': u'ๅ่็จฑ',
u'ๅ่่
': u'ๅ่่
',
u'ๅ่่ฟฐ': u'ๅ่่ฟฐ',
u'ๅ่้': u'ๅ่้',
u'ๅถๆฎ': u'ๅๆฎ',
u'ๅถๆฎบ': u'ๅๆฎบ',
u'้ช้้พ': u'ๅ้ฒ',
u'้ช้ต้พ': u'ๅ้ฒ',
u'ๅ่': u'ๅ็',
u'ๅ่ไฝ': u'ๅ่ไฝ',
u'ๅ่ๅ': u'ๅ่ๅ',
u'ๅ่ๆธ': u'ๅ่ๆธ',
u'ๅ่็จฑ': u'ๅ่็จฑ',
u'ๅ่่': u'ๅ่่',
u'ๅ่่ฟฐ': u'ๅ่่ฟฐ',
u'ๅ่้': u'ๅ่้',
u'ๆฏๆ้ ': u'ๅๆ้ ',
u'ๅ็พๅ่ฅฟไบ': u'ๅ็พๅฐไบ',
u'ๅฌ่ปไธๆธ': u'ๅฌ่ปไธๆธ',
u'ๅ่': u'ๅ็',
u'ๅ่ไฝ': u'ๅ่ไฝ',
u'ๅ่ๅ': u'ๅ่ๅ',
u'ๅ่ๆธ': u'ๅ่ๆธ',
u'ๅ่็จฑ': u'ๅ่็จฑ',
u'ๅ่่
': u'ๅ่่
',
u'ๅ่่ฟฐ': u'ๅ่่ฟฐ',
u'ๅ่้': u'ๅ่้',
u'ๅ่': u'ๅ็',
u'ๅ่ไฝ': u'ๅ่ไฝ',
u'ๅ่ๅ': u'ๅ่ๅ',
u'ๅ่ๆธ': u'ๅ่ๆธ',
u'ๅ่็จฑ': u'ๅ่็จฑ',
u'ๅ่่
': u'ๅ่่
',
u'ๅ่่ฟฐ': u'ๅ่่ฟฐ',
u'ๅ่้': u'ๅ่้',
u'ๅฌๅคฉ้': u'ๅฌๅคฉ่ฃ',
u'ๅฌๅคฉ่ฃก': u'ๅฌๅคฉ่ฃ',
u'ๅฌๆฅ่ฃก': u'ๅฌๆฅ่ฃ',
u'ๅฌๆฅ้': u'ๅฌๆฅ่ฃ',
u'ๅๅธ': u'ๅไฝ',
u'ๅๅธๆผ': u'ๅไฝๆผ',
u'ๅๅธไบ': u'ๅไฝๆผ',
u'ๅๆฏๆฆๆฏ็ป': u'ๅๆฏๆฆๅฃซ็ป',
u'่ณดๆฏ็ไบ': u'ๅฉๆฏ้ไบ',
u'ๅถ่': u'ๅถ็',
u'ๅถ่ไฝ': u'ๅถ่ไฝ',
u'ๅถ่ๅ': u'ๅถ่ๅ',
u'ๅถ่ๆธ': u'ๅถ่ๆธ',
u'ๅถ่็จฑ': u'ๅถ่็จฑ',
u'ๅถ่่
': u'ๅถ่่
',
u'ๅถ่่ฟฐ': u'ๅถ่่ฟฐ',
u'ๅถ่้': u'ๅถ่้',
u'ๅป่': u'ๅป็',
u'ๅป่ไฝ': u'ๅป่ไฝ',
u'ๅป่ๅ': u'ๅป่ๅ',
u'ๅป่ๆธ': u'ๅป่ๆธ',
u'ๅป่็จฑ': u'ๅป่็จฑ',
u'ๅป่่
': u'ๅป่่
',
u'ๅป่่ฟฐ': u'ๅป่่ฟฐ',
u'ๅป่้': u'ๅป่้',
u'่ฟฆ็ด': u'ๅ ็ด',
u'ๅ ๅฝญ': u'ๅ ่ฌ',
u'ๅชๅ่': u'ๅชๅ็',
u'ๅชๅ่ไฝ': u'ๅชๅ่ไฝ',
u'ๅชๅ่ๅ': u'ๅชๅ่ๅ',
u'ๅชๅ่ๆธ': u'ๅชๅ่ๆธ',
u'ๅชๅ่็จฑ': u'ๅชๅ่็จฑ',
u'ๅชๅ่่
': u'ๅชๅ่่
',
u'ๅชๅ่่ฟฐ': u'ๅชๅ่่ฟฐ',
u'ๅชๅ่้': u'ๅชๅ่้',
u'ๅช่': u'ๅช็',
u'ๅช่ไฝ': u'ๅช่ไฝ',
u'ๅช่ๅ': u'ๅช่ๅ',
u'ๅช่ๆธ': u'ๅช่ๆธ',
u'ๅช่็จฑ': u'ๅช่็จฑ',
u'ๅช่่
': u'ๅช่่
',
u'ๅช่่ฟฐ': u'ๅช่่ฟฐ',
u'ๅช่้': u'ๅช่้',
u'ๅ่': u'ๅ็',
u'ๅ่ไฝ': u'ๅ่ไฝ',
u'ๅ่ๅ': u'ๅ่ๅ',
u'ๅ่ๆธ': u'ๅ่ๆธ',
u'ๅ่็จฑ': u'ๅ่็จฑ',
u'ๅ่่
': u'ๅ่่
',
u'ๅ่่ฟฐ': u'ๅ่่ฟฐ',
u'ๅ่้': u'ๅ่้',
u'ๅป้ข้': u'ๅป้ข่ฃ',
u'ๆณขๆญ้ฃ': u'ๅ่จ็ฆ็ด',
u'็ๅฆฎๅผยทๅกๆฎ้ไบ่': u'ๅกไฝฉ้
่',
u'ๅฐ่': u'ๅฐ็',
u'ๅฐ่ไฝ': u'ๅฐ่ไฝ',
u'ๅฐ่ๅ': u'ๅฐ่ๅ',
u'ๅฐ่ๆธ': u'ๅฐ่ๆธ',
u'ๅฐ่็จฑ': u'ๅฐ่็จฑ',
u'ๅฐ่่
': u'ๅฐ่่
',
u'ๅฐ่่ฟฐ': u'ๅฐ่่ฟฐ',
u'ๅฐ่้': u'ๅฐ่้',
u'็ๅฐ้ฆฌๆ': u'ๅฑๅฐ้ฆฌๆ',
u'ๆณก้บต': u'ๅณ้ฃ้บต',
u'ๆนไพฟ้ข': u'ๅณ้ฃ้บต',
u'ๅฟซ้้ข': u'ๅณ้ฃ้บต',
u'้้ฃ้บต': u'ๅณ้ฃ้บต',
u'ๅ็ๅค': u'ๅ็ๅค็พ',
u'ๅ็ๅค็พ': u'ๅ็ๅค็พ',
u'ๅ็ๅคๅฐ': u'ๅ็ๅค็พ',
u'ๅๅฉๅไบ': u'ๅ็ซ็น้ไบ',
u'ๅป่': u'ๅป็',
u'ๅป่ไฝ': u'ๅป่ไฝ',
u'ๅป่ๅ': u'ๅป่ๅ',
u'ๅป่ๆธ': u'ๅป่ๆธ',
u'ๅป่็จฑ': u'ๅป่็จฑ',
u'ๅป่่
': u'ๅป่่
',
u'ๅป่่ฟฐ': u'ๅป่่ฟฐ',
u'ๅป่้': u'ๅป่้',
u'ๅ่': u'ๅ็',
u'ๅ่ไฝ': u'ๅ่ไฝ',
u'ๅ่ๅ': u'ๅ่ๅ',
u'ๅ่ๆธ': u'ๅ่ๆธ',
u'ๅ่็จฑ': u'ๅ่็จฑ',
u'ๅ่่
': u'ๅ่่
',
u'ๅ่่ฟฐ': u'ๅ่่ฟฐ',
u'ๅ่้': u'ๅ่้',
u'ๅซ่': u'ๅซ็',
u'ๅซ่ไฝ': u'ๅซ่ไฝ',
u'ๅซ่ๅ': u'ๅซ่ๅ',
u'ๅซ่ๆธ': u'ๅซ่ๆธ',
u'ๅซ่็จฑ': u'ๅซ่็จฑ',
u'ๅซ่่
': u'ๅซ่่
',
u'ๅซ่่ฟฐ': u'ๅซ่่ฟฐ',
u'ๅซ่้': u'ๅซ่้',
u'ๅฑๅ': u'ๅฑๅค',
u'ๅฑๅค': u'ๅฑๅค',
u'ๅไธ่': u'ๅไธ็',
u'ๅๅพ่': u'ๅๅพ็',
u'ๅ่': u'ๅ็',
u'ๅๅธๅฐ': u'ๅๅธๅ ค',
u'ๅ่': u'ๅ็',
u'ๅ่ไฝ': u'ๅ่ไฝ',
u'ๅ่ๅ': u'ๅ่ๅ',
u'ๅ่ๆธ': u'ๅ่ๆธ',
u'ๅ่็จฑ': u'ๅ่็จฑ',
u'ๅ่่
': u'ๅ่่
',
u'ๅ่่ฟฐ': u'ๅ่่ฟฐ',
u'ๅ่้': u'ๅ่้',
u'ๅซ่': u'ๅซ็',
u'ๅซ่ไฝ': u'ๅซ่ไฝ',
u'ๅซ่ๅ': u'ๅซ่ๅ',
u'ๅซ่ๆธ': u'ๅซ่ๆธ',
u'ๅซ่็จฑ': u'ๅซ่็จฑ',
u'ๅซ่่
': u'ๅซ่่
',
u'ๅซ่่ฟฐ': u'ๅซ่่ฟฐ',
u'ๅซ่้': u'ๅซ่้',
u'ๅน่': u'ๅน็',
u'ๅน่ไฝ': u'ๅน่ไฝ',
u'ๅน่ๅ': u'ๅน่ๅ',
u'ๅน่ๆธ': u'ๅน่ๆธ',
u'ๅน่็จฑ': u'ๅน่็จฑ',
u'ๅน่่
': u'ๅน่่
',
u'ๅน่่ฟฐ': u'ๅน่่ฟฐ',
u'ๅน่้': u'ๅน่้',
u'ๅณ่': u'ๅณ็',
u'ๅณ่ไฝ': u'ๅณ่ไฝ',
u'ๅณ่ๅ': u'ๅณ่ๅ',
u'ๅณ่ๆธ': u'ๅณ่ๆธ',
u'ๅณ่็จฑ': u'ๅณ่็จฑ',
u'ๅณ่่
': u'ๅณ่่
',
u'ๅณ่่ฟฐ': u'ๅณ่่ฟฐ',
u'ๅณ่้': u'ๅณ่้',
u'ๅค': u'ๅค',
u'ๅฅๆฏๅคง้ปๅ ': u'ๅฅๆฏ้้ปๅ ',
u'ๅญ่': u'ๅญ็',
u'ๅญ่ไฝ': u'ๅญ่ไฝ',
u'ๅญ่ๅ': u'ๅญ่ๅ',
u'ๅญ่ๆธ': u'ๅญ่ๆธ',
u'ๅญ่็จฑ': u'ๅญ่็จฑ',
u'ๅญ่่
': u'ๅญ่่
',
u'ๅญ่่ฟฐ': u'ๅญ่่ฟฐ',
u'ๅญ่้': u'ๅญ่้',
u'ๅฑ่': u'ๅฑ็',
u'ๅฑ่ไฝ': u'ๅฑ่ไฝ',
u'ๅฑ่ๅ': u'ๅฑ่ๅ',
u'ๅฑ่ๆธ': u'ๅฑ่ๆธ',
u'ๅฑ่็จฑ': u'ๅฑ่็จฑ',
u'ๅฑ่่
': u'ๅฑ่่
',
u'ๅฑ่่ฟฐ': u'ๅฑ่่ฟฐ',
u'ๅฑ่้': u'ๅฑ่้',
u'ๅ่': u'ๅ็',
u'ๅ่ไฝ': u'ๅ่ไฝ',
u'ๅ่ๅ': u'ๅ่ๅ',
u'ๅ่ๆธ': u'ๅ่ๆธ',
u'ๅ่็จฑ': u'ๅ่็จฑ',
u'ๅ่่
': u'ๅ่่
',
u'ๅ่่ฟฐ': u'ๅ่่ฟฐ',
u'ๅ่้': u'ๅ่้',
u'่ช่ก่ฝฆ': u'ๅฎ่ป',
u'ๅไธ่': u'ๅไธ็',
u'ๅๅพ่': u'ๅๅพ็',
u'ๅ่': u'ๅ็',
u'ๅด้': u'ๅด่ฃ',
u'ๅด่ฃก': u'ๅด่ฃ',
u'ๅท่': u'ๅท็',
u'ๅท่ไฝ': u'ๅท่ไฝ',
u'ๅท่ๅ': u'ๅท่ๅ',
u'ๅท่ๆธ': u'ๅท่ๆธ',
u'ๅท่็จฑ': u'ๅท่็จฑ',
u'ๅท่่
': u'ๅท่่
',
u'ๅท่่ฟฐ': u'ๅท่่ฟฐ',
u'ๅท่้': u'ๅท่้',
u'ๅ ่': u'ๅ ็',
u'ๅ ่ไฝ': u'ๅ ่ไฝ',
u'ๅ ่ๅ': u'ๅ ่ๅ',
u'ๅ ่ๆธ': u'ๅ ่ๆธ',
u'ๅ ่็จฑ': u'ๅ ่็จฑ',
u'ๅ ่่
': u'ๅ ่่
',
u'ๅ ่่ฟฐ': u'ๅ ่่ฟฐ',
u'ๅ ่้': u'ๅ ่้',
u'ๅฐ่': u'ๅฐ็',
u'ๅฐ่ไฝ': u'ๅฐ่ไฝ',
u'ๅฐ่ๅ': u'ๅฐ่ๅ',
u'ๅฐ่ๆธ': u'ๅฐ่ๆธ',
u'ๅฐ่็จฑ': u'ๅฐ่็จฑ',
u'ๅฐ่่
': u'ๅฐ่่
',
u'ๅฐ่่ฟฐ': u'ๅฐ่่ฟฐ',
u'ๅฐ่้': u'ๅฐ่้',
u'ๅ่': u'ๅ็',
u'ๅ่ไฝ': u'ๅ่ไฝ',
u'ๅ่ๅ': u'ๅ่ๅ',
u'ๅ่ๆธ': u'ๅ่ๆธ',
u'ๅ่็จฑ': u'ๅ่็จฑ',
u'ๅ่่
': u'ๅ่่
',
u'ๅ่่ฟฐ': u'ๅ่่ฟฐ',
u'ๅ่้': u'ๅ่้',
u'ๅ็ฆ้ญฏ': u'ๅ็ฆ็ง',
u'ๅ่ฑ็ถฒ': u'ๅ่ฑ็ถฒ',
u'ๅ่ฑ็ฝ': u'ๅ่ฑ็ถฒ',
u'ๅจ่': u'ๅจ็',
u'ๅจ่ไฝ': u'ๅจ่ไฝ',
u'ๅจ่ๅ': u'ๅจ่ๅ',
u'ๅจ่ๆธ': u'ๅจ่ๆธ',
u'ๅจ่็จฑ': u'ๅจ่็จฑ',
u'ๅจ่่
': u'ๅจ่่
',
u'ๅจ่่ฟฐ': u'ๅจ่่ฟฐ',
u'ๅจ่้': u'ๅจ่้',
u'่ไบ้ฃ': u'ๅญไบ้ฃ',
u'ๅ่': u'ๅ็',
u'ๅ่ไฝ': u'ๅ่ไฝ',
u'ๅ่ๅ': u'ๅ่ๅ',
u'ๅ่ๆธ': u'ๅ่ๆธ',
u'ๅ่็จฑ': u'ๅ่็จฑ',
u'ๅ่่
': u'ๅ่่
',
u'ๅ่่ฟฐ': u'ๅ่่ฟฐ',
u'ๅ่้': u'ๅ่้',
u'ๅฆๅฐๅฐผไบ': u'ๅฆๆกๅฐผไบ',
u'่กฃ็ดขๅนไบ': u'ๅๅกไฟๆฏไบ',
u'่กฃ็ดขๆฏไบ': u'ๅๅกไฟๆฏไบ',
u'ๅ้ๅทดๆฏ': u'ๅบ้ๅทดๆฏ',
u'ๅกๆฎๅๆฏ': u'ๅกๆตฆ่ทฏๆฏ',
u'ๅกๅธญ็พ': u'ๅก่็พ',
u'ๅฃ่': u'ๅฃ็',
u'ๅฃ่ไฝ': u'ๅฃ่ไฝ',
u'ๅฃ่ๅ': u'ๅฃ่ๅ',
u'ๅฃ่ๆธ': u'ๅฃ่ๆธ',
u'ๅฃ่็จฑ': u'ๅฃ่็จฑ',
u'ๅฃ่่': u'ๅฃ่่',
u'ๅฃ่่ฟฐ': u'ๅฃ่่ฟฐ',
u'ๅฃ่้': u'ๅฃ่้',
u'ๅคๅคฉ้': u'ๅคๅคฉ่ฃ',
u'ๅคๅคฉ่ฃก': u'ๅคๅคฉ่ฃ',
u'ๅคๆฅ้': u'ๅคๆฅ่ฃ',
u'ๅคๆฅ่ฃก': u'ๅคๆฅ่ฃ',
u'ๅคข่': u'ๅคข็',
u'ๅคข่ไฝ': u'ๅคข่ไฝ',
u'ๅคข่ๅ': u'ๅคข่ๅ',
u'ๅคข่ๆธ': u'ๅคข่ๆธ',
u'ๅคข่็จฑ': u'ๅคข่็จฑ',
u'ๅคข่่': u'ๅคข่่',
u'ๅคข่่ฟฐ': u'ๅคข่่ฟฐ',
u'ๅคข่้': u'ๅคข่้',
u'ๅคงๅซยท่ดๅๆฑๅง': u'ๅคง่ก็ขงๅธ',
u'ๅคพ่': u'ๅคพ็',
u'ๅคพ่ไฝ': u'ๅคพ่ไฝ',
u'ๅคพ่ๅ': u'ๅคพ่ๅ',
u'ๅคพ่ๆธ': u'ๅคพ่ๆธ',
u'ๅคพ่็จฑ': u'ๅคพ่็จฑ',
u'ๅคพ่่': u'ๅคพ่่',
u'ๅคพ่่ฟฐ': u'ๅคพ่่ฟฐ',
u'ๅคพ่้': u'ๅคพ่้',
u'ๅญค่': u'ๅญค็',
u'ๅญค่ไฝ': u'ๅญค่ไฝ',
u'ๅญค่ๅ': u'ๅญค่ๅ',
u'ๅญค่ๆธ': u'ๅญค่ๆธ',
u'ๅญค่็จฑ': u'ๅญค่็จฑ',
u'ๅญค่่': u'ๅญค่่',
u'ๅญค่่ฟฐ': u'ๅญค่่ฟฐ',
u'ๅญค่้': u'ๅญค่้',
u'ๅญธ่': u'ๅญธ็',
u'ๅญธ่ไฝ': u'ๅญธ่ไฝ',
u'ๅญธ่ๅ': u'ๅญธ่ๅ',
u'ๅญธ่ๆธ': u'ๅญธ่ๆธ',
u'ๅญธ่็จฑ': u'ๅญธ่็จฑ',
u'ๅญธ่่': u'ๅญธ่่',
u'ๅญธ่่ฟฐ': u'ๅญธ่่ฟฐ',
u'ๅญธ่้': u'ๅญธ่้',
u'ๅญธ่ฃก': u'ๅญธ่ฃ',
u'ๅญฆ้': u'ๅญธ่ฃ',
u'ๅฎ่': u'ๅฎ็',
u'ๅฎ่ไฝ': u'ๅฎ่ไฝ',
u'ๅฎ่ๅ': u'ๅฎ่ๅ',
u'ๅฎ่ๆธ': u'ๅฎ่ๆธ',
u'ๅฎ่็จฑ': u'ๅฎ่็จฑ',
u'ๅฎ่่': u'ๅฎ่่',
u'ๅฎ่่ฟฐ': u'ๅฎ่่ฟฐ',
u'ๅฎ่้': u'ๅฎ่้',
u'ๅฎๅฐๅกๅๅทดๅธ้': u'ๅฎๆ็ๅๅทดๅธ้',
u'ๅฎ่': u'ๅฎ็',
u'ๅฎ่ไฝ': u'ๅฎ่ไฝ',
u'ๅฎ่ๅ': u'ๅฎ่ๅ',
u'ๅฎ่ๆธ': u'ๅฎ่ๆธ',
u'ๅฎ่็จฑ': u'ๅฎ่็จฑ',
u'ๅฎ่่': u'ๅฎ่่',
u'ๅฎ่่ฟฐ': u'ๅฎ่่ฟฐ',
u'ๅฎ่้': u'ๅฎ่้',
u'ๆฒๅฐๆฒ': u'ๅฏ่ฑช',
u'ๅฏๅ่ฃก': u'ๅฏๅ่ฃ',
u'ๅฏๅ้': u'ๅฏๅ่ฃ',
u'ๅฏซ่': u'ๅฏซ็',
u'ๅฏซ่ไฝ': u'ๅฏซ่ไฝ',
u'ๅฏซ่ๅ': u'ๅฏซ่ๅ',
u'ๅฏซ่ๆธ': u'ๅฏซ่ๆธ',
u'ๅฏซ่็จฑ': u'ๅฏซ่็จฑ',
u'ๅฏซ่่': u'ๅฏซ่่',
u'ๅฏซ่่ฟฐ': u'ๅฏซ่่ฟฐ',
u'ๅฏซ่้': u'ๅฏซ่้',
u'ไธ่พ้': u'ๅฐ่ผฏ่ฃ',
u'ๅฐ่ผฏ่ฃก': u'ๅฐ่ผฏ่ฃ',
u'ๅฐ่': u'ๅฐ็',
u'ๅฐ่ไฝ': u'ๅฐ่ไฝ',
u'ๅฐ่ๅ': u'ๅฐ่ๅ',
u'ๅฐ่ๆธ': u'ๅฐ่ๆธ',
u'ๅฐ่็จฑ': u'ๅฐ่็จฑ',
u'ๅฐ่่': u'ๅฐ่่',
u'ๅฐ่่ฟฐ': u'ๅฐ่่ฟฐ',
u'ๅฐ่้': u'ๅฐ่้',
u'ๅฐ่': u'ๅฐ็',
u'ๅฐ่ไฝ': u'ๅฐ่ไฝ',
u'ๅฐ่ๅ': u'ๅฐ่ๅ',
u'ๅฐ่ๆธ': u'ๅฐ่ๆธ',
u'ๅฐ่็จฑ': u'ๅฐ่็จฑ',
u'ๅฐ่่': u'ๅฐ่่',
u'ๅฐ่่ฟฐ': u'ๅฐ่่ฟฐ',
u'ๅฐ่้': u'ๅฐ่้',
u'ๅฅๅๅฉไบ': u'ๅฐผๆฅๅฉไบ',
u'ๅฐผๆฅๅฉไบ': u'ๅฐผๆฅๅฉไบ',
u'ๅฐผๆฅๅฉไบ': u'ๅฐผๆฅๅฉไบ',
u'ๅฐผๆฅๅฐ': u'ๅฐผๆฅ็พ',
u'ๅฐผๆฅ็พ': u'ๅฐผๆฅ็พ',
u'ๅฐผๆฅ': u'ๅฐผๆฅ็พ',
u'ๅฑ่': u'ๅฑ็',
u'ๅฑ่ไฝ': u'ๅฑ่ไฝ',
u'ๅฑ่ๅ': u'ๅฑ่ๅ',
u'ๅฑ่ๆธ': u'ๅฑ่ๆธ',
u'ๅฑ่็จฑ': u'ๅฑ่็จฑ',
u'ๅฑ่่': u'ๅฑ่่',
u'ๅฑ่่ฟฐ': u'ๅฑ่่ฟฐ',
u'ๅฑ่้': u'ๅฑ่้',
u'ๅฑฑๆด่ฃก': u'ๅฑฑๆด่ฃ',
u'ๅฑฑๆด้': u'ๅฑฑๆด่ฃ',
u'็ๆฏไบ': u'ๅฒกๆฏไบ',
u'ๅฌ่ป': u'ๅทดๅฃซ',
u'ๅทด่ฒๅค': u'ๅทดๅทดๅคๆฏ',
u'ๅทดๅธไบ็ดๅนพๅงไบ': u'ๅทดๅธไบๆฐ็ฟๅงไบ',
u'ๅธๅ็ดๆณ็ดข': u'ๅธๅบ็ดๆณ็ดข',
u'ๅธๅธไบ': u'ๅธๅธไบ',
u'ๅธๅธไบ': u'ๅธๅธไบ',
u'ๅธๅธ': u'ๅธๆฎ',
u'ๅธไป': u'ๅธๆฎ',
u'่ฒ้ๅฐ': u'ๅธ้่ฟช',
u'ๅธ็นๅ': u'ๅธ็นๆ',
u'ๅธๅณ': u'ๅธ็',
u'ๅธถ่': u'ๅธถ็',
u'ๅธถ่ไฝ': u'ๅธถ่ไฝ',
u'ๅธถ่ๅ': u'ๅธถ่ๅ',
u'ๅธถ่ๆธ': u'ๅธถ่ๆธ',
u'ๅธถ่็จฑ': u'ๅธถ่็จฑ',
u'ๅธถ่่': u'ๅธถ่่',
u'ๅธถ่่ฟฐ': u'ๅธถ่่ฟฐ',
u'ๅธถ่้': u'ๅธถ่้',
u'ๅนซ่': u'ๅนซ็',
u'ๅนซ่ไฝ': u'ๅนซ่ไฝ',
u'ๅนซ่ๅ': u'ๅนซ่ๅ',
u'ๅนซ่ๆธ': u'ๅนซ่ๆธ',
u'ๅนซ่็จฑ': u'ๅนซ่็จฑ',
u'ๅนซ่่': u'ๅนซ่่',
u'ๅนซ่่ฟฐ': u'ๅนซ่่ฟฐ',
u'ๅนซ่้': u'ๅนซ่้',
u'ๅนฒ็ๆฅ': u'ๅนฒ็ๆฅ',
u'่ณๅฃซ': u'ๅนณๆฒป',
u'ๅนดไปฃ้': u'ๅนดไปฃ่ฃ',
u'ๅนดไปฃ่ฃก': u'ๅนดไปฃ่ฃ',
u'ๅนน่': u'ๅนน็',
u'ๅนฒ็': u'ๅนน็',
u'ๅนพๅงไบๆฏ็ดข': u'ๅนพๅงไบๆฏ็ดน',
u'ๅบท่': u'ๅบท็',
u'ๅบท่ไฝ': u'ๅบท่ไฝ',
u'ๅบท่ๅ': u'ๅบท่ๅ',
u'ๅบท่ๆธ': u'ๅบท่ๆธ',
u'ๅบท่็จฑ': u'ๅบท่็จฑ',
u'ๅบท่่': u'ๅบท่่',
u'ๅบท่่ฟฐ': u'ๅบท่่ฟฐ',
u'ๅบท่้': u'ๅบท่้',
u'ๅพ่': u'ๅพ็',
u'ๅพ่ไฝ': u'ๅพ่ไฝ',
u'ๅพ่ๅ': u'ๅพ่ๅ',
u'ๅพ่ๆธ': u'ๅพ่ๆธ',
u'ๅพ่็จฑ': u'ๅพ่็จฑ',
u'ๅพ่่': u'ๅพ่่',
u'ๅพ่่ฟฐ': u'ๅพ่่ฟฐ',
u'ๅพ่้': u'ๅพ่้',
u'ๅพ่': u'ๅพ็',
u'ๅพ่ไฝ': u'ๅพ่ไฝ',
u'ๅพ่ๅ': u'ๅพ่ๅ',
u'ๅพ่ๆธ': u'ๅพ่ๆธ',
u'ๅพ่็จฑ': u'ๅพ่็จฑ',
u'ๅพ่่': u'ๅพ่่',
u'ๅพ่่ฟฐ': u'ๅพ่่ฟฐ',
u'ๅพ่้': u'ๅพ่้',
u'ๅพช่': u'ๅพช็',
u'ๅพช่ไฝ': u'ๅพช่ไฝ',
u'ๅพช่ๅ': u'ๅพช่ๅ',
u'ๅพช่ๆธ': u'ๅพช่ๆธ',
u'ๅพช่็จฑ': u'ๅพช่็จฑ',
u'ๅพช่่': u'ๅพช่่',
u'ๅพช่่ฟฐ': u'ๅพช่่ฟฐ',
u'ๅพช่้': u'ๅพช่้',
u'ๅฟ่': u'ๅฟ็',
u'ๅฟ็นซ่': u'ๅฟ็นซ็',
u'ๅฟ่ไฝ': u'ๅฟ่ไฝ',
u'ๅฟ่ๅ': u'ๅฟ่ๅ',
u'ๅฟ่ๆธ': u'ๅฟ่ๆธ',
u'ๅฟ่็จฑ': u'ๅฟ่็จฑ',
u'ๅฟ่่': u'ๅฟ่่',
u'ๅฟ่่ฟฐ': u'ๅฟ่่ฟฐ',
u'ๅฟ่้': u'ๅฟ่้',
u'ๅฟ่ฃก': u'ๅฟ่ฃ',
u'ๅฟ้': u'ๅฟ่ฃ',
u'ๅฟ่': u'ๅฟ็',
u'ๅฟ่ไฝ': u'ๅฟ่ไฝ',
u'ๅฟ่ๅ': u'ๅฟ่ๅ',
u'ๅฟ่ๆธ': u'ๅฟ่ๆธ',
u'ๅฟ่็จฑ': u'ๅฟ่็จฑ',
u'ๅฟ่่': u'ๅฟ่่',
u'ๅฟ่่ฟฐ': u'ๅฟ่่ฟฐ',
u'ๅฟ่้': u'ๅฟ่้',
u'ๅฟ่': u'ๅฟ็',
u'ๅฟ่ไฝ': u'ๅฟ่ไฝ',
u'ๅฟ่ๅ': u'ๅฟ่ๅ',
u'ๅฟ่ๆธ': u'ๅฟ่ๆธ',
u'ๅฟ่็จฑ': u'ๅฟ่็จฑ',
u'ๅฟ่่': u'ๅฟ่่',
u'ๅฟ่่ฟฐ': u'ๅฟ่่ฟฐ',
u'ๅฟ่้': u'ๅฟ่้',
u'ๅฟ่': u'ๅฟ็',
u'ๅฟ่ไฝ': u'ๅฟ่ไฝ',
u'ๅฟ่ๅ': u'ๅฟ่ๅ',
u'ๅฟ่ๆธ': u'ๅฟ่ๆธ',
u'ๅฟ่็จฑ': u'ๅฟ่็จฑ',
u'ๅฟ่่': u'ๅฟ่่',
u'ๅฟ่่ฟฐ': u'ๅฟ่่ฟฐ',
u'ๅฟ่้': u'ๅฟ่้',
u'ๆฅ่': u'ๆฅ็',
u'ๆฅ่ไฝ': u'ๆฅ่ไฝ',
u'ๆฅ่ๅ': u'ๆฅ่ๅ',
u'ๆฅ่ๆธ': u'ๆฅ่ๆธ',
u'ๆฅ่็จฑ': u'ๆฅ่็จฑ',
u'ๆฅ่่': u'ๆฅ่่',
u'ๆฅ่่ฟฐ': u'ๆฅ่่ฟฐ',
u'ๆฅ่้': u'ๆฅ่้',
u'ๆง่': u'ๆง็',
u'ๆง่ไฝ': u'ๆง่ไฝ',
u'ๆง่ๅ': u'ๆง่ๅ',
u'ๆง่ๆธ': u'ๆง่ๆธ',
u'ๆง่็จฑ': u'ๆง่็จฑ',
u'ๆง่่': u'ๆง่่',
u'ๆง่่ฟฐ': u'ๆง่่ฟฐ',
u'ๆง่้': u'ๆง่้',
u'ๆ ่': u'ๆ ็',
u'ๆ ่ไฝ': u'ๆ ่ไฝ',
u'ๆ ่ๅ': u'ๆ ่ๅ',
u'ๆ ่ๆธ': u'ๆ ่ๆธ',
u'ๆ ่็จฑ': u'ๆ ่็จฑ',
u'ๆ ่่': u'ๆ ่่',
u'ๆ ่่ฟฐ': u'ๆ ่่ฟฐ',
u'ๆ ่้': u'ๆ ่้',
u'ๆณ่ฑก': u'ๆณๅ',
u'ๆณ่': u'ๆณ็',
u'ๆณ่ไฝ': u'ๆณ่ไฝ',
u'ๆณ่ๅ': u'ๆณ่ๅ',
u'ๆณ่ๆธ': u'ๆณ่ๆธ',
u'ๆณ่็จฑ': u'ๆณ่็จฑ',
u'ๆณ่่': u'ๆณ่่',
u'ๆณ่่ฟฐ': u'ๆณ่่ฟฐ',
u'ๆณ่้': u'ๆณ่้',
u'็พฉๅคงๅฉ': u'ๆๅคงๅฉ',
u'ๆ่': u'ๆ็',
u'ๆ่ไฝ': u'ๆ่ไฝ',
u'ๆ่ๅ': u'ๆ่ๅ',
u'ๆ่ๆธ': u'ๆ่ๆธ',
u'ๆ่็จฑ': u'ๆ่็จฑ',
u'ๆ่่': u'ๆ่่',
u'ๆ่่ฟฐ': u'ๆ่่ฟฐ',
u'ๆ่้': u'ๆ่้',
u'ๆฃ่': u'ๆฃ็',
u'ๆฃ่ไฝ': u'ๆฃ่ไฝ',
u'ๆฃ่ๅ': u'ๆฃ่ๅ',
u'ๆฃ่ๆธ': u'ๆฃ่ๆธ',
u'ๆฃ่็จฑ': u'ๆฃ่็จฑ',
u'ๆฃ่่': u'ๆฃ่่',
u'ๆฃ่่ฟฐ': u'ๆฃ่่ฟฐ',
u'ๆฃ่้': u'ๆฃ่้',
u'ๆ่': u'ๆ็',
u'ๆ่ไฝ': u'ๆ่ไฝ',
u'ๆ่ๅ': u'ๆ่ๅ',
u'ๆ่ๆธ': u'ๆ่ๆธ',
u'ๆ่็จฑ': u'ๆ่็จฑ',
u'ๆ่่': u'ๆ่่',
u'ๆ่่ฟฐ': u'ๆ่่ฟฐ',
u'ๆ่้': u'ๆ่้',
u'ๆท่': u'ๆท็',
u'ๆท่ไฝ': u'ๆท่ไฝ',
u'ๆท่ๅ': u'ๆท่ๅ',
u'ๆท่ๆธ': u'ๆท่ๆธ',
u'ๆท่็จฑ': u'ๆท่็จฑ',
u'ๆท่่': u'ๆท่่',
u'ๆท่่ฟฐ': u'ๆท่่ฟฐ',
u'ๆท่้': u'ๆท่้',
u'ๆ่': u'ๆ็',
u'ๆ่ไฝ': u'ๆ่ไฝ',
u'ๆ่ๅ': u'ๆ่ๅ',
u'ๆ่ๆธ': u'ๆ่ๆธ',
u'ๆ่็จฑ': u'ๆ่็จฑ',
u'ๆ่่': u'ๆ่่',
u'ๆ่่ฟฐ': u'ๆ่่ฟฐ',
u'ๆ่้': u'ๆ่้',
u'ๆฐ่': u'ๆฐ็',
u'ๆฐ่ไฝ': u'ๆฐ่ไฝ',
u'ๆฐ่ๅ': u'ๆฐ่ๅ',
u'ๆฐ่ๆธ': u'ๆฐ่ๆธ',
u'ๆฐ่็จฑ': u'ๆฐ่็จฑ',
u'ๆฐ่่': u'ๆฐ่่',
u'ๆฐ่่ฟฐ': u'ๆฐ่่ฟฐ',
u'ๆฐ่้': u'ๆฐ่้',
u'ๆฒ่ฃก': u'ๆฒ่ฃ',
u'ๆ้': u'ๆฒ่ฃ',
u'้ปๅฎๅจ': u'ๆดๅฎๅจ',
u'็ๅฎๅจ': u'ๆดๅฎๅจ',
u'ๆด่': u'ๆด็',
u'ๆด่ไฝ': u'ๆด่ไฝ',
u'ๆด่ๅ': u'ๆด่ๅ',
u'ๆด่ๆธ': u'ๆด่ๆธ',
u'ๆด่็จฑ': u'ๆด่็จฑ',
u'ๆด่่': u'ๆด่่',
u'ๆด่่ฟฐ': u'ๆด่่ฟฐ',
u'ๆด่้': u'ๆด่้',
u'็ดข็พ้็พคๅณถ': u'ๆ็พ้็พคๅณถ',
u'ๅๅฐ': u'ๆๅฐ',
u'ๅฐ่กจๆฉ': u'ๆๅฐๆฉ',
u'ๆ่': u'ๆ็',
u'ๆ่ไฝ': u'ๆ่ไฝ',
u'ๆ่ๅ': u'ๆ่ๅ',
u'ๆ่ๆธ': u'ๆ่ๆธ',
u'ๆ่็จฑ': u'ๆ่็จฑ',
u'ๆ่่': u'ๆ่่',
u'ๆ่่ฟฐ': u'ๆ่่ฟฐ',
u'ๆ่้': u'ๆ่้',
u'ๆ่': u'ๆ็',
u'ๆ่ไฝ': u'ๆ่ไฝ',
u'ๆ่ๅ': u'ๆ่ๅ',
u'ๆ่ๆธ': u'ๆ่ๆธ',
u'ๆ่็จฑ': u'ๆ่็จฑ',
u'ๆ่่': u'ๆ่่',
u'ๆ่่ฟฐ': u'ๆ่่ฟฐ',
u'ๆ่้': u'ๆ่้',
u'ๆพไธ่': u'ๆพไธ็',
u'ๆพๅพ่': u'ๆพๅพ็',
u'ๆ่': u'ๆ็',
u'ๆ่ไฝ': u'ๆ่ไฝ',
u'ๆ่ๅ': u'ๆ่ๅ',
u'ๆ่็จฑ': u'ๆ่็จฑ',
u'ๆ่่': u'ๆ่่',
u'ๆ่่ฟฐ': u'ๆ่่ฟฐ',
u'ๆ่้': u'ๆ่้',
u'ๆซ่': u'ๆซ็',
u'ๆซ่ไฝ': u'ๆซ่ไฝ',
u'ๆซ่ๅ': u'ๆซ่ๅ',
u'ๆซ่ๆธ': u'ๆซ่ๆธ',
u'ๆซ่็จฑ': u'ๆซ่็จฑ',
u'ๆซ่่': u'ๆซ่่',
u'ๆซ่่ฟฐ': u'ๆซ่่ฟฐ',
u'ๆซ่้': u'ๆซ่้',
u'ๆฌ่': u'ๆฌ็',
u'ๆฌ่ไฝ': u'ๆฌ่ไฝ',
u'ๆฌ่ๅ': u'ๆฌ่ๅ',
u'ๆฌ่็จฑ': u'ๆฌ่็จฑ',
u'ๆฌ่่': u'ๆฌ่่',
u'ๆฌ่่ฟฐ': u'ๆฌ่่ฟฐ',
u'ๆฌ่้': u'ๆฌ่้',
u'ๆฑ่': u'ๆฑ็',
u'ๆฑ่ไฝ': u'ๆฑ่ไฝ',
u'ๆฑ่ๅ': u'ๆฑ่ๅ',
u'ๆฑ่็จฑ': u'ๆฑ่็จฑ',
u'ๆฑ่่': u'ๆฑ่่',
u'ๆฑ่่ฟฐ': u'ๆฑ่่ฟฐ',
u'ๆฑ่้': u'ๆฑ่้',
u'ๆ่': u'ๆ็',
u'ๆ่ไฝ': u'ๆ่ไฝ',
u'ๆ่ๅ': u'ๆ่ๅ',
u'ๆ่ๆธ': u'ๆ่ๆธ',
u'ๆ่็จฑ': u'ๆ่็จฑ',
u'ๆ่่': u'ๆ่่',
u'ๆ่่ฟฐ': u'ๆ่่ฟฐ',
u'ๆ่้': u'ๆ่้',
u'ๆ่': u'ๆ็',
u'ๆ่ไฝ': u'ๆ่ไฝ',
u'ๆ่ๅ': u'ๆ่ๅ',
u'ๆ่็จฑ': u'ๆ่็จฑ',
u'ๆ่่': u'ๆ่่',
u'ๆ่่ฟฐ': u'ๆ่่ฟฐ',
u'ๆ่้': u'ๆ่้',
u'ๆ่': u'ๆ็',
u'ๆ่ไฝ': u'ๆ่ไฝ',
u'ๆ่ๅ': u'ๆ่ๅ',
u'ๆ่็จฑ': u'ๆ่็จฑ',
u'ๆ่่': u'ๆ่่',
u'ๆ่่ฟฐ': u'ๆ่่ฟฐ',
u'ๆ่้': u'ๆ่้',
u'ๆผ่': u'ๆผ็',
u'ๆผ่ไฝ': u'ๆผ่ไฝ',
u'ๆผ่ๅ': u'ๆผ่ๅ',
u'ๆผ่็จฑ': u'ๆผ่็จฑ',
u'ๆผ่่': u'ๆผ่่',
u'ๆผ่่ฟฐ': u'ๆผ่่ฟฐ',
u'ๆผ่้': u'ๆผ่้',
u'ๆฟ่': u'ๆฟ็',
u'ๆฟ็ ดๅด': u'ๆฟ็ ดไพ',
u'ๆฟ่ไฝ': u'ๆฟ่ไฝ',
u'ๆฟ่ๅ': u'ๆฟ่ๅ',
u'ๆฟ่็จฑ': u'ๆฟ่็จฑ',
u'ๆฟ่่': u'ๆฟ่่',
u'ๆฟ่่ฟฐ': u'ๆฟ่่ฟฐ',
u'ๆฟ่้': u'ๆฟ่้',
u'ๆ่': u'ๆ็',
u'ๆ่ไฝ': u'ๆ่ไฝ',
u'ๆ่ๅ': u'ๆ่ๅ',
u'ๆ่็จฑ': u'ๆ่็จฑ',
u'ๆ่่': u'ๆ่่',
u'ๆ่่ฟฐ': u'ๆ่่ฟฐ',
u'ๆ่้': u'ๆ่้',
u'ๆ่': u'ๆ็',
u'ๆ่ไฝ': u'ๆ่ไฝ',
u'ๆ่ๅ': u'ๆ่ๅ',
u'ๆ่็จฑ': u'ๆ่็จฑ',
u'ๆ่่': u'ๆ่่',
u'ๆ่่ฟฐ': u'ๆ่่ฟฐ',
u'ๆ่้': u'ๆ่้',
u'ๆจ่': u'ๆจ็',
u'ๆจ่ไฝ': u'ๆจ่ไฝ',
u'ๆจ่ๅ': u'ๆจ่ๅ',
u'ๆจ่็จฑ': u'ๆจ่็จฑ',
u'ๆจ่่': u'ๆจ่่',
u'ๆจ่่ฟฐ': u'ๆจ่่ฟฐ',
u'ๆจ่้': u'ๆจ่้',
u'ๆ่': u'ๆ็',
u'ๆ่ไฝ': u'ๆ่ไฝ',
u'ๆ่ๅ': u'ๆ่ๅ',
u'ๆ่็จฑ': u'ๆ่็จฑ',
u'ๆ่่': u'ๆ่่',
u'ๆ่่ฟฐ': u'ๆ่่ฟฐ',
u'ๆ่้': u'ๆ่้',
u'ๆ่': u'ๆ็',
u'ๆ่ไฝ': u'ๆ่ไฝ',
u'ๆ่ๅ': u'ๆ่ๅ',
u'ๆ่็จฑ': u'ๆ่็จฑ',
u'ๆ่่': u'ๆ่่',
u'ๆ่่ฟฐ': u'ๆ่่ฟฐ',
u'ๆ่้': u'ๆ่้',
u'ๆ่': u'ๆ็',
u'ๆ่ไฝ': u'ๆ่ไฝ',
u'ๆ่ๅ': u'ๆ่ๅ',
u'ๆ่ๆธ': u'ๆ่ๆธ',
u'ๆ่็จฑ': u'ๆ่็จฑ',
u'ๆ่่': u'ๆ่่',
u'ๆ่่ฟฐ': u'ๆ่่ฟฐ',
u'ๆ่้': u'ๆ่้',
u'ๆ้ค': u'ๆ้',
u'ๆฅ่': u'ๆฅ็',
u'ๆฅ่ไฝ': u'ๆฅ่ไฝ',
u'ๆฅ่ๅ': u'ๆฅ่ๅ',
u'ๆฅ่็จฑ': u'ๆฅ่็จฑ',
u'ๆฅ่่': u'ๆฅ่่',
u'ๆฅ่่ฟฐ': u'ๆฅ่่ฟฐ',
u'ๆฅ่้': u'ๆฅ่้',
u'ๆ่': u'ๆ็',
u'ๆ่ไฝ': u'ๆ่ไฝ',
u'ๆ่ๅ': u'ๆ่ๅ',
u'ๆ่ๆธ': u'ๆ่ๆธ',
u'ๆ่็จฑ': u'ๆ่็จฑ',
u'ๆ่่': u'ๆ่่',
u'ๆ่่ฟฐ': u'ๆ่่ฟฐ',
u'ๆ่้': u'ๆ่้',
u'ๆ่': u'ๆ็',
u'ๆ่ไฝ': u'ๆ่ไฝ',
u'ๆ่ๅ': u'ๆ่ๅ',
u'ๆ่็จฑ': u'ๆ่็จฑ',
u'ๆ่่': u'ๆ่่',
u'ๆ่่ฟฐ': u'ๆ่่ฟฐ',
u'ๆ่้': u'ๆ่้',
u'ๆฎ่': u'ๆฎ็',
u'ๆฎ่ไฝ': u'ๆฎ่ไฝ',
u'ๆฎ่ๅ': u'ๆฎ่ๅ',
u'ๆฎ่็จฑ': u'ๆฎ่็จฑ',
u'ๆฎ่่': u'ๆฎ่่',
u'ๆฎ่่ฟฐ': u'ๆฎ่่ฟฐ',
u'ๆฎ่้': u'ๆฎ่้',
u'ๆ่': u'ๆ็',
u'ๆ่ไฝ': u'ๆ่ไฝ',
u'ๆ่ๅ': u'ๆ่ๅ',
u'ๆ่็จฑ': u'ๆ่็จฑ',
u'ๆ่่': u'ๆ่่',
u'ๆ่่ฟฐ': u'ๆ่่ฟฐ',
u'ๆ่้': u'ๆ่้',
u'ๆผ่': u'ๆผ็',
u'ๆผ่ไฝ': u'ๆผ่ไฝ',
u'ๆผ่ๅ': u'ๆผ่ๅ',
u'ๆผ่ๆธ': u'ๆผ่ๆธ',
u'ๆผ่็จฑ': u'ๆผ่็จฑ',
u'ๆผ่่': u'ๆผ่่',
u'ๆผ่่ฟฐ': u'ๆผ่่ฟฐ',
u'ๆผ่้': u'ๆผ่้',
u'ๆ่': u'ๆ็',
u'ๆ่ไฝ': u'ๆ่ไฝ',
u'ๆ่ๅ': u'ๆ่ๅ',
u'ๆ่็จฑ': u'ๆ่็จฑ',
u'ๆ่่': u'ๆ่่',
u'ๆ่่ฟฐ': u'ๆ่่ฟฐ',
u'ๆ่้': u'ๆ่้',
u'ๆ่': u'ๆ็',
u'ๆ่ไฝ': u'ๆ่ไฝ',
u'ๆ่ๅ': u'ๆ่ๅ',
u'ๆ่ๆธ': u'ๆ่ๆธ',
u'ๆ่็จฑ': u'ๆ่็จฑ',
u'ๆ่่': u'ๆ่่',
u'ๆ่่ฟฐ': u'ๆ่่ฟฐ',
u'ๆ่้': u'ๆ่้',
u'ๆบ่': u'ๆบ็',
u'ๆบ่ไฝ': u'ๆบ่ไฝ',
u'ๆบ่ๅ': u'ๆบ่ๅ',
u'ๆบ่็จฑ': u'ๆบ่็จฑ',
u'ๆบ่่': u'ๆบ่่',
u'ๆบ่่ฟฐ': u'ๆบ่่ฟฐ',
u'ๆบ่้': u'ๆบ่้',
u'ๆไบ้': u'ๆไบ่ฃ',
u'ๆไบ่ฃก': u'ๆไบ่ฃ',
u'ๆ่': u'ๆ็',
u'ๆ่ไฝ': u'ๆ่ไฝ',
u'ๆ่ๅ': u'ๆ่ๅ',
u'ๆ่็จฑ': u'ๆ่็จฑ',
u'ๆ่่': u'ๆ่่',
u'ๆ่่ฟฐ': u'ๆ่่ฟฐ',
u'ๆ่้': u'ๆ่้',
u'ๆธ่': u'ๆธ็',
u'ๆธ่ไฝ': u'ๆธ่ไฝ',
u'ๆธ่ๅ': u'ๆธ่ๅ',
u'ๆธ่็จฑ': u'ๆธ่็จฑ',
u'ๆธ่่': u'ๆธ่่',
u'ๆธ่่ฟฐ': u'ๆธ่่ฟฐ',
u'ๆธ่้': u'ๆธ่้',
u'ๆฅ่': u'ๆฅ็',
u'ๆฅ่ไฝ': u'ๆฅ่ไฝ',
u'ๆฅ่ๅ': u'ๆฅ่ๅ',
u'ๆฅ่ๆธ': u'ๆฅ่ๆธ',
u'ๆฅ่็จฑ': u'ๆฅ่็จฑ',
u'ๆฅ่่': u'ๆฅ่่',
u'ๆฅ่่ฟฐ': u'ๆฅ่่ฟฐ',
u'ๆฅ่้': u'ๆฅ่้',
u'ๅฒ็ฆๆฟ่ญ': u'ๆฏๅจๅฃซ่ญ',
u'ๆฏๆด็ถญๅฐผไบ': u'ๆฏๆดๆๅฐผไบ',
u'ๆฐ่้พ่้': u'ๆฐ่้พ่้',
u'็ด่ฅฟ่ญ': u'ๆฐ่ฅฟ่ญ',
u'ๆฅๅญ้': u'ๆฅๅญ่ฃ',
u'ๆฅๅญ่ฃก': u'ๆฅๅญ่ฃ',
u'ๆ่': u'ๆ็',
u'ๆ่ไฝ': u'ๆ่ไฝ',
u'ๆ่ๅ': u'ๆ่ๅ',
u'ๆ่ๆธ': u'ๆ่ๆธ',
u'ๆ่็จฑ': u'ๆ่็จฑ',
u'ๆ่่': u'ๆ่่',
u'ๆ่่ฟฐ': u'ๆ่่ฟฐ',
u'ๆ่้': u'ๆ่้',
u'ๆ ่': u'ๆ ็',
u'ๆ ่ไฝ': u'ๆ ่ไฝ',
u'ๆ ่ๅ': u'ๆ ่ๅ',
u'ๆ ่ๆธ': u'ๆ ่ๆธ',
u'ๆ ่็จฑ': u'ๆ ่็จฑ',
u'ๆ ่่': u'ๆ ่่',
u'ๆ ่่ฟฐ': u'ๆ ่่ฟฐ',
u'ๆ ่้': u'ๆ ่้',
u'ๆฅๅ้': u'ๆฅๅ่ฃ',
u'ๆฅๅ่ฃก': u'ๆฅๅ่ฃ',
u'ๆฅๅคฉ่ฃก': u'ๆฅๅคฉ่ฃ',
u'ๆฅๅคฉ้': u'ๆฅๅคฉ่ฃ',
u'ๆฅๆฅ่ฃก': u'ๆฅๆฅ่ฃ',
u'ๆฅๆฅ้': u'ๆฅๆฅ่ฃ',
u'ๆถ้ด้': u'ๆ้่ฃ',
u'ๆ้่ฃก': u'ๆ้่ฃ',
u'ๆ่': u'ๆ็',
u'ๆ่ไฝ': u'ๆ่ไฝ',
u'ๆ่ๅ': u'ๆ่ๅ',
u'ๆ่็จฑ': u'ๆ่็จฑ',
u'ๆ่่': u'ๆ่่',
u'ๆ่่ฟฐ': u'ๆ่่ฟฐ',
u'ๆ่้': u'ๆ่้',
u'ๆๅ้': u'ๆๅ่ฃ',
u'ๆๅ่ฃก': u'ๆๅ่ฃ',
u'ๆ่': u'ๆ็',
u'ๆ่ไฝ': u'ๆ่ไฝ',
u'ๆ่ๅ': u'ๆ่ๅ',
u'ๆ่ๆธ': u'ๆ่ๆธ',
u'ๆ่็จฑ': u'ๆ่็จฑ',
u'ๆ่่': u'ๆ่่',
u'ๆ่่ฟฐ': u'ๆ่่ฟฐ',
u'ๆ่้': u'ๆ่้',
u'ๆ่': u'ๆ็',
u'ๆ่ไฝ': u'ๆ่ไฝ',
u'ๆ่ๅ': u'ๆ่ๅ',
u'ๆ่ๆธ': u'ๆ่ๆธ',
u'ๆ่็จฑ': u'ๆ่็จฑ',
u'ๆ่่': u'ๆ่่',
u'ๆ่่ฟฐ': u'ๆ่่ฟฐ',
u'ๆ่้': u'ๆ่้',
u'ๆ่': u'ๆ็',
u'ๆ่ไฝ': u'ๆ่ไฝ',
u'ๆ่ๅ': u'ๆ่ๅ',
u'ๆ่ๆธ': u'ๆ่ๆธ',
u'ๆ่็จฑ': u'ๆ่็จฑ',
u'ๆ่่': u'ๆ่่',
u'ๆ่่ฟฐ': u'ๆ่่ฟฐ',
u'ๆ่้': u'ๆ่้',
u'ๆ่': u'ๆ็',
u'ๆ่ไฝ': u'ๆ่ไฝ',
u'ๆ่ๅ': u'ๆ่ๅ',
u'ๆ่็จฑ': u'ๆ่็จฑ',
u'ๆ่่': u'ๆ่่',
u'ๆ่่ฟฐ': u'ๆ่่ฟฐ',
u'ๆ่้': u'ๆ่้',
u'ๆฌ่': u'ๆฌ็',
u'ๆฌ่ไฝ': u'ๆฌ่ไฝ',
u'ๆฌ่ๅ': u'ๆฌ่ๅ',
u'ๆฌ่ๆธ': u'ๆฌ่ๆธ',
u'ๆฌ่็จฑ': u'ๆฌ่็จฑ',
u'ๆฌ่่': u'ๆฌ่่',
u'ๆฌ่่ฟฐ': u'ๆฌ่่ฟฐ',
u'ๆฌ่้': u'ๆฌ่้',
u'ๆๅญ้': u'ๆๅญ่ฃ',
u'ๆๅญ่ฃก': u'ๆๅญ่ฃ',
u'ๆ่': u'ๆ็',
u'ๆ่ไฝ': u'ๆ่ไฝ',
u'ๆ่ๅ': u'ๆ่ๅ',
u'ๆ่็จฑ': u'ๆ่็จฑ',
u'ๆ่่': u'ๆ่่',
u'ๆ่่ฟฐ': u'ๆ่่ฟฐ',
u'ๆ่้': u'ๆ่้',
u'ๆ ผ็้ฃ้': u'ๆ ผๆ็ด้',
u'ๆ็': u'ๆก็',
u'ๅฐ็': u'ๆก็',
u'ๆขณ่': u'ๆขณ็',
u'ๆขณ่ไฝ': u'ๆขณ่ไฝ',
u'ๆขณ่ๅ': u'ๆขณ่ๅ',
u'ๆขณ่็จฑ': u'ๆขณ่็จฑ',
u'ๆขณ่่': u'ๆขณ่่',
u'ๆขณ่่ฟฐ': u'ๆขณ่่ฟฐ',
u'ๆขณ่้': u'ๆขณ่้',
u'ๆฃฎๆ่ฃก': u'ๆฃฎๆ่ฃ',
u'ๆฃฎๆ้': u'ๆฃฎๆ่ฃ',
u'ๆฃบๆ่ฃก': u'ๆฃบๆ่ฃ',
u'ๆฃบๆ้': u'ๆฃบๆ่ฃ',
u'ๆฆด่ฎ': u'ๆฆดๆงค',
u'ๆฆด่ฒ': u'ๆฆดๆงค',
u'ๆจ่': u'ๆจ็',
u'ๆจ่ไฝ': u'ๆจ่ไฝ',
u'ๆจ่ๅ': u'ๆจ่ๅ',
u'ๆจ่ๆธ': u'ๆจ่ๆธ',
u'ๆจ่็จฑ': u'ๆจ่็จฑ',
u'ๆจ่่': u'ๆจ่่',
u'ๆจ่่ฟฐ': u'ๆจ่่ฟฐ',
u'ๆจ่้': u'ๆจ่้',
u'ๅฏถ็': u'ๆจ่ด',
u'ๆจ่ช่': u'ๆจ่ช็',
u'ๆฉๅจไบบ': u'ๆฉๆขฐไบบ',
u'ๆบๅจไบบ': u'ๆฉๆขฐไบบ',
u'ๅๅฒ้': u'ๆญทๅฒ่ฃ',
u'ๆญทๅฒ่ฃก': u'ๆญทๅฒ่ฃ',
u'ๆฎบ่': u'ๆฎบ็',
u'ๆฎบ่ไฝ': u'ๆฎบ่ไฝ',
u'ๆฎบ่ๅ': u'ๆฎบ่ๅ',
u'ๆฎบ่ๆธ': u'ๆฎบ่ๆธ',
u'ๆฎบ่็จฑ': u'ๆฎบ่็จฑ',
u'ๆฎบ่่': u'ๆฎบ่่',
u'ๆฎบ่่ฟฐ': u'ๆฎบ่่ฟฐ',
u'ๆฎบ่้': u'ๆฎบ่้',
u'่ๅฉๅกๅฐผไบ': u'ๆฏ้ๅกๅฐผไบ',
u'ๆฏ้ๆฑๆฏ': u'ๆฏ้่ฃๆฏ',
u'ๆจก้่ฅฟๆฏ': u'ๆฏ้่ฃๆฏ',
u'ๆฑ่': u'ๆฑ็',
u'ๆฑ่ไฝ': u'ๆฑ่ไฝ',
u'ๆฑ่ๅ': u'ๆฑ่ๅ',
u'ๆฑ่ๆธ': u'ๆฑ่ๆธ',
u'ๆฑ่็จฑ': u'ๆฑ่็จฑ',
u'ๆฑ่่': u'ๆฑ่่',
u'ๆฑ่่ฟฐ': u'ๆฑ่่ฟฐ',
u'ๆฑ่้': u'ๆฑ่้',
u'ๆ่ฑ': u'ๆฑถ่',
u'ๆฒ่': u'ๆฒ็',
u'ๆฒ่ไฝ': u'ๆฒ่ไฝ',
u'ๆฒ่ๅ': u'ๆฒ่ๅ',
u'ๆฒ่ๆธ': u'ๆฒ่ๆธ',
u'ๆฒ่็จฑ': u'ๆฒ่็จฑ',
u'ๆฒ่่': u'ๆฒ่่',
u'ๆฒ่่ฟฐ': u'ๆฒ่่ฟฐ',
u'ๆฒ่้': u'ๆฒ่้',
u'ๆฒๅฐ้ฟๆไผฏ': u'ๆฒ็น้ฟๆไผฏ',
u'ๆฒ็ๅฐ้ฟๆไผฏ': u'ๆฒ็น้ฟๆไผฏ',
u'้ฉฌๆ็นยท่จ่ฌ': u'ๆฒ่ฌ',
u'ๆฒฟ่': u'ๆฒฟ็',
u'ๆฒฟ่ไฝ': u'ๆฒฟ่ไฝ',
u'ๆฒฟ่ๅ': u'ๆฒฟ่ๅ',
u'ๆฒฟ่ๆธ': u'ๆฒฟ่ๆธ',
u'ๆฒฟ่็จฑ': u'ๆฒฟ่็จฑ',
u'ๆฒฟ่่': u'ๆฒฟ่่',
u'ๆฒฟ่่ฟฐ': u'ๆฒฟ่่ฟฐ',
u'ๆฒฟ่้': u'ๆฒฟ่้',
u'ๆณขๅฃซๅฐผไบ่ตซๅกๅฅ็ถญ็ด': u'ๆณขๆฏๅฐผไบ้ปๅกๅฅ็ถญ้ฃ',
u'่พๅทดๅจ': u'ๆดฅๅทดๅธ้',
u'ๅฎ้ฝๆๆฏ': u'ๆดช้ฝๆๆฏ',
u'ๆดป่': u'ๆดป็',
u'ๆดป่ไฝ': u'ๆดป่ไฝ',
u'ๆดป่ๅ': u'ๆดป่ๅ',
u'ๆดป่ๆธ': u'ๆดป่ๆธ',
u'ๆดป่็จฑ': u'ๆดป่็จฑ',
u'ๆดป่่': u'ๆดป่่',
u'ๆดป่่ฟฐ': u'ๆดป่่ฟฐ',
u'ๆดป่้': u'ๆดป่้',
u'่กๅ้ป่ฉฑ': u'ๆตๅ้ป่ฉฑ',
u'็งปๅจ็ต่ฏ': u'ๆตๅ้ป่ฉฑ',
u'ๆต่': u'ๆต็',
u'ๆต่ไฝ': u'ๆต่ไฝ',
u'ๆต่ๅ': u'ๆต่ๅ',
u'ๆต่ๆธ': u'ๆต่ๆธ',
u'ๆต่็จฑ': u'ๆต่็จฑ',
u'ๆต่่': u'ๆต่่',
u'ๆต่่ฟฐ': u'ๆต่่ฟฐ',
u'ๆต่้': u'ๆต่้',
u'ๆต้ฒ่': u'ๆต้ฒ็',
u'ๆตฎ่': u'ๆตฎ็',
u'ๆตฎ่ไฝ': u'ๆตฎ่ไฝ',
u'ๆตฎ่ๅ': u'ๆตฎ่ๅ',
u'ๆตฎ่ๆธ': u'ๆตฎ่ๆธ',
u'ๆตฎ่็จฑ': u'ๆตฎ่็จฑ',
u'ๆตฎ่่': u'ๆตฎ่่',
u'ๆตฎ่่ฟฐ': u'ๆตฎ่่ฟฐ',
u'ๆตฎ่้': u'ๆตฎ่้',
u'ๆถต่': u'ๆถต็',
u'ๆถต่ไฝ': u'ๆถต่ไฝ',
u'ๆถต่ๅ': u'ๆถต่ๅ',
u'ๆถต่ๆธ': u'ๆถต่ๆธ',
u'ๆถต่็จฑ': u'ๆถต่็จฑ',
u'ๆถต่่': u'ๆถต่่',
u'ๆถต่่ฟฐ': u'ๆถต่่ฟฐ',
u'ๆถต่้': u'ๆถต่้',
u'ๆถผ่': u'ๆถผ็',
u'ๆถผ่ไฝ': u'ๆถผ่ไฝ',
u'ๆถผ่ๅ': u'ๆถผ่ๅ',
u'ๆถผ่ๆธ': u'ๆถผ่ๆธ',
u'ๆถผ่็จฑ': u'ๆถผ่็จฑ',
u'ๆถผ่่': u'ๆถผ่่',
u'ๆถผ่่ฟฐ': u'ๆถผ่่ฟฐ',
u'ๆถผ่้': u'ๆถผ่้',
u'ๆทฑๆทต่ฃก': u'ๆทฑๆทต่ฃ',
u'ๆทฑๆธ้': u'ๆทฑๆธ่ฃ',
u'ๆธด่': u'ๆธด็',
u'ๆธด่ไฝ': u'ๆธด่ไฝ',
u'ๆธด่ๅ': u'ๆธด่ๅ',
u'ๆธด่ๆธ': u'ๆธด่ๆธ',
u'ๆธด่็จฑ': u'ๆธด่็จฑ',
u'ๆธด่่': u'ๆธด่่',
u'ๆธด่่ฟฐ': u'ๆธด่่ฟฐ',
u'ๆธด่้': u'ๆธด่้',
u'ๆบข่': u'ๆบข็',
u'ๆบข่ไฝ': u'ๆบข่ไฝ',
u'ๆบข่ๅ': u'ๆบข่ๅ',
u'ๆบข่ๆธ': u'ๆบข่ๆธ',
u'ๆบข่็จฑ': u'ๆบข่็จฑ',
u'ๆบข่่': u'ๆบข่่',
u'ๆบข่่ฟฐ': u'ๆบข่่ฟฐ',
u'ๆบข่้': u'ๆบข่้',
u'ๆผ่': u'ๆผ็',
u'ๆผ่ไฝ': u'ๆผ่ไฝ',
u'ๆผ่ๅ': u'ๆผ่ๅ',
u'ๆผ่ๆธ': u'ๆผ่ๆธ',
u'ๆผ่็จฑ': u'ๆผ่็จฑ',
u'ๆผ่่': u'ๆผ่่',
u'ๆผ่่ฟฐ': u'ๆผ่่ฟฐ',
u'ๆผ่้': u'ๆผ่้',
u'ๆผซ่': u'ๆผซ็',
u'ๆผซ่ไฝ': u'ๆผซ่ไฝ',
u'ๆผซ่ๅ': u'ๆผซ่ๅ',
u'ๆผซ่ๆธ': u'ๆผซ่ๆธ',
u'ๆผซ่็จฑ': u'ๆผซ่็จฑ',
u'ๆผซ่่': u'ๆผซ่่',
u'ๆผซ่่ฟฐ': u'ๆผซ่่ฟฐ',
u'ๆผซ่้': u'ๆผซ่้',
u'ๆฝค่': u'ๆฝค็',
u'ๆฝค่ไฝ': u'ๆฝค่ไฝ',
u'ๆฝค่ๅ': u'ๆฝค่ๅ',
u'ๆฝค่ๆธ': u'ๆฝค่ๆธ',
u'ๆฝค่็จฑ': u'ๆฝค่็จฑ',
u'ๆฝค่่': u'ๆฝค่่',
u'ๆฝค่่ฟฐ': u'ๆฝค่่ฟฐ',
u'ๆฝค่้': u'ๆฝค่้',
u'่ธ': u'็',
u'็ง่': u'็ง็',
u'็ง่ไฝ': u'็ง่ไฝ',
u'็ง่ๅ': u'็ง่ๅ',
u'็ง่ๆธ': u'็ง่ๆธ',
u'็ง่็จฑ': u'็ง่็จฑ',
u'็ง่่': u'็ง่่',
u'็ง่่ฟฐ': u'็ง่่ฟฐ',
u'็ง่้': u'็ง่้',
u'็่': u'็็',
u'็่ไฝ': u'็่ไฝ',
u'็่ๅ': u'็่ๅ',
u'็่ๆธ': u'็่ๆธ',
u'็่็จฑ': u'็่็จฑ',
u'็่่': u'็่่',
u'็่่ฟฐ': u'็่่ฟฐ',
u'็่้': u'็่้',
u'็ญ่': u'็ญ็',
u'็ญ่ไฝ': u'็ญ่ไฝ',
u'็ญ่ๅ': u'็ญ่ๅ',
u'็ญ่ๆธ': u'็ญ่ๆธ',
u'็ญ่็จฑ': u'็ญ่็จฑ',
u'็ญ่่': u'็ญ่่',
u'็ญ่่ฟฐ': u'็ญ่่ฟฐ',
u'็ญ่้': u'็ญ่้',
u'ๅ้้ๆ่ฒๅฅ': u'็น็ซๅฐผ้ๅๅคๅทดๅฅ',
u'็ฝ่': u'็ฝ็',
u'็ฝ่ไฝ': u'็ฝ่ไฝ',
u'็ฝ่ๅ': u'็ฝ่ๅ',
u'็ฝ่ๆธ': u'็ฝ่ๆธ',
u'็ฝ่็จฑ': u'็ฝ่็จฑ',
u'็ฝ่่': u'็ฝ่่',
u'็ฝ่่ฟฐ': u'็ฝ่่ฟฐ',
u'็ฝ่้': u'็ฝ่้',
u'็ฏไธ่': u'็ฏไธ็',
u'็ฏไธ่ไฝ': u'็ฏไธ่ไฝ',
u'็ฏไธ่ๅ': u'็ฏไธ่ๅ',
u'็ฏไธ่ๆธ': u'็ฏไธ่ๆธ',
u'็ฏไธ่็จฑ': u'็ฏไธ่็จฑ',
u'็ฏไธ่่': u'็ฏไธ่่',
u'็ฏไธ่่ฟฐ': u'็ฏไธ่่ฟฐ',
u'็ฏไธ่้': u'็ฏไธ่้',
u'็ฏๅพ่': u'็ฏๅพ็',
u'็ฌๅช': u'็้ป',
u'็่': u'็็',
u'็่ไฝ': u'็่ไฝ',
u'็่ๅ': u'็่ๅ',
u'็่ๆธ': u'็่ๆธ',
u'็่็จฑ': u'็่็จฑ',
u'็่่': u'็่่',
u'็่่ฟฐ': u'็่่ฟฐ',
u'็่้': u'็่้',
u'็ฑ้': u'็่ฃ',
u'็่ฃก': u'็่ฃ',
u'็จ่': u'็จ็',
u'็จ่ไฝ': u'็จ่ไฝ',
u'็จ่ๅ': u'็จ่ๅ',
u'็จ่ๆธ': u'็จ่ๆธ',
u'็จ่็จฑ': u'็จ่็จฑ',
u'็จ่่': u'็จ่่',
u'็จ่่ฟฐ': u'็จ่่ฟฐ',
u'็จ่้': u'็จ่้',
u'็ฒ่': u'็ฒ็',
u'็ฒ่ไฝ': u'็ฒ่ไฝ',
u'็ฒ่ๅ': u'็ฒ่ๅ',
u'็ฒ่ๆธ': u'็ฒ่ๆธ',
u'็ฒ่็จฑ': u'็ฒ่็จฑ',
u'็ฒ่่': u'็ฒ่่',
u'็ฒ่่ฟฐ': u'็ฒ่่ฟฐ',
u'็ฒ่้': u'็ฒ่้',
u'่ซพ้ญฏ': u'็้ญฏ',
u'่ฌ้ฃๆ': u'็ฆๅช้ฟๅ',
u'็่': u'็็',
u'็่ไฝ': u'็่ไฝ',
u'็่ๅ': u'็่ๅ',
u'็่ๆธ': u'็่ๆธ',
u'็่็จฑ': u'็่็จฑ',
u'็่่': u'็่่',
u'็่่ฟฐ': u'็่่ฟฐ',
u'็่้': u'็่้',
u'็จไธ่': u'็จไธ็',
u'็จๅพ่': u'็จๅพ็',
u'็จ่': u'็จ็',
u'็จ่ไฝ': u'็จ่ไฝ',
u'็จ่ๅ': u'็จ่ๅ',
u'็จ่ๆธ': u'็จ่ๆธ',
u'็จ่็จฑ': u'็จ่็จฑ',
u'็จ่่': u'็จ่่',
u'็จ่่ฟฐ': u'็จ่่ฟฐ',
u'็จ่้': u'็จ่้',
u'็่': u'็็',
u'็่ไฝ': u'็่ไฝ',
u'็่ๅ': u'็่ๅ',
u'็่ๆธ': u'็่ๆธ',
u'็่็จฑ': u'็่็จฑ',
u'็่่': u'็่่',
u'็่่ฟฐ': u'็่่ฟฐ',
u'็่้': u'็่้',
u'็ถ่': u'็ถ็',
u'็ถ่ไฝ': u'็ถ่ไฝ',
u'็ถ่ๅ': u'็ถ่ๅ',
u'็ถ่ๆธ': u'็ถ่ๆธ',
u'็ถ่็จฑ': u'็ถ่็จฑ',
u'็ถ่่': u'็ถ่่',
u'็ถ่่ฟฐ': u'็ถ่่ฟฐ',
u'็ถ่้': u'็ถ่้',
u'็่': u'็็',
u'็่ไฝ': u'็่ไฝ',
u'็่ๅ': u'็่ๅ',
u'็่ๆธ': u'็่ๆธ',
u'็่็จฑ': u'็่็จฑ',
u'็่่': u'็่่',
u'็่่ฟฐ': u'็่่ฟฐ',
u'็่้': u'็่้',
u'ๅๅธ': u'็ผไฝ',
u'็ผๅธ': u'็ผไฝ',
u'็พ็ง่ฃก': u'็พ็ง่ฃ',
u'็พ็ง้': u'็พ็ง่ฃ',
u'่จ็จ่ป': u'็ๅฃซ',
u'ๅบ็ง่ฝฆ': u'็ๅฃซ',
u'็ฎ้้ณ็ง': u'็ฎ่ฃ้ฝ็ง',
u'็ฎ่ฃก้ฝ็ง': u'็ฎ่ฃ้ฝ็ง',
u'็บ่': u'็บ็',
u'็บ่ไฝ': u'็บ่ไฝ',
u'็บ่ๅ': u'็บ่ๅ',
u'็บ่ๆธ': u'็บ่ๆธ',
u'็บ่็จฑ': u'็บ่็จฑ',
u'็บ่่': u'็บ่่',
u'็บ่่ฟฐ': u'็บ่่ฟฐ',
u'็บ่้': u'็บ่้',
u'็่': u'็็',
u'็่ไฝ': u'็่ไฝ',
u'็่ๅ': u'็่ๅ',
u'็่ๆธ': u'็่ๆธ',
u'็่็จฑ': u'็่็จฑ',
u'็่่': u'็่่',
u'็่่ฟฐ': u'็่่ฟฐ',
u'็่้': u'็่้',
u'็งๅฎ้': u'็งๆบ้',
u'็ฏ่': u'็ฏ็',
u'็ฏ่ไฝ': u'็ฏ่ไฝ',
u'็ฏ่ๅ': u'็ฏ่ๅ',
u'็ฏ่ๆธ': u'็ฏ่ๆธ',
u'็ฏ่็จฑ': u'็ฏ่็จฑ',
u'็ฏ่่': u'็ฏ่่',
u'็ฏ่่ฟฐ': u'็ฏ่่ฟฐ',
u'็ฏ่้': u'็ฏ่้',
u'็พ่': u'็พ็',
u'็พ่ไฝ': u'็พ่ไฝ',
u'็พ่ๅ': u'็พ่ๅ',
u'็พ่ๆธ': u'็พ่ๆธ',
u'็พ่็จฑ': u'็พ่็จฑ',
u'็พ่่': u'็พ่่',
u'็พ่่ฟฐ': u'็พ่่ฟฐ',
u'็พ่้': u'็พ่้',
u'็ไธ่': u'็ไธ็',
u'็ๅพ่': u'็ๅพ็',
u'็่': u'็็',
u'็่ไฝ': u'็่ไฝ',
u'็่ๅ': u'็่ๅ',
u'็่ๆธ': u'็่ๆธ',
u'็่็จฑ': u'็่็จฑ',
u'็่่': u'็่่',
u'็่่ฟฐ': u'็่่ฟฐ',
u'็่้': u'็่้',
u'็ผ็่ฃก': u'็ผ็่ฃ',
u'็ผ็้': u'็ผ็่ฃ',
u'่ไป้บผๆฅ': u'็ไป้บผๆฅ',
u'่ไป': u'็ไป',
u'่ไฝ ': u'็ไฝ ',
u'่ๅ': u'็ๅ',
u'่ๅฐ': u'็ๅฐ',
u'่ๅขจ': u'็ๅขจ',
u'่ๅฅน': u'็ๅฅน',
u'่ๅฆณ': u'็ๅฆณ',
u'่ๅฎ': u'็ๅฎ',
u'่ๅฏฆ': u'็ๅฏฆ',
u'่ๅฟ': u'็ๅฟ',
u'่ๆฅ': u'็ๆฅ',
u'่ๆณ': u'็ๆณ',
u'่ๆ': u'็ๆ',
u'่ๆ': u'็ๆ',
u'่ๆ': u'็ๆ',
u'่ๆธ': u'็ๆธ',
u'่ๆณ': u'็ๆณ',
u'่ๆถผ': u'็ๆถผ',
u'่็ซ': u'็็ซ',
u'่็ผ': u'็็ผ',
u'่็ฅ': u'็็ฅ',
u'่็ญ': u'็็ญ',
u'่็ตฒ': u'็็ตฒ',
u'่็ท': u'็็ท',
u'่่ณ': u'็่ณ',
u'่่ฆ': u'็่ฆ',
u'่่ฒ': u'็่ฒ',
u'่่ฝ': u'็่ฝ',
u'่่กฃ': u'็่กฃ',
u'่่ฃ': u'็่ฃ',
u'่่ฟท': u'็่ฟท',
u'่้': u'็้',
u'่้': u'็้',
u'่้ธ': u'็้ธ',
u'่้ญ': u'็้ญ',
u'็กไธ่': u'็กไธ็',
u'็กๅพ่': u'็กๅพ็',
u'็ก่': u'็ก็',
u'็ก่ไฝ': u'็ก่ไฝ',
u'็ก่ๅ': u'็ก่ๅ',
u'็ก่ๆธ': u'็ก่ๆธ',
u'็ก่็จฑ': u'็ก่็จฑ',
u'็ก่่': u'็ก่่',
u'็ก่่ฟฐ': u'็ก่่ฟฐ',
u'็ก่้': u'็ก่้',
u'็่': u'็็',
u'็่ไฝ': u'็่ไฝ',
u'็่ๅ': u'็่ๅ',
u'็่ๆธ': u'็่ๆธ',
u'็่็จฑ': u'็่็จฑ',
u'็่่': u'็่่',
u'็่่ฟฐ': u'็่่ฟฐ',
u'็่้': u'็่้',
u'็ช่': u'็ช็',
u'็ช่ไฝ': u'็ช่ไฝ',
u'็ช่ๅ': u'็ช่ๅ',
u'็ช่ๆธ': u'็ช่ๆธ',
u'็ช่็จฑ': u'็ช่็จฑ',
u'็ช่่': u'็ช่่',
u'็ช่่ฟฐ': u'็ช่่ฟฐ',
u'็ช่้': u'็ช่้',
u'็ฐก่จ': u'็ญ่จ',
u'็ญไฟก': u'็ญ่จ',
u'็กฌไปถ': u'็กฌไปถ',
u'็กฌ้ซ': u'็กฌไปถ',
u'็ฆๆฏ': u'็ฆๅฃซ',
u'็ฆ่': u'็ฆ็',
u'็ฆ่ไฝ': u'็ฆ่ไฝ',
u'็ฆ่ๅ': u'็ฆ่ๅ',
u'็ฆ่ๆธ': u'็ฆ่ๆธ',
u'็ฆ่็จฑ': u'็ฆ่็จฑ',
u'็ฆ่่': u'็ฆ่่',
u'็ฆ่่ฟฐ': u'็ฆ่่ฟฐ',
u'็ฆ่้': u'็ฆ่้',
u'็งๅ่ฃก': u'็งๅ่ฃ',
u'็งๅ้': u'็งๅ่ฃ',
u'็งๅคฉ่ฃก': u'็งๅคฉ่ฃ',
u'็งๅคฉ้': u'็งๅคฉ่ฃ',
u'็งๆฅ้': u'็งๆฅ่ฃ',
u'็งๆฅ่ฃก': u'็งๆฅ่ฃ',
u'่ๆฉ': u'็งๆฉ็พ',
u'ๆท่ฑน': u'็ฉๆถ',
u'็ฉบ่': u'็ฉบ็',
u'็ฉบ่ไฝ': u'็ฉบ่ไฝ',
u'็ฉบ่ๅ': u'็ฉบ่ๅ',
u'็ฉบ่ๆธ': u'็ฉบ่ๆธ',
u'็ฉบ่็จฑ': u'็ฉบ่็จฑ',
u'็ฉบ่่': u'็ฉบ่่',
u'็ฉบ่่ฟฐ': u'็ฉบ่่ฟฐ',
u'็ฉบ่้': u'็ฉบ่้',
u'ๅคช็ฉบๆขญ': u'็ฉฟๆขญๆฉ',
u'่ชๅคฉ้ฃๆบ': u'็ฉฟๆขญๆฉ',
u'็ฉฟ่': u'็ฉฟ็',
u'็ฉฟ่ไฝ': u'็ฉฟ่ไฝ',
u'็ฉฟ่ๅ': u'็ฉฟ่ๅ',
u'็ฉฟ่ๆธ': u'็ฉฟ่ๆธ',
u'็ฉฟ่็จฑ': u'็ฉฟ่็จฑ',
u'็ฉฟ่่': u'็ฉฟ่่',
u'็ฉฟ่่ฟฐ': u'็ฉฟ่่ฟฐ',
u'็ฉฟ่้': u'็ฉฟ่้',
u'็ซ่': u'็ซ็',
u'็ซ่ไฝ': u'็ซ่ไฝ',
u'็ซ่ๅ': u'็ซ่ๅ',
u'็ซ่ๆธ': u'็ซ่ๆธ',
u'็ซ่็จฑ': u'็ซ่็จฑ',
u'็ซ่่': u'็ซ่่',
u'็ซ่่ฟฐ': u'็ซ่่ฟฐ',
u'็ซ่้': u'็ซ่้',
u'็ฌ่': u'็ฌ็',
u'็ฌ่ไฝ': u'็ฌ่ไฝ',
u'็ฌ่ๅ': u'็ฌ่ๅ',
u'็ฌ่ๆธ': u'็ฌ่ๆธ',
u'็ฌ่็จฑ': u'็ฌ่็จฑ',
u'็ฌ่่': u'็ฌ่่',
u'็ฌ่่ฟฐ': u'็ฌ่่ฟฐ',
u'็ฌ่้': u'็ฌ่้',
u'็ฎก่': u'็ฎก็',
u'็ฎก่ไฝ': u'็ฎก่ไฝ',
u'็ฎก่ๅ': u'็ฎก่ๅ',
u'็ฎก่ๆธ': u'็ฎก่ๆธ',
u'็ฎก่็จฑ': u'็ฎก่็จฑ',
u'็ฎก่่': u'็ฎก่่',
u'็ฎก่่ฟฐ': u'็ฎก่่ฟฐ',
u'็ฎก่้': u'็ฎก่้',
u'่ฟๅๅฐยทๆฌงๆ': u'็ฑณ้ซๅฅง้ฒ',
u'็ณปๅ่ฃก': u'็ณปๅ่ฃ',
u'็ณปๅ้': u'็ณปๅ่ฃ',
u'็ดข้ฆฌๅฉไบ': u'็ดข้ฆฌ้',
u'็ดฎ่': u'็ดฎ็',
u'็ดฎ่ไฝ': u'็ดฎ่ไฝ',
u'็ดฎ่ๅ': u'็ดฎ่ๅ',
u'็ดฎ่ๆธ': u'็ดฎ่ๆธ',
u'็ดฎ่็จฑ': u'็ดฎ่็จฑ',
u'็ดฎ่่': u'็ดฎ่่',
u'็ดฎ่่ฟฐ': u'็ดฎ่่ฟฐ',
u'็ดฎ่้': u'็ดฎ่้',
u'็ถ่': u'็ถ็',
u'็ถ่ไฝ': u'็ถ่ไฝ',
u'็ถ่ๅ': u'็ถ่ๅ',
u'็ถ่ๆธ': u'็ถ่ๆธ',
u'็ถ่็จฑ': u'็ถ่็จฑ',
u'็ถ่่': u'็ถ่่',
u'็ถ่่ฟฐ': u'็ถ่่ฟฐ',
u'็ถ่้': u'็ถ่้',
u'็ถฒ่ทฏ': u'็ถฒ็ตก',
u'็ทๅถ': u'็ทๅ',
u'็น่': u'็น็',
u'็น่ไฝ': u'็น่ไฝ',
u'็น่ๅ': u'็น่ๅ',
u'็น่ๆธ': u'็น่ๆธ',
u'็น่็จฑ': u'็น่็จฑ',
u'็น่่': u'็น่่',
u'็น่่ฟฐ': u'็น่่ฟฐ',
u'็น่้': u'็น่้',
u'็บ่': u'็บ็',
u'็บ่ไฝ': u'็บ่ไฝ',
u'็บ่ๅ': u'็บ่ๅ',
u'็บ่ๆธ': u'็บ่ๆธ',
u'็บ่็จฑ': u'็บ่็จฑ',
u'็บ่่': u'็บ่่',
u'็บ่่ฟฐ': u'็บ่่ฟฐ',
u'็บ่้': u'็บ่้',
u'็ฝฉ่': u'็ฝฉ็',
u'็ฝฉ่ไฝ': u'็ฝฉ่ไฝ',
u'็ฝฉ่ๅ': u'็ฝฉ่ๅ',
u'็ฝฉ่ๆธ': u'็ฝฉ่ๆธ',
u'็ฝฉ่็จฑ': u'็ฝฉ่็จฑ',
u'็ฝฉ่่': u'็ฝฉ่่',
u'็ฝฉ่่ฟฐ': u'็ฝฉ่่ฟฐ',
u'็ฝฉ่้': u'็ฝฉ่้',
u'็ฝต่': u'็ฝต็',
u'็ฝต่ไฝ': u'็ฝต่ไฝ',
u'็ฝต่ๅ': u'็ฝต่ๅ',
u'็ฝต่ๆธ': u'็ฝต่ๆธ',
u'็ฝต่็จฑ': u'็ฝต่็จฑ',
u'็ฝต่่': u'็ฝต่่',
u'็ฝต่่ฟฐ': u'็ฝต่่ฟฐ',
u'็ฝต่้': u'็ฝต่้',
u'็พ่': u'็พ็',
u'็พ่ไฝ': u'็พ่ไฝ',
u'็พ่ๅ': u'็พ่ๅ',
u'็พ่ๆธ': u'็พ่ๆธ',
u'็พ่็จฑ': u'็พ่็จฑ',
u'็พ่่': u'็พ่่',
u'็พ่่ฟฐ': u'็พ่่ฟฐ',
u'็พ่้': u'็พ่้',
u'่่': u'่็',
u'่่ไฝ': u'่่ไฝ',
u'่่ๅ': u'่่ๅ',
u'่่ๆธ': u'่่ๆธ',
u'่่็จฑ': u'่่็จฑ',
u'่่่': u'่่่',
u'่่่ฟฐ': u'่่่ฟฐ',
u'่่้': u'่่้',
u'ๅฏฎๅ': u'่ๆพ',
u'่่': u'่็',
u'่่ไฝ': u'่่ไฝ',
u'่่ๅ': u'่่ๅ',
u'่่ๆธ': u'่่ๆธ',
u'่่็จฑ': u'่่็จฑ',
u'่่่': u'่่่',
u'่่่ฟฐ': u'่่่ฟฐ',
u'่่้': u'่่้',
u'ๅฃๅบ่จๅๅฐผ็ปดๆฏ': u'่ๅๆฏ็ดๅๆฏ',
u'่ๅ้ๆฏๅค็ฆๅๅฐผ็ถญๆฏ': u'่ๅๆฏ็ดๅๆฏ',
u'่ๆๆฃฎๅๆ ผ็้ฃไธ': u'่ๆๆฃฎ็นๅๆ ผๆ็ดไธๆฏ',
u'่้ฒ่ฅฟไบ': u'่็ง่ฅฟไบ',
u'่้ฆฌๅฉ่ซพ': u'่้ฆฌๅ่ซพ',
u'่ฝไธ่': u'่ฝไธ็',
u'่ฝๅพ่': u'่ฝๅพ็',
u'่ฝ่': u'่ฝ็',
u'่ฝ่ไฝ': u'่ฝ่ไฝ',
u'่ฝ่ๅ': u'่ฝ่ๅ',
u'่ฝ่ๆธ': u'่ฝ่ๆธ',
u'่ฝ่็จฑ': u'่ฝ่็จฑ',
u'่ฝ่่': u'่ฝ่่',
u'่ฝ่่ฟฐ': u'่ฝ่่ฟฐ',
u'่ฝ่้': u'่ฝ่้',
u'่้': u'่่ฃ',
u'่่ฃก': u'่่ฃ',
u'่ฏๅฐผไบ': u'่ฏ้',
u'่ฏไบ': u'่ฏ้',
u'่่': u'่็',
u'่่ไฝ': u'่่ไฝ',
u'่่ๅ': u'่่ๅ',
u'่่ๆธ': u'่่ๆธ',
u'่่็จฑ': u'่่็จฑ',
u'่่่': u'่่่',
u'่่่ฟฐ': u'่่่ฟฐ',
u'่่้': u'่่้',
u'่ ่': u'่ ็',
u'่ ่ไฝ': u'่ ่ไฝ',
u'่ ่ๅ': u'่ ่ๅ',
u'่ ่ๆธ': u'่ ่ๆธ',
u'่ ่็จฑ': u'่ ่็จฑ',
u'่ ่่': u'่ ่่',
u'่ ่่ฟฐ': u'่ ่่ฟฐ',
u'่ ่้': u'่ ่้',
u'่จ่': u'่จ็',
u'่จ่ไฝ': u'่จ่ไฝ',
u'่จ่ๅ': u'่จ่ๅ',
u'่จ่ๆธ': u'่จ่ๆธ',
u'่จ่็จฑ': u'่จ่็จฑ',
u'่จ่่': u'่จ่่',
u'่จ่่ฟฐ': u'่จ่่ฟฐ',
u'่จ่้': u'่จ่้',
u'่่': u'่็',
u'่่ไฝ': u'่่ไฝ',
u'่่ๅ': u'่่ๅ',
u'่่ๆธ': u'่่ๆธ',
u'่่็จฑ': u'่่็จฑ',
u'่่่': u'่่่',
u'่่่ฟฐ': u'่่่ฟฐ',
u'่่้': u'่่้',
u'่ฟๅๅฐยท่้ฉฌ่ตซ': u'่้บฅๅ ',
u'่ฆ่': u'่ฆ็',
u'่ฆ่ไฝ': u'่ฆ่ไฝ',
u'่ฆ่ๅ': u'่ฆ่ๅ',
u'่ฆ่ๆธ': u'่ฆ่ๆธ',
u'่ฆ่็จฑ': u'่ฆ่็จฑ',
u'่ฆ่่': u'่ฆ่่',
u'่ฆ่่ฟฐ': u'่ฆ่่ฟฐ',
u'่ฆ่้': u'่ฆ่้',
u'่ฆ้': u'่ฆ่ฃ',
u'่ฆ่ฃก': u'่ฆ่ฃ',
u'่ซไธๆฏๅ': u'่ซๆกๆฏๅ',
u'่ณด็ดขๆ': u'่็ดขๆ',
u'้ฆฌ่ช้': u'่ฌไบๅพ',
u'้ฉฌ่ช่พพ': u'่ฌไบๅพ',
u'่ฝ่': u'่ฝ็',
u'่ฝ่ไฝ': u'่ฝ่ไฝ',
u'่ฝ่ๅ': u'่ฝ่ๅ',
u'่ฝ่ๆธ': u'่ฝ่ๆธ',
u'่ฝ่็จฑ': u'่ฝ่็จฑ',
u'่ฝ่่': u'่ฝ่่',
u'่ฝ่่ฟฐ': u'่ฝ่่ฟฐ',
u'่ฝ่้': u'่ฝ่้',
u'่่': u'่็',
u'่่ไฝ': u'่่ไฝ',
u'่่ๅ': u'่่ๅ',
u'่่ๆธ': u'่่ๆธ',
u'่่็จฑ': u'่่็จฑ',
u'่่่': u'่่่',
u'่่่ฟฐ': u'่่่ฟฐ',
u'่่้': u'่่้',
u'่จ่พพๅง': u'่ฉ้ๅง',
u'่่': u'่็',
u'่่': u'่็',
u'่่ไฝ': u'่่ไฝ',
u'่่ๅ': u'่่ๅ',
u'่่ๆธ': u'่่ๆธ',
u'่่็จฑ': u'่่็จฑ',
u'่่่': u'่่่',
u'่่่ฟฐ': u'่่่ฟฐ',
u'่่้': u'่่้',
u'่่': u'่็',
u'่่ไฝ': u'่่ไฝ',
u'่่ๅ': u'่่ๅ',
u'่่ๆธ': u'่่ๆธ',
u'่่็จฑ': u'่่็จฑ',
u'่่่': u'่่่',
u'่่่ฟฐ': u'่่่ฟฐ',
u'่่้': u'่่้',
u'่ธ่': u'่ธ็',
u'่ธ่ไฝ': u'่ธ่ไฝ',
u'่ธ่ๅ': u'่ธ่ๅ',
u'่ธ่ๆธ': u'่ธ่ๆธ',
u'่ธ่็จฑ': u'่ธ่็จฑ',
u'่ธ่่': u'่ธ่่',
u'่ธ่่ฟฐ': u'่ธ่่ฟฐ',
u'่ธ่้': u'่ธ่้',
u'่ก่': u'่ก็',
u'่ก่ไฝ': u'่ก่ไฝ',
u'่ก่ๅ': u'่ก่ๅ',
u'่ก่ๆธ': u'่ก่ๆธ',
u'่ก่็จฑ': u'่ก่็จฑ',
u'่ก่่': u'่ก่่',
u'่ก่่ฟฐ': u'่ก่่ฟฐ',
u'่ก่้': u'่ก่้',
u'่ก': u'่ก',
u'่กฃ่': u'่กฃ็',
u'่กฃ่ไฝ': u'่กฃ่ไฝ',
u'่กฃ่ๅ': u'่กฃ่ๅ',
u'่กฃ่ๆธ': u'่กฃ่ๆธ',
u'่กฃ่็จฑ': u'่กฃ่็จฑ',
u'่กฃ่่': u'่กฃ่่',
u'่กฃ่่ฟฐ': u'่กฃ่่ฟฐ',
u'่กฃ่้': u'่กฃ่้',
u'่ฃกๅพๅค้ฃ': u'่ฃๅพๅค้ฃ',
u'้ๅพๅค่ฟ': u'่ฃๅพๅค้ฃ',
u'้้ข': u'่ฃ้ข',
u'่ฃก้ข': u'่ฃ้ข',
u'่ฃ่': u'่ฃ็',
u'่ฃ่ไฝ': u'่ฃ่ไฝ',
u'่ฃ่ๅ': u'่ฃ่ๅ',
u'่ฃ่ๆธ': u'่ฃ่ๆธ',
u'่ฃ่็จฑ': u'่ฃ่็จฑ',
u'่ฃ่่': u'่ฃ่่',
u'่ฃ่่ฟฐ': u'่ฃ่่ฟฐ',
u'่ฃ่้': u'่ฃ่้',
u'่ฃน่': u'่ฃน็',
u'่ฃน่ไฝ': u'่ฃน่ไฝ',
u'่ฃน่ๅ': u'่ฃน่ๅ',
u'่ฃน่ๆธ': u'่ฃน่ๆธ',
u'่ฃน่็จฑ': u'่ฃน่็จฑ',
u'่ฃน่่': u'่ฃน่่',
u'่ฃน่่ฟฐ': u'่ฃน่่ฟฐ',
u'่ฃน่้': u'่ฃน่้',
u'่ฆ่': u'่ฆ็',
u'่ฆ่ไฝ': u'่ฆ่ไฝ',
u'่ฆ่ๅ': u'่ฆ่ๅ',
u'่ฆ่ๆธ': u'่ฆ่ๆธ',
u'่ฆ่็จฑ': u'่ฆ่็จฑ',
u'่ฆ่่': u'่ฆ่่',
u'่ฆ่่ฟฐ': u'่ฆ่่ฟฐ',
u'่ฆ่้': u'่ฆ่้',
u'่จ่': u'่จ็',
u'่จ่ไฝ': u'่จ่ไฝ',
u'่จ่ๅ': u'่จ่ๅ',
u'่จ่ๆธ': u'่จ่ๆธ',
u'่จ่็จฑ': u'่จ่็จฑ',
u'่จ่่': u'่จ่่',
u'่จ่่ฟฐ': u'่จ่่ฟฐ',
u'่จ่้': u'่จ่้',
u'่ฉฆ่': u'่ฉฆ็',
u'่ฉฆ่ไฝ': u'่ฉฆ่ไฝ',
u'่ฉฆ่ๅ': u'่ฉฆ่ๅ',
u'่ฉฆ่ๆธ': u'่ฉฆ่ๆธ',
u'่ฉฆ่็จฑ': u'่ฉฆ่็จฑ',
u'่ฉฆ่่': u'่ฉฆ่่',
u'่ฉฆ่่ฟฐ': u'่ฉฆ่่ฟฐ',
u'่ฉฆ่้': u'่ฉฆ่้',
u'่ช่': u'่ช็',
u'่ช่ไฝ': u'่ช่ไฝ',
u'่ช่ๅ': u'่ช่ๅ',
u'่ช่ๆธ': u'่ช่ๆธ',
u'่ช่็จฑ': u'่ช่็จฑ',
u'่ช่่': u'่ช่่',
u'่ช่่ฟฐ': u'่ช่่ฟฐ',
u'่ช่้': u'่ช่้',
u'ๆธๆๆฉ': u'่ชฟๅถ่งฃ่ชฟๅจ',
u'่ฎ่': u'่ฎ็',
u'่ฎ่ไฝ': u'่ฎ่ไฝ',
u'่ฎ่ๅ': u'่ฎ่ๅ',
u'่ฎ่ๆธ': u'่ฎ่ๆธ',
u'่ฎ่็จฑ': u'่ฎ่็จฑ',
u'่ฎ่่': u'่ฎ่่',
u'่ฎ่่ฟฐ': u'่ฎ่่ฟฐ',
u'่ฎ่้': u'่ฎ่้',
u'่ฑ่': u'่ฑ็',
u'่ฑ่ไฝ': u'่ฑ่ไฝ',
u'่ฑ่ๅ': u'่ฑ่ๅ',
u'่ฑ่ๆธ': u'่ฑ่ๆธ',
u'่ฑ่็จฑ': u'่ฑ่็จฑ',
u'่ฑ่่': u'่ฑ่่',
u'่ฑ่่ฟฐ': u'่ฑ่่ฟฐ',
u'่ฑ่้': u'่ฑ่้',
u'่ฑซ่': u'่ฑซ็',
u'่ฑซ่ไฝ': u'่ฑซ่ไฝ',
u'่ฑซ่ๅ': u'่ฑซ่ๅ',
u'่ฑซ่ๆธ': u'่ฑซ่ๆธ',
u'่ฑซ่็จฑ': u'่ฑซ่็จฑ',
u'่ฑซ่่': u'่ฑซ่่',
u'่ฑซ่่ฟฐ': u'่ฑซ่่ฟฐ',
u'่ฑซ่้': u'่ฑซ่้',
u'่ฒๅ': u'่ฒๅฏง',
u'่ฒ่': u'่ฒ็',
u'่ฒ่ไฝ': u'่ฒ่ไฝ',
u'่ฒ่ๅ': u'่ฒ่ๅ',
u'่ฒ่ๆธ': u'่ฒ่ๆธ',
u'่ฒ่็จฑ': u'่ฒ่็จฑ',
u'่ฒ่่': u'่ฒ่่',
u'่ฒ่่ฟฐ': u'่ฒ่่ฟฐ',
u'่ฒ่้': u'่ฒ่้',
u'่ฒทๅถ': u'่ฒทๅ',
u'ๅฐๆฏไบ': u'่ดๆฏไบ',
u'่ตฐ่': u'่ตฐ็',
u'่ตฐ่ไฝ': u'่ตฐ่ไฝ',
u'่ตฐ่ๅ': u'่ตฐ่ๅ',
u'่ตฐ่ๆธ': u'่ตฐ่ๆธ',
u'่ตฐ่็จฑ': u'่ตฐ่็จฑ',
u'่ตฐ่่': u'่ตฐ่่',
u'่ตฐ่่ฟฐ': u'่ตฐ่่ฟฐ',
u'่ตฐ่้': u'่ตฐ่้',
u'่ถ่': u'่ถ็',
u'่ถ่ไฝ': u'่ถ่ไฝ',
u'่ถ่ๅ': u'่ถ่ๅ',
u'่ถ่ๆธ': u'่ถ่ๆธ',
u'่ถ่็จฑ': u'่ถ่็จฑ',
u'่ถ่่': u'่ถ่่',
u'่ถ่่ฟฐ': u'่ถ่่ฟฐ',
u'่ถ่้': u'่ถ่้',
u'่ถด่': u'่ถด็',
u'่ถด่ไฝ': u'่ถด่ไฝ',
u'่ถด่ๅ': u'่ถด่ๅ',
u'่ถด่ๆธ': u'่ถด่ๆธ',
u'่ถด่็จฑ': u'่ถด่็จฑ',
u'่ถด่่': u'่ถด่่',
u'่ถด่่ฟฐ': u'่ถด่่ฟฐ',
u'่ถด่้': u'่ถด่้',
u'่ท่': u'่ท็',
u'่ท่ไฝ': u'่ท่ไฝ',
u'่ท่ๅ': u'่ท่ๅ',
u'่ท่ๆธ': u'่ท่ๆธ',
u'่ท่็จฑ': u'่ท่็จฑ',
u'่ท่่': u'่ท่่',
u'่ท่่ฟฐ': u'่ท่่ฟฐ',
u'่ท่้': u'่ท่้',
u'่ท่': u'่ท็',
u'่ท่ไฝ': u'่ท่ไฝ',
u'่ท่ๅ': u'่ท่ๅ',
u'่ท่ๆธ': u'่ท่ๆธ',
u'่ท่็จฑ': u'่ท่็จฑ',
u'่ท่่': u'่ท่่',
u'่ท่่ฟฐ': u'่ท่่ฟฐ',
u'่ท่้': u'่ท่้',
u'่ทช่': u'่ทช็',
u'่ทช่ไฝ': u'่ทช่ไฝ',
u'่ทช่ๅ': u'่ทช่ๅ',
u'่ทช่ๆธ': u'่ทช่ๆธ',
u'่ทช่็จฑ': u'่ทช่็จฑ',
u'่ทช่่': u'่ทช่่',
u'่ทช่่ฟฐ': u'่ทช่่ฟฐ',
u'่ทช่้': u'่ทช่้',
u'่ทณ่': u'่ทณ็',
u'่ทณ่ไฝ': u'่ทณ่ไฝ',
u'่ทณ่ๅ': u'่ทณ่ๅ',
u'่ทณ่ๆธ': u'่ทณ่ๆธ',
u'่ทณ่็จฑ': u'่ทณ่็จฑ',
u'่ทณ่่': u'่ทณ่่',
u'่ทณ่่ฟฐ': u'่ทณ่่ฟฐ',
u'่ทณ่้': u'่ทณ่้',
u'่ธ่': u'่ธ็',
u'่ธ่ไฝ': u'่ธ่ไฝ',
u'่ธ่ๅ': u'่ธ่ๅ',
u'่ธ่็จฑ': u'่ธ่็จฑ',
u'่ธ่่': u'่ธ่่',
u'่ธ่่ฟฐ': u'่ธ่่ฟฐ',
u'่ธ่้': u'่ธ่้',
u'่ธฉ่': u'่ธฉ็',
u'่ธฉ่ไฝ': u'่ธฉ่ไฝ',
u'่ธฉ่ๅ': u'่ธฉ่ๅ',
u'่ธฉ่ๆธ': u'่ธฉ่ๆธ',
u'่ธฉ่็จฑ': u'่ธฉ่็จฑ',
u'่ธฉ่่': u'่ธฉ่่',
u'่ธฉ่่ฟฐ': u'่ธฉ่่ฟฐ',
u'่ธฉ่้': u'่ธฉ่้',
u'่บ่': u'่บ็',
u'่บ่ไฝ': u'่บ่ไฝ',
u'่บ่ๅ': u'่บ่ๅ',
u'่บ่ๆธ': u'่บ่ๆธ',
u'่บ่็จฑ': u'่บ่็จฑ',
u'่บ่่': u'่บ่่',
u'่บ่่ฟฐ': u'่บ่่ฟฐ',
u'่บ่้': u'่บ่้',
u'่บซ่': u'่บซ็',
u'่บซ่ไฝ': u'่บซ่ไฝ',
u'่บซ่ๅ': u'่บซ่ๅ',
u'่บซ่ๆธ': u'่บซ่ๆธ',
u'่บซ่็จฑ': u'่บซ่็จฑ',
u'่บซ่่': u'่บซ่่',
u'่บซ่่ฟฐ': u'่บซ่่ฟฐ',
u'่บซ่้': u'่บซ่้',
u'่บบ่': u'่บบ็',
u'่บบ่ไฝ': u'่บบ่ไฝ',
u'่บบ่ๅ': u'่บบ่ๅ',
u'่บบ่ๆธ': u'่บบ่ๆธ',
u'่บบ่็จฑ': u'่บบ่็จฑ',
u'่บบ่่': u'่บบ่่',
u'่บบ่่ฟฐ': u'่บบ่่ฟฐ',
u'่บบ่้': u'่บบ่้',
u'่ป้ซ': u'่ปไปถ',
u'่ผ่': u'่ผ็',
u'่ผ่ไฝ': u'่ผ่ไฝ',
u'่ผ่ๅ': u'่ผ่ๅ',
u'่ผ่ๆธ': u'่ผ่ๆธ',
u'่ผ่็จฑ': u'่ผ่็จฑ',
u'่ผ่่': u'่ผ่่',
u'่ผ่่ฟฐ': u'่ผ่่ฟฐ',
u'่ผ่้': u'่ผ่้',
u'่ฝ่': u'่ฝ็',
u'่ฝ่ไฝ': u'่ฝ่ไฝ',
u'่ฝ่ๅ': u'่ฝ่ๅ',
u'่ฝ่ๆธ': u'่ฝ่ๆธ',
u'่ฝ่็จฑ': u'่ฝ่็จฑ',
u'่ฝ่่': u'่ฝ่่',
u'่ฝ่่ฟฐ': u'่ฝ่่ฟฐ',
u'่ฝ่้': u'่ฝ่้',
u'่พฆ่': u'่พฆ็',
u'่พฆ่ไฝ': u'่พฆ่ไฝ',
u'่พฆ่ๅ': u'่พฆ่ๅ',
u'่พฆ่ๆธ': u'่พฆ่ๆธ',
u'่พฆ่็จฑ': u'่พฆ่็จฑ',
u'่พฆ่่': u'่พฆ่่',
u'่พฆ่่ฟฐ': u'่พฆ่่ฟฐ',
u'่พฆ่้': u'่พฆ่้',
u'่ฟ่ง่ชไฟก': u'่ฟ่ง่ฐไฟก',
u'่ฟ่ง่ฐไฟก': u'่ฟ่ง่ฐไฟก',
u'่ฟซ่': u'่ฟซ็',
u'่ฟฝ่': u'่ฟฝ็',
u'่ฟฝ่ไฝ': u'่ฟฝ่ไฝ',
u'่ฟฝ่ๅ': u'่ฟฝ่ๅ',
u'่ฟฝ่ๆธ': u'่ฟฝ่ๆธ',
u'่ฟฝ่็จฑ': u'่ฟฝ่็จฑ',
u'่ฟฝ่่': u'่ฟฝ่่',
u'่ฟฝ่่ฟฐ': u'่ฟฝ่่ฟฐ',
u'่ฟฝ่้': u'่ฟฝ่้',
u'้่': u'้็',
u'้่ไฝ': u'้่ไฝ',
u'้่ๅ': u'้่ๅ',
u'้่ๆธ': u'้่ๆธ',
u'้่็จฑ': u'้่็จฑ',
u'้่่': u'้่่',
u'้่่ฟฐ': u'้่่ฟฐ',
u'้่้': u'้่้',
u'้้': u'้่ฃ',
u'้่ฃก': u'้่ฃ',
u'้ฃ่': u'้ฃ็',
u'้ฃ่ไฝ': u'้ฃ่ไฝ',
u'้ฃ่ๅ': u'้ฃ่ๅ',
u'้ฃ่ๆธ': u'้ฃ่ๆธ',
u'้ฃ่็จฑ': u'้ฃ่็จฑ',
u'้ฃ่่': u'้ฃ่่',
u'้ฃ่่ฟฐ': u'้ฃ่่ฟฐ',
u'้ฃ่้': u'้ฃ่้',
u'้ผ่': u'้ผ็',
u'้ผ่ไฝ': u'้ผ่ไฝ',
u'้ผ่ๅ': u'้ผ่ๅ',
u'้ผ่ๆธ': u'้ผ่ๆธ',
u'้ผ่็จฑ': u'้ผ่็จฑ',
u'้ผ่่': u'้ผ่่',
u'้ผ่่ฟฐ': u'้ผ่่ฟฐ',
u'้ผ่้': u'้ผ่้',
u'้่': u'้็',
u'้่ไฝ': u'้่ไฝ',
u'้่ๅ': u'้่ๅ',
u'้่ๆธ': u'้่ๆธ',
u'้่็จฑ': u'้่็จฑ',
u'้่่': u'้่่',
u'้่่ฟฐ': u'้่่ฟฐ',
u'้่้': u'้่้',
u'้่': u'้็',
u'้่ไฝ': u'้่ไฝ',
u'้่ๅ': u'้่ๅ',
u'้่ๆธ': u'้่ๆธ',
u'้่็จฑ': u'้่็จฑ',
u'้่่': u'้่่',
u'้่่ฟฐ': u'้่่ฟฐ',
u'้่้': u'้่้',
u'้ ่': u'้ ็',
u'้ ่ไฝ': u'้ ่ไฝ',
u'้ ่ๅ': u'้ ่ๅ',
u'้ ่ๆธ': u'้ ่ๆธ',
u'้ ่็จฑ': u'้ ่็จฑ',
u'้ ่่': u'้ ่่',
u'้ ่่ฟฐ': u'้ ่่ฟฐ',
u'้ ่้': u'้ ่้',
u'้่': u'้็',
u'้่ไฝ': u'้่ไฝ',
u'้่ๅ': u'้่ๅ',
u'้่ๆธ': u'้่ๆธ',
u'้่็จฑ': u'้่็จฑ',
u'้่่': u'้่่',
u'้่่ฟฐ': u'้่่ฟฐ',
u'้่้': u'้่้',
u'้ฏ': u'้ฐ',
u'้่': u'้็',
u'้่ไฝ': u'้่ไฝ',
u'้่ๅ': u'้่ๅ',
u'้่ๆธ': u'้่ๆธ',
u'้่็จฑ': u'้่็จฑ',
u'้่่': u'้่่',
u'้่่ฟฐ': u'้่่ฟฐ',
u'้่้': u'้่้',
u'้ซ้ข่ฃก': u'้ซ้ข่ฃ',
u'้ฏๅฃบ': u'้ฏๅฃบ',
u'้ฏๅฃถ': u'้ฏๅฃบ',
u'้ฏ้': u'้ฏ้',
u'้ฏ้ข': u'้ฏ้ข',
u'้ฏ้ฌ': u'้ฏ้ฌ',
u'้ฏ้ฑ': u'้ฏ้ฌ',
u'้ฏ้ธก': u'้ฏ้',
u'้ฏ้': u'้ฏ้',
u'้่': u'้็',
u'้่ไฝ': u'้่ไฝ',
u'้่ๅ': u'้่ๅ',
u'้่ๆธ': u'้่ๆธ',
u'้่็จฑ': u'้่็จฑ',
u'้่่': u'้่่',
u'้่่ฟฐ': u'้่่ฟฐ',
u'้่้': u'้่้',
u'้ค': u'้',
u'้คๅฟ้ฌฅ่ง': u'้ๅฟ้ฌฅ่ง',
u'้ช่': u'้ช็',
u'้ช่ไฝ': u'้ช่ไฝ',
u'้ช่ๅ': u'้ช่ๅ',
u'้ช่ๆธ': u'้ช่ๆธ',
u'้ช่็จฑ': u'้ช่็จฑ',
u'้ช่่': u'้ช่่',
u'้ช่่ฟฐ': u'้ช่่ฟฐ',
u'้ช่้': u'้ช่้',
u'้่': u'้็',
u'้่ไฝ': u'้่ไฝ',
u'้่ๅ': u'้่ๅ',
u'้่ๆธ': u'้่ๆธ',
u'้่็จฑ': u'้่็จฑ',
u'้่่': u'้่่',
u'้่่ฟฐ': u'้่่ฟฐ',
u'้่้': u'้่้',
u'้่': u'้็',
u'้่ไฝ': u'้่ไฝ',
u'้่ๅ': u'้่ๅ',
u'้่ๆธ': u'้่ๆธ',
u'้่็จฑ': u'้่็จฑ',
u'้่่': u'้่่',
u'้่่ฟฐ': u'้่่ฟฐ',
u'้่้': u'้่้',
u'้่': u'้็',
u'้่ไฝ': u'้่ไฝ',
u'้่ๅ': u'้่ๅ',
u'้่ๆธ': u'้่ๆธ',
u'้่็จฑ': u'้่็จฑ',
u'้่่': u'้่่',
u'้่่ฟฐ': u'้่่ฟฐ',
u'้่้': u'้่้',
u'้่': u'้็',
u'้่ไฝ': u'้่ไฝ',
u'้่ๅ': u'้่ๅ',
u'้่ๆธ': u'้่ๆธ',
u'้่็จฑ': u'้่็จฑ',
u'้่่': u'้่่',
u'้่่ฟฐ': u'้่่ฟฐ',
u'้่้': u'้่้',
u'่ไธ่': u'้ปไธ็',
u'่ๅพ่': u'้ปๅพ็',
u'่่': u'้ป็',
u'ไบๅกๆ็ถ': u'้ฟๅกๆ็',
u'้ฟๆไผฏ่ฏๅๅคงๅฌๅ': u'้ฟๆไผฏ่ฏๅ้้ทๅ',
u'้่': u'้็',
u'้่ไฝ': u'้่ไฝ',
u'้่ๅ': u'้่ๅ',
u'้่ๆธ': u'้่ๆธ',
u'้่็จฑ': u'้่็จฑ',
u'้่่': u'้่่',
u'้่่ฟฐ': u'้่่ฟฐ',
u'้่้': u'้่้',
u'้่': u'้็',
u'้่ไฝ': u'้่ไฝ',
u'้่ๅ': u'้่ๅ',
u'้่ๆธ': u'้่ๆธ',
u'้่็จฑ': u'้่็จฑ',
u'้่่': u'้่่',
u'้่่ฟฐ': u'้่่ฟฐ',
u'้่้': u'้่้',
u'้ช่': u'้ช็',
u'้ช่ไฝ': u'้ช่ไฝ',
u'้ช่ๅ': u'้ช่ๅ',
u'้ช่ๆธ': u'้ช่ๆธ',
u'้ช่็จฑ': u'้ช่็จฑ',
u'้ช่่': u'้ช่่',
u'้ช่่ฟฐ': u'้ช่่ฟฐ',
u'้ช่้': u'้ช่้',
u'้่': u'้็',
u'้่ไฝ': u'้่ไฝ',
u'้่ๅ': u'้่ๅ',
u'้่ๆธ': u'้่ๆธ',
u'้่็จฑ': u'้่็จฑ',
u'้่่': u'้่่',
u'้่่ฟฐ': u'้่่ฟฐ',
u'้่้': u'้่้',
u'้จ่': u'้จ็',
u'้จ่ไฝ': u'้จ่ไฝ',
u'้จ่ๅ': u'้จ่ๅ',
u'้จ่ๆธ': u'้จ่ๆธ',
u'้จ่็จฑ': u'้จ่็จฑ',
u'้จ่่': u'้จ่่',
u'้จ่่ฟฐ': u'้จ่่ฟฐ',
u'้จ่้': u'้จ่้',
u'้่': u'้็',
u'้่ไฝ': u'้่ไฝ',
u'้่ๅ': u'้่ๅ',
u'้่ๆธ': u'้่ๆธ',
u'้่็จฑ': u'้่็จฑ',
u'้่่': u'้่่',
u'้่่ฟฐ': u'้่่ฟฐ',
u'้่้': u'้่้',
u'้่': u'้็',
u'้่ไฝ': u'้่ไฝ',
u'้่ๅ': u'้่ๅ',
u'้่ๆธ': u'้่ๆธ',
u'้่็จฑ': u'้่็จฑ',
u'้่่': u'้่่',
u'้่่ฟฐ': u'้่่ฟฐ',
u'้่้': u'้่้',
u'ๅฐๆทๆท': u'้ช็ณ',
u'้ช้็บข': u'้ช่ฃ็ด',
u'้ช่ฃก็ด': u'้ช่ฃ็ด',
u'้ช่ฃก่ป': u'้ช่ฃ่ป',
u'้ช้่ป': u'้ช่ฃ่ป',
u'้ ่': u'้ ็',
u'้ ่ไฝ': u'้ ่ไฝ',
u'้ ่ๅ': u'้ ่ๅ',
u'้ ่็จฑ': u'้ ่็จฑ',
u'้ ่็งฐ': u'้ ่็จฑ',
u'้ ่่': u'้ ่่',
u'้ ่่ฟฐ': u'้ ่่ฟฐ',
u'้ ่้': u'้ ่้',
u'้ ่ๅฝ': u'้ ่้',
u'้ฟ่': u'้ฟ็',
u'้ฟ่ไฝ': u'้ฟ่ไฝ',
u'้ฟ่ๅ': u'้ฟ่ๅ',
u'้ฟ่ๆธ': u'้ฟ่ๆธ',
u'้ฟ่็จฑ': u'้ฟ่็จฑ',
u'้ฟ่่': u'้ฟ่่',
u'้ฟ่่ฟฐ': u'้ฟ่่ฟฐ',
u'้ฟ่้': u'้ฟ่้',
u'้ ่': u'้ ็',
u'้ ่ไฝ': u'้ ่ไฝ',
u'้ ่ๅ': u'้ ่ๅ',
u'้ ่ๆธ': u'้ ่ๆธ',
u'้ ่็จฑ': u'้ ่็จฑ',
u'้ ่่': u'้ ่่',
u'้ ่่ฟฐ': u'้ ่่ฟฐ',
u'้ ่้': u'้ ่้',
u'้ ่': u'้ ็',
u'้ ่ไฝ': u'้ ่ไฝ',
u'้ ่ๅ': u'้ ่ๅ',
u'้ ่ๆธ': u'้ ่ๆธ',
u'้ ่็จฑ': u'้ ่็จฑ',
u'้ ่่': u'้ ่่',
u'้ ่่ฟฐ': u'้ ่่ฟฐ',
u'้ ่้': u'้ ่้',
u'้ ๅธ': u'้ ไฝ',
u'้ขๅธ': u'้ ไฝ',
u'้ ๅ่ฃก': u'้ ๅ่ฃ',
u'้ขๅ้': u'้ ๅ่ฃ',
u'้ ่': u'้ ็',
u'้ ่ไฝ': u'้ ่ไฝ',
u'้ ่ๅ': u'้ ่ๅ',
u'้ ่ๆธ': u'้ ่ๆธ',
u'้ ่็จฑ': u'้ ่็จฑ',
u'้ ่่': u'้ ่่',
u'้ ่่ฟฐ': u'้ ่่ฟฐ',
u'้ ่้': u'้ ่้',
u'้ฃ่': u'้ฃ็',
u'้ฃ่ไฝ': u'้ฃ่ไฝ',
u'้ฃ่ๅ': u'้ฃ่ๅ',
u'้ฃ่ๆธ': u'้ฃ่ๆธ',
u'้ฃ่็จฑ': u'้ฃ่็จฑ',
u'้ฃ่่': u'้ฃ่่',
u'้ฃ่่ฟฐ': u'้ฃ่่ฟฐ',
u'้ฃ่้': u'้ฃ่้',
u'้คจ่ฃก': u'้คจ่ฃ',
u'้ฆ้': u'้คจ่ฃ',
u'้ฆฌ็พๅฐๅคซ': u'้ฆฌ็พไปฃๅคซ',
u'้ฆฌๅฉๅฑๅๅ': u'้ฆฌ้ๅฑๅๅ',
u'ๅ่ฑ': u'้ฆฌ้ด่ฏ',
u'้ง่': u'้ง็',
u'้ง่ไฝ': u'้ง่ไฝ',
u'้ง่ๅ': u'้ง่ๅ',
u'้ง่ๆธ': u'้ง่ๆธ',
u'้ง่็จฑ': u'้ง่็จฑ',
u'้ง่่': u'้ง่่',
u'้ง่่ฟฐ': u'้ง่่ฟฐ',
u'้ง่้': u'้ง่้',
u'้จ่': u'้จ็',
u'้จ่ไฝ': u'้จ่ไฝ',
u'้จ่ๅ': u'้จ่ๅ',
u'้จ่ๆธ': u'้จ่ๆธ',
u'้จ่็จฑ': u'้จ่็จฑ',
u'้จ่่': u'้จ่่',
u'้จ่่ฟฐ': u'้จ่่ฟฐ',
u'้จ่้': u'้จ่้',
u'้จ่': u'้จ็',
u'้จ่ไฝ': u'้จ่ไฝ',
u'้จ่ๅ': u'้จ่ๅ',
u'้จ่ๆธ': u'้จ่ๆธ',
u'้จ่็จฑ': u'้จ่็จฑ',
u'้จ่่': u'้จ่่',
u'้จ่่ฟฐ': u'้จ่่ฟฐ',
u'้จ่้': u'้จ่้',
u'้ซ่': u'้ซ็',
u'้ซ่ไฝ': u'้ซ่ไฝ',
u'้ซ่ๅ': u'้ซ่ๅ',
u'้ซ่ๆธ': u'้ซ่ๆธ',
u'้ซ่็จฑ': u'้ซ่็จฑ',
u'้ซ่่': u'้ซ่่',
u'้ซ่่ฟฐ': u'้ซ่่ฟฐ',
u'้ซ่้': u'้ซ่้',
u'้ซญ่': u'้ซญ็',
u'้ซญ่ไฝ': u'้ซญ่ไฝ',
u'้ซญ่ๅ': u'้ซญ่ๅ',
u'้ซญ่ๆธ': u'้ซญ่ๆธ',
u'้ซญ่็จฑ': u'้ซญ่็จฑ',
u'้ซญ่่': u'้ซญ่่',
u'้ซญ่่ฟฐ': u'้ซญ่่ฟฐ',
u'้ซญ่้': u'้ซญ่้',
u'้ฌฅ่': u'้ฌฅ็',
u'้ฌฅ่ไฝ': u'้ฌฅ่ไฝ',
u'้ฌฅ่ๅ': u'้ฌฅ่ๅ',
u'้ฌฅ่ๆธ': u'้ฌฅ่ๆธ',
u'้ฌฅ่็จฑ': u'้ฌฅ่็จฑ',
u'้ฌฅ่่': u'้ฌฅ่่',
u'้ฌฅ่่ฟฐ': u'้ฌฅ่่ฟฐ',
u'้ฌฅ่้': u'้ฌฅ่้',
u'้บ่': u'้บ็',
u'้บ่ไฝ': u'้บ่ไฝ',
u'้บ่ๅ': u'้บ่ๅ',
u'้บ่ๆธ': u'้บ่ๆธ',
u'้บ่็จฑ': u'้บ่็จฑ',
u'้บ่่': u'้บ่่',
u'้บ่่ฟฐ': u'้บ่่ฟฐ',
u'้บ่้': u'้บ่้',
u'้ป่': u'้ป็',
u'้ป่ไฝ': u'้ป่ไฝ',
u'้ป่ๅ': u'้ป่ๅ',
u'้ป่ๆธ': u'้ป่ๆธ',
u'้ป่็จฑ': u'้ป่็จฑ',
u'้ป่่': u'้ป่่',
u'้ป่่ฟฐ': u'้ป่่ฟฐ',
u'้ป่้': u'้ป่้',
u'้ป่': u'้ป็',
u'้ป่ไฝ': u'้ป่ไฝ',
u'้ป่ๅ': u'้ป่ๅ',
u'้ป่ๆธ': u'้ป่ๆธ',
u'้ป่็จฑ': u'้ป่็จฑ',
u'้ป่่': u'้ป่่',
u'้ป่่ฟฐ': u'้ป่่ฟฐ',
u'้ป่้': u'้ป่้',
u'้ป่ฃก': u'้ป่ฃ',
u'็น้': u'้ป่ฃ',
})
AdvancedLangConv | /AdvancedLangConv-0.01.tar.gz/AdvancedLangConv-0.01/langconv/defaulttables/zh_hk.py | zh_hk.py
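# The module below derives its table from a shared base: it imports the
# zh_hant mapping, copies it, and then overwrites selected entries with
# region-specific phrase pairs via dict.update(). (This description is
# read directly from the code that follows, not from external docs.)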
from zh_hant import convtable as oldtable
convtable = oldtable.copy()
convtable.update({
u'โ': u'ใ',
u'โ': u'ใ',
u'โ': u'ใ',
u'โ': u'ใ',
u'ไธๆฅต็ฎก': u'ไธๆฅต้ซ',
u'ไธๆ็ฎก': u'ไธๆฅต้ซ',
u'ไธ็่ฃ': u'ไธ็่ฃก',
u'ไธญๆ่ฃ': u'ไธญๆ่ฃก',
u'ไธฒ่ก': u'ไธฒๅ',
u'ไธฒๅๅ ้ๅจ': u'ไธฒๅๅ ้ๅจ',
u'ไปฅๅคช็ฝ': u'ไนๅคช็ถฒ',
u'ๅฅถ้ช': u'ไนณ้ช',
u'ไบๆฅต็ฎก': u'ไบๆฅต้ซ',
u'ไบๆ็ฎก': u'ไบๆฅต้ซ',
u'ไบคไบๅผ': u'ไบๅๅผ',
u'้ฟๅกๆ็': u'ไบๅกๆ็ถ',
u'ไบบๅทฅๆบ่ฝ': u'ไบบๅทฅๆบๆง',
u'ๆฅๅฃ': u'ไป้ข',
u'ไปปๆ็ๅก': u'ไปปๆ็ๅก',
u'ไปปๆ็ๅ': u'ไปปๆ็ๅก',
u'ๆๅกๅจ': u'ไผบๆๅจ',
u'ๅญ็ฏ': u'ไฝๅ็ต',
u'ๅญ่': u'ไฝๅ็ต',
u'ไฝๅ่ฃ': u'ไฝๅ่ฃก',
u'ไผๅ็บง': u'ๅชๅ้ ๅบ',
u'ๅๅ': u'ๅๅถ',
u'ๅๅถ': u'ๅๅถ',
u'ๅ็': u'ๅ็ข',
u'ๅ้ฉฑ': u'ๅ็ขๆฉ',
u'ๅ็พๅฐไบ': u'ๅ็พๅ่ฅฟไบ',
u'ๅ็ฝๅฐไบ': u'ๅ็พๅ่ฅฟไบ',
u'ๅจ่ง': u'ๅจๅฝข',
u'ๅฌๅคฉ่ฃ': u'ๅฌๅคฉ่ฃก',
u'ๅฌๆฅ่ฃ': u'ๅฌๆฅ่ฃก',
u'ๅ่': u'ๅท็ค',
u'ๅท่': u'ๅท็ค',
u'ๅถๅจ': u'ๅถๅจ',
u'ๅๅจ': u'ๅถๅจ',
u'ๅถๅพ': u'ๅถๅพ',
u'ๅๅพ': u'ๅถๅพ',
u'ๅๆ': u'ๅถๆ',
u'ๅถๆ': u'ๅถๆ',
u'ๅๆก': u'ๅถๆก',
u'ๅถๆก': u'ๅถๆก',
u'ๅถๆฎ': u'ๅถๆฎ',
u'ๅๆฎ': u'ๅถๆฎ',
u'ๅถๆฎ': u'ๅถๆฎ',
u'ๅๆฎบ': u'ๅถๆฎบ',
u'ๅถๆ': u'ๅถๆฎบ',
u'ๅถๆฎบ': u'ๅถๆฎบ',
u'ๅๅธๅผ': u'ๅๆฃๅผ',
u'ๆๅฐ': u'ๅๅฐ',
u'ๅๆฏๆฆๅฃซ็ป': u'ๅๆฏๆฆๆฏ็ป',
u'ๅชๅฝฉ': u'ๅช็ถต',
u'ๅ ่ฌ': u'ๅ ๅฝญ',
u'ๆป็บฟ': u'ๅฏๆตๆ',
u'ๅฑๅ็ฝ': u'ๅๅ็ถฒ',
u'็น็ซๅฐผ้ๅๅคๅทดๅฅ': u'ๅ้้ๆ่ฒๅฅ',
u'็น็ซๅฐผ่พพๅๆๅทดๅฅ': u'ๅ้้ๆ่ฒๅฅ',
u'ๅ่ง': u'ๅๅฝข',
u'ๅกๅก็พ': u'ๅก้',
u'ๅกๅกๅฐ': u'ๅก้',
u'ๆๅฐๆฉ': u'ๅฐ่กจๆฉ',
u'ๆๅฐๆบ': u'ๅฐ่กจๆฉ',
u'ๅ็ซ็น้ไบ': u'ๅๅฉๅไบ',
u'ๅ็ซ็น้ไบ': u'ๅๅฉๅไบ',
u'ๅ็ๅคๅฐ': u'ๅ็ๅค',
u'ๅ็ๅค็พ': u'ๅ็ๅค',
u'ๆฏๅจๅฃซๅฐ': u'ๅฒ็ฆๆฟ่ญ',
u'ๆฏๅจๅฃซ่ญ': u'ๅฒ็ฆๆฟ่ญ',
u'ๅๅธๆ': u'ๅๅธๅฐ',
u'ๅๅธๅ ค': u'ๅๅธๅฐ',
u'ๅบ้ๅทดๆฏ': u'ๅ้ๅทดๆฏ',
u'ๅ็ฆ็ง': u'ๅ็ฆ้ญฏ',
u'ๅพ็ฆๅข': u'ๅ็ฆ้ญฏ',
u'ๅ่จๅๆฏๅฆ': u'ๅ่ฉๅ',
u'ๅฅๆฏ้้ปๅ ': u'ๅฅๆฏๅคง้ปๅ ',
u'ๅฅๆฏ่พพ้ปๅ ': u'ๅฅๆฏๅคง้ปๅ ',
u'ๆ ผ้ญฏๅไบ': u'ๅฌๆฒปไบ',
u'ๆ ผ้ฒๅไบ': u'ๅฌๆฒปไบ',
u'ไฝๆฒปไบ': u'ๅฌๆฒปไบ',
u'ไฝๆฒปไบ': u'ๅฌๆฒปไบ',
u'ๅด่ฃ': u'ๅด่ฃก',
u'ๅๅบๆผๆฏๅฆ': u'ๅๅบซๆผ',
u'่ฏไป': u'ๅ่ฑ',
u'ๅ่ฑ็ถฒ': u'ๅ่ฑ็ถฒ',
u'ๅ่ฑ็ฝ': u'ๅ่ฑ็ถฒ',
u'ๅฆๆกๅฐผไบ': u'ๅฆๅฐๅฐผไบ',
u'ๅฆๆกๅฐผไบ': u'ๅฆๅฐๅฐผไบ',
u'็ซฏๅฃ': u'ๅ ',
u'ๅกๅๅ
ๆฏๅฆ': u'ๅกๅๅ
',
u'ๅก่ๅฐ': u'ๅกๅธญ็พ',
u'ๅก่็พ': u'ๅกๅธญ็พ',
u'ๅกๆตฆ่ทฏๆฏ': u'ๅกๆฎๅๆฏ',
u'ๅคๅคฉ่ฃ': u'ๅคๅคฉ่ฃก',
u'ๅคๆฅ่ฃ': u'ๅคๆฅ่ฃก',
u'ๅคๆๅฐผๅ ๅฑๅๅ': u'ๅคๆๅฐผๅ ',
u'ๅค็ฑณๅฐผๅ ๅฑๅๅฝ': u'ๅคๆๅฐผๅ ',
u'ๅค็ฑณๅฐผๅ ๅฑๅๅ': u'ๅคๆๅฐผๅ ',
u'ๅค็ฑณๅฐผๅ ๅฝ': u'ๅค็ฑณๅฐผๅ',
u'ๅคๆๅฐผๅ ๅ': u'ๅค็ฑณๅฐผๅ',
u'็ฉฟๆขญๆฉ': u'ๅคช็ฉบๆขญ',
u'่ชๅคฉ้ฃๆบ': u'ๅคช็ฉบๆขญ',
u'ๅฐผๆฅๅฉไบ': u'ๅฅๅๅฉไบ',
u'ๅฐผๆฅๅฉไบ': u'ๅฅๅๅฉไบ',
u'ๅญ็ฌฆ': u'ๅญๅ',
u'ๅญๅท': u'ๅญๅๅคงๅฐ',
u'ๅญๅบ': u'ๅญๅๆช',
u'ๅญ็ฌฆ้': u'ๅญ็ฌฆ้',
u'ๅญ็': u'ๅญๆช',
u'ๅญธ่ฃ': u'ๅญธ่ฃก',
u'ๅฎๆ็ๅๅทดๅธ้': u'ๅฎๅฐๅกๅๅทดๅธ้',
u'ๅฎๆ็ๅๅทดๅธ่พพ': u'ๅฎๅฐๅกๅๅทดๅธ้',
u'ๅฎๅ': u'ๅฎๅ',
u'ๆดช้ฝๆๆฏ': u'ๅฎ้ฝๆๆฏ',
u'ๅฏปๅ': u'ๅฎๅ',
u'ๅฏๅ่ฃ': u'ๅฏๅ่ฃก',
u'ๅฎฝๅธฆ': u'ๅฏฌ้ ป',
u'่ๆพ': u'ๅฏฎๅ',
u'่ๆ': u'ๅฏฎๅ',
u'ๆ้จ': u'ๅฐ้',
u'ๅฐ่ผฏ่ฃ': u'ๅฐ่ผฏ่ฃก',
u'่ดๆฏไบ': u'ๅฐๆฏไบ',
u'่ตๆฏไบ': u'ๅฐๆฏไบ',
u'ๅฐผๆฅ็พ': u'ๅฐผๆฅ',
u'ๅฐผๆฅๅฐ': u'ๅฐผๆฅ',
u'ๅฑฑๆด่ฃ': u'ๅฑฑๆด่ฃก',
u'ๅทดๅธไบๆฐ็ฟๅงไบ': u'ๅทดๅธไบ็ดๅนพๅงไบ',
u'ๅทดๅธไบๆฐๅ ๅไบ': u'ๅทดๅธไบ็ดๅนพๅงไบ',
u'ๅทดๅทดๅคๆฏ': u'ๅทด่ฒๅค',
u'ๅธๅบ็บณๆณ็ดข': u'ๅธๅ็ดๆณ็ดข',
u'ๅธๅบ็ดๆณ็ดข': u'ๅธๅ็ดๆณ็ดข',
u'ๅธไป': u'ๅธๅธ',
u'ๅธๆฎ': u'ๅธๅธ',
u'ๅธๅณ': u'ๅธ็',
u'ไพ็จ': u'ๅธธๅผ',
u'ๅนณๆฒปไนไนฑ': u'ๅนณๆฒปไนไบ',
u'ๅนณๆฒปไนไบ': u'ๅนณๆฒปไนไบ',
u'ๅนดไปฃ่ฃ': u'ๅนดไปฃ่ฃก',
u'ๅ ๅไบๆฏ็ป': u'ๅนพๅงไบๆฏ็ดข',
u'ๅนพๅงไบๆฏ็ดน': u'ๅนพๅงไบๆฏ็ดข',
u'ๅฝฉๅธฆ': u'ๅฝฉๅธถ',
u'ๅฝฉๆ': u'ๅฝฉๆ',
u'ๅฝฉๆฅผ': u'ๅฝฉๆจ',
u'ๅฝฉ็ๆฅผ': u'ๅฝฉ็ๆจ',
u'ๅพฉ่': u'ๅพฉ็ฆ',
u'ๅค่': u'ๅพฉ็ฆ',
u'ๅฟ่ฃ': u'ๅฟ่ฃก',
u'ๅฟซ้ชๅญๅจๅจ': u'ๅฟซ้่จๆถ้ซ',
u'้ชๅญ': u'ๅฟซ้่จๆถ้ซ',
u'ๆณ่ฑก': u'ๆณๅ',
u'ไผ ๆ': u'ๆๆธฌ',
u'ไน ็จ': u'ๆ
ฃ็จ',
u'ๆๅฝฉๅจฑไบฒ': u'ๆฒ็ถตๅจ่ฆช',
u'ๆฒ่ฃ': u'ๆฒ่ฃก',
u'ๆ็ต็ญ': u'ๆ้ป็ญ',
u'ๆ็ต': u'ๆ้ป็ญ',
u'ๆฌๅท': u'ๆฌๅผง',
u'ๆฟ็ ดไพ': u'ๆฟ็ ดๅด',
u'ๆฟ็ ดไป': u'ๆฟ็ ดๅด',
u'็ฉๆถ': u'ๆท่ฑน',
u'ๆซ็ไปช': u'ๆ็ๅจ',
u'ๆ้ฉ': u'ๆ้ค',
u'ๆ้': u'ๆ้ค',
u'ๆงไปถ': u'ๆงๅถ้',
u'ๅฐ็': u'ๆ็',
u'ๆก็': u'ๆ็',
u'ไพฟๆบๅผ': u'ๆๅธถๅ',
u'ๆ
ไบ่ฃ': u'ๆ
ไบ่ฃก',
u'่ฐๅถ่งฃ่ฐๅจ': u'ๆธๆๆฉ',
u'่ชฟๅถ่งฃ่ชฟๅจ': u'ๆธๆๆฉ',
u'ๆฏๆดๆๅฐผไบ': u'ๆฏๆด็ถญๅฐผไบ',
u'ๆฏๆดๆๅฐผไบ': u'ๆฏๆด็ถญๅฐผไบ',
u'ๆฐ็บชๅ
': u'ๆฐ็ดๅ
',
u'ๆฐ็ดๅ
': u'ๆฐ็ดๅ
',
u'ๆฅๅญ่ฃ': u'ๆฅๅญ่ฃก',
u'ๆฅๅ่ฃ': u'ๆฅๅ่ฃก',
u'ๆฅๅคฉ่ฃ': u'ๆฅๅคฉ่ฃก',
u'ๆฅๆฅ่ฃ': u'ๆฅๆฅ่ฃก',
u'ๆ้่ฃ': u'ๆ้่ฃก',
u'่ฏ็': u'ๆถๅ',
u'ๆๅ่ฃ': u'ๆๅ่ฃก',
u'ๆๅญ่ฃ': u'ๆๅญ่ฃก',
u'ไนๅพ': u'ๆฅๅพท',
u'ๅ
ๆ้ ': u'ๆฏๆ้ ',
u'ๅ
ๆ้กฟ': u'ๆฏๆ้ ',
u'ๆ ผๆ็ด้': u'ๆ ผ็้ฃ้',
u'ๆ ผๆ็บณ่พพ': u'ๆ ผ็้ฃ้',
u'ๅก้ซ': u'ๆขต่ฐท',
u'ๆฃฎๆ่ฃ': u'ๆฃฎๆ่ฃก',
u'ๆฃบๆ่ฃ': u'ๆฃบๆ่ฃก',
u'ๆฆด่ฎ': u'ๆฆดๆงค',
u'ๆฆด่ฒ': u'ๆฆดๆงค',
u'ไปฟ็': u'ๆจกๆฌ',
u'ๆฏ้่ฃๆฏ': u'ๆจก้่ฅฟๆฏ',
u'ๆฏ้ๆฑๆฏ': u'ๆจก้่ฅฟๆฏ',
u'ๆฉๆขฐไบบ': u'ๆฉๅจไบบ',
u'ๆบๅจไบบ': u'ๆฉๅจไบบ',
u'ๅญๆฎต': u'ๆฌไฝ',
u'ๆญทๅฒ่ฃ': u'ๆญทๅฒ่ฃก',
u'ๅ
้ณ': u'ๆฏ้ณ',
u'ๆฐธๅ': u'ๆฐธๆ',
u'ๆ่ฑ': u'ๆฑถ่',
u'ๆฒ็น้ฟๆไผฏ': u'ๆฒ็ๅฐ้ฟๆไผฏ',
u'ๆฒๅฐ้ฟๆไผฏ': u'ๆฒ็ๅฐ้ฟๆไผฏ',
u'ๆณขๆฏๅฐผไบ้ปๅกๅฅ็ถญ้ฃ': u'ๆณขๅฃซๅฐผไบ่ตซๅกๅฅ็ถญ็ด',
u'ๆณขๆฏๅฐผไบๅ้ปๅกๅฅ็ปด้ฃ': u'ๆณขๅฃซๅฐผไบ่ตซๅกๅฅ็ถญ็ด',
u'ๅ่จ็ฆ็บณ': u'ๆณขๆญ้ฃ',
u'ๅ่จ็ฆ็ด': u'ๆณขๆญ้ฃ',
u'ไพฏ่ตๅ ': u'ๆตท็',
u'ไพฏ่ณฝๅ ': u'ๆตท็',
u'ๆทฑๆทต่ฃ': u'ๆทฑๆทต่ฃก',
u'ๅ
ๆ ': u'ๆธธๆจ',
u'้ผ ๆ ': u'ๆป้ผ ',
u'็ฎๆณ': u'ๆผ็ฎๆณ',
u'ไนๅนๅซๅๆฏๅฆ': u'็่ฒๅฅๅ',
u'่ฏ็ป': u'็่ช',
u'็่ฃ': u'็่ฃก',
u'ๅกๆๅฉๆ': u'็
ๅญๅฑฑ',
u'ๅฑๅฐ้ฉฌๆ': u'็ๅฐ้ฆฌๆ',
u'ๅฑๅฐ้ฆฌๆ': u'็ๅฐ้ฆฌๆ',
u'ๅๆฏไบ': u'็ๆฏไบ',
u'ๅฒกๆฏไบ': u'็ๆฏไบ',
u'็ๅ
': u'็ๅถ',
u'็ๅถ': u'็ๅถ',
u'็พ็ง่ฃ': u'็พ็ง่ฃก',
u'็ฎ่ฃ้ฝ็ง': u'็ฎ่ฃก้ฝ็ง',
u'็งๆบ้': u'็งๅฎ้',
u'ๅขๆบ่พพ': u'็งๅฎ้',
u'็ๅถ': u'็ๅถ',
u'็ๅ
': u'็ๅถ',
u'็ผ็่ฃ': u'็ผ็่ฃก',
u'็ก
็': u'็ฝ็',
u'็ก
่ฐท': u'็ฝ่ฐท',
u'็กฌ็': u'็กฌ็ข',
u'็กฌไปถ': u'็กฌ้ซ',
u'็็': u'็ข็',
u'็ฃ็': u'็ฃ็ข',
u'็ฃ้': u'็ฃ่ป',
u'็งๅ่ฃ': u'็งๅ่ฃก',
u'็งๅคฉ่ฃ': u'็งๅคฉ่ฃก',
u'็งๆฅ่ฃ': u'็งๆฅ่ฃก',
u'็จๆง': u'็จๅผๆงๅถ',
u'็ชๅฐผๆฏ': u'็ชๅฐผ่ฅฟไบ',
u'ๅฐพๆณจ': u'็ซ ็ฏ้่จป',
u'่นฆๆ่ทณ': u'็ฌจ่ฑฌ่ทณ',
u'็ป็ดง่ทณ': u'็ฌจ่ฑฌ่ทณ',
u'็ญไบ': u'็ญๆผ',
u'็ญ่จ': u'็ฐก่จ',
u'็ญไฟก': u'็ฐก่จ',
u'็ณปๅ่ฃ': u'็ณปๅ่ฃก',
u'ๆฐ่ฅฟ่ญ': u'็ด่ฅฟ่ญ',
u'ๆฐ่ฅฟๅ
ฐ': u'็ด่ฅฟ่ญ',
u'ๆ็ฝ้จ็พคๅฒ': u'็ดข็พ
้็พคๅณถ',
u'ๆ็พ
้็พคๅณถ': u'็ดข็พ
้็พคๅณถ',
u'็ดข้ฆฌ้': u'็ดข้ฆฌๅฉไบ',
u'็ดข้ฉฌ้': u'็ดข้ฆฌๅฉไบ',
u'็ปๅฝฉ': u'็ต็ถต',
u'ไฝๅพ่ง': u'็ถญๅพท่ง',
u'็ถฒ็ตก': u'็ถฒ่ทฏ',
u'็ฝ็ป': u'็ถฒ่ทฏ',
u'ไบ่ฏ็ถฒ': u'็ถฒ้็ถฒ่ทฏ',
u'ๅ ็น็ฝ': u'็ถฒ้็ถฒ่ทฏ',
u'ๅฝฉ็': u'็ถต็',
u'ๅฝฉ็ปธ': u'็ถต็ถข',
u'ๅฝฉ็บฟ': u'็ถต็ท',
u'ๅฝฉ่น': u'็ถต่น',
u'ๅฝฉ่กฃ': u'็ถต่กฃ',
u'็ผๅถ': u'็ทๅถ',
u'็ทๅ
': u'็ทๅถ',
u'็ทๅถ': u'็ทๅถ',
u'ๆๅคงๅฉ': u'็พฉๅคงๅฉ',
u'่ๅญๅท': u'่ๅญ่',
u'ๅฃๅบ่จๅๅฐผ็ปดๆฏ': u'่ๅ
้ๆฏๅค็ฆๅๅฐผ็ถญๆฏ',
u'่ๅๆฏ็ดๅๆฏ': u'่ๅ
้ๆฏๅค็ฆๅๅฐผ็ถญๆฏ',
u'่ๆๆฃฎ็นๅๆ ผๆ็ดไธๆฏ': u'่ๆๆฃฎๅๆ ผ็้ฃไธ',
u'ๅฃๆๆฃฎ็นๅๆ ผๆ็บณไธๆฏ': u'่ๆๆฃฎๅๆ ผ็้ฃไธ',
u'ๅฃๅข่ฅฟไบ': u'่้ฒ่ฅฟไบ',
u'่็ง่ฅฟไบ': u'่้ฒ่ฅฟไบ',
u'ๅฃ้ฉฌๅ่ฏบ': u'่้ฆฌๅฉ่ซพ',
u'่้ฆฌๅ่ซพ': u'่้ฆฌๅฉ่ซพ',
u'่่ฃ': u'่่ฃก',
u'่ฏๅฐผไบ': u'่ฏไบ',
u'่ฏ้': u'่ฏไบ',
u'ไปปๆ็': u'่ช็ฑ็',
u'่ชๅคฉๅคงๅญฆ': u'่ชๅคฉๅคงๅญธ',
u'่ฆ่ฃ': u'่ฆ่ฃก',
u'ๆฏ้ๅกๅฐผไบ': u'่
ๅฉๅกๅฐผไบ',
u'ๆฏ้ๅกๅฐผไบ': u'่
ๅฉๅกๅฐผไบ',
u'่ซๆกๆฏๅ
': u'่ซไธๆฏๅ
',
u'ไธๅ': u'่ฌๆ',
u'็ฆๅช้ฟๅพ': u'่ฌ้ฃๆ',
u'็ฆๅช้ฟๅ': u'่ฌ้ฃๆ',
u'ไน้': u'่้',
u'ไน้จ': u'่้',
u'็': u'่',
u'็งๆฉ็พ
': u'่ๆฉ',
u'็งๆฉ็ฝ': u'่ๆฉ',
u'ๅธ้่ฟช': u'่ฒ้ๅฐ',
u'ๅญไบ้ฃ': u'่ไบ้ฃ',
u'ๅญไบ้ฃ': u'่ไบ้ฃ',
u'็ซ้
็ๅธฝ': u'่็ซ้',
u'่้ๅ': u'่ๅฉๅ',
u'่กๅถ': u'่กๅถ',
u'่กๅ
': u'่กๅถ',
u'่กๅถๅ': u'่กๅถๅพ',
u'่กๅ
ๅพ': u'่กๅถๅพ',
u'่กๅถๅพ': u'่กๅถๅพ',
u'ๆตๅ้ป่ฉฑ': u'่กๅ้ป่ฉฑ',
u'็งปๅจ็ต่ฏ': u'่กๅ้ป่ฉฑ',
u'่ก็จๆงๅถ': u'่ก็จๆงๅถ',
u'่ก': u'่ก',
u'ๅซ็': u'่ก็',
u'่ก็': u'่ก็',
u'ๅๅกไฟๆฏไบ': u'่กฃ็ดขๆฏไบ',
u'ๅๅกไฟๆฏไบ': u'่กฃ็ดขๆฏไบ',
u'่ฃๅพๅค้ฃ': u'่ฃกๅพๅค้ฃ',
u'่ฃ้ข': u'่ฃก้ข',
u'ๅ่พจ็': u'่งฃๆๅบฆ',
u'่ฏ็ ': u'่งฃ็ขผ',
u'ๅบ็ง่ฝฆ': u'่จ็จ่ป',
u'ๆ้': u'่จฑๅฏๆฌ',
u'็้ฒ': u'่ซพ้ญฏ',
u'็้ญฏ': u'่ซพ้ญฏ',
u'ๅ้': u'่ฎๆธ',
u'็ง็น่ฟช็ฆ': u'่ฑก็ๆตทๅฒธ',
u'่ฒๅฏง': u'่ฒๅ',
u'่ดๅฎ': u'่ฒๅ',
u'ไผฏๅฉ่ฒ': u'่ฒ้ๆฏ',
u'ไผฏๅฉๅ
น': u'่ฒ้ๆฏ',
u'่ฒทๅ': u'่ฒทๅถ',
u'ไนฐๅถ': u'่ฒทๅถ',
u'่ฒทๅถ': u'่ฒทๅถ',
u'ๆฐๆฎๅบ': u'่ณๆๅบซ',
u'ไฟกๆฏ่ฎบ': u'่ณ่จ็่ซ',
u'ๅฅ้ฉฐ': u'่ณๅฃซ',
u'ๅนณๆฒป': u'่ณๅฃซ',
u'ๅฉๆฏ้ไบ': u'่ณดๆฏ็ไบ',
u'ๅฉๆฏ้ไบ': u'่ณดๆฏ็ไบ',
u'่็ดขๆ': u'่ณด็ดขๆ',
u'่ฑ็ดขๆ': u'่ณด็ดขๆ',
u'่ฝฏ้ฉฑ': u'่ป็ขๆฉ',
u'่ปไปถ': u'่ป้ซ',
u'่ฝฏไปถ': u'่ป้ซ',
u'ๅ ่ฝฝ': u'่ผๅ
ฅ',
u'ๆดฅๅทดๅธ้ฆ': u'่พๅทดๅจ',
u'ๆดฅๅทดๅธ้': u'่พๅทดๅจ',
u'่ฏๆฑ': u'่พญๅฝ',
u'ๅ ็บณ': u'่ฟฆ็ด',
u'ๅ ็ด': u'่ฟฆ็ด',
u'่ฟฝๅถ': u'่ฟฝๅถ',
u'่ฟฝๅ
': u'่ฟฝๅถ',
u'้่ฃ': u'้่ฃก',
u'ไฟก้': u'้้',
u'้ๅถ้ฌฅ็ ': u'้ๅถ้ฌฅ็ ',
u'้ๅ
้ฌฅ็ ': u'้ๅถ้ฌฅ็ ',
u'้ๅถๆ็ ': u'้ๅถ้ฌฅ็ ',
u'ๅณ้ฃ้บต': u'้้ฃ้บต',
u'ๆนไพฟ้ข': u'้้ฃ้บต',
u'ๅฟซ้้ข': u'้้ฃ้บต',
u'่ฟๅญๅท': u'้ฃๅญ่',
u'่ฟๅถ': u'้ฒไฝ',
u'ๅ
ฅ็': u'้ฒ็',
u'็ฎๅญ': u'้็ฎๅ
',
u'้ ็จๆงๅถ': u'้ ็จๆงๅถ',
u'่ฟ็จๆงๅถ': u'้ ็จๆงๅถ',
u'ๆบซ็ดๅ่ฌ': u'้ฃๆ',
u'้ซ้ข่ฃ': u'้ซ้ข่ฃก',
u'้ฐ': u'้ฏ',
u'ๅทจๅ': u'้่ณ',
u'้ฉ': u'้ค',
u'้': u'้ค',
u'้ฉๅฟๆ่ง': u'้คๅฟ้ฌฅ่ง',
u'้ๅฟ้ฌฅ่ง': u'้คๅฟ้ฌฅ่ง',
u'ๅไฟๆค': u'้ฒๅฏซ',
u'้ฟๆไผฏ่ๅ้
้ฟๅฝ': u'้ฟๆไผฏ่ฏๅๅคงๅ
ฌๅ',
u'้ฟๆไผฏ่ฏๅ้
้ทๅ': u'้ฟๆไผฏ่ฏๅๅคงๅ
ฌๅ',
u'ๅชๅฃฐ': u'้่จ',
u'่ฑๆบ': u'้ข็ท',
u'้ช่ฃ็ด
': u'้ช่ฃก็ด
',
u'้ช่ฃ่ป': u'้ช่ฃก่ป',
u'้ช้้พ': u'้ช้ต้พ',
u'้้็ด ': u'้้ปด็ด ',
u'ๅผๆญฅ': u'้ๅๆญฅ',
u'ๅฃฐๅก': u'้ณๆๅก',
u'็ผบ็': u'้ ่จญ',
u'้ขๅธ': u'้ ๅธ',
u'้ ไฝ': u'้ ๅธ',
u'้ ๅ่ฃ': u'้ ๅ่ฃก',
u'ๅคด็': u'้ ญๆง',
u'็ฒๅ
ฅ็': u'้ก้ฒ็',
u'้คจ่ฃ': u'้คจ่ฃก',
u'้ฉฌ้ๅ
ฑๅๅฝ': u'้ฆฌๅฉๅ
ฑๅๅ',
u'้ฆฌ้ๅ
ฑๅๅ': u'้ฆฌๅฉๅ
ฑๅๅ',
u'้ฉฌ่ณไป': u'้ฆฌ็พไป',
u'้ฉฌๅฐไปฃๅคซ': u'้ฆฌ็พๅฐๅคซ',
u'้ฆฌ็พไปฃๅคซ': u'้ฆฌ็พๅฐๅคซ',
u'่ฌไบๅพ': u'้ฆฌ่ช้',
u'็ๅฎๅจ': u'้ปๅฎๅจ',
u'ๆดๅฎๅจ': u'้ปๅฎๅจ',
u'้ป่ฃ': u'้ป่ฃก',
u'ไฝๅพ': u'้ป้ฃๅ',
}) | AdvancedLangConv | /AdvancedLangConv-0.01.tar.gz/AdvancedLangConv-0.01/langconv/defaulttables/zh_tw.py | zh_tw.py |
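# --- Illustrative sketch (assumptions labelled; not part of the package) --
# Each table above maps a source phrase to its target phrase. A converter
# over such a table plausibly scans the input and, at every position,
# replaces the longest key that matches. The helper below is a minimal
# sketch of that idea; the name `convert` and the greedy longest-match
# strategy are illustrative assumptions, not this package's actual API.

def convert(text, table):
    # The longest key bounds how far ahead we ever need to look.
    max_len = max(map(len, table)) if table else 0
    out = []
    i = 0
    while i < len(text):
        # Try the longest candidate substring first, shrinking down to 1.
        for length in range(min(max_len, len(text) - i), 0, -1):
            chunk = text[i:i + length]
            if chunk in table:
                out.append(table[chunk])   # replace the matched phrase
                i += length
                break
        else:
            out.append(text[i])            # no key matched: copy through
            i += 1
    return ''.join(out)

# Example: convert(u'...', convtable) would rewrite any phrase appearing
# as a key in convtable while leaving all other text untouched.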
convtable = {
u'ใณ': u'ใ',
u'ใ': u'๐ชจ',
u'ใ ': u'ใ',
u'ใฉ': u'ใจซ',
u'ไฌ': u'๐ซ',
u'ไท': u'ไถ',
u'ไ': u'ไบ',
u'ไป': u'ไพ',
u'ไ': u'๐ฆ',
u'ไผ': u'ไ',
u'ไช': u'๐ฉผ',
u'ไช': u'๐ฉ',
u'ไช': u'๐ฉฟ',
u'ไซด': u'๐ฉ',
u'ไฌ': u'๐ฉฎ',
u'ไฌ': u'๐ฉฏ',
u'ไญ': u'๐ฉ ',
u'ไญ': u'๐ฉ ',
u'ไญฟ': u'๐ฉงญ',
u'ไฎ': u'๐ฉงฐ',
u'ไฎ': u'๐ฉจ',
u'ไฎ ': u'๐ฉงฟ',
u'ไฎณ': u'๐ฉจ',
u'ไฎพ': u'๐ฉงช',
u'ไฏ': u'ไฏ',
u'ไฐพ': u'้ฒ',
u'ไฑ': u'๐ฉพ',
u'ไฑฌ': u'๐ฉพ',
u'ไฑฐ': u'๐ฉพ',
u'ไฑท': u'ไฒฃ',
u'ไฑฝ': u'ไฒ',
u'ไฒ': u'้ณ',
u'ไฒฐ': u'๐ช',
u'ไดฌ': u'๐ช',
u'ไดด': u'๐ช',
u'ไธ': u'ไธข',
u'ไธฆ': u'ๅนถ',
u'ไนพ': u'ๅนฒ',
u'ไบ': u'ไนฑ',
u'ไบ': u'ไบ',
u'ไบ': u'ไบ',
u'ไฝ': u'ไผซ',
u'ไฝ': u'ๅธ',
u'ไฝ': u'ๅ ',
u'ไฝต': u'ๅนถ',
u'ไพ': u'ๆฅ',
u'ไพ': u'ไป',
u'ไพถ': u'ไพฃ',
u'ไฟ': u'ไฟฃ',
u'ไฟ': u'็ณป',
u'ไฟ': u'ไผฃ',
u'ไฟ ': u'ไพ ',
u'ๅ': u'ไผฅ',
u'ๅ': u'ไฟฉ',
u'ๅ': u'ไฟซ',
u'ๅ': u'ไป',
u'ๅ': u'ไธช',
u'ๅ': u'ไปฌ',
u'ๅ': u'ๅนธ',
u'ๅซ': u'ไผฆ',
u'ๅ': u'ไผ',
u'ๅด': u'ไพง',
u'ๅต': u'ไพฆ',
u'ๅฝ': u'ไผช',
u'ๅ': u'ๆฐ',
u'ๅ': u'ไผง',
u'ๅ': u'ไผ',
u'ๅ': u'ๅค',
u'ๅข': u'ๅฎถ',
u'ๅญ': u'ไฝฃ',
u'ๅฏ': u'ๅฌ',
u'ๅณ': u'ไผ ',
u'ๅด': u'ไผ',
u'ๅต': u'ๅบ',
u'ๅท': u'ไผค',
u'ๅพ': u'ๅพ',
u'ๅ': u'ๅป',
u'ๅ': u'ไป',
u'ๅ': u'ไฝฅ',
u'ๅ': u'ไพจ',
u'ๅ': u'ไป',
u'ๅ': u'ไผช',
u'ๅฅ': u'ไพฅ',
u'ๅจ': u'ๅพ',
u'ๅฑ': u'้',
u'ๅน': u'ไปท',
u'ๅ': u'ไปช',
u'ๅ': u'ไพฌ',
u'ๅ': u'ไบฟ',
u'ๅ': u'ไพฉ',
u'ๅ': u'ไฟญ',
u'ๅ': u'ๅง',
u'ๅ': u'ไฟฆ',
u'ๅ': u'ไพช',
u'ๅ': u'ๅฐฝ',
u'ๅ': u'ๅฟ',
u'ๅช': u'ไผ',
u'ๅฒ': u'ๅจ',
u'ๅท': u'ไฟช',
u'ๅธ': u'ใฉ',
u'ๅบ': u'ๅฉ',
u'ๅป': u'ๅฅ',
u'ๅผ': u'ไฟจ',
u'ๅ': u'ๅถ',
u'ๅ': u'ๅ',
u'ๅ': u'ๅฟ',
u'ๅ': u'ๅ',
u'ๅง': u'ๅ',
u'ๅฉ': u'ไธค',
u'ๅ': u'ๅ',
u'ๅช': u'ๅน',
u'ๅ': u'ๅ',
u'ๅ': u'ๅป',
u'ๅ': u'๐ช',
u'ๅ': u'ๅ',
u'ๅฑ': u'ๅฏ',
u'ๅฅ': u'ๅซ',
u'ๅช': u'ๅ ',
u'ๅ': u'ๅญ',
u'ๅ': u'ๅ',
u'ๅ': u'ๅ
',
u'ๅ': u'ๅน',
u'ๅ': u'ๅฌ',
u'ๅ': u'ๅ',
u'ๅ': u'ๅฅ',
u'ๅฎ': u'ๅ',
u'ๅด': u'ๅ',
u'ๅต': u'ๅ',
u'ๅท': u'้ฒ',
u'ๅ': u'ๅ',
u'ๅ': u'ๅง',
u'ๅ': u'ๅ',
u'ๅ': u'ๅฝ',
u'ๅ': u'ๅฟ',
u'ๅ': u'ๅ',
u'ๅ': u'ใฅ',
u'ๅ': u'ๅ',
u'ๅ': u'ใ',
u'ๅ': u'ๅฒ',
u'ๅ': u'ๅจ',
u'ๅ': u'ๅก',
u'ๅ': u'ๅ',
u'ๅ': u'่',
u'ๅ': u'ๅณ',
u'ๅข': u'ๅฟ',
u'ๅฉ': u'ๅ',
u'ๅฑ': u'ๅข',
u'ๅณ': u'ๅ',
u'ๅต': u'ๅฑ',
u'ๅธ': u'ๅ',
u'ๅป': u'ๅ',
u'ๅญ': u'ๅฆ',
u'ๅฏ': u'ๆฑ',
u'ๅฑ': u'ๅฎ',
u'ๅ': u'ๅบ',
u'ๅ': u'ๅ',
u'ๅป': u'ๅด',
u'ๅฝ': u'ๅณ',
u'ๅ': u'ๅ',
u'ๅ ': u'ๅ',
u'ๅค': u'ๅ',
u'ๅญ': u'ๅ',
u'ๅฒ': u'ๅ',
u'ๅด': u'ๅฃ',
u'ๅ': u'ๅ',
u'ๅ': u'ๅ',
u'ๅข': u'ไธ',
u'ๅ': u'ๅค',
u'ๅณ': u'ๅด',
u'ๅถ': u'ๅ',
u'ๅ': u'ๅ',
u'ๅผ': u'ๅ',
u'ๅก': u'ๅ',
u'ๅ': u'ๅ',
u'ๅ': u'ๅฃ',
u'ๅ': u'้ฎ',
u'ๅ': u'ๅฏ',
u'ๅ': u'ๅ',
u'ๅ': u'ๅฏ',
u'ๅข': u'ๅก',
u'ๅ': u'ใ',
u'ๅ': u'ๅค',
u'ๅช': u'ไธง',
u'ๅซ': u'ๅ',
u'ๅฌ': u'ไน',
u'ๅฎ': u'ๅ',
u'ๅฒ': u'ๅ',
u'ๅ': u'ๅ',
u'ๅ': u'ๅฌ',
u'ๅ': u'ๅ',
u'ๅ': u'ๅ',
u'ๅ': u'ๅ',
u'ๅฉ': u'ๅข',
u'ๅฐ': u'๐ ฎถ',
u'ๅถ': u'ๅ',
u'ๅน': u'๐ชก',
u'ๅ': u'ๅน',
u'ๅ': u'ๅฝ',
u'ๅ': u'ๅ',
u'ๅ': u'ๅง',
u'ๅ': u'ๅฐ',
u'ๅ': u'ๅ',
u'ๅฉ': u'ๅ',
u'ๅฎ': u'ๅ ',
u'ๅฏ': u'ๅธ',
u'ๅฐ': u'ๅฝ',
u'ๅต': u'ๅ',
u'ๅธ': u'ๅ',
u'ๅฝ': u'ๅด',
u'ๅ': u'ๆถ',
u'ๅ': u'ๅ',
u'ๅ': u'ใ',
u'ๅ': u'ๅ',
u'ๅ ': u'ๅ',
u'ๅฅ': u'ๅ',
u'ๅฆ': u'ๅ',
u'ๅฏ': u'ๅณ',
u'ๅฒ': u'ๅ',
u'ๅด': u'ๅท',
u'ๅธ': u'ๅจ',
u'ๅน': u'ๅฝ',
u'ๅ': u'ๅ',
u'ๅ': u'ๅ',
u'ๅ': u'ๅ',
u'ๅ': u'ๅฐ',
u'ๅ': u'ๅ',
u'ๅ': u'ๅฎ',
u'ๅฅ': u'ๅฝ',
u'ๅฆ': u'ๅ',
u'ๅจ': u'ๅ',
u'ๅฎ': u'ๅ',
u'ๅฒ': u'ไบธ',
u'ๅณ': u'ๅพ',
u'ๅด': u'ไธฅ',
u'ๅถ': u'ๅค',
u'ๅ': u'ๅญ',
u'ๅ': u'ๅซ',
u'ๅ': u'ๅฃ',
u'ๅ
': u'ๅ',
u'ๅ': u'ๅ',
u'ๅ': u'่',
u'ๅ': u'ๅฑ',
u'ๅช': u'ๅฑ',
u'ๅ': u'ๅต',
u'ๅ': u'ๅฝ',
u'ๅ': u'ๅด',
u'ๅ': u'ๅญ',
u'ๅ': u'ๅ',
u'ๅ': u'ๅพ',
u'ๅ': u'ๅข',
u'ๅ': u'๐ชขฎ',
u'ๅต': u'ๅฏ',
u'ๅก': u'ๅญ',
u'ๅฐ': u'้',
u'ๅท': u'ๆง',
u'ๅ
': u'ๅ',
u'ๅ ': u'ๅฉ',
u'ๅ ': u'ๅด',
u'ๅ ': u'ๅ',
u'ๅ ฏ': u'ๅฐง',
u'ๅ ฑ': u'ๆฅ',
u'ๅ ด': u'ๅบ',
u'ๅก': u'ๅ',
u'ๅก': u'่',
u'ๅก': u'ๅฒ',
u'ๅก': u'ๅ',
u'ๅก': u'ๆถ',
u'ๅก': u'ๅข',
u'ๅกข': u'ๅ',
u'ๅกค': u'ๅ',
u'ๅกต': u'ๅฐ',
u'ๅกน': u'ๅ ',
u'ๅข': u'ๅซ',
u'ๅข': u'ๅ ',
u'ๅขฎ': u'ๅ ',
u'ๅขฐ': u'ๅ',
u'ๅขณ': u'ๅ',
u'ๅขป': u'ๅข',
u'ๅขพ': u'ๅฆ',
u'ๅฃ': u'ๅ',
u'ๅฃ': u'๐ก',
u'ๅฃ': u'ๅฑ',
u'ๅฃ': u'ๅ',
u'ๅฃ': u'ๅ',
u'ๅฃ': u'ๅน',
u'ๅฃ': u'ๅ',
u'ๅฃ': u'ๅ',
u'ๅฃ': u'ๅ',
u'ๅฃ': u'ๅ',
u'ๅฃ ': u'ๅ
',
u'ๅฃข': u'ๅ',
u'ๅฃฉ': u'ๅ',
u'ๅฃฏ': u'ๅฃฎ',
u'ๅฃบ': u'ๅฃถ',
u'ๅฃผ': u'ๅฃธ',
u'ๅฃฝ': u'ๅฏฟ',
u'ๅค ': u'ๅค',
u'ๅคข': u'ๆขฆ',
u'ๅคฅ': u'ไผ',
u'ๅคพ': u'ๅคน',
u'ๅฅ': u'ๅฅ',
u'ๅฅง': u'ๅฅฅ',
u'ๅฅฉ': u'ๅฅ',
u'ๅฅช': u'ๅคบ',
u'ๅฅฌ': u'ๅฅ',
u'ๅฅฎ': u'ๅฅ',
u'ๅฅผ': u'ๅงน',
u'ๅฆ': u'ๅฆ',
u'ๅง': u'ๅง',
u'ๅงฆ': u'ๅฅธ',
u'ๅจ': u'ๅจฑ',
u'ๅฉ': u'ๅจ',
u'ๅฉฆ': u'ๅฆ',
u'ๅฉญ': u'ๅจ',
u'ๅชง': u'ๅจฒ',
u'ๅชฏ': u'ๅฆซ',
u'ๅชผ': u'ๅชช',
u'ๅชฝ': u'ๅฆ',
u'ๅซ': u'ๅฆช',
u'ๅซต': u'ๅฆฉ',
u'ๅซป': u'ๅจด',
u'ๅซฟ': u'ๅฉณ',
u'ๅฌ': u'ๅฆซ',
u'ๅฌ': u'ๅจ',
u'ๅฌ': u'ๅฉต',
u'ๅฌ': u'ๅจ',
u'ๅฌ': u'ๅซฑ',
u'ๅฌก': u'ๅซ',
u'ๅฌค': u'ๅฌท',
u'ๅฌช': u'ๅซ',
u'ๅฌฐ': u'ๅฉด',
u'ๅฌธ': u'ๅฉถ',
u'ๅญ': u'ๅจ',
u'ๅญซ': u'ๅญ',
u'ๅญธ': u'ๅญฆ',
u'ๅญฟ': u'ๅญช',
u'ๅฎฎ': u'ๅฎซ',
u'ๅฏ': u'้',
u'ๅฏข': u'ๅฏ',
u'ๅฏฆ': u'ๅฎ',
u'ๅฏง': u'ๅฎ',
u'ๅฏฉ': u'ๅฎก',
u'ๅฏซ': u'ๅ',
u'ๅฏฌ': u'ๅฎฝ',
u'ๅฏต': u'ๅฎ ',
u'ๅฏถ': u'ๅฎ',
u'ๅฐ': u'ๅฐ',
u'ๅฐ': u'ไธ',
u'ๅฐ': u'ๅฏป',
u'ๅฐ': u'ๅฏน',
u'ๅฐ': u'ๅฏผ',
u'ๅฐท': u'ๅฐด',
u'ๅฑ': u'ๅฑ',
u'ๅฑ': u'ๅฐธ',
u'ๅฑ': u'ๅฑ',
u'ๅฑ': u'ๅฑ',
u'ๅฑข': u'ๅฑก',
u'ๅฑค': u'ๅฑ',
u'ๅฑจ': u'ๅฑฆ',
u'ๅฑฉ': u'๐ชจ',
u'ๅฑฌ': u'ๅฑ',
u'ๅฒก': u'ๅ',
u'ๅณด': u'ๅฒ',
u'ๅณถ': u'ๅฒ',
u'ๅณฝ': u'ๅณก',
u'ๅด': u'ๅด',
u'ๅด': u'ๆ',
u'ๅด': u'ๅฒ',
u'ๅด': u'ไป',
u'ๅดข': u'ๅณฅ',
u'ๅดฌ': u'ๅฒฝ',
u'ๅต': u'ๅฒ',
u'ๅต': u'ๅฒ',
u'ๅถ': u'ๅต',
u'ๅถ': u'ๅดญ',
u'ๅถ': u'ๅฒ',
u'ๅถ': u'ๅต',
u'ๅถ': u'ๅด',
u'ๅถ ': u'ๅณค',
u'ๅถข': u'ๅณฃ',
u'ๅถง': u'ๅณ',
u'ๅถฎ': u'ๅด',
u'ๅถด': u'ๅฒ',
u'ๅถธ': u'ๅต',
u'ๅถบ': u'ๅฒญ',
u'ๅถผ': u'ๅฑฟ',
u'ๅถฝ': u'ๅฒณ',
u'ๅท': u'ๅฒฟ',
u'ๅท': u'ๅณฆ',
u'ๅท': u'ๅท
',
u'ๅท': u'ๅฒฉ',
u'ๅทฐ': u'ๅทฏ',
u'ๅทน': u'ๅบ',
u'ๅธฅ': u'ๅธ
',
u'ๅธซ': u'ๅธ',
u'ๅธณ': u'ๅธ',
u'ๅธถ': u'ๅธฆ',
u'ๅน': u'ๅธง',
u'ๅน': u'ๅธ',
u'ๅน': u'ๅธผ',
u'ๅน': u'ๅธป',
u'ๅน': u'ๅธ',
u'ๅนฃ': u'ๅธ',
u'ๅนซ': u'ๅธฎ',
u'ๅนฌ': u'ๅธฑ',
u'ๅนน': u'ๅนฒ',
u'ๅนบ': u'ไน',
u'ๅนพ': u'ๅ ',
u'ๅบซ': u'ๅบ',
u'ๅป': u'ๅ',
u'ๅป': u'ๅข',
u'ๅป': u'ๅฉ',
u'ๅป': u'ๅฆ',
u'ๅป': u'ๅจ',
u'ๅป': u'ๅฎ',
u'ๅป': u'ๅบ',
u'ๅป ': u'ๅ',
u'ๅปก': u'ๅบ',
u'ๅปข': u'ๅบ',
u'ๅปฃ': u'ๅนฟ',
u'ๅปฉ': u'ๅปช',
u'ๅปฌ': u'ๅบ',
u'ๅปณ': u'ๅ
',
u'ๅผ': u'ๅผ',
u'ๅผ': u'ๅ',
u'ๅผณ': u'ๅผช',
u'ๅผต': u'ๅผ ',
u'ๅผท': u'ๅผบ',
u'ๅฝ': u'ๅซ',
u'ๅฝ': u'ๅผน',
u'ๅฝ': u'ๅผฅ',
u'ๅฝ': u'ๅผฏ',
u'ๅฝ': u'ๆฑ',
u'ๅฝ': u'ๅฝ',
u'ๅฝฅ': u'ๅฝฆ',
u'ๅพ': u'ๅ',
u'ๅพ': u'ๅพ',
u'ๅพ': u'ไป',
u'ๅพ ': u'ๅพ',
u'ๅพฉ': u'ๅค',
u'ๅพต': u'ๅพ',
u'ๅพน': u'ๅฝป',
u'ๆ': u'ๆ',
u'ๆฅ': u'่ป',
u'ๆ
': u'ๆฆ',
u'ๆ': u'ๆฎ',
u'ๆต': u'ๆ
',
u'ๆถ': u'้ท',
u'ๆก': u'ๆถ',
u'ๆฑ': u'ๆผ',
u'ๆฒ': u'ๆฝ',
u'ๆป': u'ๆป',
u'ๆ': u'็ฑ',
u'ๆ': u'ๆฌ',
u'ๆจ': u'ๆซ',
u'ๆด': u'ๆ',
u'ๆท': u'ๆบ',
u'ๆพ': u'ๅฟพ',
u'ๆ
': u'ๆ ',
u'ๆ
': u'ๆ',
u'ๆ
': u'ๆ ',
u'ๆ
': u'ๆจ',
u'ๆ
': u'ๆญ',
u'ๆ
': u'ๆธ',
u'ๆ
ฃ': u'ๆฏ',
u'ๆ
ค': u'ๆซ',
u'ๆ
ช': u'ๆ',
u'ๆ
ซ': u'ๆ',
u'ๆ
ฎ': u'่',
u'ๆ
ณ': u'ๆญ',
u'ๆ
ถ': u'ๅบ',
u'ๆ
ผ': u'ๆ',
u'ๆ
พ': u'ๆฌฒ',
u'ๆ': u'ๅฟง',
u'ๆ': u'ๆซ',
u'ๆ': u'ๆ',
u'ๆ': u'ๅญ',
u'ๆ': u'ๆฆ',
u'ๆ': u'ๆฎ',
u'ๆค': u'ๆค',
u'ๆซ': u'ๆฏ',
u'ๆฎ': u'ๆ',
u'ๆฒ': u'ๅฎช',
u'ๆถ': u'ๅฟ',
u'ๆ': u'ๆณ',
u'ๆ': u'ๅบ',
u'ๆ': u'ๆฟ',
u'ๆ': u'ๆ',
u'ๆ': u'่',
u'ๆ': u'ๆผ',
u'ๆฃ': u'ๆ',
u'ๆจ': u'ๆน',
u'ๆฒ': u'ๆฉ',
u'ๆถ': u'ๆ',
u'ๆท': u'ๆ',
u'ๆธ': u'ๆฌ',
u'ๆบ': u'ๅฟ',
u'ๆผ': u'ๆง',
u'ๆพ': u'ๆ
',
u'ๆ': u'ๆ',
u'ๆ': u'ๆ',
u'ๆ': u'ๆ',
u'ๆง': u'ๆ',
u'ๆฉ': u'ๆฌ',
u'ๆฐ': u'ๆ',
u'ๆฑ': u'ๆฏ',
u'ๆฒ': u'ๆ',
u'ๆถ': u'ๆท',
u'ๆ': u'ๆ',
u'ๆ': u'ๆผ',
u'ๆฉ': u'ๆ',
u'ๆฑ': u'ๆฒ',
u'ๆพ': u'ๆ',
u'ๆจ': u'่',
u'ๆซ': u'ๆช',
u'ๆฑ': u'ๆจ',
u'ๆฒ': u'ๅท',
u'ๆ': u'ๆซ',
u'ๆ': u'ๆก',
u'ๆ': u'ๆ',
u'ๆ': u'ๆฃ',
u'ๆ': u'ๆ',
u'ๆก': u'้',
u'ๆ': u'ๆฃ',
u'ๆ': u'ๆฌ',
u'ๆ': u'ๆข',
u'ๆฎ': u'ๆฅ',
u'ๆ': u'ๆ',
u'ๆ': u'ๆ',
u'ๆ': u'ๆฃ',
u'ๆต': u'ๆพ',
u'ๆถ': u'ๆข',
u'ๆ': u'ๆด',
u'ๆ': u'ๆผ',
u'ๆ': u'ๆ',
u'ๆฏ': u'ๆ',
u'ๆณ': u'ๆ ',
u'ๆถ': u'ๆ',
u'ๆบ': u'ๆ',
u'ๆป': u'ๆบ',
u'ๆ': u'ๆ',
u'ๆ': u'ๆฆ',
u'ๆ': u'ๆ',
u'ๆ': u'ๆ ',
u'ๆ': u'ใง',
u'ๆ': u'ๆข',
u'ๆฃ': u'ๆธ',
u'ๆฅ': u'ๆจ',
u'ๆซ': u'ๆ',
u'ๆฒ': u'ๆ',
u'ๆณ': u'ๆฟ',
u'ๆป': u'ๆ',
u'ๆพ': u'ๆ',
u'ๆฟ': u'ๆก',
u'ๆ': u'ๆฅ',
u'ๆ': u'ๆณ',
u'ๆ': u'ๆฉ',
u'ๆ': u'ๅป',
u'ๆ': u'ๆก',
u'ๆ': u'ใง',
u'ๆ': u'ๆ
',
u'ๆ': u'ๆฎ',
u'ๆ ': u'ๆค',
u'ๆฌ': u'ๆ',
u'ๆฏ': u'ๆ',
u'ๆฐ': u'ๆง',
u'ๆฑ': u'ๆ',
u'ๆฒ': u'ๆท',
u'ๆด': u'ๆฉ',
u'ๆท': u'ๆท',
u'ๆบ': u'ๆ',
u'ๆป': u'ๆ',
u'ๆผ': u'ๆธ',
u'ๆพ': u'ๆฐ',
u'ๆ': u'ๆ
',
u'ๆ': u'ๆต',
u'ๆ': u'ๆข',
u'ๆ': u'ๆฆ',
# [garbled span] This block originally held a large Python dict literal of
# character-level conversion entries, one u'<key>': u'<value>' pair per line.
# The recognizable structure (thousands of one- and two-character pairs,
# ending with supplementary-plane characters) matches a Traditional-to-
# Simplified Chinese mapping table. The CJK text itself was destroyed by an
# encoding round-trip, so the individual entries cannot be reconstructed.
# [garbled span] This block held the table's phrase-level entries:
# multi-character conversion rules plus identity entries (key equal to value)
# that shield fixed expressions and proper names from character-by-character
# conversion. These entries are likewise unrecoverable from the corrupted
# text, which also breaks off mid-entry at the end of the block.
ง่ไฝ': u'็
ง่ไฝ',
u'็
ง่ๅ': u'็
ง่ๅ',
u'็
ง่้': u'็
ง่ๅฝ',
u'็
ง่็จฑ': u'็
ง่็งฐ',
u'็
ง่่
': u'็
ง่่
',
u'็
ง่่ฟฐ': u'็
ง่่ฟฐ',
u'ๆ่ญท่': u'็ฑๆค็',
u'ๆ่': u'็ฑ็',
u'ๆ่ๆธ': u'็ฑ่ไนฆ',
u'ๆ่ไฝ': u'็ฑ่ไฝ',
u'ๆ่ๅ': u'็ฑ่ๅ',
u'ๆ่้': u'็ฑ่ๅฝ',
u'ๆ่็จฑ': u'็ฑ่็งฐ',
u'ๆ่่
': u'็ฑ่่
',
u'ๆ่่ฟฐ': u'็ฑ่่ฟฐ',
u'็ฝ่': u'็ต็',
u'็ฝ่ๆธ': u'็ต่ไนฆ',
u'็ฝ่ไฝ': u'็ต่ไฝ',
u'็ฝ่ๅ': u'็ต่ๅ',
u'็ฝ่้': u'็ต่ๅฝ',
u'็ฝ่็จฑ': u'็ต่็งฐ',
u'็ฝ่่
': u'็ต่่
',
u'็ฝ่่ฟฐ': u'็ต่่ฟฐ',
u'็ฏไธ่': u'็ฏไธ็',
u'็ฏๅพ่': u'็ฏๅพ็',
u'็จ่': u'็ฌ็',
u'็จ่ๆธ': u'็ฌ่ไนฆ',
u'็จ่ไฝ': u'็ฌ่ไฝ',
u'็จ่ๅ': u'็ฌ่ๅ',
u'็จ่้': u'็ฌ่ๅฝ',
u'็จ่็จฑ': u'็ฌ่็งฐ',
u'็จ่่
': u'็ฌ่่
',
u'็จ่่ฟฐ': u'็ฌ่่ฟฐ',
u'็่': u'็็',
u'็่ๆธ': u'็็ไนฆ',
u'็่ไฝ': u'็่ไฝ',
u'็่ๅ': u'็่ๅ',
u'็่้': u'็่ๅฝ',
u'็่็จฑ': u'็่็งฐ',
u'็่่
': u'็่่
',
u'็่่ฟฐ': u'็่่ฟฐ',
u'็ฉ่': u'็ฉ็',
u'็่': u'็็',
u'็่ๆธ': u'็่ไนฆ',
u'็่ไฝ': u'็่ไฝ',
u'็่ๅ': u'็่ๅ',
u'็่้': u'็่ๅฝ',
u'็่็จฑ': u'็่็งฐ',
u'็่่
': u'็่่
',
u'็่่ฟฐ': u'็่่ฟฐ',
u'็จไธ่': u'็จไธ็',
u'็จๅพ่': u'็จๅพ็',
u'็จ่': u'็จ็',
u'็จ่ๆธ': u'็จ่ไนฆ',
u'็จ่ไฝ': u'็จ่ไฝ',
u'็จ่ๅ': u'็จ่ๅ',
u'็จ่้': u'็จ่ๅฝ',
u'็จ่็จฑ': u'็จ่็งฐ',
u'็จ่่
': u'็จ่่
',
u'็จ่่ฟฐ': u'็จ่่ฟฐ',
u'็ทไธบไนพ': u'็ทไธบไนพ',
u'็ท็ฒไนพ': u'็ทไธบไนพ',
u'็ท็บไนพ': u'็ทไธบไนพ',
u'็ทๆง็บไนพ': u'็ทๆงไธบไนพ',
u'็ทๆง็ฒไนพ': u'็ทๆงไธบไนพ',
u'็ทๆงไธบไนพ': u'็ทๆงไธบไนพ',
u'็่': u'็็',
u'็่ๆธ': u'็็ไนฆ',
u'็่ไฝ': u'็่ไฝ',
u'็่ๅ': u'็่ๅ',
u'็่้': u'็่ๅฝ',
u'็่็จฑ': u'็่็งฐ',
u'็่่
': u'็่่
',
u'็่่ฟฐ': u'็่่ฟฐ',
u'็่': u'็็',
u'็่ๆธ': u'็่ไนฆ',
u'็่ไฝ': u'็่ไฝ',
u'็่ๅ': u'็่ๅ',
u'็่้': u'็่ๅฝ',
u'็่็จฑ': u'็่็งฐ',
u'็่่
': u'็่่
',
u'็่่ฟฐ': u'็่่ฟฐ',
u'็ฅ็': u'็ฅ็',
u'็บ่': u'็ฑ็',
u'็บ่ๆธ': u'็ฑ่ไนฆ',
u'็บ่ไฝ': u'็ฑ่ไฝ',
u'็บ่ๅ': u'็ฑ่ๅ',
u'็บ่้': u'็ฑ่ๅฝ',
u'็บ่็จฑ': u'็ฑ่็งฐ',
u'็บ่่
': u'็ฑ่่
',
u'็บ่่ฟฐ': u'็ฑ่่ฟฐ',
u'็่': u'็็',
u'็่ๆธ': u'็่ไนฆ',
u'็่ไฝ': u'็่ไฝ',
u'็่ๅ': u'็่ๅ',
u'็่้': u'็่ๅฝ',
u'็่็จฑ': u'็่็งฐ',
u'็่่
': u'็่่
',
u'็่่ฟฐ': u'็่่ฟฐ',
u'็ฏ่': u'็ฏ็',
u'็ฏ่ๆธ': u'็ฏ็ไนฆ',
u'็ฏ่ไฝ': u'็ฏ่ไฝ',
u'็ฏ่ๅ': u'็ฏ่ๅ',
u'็ฏ่้': u'็ฏ่ๅฝ',
u'็ฏ่็จฑ': u'็ฏ่็งฐ',
u'็ฏ่่
': u'็ฏ่่
',
u'็ฏ่่ฟฐ': u'็ฏ่่ฟฐ',
u'็พ่': u'็พ็',
u'็พ่ๆธ': u'็พ่ไนฆ',
u'็พ่ไฝ': u'็พ่ไฝ',
u'็พ่ๅ': u'็พ่ๅ',
u'็พ่้': u'็พ่ๅฝ',
u'็พ่็จฑ': u'็พ่็งฐ',
u'็พ่่
': u'็พ่่
',
u'็พ่่ฟฐ': u'็พ่่ฟฐ',
u'็ไธ่': u'็ไธ็',
u'็ๅพ่': u'็ๅพ็',
u'็่': u'็็',
u'็่ๆธ': u'็็ไนฆ',
u'็่ไฝ': u'็่ไฝ',
u'็่ๅ': u'็่ๅ',
u'็่้': u'็่ๅฝ',
u'็่็จฑ': u'็่็งฐ',
u'็่่
': u'็่่
',
u'็่่ฟฐ': u'็่่ฟฐ',
u'่ๆฅญ': u'็ไธ',
u'่็ตฒ': u'็ไธ',
u'่ไน': u'็ไน',
u'่ไบบ': u'็ไบบ',
u'่ไปไนๆฅ': u'็ไปไนๆฅ',
u'่ไป': u'็ไป',
u'่ไปค': u'็ไปค',
u'่ไฝ': u'็ไฝ',
u'่้ซ': u'็ไฝ',
u'่ไฝ ': u'็ไฝ ',
u'่ไพฟ': u'็ไพฟ',
u'่ๆถผ': u'็ๅ',
u'่ๅ': u'็ๅ',
u'่ๅ': u'็ๅฒ',
u'่่': u'็ๅท',
u'่ๅข': u'็ๅข',
u'่ๅฉ': u'็ๅฉ',
u'่ๅฐ': u'็ๅฐ',
u'่ๅขจ': u'็ๅขจ',
u'่่ฒ': u'็ๅฃฐ',
u'่่': u'็ๅค',
u'่ๅฅน': u'็ๅฅน',
u'่ๅฆณ': u'็ๅฆณ',
u'่ๅง': u'็ๅง',
u'่ๅฎ': u'็ๅฎ',
u'่ๅฎ': u'็ๅฎ',
u'่ๅฏฆ': u'็ๅฎ',
u'่ๅทฑ': u'็ๅทฑ',
u'่ๅธณ': u'็ๅธ',
u'่ๅบ': u'็ๅบ',
u'่ๅบธ': u'็ๅบธ',
u'่ๅผ': u'็ๅผ',
u'่้': u'็ๅฝ',
u'่ๅฟ': u'็ๅฟ',
u'่ๅฟ': u'็ๅฟ',
u'่ๅฟ': u'็ๅฟ',
u'่ๆฅ': u'็ๆฅ',
u'่ๆฑ': u'็ๆผ',
u'่้ฉ': u'็ๆ',
u'่ๆณ': u'็ๆณ',
u'่ๆ': u'็ๆ',
u'่ๆ
': u'็ๆ
',
u'่ๆ': u'็ๆ',
u'่ๆ': u'็ๆ',
u'่ๆน': u'็ๆน',
u'่ๆธ': u'็ๆธ',
u'่ๆฐ': u'็ๆฐ',
u'่ๆธ': u'็ๆฐ',
u'่ๆ': u'็ๆ',
u'่ๆซ': u'็ๆซ',
u'่ๆฅต': u'็ๆ',
u'่ๆ ผ': u'็ๆ ผ',
u'่ๆฃ': u'็ๆฃ',
u'่ๆง': u'็ๆง',
u'่ๆฐฃ': u'็ๆฐ',
u'่ๆณ': u'็ๆณ',
u'่ๆทบ': u'็ๆต
',
u'่็ซ': u'็็ซ',
u'่็ถ': u'็็ถ',
u'่็': u'็็',
u'่็': u'็็',
u'่็': u'็็',
u'่็ฝ': u'็็ฝ',
u'่็ธ': u'็็ธ',
u'่็ผ': u'็็ผ',
u'่่': u'็็',
u'่็ฅ': u'็็ฅ',
u'่็ฉ': u'็็งฏ',
u'่็จฟ': u'็็จฟ',
u'่็ญ': u'็็ฌ',
u'่็ฑ': u'็็ฑ',
u'่็ท': u'็็ดง',
u'่็ท': u'็็ท',
u'่็ต': u'็็ป',
u'่็ธพ': u'็็ปฉ',
u'่็ท': u'็็ปฏ',
u'่็ถ ': u'็็ปฟ',
u'่่': u'็่',
u'่่
ณ': u'็่',
u'่่ฆ': u'็่ฐ',
u'่่ฒ': u'็่ฒ',
u'่็ฏ': u'็่',
u'่่ฑ': u'็่ฑ',
u'่่ซ': u'็่ซ',
u'่่ฝ': u'็่ฝ',
u'่่': u'็่',
u'่่กฃ': u'็่กฃ',
u'่่ฃ': u'็่ฃ
',
u'่่ฆ': u'็่ฆ',
u'่่ญฆ': u'็่ญฆ',
u'่่ถฃ': u'็่ถฃ',
u'่้': u'็่พน',
u'่่ฟท': u'็่ฟท',
u'่่ทก': u'็่ฟน',
u'่้': u'็้',
u'่้ฒ': u'็้ฒ',
u'่่': u'็้ป',
u'่้ธ': u'็้',
u'่้': u'็้',
u'่้ญ': u'็้ญ',
u'่้ก': u'็้ข',
u'่้ญ': u'็้ญ',
u'็กไธ่': u'็กไธ็',
u'็กๅพ่': u'็กๅพ็',
u'็ก่': u'็ก็',
u'็ก่ๆธ': u'็ก่ไนฆ',
u'็ก่ไฝ': u'็ก่ไฝ',
u'็ก่ๅ': u'็ก่ๅ',
u'็ก่้': u'็ก่ๅฝ',
u'็ก่็จฑ': u'็ก่็งฐ',
u'็ก่่
': u'็ก่่
',
u'็ก่่ฟฐ': u'็ก่่ฟฐ',
u'็นๅพฎ็ฅ่': u'็นๅพฎ็ฅ่',
u'็ชไธธ': u'็พไธธ',
u'็่': u'็็',
u'็่ๆธ': u'็่ไนฆ',
u'็่ไฝ': u'็่ไฝ',
u'็่ๅ': u'็่ๅ',
u'็่้': u'็่ๅฝ',
u'็่็จฑ': u'็่็งฐ',
u'็่่
': u'็่่
',
u'็่่ฟฐ': u'็่่ฟฐ',
u'็ง่': u'็ง็',
u'็ง่ๆธ': u'็ง็ไนฆ',
u'็ง่ไฝ': u'็ง่ไฝ',
u'็ง่ๅ': u'็ง่ๅ',
u'็ง่้': u'็ง่ๅฝ',
u'็ง่็จฑ': u'็ง่็งฐ',
u'็ง่่
': u'็ง่่
',
u'็ง่่ฟฐ': u'็ง่่ฟฐ',
u'็ช่': u'็ช็',
u'็ช่ๆธ': u'็ช่ไนฆ',
u'็ช่ไฝ': u'็ช่ไฝ',
u'็ช่ๅ': u'็ช่ๅ',
u'็ช่้': u'็ช่ๅฝ',
u'็ช่็จฑ': u'็ช่็งฐ',
u'็ช่่
': u'็ช่่
',
u'็ช่่ฟฐ': u'็ช่่ฟฐ',
u'็ญๆ': u'็ญๆ',
u'็ณ็ข้': u'็ณ็ข้',
u'็ณ็ข้ฎ': u'็ณ็ข้',
u'็ฆ่': u'็ฆ็',
u'็ฆ่ๆธ': u'็ฆ่ไนฆ',
u'็ฆ่ไฝ': u'็ฆ่ไฝ',
u'็ฆ่ๅ': u'็ฆ่ๅ',
u'็ฆ่้': u'็ฆ่ๅฝ',
u'็ฆ่็จฑ': u'็ฆ่็งฐ',
u'็ฆ่่
': u'็ฆ่่
',
u'็ฆ่่ฟฐ': u'็ฆ่่ฟฐ',
u'็ฉๆข': u'็ฉๆข',
u'็ฉบ่': u'็ฉบ็',
u'็ฉบ่ๆธ': u'็ฉบ่ไนฆ',
u'็ฉบ่ไฝ': u'็ฉบ่ไฝ',
u'็ฉบ่ๅ': u'็ฉบ่ๅ',
u'็ฉบ่้': u'็ฉบ่ๅฝ',
u'็ฉบ่็จฑ': u'็ฉบ่็งฐ',
u'็ฉบ่่
': u'็ฉบ่่
',
u'็ฉบ่่ฟฐ': u'็ฉบ่่ฟฐ',
u'็ฉฟ่': u'็ฉฟ็',
u'็ฉฟ่ๆธ': u'็ฉฟ่ไนฆ',
u'็ฉฟ่ไฝ': u'็ฉฟ่ไฝ',
u'็ฉฟ่ๅ': u'็ฉฟ่ๅ',
u'็ฉฟ่้': u'็ฉฟ่ๅฝ',
u'็ฉฟ่็จฑ': u'็ฉฟ่็งฐ',
u'็ฉฟ่่
': u'็ฉฟ่่
',
u'็ฉฟ่่ฟฐ': u'็ฉฟ่่ฟฐ',
u'่ฑ่': u'็ซ็',
u'่ฑ่ๆธ': u'็ซ่ไนฆ',
u'่ฑ่ไฝ': u'็ซ่ไฝ',
u'่ฑ่ๅ': u'็ซ่ๅ',
u'่ฑ่้': u'็ซ่ๅฝ',
u'่ฑ่็จฑ': u'็ซ่็งฐ',
u'่ฑ่่
': u'็ซ่่
',
u'่ฑ่่ฟฐ': u'็ซ่่ฟฐ',
u'็ซ่': u'็ซ็',
u'็ซ่ๆธ': u'็ซ่ไนฆ',
u'็ซ่ไฝ': u'็ซ่ไฝ',
u'็ซ่ๅ': u'็ซ่ๅ',
u'็ซ่้': u'็ซ่ๅฝ',
u'็ซ่็จฑ': u'็ซ่็งฐ',
u'็ซ่่
': u'็ซ่่
',
u'็ซ่่ฟฐ': u'็ซ่่ฟฐ',
u'็ฌ่': u'็ฌ็',
u'็ฌ่ๆธ': u'็ฌ่ไนฆ',
u'็ฌ่ไฝ': u'็ฌ่ไฝ',
u'็ฌ่ๅ': u'็ฌ่ๅ',
u'็ฌ่้': u'็ฌ่ๅฝ',
u'็ฌ่็จฑ': u'็ฌ่็งฐ',
u'็ฌ่่
': u'็ฌ่่
',
u'็ฌ่่ฟฐ': u'็ฌ่่ฟฐ',
u'็ญ่ฆ': u'็ญๅค',
u'็ฎก่': u'็ฎก็',
u'็ฎก่ๆธ': u'็ฎก่ไนฆ',
u'็ฎก่ไฝ': u'็ฎก่ไฝ',
u'็ฎก่ๅ': u'็ฎก่ๅ',
u'็ฎก่้': u'็ฎก่ๅฝ',
u'็ฎก่็จฑ': u'็ฎก่็งฐ',
u'็ฎก่่
': u'็ฎก่่
',
u'็ฎก่่ฟฐ': u'็ฎก่่ฟฐ',
u'็ถ่': u'็ป็',
u'็ถ่ๆธ': u'็ป่ไนฆ',
u'็ถ่ไฝ': u'็ป่ไฝ',
u'็ถ่ๅ': u'็ป่ๅ',
u'็ถ่้': u'็ป่ๅฝ',
u'็ถ่็จฑ': u'็ป่็งฐ',
u'็ถ่่
': u'็ป่่
',
u'็ถ่่ฟฐ': u'็ป่่ฟฐ',
u'็น่': u'็ป็',
u'็น่ๆธ': u'็ป่ไนฆ',
u'็น่ไฝ': u'็ป่ไฝ',
u'็น่ๅ': u'็ป่ๅ',
u'็น่้': u'็ป่ๅฝ',
u'็น่็จฑ': u'็ป่็งฐ',
u'็น่่
': u'็ป่่
',
u'็น่่ฟฐ': u'็ป่่ฟฐ',
u'็ทจ่': u'็ผ่',
u'็บ่': u'็ผ ็',
u'็บ่ๆธ': u'็ผ ่ไนฆ',
u'็บ่ไฝ': u'็ผ ่ไฝ',
u'็บ่ๅ': u'็ผ ่ๅ',
u'็บ่้': u'็ผ ่ๅฝ',
u'็บ่็จฑ': u'็ผ ่็งฐ',
u'็บ่่
': u'็ผ ่่
',
u'็บ่่ฟฐ': u'็ผ ่่ฟฐ',
u'็ฝฉ่': u'็ฝฉ็',
u'็ฝฉ่ๆธ': u'็ฝฉ่ไนฆ',
u'็ฝฉ่ไฝ': u'็ฝฉ่ไฝ',
u'็ฝฉ่ๅ': u'็ฝฉ่ๅ',
u'็ฝฉ่้': u'็ฝฉ่ๅฝ',
u'็ฝฉ่็จฑ': u'็ฝฉ่็งฐ',
u'็ฝฉ่่
': u'็ฝฉ่่
',
u'็ฝฉ่่ฟฐ': u'็ฝฉ่่ฟฐ',
u'็พ่': u'็พ็',
u'็พ่ๆธ': u'็พ่ไนฆ',
u'็พ่ไฝ': u'็พ่ไฝ',
u'็พ่ๅ': u'็พ่ๅ',
u'็พ่้': u'็พ่ๅฝ',
u'็พ่็จฑ': u'็พ่็งฐ',
u'็พ่่
': u'็พ่่
',
u'็พ่่ฟฐ': u'็พ่่ฟฐ',
u'่่': u'่็',
u'่่ๆธ': u'่่ไนฆ',
u'่่ไฝ': u'่่ไฝ',
u'่่ๅ': u'่่ๅ',
u'่่้': u'่่ๅฝ',
u'่่็จฑ': u'่่็งฐ',
u'่่่
': u'่่่
',
u'่่่ฟฐ': u'่่่ฟฐ',
u'่ๅนบ': u'่ๅนบ',
u'่่': u'่็',
u'่่ๆธ': u'่่ไนฆ',
u'่่ไฝ': u'่่ไฝ',
u'่่ๅ': u'่่ๅ',
u'่่้': u'่่ๅฝ',
u'่่็จฑ': u'่่็งฐ',
u'่่่
': u'่่่
',
u'่่่ฟฐ': u'่่่ฟฐ',
u'่ไนพไนพ': u'่ๅนฒๅนฒ',
u'่ๆ้่ถณ': u'่ๆ้พ่ถณ',
u'่่': u'่็',
u'่่ๆธ': u'่่ไนฆ',
u'่่ไฝ': u'่่ไฝ',
u'่่ๅ': u'่่ๅ',
u'่่้': u'่่ๅฝ',
u'่่็จฑ': u'่่็งฐ',
u'่่่
': u'่่่
',
u'่่่ฟฐ': u'่่่ฟฐ',
u'่ ่': u'่ถ็',
u'่ ่ๆธ': u'่ถ่ไนฆ',
u'่ ่ไฝ': u'่ถ่ไฝ',
u'่ ่ๅ': u'่ถ่ๅ',
u'่ ่้': u'่ถ่ๅฝ',
u'่ ่็จฑ': u'่ถ่็งฐ',
u'่ ่่
': u'่ถ่่
',
u'่ ่่ฟฐ': u'่ถ่่ฟฐ',
u'่่': u'่บ็',
u'่่ๆธ': u'่บ่ไนฆ',
u'่่ไฝ': u'่บ่ไฝ',
u'่่ๅ': u'่บ่ๅ',
u'่่้': u'่บ่ๅฝ',
u'่่็จฑ': u'่บ่็งฐ',
u'่่่
': u'่บ่่
',
u'่่่ฟฐ': u'่บ่่ฟฐ',
u'่ฆ่': u'่ฆ็',
u'่ฆ่ๆธ': u'่ฆ่ไนฆ',
u'่ฆ่ไฝ': u'่ฆ่ไฝ',
u'่ฆ่ๅ': u'่ฆ่ๅ',
u'่ฆ่้': u'่ฆ่ๅฝ',
u'่ฆ่็จฑ': u'่ฆ่็งฐ',
u'่ฆ่่
': u'่ฆ่่
',
u'่ฆ่่ฟฐ': u'่ฆ่่ฟฐ',
u'่ง็ฏ': u'่ง็ฏ',
u'่ด็ฏ': u'่ง็ฏ',
u'็ฒ่': u'่ท็',
u'็ฒ่ๆธ': u'่ท่ไนฆ',
u'็ฒ่ไฝ': u'่ท่ไฝ',
u'็ฒ่ๅ': u'่ท่ๅ',
u'็ฒ่้': u'่ท่ๅฝ',
u'็ฒ่็จฑ': u'่ท่็งฐ',
u'็ฒ่่
': u'่ท่่
',
u'็ฒ่่ฟฐ': u'่ท่่ฟฐ',
u'่ญไนพ': u'่งไนพ',
u'่งไนพ': u'่งไนพ',
u'่ฝ่': u'่ฝ็',
u'่ฝ่ๆธ': u'่ฝ่ไนฆ',
u'่ฝ่ไฝ': u'่ฝ่ไฝ',
u'่ฝ่ๅ': u'่ฝ่ๅ',
u'่ฝ่้': u'่ฝ่ๅฝ',
u'่ฝ่็จฑ': u'่ฝ่็งฐ',
u'่ฝ่่
': u'่ฝ่่
',
u'่ฝ่่ฟฐ': u'่ฝ่่ฟฐ',
u'่ๆธ': u'่ไนฆ',
u'่ๆธ็ซ่ชช': u'่ไนฆ็ซ่ฏด',
u'่ไฝ': u'่ไฝ',
u'่ๅ': u'่ๅ',
u'่้่ฆๅ': u'่ๅฝ่งๅ',
u'่ๆ': u'่ๆ',
u'่ๆ': u'่ๆ',
u'่็จฑ': u'่็งฐ',
u'่่
': u'่่
',
u'่่บซ': u'่่บซ',
u'่่ฟฐ': u'่่ฟฐ',
u'่่': u'่็',
u'่่ๆธ': u'่่ไนฆ',
u'่่ไฝ': u'่่ไฝ',
u'่่ๅ': u'่่ๅ',
u'่่้': u'่่ๅฝ',
u'่่็จฑ': u'่่็งฐ',
u'่่่
': u'่่่
',
u'่่่ฟฐ': u'่่่ฟฐ',
u'่่': u'่็',
u'่่ๆธ': u'่่ไนฆ',
u'่่ไฝ': u'่่ไฝ',
u'่่ๅ': u'่่ๅ',
u'่่้': u'่่ๅฝ',
u'่่็จฑ': u'่่็งฐ',
u'่่่
': u'่่่
',
u'่่่ฟฐ': u'่่่ฟฐ',
u'่ธ่': u'่ธ็',
u'่ธ่ๆธ': u'่ธ่ไนฆ',
u'่ธ่ไฝ': u'่ธ่ไฝ',
u'่ธ่ๅ': u'่ธ่ๅ',
u'่ธ่้': u'่ธ่ๅฝ',
u'่ธ่็จฑ': u'่ธ่็งฐ',
u'่ธ่่
': u'่ธ่่
',
u'่ธ่่ฟฐ': u'่ธ่่ฟฐ',
u'่ก่': u'่ก็',
u'่ก่ๆธ': u'่ก่ไนฆ',
u'่ก่ไฝ': u'่ก่ไฝ',
u'่ก่ๅ': u'่ก่ๅ',
u'่ก่้': u'่ก่ๅฝ',
u'่ก่็จฑ': u'่ก่็งฐ',
u'่ก่่
': u'่ก่่
',
u'่ก่่ฟฐ': u'่ก่่ฟฐ',
u'่กฃ่': u'่กฃ็',
u'่กฃ่ๆธ': u'่กฃ่ไนฆ',
u'่กฃ่ไฝ': u'่กฃ่ไฝ',
u'่กฃ่ๅ': u'่กฃ่ๅ',
u'่กฃ่้': u'่กฃ่ๅฝ',
u'่กฃ่็จฑ': u'่กฃ่็งฐ',
u'่กฃ่่
': u'่กฃ่่
',
u'่กฃ่่ฟฐ': u'่กฃ่่ฟฐ',
u'่ฃ่': u'่ฃ
็',
u'่ฃ่ๆธ': u'่ฃ
่ไนฆ',
u'่ฃ่ไฝ': u'่ฃ
่ไฝ',
u'่ฃ่ๅ': u'่ฃ
่ๅ',
u'่ฃ่้': u'่ฃ
่ๅฝ',
u'่ฃ่็จฑ': u'่ฃ
่็งฐ',
u'่ฃ่่
': u'่ฃ
่่
',
u'่ฃ่่ฟฐ': u'่ฃ
่่ฟฐ',
u'่ฃน่': u'่ฃน็',
u'่ฃน่ๆธ': u'่ฃน่ไนฆ',
u'่ฃน่ไฝ': u'่ฃน่ไฝ',
u'่ฃน่ๅ': u'่ฃน่ๅ',
u'่ฃน่้': u'่ฃน่ๅฝ',
u'่ฃน่็จฑ': u'่ฃน่็งฐ',
u'่ฃน่่
': u'่ฃน่่
',
u'่ฃน่่ฟฐ': u'่ฃน่่ฟฐ',
u'่ฆ่': u'่ฆ่',
u'่ฆๅพฎ็ฅ่': u'่งๅพฎ็ฅ่',
u'่ฆ่': u'่ง็',
u'่ฆ่ๆธ': u'่ง่ไนฆ',
u'่ฆ่ไฝ': u'่ง่ไฝ',
u'่ฆ่ๅ': u'่ง่ๅ',
u'่ฆ่้': u'่ง่ๅฝ',
u'่ฆ่็จฑ': u'่ง่็งฐ',
u'่ฆ่่
': u'่ง่่
',
u'่ฆ่่ฟฐ': u'่ง่่ฟฐ',
u'่ฆๅพฎ็ฅ่': u'่งๅพฎ็ฅ่',
u'่จๅนพๆ็': u'่จๅนพๆ็',
u'่จ่': u'่ฎฐ็',
u'่จ่ๆธ': u'่ฎฐ่ไนฆ',
u'่จ่ไฝ': u'่ฎฐ่ไฝ',
u'่จ่ๅ': u'่ฎฐ่ๅ',
u'่จ่้': u'่ฎฐ่ๅฝ',
u'่จ่็จฑ': u'่ฎฐ่็งฐ',
u'่จ่่
': u'่ฎฐ่่
',
u'่จ่่ฟฐ': u'่ฎฐ่่ฟฐ',
u'่ซ่': u'่ฎบ่',
u'่ญฏ่': u'่ฏ่',
u'่ฉฆ่': u'่ฏ็',
u'่ฉฆ่ๆธ': u'่ฏ่ไนฆ',
u'่ฉฆ่ไฝ': u'่ฏ่ไฝ',
u'่ฉฆ่ๅ': u'่ฏ่ๅ',
u'่ฉฆ่้': u'่ฏ่ๅฝ',
u'่ฉฆ่็จฑ': u'่ฏ่็งฐ',
u'่ฉฆ่่
': u'่ฏ่่
',
u'่ฉฆ่่ฟฐ': u'่ฏ่่ฟฐ',
u'่ช่': u'่ฏญ็',
u'่ช่ๆธ': u'่ฏญ่ไนฆ',
u'่ช่ไฝ': u'่ฏญ่ไฝ',
u'่ช่ๅ': u'่ฏญ่ๅ',
u'่ช่้': u'่ฏญ่ๅฝ',
u'่ช่็จฑ': u'่ฏญ่็งฐ',
u'่ช่่
': u'่ฏญ่่
',
u'่ช่่ฟฐ': u'่ฏญ่่ฟฐ',
u'่ฑซ่': u'่ฑซ็',
u'่ฑซ่ๆธ': u'่ฑซ่ไนฆ',
u'่ฑซ่ไฝ': u'่ฑซ่ไฝ',
u'่ฑซ่ๅ': u'่ฑซ่ๅ',
u'่ฑซ่้': u'่ฑซ่ๅฝ',
u'่ฑซ่็จฑ': u'่ฑซ่็งฐ',
u'่ฑซ่่
': u'่ฑซ่่
',
u'่ฑซ่่ฟฐ': u'่ฑซ่่ฟฐ',
u'่ฒ่': u'่ด็',
u'่ฒ่ๆธ': u'่ด่ไนฆ',
u'่ฒ่ไฝ': u'่ด่ไฝ',
u'่ฒ่ๅ': u'่ด่ๅ',
u'่ฒ่้': u'่ด่ๅฝ',
u'่ฒ่็จฑ': u'่ด่็งฐ',
u'่ฒ่่
': u'่ด่่
',
u'่ฒ่่ฟฐ': u'่ด่่ฟฐ',
u'่ตฐ่': u'่ตฐ็',
u'่ตฐ่ๆธ': u'่ตฐ่ไนฆ',
u'่ตฐ่ไฝ': u'่ตฐ่ไฝ',
u'่ตฐ่ๅ': u'่ตฐ่ๅ',
u'่ตฐ่้': u'่ตฐ่ๅฝ',
u'่ตฐ่็จฑ': u'่ตฐ่็งฐ',
u'่ตฐ่่
': u'่ตฐ่่
',
u'่ตฐ่่ฟฐ': u'่ตฐ่่ฟฐ',
u'่ถ่': u'่ตถ็',
u'่ถ่ๆธ': u'่ตถ่ไนฆ',
u'่ถ่ไฝ': u'่ตถ่ไฝ',
u'่ถ่ๅ': u'่ตถ่ๅ',
u'่ถ่้': u'่ตถ่ๅฝ',
u'่ถ่็จฑ': u'่ตถ่็งฐ',
u'่ถ่่
': u'่ตถ่่
',
u'่ถ่่ฟฐ': u'่ตถ่่ฟฐ',
u'่ถด่': u'่ถด็',
u'่ถด่ๆธ': u'่ถด่ไนฆ',
u'่ถด่ไฝ': u'่ถด่ไฝ',
u'่ถด่ๅ': u'่ถด่ๅ',
u'่ถด่้': u'่ถด่ๅฝ',
u'่ถด่็จฑ': u'่ถด่็งฐ',
u'่ถด่่
': u'่ถด่่
',
u'่ถด่่ฟฐ': u'่ถด่่ฟฐ',
u'่บ่': u'่ท็',
u'่บ่ๆธ': u'่ท่ไนฆ',
u'่บ่ไฝ': u'่ท่ไฝ',
u'่บ่ๅ': u'่ท่ๅ',
u'่บ่้': u'่ท่ๅฝ',
u'่บ่็จฑ': u'่ท่็งฐ',
u'่บ่่
': u'่ท่่
',
u'่บ่่ฟฐ': u'่ท่่ฟฐ',
u'่ท่': u'่ท็',
u'่ท่ๆธ': u'่ท่ไนฆ',
u'่ท่ไฝ': u'่ท่ไฝ',
u'่ท่ๅ': u'่ท่ๅ',
u'่ท่้': u'่ท่ๅฝ',
u'่ท่็จฑ': u'่ท่็งฐ',
u'่ท่่
': u'่ท่่
',
u'่ท่่ฟฐ': u'่ท่่ฟฐ',
u'่ท่': u'่ท็',
u'่ท่ๆธ': u'่ท่ไนฆ',
u'่ท่ไฝ': u'่ท่ไฝ',
u'่ท่ๅ': u'่ท่ๅ',
u'่ท่้': u'่ท่ๅฝ',
u'่ท่็จฑ': u'่ท่็งฐ',
u'่ท่่
': u'่ท่่
',
u'่ท่่ฟฐ': u'่ท่่ฟฐ',
u'่ทช่': u'่ทช็',
u'่ทช่ๆธ': u'่ทช่ไนฆ',
u'่ทช่ไฝ': u'่ทช่ไฝ',
u'่ทช่ๅ': u'่ทช่ๅ',
u'่ทช่้': u'่ทช่ๅฝ',
u'่ทช่็จฑ': u'่ทช่็งฐ',
u'่ทช่่
': u'่ทช่่
',
u'่ทช่่ฟฐ': u'่ทช่่ฟฐ',
u'่ทณ่': u'่ทณ็',
u'่ทณ่ๆธ': u'่ทณ่ไนฆ',
u'่ทณ่ไฝ': u'่ทณ่ไฝ',
u'่ทณ่ๅ': u'่ทณ่ๅ',
u'่ทณ่้': u'่ทณ่ๅฝ',
u'่ทณ่็จฑ': u'่ทณ่็งฐ',
u'่ทณ่่
': u'่ทณ่่
',
u'่ทณ่่ฟฐ': u'่ทณ่่ฟฐ',
u'่บ่บๆปฟๅฟ': u'่ธ่บๆปฟๅฟ',
u'่ธ่': u'่ธ็',
u'่ธ่ๆธ': u'่ธ่ไนฆ',
u'่ธ่ไฝ': u'่ธ่ไฝ',
u'่ธ่ๅ': u'่ธ่ๅ',
u'่ธ่้': u'่ธ่ๅฝ',
u'่ธ่็จฑ': u'่ธ่็งฐ',
u'่ธ่่
': u'่ธ่่
',
u'่ธ่่ฟฐ': u'่ธ่่ฟฐ',
u'่ธฉ่': u'่ธฉ็',
u'่ธฉ่ๆธ': u'่ธฉ่ไนฆ',
u'่ธฉ่ไฝ': u'่ธฉ่ไฝ',
u'่ธฉ่ๅ': u'่ธฉ่ๅ',
u'่ธฉ่้': u'่ธฉ่ๅฝ',
u'่ธฉ่็จฑ': u'่ธฉ่็งฐ',
u'่ธฉ่่
': u'่ธฉ่่
',
u'่ธฉ่่ฟฐ': u'่ธฉ่่ฟฐ',
u'่บซ่': u'่บซ็',
u'่บซ่ๆธ': u'่บซ่ไนฆ',
u'่บซ่ไฝ': u'่บซ่ไฝ',
u'่บซ่ๅ': u'่บซ่ๅ',
u'่บซ่้': u'่บซ่ๅฝ',
u'่บซ่็จฑ': u'่บซ่็งฐ',
u'่บซ่่
': u'่บซ่่
',
u'่บซ่่ฟฐ': u'่บซ่่ฟฐ',
u'่บบ่': u'่บบ็',
u'่บบ่ๆธ': u'่บบ่ไนฆ',
u'่บบ่ไฝ': u'่บบ่ไฝ',
u'่บบ่ๅ': u'่บบ่ๅ',
u'่บบ่้': u'่บบ่ๅฝ',
u'่บบ่็จฑ': u'่บบ่็งฐ',
u'่บบ่่
': u'่บบ่่
',
u'่บบ่่ฟฐ': u'่บบ่่ฟฐ',
u'่ฝ่': u'่ฝฌ็',
u'่ฝ่ๆธ': u'่ฝฌ่ไนฆ',
u'่ฝ่ไฝ': u'่ฝฌ่ไฝ',
u'่ฝ่ๅ': u'่ฝฌ่ๅ',
u'่ฝ่้': u'่ฝฌ่ๅฝ',
u'่ฝ่็จฑ': u'่ฝฌ่็งฐ',
u'่ฝ่่
': u'่ฝฌ่่
',
u'่ฝ่่ฟฐ': u'่ฝฌ่่ฟฐ',
u'่ผ่': u'่ฝฝ็',
u'่ผ่ๆธ': u'่ฝฝ่ไนฆ',
u'่ผ่ไฝ': u'่ฝฝ่ไฝ',
u'่ผ่ๅ': u'่ฝฝ่ๅ',
u'่ผ่้': u'่ฝฝ่ๅฝ',
u'่ผ่็จฑ': u'่ฝฝ่็งฐ',
u'่ผ่่
': u'่ฝฝ่่
',
u'่ผ่่ฟฐ': u'่ฝฝ่่ฟฐ',
u'่ผ่': u'่พ่',
u'้่': u'่พพ็',
u'้่ๆธ': u'่พพ่ไนฆ',
u'้่ไฝ': u'่พพ่ไฝ',
u'้่ๅ': u'่พพ่ๅ',
u'้่้': u'่พพ่ๅฝ',
u'้่็จฑ': u'่พพ่็งฐ',
u'้่่
': u'่พพ่่
',
u'้่่ฟฐ': u'่พพ่่ฟฐ',
u'่ฟ่ง่ชไฟก': u'่ฟ่ง่ชไฟก',
u'่ฟ่ง่ฐไฟก': u'่ฟ่ง่ชไฟก',
u'้ ่': u'่ฟ็',
u'้ ่ๆธ': u'่ฟ่ไนฆ',
u'้ ่ไฝ': u'่ฟ่ไฝ',
u'้ ่ๅ': u'่ฟ่ๅ',
u'้ ่้': u'่ฟ่ๅฝ',
u'้ ่็จฑ': u'่ฟ่็งฐ',
u'้ ่่
': u'่ฟ่่
',
u'้ ่่ฟฐ': u'่ฟ่่ฟฐ',
u'้ฃ่': u'่ฟ็',
u'้ฃ่ๆธ': u'่ฟ่ไนฆ',
u'้ฃ่ไฝ': u'่ฟ่ไฝ',
u'้ฃ่ๅ': u'่ฟ่ๅ',
u'้ฃ่้': u'่ฟ่ๅฝ',
u'้ฃ่็จฑ': u'่ฟ่็งฐ',
u'้ฃ่่
': u'่ฟ่่
',
u'้ฃ่่ฟฐ': u'่ฟ่่ฟฐ',
u'่ฟซ่': u'่ฟซ็',
u'่ฟฝ่': u'่ฟฝ็',
u'่ฟฝ่ๆธ': u'่ฟฝ่ไนฆ',
u'่ฟฝ่ไฝ': u'่ฟฝ่ไฝ',
u'่ฟฝ่ๅ': u'่ฟฝ่ๅ',
u'่ฟฝ่้': u'่ฟฝ่ๅฝ',
u'่ฟฝ่็จฑ': u'่ฟฝ่็งฐ',
u'่ฟฝ่่
': u'่ฟฝ่่
',
u'่ฟฝ่่ฟฐ': u'่ฟฝ่่ฟฐ',
u'้่': u'้็',
u'้่ๆธ': u'้่ไนฆ',
u'้่ไฝ': u'้่ไฝ',
u'้่ๅ': u'้่ๅ',
u'้่้': u'้่ๅฝ',
u'้่็จฑ': u'้่็งฐ',
u'้่่
': u'้่่
',
u'้่่ฟฐ': u'้่่ฟฐ',
u'้ผ่': u'้ผ็',
u'้ผ่ๆธ': u'้ผ่ไนฆ',
u'้ผ่ไฝ': u'้ผ่ไฝ',
u'้ผ่ๅ': u'้ผ่ๅ',
u'้ผ่้': u'้ผ่ๅฝ',
u'้ผ่็จฑ': u'้ผ่็งฐ',
u'้ผ่่
': u'้ผ่่
',
u'้ผ่่ฟฐ': u'้ผ่่ฟฐ',
u'้่': u'้็',
u'้่ๆธ': u'้่ไนฆ',
u'้่ไฝ': u'้่ไฝ',
u'้่ๅ': u'้่ๅ',
u'้่้': u'้่ๅฝ',
u'้่็จฑ': u'้่็งฐ',
u'้่่
': u'้่่
',
u'้่่ฟฐ': u'้่่ฟฐ',
u'้บ่': u'้่',
u'้ฃ้บฝ': u'้ฃ้บฝ',
u'้ญๅญไนพ': u'้ญๅญไนพ',
u'้
่': u'้
็',
u'้
่ๆธ': u'้
่ไนฆ',
u'้
่ไฝ': u'้
่ไฝ',
u'้
่ๅ': u'้
่ๅ',
u'้
่้': u'้
่ๅฝ',
u'้
่็จฑ': u'้
่็งฐ',
u'้
่่
': u'้
่่
',
u'้
่่ฟฐ': u'้
่่ฟฐ',
u'้่': u'้
ฟ็',
u'้่ๆธ': u'้
ฟ่ไนฆ',
u'้่ไฝ': u'้
ฟ่ไฝ',
u'้่ๅ': u'้
ฟ่ๅ',
u'้่้': u'้
ฟ่ๅฝ',
u'้่็จฑ': u'้
ฟ่็งฐ',
u'้่่
': u'้
ฟ่่
',
u'้่่ฟฐ': u'้
ฟ่่ฟฐ',
u'้ฏๅฃบ': u'้ฏๅฃถ',
u'้ฏๅฃถ': u'้ฏๅฃถ',
u'้ฏ้
ฑ': u'้ฏ้
ฑ',
u'้ฏ้ฌ': u'้ฏ้
ฑ',
u'้ฏ้': u'้ฏ้',
u'้ฏ้ข': u'้ฏ้ข',
u'้ฏ้ธก': u'้ฏ้ธก',
u'้ฏ้': u'้ฏ้ธก',
u'้่ฆ': u'้ๅค',
u'้้': u'้้พ',
u'้ต้': u'้้พ',
u'้ธ้': u'้ฐ้พ',
u'้้': u'้ถ้พ',
u'้ช่': u'้บ็',
u'้ช่ๆธ': u'้บ่ไนฆ',
u'้ช่ไฝ': u'้บ่ไฝ',
u'้ช่ๅ': u'้บ่ๅ',
u'้ช่้': u'้บ่ๅฝ',
u'้ช่็จฑ': u'้บ่็งฐ',
u'้ช่่
': u'้บ่่
',
u'้ช่่ฟฐ': u'้บ่่ฟฐ',
u'้ๅญ': u'้พๅญ',
u'้ๆข': u'้พๆก',
u'้้': u'้พ้',
u'้้': u'้พ้ค',
u'้้': u'้้พ',
u'้พ้': u'้บ้ป',
u'้้พ': u'้ป้บ',
u'้ปๆท็ฆฎ': u'้ซๆ็คผ',
u'้่': u'้ญ็',
u'้่ๆธ': u'้ญ่ไนฆ',
u'้่ไฝ': u'้ญ่ไฝ',
u'้่ๅ': u'้ญ่ๅ',
u'้่้': u'้ญ่ๅฝ',
u'้่็จฑ': u'้ญ่็งฐ',
u'้่่
': u'้ญ่่
',
u'้่่ฟฐ': u'้ญ่่ฟฐ',
u'้่': u'้ฒ็',
u'้่ๆธ': u'้ฒ่ไนฆ',
u'้่ไฝ': u'้ฒ่ไฝ',
u'้่ๅ': u'้ฒ่ๅ',
u'้่้': u'้ฒ่ๅฝ',
u'้่็จฑ': u'้ฒ่็งฐ',
u'้่่
': u'้ฒ่่
',
u'้่่ฟฐ': u'้ฒ่่ฟฐ',
u'่ไธ่': u'้ปไธ็',
u'่ๅพ่': u'้ปๅพ็',
u'่่': u'้ป็',
u'้ณไธบไนพ': u'้ณไธบไนพ',
u'้ฝ็ฒไนพ': u'้ณไธบไนพ',
u'้ฝ็บไนพ': u'้ณไธบไนพ',
u'้ฟ้จๆญฃ็ญ': u'้ฟ้จๆญฃ็ญ',
u'้่': u'้็',
u'้็ช': u'้็พ',
u'้่ๆธ': u'้่ไนฆ',
u'้่ไฝ': u'้่ไฝ',
u'้่ๅ': u'้่ๅ',
u'้่้': u'้่ๅฝ',
u'้่็จฑ': u'้่็งฐ',
u'้่่
': u'้่่
',
u'้่่ฟฐ': u'้่่ฟฐ',
u'้่': u'้็',
u'้่ๆธ': u'้่ไนฆ',
u'้่ไฝ': u'้่ไฝ',
u'้่ๅ': u'้่ๅ',
u'้่้': u'้่ๅฝ',
u'้่็จฑ': u'้่็งฐ',
u'้่่
': u'้่่
',
u'้่่ฟฐ': u'้่่ฟฐ',
u'้ช่': u'้ช็',
u'้ช่ๆธ': u'้ช่ไนฆ',
u'้ช่ไฝ': u'้ช่ไฝ',
u'้ช่ๅ': u'้ช่ๅ',
u'้ช่้': u'้ช่ๅฝ',
u'้ช่็จฑ': u'้ช่็งฐ',
u'้ช่่
': u'้ช่่
',
u'้ช่่ฟฐ': u'้ช่่ฟฐ',
u'้ณๅ ต': u'้ณๅ ต',
u'้ณ็ฆ': u'้ณ็ฆ',
u'้จ่': u'้็',
u'้จ่ๆธ': u'้่ไนฆ',
u'้จ่ไฝ': u'้่ไฝ',
u'้จ่ๅ': u'้่ๅ',
u'้จ่้': u'้่ๅฝ',
u'้จ่็จฑ': u'้่็งฐ',
u'้จ่่
': u'้่่
',
u'้จ่่ฟฐ': u'้่่ฟฐ',
u'้่': u'้็',
u'้่ๆธ': u'้่ไนฆ',
u'้่ไฝ': u'้่ไฝ',
u'้่ๅ': u'้่ๅ',
u'้่้': u'้่ๅฝ',
u'้่็จฑ': u'้่็งฐ',
u'้่่
': u'้่่
',
u'้่่ฟฐ': u'้่่ฟฐ',
u'้ฑ็ช': u'้ฑ็พ',
u'้
่': u'้
็',
u'้
่ๆธ': u'้
่ไนฆ',
u'้
่ไฝ': u'้
่ไฝ',
u'้
่ๅ': u'้
่ๅ',
u'้
่้': u'้
่ๅฝ',
u'้
่็จฑ': u'้
่็งฐ',
u'้
่่
': u'้
่่
',
u'้
่่ฟฐ': u'้
่่ฟฐ',
u'้ไนพ': u'้ไนพ',
u'้ ่': u'้ ็',
u'้ ่ไฝ': u'้ ่ไฝ',
u'้ ่ๅ': u'้ ่ๅ',
u'้ ่้': u'้ ่ๅฝ',
u'้ ่็จฑ': u'้ ่็งฐ',
u'้ ่่
': u'้ ่่
',
u'้ ่่ฟฐ': u'้ ่่ฟฐ',
u'้ ่': u'้กถ็',
u'้ ่ๆธ': u'้กถ่ไนฆ',
u'้ ่ไฝ': u'้กถ่ไฝ',
u'้ ่ๅ': u'้กถ่ๅ',
u'้ ่้': u'้กถ่ๅฝ',
u'้ ่็จฑ': u'้กถ่็งฐ',
u'้ ่่
': u'้กถ่่
',
u'้ ่่ฟฐ': u'้กถ่่ฟฐ',
u'้
้': u'้กน้พ',
u'้ ่': u'้กบ็',
u'้ ่ๆธ': u'้กบ่ไนฆ',
u'้ ่ไฝ': u'้กบ่ไฝ',
u'้ ่ๅ': u'้กบ่ๅ',
u'้ ่้': u'้กบ่ๅฝ',
u'้ ่็จฑ': u'้กบ่็งฐ',
u'้ ่่
': u'้กบ่่
',
u'้ ่่ฟฐ': u'้กบ่่ฟฐ',
u'้ ่': u'้ข็',
u'้ ่ๆธ': u'้ข่ไนฆ',
u'้ ่ไฝ': u'้ข่ไฝ',
u'้ ่ๅ': u'้ข่ๅ',
u'้ ่้': u'้ข่ๅฝ',
u'้ ่็จฑ': u'้ข่็งฐ',
u'้ ่่
': u'้ข่่
',
u'้ ่่ฟฐ': u'้ข่่ฟฐ',
u'้ฃ่': u'้ฃ็',
u'้ฃ่ๆธ': u'้ฃ่ไนฆ',
u'้ฃ่ไฝ': u'้ฃ่ไฝ',
u'้ฃ่ๅ': u'้ฃ่ๅ',
u'้ฃ่้': u'้ฃ่ๅฝ',
u'้ฃ่็จฑ': u'้ฃ่็งฐ',
u'้ฃ่่
': u'้ฃ่่
',
u'้ฃ่่ฟฐ': u'้ฃ่่ฟฐ',
u'้ฃญไปค': u'้ฃญไปค',
u'้ง่': u'้ฉพ็',
u'้ง่ๆธ': u'้ฉพ่ไนฆ',
u'้ง่ไฝ': u'้ฉพ่ไฝ',
u'้ง่ๅ': u'้ฉพ่ๅ',
u'้ง่้': u'้ฉพ่ๅฝ',
u'้ง่็จฑ': u'้ฉพ่็งฐ',
u'้ง่่
': u'้ฉพ่่
',
u'้ง่่ฟฐ': u'้ฉพ่่ฟฐ',
u'็ฝต่': u'้ช็',
u'็ฝต่ๆธ': u'้ช่ไนฆ',
u'็ฝต่ไฝ': u'้ช่ไฝ',
u'็ฝต่ๅ': u'้ช่ๅ',
u'็ฝต่้': u'้ช่ๅฝ',
u'็ฝต่็จฑ': u'้ช่็งฐ',
u'็ฝต่่
': u'้ช่่
',
u'็ฝต่่ฟฐ': u'้ช่่ฟฐ',
u'้จ่': u'้ช็',
u'้จ่ๆธ': u'้ช่ไนฆ',
u'้จ่ไฝ': u'้ช่ไฝ',
u'้จ่ๅ': u'้ช่ๅ',
u'้จ่้': u'้ช่ๅฝ',
u'้จ่็จฑ': u'้ช่็งฐ',
u'้จ่่
': u'้ช่่
',
u'้จ่่ฟฐ': u'้ช่่ฟฐ',
u'้จ่': u'้ช็',
u'้จ่ๆธ': u'้ช่ไนฆ',
u'้จ่ไฝ': u'้ช่ไฝ',
u'้จ่ๅ': u'้ช่ๅ',
u'้จ่้': u'้ช่ๅฝ',
u'้จ่็จฑ': u'้ช่็งฐ',
u'้จ่่
': u'้ช่่
',
u'้จ่่ฟฐ': u'้ช่่ฟฐ',
u'้ซ่': u'้ซ็',
u'้ซ่ๆธ': u'้ซ่ไนฆ',
u'้ซ่ไฝ': u'้ซ่ไฝ',
u'้ซ่ๅ': u'้ซ่ๅ',
u'้ซ่้': u'้ซ่ๅฝ',
u'้ซ่็จฑ': u'้ซ่็งฐ',
u'้ซ่่
': u'้ซ่่
',
u'้ซ่่ฟฐ': u'้ซ่่ฟฐ',
u'้ซญ่': u'้ซญ็',
u'้ซญ่ๆธ': u'้ซญ่ไนฆ',
u'้ซญ่ไฝ': u'้ซญ่ไฝ',
u'้ซญ่ๅ': u'้ซญ่ๅ',
u'้ซญ่้': u'้ซญ่ๅฝ',
u'้ซญ่็จฑ': u'้ซญ่็งฐ',
u'้ซญ่่
': u'้ซญ่่
',
u'้ซญ่่ฟฐ': u'้ซญ่่ฟฐ',
u'้ฌฑๅง': u'้ฌฑๅง',
u'้ฌฑๆฐ': u'้ฌฑๆฐ',
u'้ญๅพต': u'้ญๅพต',
u'้ญไนพไนพ': u'้ฑผๅนฒๅนฒ',
u'้ฏฐ้ญ': u'้ฒถ้ฑผ',
u'้บฏๅด่ฃ': u'้บฏๅด่ฃ',
u'้บด็พฉ': u'้บดไน',
u'้บดไน': u'้บดไน',
u'้บด่ฑ': u'้บด่ฑ',
u'้บฝๆฐ': u'้บฝๆฐ',
u'้บฝ้บฝ': u'้บฝ้บฝ',
u'้บผ้บผ': u'้บฝ้บฝ',
u'้ปๆถฆไนพ': u'้ปๆถฆไนพ',
u'้ปๆฝคไนพ': u'้ปๆถฆไนพ',
u'้ป่': u'้ป็',
u'้ป่ๆธ': u'้ป่ไนฆ',
u'้ป่ไฝ': u'้ป่ไฝ',
u'้ป่ๅ': u'้ป่ๅ',
u'้ป่้': u'้ป่ๅฝ',
u'้ป่็จฑ': u'้ป่็งฐ',
u'้ป่่
': u'้ป่่
',
u'้ป่่ฟฐ': u'้ป่่ฟฐ',
}
zh2tw = {
u'โ': u'ใ',
u'โ': u'ใ',
u'โ': u'ใ',
u'โ': u'ใ',
u'ไธๆฅต็ฎก': u'ไธๆฅต้ซ',
u'ไธๆ็ฎก': u'ไธๆฅต้ซ',
u'ไธ็่ฃ': u'ไธ็่ฃก',
u'ไธญๆ่ฃ': u'ไธญๆ่ฃก',
u'ไธฒ่ก': u'ไธฒๅ',
u'ไธฒๅๅ ้ๅจ': u'ไธฒๅๅ ้ๅจ',
u'ไปฅๅคช็ฝ': u'ไนๅคช็ถฒ',
u'ๅฅถ้
ช': u'ไนณ้
ช',
u'ไบๆฅต็ฎก': u'ไบๆฅต้ซ',
u'ไบๆ็ฎก': u'ไบๆฅต้ซ',
u'ไบคไบๅผ': u'ไบๅๅผ',
u'้ฟๅกๆ็': u'ไบๅกๆ็ถ',
u'ไบบๅทฅๆบ่ฝ': u'ไบบๅทฅๆบๆ
ง',
u'ๆฅๅฃ': u'ไป้ข',
u'ไปปๆ็ๅก': u'ไปปๆ็ๅก',
u'ไปปๆ็ๅ': u'ไปปๆ็ๅก',
u'ๆๅกๅจ': u'ไผบๆๅจ',
u'ๅญ็ฏ': u'ไฝๅ
็ต',
u'ๅญ่': u'ไฝๅ
็ต',
u'ไฝๅ่ฃ': u'ไฝๅ่ฃก',
u'ไผๅ
็บง': u'ๅชๅ
้ ๅบ',
u'ๅ
ๅ
': u'ๅ
ๅถ',
u'ๅ
ๅถ': u'ๅ
ๅถ',
u'ๅ
็': u'ๅ
็ข',
u'ๅ
้ฉฑ': u'ๅ
็ขๆฉ',
u'ๅ
็พ
ๅฐไบ': u'ๅ
็พ
ๅ่ฅฟไบ',
u'ๅ
็ฝๅฐไบ': u'ๅ
็พ
ๅ่ฅฟไบ',
u'ๅ
จ่ง': u'ๅ
จๅฝข',
u'ๅฌๅคฉ่ฃ': u'ๅฌๅคฉ่ฃก',
u'ๅฌๆฅ่ฃ': u'ๅฌๆฅ่ฃก',
u'ๅ่': u'ๅท็ค',
u'ๅท่': u'ๅท็ค',
u'ๅถๅจ': u'ๅถๅจ',
u'ๅ
ๅจ': u'ๅถๅจ',
u'ๅถๅพ': u'ๅถๅพ',
u'ๅ
ๅพ': u'ๅถๅพ',
u'ๅ
ๆ': u'ๅถๆ',
u'ๅถๆ': u'ๅถๆ',
u'ๅ
ๆก': u'ๅถๆก',
u'ๅถๆก': u'ๅถๆก',
u'ๅถๆฎ': u'ๅถๆฎ',
u'ๅ
ๆฎ': u'ๅถๆฎ',
u'ๅถๆฎ': u'ๅถๆฎ',
u'ๅ
ๆฎบ': u'ๅถๆฎบ',
u'ๅถๆ': u'ๅถๆฎบ',
u'ๅถๆฎบ': u'ๅถๆฎบ',
u'ๅๅธๅผ': u'ๅๆฃๅผ',
u'ๆๅฐ': u'ๅๅฐ',
u'ๅๆฏๆฆๅฃซ็ป': u'ๅๆฏๆฆๆฏ็ป',
u'ๅชๅฝฉ': u'ๅช็ถต',
u'ๅ ่ฌ': u'ๅ ๅฝญ',
u'ๆป็บฟ': u'ๅฏๆตๆ',
u'ๅฑๅ็ฝ': u'ๅๅ็ถฒ',
u'็น็ซๅฐผ้ๅๅคๅทดๅฅ': u'ๅ้้ๆ่ฒๅฅ',
u'็น็ซๅฐผ่พพๅๆๅทดๅฅ': u'ๅ้้ๆ่ฒๅฅ',
u'ๅ่ง': u'ๅๅฝข',
u'ๅกๅก็พ': u'ๅก้',
u'ๅกๅกๅฐ': u'ๅก้',
u'ๆๅฐๆฉ': u'ๅฐ่กจๆฉ',
u'ๆๅฐๆบ': u'ๅฐ่กจๆฉ',
u'ๅ็ซ็น้ไบ': u'ๅๅฉๅไบ',
u'ๅ็ซ็น้ไบ': u'ๅๅฉๅไบ',
u'ๅ็ๅคๅฐ': u'ๅ็ๅค',
u'ๅ็ๅค็พ': u'ๅ็ๅค',
u'ๆฏๅจๅฃซๅ
ฐ': u'ๅฒ็ฆๆฟ่ญ',
u'ๆฏๅจๅฃซ่ญ': u'ๅฒ็ฆๆฟ่ญ',
u'ๅๅธๆ': u'ๅๅธๅฐ',
u'ๅๅธๅ ค': u'ๅๅธๅฐ',
u'ๅบ้ๅทดๆฏ': u'ๅ้ๅทดๆฏ',
u'ๅ็ฆ็ง': u'ๅ็ฆ้ญฏ',
u'ๅพ็ฆๅข': u'ๅ็ฆ้ญฏ',
u'ๅ่จๅ
ๆฏๅฆ': u'ๅ่ฉๅ
',
u'ๅฅๆฏ้้ปๅ ': u'ๅฅๆฏๅคง้ปๅ ',
u'ๅฅๆฏ่พพ้ปๅ ': u'ๅฅๆฏๅคง้ปๅ ',
u'ๆ ผ้ญฏๅไบ': u'ๅฌๆฒปไบ',
u'ๆ ผ้ฒๅไบ': u'ๅฌๆฒปไบ',
u'ไฝๆฒปไบ': u'ๅฌๆฒปไบ',
u'ไฝๆฒปไบ': u'ๅฌๆฒปไบ',
u'ๅด่ฃ': u'ๅด่ฃก',
u'ๅๅบๆผๆฏๅฆ': u'ๅๅบซๆผ',
u'่ฏไป': u'ๅ่ฑ',
u'ๅ่ฑ็ถฒ': u'ๅ่ฑ็ถฒ',
u'ๅ่ฑ็ฝ': u'ๅ่ฑ็ถฒ',
u'ๅฆๆกๅฐผไบ': u'ๅฆๅฐๅฐผไบ',
u'ๅฆๆกๅฐผไบ': u'ๅฆๅฐๅฐผไบ',
u'็ซฏๅฃ': u'ๅ ',
u'ๅกๅๅ
ๆฏๅฆ': u'ๅกๅๅ
',
u'ๅก่ๅฐ': u'ๅกๅธญ็พ',
u'ๅก่็พ': u'ๅกๅธญ็พ',
u'ๅกๆตฆ่ทฏๆฏ': u'ๅกๆฎๅๆฏ',
u'ๅคๅคฉ่ฃ': u'ๅคๅคฉ่ฃก',
u'ๅคๆฅ่ฃ': u'ๅคๆฅ่ฃก',
u'ๅคๆๅฐผๅ ๅ
ฑๅๅ': u'ๅคๆๅฐผๅ ',
u'ๅค็ฑณๅฐผๅ ๅ
ฑๅๅฝ': u'ๅคๆๅฐผๅ ',
u'ๅค็ฑณๅฐผๅ ๅ
ฑๅๅ': u'ๅคๆๅฐผๅ ',
u'ๅค็ฑณๅฐผๅ ๅฝ': u'ๅค็ฑณๅฐผๅ
',
u'ๅคๆๅฐผๅ ๅ': u'ๅค็ฑณๅฐผๅ
',
u'็ฉฟๆขญๆฉ': u'ๅคช็ฉบๆขญ',
u'่ชๅคฉ้ฃๆบ': u'ๅคช็ฉบๆขญ',
u'ๅฐผๆฅๅฉไบ': u'ๅฅๅๅฉไบ',
u'ๅฐผๆฅๅฉไบ': u'ๅฅๅๅฉไบ',
u'ๅญ็ฌฆ': u'ๅญๅ
',
u'ๅญๅท': u'ๅญๅๅคงๅฐ',
u'ๅญๅบ': u'ๅญๅๆช',
u'ๅญ็ฌฆ้': u'ๅญ็ฌฆ้',
u'ๅญ็': u'ๅญๆช',
u'ๅญธ่ฃ': u'ๅญธ่ฃก',
u'ๅฎๆ็ๅๅทดๅธ้': u'ๅฎๅฐๅกๅๅทดๅธ้',
u'ๅฎๆ็ๅๅทดๅธ่พพ': u'ๅฎๅฐๅกๅๅทดๅธ้',
u'ๅฎๅ
': u'ๅฎๅ
',
u'ๆดช้ฝๆๆฏ': u'ๅฎ้ฝๆๆฏ',
u'ๅฏปๅ': u'ๅฎๅ',
u'ๅฏๅ่ฃ': u'ๅฏๅ่ฃก',
u'ๅฎฝๅธฆ': u'ๅฏฌ้ ป',
u'่ๆพ': u'ๅฏฎๅ',
u'่ๆ': u'ๅฏฎๅ',
u'ๆ้จ': u'ๅฐ้',
u'ๅฐ่ผฏ่ฃ': u'ๅฐ่ผฏ่ฃก',
u'่ดๆฏไบ': u'ๅฐๆฏไบ',
u'่ตๆฏไบ': u'ๅฐๆฏไบ',
u'ๅฐผๆฅ็พ': u'ๅฐผๆฅ',
u'ๅฐผๆฅๅฐ': u'ๅฐผๆฅ',
u'ๅฑฑๆด่ฃ': u'ๅฑฑๆด่ฃก',
u'ๅทดๅธไบๆฐ็ฟๅ
งไบ': u'ๅทดๅธไบ็ดๅนพๅ
งไบ',
u'ๅทดๅธไบๆฐๅ ๅ
ไบ': u'ๅทดๅธไบ็ดๅนพๅ
งไบ',
u'ๅทดๅทดๅคๆฏ': u'ๅทด่ฒๅค',
u'ๅธๅบ็บณๆณ็ดข': u'ๅธๅ็ดๆณ็ดข',
u'ๅธๅบ็ดๆณ็ดข': u'ๅธๅ็ดๆณ็ดข',
u'ๅธไป': u'ๅธๅธ',
u'ๅธๆฎ': u'ๅธๅธ',
u'ๅธๅณ': u'ๅธ็',
u'ไพ็จ': u'ๅธธๅผ',
u'ๅนณๆฒปไนไนฑ': u'ๅนณๆฒปไนไบ',
u'ๅนณๆฒปไนไบ': u'ๅนณๆฒปไนไบ',
u'ๅนดไปฃ่ฃ': u'ๅนดไปฃ่ฃก',
u'ๅ ๅ
ไบๆฏ็ป': u'ๅนพๅ
งไบๆฏ็ดข',
u'ๅนพๅ
งไบๆฏ็ดน': u'ๅนพๅ
งไบๆฏ็ดข',
u'ๅฝฉๅธฆ': u'ๅฝฉๅธถ',
u'ๅฝฉๆ': u'ๅฝฉๆ',
u'ๅฝฉๆฅผ': u'ๅฝฉๆจ',
u'ๅฝฉ็ๆฅผ': u'ๅฝฉ็ๆจ',
u'ๅพฉ่': u'ๅพฉ็ฆ',
u'ๅค่': u'ๅพฉ็ฆ',
u'ๅฟ่ฃ': u'ๅฟ่ฃก',
u'ๅฟซ้ชๅญๅจๅจ': u'ๅฟซ้่จๆถ้ซ',
u'้ชๅญ': u'ๅฟซ้่จๆถ้ซ',
u'ๆณ่ฑก': u'ๆณๅ',
u'ไผ ๆ': u'ๆๆธฌ',
u'ไน ็จ': u'ๆ
ฃ็จ',
u'ๆๅฝฉๅจฑไบฒ': u'ๆฒ็ถตๅจ่ฆช',
u'ๆฒ่ฃ': u'ๆฒ่ฃก',
u'ๆ็ต็ญ': u'ๆ้ป็ญ',
u'ๆ็ต': u'ๆ้ป็ญ',
u'ๆฌๅท': u'ๆฌๅผง',
u'ๆฟ็ ดไพ': u'ๆฟ็ ดๅด',
u'ๆฟ็ ดไป': u'ๆฟ็ ดๅด',
u'็ฉๆถ': u'ๆท่ฑน',
u'ๆซ็ไปช': u'ๆ็ๅจ',
u'ๆ้ฉ': u'ๆ้ค',
u'ๆ้': u'ๆ้ค',
u'ๆงไปถ': u'ๆงๅถ้
',
u'ๅฐ็': u'ๆ็',
u'ๆก็': u'ๆ็',
u'ไพฟๆบๅผ': u'ๆๅธถๅ',
u'ๆ
ไบ่ฃ': u'ๆ
ไบ่ฃก',
u'่ฐๅถ่งฃ่ฐๅจ': u'ๆธๆๆฉ',
u'่ชฟๅถ่งฃ่ชฟๅจ': u'ๆธๆๆฉ',
u'ๆฏๆดๆๅฐผไบ': u'ๆฏๆด็ถญๅฐผไบ',
u'ๆฏๆดๆๅฐผไบ': u'ๆฏๆด็ถญๅฐผไบ',
u'ๆฐ็บชๅ
': u'ๆฐ็ดๅ
',
u'ๆฐ็ดๅ
': u'ๆฐ็ดๅ
',
u'ๆฅๅญ่ฃ': u'ๆฅๅญ่ฃก',
u'ๆฅๅ่ฃ': u'ๆฅๅ่ฃก',
u'ๆฅๅคฉ่ฃ': u'ๆฅๅคฉ่ฃก',
u'ๆฅๆฅ่ฃ': u'ๆฅๆฅ่ฃก',
u'ๆ้่ฃ': u'ๆ้่ฃก',
u'่ฏ็': u'ๆถๅ
',
u'ๆๅ่ฃ': u'ๆๅ่ฃก',
u'ๆๅญ่ฃ': u'ๆๅญ่ฃก',
u'ไนๅพ': u'ๆฅๅพท',
u'ๅ
ๆ้ ': u'ๆฏๆ้ ',
u'ๅ
ๆ้กฟ': u'ๆฏๆ้ ',
u'ๆ ผๆ็ด้': u'ๆ ผ็้ฃ้',
u'ๆ ผๆ็บณ่พพ': u'ๆ ผ็้ฃ้',
u'ๅก้ซ': u'ๆขต่ฐท',
u'ๆฃฎๆ่ฃ': u'ๆฃฎๆ่ฃก',
u'ๆฃบๆ่ฃ': u'ๆฃบๆ่ฃก',
u'ๆฆด่ฎ': u'ๆฆดๆงค',
u'ๆฆด่ฒ': u'ๆฆดๆงค',
u'ไปฟ็': u'ๆจกๆฌ',
u'ๆฏ้่ฃๆฏ': u'ๆจก้่ฅฟๆฏ',
u'ๆฏ้ๆฑๆฏ': u'ๆจก้่ฅฟๆฏ',
u'ๆฉๆขฐไบบ': u'ๆฉๅจไบบ',
u'ๆบๅจไบบ': u'ๆฉๅจไบบ',
u'ๅญๆฎต': u'ๆฌไฝ',
u'ๆญทๅฒ่ฃ': u'ๆญทๅฒ่ฃก',
u'ๅ
้ณ': u'ๆฏ้ณ',
u'ๆฐธๅ': u'ๆฐธๆ',
u'ๆ่ฑ': u'ๆฑถ่',
u'ๆฒ็น้ฟๆไผฏ': u'ๆฒ็ๅฐ้ฟๆไผฏ',
u'ๆฒๅฐ้ฟๆไผฏ': u'ๆฒ็ๅฐ้ฟๆไผฏ',
u'ๆณขๆฏๅฐผไบ้ปๅกๅฅ็ถญ้ฃ': u'ๆณขๅฃซๅฐผไบ่ตซๅกๅฅ็ถญ็ด',
u'ๆณขๆฏๅฐผไบๅ้ปๅกๅฅ็ปด้ฃ': u'ๆณขๅฃซๅฐผไบ่ตซๅกๅฅ็ถญ็ด',
u'ๅ่จ็ฆ็บณ': u'ๆณขๆญ้ฃ',
u'ๅ่จ็ฆ็ด': u'ๆณขๆญ้ฃ',
u'ไพฏ่ตๅ ': u'ๆตท็',
u'ไพฏ่ณฝๅ ': u'ๆตท็',
u'ๆทฑๆทต่ฃ': u'ๆทฑๆทต่ฃก',
u'ๅ
ๆ ': u'ๆธธๆจ',
u'้ผ ๆ ': u'ๆป้ผ ',
u'็ฎๆณ': u'ๆผ็ฎๆณ',
u'ไนๅ
นๅซๅ
ๆฏๅฆ': u'็่ฒๅฅๅ
',
u'่ฏ็ป': u'็่ช',
u'็่ฃ': u'็่ฃก',
u'ๅกๆๅฉๆ': u'็
ๅญๅฑฑ',
u'ๅฑๅฐ้ฉฌๆ': u'็ๅฐ้ฆฌๆ',
u'ๅฑๅฐ้ฆฌๆ': u'็ๅฐ้ฆฌๆ',
u'ๅๆฏไบ': u'็ๆฏไบ',
u'ๅฒกๆฏไบ': u'็ๆฏไบ',
u'็ๅ
': u'็ๅถ',
u'็ๅถ': u'็ๅถ',
u'็พ็ง่ฃ': u'็พ็ง่ฃก',
u'็ฎ่ฃ้ฝ็ง': u'็ฎ่ฃก้ฝ็ง',
u'็งๆบ้': u'็งๅฎ้',
u'ๅขๆบ่พพ': u'็งๅฎ้',
u'็ๅถ': u'็ๅถ',
u'็ๅ
': u'็ๅถ',
u'็ผ็่ฃ': u'็ผ็่ฃก',
u'็ก
็': u'็ฝ็',
u'็ก
่ฐท': u'็ฝ่ฐท',
u'็กฌ็': u'็กฌ็ข',
u'็กฌไปถ': u'็กฌ้ซ',
u'็็': u'็ข็',
u'็ฃ็': u'็ฃ็ข',
u'็ฃ้': u'็ฃ่ป',
u'็งๅ่ฃ': u'็งๅ่ฃก',
u'็งๅคฉ่ฃ': u'็งๅคฉ่ฃก',
u'็งๆฅ่ฃ': u'็งๆฅ่ฃก',
u'็จๆง': u'็จๅผๆงๅถ',
u'็ชๅฐผๆฏ': u'็ชๅฐผ่ฅฟไบ',
u'ๅฐพๆณจ': u'็ซ ็ฏ้่จป',
u'่นฆๆ่ทณ': u'็ฌจ่ฑฌ่ทณ',
u'็ป็ดง่ทณ': u'็ฌจ่ฑฌ่ทณ',
u'็ญไบ': u'็ญๆผ',
u'็ญ่จ': u'็ฐก่จ',
u'็ญไฟก': u'็ฐก่จ',
u'็ณปๅ่ฃ': u'็ณปๅ่ฃก',
u'ๆฐ่ฅฟ่ญ': u'็ด่ฅฟ่ญ',
u'ๆฐ่ฅฟๅ
ฐ': u'็ด่ฅฟ่ญ',
u'ๆ็ฝ้จ็พคๅฒ': u'็ดข็พ
้็พคๅณถ',
u'ๆ็พ
้็พคๅณถ': u'็ดข็พ
้็พคๅณถ',
u'็ดข้ฆฌ้': u'็ดข้ฆฌๅฉไบ',
u'็ดข้ฉฌ้': u'็ดข้ฆฌๅฉไบ',
u'็ปๅฝฉ': u'็ต็ถต',
u'ไฝๅพ่ง': u'็ถญๅพท่ง',
u'็ถฒ็ตก': u'็ถฒ่ทฏ',
u'็ฝ็ป': u'็ถฒ่ทฏ',
u'ไบ่ฏ็ถฒ': u'็ถฒ้็ถฒ่ทฏ',
u'ๅ ็น็ฝ': u'็ถฒ้็ถฒ่ทฏ',
u'ๅฝฉ็': u'็ถต็',
u'ๅฝฉ็ปธ': u'็ถต็ถข',
u'ๅฝฉ็บฟ': u'็ถต็ท',
u'ๅฝฉ่น': u'็ถต่น',
u'ๅฝฉ่กฃ': u'็ถต่กฃ',
u'็ผๅถ': u'็ทๅถ',
u'็ทๅ
': u'็ทๅถ',
u'็ทๅถ': u'็ทๅถ',
u'ๆๅคงๅฉ': u'็พฉๅคงๅฉ',
u'่ๅญๅท': u'่ๅญ่',
u'ๅฃๅบ่จๅๅฐผ็ปดๆฏ': u'่ๅ
้ๆฏๅค็ฆๅๅฐผ็ถญๆฏ',
u'่ๅๆฏ็ดๅๆฏ': u'่ๅ
้ๆฏๅค็ฆๅๅฐผ็ถญๆฏ',
u'่ๆๆฃฎ็นๅๆ ผๆ็ดไธๆฏ': u'่ๆๆฃฎๅๆ ผ็้ฃไธ',
u'ๅฃๆๆฃฎ็นๅๆ ผๆ็บณไธๆฏ': u'่ๆๆฃฎๅๆ ผ็้ฃไธ',
u'ๅฃๅข่ฅฟไบ': u'่้ฒ่ฅฟไบ',
u'่็ง่ฅฟไบ': u'่้ฒ่ฅฟไบ',
u'ๅฃ้ฉฌๅ่ฏบ': u'่้ฆฌๅฉ่ซพ',
u'่้ฆฌๅ่ซพ': u'่้ฆฌๅฉ่ซพ',
u'่่ฃ': u'่่ฃก',
u'่ฏๅฐผไบ': u'่ฏไบ',
u'่ฏ้
': u'่ฏไบ',
u'ไปปๆ็': u'่ช็ฑ็',
u'่ชๅคฉๅคงๅญฆ': u'่ชๅคฉๅคงๅญธ',
u'่ฆ่ฃ': u'่ฆ่ฃก',
u'ๆฏ้ๅกๅฐผไบ': u'่
ๅฉๅกๅฐผไบ',
u'ๆฏ้ๅกๅฐผไบ': u'่
ๅฉๅกๅฐผไบ',
u'่ซๆกๆฏๅ
': u'่ซไธๆฏๅ
',
u'ไธๅ': u'่ฌๆ',
u'็ฆๅช้ฟๅพ': u'่ฌ้ฃๆ',
u'็ฆๅช้ฟๅ': u'่ฌ้ฃๆ',
u'ไน้': u'่้',
u'ไน้จ': u'่้',
u'็': u'่',
u'็งๆฉ็พ
': u'่ๆฉ',
u'็งๆฉ็ฝ': u'่ๆฉ',
u'ๅธ้่ฟช': u'่ฒ้ๅฐ',
u'ๅญไบ้ฃ': u'่ไบ้ฃ',
u'ๅญไบ้ฃ': u'่ไบ้ฃ',
u'็ซ้
็ๅธฝ': u'่็ซ้',
u'่้ๅ': u'่ๅฉๅ',
u'่กๅถ': u'่กๅถ',
u'่กๅ
': u'่กๅถ',
u'่กๅถๅ': u'่กๅถๅพ',
u'่กๅ
ๅพ': u'่กๅถๅพ',
u'่กๅถๅพ': u'่กๅถๅพ',
u'ๆตๅ้ป่ฉฑ': u'่กๅ้ป่ฉฑ',
u'็งปๅจ็ต่ฏ': u'่กๅ้ป่ฉฑ',
u'่ก็จๆงๅถ': u'่ก็จๆงๅถ',
u'่ก': u'่ก',
u'ๅซ็': u'่ก็',
u'่ก็': u'่ก็',
u'ๅๅกไฟๆฏไบ': u'่กฃ็ดขๆฏไบ',
u'ๅๅกไฟๆฏไบ': u'่กฃ็ดขๆฏไบ',
u'่ฃๅพๅค้ฃ': u'่ฃกๅพๅค้ฃ',
u'่ฃ้ข': u'่ฃก้ข',
u'ๅ่พจ็': u'่งฃๆๅบฆ',
u'่ฏ็ ': u'่งฃ็ขผ',
u'ๅบ็ง่ฝฆ': u'่จ็จ่ป',
u'ๆ้': u'่จฑๅฏๆฌ',
u'็้ฒ': u'่ซพ้ญฏ',
u'็้ญฏ': u'่ซพ้ญฏ',
u'ๅ้': u'่ฎๆธ',
u'็ง็น่ฟช็ฆ': u'่ฑก็ๆตทๅฒธ',
u'่ฒๅฏง': u'่ฒๅ',
u'่ดๅฎ': u'่ฒๅ',
u'ไผฏๅฉ่ฒ': u'่ฒ้ๆฏ',
u'ไผฏๅฉๅ
น': u'่ฒ้ๆฏ',
u'่ฒทๅ
': u'่ฒทๅถ',
u'ไนฐๅถ': u'่ฒทๅถ',
u'่ฒทๅถ': u'่ฒทๅถ',
u'ๆฐๆฎๅบ': u'่ณๆๅบซ',
u'ไฟกๆฏ่ฎบ': u'่ณ่จ็่ซ',
u'ๅฅ้ฉฐ': u'่ณๅฃซ',
u'ๅนณๆฒป': u'่ณๅฃซ',
u'ๅฉๆฏ้ไบ': u'่ณดๆฏ็ไบ',
u'ๅฉๆฏ้ไบ': u'่ณดๆฏ็ไบ',
u'่็ดขๆ': u'่ณด็ดขๆ',
u'่ฑ็ดขๆ': u'่ณด็ดขๆ',
u'่ฝฏ้ฉฑ': u'่ป็ขๆฉ',
u'่ปไปถ': u'่ป้ซ',
u'่ฝฏไปถ': u'่ป้ซ',
u'ๅ ่ฝฝ': u'่ผๅ
ฅ',
u'ๆดฅๅทดๅธ้ฆ': u'่พๅทดๅจ',
u'ๆดฅๅทดๅธ้': u'่พๅทดๅจ',
u'่ฏๆฑ': u'่พญๅฝ',
u'ๅ ็บณ': u'่ฟฆ็ด',
u'ๅ ็ด': u'่ฟฆ็ด',
u'่ฟฝๅถ': u'่ฟฝๅถ',
u'่ฟฝๅ
': u'่ฟฝๅถ',
u'้่ฃ': u'้่ฃก',
u'ไฟก้': u'้้',
u'้ๅถ้ฌฅ็ ': u'้ๅถ้ฌฅ็ ',
u'้ๅ
้ฌฅ็ ': u'้ๅถ้ฌฅ็ ',
u'้ๅถๆ็ ': u'้ๅถ้ฌฅ็ ',
u'ๅณ้ฃ้บต': u'้้ฃ้บต',
u'ๆนไพฟ้ข': u'้้ฃ้บต',
u'ๅฟซ้้ข': u'้้ฃ้บต',
u'่ฟๅญๅท': u'้ฃๅญ่',
u'่ฟๅถ': u'้ฒไฝ',
u'ๅ
ฅ็': u'้ฒ็',
u'็ฎๅญ': u'้็ฎๅ
',
u'้ ็จๆงๅถ': u'้ ็จๆงๅถ',
u'่ฟ็จๆงๅถ': u'้ ็จๆงๅถ',
u'ๆบซ็ดๅ่ฌ': u'้ฃๆ',
u'้ซ้ข่ฃ': u'้ซ้ข่ฃก',
u'้
ฐ': u'้ฏ',
u'ๅทจๅ': u'้
่ณ',
u'้ฉ': u'้ค',
u'้': u'้ค',
u'้ฉๅฟๆ่ง': u'้คๅฟ้ฌฅ่ง',
u'้ๅฟ้ฌฅ่ง': u'้คๅฟ้ฌฅ่ง',
u'ๅไฟๆค': u'้ฒๅฏซ',
u'้ฟๆไผฏ่ๅ้
้ฟๅฝ': u'้ฟๆไผฏ่ฏๅๅคงๅ
ฌๅ',
u'้ฟๆไผฏ่ฏๅ้
้ทๅ': u'้ฟๆไผฏ่ฏๅๅคงๅ
ฌๅ',
u'ๅชๅฃฐ': u'้่จ',
u'่ฑๆบ': u'้ข็ท',
u'้ช่ฃ็ด
': u'้ช่ฃก็ด
',
u'้ช่ฃ่ป': u'้ช่ฃก่ป',
u'้ช้้พ': u'้ช้ต้พ',
u'้้็ด ': u'้้ปด็ด ',
u'ๅผๆญฅ': u'้ๅๆญฅ',
u'ๅฃฐๅก': u'้ณๆๅก',
u'็ผบ็': u'้ ่จญ',
u'้ขๅธ': u'้ ๅธ',
u'้ ไฝ': u'้ ๅธ',
u'้ ๅ่ฃ': u'้ ๅ่ฃก',
u'ๅคด็': u'้ ญๆง',
u'็ฒๅ
ฅ็': u'้ก้ฒ็',
u'้คจ่ฃ': u'้คจ่ฃก',
u'้ฉฌ้ๅ
ฑๅๅฝ': u'้ฆฌๅฉๅ
ฑๅๅ',
u'้ฆฌ้ๅ
ฑๅๅ': u'้ฆฌๅฉๅ
ฑๅๅ',
u'้ฉฌ่ณไป': u'้ฆฌ็พไป',
u'้ฉฌๅฐไปฃๅคซ': u'้ฆฌ็พๅฐๅคซ',
u'้ฆฌ็พไปฃๅคซ': u'้ฆฌ็พๅฐๅคซ',
u'่ฌไบๅพ': u'้ฆฌ่ช้',
u'็ๅฎๅจ': u'้ปๅฎๅจ',
u'ๆดๅฎๅจ': u'้ปๅฎๅจ',
u'้ป่ฃ': u'้ป่ฃก',
u'ไฝๅพ': u'้ป้ฃๅ',
}
# Source: AdvancedLangConv-0.01, langconv/defaulttables/zh_hans.py
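# The tables above map multi-character strings as well as single characters,
# so they are usually applied with greedy longest-match substitution. Below is
# a minimal, hypothetical sketch of that lookup (illustrative only; `convert`
# is not part of the original package API):
def convert(text, table):
    """Replace substrings of ``text`` using ``table``, longest match first."""
    max_len = max(len(key) for key in table)
    out = []
    i = 0
    while i < len(text):
        # Try the longest window that still fits, shrinking until a key hits.
        for size in range(min(max_len, len(text) - i), 0, -1):
            chunk = text[i:i + size]
            if chunk in table:
                out.append(table[chunk])
                i += size
                break
        else:
            # No mapping starts here; copy the character through unchanged.
            out.append(text[i])
            i += 1
    return ''.join(out)
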
import argparse
import base64
import httplib
import urllib

import asd

# NOTE: this module targets Python 2 (httplib, urllib.quote and
# base64.decodestring are all Python 2 stdlib interfaces).


def arg_parser():
    """Setup argument Parsing."""
    parser = argparse.ArgumentParser(
        usage='%(prog)s',
        description='Gather information quickly and efficiently',
        epilog='Licensed... Go read...'
    )
    query_search = argparse.ArgumentParser(add_help=False)
    services = ['nova', 'swift', 'glance', 'keystone', 'heat', 'cinder',
                'ceilometer', 'trove', 'python', 'openstack', 'linux',
                'ubuntu', 'centos', 'mysql', 'rabbitmq', 'lvm', 'kernel',
                'networking', 'ipv4', 'ipv6', 'neutron', 'quantum', 'custom']
    meta = 'Gather information quickly and efficiently from trusted sources'
    subpar = parser.add_subparsers(title='Search Options', metavar=meta)
    for service in services:
        action = subpar.add_parser(
            service,
            parents=[query_search],
            help='Look for "%s" Information' % service
        )
        action.set_defaults(topic=service)
        action.add_argument(
            '--now',
            default=False,
            action='store_true',
            help='Perform a more CPU intense search, will produce faster'
                 ' results.'
        )
        action.add_argument('--query', nargs='*', required=True)
    return parser


class ExternalInformationIndexer(object):
    def __init__(self, config):
        # Base64-obfuscated URL prefixes: the standard salt decodes to an
        # lmgtfy.com query URL, the optimized one to a google.com search URL.
        standard_salt = 'aHR0cDovL2xtZ3RmeS5jb20vP3E9'
        optimized_salt = 'aHR0cHM6Ly93d3cuZ29vZ2xlLmNvbS93ZWJocCNxPQ=='
        self.config = config
        if self.config.get('now', False) is True:
            self.definition_salt = optimized_salt
        else:
            self.definition_salt = standard_salt
        query = self.config.get('query')
        topic = self.config.get('topic')
        if topic != 'custom':
            query.insert(0, '"%s"' % topic)
        self.query = urllib.quote(' '.join(query))
        with asd.Timer() as time:
            self.indexer()
        print('Advanced Search completed in %s Seconds' % time.interval)

    def indexer(self):
        """Builds the query content for our targeted search."""
        prefix = base64.decodestring(self.definition_salt)
        self.fetch_results(query_text='%s%s' % (prefix, self.query))

    @staticmethod
    def fetch_results(query_text):
        """Opens a web browser tab containing the search information.

        Sends a query request to the Index engine for the provided search
        criteria.

        :param query_text: ``str``
        """
        import webbrowser
        if webbrowser.open(url=query_text) is not True:
            # No usable browser: shorten the URL (the encoder salt decodes to
            # the tinyurl.com host) and print the link instead.
            encoder = 'dGlueXVybC5jb20='
            api = 'L2FwaS1jcmVhdGUucGhwP3VybD0lcw=='
            conn = httplib.HTTPConnection(host=base64.decodestring(encoder))
            conn.request('GET', base64.decodestring(api) % query_text)
            resp = conn.getresponse()
            if resp.status >= 300:
                raise httplib.CannotSendRequest('failed to make request...')
            print("It seems that you are not executing from a desktop\n"
                  "operating system or you don't have a browser installed.\n"
                  "Here is the link to the content that you're looking for.\n")
            print('\nContent: %s\n' % resp.read())


def main():
    """Run Main Program."""
    parser = arg_parser()
    config = vars(parser.parse_args())
    ExternalInformationIndexer(config=config)


if __name__ == '__main__':
    main()
# Source: AdvancedSearchDiscovery-0.0.3, asd/run.py
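# Hypothetical usage of the CLI defined above (the exact module path and the
# availability of the `asd` package are assumptions, not documented behavior):
#
#   $ python run.py nova --query live migration fails
#   $ python run.py custom --now --query neutron ovs flows
#
# Each subcommand is one of the `services` topics; `--now` switches the query
# prefix from the standard salt to the optimized (Google) one.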