# Adifpy

## Table of Contents

- [Introduction](#Introduction)
- [Background](#Background)
- [How to Use](#How-to-Use)
- [Software Organization](#Software-Organization)
  - [Directory Structure](#Directory-Structure)
  - [Subpackages](#Subpackages)
- [Implementation](#Implementation)
  - [Libraries](#Libraries)
  - [Modules and Classes](#Modules-and-Classes)
  - [Elementary Functions](#Elementary-Functions)
- [Extension](#Extension)
  - [Reverse Mode](#Reverse-Mode)
  - [Visualization](#Visualization)
- [Impact](#Impact)
- [Future](#Future)

## Introduction

This software allows users to evaluate and differentiate their functions. It is a powerful tool for computing complex derivatives and visualizing the results, and it is more efficient than alternatives such as symbolic differentiation. Applications are widespread, ranging from graphing simple functions to differentiating complex, high-dimensional functions, with uses in optimization, machine learning, and data analysis.

## Background

Traditional methods for differentiation include symbolic differentiation and numerical differentiation. Each of these techniques brings its own challenges when used for computational science: symbolic differentiation requires converting complex computer programs into simple components and often produces complex, cryptic expressions, while numerical differentiation is susceptible to floating point and rounding errors. Automatic differentiation (AD) solves these problems: any mathematical function (for which a derivative is needed) can be broken down into a series of constituent elementary (binary and unary) operations, executed in a specific order on a predetermined set of inputs.
A technique for visualizing the sequence of operations corresponding to the function is the computational graph, with nodes representing intermediate variables and edges leaving the nodes representing the operations applied to those intermediate variables. AD combines the known derivatives of the constituent elementary operations (e.g. arithmetic and transcendental functions) via the chain rule to find the derivative of the overall composition.

For example, for the hypothetical function ![equation](https://latex.codecogs.com/gif.image?%5Cbg_white%5Cinline%20%5Cdpi%7B110%7Dy%20=%20f(g(h(x)))), where ![equation](https://latex.codecogs.com/gif.image?%5Cbg_white%5Cinline%20%5Cdpi%7B110%7Df,%20g,%20h) all represent elementary operations, we can pose ![equation](https://latex.codecogs.com/gif.image?%5Cbg_white%5Cinline%20%5Cdpi%7B110%7Dv_0%20=%20x,%5Cquad%20v_1%20=%20h(v_0),%5Cquad%20v_2%20=%20g(v_1),%5Cquad%20y%20=%20v_3%20=%20f(v_2)). The desired output is ![equation](https://latex.codecogs.com/gif.image?%5Cbg_white%5Cinline%20%5Cdpi%7B110%7D%5Cfrac%7Bdy%7D%7Bdx%7D), and by the chain rule and simple derivatives, we obtain:

<p align="center"> <img src="https://latex.codecogs.com/gif.image?%5Cbg_white%20%5Cdpi%7B110%7D%5Cfrac%7Bdy%7D%7Bdx%7D%20=%20%5Cfrac%7Bdv_3%7D%7Bdv_2%7D%20%5Ccdot%20%5Cfrac%7Bdv_2%7D%7Bdv_1%7D%20%5Ccdot%20%5Cfrac%7Bdv_1%7D%7Bdv_0%7D"> </p>

Our implementation of AD uses dual numbers to calculate the derivatives of individual components. Dual numbers have real and dual components, taking the form ![equation](https://latex.codecogs.com/gif.image?%5Cbg_white%5Cinline%20%5Cdpi%7B110%7Da%20&plus;%20b%5Cepsilon$%20with%20$%5Cepsilon%5E2%20=%200) and ![equation](https://latex.codecogs.com/gif.image?%5Cbg_white%5Cinline%20%5Cdpi%7B110%7D%5Cepsilon%20%5Cneq%200), where `a` and `b` are real.
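To make the dual-number bookkeeping concrete, here is a standalone sketch (plain Python, not part of Adifpy) of a minimal dual number whose overloaded `+` and `*` propagate derivatives automatically:

```python
# Standalone sketch (not part of Adifpy): a minimal dual number a + b*eps
# with eps**2 == 0, carrying a value and its derivative together.
class Dual:
    def __init__(self, real, dual=0.0):
        self.real, self.dual = real, dual

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.real + other.real, self.dual + other.dual)

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # (a + b*eps) * (c + d*eps) = ac + (ad + bc)*eps, since eps**2 == 0
        return Dual(self.real * other.real,
                    self.real * other.dual + self.dual * other.real)

# Evaluate f(x) = x*x + x at x = 3 with seed 1: the real part is the value
# f(3) = 12, the dual part is the derivative f'(3) = 2*3 + 1 = 7.
y = (lambda x: x * x + x)(Dual(3.0, 1.0))
```

This is the same mechanism the package uses: pass a dual number through the function once, and the output's dual part is the directional derivative.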
By the Taylor series expansion of a function around a point, notice that evaluating a function at ![equation](https://latex.codecogs.com/gif.image?%5Cbg_white%5Cinline%20%5Cdpi%7B110%7Da%20&plus;%20%5Cepsilon) yields:

<p align="center"> <img src="https://latex.codecogs.com/gif.image?%5Cbg_white%20%5Cdpi%7B110%7Df(a%20&plus;%20%5Cepsilon)%20=%20f(a)%20&plus;%20%5Cfrac%7Bf'(a)%7D%7B1!%7D%20%5Cepsilon%20&plus;%20%5Cfrac%7Bf''(a)%7D%7B2!%7D%20%5Cepsilon%5E2%20&plus;%20...%20=%20f(a)%20&plus;%20f'(a)%20%5Cepsilon"> </p>

Hence, by evaluating the function at the desired point ![equation](https://latex.codecogs.com/gif.image?%5Cbg_white%5Cinline%20%5Cdpi%7B110%7Da%20&plus;%20%5Cepsilon), the output's real and dual components are the function evaluated at `a` and the derivative of the function evaluated at `a`, respectively. This is an efficient way of calculating the requisite derivatives.

## How to Use

First, ensure that you are using Python 3.10 or newer. All subsequent steps can (and should) be completed in a virtual environment so as not to pollute your base Python installation. To create and activate a new virtual environment, use the following:

```
python3 -m venv [desired/path/to/venv]
source [desired/path/to/venv]/bin/activate
```

Next, clone the package from this GitHub repository and install the needed dependencies and the package:

```
git clone https://code.harvard.edu/CS107/team33.git
python3 -m pip install -r requirements.txt
python3 -m pip install .
```

Now you're ready to use the package. Continue to the [Example](#Example) to test out the package!

### Example

First, import the package in your Python code:

```
import Adifpy
```

and create an `Evaluator` object, which takes a callable function as an argument:

```
evaluator = Adifpy.Evaluator(lambda x : x**2)
```

Next, we want to find the value and derivative of the function at a point (currently, only scalar functions with 1 input and 1 output are supported).
We can use the `Evaluator`'s `eval` function, passing in the point at which you want to evaluate (and, optionally, a scalar seed vector):

```
output = evaluator.eval(3)
```

This function returns a tuple in the form `(value, derivative)`, where the value is the evaluation of the function at that point (in this case, 9) and the derivative is the derivative of the function at that point (in this case, 6).

Additionally, a seed vector (for now, only scalars such as type `int` or `float` are supported) can be passed to take the derivative with respect to a different seed vector. For example, to take the derivative with respect to a seed vector of `2`, you could call the following:

```
output2 = evaluator.eval(3, seed_vector=2)
```

which would return `(9, 12)` (since the directional derivative is in the same direction, with twice the magnitude).

## Software Organization

The following section outlines our plans for organizing the package directory, sub-packages, modules, classes, and deployment.

### Directory Structure

<pre>
adifpy/
├── docs
│   ├── milestone1
│   ├── milestone2
│   ├── milestone2_progress
│   └── documentation
├── LICENSE
├── README.md
├── requirements.txt
├── pyproject.toml
├── Adifpy
│   ├── differentiate
│   │   ├── <a href="#dual_numberpy">dual_number.py</a>
│   │   ├── <a href="#evaluatorpy">evaluator.py</a>
│   │   ├── <a href="#forward_modepy">forward_mode.py</a>
│   │   ├── <a href="#function_treepy">function_tree.py</a>
│   │   └── <a href="#reverse_modepy">reverse_mode.py</a>
│   ├── visualize
│   │   └── <a href="#graph_functionpy">graph_function.py</a>
│   ├── test
│   │   ├── README.md
│   │   ├── run_tests.sh
│   │   ├── test_dual_number.py
│   │   └── ...
(unit and integration tests)
│   ├── __init__.py
│   └── config.py
└── .github
    └── workflows
        ├── coverage.yaml
        └── test.yaml
</pre>

### Subpackages

The `Adifpy` directory contains the source code of our package, which contains 3 subpackages: `differentiate`, `visualize`, and `test`, described below.

#### Differentiate

The `differentiate` subpackage currently contains the modules required to perform forward mode AD on functions from R to R. Contained in this subpackage are the modules `dual_number.py`, `elementary_functions.py`, `evaluator.py`, `forward_mode.py`, `function_tree.py`, and `reverse_mode.py`. For more information on each module, see [Modules and Classes](#Modules-and-Classes).

#### Visualize

This subpackage has not been implemented yet. Check out our implementation plan [below](#Visualization).

#### Test

The test suite is contained in the `test` sub-package, as shown above in the [Directory Structure](#Directory-Structure). The test directory contains a `run_tests.sh`, which installs the package and runs the relevant `pytest` commands to display data on the testing suite (similar to the CI workflows).

The individual test files, each of which is named in the `test_*.py` format, test a different aspect of the package. Within each file, each function (also named `test_*`) tests a smaller detail of that aspect. For example, the `test_dual_number.py` test module tests the implementation of the `DualNumber` class, and each function in that module tests one of the overloaded operators. Thus, error messaging will be comprehensive, should one of these operators be changed and fail to work.

The easiest way to run the test suite is to go to the `test` directory and run `./run_tests.sh`.

## Implementation

Major data structures, including descriptions of how dual numbers are implemented, are described in the [Modules and Classes](#Modules-and-Classes) section below.

### Libraries

The `differentiate` sub-package requires the `NumPy` library.
Additionally, the `visualization` sub-package will require `MatPlotLib` for displaying graphs. Additional libraries may be required later for ease of computation or visualization. These requirements are specified in `requirements.txt` for easy installation.

### Modules and Classes

#### `dual_number.py`

The `DualNumber` class, stored in this module, contains the functionality for dual numbers for automatic differentiation. When a forward pass (in forward mode) is performed on a user function, a `DualNumber` object is passed in to mimic the function's numeric or vector input. All of `DualNumber`'s major mathematical dunder methods are overloaded so that the `DualNumber` is updated for each of the function's elementary operations. Each of the binary dunder methods (addition, division, etc.) works with both other numeric types (integers and floats) and other `DualNumber`s.

#### `evaluator.py`

The `Evaluator` class, stored in this module, is the user's main communication with the package. An `Evaluator` object is defined by its function, which is provided by the user on creation. A derivative can be calculated at any point, with any seed vector, by calling an `Evaluator`'s `eval` function. The `Evaluator` class ensures that a user's function is valid, decides whether to use forward or reverse mode (based on performance), and returns the derivative on `eval` calls. *When reverse mode is implemented, the `Evaluator` class may also contain optimizations for making future `eval` calls faster by storing a computational graph.*

#### `forward_mode.py`

This module contains only the `forward_mode` method, which takes a user function, evaluation point, and seed vector. Its implementation is remarkably simple: a `DualNumber` is created with the real part as the evaluation point and the dual part as the seed vector.
This `DualNumber` is then passed through the user's function, and the real and dual components of the output `DualNumber` are the function output and derivative, respectively.

#### `function_tree.py`

The `FunctionTree` class, stored in this module, is a representation of a computational graph in the form of a tree, where intermediate variables are stored as nodes. The parent-child relationships between these nodes represent the elementary operations on these intermediate variables. This class contains optimizations such as ensuring duplicate nodes are avoided. *This module is currently unused (and un-implemented). When reverse mode is implemented, a given `Evaluator` object will build up and store a `FunctionTree` for optimization.*

#### `reverse_mode.py`

This module contains only the `reverse_mode` method, which takes the same arguments as `forward_mode`. This function is not yet implemented.

#### `graph_tree.py`

This module will contain functionality for displaying a presentable representation of a computational graph in an image. Using a `FunctionTree` object, the resulting image will be a tree-like structure with nodes and connections representing intermediate variables and elementary operations. This functionality is not yet implemented.

#### `graph_function.py`

This module will contain functionality for graphing a function and its derivative. It will create an `Evaluator` object and make the necessary `eval` calls to fill a graph for display. This functionality is not yet implemented.

### Elementary Functions

Many elementary functions, such as trigonometric, inverse trigonometric, and exponential functions, cannot be overloaded via Python's dunder methods (as addition and subtraction can). However, a user must still be able to use these operators in their functions, but cannot use the standard `math` or `np` versions, since a `DualNumber` object is passed to the function for forward passes.
Thus, we define a module `elementary_functions.py` that contains methods which take a `DualNumber` and return a `DualNumber`, with the real part equal to the elementary operation applied to the real part, and the derivative of the operation applied to the dual part. These functions are thus essentially our package's **storage** for the common derivatives (cosine is the derivative of sine, etc.), where the storage of the derivative is the assignment of the dual part of the output of these elementary operations. These operations will be automatically imported in the package's `__init__.py` so that users can simply call `Adifpy.sin()` or `Adifpy.cos()` (for this milestone, our implementation requires users to call `ef.sin()` and `ef.cos()`, not `Adifpy.sin()` or `Adifpy.cos()`), as they would with `np.sin()` and `np.cos()`.

## Extension

Now that our forward mode implementation is complete, we will move on to implementing additional features and conveniences for the user.

### Reverse Mode

We will implement reverse mode AD in the `differentiate` subpackage. Given that we have already been quizzed on the background math, encoding this process should not be too onerous. One of the biggest challenges we foresee is determining when it is best to use reverse mode and when it is best to use forward mode. Forward mode is clearly better when there are far more outputs than inputs, and vice versa for reverse mode, but in cases where the numbers of inputs and outputs are similar, the choice is not so simple. To address this, we will run a series of practical tests on functions of different dimensions and manually encode the most efficient choices into `evaluator.py`.

### Visualization

We are planning to create a visualization tool with `MatPlotLib` that can plot the computational graph (calculated in reverse mode) of simple functions being differentiated.
The computational graph of very complex functions with many inputs and outputs can be impractical to represent on a screen, so one of the biggest challenges we will face is having our program determine when it can produce a visualization that can be easily rendered, and when it cannot.

## Impact

## Future
Adifpy
/adifpy-0.0.3.tar.gz/adifpy-0.0.3/docs/documentation.md
documentation.md
## Distribution of Tasks

- [User Interaction](#User-Interaction)
- [Differentiation](#Differentiation)
- [Visualization](#Visualization)
- [Testing Suite](#Testing-Suite)

### User Interaction

**Aaron** will handle the interaction with the user, which includes the `main.py` file and the `construct` sub-package, which includes the `function_tree.py` and `node.py` files.

### Differentiation

**Ream** and **Alex** will handle the `differentiate` sub-package, which includes implementing dual numbers (in `dual_number.py`) and the forward and reverse passes (in `forward_pass.py` and `reverse_pass.py`).

### Visualization

**Jack** will handle the `visualize` sub-package, which includes the `graph_tree.py` file and any other visualizations we find practical and useful.

### Testing Suite

**Eli** will lead the testing suite (the `test` sub-package) and all of its unit and other tests. **Jack** will also assist with the testing suite, since the workload may be very high (especially at the beginning, for creating black-box tests).

## Progress

Before the deadline for Milestone 2B, each group member will have completed the outline for their part of the package. This outline includes creating the Python files, classes, and functions. No implementation will be completed yet, so all functions will simply `pass` for now. This will give us a better idea of our workloads and allow us to redistribute tasks if needed.
Adifpy
/adifpy-0.0.3.tar.gz/adifpy-0.0.3/docs/milestone2_progress.md
milestone2_progress.md
import math
import matplotlib.pyplot as plt
from .Generaldistribution import Distribution


class Gaussian(Distribution):
    """Gaussian distribution class for calculating and
    visualizing a Gaussian distribution.

    Attributes:
        mean (float) representing the mean value of the distribution
        stdev (float) representing the standard deviation of the distribution
        data_list (list of floats) a list of floats extracted from the data file
    """

    def __init__(self, mu=0, sigma=1):
        Distribution.__init__(self, mu, sigma)

    def calculate_mean(self):
        """Function to calculate the mean of the data set.

        Args:
            None

        Returns:
            float: mean of the data set
        """
        avg = 1.0 * sum(self.data) / len(self.data)
        self.mean = avg
        return self.mean

    def calculate_stdev(self, sample=True):
        """Function to calculate the standard deviation of the data set.

        Args:
            sample (bool): whether the data represents a sample or population

        Returns:
            float: standard deviation of the data set
        """
        if sample:
            n = len(self.data) - 1
        else:
            n = len(self.data)

        mean = self.calculate_mean()

        sigma = 0
        for d in self.data:
            sigma += (d - mean) ** 2
        sigma = math.sqrt(sigma / n)

        self.stdev = sigma
        return self.stdev

    def plot_histogram(self):
        """Function to output a histogram of the instance variable data using
        the matplotlib pyplot library.

        Args:
            None

        Returns:
            None
        """
        plt.hist(self.data)
        plt.title('Histogram of Data')
        plt.xlabel('data')
        plt.ylabel('count')

    def pdf(self, x):
        """Probability density function calculator for the gaussian distribution.

        Args:
            x (float): point for calculating the probability density function

        Returns:
            float: probability density function output
        """
        return (1.0 / (self.stdev * math.sqrt(2 * math.pi))) * \
            math.exp(-0.5 * ((x - self.mean) / self.stdev) ** 2)

    def plot_histogram_pdf(self, n_spaces=50):
        """Function to plot the normalized histogram of the data and a plot of
        the probability density function along the same range.

        Args:
            n_spaces (int): number of data points

        Returns:
            list: x values for the pdf plot
            list: y values for the pdf plot
        """
        min_range = min(self.data)
        max_range = max(self.data)

        # calculates the interval between x values
        interval = 1.0 * (max_range - min_range) / n_spaces

        x = []
        y = []

        # calculate the x values to visualize
        for i in range(n_spaces):
            tmp = min_range + interval * i
            x.append(tmp)
            y.append(self.pdf(tmp))

        # make the plots
        fig, axes = plt.subplots(2, sharex=True)
        fig.subplots_adjust(hspace=.5)

        axes[0].hist(self.data, density=True)
        axes[0].set_title('Normed Histogram of Data')
        axes[0].set_ylabel('Density')

        axes[1].plot(x, y)
        axes[1].set_title('Normal Distribution for \n Sample Mean and Sample Standard Deviation')
        axes[1].set_ylabel('Density')  # fixed: originally set axes[0] twice
        plt.show()

        return x, y

    def __add__(self, other):
        """Function to add together two Gaussian distributions.

        Args:
            other (Gaussian): Gaussian instance

        Returns:
            Gaussian: Gaussian distribution
        """
        result = Gaussian()
        result.mean = self.mean + other.mean
        result.stdev = math.sqrt(self.stdev ** 2 + other.stdev ** 2)
        return result

    def __repr__(self):
        """Function to output the characteristics of the Gaussian instance.

        Args:
            None

        Returns:
            string: characteristics of the Gaussian
        """
        return "mean {}, standard deviation {}".format(self.mean, self.stdev)
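The `__add__` method above relies on the fact that the sum of two independent Gaussians has the summed mean and the root-sum-square standard deviation. A quick standalone check of that arithmetic (without the `Distribution` base class, which is not shown here):

```python
import math

# Standalone check of the arithmetic in Gaussian.__add__: means add,
# variances add (so standard deviations combine as a root-sum-square).
mean_a, stdev_a = 10.0, 3.0
mean_b, stdev_b = 4.0, 4.0

mean_sum = mean_a + mean_b                          # 14.0
stdev_sum = math.sqrt(stdev_a ** 2 + stdev_b ** 2)  # sqrt(9 + 16) = 5.0
```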
Adithya-Gaussian-Distribution
/Adithya%20Gaussian%20Distribution-0.1.tar.gz/Adithya Gaussian Distribution-0.1/Adithya Gaussian Distribution/Gaussiandistribution.py
Gaussiandistribution.py
import math
import matplotlib.pyplot as plt
from .Generaldistribution import Distribution


class Binomial(Distribution):
    """Binomial distribution class for calculating and
    visualizing a Binomial distribution.

    Attributes:
        mean (float) representing the mean value of the distribution
        stdev (float) representing the standard deviation of the distribution
        data_list (list of floats) a list of floats to be extracted from the data file
        p (float) representing the probability of an event occurring
        n (int) number of trials
    """

    def __init__(self, prob=.5, size=20):
        self.n = size
        self.p = prob
        Distribution.__init__(self, self.calculate_mean(), self.calculate_stdev())

    def calculate_mean(self):
        """Function to calculate the mean from p and n.

        Args:
            None

        Returns:
            float: mean of the data set
        """
        self.mean = self.p * self.n
        return self.mean

    def calculate_stdev(self):
        """Function to calculate the standard deviation from p and n.

        Args:
            None

        Returns:
            float: standard deviation of the data set
        """
        self.stdev = math.sqrt(self.n * self.p * (1 - self.p))
        return self.stdev

    def replace_stats_with_data(self):
        """Function to calculate p and n from the data set.

        Args:
            None

        Returns:
            float: the p value
            float: the n value
        """
        self.n = len(self.data)
        self.p = 1.0 * sum(self.data) / len(self.data)
        self.mean = self.calculate_mean()
        self.stdev = self.calculate_stdev()

    def plot_bar(self):
        """Function to output a bar chart of the instance variable data using
        the matplotlib pyplot library.

        Args:
            None

        Returns:
            None
        """
        plt.bar(x=['0', '1'], height=[(1 - self.p) * self.n, self.p * self.n])
        plt.title('Bar Chart of Data')
        plt.xlabel('outcome')
        plt.ylabel('count')

    def pdf(self, k):
        """Probability mass function calculator for the binomial distribution.

        Args:
            k (int): number of successes

        Returns:
            float: probability mass function output
        """
        a = math.factorial(self.n) / (math.factorial(k) * (math.factorial(self.n - k)))
        b = (self.p ** k) * (1 - self.p) ** (self.n - k)
        return a * b

    def plot_bar_pdf(self):
        """Function to plot the pdf of the binomial distribution.

        Args:
            None

        Returns:
            list: x values for the pdf plot
            list: y values for the pdf plot
        """
        x = []
        y = []

        # calculate the x values to visualize
        for i in range(self.n + 1):
            x.append(i)
            y.append(self.pdf(i))

        # make the plots
        plt.bar(x, y)
        plt.title('Distribution of Outcomes')
        plt.ylabel('Probability')
        plt.xlabel('Outcome')
        plt.show()

        return x, y

    def __add__(self, other):
        """Function to add together two Binomial distributions with equal p.

        Args:
            other (Binomial): Binomial instance

        Returns:
            Binomial: Binomial distribution
        """
        assert self.p == other.p, 'p values are not equal'

        result = Binomial()
        result.n = self.n + other.n
        result.p = self.p
        result.calculate_mean()
        result.calculate_stdev()
        return result

    def __repr__(self):
        """Function to output the characteristics of the Binomial instance.

        Args:
            None

        Returns:
            string: characteristics of the Binomial
        """
        return "mean {}, standard deviation {}, p {}, n {}".\
            format(self.mean, self.stdev, self.p, self.n)
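The `pdf` method above implements the binomial probability mass function P(k) = C(n, k) · p^k · (1−p)^(n−k). A standalone check of the same formula (outside the class, since the `Distribution` base is not shown here):

```python
import math

# Standalone check of the binomial pmf formula used in Binomial.pdf.
def binomial_pmf(k, n, p):
    combinations = math.factorial(n) / (math.factorial(k) * math.factorial(n - k))
    return combinations * (p ** k) * ((1 - p) ** (n - k))

# Fair coin, 4 trials, exactly 2 heads: C(4, 2) * 0.5**4 = 6 * 0.0625 = 0.375
prob = binomial_pmf(2, 4, 0.5)
```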
Adithya-Gaussian-Distribution
/Adithya%20Gaussian%20Distribution-0.1.tar.gz/Adithya Gaussian Distribution-0.1/Adithya Gaussian Distribution/Binomialdistribution.py
Binomialdistribution.py
__project__ = "Draugr"
__author__ = "Christian Heider Nielsen"
__version__ = "1.0.1"
__doc__ = r"""
Created on 27/04/2019

@author: cnheider
"""

import datetime
import os
from logging import warning
from pathlib import Path
from typing import Any

import pkg_resources
from apppath import AppPath

with open(Path(__file__).parent / "README.md", "r") as this_init_file:
    __doc__ += this_init_file.read()

# with open(Path(__file__).parent.parent / "README.md", "r") as this_init_file:
#     __doc__ += this_init_file.read()

# __all__ = ["PROJECT_APP_PATH", "PROJECT_NAME", "PROJECT_VERSION", "get_version"]


def dist_is_editable(dist: Any) -> bool:
    """Return True if the given Distribution is an editable installation."""
    import sys

    for path_item in sys.path:
        egg_link = Path(path_item) / f"{dist.project_name}.egg-link"
        if egg_link.is_file():
            return True
    return False


PROJECT_ORGANISATION = "pything"
PROJECT_NAME = __project__.lower().strip().replace(" ", "_")
PROJECT_VERSION = __version__
PROJECT_YEAR = 2018
PROJECT_AUTHOR = __author__.lower().strip().replace(" ", "_")
PROJECT_APP_PATH = AppPath(app_name=PROJECT_NAME, app_author=PROJECT_AUTHOR)
PACKAGE_DATA_PATH = Path(pkg_resources.resource_filename(PROJECT_NAME, "data"))
INCLUDE_PROJECT_READMES = False

distributions = {v.key: v for v in pkg_resources.working_set}
if PROJECT_NAME in distributions:
    distribution = distributions[PROJECT_NAME]
    DEVELOP = dist_is_editable(distribution)
else:
    DEVELOP = True


def get_version(append_time: Any = DEVELOP) -> str:
    """Build the package version string, optionally appending a UTC timestamp."""
    version = __version__
    if not version:
        version = os.getenv("VERSION", "0.0.0")

    if append_time:
        now = datetime.datetime.utcnow()
        date_version = now.strftime("%Y%m%d%H%M%S")
        # date_version = time.time()
        if version:
            # Most git tags are prefixed with 'v' (example: v1.2.3); this is
            # never desirable for artifact repositories, so we strip the
            # leading 'v' if it's present.
            version = (
                version[1:]
                if isinstance(version, str) and version.startswith("v")
                else version
            )
        else:
            # Default version is an ISO 8601 compliant datetime. PyPI doesn't allow
            # the colon ':' character in its versions, and time is required to allow
            # for multiple publications to master in one day. This datetime string
            # uses the 'basic' ISO 8601 format for both its date and time components
            # to avoid issues with the colon character (ISO requires that the date
            # and time components of a date-time string be uniformly basic or
            # extended, which is why the date component has no dashes).
            #
            # Publications using datetime versions should only be made from master
            # to represent the HEAD moving forward.
            warning(
                f"Environment variable VERSION is not set, only using datetime: {date_version}"
            )
            # warn(f'Environment variable VERSION is not set, only using timestamp: {version}')
        version = f"{version}.{date_version}"

    return version


if __version__ is None:
    __version__ = get_version(append_time=True)

__version_info__ = tuple(int(segment) for segment in __version__.split("."))
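The tag-normalisation branch of `get_version` can be exercised in isolation. A standalone sketch of the same logic (the `normalise` helper is hypothetical, not part of Draugr):

```python
import datetime

# Hypothetical standalone sketch of the normalisation in get_version above:
# strip a leading 'v' (git-tag style) and optionally append a UTC timestamp.
def normalise(version, append_time=False):
    if isinstance(version, str) and version.startswith("v"):
        version = version[1:]
    if append_time:
        stamp = datetime.datetime.utcnow().strftime("%Y%m%d%H%M%S")
        version = f"{version}.{stamp}"
    return version
```

For example, `normalise("v1.2.3")` yields `"1.2.3"`, and with `append_time=True` a 14-digit `YYYYMMDDHHMMSS` segment is appended.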
Adjacency
/Adjacency-1.0.1-py3-none-any.whl/adjacency/__init__.py
__init__.py
__all__ = ('adjax_response',)

from django.http import HttpResponse
from django.shortcuts import render_to_response, redirect
from django.template import RequestContext
from django.conf import settings
from django.core.serializers import json, serialize
from django.db.models.query import QuerySet  # added: used by JsonResponse below
from django.utils import simplejson  # added: used by JsonResponse below
from django.utils.functional import Promise
from django.utils.encoding import force_unicode

from adjax.base import get_store


class LazyEncoder(json.DjangoJSONEncoder):
    def default(self, obj):
        if isinstance(obj, Promise):
            return force_unicode(obj)
        return super(LazyEncoder, self).default(obj)


class JsonResponse(HttpResponse):
    def __init__(self, object):
        if isinstance(object, QuerySet):
            content = serialize('json', object)
        else:
            content = simplejson.dumps(
                object, indent=2, cls=LazyEncoder, ensure_ascii=False)
        super(JsonResponse, self).__init__(
            content, content_type='application/json')


# Where to redirect to when the view is called without an ajax request.
DEFAULT_REDIRECT = getattr(settings, 'ADJAX_DEFAULT_REDIRECT', None)
ADJAX_CONTEXT_KEY = 'adjax'


def adjax_response(func):
    """Render the response using JSON, if appropriate."""
    # TODO allow a template to be given for non-ajax requests
    template_name = None

    def wrapper(request, *args, **kw):
        output = func(request, *args, **kw)
        store = get_store(request)

        # If a dict is given, add that to the output
        if output is None:
            output = {}
        elif isinstance(output, dict):
            output = output.copy()
            output.pop('request', None)
            for key, val in output.items():
                store.extra(key, val)
        # Intercept redirects
        elif isinstance(output, HttpResponse) and output.status_code in (301, 302):
            store.redirect(output['Location'])

        if request.is_ajax():
            return store.json_response

        if isinstance(output, dict):
            # If we have a template, render that
            if template_name:
                output.setdefault(ADJAX_CONTEXT_KEY, store)
                return render_to_response(template_name, output,
                                          context_instance=RequestContext(request))
            # Try and redirect somewhere useful
            if 'HTTP_REFERER' in request.META:
                return redirect(request.META['HTTP_REFERER'])
            elif DEFAULT_REDIRECT:
                return redirect(DEFAULT_REDIRECT)
            else:
                return HttpResponse()
        return output

    return wrapper
Adjax
/Adjax-1.0.1.tar.gz/Adjax-1.0.1/adjax/decorators.py
decorators.py
from utils import get_key, JsonResponse, get_template_include_key
from django.contrib import messages
from django.core import urlresolvers
from django.template.context import RequestContext
from django.template.loader import render_to_string


def get_store(request):
    """Get a relevant store object from the given request."""
    if not hasattr(request, '_adjax_store'):
        request._adjax_store = AdjaxStore(request)
    return request._adjax_store


class AdjaxStore(object):
    """This class will help store ajax data collected in views."""

    def __init__(self, request):
        self.request = request
        self.update_data = {}
        self.form_data = {}
        self.replace_data = {}
        self.hide_data = []
        self.extra_data = {}
        self.redirect_data = None

    @property
    def messages_data(self):
        return [{'tags': m.tags, 'content': unicode(m), 'level': m.level}
                for m in messages.get_messages(self.request)]

    def update(self, obj, attributes=None):
        """Make values from a given object available."""
        for attr in attributes:
            value = getattr(obj, attr)
            if callable(value):
                value = value()
            self.update_data[get_key(obj, attr)] = value

    def form(self, form_obj):
        """Validate the given form and send errors to the browser."""
        if not form_obj.is_valid():
            for name, errors in form_obj.errors.items():
                if form_obj.prefix:
                    key = 'id_%s-%s' % (form_obj.prefix, name)
                else:
                    key = 'id_%s' % name
                self.form_data[key] = errors

    def replace(self, element, html):
        """Replace the given DOM element with the given html.

        The DOM element is specified using css identifiers. Some javascript
        libraries may have an extended syntax, which can be used if you don't
        value portability.
        """
        self.replace_data[element] = html

    def hide(self, element):
        """Hide the given DOM element.

        The DOM element is specified using css identifiers. Some javascript
        libraries may have an extended syntax, which can be used if you don't
        value portability.
        """
        self.hide_data.append(element)

    def redirect(self, to, *args, **kwargs):
        """Redirect the browser dynamically to another page."""
        if hasattr(to, 'get_absolute_url'):
            self.redirect_data = to.get_absolute_url()
            return
        try:
            self.redirect_data = urlresolvers.reverse(to, args=args, kwargs=kwargs)
            return
        except urlresolvers.NoReverseMatch:
            # If this is a callable, re-raise.
            if callable(to):
                raise
            # If this doesn't "feel" like a URL, re-raise.
            if '/' not in to and '.' not in to:
                raise
        # Finally, fall back and assume it's a URL
        self.redirect_data = to

    def extra(self, key, value):
        """Send additional information to the browser."""
        self.extra_data[key] = value

    def render_to_response(self, template_name, dictionary=None, prefix=None,
                           context_instance=None):
        """Update any included templates."""
        # Because we have access to the request object, we can use request context.
        # This is not analogous to render_to_string's interface.
        if context_instance is None:
            context_instance = RequestContext(self.request)
        rendered_content = render_to_string(template_name, dictionary,
                                            context_instance=context_instance)
        dom_element = ".%s" % get_template_include_key(template_name, prefix)
        self.replace(dom_element, rendered_content)

    @property
    def json_response(self):
        """Return a json response with our ajax data."""
        elements = (
            ('extra', self.extra_data),
            ('messages', self.messages_data),
            ('forms', self.form_data),
            ('replace', self.replace_data),
            ('hide', self.hide_data),
            ('update', self.update_data),
            ('redirect', self.redirect_data),
        )
        return JsonResponse(dict((a, b) for a, b in elements if b))
Adjax
/Adjax-1.0.1.tar.gz/Adjax-1.0.1/adjax/base.py
base.py
from django.core.serializers import json, serialize
from django.http import HttpResponse
from django.utils import simplejson
from django.db.models.query import QuerySet

try:
    import hashlib
    hash_function = hashlib.sha1
except ImportError:
    import sha
    hash_function = sha.new


def get_key(instance, field_name):
    """ Returns the key that will be used to identify dynamic fields in the DOM. """
    # TODO: Avoid any characters that may not appear in class names
    m = instance._meta
    return '-'.join(('data', m.app_label, m.object_name, str(instance.pk), field_name))


def get_template_include_key(template_name, prefix=None):
    """ Get a valid element class name, we'll stick to ascii letters, numbers
        and hyphens.  NB class names cannot start with a hyphen.
    """
    digest = int(hash_function(template_name).hexdigest(), 16)
    hash = base36.from_decimal(digest)
    if prefix:
        return 'tpl-%s-%s' % (prefix, hash)
    else:
        return 'tpl-%s' % (hash)


class JsonResponse(HttpResponse):
    def __init__(self, obj):
        if isinstance(obj, QuerySet):
            content = serialize('json', obj)
        else:
            content = simplejson.dumps(obj, indent=2, cls=json.DjangoJSONEncoder, ensure_ascii=False)
        super(JsonResponse, self).__init__(content, content_type='application/json')


""" Convert numbers from base 10 integers to base X strings and back again.

    Sample usage:

    >>> base20 = BaseConverter('0123456789abcdefghij')
    >>> base20.from_decimal(1234)
    '31e'
    >>> base20.to_decimal('31e')
    1234

    From http://www.djangosnippets.org/snippets/1431/
"""

class BaseConverter(object):
    decimal_digits = "0123456789"

    def __init__(self, digits):
        self.digits = digits

    def from_decimal(self, i):
        return self.convert(i, self.decimal_digits, self.digits)

    def to_decimal(self, s):
        return int(self.convert(s, self.digits, self.decimal_digits))

    def convert(number, fromdigits, todigits):
        # Based on http://code.activestate.com/recipes/111286/
        if str(number)[0] == '-':
            number = str(number)[1:]
            neg = 1
        else:
            neg = 0

        # make an integer out of the number
        x = 0
        for digit in str(number):
            x = x * len(fromdigits) + fromdigits.index(digit)

        # create the result in base 'len(todigits)'
        if x == 0:
            res = todigits[0]
        else:
            res = ""
            while x > 0:
                digit = x % len(todigits)
                res = todigits[digit] + res
                x = int(x / len(todigits))
        if neg:
            res = '-' + res
        return res
    convert = staticmethod(convert)

base36 = BaseConverter('0123456789abcdefghijklmnopqrstuvwxyz')
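The `BaseConverter` above is what turns the SHA-1 digest into the compact, class-name-safe token used by `get_template_include_key`. A self-contained Python 3 sketch of the same idea (the function names here are illustrative, not the module's API):

```python
import hashlib

def from_decimal(x, digits):
    """Render a non-negative integer in the base defined by `digits`."""
    if x == 0:
        return digits[0]
    out = ''
    while x > 0:
        x, r = divmod(x, len(digits))
        out = digits[r] + out
    return out

BASE36 = '0123456789abcdefghijklmnopqrstuvwxyz'

def template_include_key(template_name, prefix=None):
    """Hash a template name into a CSS-class-safe key like 'tpl-<base36>'."""
    digest = int(hashlib.sha1(template_name.encode('utf-8')).hexdigest(), 16)
    token = from_decimal(digest, BASE36)
    return 'tpl-%s-%s' % (prefix, token) if prefix else 'tpl-%s' % token
```

Base-36 keeps the digest to lowercase letters and digits, which is why the resulting string is safe to use directly as an HTML class name.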
Adjax
/Adjax-1.0.1.tar.gz/Adjax-1.0.1/adjax/utils.py
utils.py
__all__ = ('adjax_response', 'success', 'info', 'warning', 'error', 'debug',
           'redirect', 'update', 'form', 'replace', 'hide', 'extra', 'render_to_response')

__version_info__ = ('1', '0', '1')
__version__ = '.'.join(__version_info__)
__authors__ = ["Will Hardy <[email protected]>"]

from adjax.decorators import adjax_response
from adjax.base import get_store
from django.contrib.messages import success, info, warning, error, debug


def update(request, obj, attributes=None):
    """ Sends the updated version of the given attributes on the given object.
        If no attributes are given, all attributes are sent (be careful if you
        don't want all data to be public).  If a minus sign is in front of an
        attribute, it is omitted.  A mix of attribute names with and without
        minus signs is just silly; no other attributes will be included.
    """
    store = get_store(request)
    if not attributes or all(map(lambda s: s.startswith("-"), attributes)):
        attributes = obj.__dict__.keys()
    store.update(obj, (a for a in attributes if not a.startswith("-")))


def form(request, form_obj):
    """ Validate the given form and send errors to browser. """
    get_store(request).form(form_obj)


def replace(request, element, html):
    """ Replace the given DOM element with the given html.
        The DOM element is specified using css identifiers.
        Some javascript libraries may have an extended syntax,
        which can be used if you don't value portability.
    """
    get_store(request).replace(element, html)


def redirect(request, path):
    """ Redirect the browser dynamically to another page. """
    get_store(request).redirect(path)


def hide(request, element):
    """ Hides the given DOM element.
        The DOM element is specified using css identifiers.
        Some javascript libraries may have an extended syntax,
        which can be used if you don't value portability.
    """
    get_store(request).hide(element)


def extra(request, key, value):
    """ Send additional information to the browser. """
    get_store(request).extra(key, value)


def render_to_response(request, template_name, context=None, prefix=None):
    """ Update any included templates. """
    get_store(request).render_to_response(template_name, context, prefix)
Adjax
/Adjax-1.0.1.tar.gz/Adjax-1.0.1/adjax/__init__.py
__init__.py
from django import template
from django.template.loader import get_template
from adjax.utils import get_key, get_template_include_key
from django.conf import settings

register = template.Library()


def adjax(parser, token):
    try:
        tag_name, object_name = token.split_contents()
    except ValueError:
        raise template.TemplateSyntaxError, "%r tag requires a single argument" % token.contents.split()[0]
    return DynamicValueNode(object_name)


class DynamicValueNode(template.Node):
    def __init__(self, object_name):
        self.object_name, self.field_name = object_name.rsplit(".", 1)
        self.instance = template.Variable(self.object_name)
        self.value = template.Variable(object_name)

    def render(self, context):
        instance = self.instance.resolve(context)
        if hasattr(instance, '_meta'):
            return '<span class="%s">%s</span>' % (get_key(instance, self.field_name),
                                                   self.value.resolve(context))


def adjax_include(parser, token):
    bits = token.split_contents()
    try:
        tag_name, template_name = bits[:2]
    except ValueError:
        raise template.TemplateSyntaxError, "%r tag requires a template name" % bits[0]
    kwargs = {}
    for arg in bits[2:]:
        key, value = arg.split("=", 1)
        if key in ('prefix', 'wrapper'):
            kwargs[str(key)] = value
        else:
            raise template.TemplateSyntaxError, "invalid argument (%s) for %r tag" % (key, tag_name)
    return AdjaxIncludeNode(template_name, **kwargs)


class AdjaxIncludeNode(template.Node):
    def __init__(self, template_name, prefix=None, wrapper='"div"'):
        self.template_name = template.Variable(template_name)
        self.prefix = prefix and template.Variable(prefix) or None
        self.wrapper = template.Variable(wrapper)

    def render(self, context):
        template_name = self.template_name.resolve(context)
        wrapper = self.wrapper.resolve(context)
        prefix = self.prefix and self.prefix.resolve(context) or None
        key = get_template_include_key(template_name, prefix)
        try:
            content = get_template(template_name).render(context)
            return '<%s class="%s">%s</%s>' % (wrapper, key, content, wrapper)
        except template.TemplateSyntaxError, e:
            if settings.TEMPLATE_DEBUG:
                raise
            return ''
        except:
            # Like Django, fail silently for invalid included templates.
            return ''


# Register our tags
register.tag('adjax', adjax)
register.tag('adjax_include', adjax_include)
Adjax
/Adjax-1.0.1.tar.gz/Adjax-1.0.1/adjax/templatetags/ajax.py
ajax.py
import sys
DEFAULT_VERSION = "0.6c9"
DEFAULT_URL = "http://pypi.python.org/packages/%s/s/setuptools/" % sys.version[:3]

md5_data = {
    'setuptools-0.6b1-py2.3.egg': '8822caf901250d848b996b7f25c6e6ca',
    'setuptools-0.6b1-py2.4.egg': 'b79a8a403e4502fbb85ee3f1941735cb',
    'setuptools-0.6b2-py2.3.egg': '5657759d8a6d8fc44070a9d07272d99b',
    'setuptools-0.6b2-py2.4.egg': '4996a8d169d2be661fa32a6e52e4f82a',
    'setuptools-0.6b3-py2.3.egg': 'bb31c0fc7399a63579975cad9f5a0618',
    'setuptools-0.6b3-py2.4.egg': '38a8c6b3d6ecd22247f179f7da669fac',
    'setuptools-0.6b4-py2.3.egg': '62045a24ed4e1ebc77fe039aa4e6f7e5',
    'setuptools-0.6b4-py2.4.egg': '4cb2a185d228dacffb2d17f103b3b1c4',
    'setuptools-0.6c1-py2.3.egg': 'b3f2b5539d65cb7f74ad79127f1a908c',
    'setuptools-0.6c1-py2.4.egg': 'b45adeda0667d2d2ffe14009364f2a4b',
    'setuptools-0.6c2-py2.3.egg': 'f0064bf6aa2b7d0f3ba0b43f20817c27',
    'setuptools-0.6c2-py2.4.egg': '616192eec35f47e8ea16cd6a122b7277',
    'setuptools-0.6c3-py2.3.egg': 'f181fa125dfe85a259c9cd6f1d7b78fa',
    'setuptools-0.6c3-py2.4.egg': 'e0ed74682c998bfb73bf803a50e7b71e',
    'setuptools-0.6c3-py2.5.egg': 'abef16fdd61955514841c7c6bd98965e',
    'setuptools-0.6c4-py2.3.egg': 'b0b9131acab32022bfac7f44c5d7971f',
    'setuptools-0.6c4-py2.4.egg': '2a1f9656d4fbf3c97bf946c0a124e6e2',
    'setuptools-0.6c4-py2.5.egg': '8f5a052e32cdb9c72bcf4b5526f28afc',
    'setuptools-0.6c5-py2.3.egg': 'ee9fd80965da04f2f3e6b3576e9d8167',
    'setuptools-0.6c5-py2.4.egg': 'afe2adf1c01701ee841761f5bcd8aa64',
    'setuptools-0.6c5-py2.5.egg': 'a8d3f61494ccaa8714dfed37bccd3d5d',
    'setuptools-0.6c6-py2.3.egg': '35686b78116a668847237b69d549ec20',
    'setuptools-0.6c6-py2.4.egg': '3c56af57be3225019260a644430065ab',
    'setuptools-0.6c6-py2.5.egg': 'b2f8a7520709a5b34f80946de5f02f53',
    'setuptools-0.6c7-py2.3.egg': '209fdf9adc3a615e5115b725658e13e2',
    'setuptools-0.6c7-py2.4.egg': '5a8f954807d46a0fb67cf1f26c55a82e',
    'setuptools-0.6c7-py2.5.egg': '45d2ad28f9750e7434111fde831e8372',
    'setuptools-0.6c8-py2.3.egg': '50759d29b349db8cfd807ba8303f1902',
    'setuptools-0.6c8-py2.4.egg': 'cba38d74f7d483c06e9daa6070cce6de',
    'setuptools-0.6c8-py2.5.egg': '1721747ee329dc150590a58b3e1ac95b',
    'setuptools-0.6c9-py2.3.egg': 'a83c4020414807b496e4cfbe08507c03',
    'setuptools-0.6c9-py2.4.egg': '260a2be2e5388d66bdaee06abec6342a',
    'setuptools-0.6c9-py2.5.egg': 'fe67c3e5a17b12c0e7c541b7ea43a8e6',
    'setuptools-0.6c9-py2.6.egg': 'ca37b1ff16fa2ede6e19383e7b59245a',
}

import sys, os

try:
    from hashlib import md5
except ImportError:
    from md5 import md5


def _validate_md5(egg_name, data):
    if egg_name in md5_data:
        digest = md5(data).hexdigest()
        if digest != md5_data[egg_name]:
            print >>sys.stderr, (
                "md5 validation of %s failed!  (Possible download problem?)"
                % egg_name
            )
            sys.exit(2)
    return data


def use_setuptools(
    version=DEFAULT_VERSION, download_base=DEFAULT_URL, to_dir=os.curdir,
    download_delay=15
):
    """Automatically find/download setuptools and make it available on sys.path

    `version` should be a valid setuptools version number that is available
    as an egg for download under the `download_base` URL (which should end
    with a '/').  `to_dir` is the directory where setuptools will be
    downloaded, if it is not already available.

    If `download_delay` is specified, it should be the number of seconds that
    will be paused before initiating a download, should one be required.  If
    an older version of setuptools is installed, this routine will print a
    message to ``sys.stderr`` and raise SystemExit in an attempt to abort the
    calling script.
    """
    was_imported = 'pkg_resources' in sys.modules or 'setuptools' in sys.modules

    def do_download():
        egg = download_setuptools(version, download_base, to_dir, download_delay)
        sys.path.insert(0, egg)
        import setuptools; setuptools.bootstrap_install_from = egg

    try:
        import pkg_resources
    except ImportError:
        return do_download()
    try:
        pkg_resources.require("setuptools>=" + version); return
    except pkg_resources.VersionConflict, e:
        if was_imported:
            print >>sys.stderr, (
                "The required version of setuptools (>=%s) is not available, and\n"
                "can't be installed while this script is running. Please install\n"
                " a more recent version first, using 'easy_install -U setuptools'."
                "\n\n(Currently using %r)"
            ) % (version, e.args[0])
            sys.exit(2)
        else:
            del pkg_resources, sys.modules['pkg_resources']    # reload ok
            return do_download()
    except pkg_resources.DistributionNotFound:
        return do_download()


def download_setuptools(
    version=DEFAULT_VERSION, download_base=DEFAULT_URL, to_dir=os.curdir,
    delay=15
):
    """Download setuptools from a specified location and return its filename

    `version` should be a valid setuptools version number that is available
    as an egg for download under the `download_base` URL (which should end
    with a '/').  `to_dir` is the directory where the egg will be downloaded.
    `delay` is the number of seconds to pause before an actual download
    attempt.
    """
    import urllib2, shutil
    egg_name = "setuptools-%s-py%s.egg" % (version, sys.version[:3])
    url = download_base + egg_name
    saveto = os.path.join(to_dir, egg_name)
    src = dst = None
    if not os.path.exists(saveto):  # Avoid repeated downloads
        try:
            from distutils import log
            if delay:
                log.warn("""
---------------------------------------------------------------------------
This script requires setuptools version %s to run (even to display help).
I will attempt to download it for you (from %s), but you may need to
enable firewall access for this script first.
I will start the download in %d seconds.

(Note: if this machine does not have network access, please obtain the file

   %s

and place it in this directory before rerunning this script.)
---------------------------------------------------------------------------""",
                    version, download_base, delay, url
                ); from time import sleep; sleep(delay)
            log.warn("Downloading %s", url)
            src = urllib2.urlopen(url)
            # Read/write all in one block, so we don't create a corrupt file
            # if the download is interrupted.
            data = _validate_md5(egg_name, src.read())
            dst = open(saveto, "wb"); dst.write(data)
        finally:
            if src: src.close()
            if dst: dst.close()
    return os.path.realpath(saveto)


def main(argv, version=DEFAULT_VERSION):
    """Install or upgrade setuptools and EasyInstall"""
    try:
        import setuptools
    except ImportError:
        egg = None
        try:
            egg = download_setuptools(version, delay=0)
            sys.path.insert(0, egg)
            from setuptools.command.easy_install import main
            return main(list(argv) + [egg])   # we're done here
        finally:
            if egg and os.path.exists(egg):
                os.unlink(egg)
    else:
        if setuptools.__version__ == '0.0.1':
            print >>sys.stderr, (
                "You have an obsolete version of setuptools installed. Please\n"
                "remove it from your system entirely before rerunning this script."
            )
            sys.exit(2)

    req = "setuptools>=" + version
    import pkg_resources
    try:
        pkg_resources.require(req)
    except pkg_resources.VersionConflict:
        try:
            from setuptools.command.easy_install import main
        except ImportError:
            from easy_install import main
        main(list(argv) + [download_setuptools(delay=0)])
        sys.exit(0)  # try to force an exit
    else:
        if argv:
            from setuptools.command.easy_install import main
            main(argv)
        else:
            print "Setuptools version", version, "or greater has been installed."
            print '(Run "ez_setup.py -U setuptools" to reinstall or upgrade.)'


def update_md5(filenames):
    """Update our built-in md5 registry"""
    import re
    for name in filenames:
        base = os.path.basename(name)
        f = open(name, 'rb')
        md5_data[base] = md5(f.read()).hexdigest()
        f.close()

    data = ["    %r: %r,\n" % it for it in md5_data.items()]
    data.sort()
    repl = "".join(data)

    import inspect
    srcfile = inspect.getsourcefile(sys.modules[__name__])
    f = open(srcfile, 'rb'); src = f.read(); f.close()

    match = re.search("\nmd5_data = {\n([^}]+)}", src)
    if not match:
        print >>sys.stderr, "Internal error!"
        sys.exit(2)

    src = src[:match.start(1)] + repl + src[match.end(1):]
    f = open(srcfile, 'w')
    f.write(src)
    f.close()


if __name__ == '__main__':
    if len(sys.argv) > 2 and sys.argv[1] == '--md5update':
        update_md5(sys.argv[2:])
    else:
        main(sys.argv[1:])
Adjector
/Adjector-1.0b1.tar.gz/Adjector-1.0b1/ez_setup.py
ez_setup.py
Adjector 1.0b
*************

Hi there.  Thanks for using Adjector, a lightweight, flexible, open-source
ad server written in Python.

Adjector is licensed under the GPL, version 2 or 3, at your option.  For more
information, see LICENSE.txt.

This Distribution
-----------------

This is the main Adjector distribution.  A client-only version and a Trac
plugin are also available.  They can be downloaded at
http://projects.icapsid.net/adjector/wiki/Download

Documentation
-------------

All of our documentation is online at http://projects.icapsid.net/adjector

You may wish to get started with 'Installing Adjector' at
http://projects.icapsid.net/adjector/wiki/Install

For questions, comments, help, or any other information, visit us online or
email [email protected].
Adjector
/Adjector-1.0b1.tar.gz/Adjector-1.0b1/README.txt
README.txt
"""Setup the adjector application""" import elixir import logging from pylons import config from adjector.config.environment import load_environment from adjector.core.conf import conf as adjector_conf from adjector.lib.util import import_module from adjector.lib.precache import precache_zone log = logging.getLogger(__name__) def setup_app(command, conf, vars): """Place any commands to setup adjector here""" load_environment(conf.global_conf, conf.local_conf) # This has to be *after* the environment is loaded, otherwise our options don't make it to the model import adjector.model as model from adjector.model import meta # Create the tables if they don't already exist meta.metadata.create_all(bind=meta.engine) elixir.create_all(meta.engine) # Import initial data, if it exists if adjector_conf.initial_data: try: module = import_module(adjector_conf.initial_data) print 'Importing initial data...' if hasattr(module, 'sets'): print ' Importing %i sets' % len(module.sets) for set in module.sets: model.Set(set) model.session.commit() if hasattr(module, 'creatives'): print ' Importing %i creatives' % len(module.creatives) for creative in module.creatives: model.Creative(creative) if hasattr(module, 'locations'): print ' Importing %i locations' % len(module.locations) for location in module.locations: model.Location(location) model.session.commit() if hasattr(module, 'zones'): print ' Importing %i zones' % len(module.zones) for zone in module.zones: model.Zone(zone) model.session.commit() print ' Done' print 'Precaching...' for zone in model.Zone.query(): precache_zone(zone) print ' Done' except ImportError: log.warn('Could not find example data.')
Adjector
/Adjector-1.0b1.tar.gz/Adjector-1.0b1/adjector/websetup.py
websetup.py
import os

import tw.api as twa
from beaker.middleware import CacheMiddleware, SessionMiddleware
from paste.cascade import Cascade
from paste.recursive import RecursiveMiddleware
from paste.registry import RegistryManager
from paste.urlparser import StaticURLParser
from paste.deploy.converters import asbool
from pylons import config
from pylons.middleware import ErrorHandler, StatusCodeRedirect
from pylons.wsgiapp import PylonsApp
from routes.middleware import RoutesMiddleware

from adjector.config.environment import load_environment
from adjector.core.conf import conf
from adjector.lib.middleware import FilteredApp


def make_app(global_conf, full_stack=True, static_files=True, **app_conf):
    """Create a Pylons WSGI application and return it

    ``global_conf``
        The inherited configuration for this application. Normally from
        the [DEFAULT] section of the Paste ini file.

    ``full_stack``
        Whether this application provides a full WSGI stack (by default,
        meaning it handles its own exceptions and errors). Disable
        full_stack when this application is "managed" by another WSGI
        middleware.

    ``static_files``
        Whether this application serves its own static files; disable
        when another web server is responsible for serving them.

    ``app_conf``
        The application's local configuration. Normally specified in
        the [app:<name>] section of the Paste ini file (where <name>
        defaults to main).

    """
    # Configure the Pylons environment
    load_environment(global_conf, app_conf)

    # The Pylons WSGI app
    app = PylonsApp()

    # Routing/Session/Cache Middleware
    app = RoutesMiddleware(app, config['routes.map'])
    app = SessionMiddleware(app, config)
    app = CacheMiddleware(app, config)

    # CUSTOM MIDDLEWARE HERE (filtered by error handling middlewares)

    # Catch internal redirects
    app = RecursiveMiddleware(app)

    # Toscawidgets
    app = twa.make_middleware(app, {
        'toscawidgets.framework': 'pylons',
        'toscawidgets.framework.default_view': 'genshi',
        'toscawidgets.middleware.inject_resources': True,
    })

    if asbool(full_stack):
        # Handle Python exceptions
        app = ErrorHandler(app, global_conf, **config['pylons.errorware'])

        # Display error documents for 401, 403, 404 status codes (and
        # 500 when debug is disabled)
        if asbool(config['debug']):
            app = StatusCodeRedirect(app)
        else:
            app = StatusCodeRedirect(app, [400, 401, 403, 404, 500])

    # Establish the Registry for this application
    app = RegistryManager(app)

    #if asbool(static_files):
    #    # Serve static files
    #    static_app = StaticURLParser(config['pylons.paths']['static_files'])
    static_app = StaticURLParser(config['pylons.paths']['static_files'])
    if conf.base_url:
        static_app = FilteredApp(static_app, conf.base_url)
    app = Cascade([static_app, app])

    return app
Adjector
/Adjector-1.0b1.tar.gz/Adjector-1.0b1/adjector/config/middleware.py
middleware.py
import logging

from pylons import config
from routes import Mapper

from adjector.core.conf import conf

log = logging.getLogger(__name__)


def intify(*keys):
    ''' Make vars into integers '''
    def container(environ, result):
        for key in keys:
            if result.get(key) is None:
                continue
            if result[key].isdigit():
                result[key] = int(result[key])
            else:
                log.error('%s was sent to intify method but is not in digit form' % result[key])
                result[key] = None
        return True
    return dict(function=container)


def make_map():
    """Create, configure and return the routes Mapper"""
    map = Mapper(directory=config['pylons.paths']['controllers'],
                 always_scan=config['debug'])
    map.minimization = False

    # The ErrorController route (handles 404/500 error pages); it should
    # likely stay at the top, ensuring it can always be resolved
    map.connect('/error/{action}', controller='error')
    map.connect('/error/{action}/{id}', controller='error')

    base = conf.admin_base_url

    # CUSTOM ROUTES HERE
    map.redirect(base, base + '/')
    map.connect(base + '/', controller='main', action='index')

    map.connect(base + '/import/cj', controller='cj', action='start')
    map.connect(base + '/import/cj/{action}', controller='cj')
    map.connect(base + '/import/cj/{site_id}/{id}/{action}', controller='cj',
                requirements=dict(site_id='\d+', id='\d+'))  # note that i am not intifying this on purpose

    map.connect(base + '/new/{controller}', action='new')
    map.connect(base + '/stats', controller='stats', action='index')

    map.connect(conf.render_base_url + '/zone/{ident}/render', controller='zone', action='render')
    map.connect(conf.render_base_url + '/zone/{ident}/render.js', controller='zone', action='render_js')
    map.connect(conf.tracking_base_url + '/track/{action}', controller='track')

    map.connect(base + '/{controller}', action='list')
    map.connect(base + '/{controller}/{id}', action='view',
                requirements=dict(id='\d+'), conditions=intify('id'))
    map.connect(base + '/{controller}/{action}',
                requirements=dict(controller='(?!tracking)'))
    map.connect(base + '/{controller}/{id}/{action}',
                requirements=dict(id='\d+'), conditions=intify('id'))

    return map
Adjector
/Adjector-1.0b1.tar.gz/Adjector-1.0b1/adjector/config/routing.py
routing.py
"""Pylons environment configuration""" import os #from genshi.template import TemplateLoader from pylons import config from sqlalchemy import engine_from_config import adjector.lib.app_globals as app_globals import adjector.lib.helpers from adjector.config.routing import make_map from adjector.core.conf import conf def load_environment(global_conf, app_conf): """Configure the Pylons environment via the ``pylons.config`` object """ # Pylons paths root = os.path.dirname(os.path.dirname(os.path.abspath(__file__))) paths = dict(root=root, controllers=os.path.join(root, 'controllers'), static_files=os.path.join(root, 'public'), templates=[os.path.join(root, 'templates')]) # Initialize config with the basic options config.init_app(global_conf, app_conf, package='adjector', template_engine=None, paths=paths) # Load config - has to be done before routing config['pylons.app_globals'] = app_globals.Globals() config['pylons.h'] = adjector.lib.helpers # Create the Genshi TemplateLoader genshi_options = {'genshi.default_doctype': 'xhtml-strict', 'genshi.default_format': 'xhtml', 'genshi.default_encoding': 'UTF-8', 'genshi.max_cache_size': 250, } config.add_template_engine('genshi', 'adjector.templates', genshi_options) #config['pylons.app_globals'].genshi_loader = TemplateLoader( # paths['templates'], auto_reload=True) # CONFIGURATION OPTIONS HERE (note: all config options will override # any Pylons config options) conf.load(config) config['pylons.app_globals'].conf = conf # Setup the SQLAlchemy database engine # If we put this here, we can load our config *first* from adjector.model import init_model engine = engine_from_config(config, 'sqlalchemy.') init_model(engine) # Setup routing *after* config options parsed config['routes.map'] = make_map()
Adjector
/Adjector-1.0b1.tar.gz/Adjector-1.0b1/adjector/config/environment.py
environment.py
import os.path
import random
import re

from adjector.core.conf import conf
from adjector.core.cj_util import remove_tracking_cj


def add_tracking(html):
    if re.search('google_ad_client', html):
        return add_tracking_adsense(html)
    else:
        return add_tracking_generic(html)


def add_tracking_generic(html):
    def repl(match):
        groups = match.groups()
        return groups[0] + 'ADJECTOR_TRACKING_BASE_URL/track/click_with_redirect?creative_id=ADJECTOR_CREATIVE_ID&zone_id=ADJECTOR_ZONE_ID&cache_bust=' + cache_bust() + '&url=' + groups[1] + groups[2]

    html_tracked = re.sub(r'''(.*<a[^>]+href\s*=\s*['"])([^"']+)(['"][^>]*>.*)''', repl, html)

    if html == html_tracked:
        # if no change, don't touch.
        return
    else:
        return html_tracked


def add_tracking_adsense(html):
    adsense_tracking_code = open(os.path.join(conf.root, 'public', 'js', 'adsense_tracker.js')).read()
    click_track = 'ADJECTOR_TRACKING_BASE_URL/track/click_with_image?creative_id=ADJECTOR_CREATIVE_ID&zone_id=ADJECTOR_ZONE_ID&cache_bust='  # cache_bust added in js

    html_tracked = '''
<span>
%(html)s
<script type="text/javascript"><!--// <![CDATA[
/* adjector_click_track=%(click_track)s */
%(adsense_tracking_code)s
// ]]> --></script>
</span>
''' % dict(html=html, adsense_tracking_code=adsense_tracking_code, click_track=click_track)
    return html_tracked


def cache_bust():
    return str(random.random())[2:]


def remove_tracking(html, cj_site_id=None):
    if cj_site_id:
        return remove_tracking_cj(html, cj_site_id)
    elif re.search('google_ad_client', html):
        return remove_tracking_adsense(html)
    else:
        return html  # we can't do anything


def remove_tracking_adsense(html):
    html_notrack = '''
<script type='text/javascript'>
var adjector_google_adtest_backup = google_adtest;
var google_adtest='on';
</script>
%(html)s
<script type='text/javascript'>
var google_adtest=adjector_google_adtest_backup;
</script>
''' % dict(html=html)
    return html_notrack
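`add_tracking_generic` above rewrites an ad's `<a href>` so clicks bounce through the tracker, with a random cache-bust parameter appended. A simplified standalone sketch of that rewrite (the URL layout is abbreviated from the original's placeholder scheme, and the target URL is left unencoded, as it is in the source):

```python
import random
import re

def cache_bust():
    # Random digits so the tracking request is never served from a cache.
    return str(random.random())[2:]

def add_click_redirect(html, tracking_base_url):
    """Route the snippet's link through the click tracker; return None when
    there is no link to rewrite (mirroring the original's 'no change' case)."""
    def repl(match):
        pre, url, post = match.groups()
        return (pre + tracking_base_url + '/track/click_with_redirect?cache_bust='
                + cache_bust() + '&url=' + url + post)
    tracked = re.sub(r'''(.*<a[^>]+href\s*=\s*['"])([^"']+)(['"][^>]*>.*)''', repl, html)
    return None if tracked == html else tracked
```

Returning `None` on no-op lets callers store the tracked variant only when it differs from the raw creative, which is how `html_tracked` stays nullable in the model.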
Adjector
/Adjector-1.0b1.tar.gz/Adjector-1.0b1/adjector/core/tracking.py
tracking.py
from __future__ import division

import logging
import random
import re

from sqlalchemy import and_, func, or_
from sqlalchemy.sql import case, join, select, subquery

import adjector.model as model
from adjector.core.conf import conf
from adjector.core.tracking import remove_tracking

log = logging.getLogger(__name__)


def old_render_zone(ident, track=None, admin=False):
    ''' Render A Random Creative for this Zone.  Access by id or name.
        Respect all zone requirements.

        Use creative weights and their containing set weights to weight randomness.
        If zone.normalize_by_container, normalize creatives by the total weight of
        the set they are in, so the total weight of the creatives directly in any
        set is always 1.

        If block and text ads can be shown, a decision will be made to show one or
        the other based on the total probability of each type of creative.

        Note that this function is called by the API function render_zone.
    '''
    # Note that this is my first time seriously using SA, feel free to clean this up

    if isinstance(ident, int) or ident.isdigit():
        zone = model.Zone.get(int(ident))
    else:
        zone = model.Zone.query.filter_by(name=ident).first()

    if zone is None:
        # Fail gracefully; don't commit suicide because someone deleted a zone from the ad server
        log.error('Tried to render zone %s. Zone Not Found' % ident)
        return ''

    # Find zone site_id, if applicable.  Default to global site_id, or else None.
    cj_site_id = zone.parent_cj_site_id or conf.cj_site_id

    # Figure out what kind of creative we need

    # Size filtering
    whereclause_zone = and_(or_(and_(model.Creative.width >= zone.min_width,
                                     model.Creative.width <= zone.max_width,
                                     model.Creative.height >= zone.min_height,
                                     model.Creative.height <= zone.max_height),
                                model.Creative.is_text == True),
                            # Date filtering
                            or_(model.Creative.start_date == None, model.Creative.start_date <= func.now()),
                            or_(model.Creative.end_date == None, model.Creative.end_date >= func.now()),
                            # Site Id filtering
                            or_(model.Creative.cj_site_id == None,
                                model.Creative.cj_site_id == cj_site_id,
                                and_(conf.enable_cj_site_replacements,
                                     cj_site_id != None,
                                     model.Creative.cj_site_id != None)),
                            # Disabled?
                            model.Creative.disabled == False)

    creative_types = zone.creative_types  # This might change later.
    doing_text = None  # just so it can't be undefined later

    # Sanity check - this shouldn't ever happen
    if zone.num_texts == 0:
        creative_types = 2

    # Filter by text or block if needed.  If you want both we do some magic later.
    # But first we need to find out how much of each we have, weight-wise.
    if creative_types == 1:
        whereclause_zone.append(model.Creative.is_text == True)
        number_needed = zone.num_texts
        doing_text = True
    elif creative_types == 2:
        whereclause_zone.append(model.Creative.is_text == False)
        number_needed = 1
        doing_text = False

    creatives = model.Creative.table
    all_results = []

    # Find random creatives; loop until we have as many as we need
    while True:
        # First let's figure how to normalize by how many items will be displayed.
        # This ensures all items are displayed equally.  We want this to be 1 for
        # blocks and num_texts for texts.  Also throw in the zone.weight_texts.
        #items_displayed = cast(creatives.c.is_text, Integer) * (zone.num_texts - 1) + 1
        text_weight_adjust = case([(True, zone.weight_texts / zone.num_texts), (False, 1)],
                                  creatives.c.is_text)

        if zone.normalize_by_container:
            # Find the total weight of each parent in order to normalize
            parent_weights = subquery('parent_weight',
                                      [creatives.c.parent_id,
                                       func.sum(creatives.c.parent_weight * creatives.c.weight).label('pw_total')],
                                      group_by=creatives.c.parent_id)

            # Join creatives table and normalized weight table - I'm renaming a lot of
            # fields here to make life easier down the line.
            # SA was insisting on doing a full subquery anyways (I just wanted a join)
            c1 = subquery('c1',
                          [creatives.c.id.label('id'),
                           creatives.c.title.label('title'),
                           creatives.c.html.label('html'),
                           creatives.c.html_tracked.label('html_tracked'),
                           creatives.c.is_text.label('is_text'),
                           creatives.c.cj_site_id.label('cj_site_id'),
                           (creatives.c.weight * creatives.c.parent_weight * text_weight_adjust /
                            # Make sure we can't divide by 0
                            case([(parent_weights.c.pw_total > 0, parent_weights.c.pw_total)], else_=None)
                            ).label('normalized_weight')],
                          whereclause_zone,  # here go our filters
                          from_obj=join(creatives, parent_weights,
                                        or_(creatives.c.parent_id == parent_weights.c.parent_id,
                                            and_(creatives.c.parent_id == None,
                                                 parent_weights.c.parent_id == None)))).alias('c1')
        else:
            # We don't normalize weight by parent weight, so we don't need fancy joins
            c1 = subquery('c1',
                          [creatives.c.id, creatives.c.title, creatives.c.html,
                           creatives.c.html_tracked, creatives.c.is_text, creatives.c.cj_site_id,
                           (creatives.c.weight * creatives.c.parent_weight * text_weight_adjust
                            ).label('normalized_weight')],
                          whereclause_zone)

        #for a in model.session.execute(c1).fetchall(): print a

        if creative_types == 0:  # (Either type)
            # Now that we have our weights in order, let's figure out how many of each
            # thing (text/block) we have, weight-wise.
            texts_weight = select([func.sum(c1.c.normalized_weight)], c1.c.is_text == True).scalar() or 0
            blocks_weight = select([func.sum(c1.c.normalized_weight)], c1.c.is_text == False).scalar() or 0

            # Create weighted bins, text first (0-whatever).  We are going to decide what
            # kind of thing to make right here, right now, based on the weights of each.
            # Because we can't have both (yet).
            rand = random.random()
            if texts_weight + blocks_weight == 0:
                break
            if rand < texts_weight / (texts_weight + blocks_weight):
                c1 = c1.select().where(c1.c.is_text == True).alias('text')
                total_weight = texts_weight
                number_needed = zone.num_texts
                doing_text = True
            else:
                c1 = c1.select().where(c1.c.is_text == False).alias('nottext')
                total_weight = blocks_weight
                number_needed = 1
                doing_text = False
        else:
            # Find total normalized weight of all creatives in order to normalize *that*
            total_weight = select([func.sum(c1.c.normalized_weight)])  #.scalar() or 0
            #if not total_weight:
            #    break

        c2 = c1.alias('c2')

        # Find the total weight above a creative in the table in order to form weighted
        # bins for the random number generator.
        # Note that this is the upper bound, not the lower (if it was the lower it could be NULL)
        incremental_weight = select([func.sum(c1.c.normalized_weight) / total_weight],
                                    c1.c.id <= c2.c.id, from_obj=c1)

        # Get everything into one thing - for debugging this is a good place to select
        # and print out stuff
        shmush = select([c2.c.id, c2.c.title, c2.c.html, c2.c.html_tracked, c2.c.cj_site_id,
                         incremental_weight.label('inc_weight'),
                         (c2.c.normalized_weight / total_weight).label('final_weight')],
                        from_obj=c2).alias('shmush')
        #for a in model.session.execute(shmush).fetchall(): print a

        # Generate some random numbers and comparisons - sorry about the magic, it saves
        # about 10 lines.  The crazy 0.9999 is to make sure we don't get a number so close
        # to one that we run into float precision errors (all the weights might not quite
        # sum to 1, and so we might end up falling outside the bin!)  Experimentally the
        # error never seems to be worse than that, and that number is imprecise enough to
        # be displayed exactly by python.
        rand = [random.random() * 0.9999999999 for i in xrange(number_needed)]
        whereclause_rand = or_(*[and_(shmush.c.inc_weight - shmush.c.final_weight <= rand[i],
                                      rand[i] < shmush.c.inc_weight)
                                 for i in xrange(number_needed)])

        # Select only creatives where the random number falls between its cutoff and the next
        results = model.session.execute(select([shmush.c.id, shmush.c.title, shmush.c.html,
                                                shmush.c.html_tracked, shmush.c.cj_site_id],
                                               whereclause_rand)).fetchall()

        # Deal with number of results
        if len(results) == 0:
            if not doing_text or not all_results:
                return ''
            # Otherwise, we are probably just out of results.
            break

        if len(results) > number_needed:
            log.error('Too many results while rendering zone %i. I got %i results and wanted %i'
                      % (zone.id, len(results), number_needed))
            results = results[:number_needed]
            all_results.extend(results)
            break
        elif len(results) < number_needed:
            if not doing_text:
                raise Exception('Somehow we managed to get past several checks, and we have '
                                '0 < results < needed_results for block creatives. '
                                'Since needed_results should be 1, this seems fairly difficult.')
            all_results.extend(results)

            # It looks like we need more results; this should only happen when we are
            # doing text.  Try again.
            number_needed -= len(results)

            # Exclude ones we've already got
            whereclause_zone.append(and_(*[model.Creative.id != result.id for result in results]))

            # Set to only render text this time around
            if creative_types == 0:
                creative_types = 1
                whereclause_zone.append(model.Creative.is_text == True)
            # Continue loop...
        else:
            # we have the right number
            all_results.extend(results)
            break

    if doing_text and len(all_results) < zone.num_texts:
        log.warn('Could only retrieve %i of %i desired creatives for zone %i. '
                 'This (hopefully) means you are requesting more creatives than exist.'
                 % (len(all_results), zone.num_texts, zone.id))

    # Ok, that's done, we have our results.
    # Let's render some html
    html = ''

    if doing_text:
        html += zone.before_all_text or ''

    for creative in all_results:
        if track or (track is None and conf.enable_adjector_view_tracking):
            # Create a view record
            model.View(creative['id'], zone.id)
            model.session.commit()

        # Figure out the html value...
        # Use either click-tracked or regular html
        if (track or (track is None and conf.enable_adjector_click_tracking)) \
                and creative['html_tracked'] is not None:
            creative_html = creative['html_tracked'] \
                .replace('ADJECTOR_TRACKING_BASE_URL', conf.tracking_base_url) \
                .replace('ADJECTOR_CREATIVE_ID', str(creative['id'])) \
                .replace('ADJECTOR_ZONE_ID', str(zone.id))
        else:
            creative_html = creative['html']

        # Remove or modify third party click tracking
        if (track is False or (track is None and not conf.enable_third_party_tracking)) \
                and creative['cj_site_id'] is not None:
            creative_html = remove_tracking(creative_html, creative['cj_site_id'])
        elif conf.enable_cj_site_replacements:
            creative_html = re.sub(str(creative['cj_site_id']), str(cj_site_id), creative_html)

        ########### Now we can do some text assembly ###########

        # If text, add pre-text
        if doing_text:
            html += zone.before_each_text or ''

        html += creative_html

        # Are we in admin mode?
if admin: html += ''' <div class='adjector_admin' style='color: red; background-color: silver'> Creative: <a href='%(admin_base_url)s%(creative_url)s'>%(creative_title)s</a> Zone: <a href='%(admin_base_url)s%(zone_url)s'>%(zone_title)s</a> </div> ''' % dict(admin_base_url = conf.admin_base_url, creative_url = '/creative/%i' % creative['id'], zone_url = zone.view(), creative_title = creative.title, zone_title = zone.title) if doing_text: html += zone.after_each_text or '' if doing_text: html += zone.after_all_text or '' # Wrap in javascript if asked if html and '<script' not in html and conf.require_javascript: wrapper = '''<script type='text/javascript'>document.write('%s')</script>''' # Do some quick substitutions to inject... #TODO there must be an existing function that does this html = re.sub(r"'", r"\'", html) # escape quotes html = re.sub(r"[\r\n]", r"", html) # remove line breaks return wrapper % html return html def render_zone(ident, track=None, admin=False): ''' Render A Random Creative for this Zone, using precached data. Access by id or name. Respect all zone requirements. Use creative weights and their containing set weights to weight randomness. If zone.normalize_by_container, normalize creatives by the total weight of the set they are in, so the total weight of the creatives directly in any set is always 1. If block and text ads can be shown, a decision will be made to show one or the other based on the total probability of each type of creative. Note that this function is called by the API function render_zone. ''' # Note that this is my first time seriously using SA, feel free to clean this up if isinstance(ident, int) or ident.isdigit(): zone = model.Zone.get(int(ident)) else: zone = model.Zone.query.filter_by(name=ident).first() if zone is None: # Fail gracefully, don't commit suicide because someone deleted a zone from the ad server log.error('Tried to render zone %s. Zone Not Found' % ident) return '' # Find zone site_id, if applicable. 
Default to global site_id, or else None. cj_site_id = zone.parent_cj_site_id or conf.cj_site_id # Texts or blocks? rand = random.random() if rand < zone.total_text_weight: # texts! number_needed = zone.num_texts doing_text = True else: # blocks! number_needed = 1 doing_text = False query = model.CreativeZonePair.query.filter_by(zone_id = zone.id, is_text = doing_text) num_pairs = query.count() if num_pairs == number_needed: pairs = query.all() else: pairs = [] # keep going until we get as many as we need still_needed = number_needed banned_ranges = [] while still_needed: # Generate some random numbers and comparisons - sorry about the magic it saves about 10 lines # The crazy 0.9999 is to make sure we don't get a number so close to one we run into float precision errors (all the weights might not quite sum to 1, # and so we might end up falling outside the bin!) # Experimentally the error never seems to be worse than that, and that number is imprecise enough to be displayed exactly by python. # Assemble random numbers rands = [] while len(rands) < still_needed: rand = random.random() * 0.9999999999 bad_rand = False for range in banned_ranges: if range[0] <= rand < range[1]: bad_rand = True break if not bad_rand: rands.append(rand) # Select only creatives where the random number falls between its cutoff and the next results = query.filter(or_(*[and_(model.CreativeZonePair.lower_bound <= rands[i], rands[i] < model.CreativeZonePair.upper_bound) for i in xrange(still_needed)])).all() # What if there are no results? if len(results) == 0: if not pairs: # I guess there are no results return '' break # or else we are just out of results still_needed -= len(results) pairs += results # Exclude ones we've already got, if we need to loop again banned_ranges.extend([pair.lower_bound, pair.upper_bound] for pair in results) #JIC if len(pairs) > number_needed: # This shouldn't be able to happen log.error('Too many results while rendering zone %i. 
I got %i results and wanted %i' % (zone.id, len(results), number_needed)) pairs = pairs[:number_needed] elif len(pairs) < number_needed: log.warn('Could only retrieve %i of %i desired creatives for zone %i. This (hopefully) means you are requesting more creatives than exist.' \ % (len(pairs), zone.num_texts, zone.id)) # Ok, that's done, we have our results. # Let's render some html html = '' if doing_text: html += zone.before_all_text or '' for pair in pairs: creative = pair.creative if track or (track is None and conf.enable_adjector_view_tracking): # Create a view thingy - this is much faster than using SA (almost instant) model.session.execute('INSERT INTO views (creative_id, zone_id, time) VALUES (%i, %i, now())' % (creative.id, zone.id)) #model.View(creative.id, zone.id) # Figure out the html value... # Use either click tracked or regular html if (track or (track is None and conf.enable_adjector_click_tracking)) and creative.html_tracked is not None: creative_html = creative.html_tracked.replace('ADJECTOR_TRACKING_BASE_URL', conf.tracking_base_url)\ .replace('ADJECTOR_CREATIVE_ID', str(creative.id)).replace('ADJECTOR_ZONE_ID', str(zone.id)) else: creative_html = creative.html # Remove or modify third party click tracking if (track is False or (track is None and not conf.enable_third_party_tracking)) and creative.cj_site_id is not None: creative_html = remove_tracking(creative_html, creative.cj_site_id) elif cj_site_id and creative.cj_site_id and conf.enable_cj_site_replacements: creative_html = re.sub(str(creative.cj_site_id), str(cj_site_id), creative_html) ########### Now we can do some text assembly ########### # If text, add pre-text if doing_text: html += zone.before_each_text or '' html += creative_html # Are we in admin mode? 
if admin: html += ''' <div class='adjector_admin' style='color: red; background-color: silver'> Creative: <a href='%(admin_base_url)s%(creative_url)s'>%(creative_title)s</a> Zone: <a href='%(admin_base_url)s%(zone_url)s'>%(zone_title)s</a> </div> ''' % dict(admin_base_url = conf.admin_base_url, creative_url = '/creative/%i' % creative.id, zone_url = zone.view(), creative_title = creative.title, zone_title = zone.title) if doing_text: html += zone.after_each_text or '' if doing_text: html += zone.after_all_text or '' model.session.commit() #having this down here saves us quite a bit of time # Wrap in javascript if asked if html and '<script' not in html and conf.require_javascript: wrapper = '''<script type='text/javascript'>document.write('%s')</script>''' # Do some quick substitutions to inject... #TODO there must be an existing function that does this html = re.sub(r"'", r"\'", html) # escape quotes html = re.sub(r"[\r\n]", r"", html) # remove line breaks return wrapper % html return html
Adjector
/Adjector-1.0b1.tar.gz/Adjector-1.0b1/adjector/core/render.py
render.py
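The core trick in `render.py` is weighted-bin selection: cumulative weights form half-open intervals, and a uniform random draw picks whichever item's interval it lands in. A minimal standalone sketch of that idea, assuming nothing from Adjector itself (the helper names `make_bins` and `pick_weighted` are made up for illustration; it is written in modern Python rather than the package's Python 2):

```python
import random

def make_bins(weights):
    """Turn a list of weights into cumulative upper bounds normalized to 1."""
    total = float(sum(weights))
    bounds, acc = [], 0.0
    for w in weights:
        acc += w / total
        bounds.append(acc)
    return bounds

def pick_weighted(items, weights):
    """Pick one item; an item's chance is proportional to its weight."""
    bounds = make_bins(weights)
    # Scale slightly below 1.0 so float error in the cumulative sum cannot
    # push the draw past the last bin (the same 0.9999999999 trick render.py uses).
    r = random.random() * 0.9999999999
    lower = 0.0
    for item, upper in zip(items, bounds):
        if lower <= r < upper:
            return item
        lower = upper
    return items[-1]  # fallback for pathological rounding
```

`render_zone` does the same thing, except the bins (`lower_bound`/`upper_bound`) are precached in the database and the membership test `lower <= r < upper` is pushed into a SQL `WHERE` clause so several draws can be resolved in one query.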
var adjector_adSenseDeliveryDone;
var adjector_adSensePx;
var adjector_adSensePy;

function adjector_adSenseClick(path) {
    // Add cache buster here to ensure multiple clicks are recorded
    var cb = new String (Math.random());
    cb = cb.substring(2,11);
    var i = new Image();
    i.src = path + cb;
}

function adjector_adSenseLog(obj) {
    if (typeof obj.parentNode != 'undefined') {
        parent = obj.parentNode;
        while(parent.tagName == 'INS'){ // escape from google's <ins> nodes
            parent = parent.parentNode
        }
        var t = parent.innerHTML;
        var params = t.match(/\/\*\s*adjector_click_track=([^ ]+)\s*\*\//)
        if (params) {
            adjector_adSenseClick(params[1]);
        }
    }
}

function adjector_adSenseGetMouse(e) {
    // Adapted from http://www.howtocreate.co.uk/tutorials/javascript/eventinfo
    if (typeof e.pageX == 'number') { //most browsers
        adjector_adSensePx = e.pageX;
        adjector_adSensePy = e.pageY;
    } else if (typeof e.clientX == 'number') { //Internet Explorer and older browsers
        //other browsers provide this, but follow the pageX/Y branch
        adjector_adSensePx = e.clientX;
        adjector_adSensePy = e.clientY;
        if (document.body && (document.body.scrollLeft || document.body.scrollTop)) {
            //IE 4, 5 & 6 (in non-standards compliant mode)
            adjector_adSensePx += document.body.scrollLeft;
            adjector_adSensePy += document.body.scrollTop;
        } else if (document.documentElement && (document.documentElement.scrollLeft || document.documentElement.scrollTop )) {
            //IE 6 (in standards compliant mode)
            adjector_adSensePx += document.documentElement.scrollLeft;
            adjector_adSensePy += document.documentElement.scrollTop;
        }
    }
}

function adjector_adSenseFindX(obj) {
    var x = 0;
    while (obj) {
        x += obj.offsetLeft;
        obj = obj.offsetParent;
    }
    return x;
}

function adjector_adSenseFindY(obj) {
    var y = 0;
    while (obj) {
        y += obj.offsetTop;
        obj = obj.offsetParent;
    }
    return y;
}

function adjector_adSensePageExit(e) {
    var ad = document.getElementsByTagName("iframe");
    if (typeof adjector_adSensePx == 'undefined') return;
    for (var i = 0; i < ad.length; i++) {
        var adLeft = adjector_adSenseFindX(ad[i]);
        var adTop = adjector_adSenseFindY(ad[i]);
        var adRight = parseInt(adLeft) + parseInt(ad[i].width) + 15;
        var adBottom = parseInt(adTop) + parseInt(ad[i].height) + 10;
        var inFrameX = (adjector_adSensePx > (adLeft - 10) && adjector_adSensePx < adRight);
        var inFrameY = (adjector_adSensePy > (adTop - 10) && adjector_adSensePy < adBottom);
        //alert(adjector_adSensePx + ',' + adjector_adSensePy + ' ' + adLeft + ':' + adRight + 'x' + adTop + ':' + adBottom);
        if (inFrameY && inFrameX) {
            if (ad[i].src.match(/googlesyndication\.com|ypn-js\.overture\.com|googleads\.g\.doubleclick\.net/))
                adjector_adSenseLog(ad[i]);
        }
    }
}

function adjector_adSenseInit() {
    if (document.all && typeof window.opera == 'undefined') { //ie
        var el = document.getElementsByTagName("iframe");
        for (var i = 0; i < el.length; i++) {
            if (el[i].src.match(/googlesyndication\.com|ypn-js\.overture\.com|googleads\.g\.doubleclick\.net/)) {
                el[i].onfocus = function() {
                    adjector_adSenseLog(this);
                }
            }
        }
    } else if (typeof window.addEventListener != 'undefined') { // other browsers
        window.addEventListener('unload', adjector_adSensePageExit, false);
        window.addEventListener('mousemove', adjector_adSenseGetMouse, true);
    }
}

function adjector_adSenseDelivery() {
    if (typeof adjector_adSenseDeliveryDone != 'undefined' && adjector_adSenseDeliveryDone) return;
    adjector_adSenseDeliveryDone = true;
    if(typeof window.addEventListener != 'undefined') {
        //.. gecko, safari, konqueror and standard
        window.addEventListener('load', adjector_adSenseInit, false);
    } else if(typeof document.addEventListener != 'undefined') {
        //.. opera 7
        document.addEventListener('load', adjector_adSenseInit, false);
    } else if(typeof window.attachEvent != 'undefined') {
        //.. win/ie
        window.attachEvent('onload', adjector_adSenseInit);
    } else {
        //.. mac/ie5 and anything else that gets this far
        //if there's an existing onload function
        if(typeof window.onload == 'function') {
            //store it
            var existing = onload;
            //add new onload handler
            window.onload = function() {
                //call existing onload function
                existing();
                //call adsense_init onload function
                adjector_adSenseInit();
            };
        } else {
            //setup onload function
            window.onload = adjector_adSenseInit;
        }
    }
}

adjector_adSenseDelivery();
Adjector
/Adjector-1.0b1.tar.gz/Adjector-1.0b1/adjector/public/js/adsense_tracker.js
adsense_tracker.js
import logging

#from suds import WebFault
from urllib import unquote_plus
from urllib2 import HTTPError

#from suds.sudsobject import asdict
#from adjector.core.cj_util import from_cj_date
from adjector.lib.cj_interface import get_cj_links
from adjector.lib.base import *

log = logging.getLogger(__name__)

class CjController(BaseController):
    errors = {}

    def start(self):
        if not conf.cj_api_key:
            return 'You must enter an api key in order to connect to Commission Junction.'

        # Fill websiteId field
        site_ids = []
        cj_sites = model.Location.query.filter(model.Location.cj_site_id != None)
        if cj_sites:
            site_ids = [[loc.cj_site_id, loc.title] for loc in cj_sites]

        if conf.cj_site_id and conf.cj_site_id not in [str(id) for id, title in site_ids]:
            # if your global site id is yet another thing
            site_ids.insert(0, [conf.cj_site_id, 'Global'])

        # Only show the box if necessary
        if len(site_ids) == 0:
            return 'You must enter at least one global or location-specific site id in order to connect to Commission Junction.'
        elif len(site_ids) == 1:
            c.form = forms.CJLinkSearchOneSite(action='/import/cj/search', value={'website_id': site_ids[0][0]})
        else:
            c.form = forms.CJLinkSearch(action='/import/cj/search', child_args={'website_id': dict(options = site_ids)})

        c.title = 'Import from Commission Junction'
        return render('common.form')

    @rest.dispatch_on(POST='do_search')
    def search(self):
        ''' redo last search '''
        if not session.has_key('last_search'):
            return redirect_to('/import/cj')

        last = session['last_search']

        if request.params.has_key('page') and request.params['page'] != last['form_result']['page_number']:
            # render new one, I guess
            self.form_result = last['form_result']
            self.form_result['page_number'] = request.params['page']
            return self._actually_do_search()

        c.links = last['links']
        c.total = last['total']
        c.page = last['page']
        c.count = last['count']
        c.per_page = last['per_page']

        self._process(c.links, all=True)

        c.title = 'Import from Commission Junction'
        return render('cj.links')

    @rest.restrict('POST')
    @validate(form=forms.CJLinkSearch, error_handler='list')
    def do_search(self):
        return self._actually_do_search()

    def _actually_do_search(self):
        if not conf.cj_api_key:
            return 'You must enter the necessary credentials in order to connect to Commission Junction.'

        result = self.form_result.copy()
        c.show_imported = self.form_result.pop('show_imported')
        c.show_ignored = self.form_result.pop('show_ignored')

        try:
            links, counts = get_cj_links(**self.form_result)
        except HTTPError, error:
            return 'Could not connect to Commission Junction.<br />Code: %s<br />Error: %s' % (error.code, error.msg)

        c.total = counts['total']
        c.page = counts['page']
        c.count = counts['count']
        c.per_page = self.form_result['records_per_page']

        if c.total == 0:
            return 'No Links Found'

        self._process(links)

        session['last_search'] = dict(form_result=result, links=c.links, total=c.total,
                                      page=c.page, count=c.count, per_page = c.per_page)
        session.save()

        c.title = 'Import from Commission Junction'
        return render('cj.links')

    def _process(self, links, all=False):
        session['cj_links'] = session.get('cj_links', {})
        c.links = []
        for link in links:
            # Add to session
            session['cj_links']['%s:%s' % (link['cj_site_id'], link['cj_link_id'])] = link

            # Filter more for what to display...
            # Check if ignored. If so, continue unless parameter sent.
            link['ignored'] = model.CJIgnoredLink.query.filter_by(cj_link_id = link['cj_link_id']).first()
            if link['ignored'] is not None and not (all or c.show_ignored):
                continue

            # Check to see if we already have this imported. If so, continue unless parameter sent
            link['creative'] = model.Creative.query.filter_by(cj_link_id = link['cj_link_id']).first()
            if link['creative'] is not None and not (all or c.show_imported):
                continue

            c.links.append(link)
        session.save()

    def process(self):
        ''' Process multiple links at once '''
        # See if we still have the links somewhere
        try:
            links = session['cj_links']
        except KeyError:
            session['message'] = 'Link storage error; try searching again before importing any links.'
            session.save()
            return redirect_to('/import/cj')

        idents = [unquote_plus(param) for param in request.params.keys() if ':' in unquote_plus(param)]

        ### ADD LINKS ###
        if request.params.has_key('import'):
            action, verb = self._add, 'imported'
        ### IGNORE LINKS ###
        elif request.params.has_key('ignore'):
            action, verb = self._ignore, 'ignored'
        ### UNIGNORE LINKS ###
        elif request.params.has_key('unignore'):
            action, verb = self._unignore, 'unignored'
        else:
            return redirect_to('/import/cj/search')

        count = 0
        self.updated = []
        for ident in idents:
            link = links[ident]
            count += action(link)

        if action == self._add:
            self._on_updates(self.updated)

        session['message'] = '%i links %s.' % (count, verb)
        if count < len(idents):
            session['message'] += ' %i links were already %s.' % (len(idents) - count, verb)
        session.save()
        return redirect_to('/import/cj/search')

    def _add(self, link):
        # Do we already have this one as a creative?
        if model.Creative.query.filter_by(cj_link_id = link['cj_link_id']).first() is not None:
            log.warn('Link id %s already added' % link['cj_link_id'])  #TODO: output message
            return False

        # Remove ignored tag if necessary
        ignored = model.CJIgnoredLink.query.filter_by(cj_link_id = link['cj_link_id']).first()
        if ignored:
            model.session.delete(ignored)

        # Create set if necessary
        theset = model.Set.query.filter_by(cj_advertiser_id = link['cj_advertiser_id']).first()
        if theset is None:
            theset = model.Set(dict(title = link['advertiser_name'], cj_advertiser_id = link['cj_advertiser_id']))
            self.updated.extend(theset._updated)

        # Import link to creative
        creative = model.Creative(dict([key, value] for key, value in link.iteritems()
                                       if key in model.Creative.__dict__))
        self.updated.extend(creative._updated)
        creative.parent = theset

        model.session.commit()
        return True

    def _ignore(self, link):
        ignored = model.CJIgnoredLink.query.filter_by(cj_link_id = link['cj_link_id']).first()
        if ignored:
            log.warn('Tried to ignore a link that was already ignored. Link id = %s' % link['cj_link_id'])
            return False
        model.CJIgnoredLink(link['cj_link_id'], link['cj_advertiser_id'])
        model.session.commit()
        return True

    def _unignore(self, link):
        ignored = model.CJIgnoredLink.query.filter_by(cj_link_id = link['cj_link_id']).first()
        if not ignored:
            log.warn('Tried to unignore a link that was not ignored. Link id = %s' % link['cj_link_id'])
            return False
        else:
            model.session.delete(ignored)
        model.session.commit()
        return True
Adjector
/Adjector-1.0b1.tar.gz/Adjector-1.0b1/adjector/controllers/cj.py
cj.py
import logging
import re

from paste.deploy.converters import asbool

from adjector.core.render import render_zone
from adjector.lib.base import *

log = logging.getLogger(__name__)

class ZoneController(ObjectController):
    native = model.Zone
    form = forms.Zone
    singular = 'zone'
    plural = 'zones'

    @rest.dispatch_on(POST='do_edit')
    def view(self, id):
        obj = self._obj(id)
        setattr(c, self.singular, obj)
        value = obj.value()
        value['preview'] = c.render = h.Markup(render_zone(id, track=False))
        child_args = dict(parent_id=dict(options=[''] + obj.possible_parents()))
        c.form = self.form(action=h.url_for(), value = value, child_args=child_args, edit=True)
        c.title = obj.title
        return render('view.zone')

    def render(self, ident):
        options = request.environ.get('adjector.options', {})
        # Read the parameter values themselves (not has_key results) so ?track=0 works
        if request.params.has_key('track'):
            options['track'] = asbool(request.params['track'])
        if request.params.has_key('admin'):
            options['admin'] = asbool(request.params['admin'])
        return render_zone(ident, **options)

    def render_js(self, ident):
        ''' Render ads through a javascript tag

            Usage Example:
            <script type='text/javascript' src='http://localhost:5000/RENDER_BASE_URL/zone/NAME/render.js?track=0' />
            Where RENDER_BASE_URL is the url you specified in your .ini file and NAME is your ad name.
        '''
        options = request.environ.get('adjector.options', {})
        if request.params.has_key('track'):
            options['track'] = asbool(request.params['track'])
        if request.params.has_key('admin'):
            options['admin'] = asbool(request.params['admin'])

        rendered = render_zone(ident, **options)

        wrapper = '''document.write('%s')'''
        # Do some quick substitutions to inject... #TODO there must be an existing function that does this
        rendered = re.sub(r"'", r"\'", rendered)  # escape quotes
        rendered = re.sub(r"[\r\n]", r"", rendered)  # remove line breaks

        response.headers['content-type'] = 'text/javascript; charset=utf8'
        return wrapper % rendered
Adjector
/Adjector-1.0b1.tar.gz/Adjector-1.0b1/adjector/controllers/zone.py
zone.py
import pylons

from paste.deploy import loadapp
from webob.exc import HTTPNotFound
#from paste.recursive import Includer

from adjector.core.conf import conf

class AdjectorMiddleware(object):
    def __init__(self, app, config):
        self.app = app
        #raw_config = appconfig('config:%s' % config['__file__'], name='adjector')
        #self.path = adjector_config_raw.local_conf.get('base_url', '/adjector')
        self.adjector_app = loadapp('config:%s' % config['__file__'], name='adjector')
        self.path = conf.base_url

        # Remove the adjector config from the config stack; otherwise the host app gets *very* confused
        # We should be done initializing adjector, so this isn't used again anyways.
        # The RegistryMiddleware takes care of this from now on (during requests).
        process_configs = pylons.config._process_configs
        adjector_dict = [dic for dic in process_configs if dic['pylons.package'] == 'adjector'][0]
        process_configs.remove(adjector_dict)

    def __call__(self, environ, start_response):
        if self.path and environ['PATH_INFO'].startswith(self.path):
            #environ['PATH_INFO'] = environ['PATH_INFO'][len(self.path):] or '/'
            #environ['SCRIPT_NAME'] = self.path
            return self.adjector_app(environ, start_response)
        else:
            #environ['adjector.app'] = self.adjector_app
            #environ['adjector.include'] = Includer(self.adjector_app, environ, start_response)
            return self.app(environ, start_response)

def make_middleware(app, global_conf, **app_conf):
    return AdjectorMiddleware(app, global_conf)

def null_middleware(global_conf, **app_conf):
    return lambda app: app

class FilterWith(object):
    def __init__(self, app, filter, path):
        self.app = app
        self.filter = filter
        self.path = path

    def __call__(self, environ, start_response):
        if self.path and environ['PATH_INFO'].startswith(self.path):
            environ['PATH_INFO'] = environ['PATH_INFO'][len(self.path):] or '/'
            environ['SCRIPT_NAME'] += self.path
            return self.filter(environ, start_response)
        else:
            return self.app(environ, start_response)

class FilteredApp(object):
    ''' Only allow access when path_info starts with 'path', otherwise throw 404.
        This can't be a subclass of StaticURLParser because that creates new instances of its __class__
    '''
    def __init__(self, app, path):
        self.app = app
        self.path = path

    def __call__(self, environ, start_response):
        if self.path and environ['PATH_INFO'].startswith(self.path):
            environ['PATH_INFO'] = environ['PATH_INFO'][len(self.path):] or '/'
            environ['SCRIPT_NAME'] += self.path
            return self.app(environ, start_response)
        else:
            raise HTTPNotFound()

class StripTrailingSlash(object):
    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        environ['PATH_INFO'] = environ.get('PATH_INFO', '').rstrip('/')
        return self.app(environ, start_response)
Adjector
/Adjector-1.0b1.tar.gz/Adjector-1.0b1/adjector/lib/middleware.py
middleware.py
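All three middleware classes above share one WSGI idiom: compare `PATH_INFO` against a prefix, and if it matches, move the prefix onto `SCRIPT_NAME` before delegating. That idiom can be sketched in isolation (a minimal illustration with made-up toy apps, not Adjector's actual middleware; written in modern Python):

```python
def make_prefix_router(inner_app, outer_app, path):
    """Route requests under `path` to inner_app, everything else to outer_app,
    shifting the prefix from PATH_INFO onto SCRIPT_NAME as FilterWith does."""
    def router(environ, start_response):
        if path and environ.get('PATH_INFO', '').startswith(path):
            # '/adjector/zone/1' -> PATH_INFO='/zone/1', SCRIPT_NAME='/adjector'
            environ['PATH_INFO'] = environ['PATH_INFO'][len(path):] or '/'
            environ['SCRIPT_NAME'] = environ.get('SCRIPT_NAME', '') + path
            return inner_app(environ, start_response)
        return outer_app(environ, start_response)
    return router

# Toy WSGI apps standing in for the adjector app and the host app
inner = lambda environ, start_response: [b'inner:' + environ['PATH_INFO'].encode()]
outer = lambda environ, start_response: [b'outer']
app = make_prefix_router(inner, outer, '/adjector')
```

The `or '/'` matters: when the request is for the prefix itself, `PATH_INFO` would otherwise become the empty string, which many frameworks treat as malformed. `AdjectorMiddleware` keeps that shifting commented out because the mounted Pylons app reads its base URL from its own config instead.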
import logging

import adjector.forms as forms #IGNORE:W0611
import adjector.model as model #IGNORE:W0611

from datetime import datetime, timedelta
from pylons import g, request, response, session, tmpl_context as c #IGNORE:W0611
from pylons.controllers import WSGIController
from pylons.controllers.util import abort, redirect_to #IGNORE:W0611
from pylons.decorators import rest #pylint: disable-msg=E0611,W0611
from paste.recursive import ForwardRequestException #pylint: disable-msg=E0611,W0611
from pylons.templating import render #IGNORE:W0611
from sqlalchemy import and_, asc, desc, func, or_ #IGNORE:W0611
from tw.api import WidgetBunch #IGNORE:W0611
from tw.mods.pylonshf import validate #IGNORE:W0611
from webob.exc import HTTPNotFound

from adjector.core.conf import conf #IGNORE:W0611
from adjector.model import meta
from adjector.model.entities import CircularDependencyException
from adjector.lib import helpers as h #IGNORE:W0611
from adjector.lib.precache import precache_zone
from adjector.lib.util import FormProxy

log = logging.getLogger(__name__)

class BaseController(WSGIController):

    def __call__(self, environ, start_response):
        '''Invoke the Controller'''
        # WSGIController.__call__ dispatches to the Controller method
        # the request is routed to. This routing information is
        # available in environ['pylons.routes_dict']
        try:
            return WSGIController.__call__(self, environ, start_response)
        finally:
            meta.Session.remove()

    def __init__(self):
        WSGIController.__init__(self)
        if session.has_key('message'):
            c.session_message = session['message']
            del session['message']
            session.save()

    def _on_updates(self, updated):
        ''' Do things to dirty objects '''
        # precaching
        creatives = [obj for obj in updated if isinstance(obj, model.Creative)]
        zones = [obj for obj in updated if isinstance(obj, model.Zone)]

        # if no creatives modified, we only have to modify the changed zones
        if not creatives:
            return [precache_zone(zone) for zone in zones]

        # If creatives modified, we need to figure out what zones they belonged to
        # and totally redo those
        for creative in creatives:
            for pair in creative.creative_zone_pairs:
                zones.append(pair.zone)

        # now that we know what zones we *definitely* need to refresh...
        for zone in model.Zone.query():
            if zone in zones:
                # totally redo all weights for this zone
                precache_zone(zone)
            else:
                # only redo if the creatives NOW will be in that zone
                precache_zone(zone, [c.id for c in creatives])

class ObjectController(BaseController):
    native = None
    form_proxy = FormProxy()
    singular = None
    plural = None

    def __init__(self):
        BaseController.__init__(self)
        if self.form:
            self.form_proxy.set(self.form)

    def _obj(self, id):
        obj = self.native.get(int(id))
        if obj is None:
            raise HTTPNotFound('%s not found' % self.singular.title())
        return obj

    def list(self):
        query = self.native.query()
        setattr(c, self.plural, self.native.query())
        c.title = self.plural.title()
        return render('list.%s' % self.plural)

    @rest.dispatch_on(POST='do_edit')
    def view(self, id):
        obj = self._obj(id)
        setattr(c, self.singular, obj)
        child_args = dict(parent_id=dict(options=[''] + obj.possible_parents(obj)))
        c.form = self.form(action=h.url_for(), value=obj.value(), child_args=child_args, edit=True)
        c.title = obj.title
        return render('view.%s' % self.singular)

    @rest.dispatch_on(POST='do_new')
    def new(self):
        child_args = dict(parent_id=dict(options=[''] + self.native.possible_parents()))
        c.form = self.form(action=h.url_for(), value=dict(request.params), child_args=child_args, edit=False)
        c.title = 'New %s' % self.singular.title()
        return render('common.form')

    @rest.restrict('POST')
    @validate(form=form_proxy, error_handler='new') #pylint: disable-msg=E0602
    def do_new(self):
        try:
            obj = self.native(self.form_result)
            model.session.commit()
            self._on_updates(obj._updated)
            session['message'] = 'Changes saved.'
        except CircularDependencyException:
            model.session.rollback()
            session['message'] = 'Assigning that set/location creates a cycle. Don\'t do that!'
        session.save()
        return redirect_to(obj.view())

    @rest.restrict('POST')
    @validate(form=form_proxy, error_handler='view') #pylint: disable-msg=E0602
    def do_edit(self, id):
        obj = self._obj(id)

        if request.POST.has_key('delete'):
            return self._delete(obj)

        try:
            updates = obj.set(self.form_result)
            model.session.commit()
            self._on_updates(updates)
            session['message'] = 'Changes saved.'
        except CircularDependencyException:
            model.session.rollback()
            session['message'] = 'Assigning that set/location creates a cycle. Don\'t do that!'
        session.save()
        return redirect_to(obj.view())

    def _delete(self, obj):
        obj.delete()
        model.session.commit()
        session['message'] = '%s deleted.' % self.singular.title()
        session.save()
        return redirect_to(h.url_for(action='list'))

class ContainerObjectController(ObjectController):

    def list(self):
        setattr(c, self.plural, self.native.query.filter_by(parent_id=None))
        c.title = self.plural.title()
        return render('list.%s' % self.plural)

    def _delete(self, obj):
        updated = obj.delete()
        model.session.commit()
        self._on_updates(updated)
        session['message'] = '%s deleted.' % self.singular.title()
        session.save()
        return redirect_to(h.url_for(action='list'))
Adjector
/Adjector-1.0b1.tar.gz/Adjector-1.0b1/adjector/lib/base.py
base.py
#import os.path
#from suds.client import Client
#from libxml2 import parseDoc

from xml.dom.minidom import parseString
from urllib import urlencode
from urllib2 import urlopen, Request

import adjector.model as model
from adjector.core.conf import conf
from adjector.core.cj_util import from_cj_date

#def get_link_search_client():
#    cj_linksearch_wsdl = os.path.join(conf.root, 'external', 'CJ_LinkSearchServiceV2.0.wsdl')
#    return Client('file://' + cj_linksearch_wsdl)
#
#def get_link_search_defaults():
#    return dict(developerKey = conf.cj_api_key,
#                advertiserIds = 'joined',
#                language = 'en',
#                #linkSize = '300x250 Medium Rectangle',
#                serviceableArea = 'US',
#                #promotionEndDate = 'Ongoing',
#                #sortBy = 'linkType',
#                sortOrder = 'desc',
#                startAt = 0,
#                maxResults = 100)

#Note: change this for debugging so you don't hammer CJ

def get_link_property(link, property):
    child = link.getElementsByTagName(property)[0].firstChild
    if not child:
        return ''
    return str(child.toxml())

def get_cj_links(**kwargs):
    params = {}
    for k, v in kwargs.iteritems():
        params[k.replace('_', '-')] = v
    params.update({'advertiser-ids': 'joined'})

    req = Request('https://linksearch.api.cj.com/v2/link-search?%s' % urlencode(params),
                  headers = {'authorization': conf.cj_api_key})
    result = urlopen(req).read()
    doc = parseString(result)

    links_attr = doc.getElementsByTagName('links')[0]
    total = int(links_attr.getAttribute('total-matched'))
    count = int(links_attr.getAttribute('records-returned'))
    page = int(links_attr.getAttribute('page-number'))

    links = []
    now = model.tz_now()
    for link in doc.getElementsByTagName('link'):
        if get_link_property(link, 'relationship-status') != 'joined':
            # There is no need to show links from advertisers we won't make $$ from
            continue
        if get_link_property(link, 'promotion-end-date') and \
           from_cj_date(get_link_property(link, 'promotion-end-date')) < now:
            # Don't show expired links
            continue

        links.append(dict(
            title = get_link_property(link, 'link-name'),
            html = get_link_property(link, 'link-code-html').replace('&lt;', '<').replace('&gt;', '>'),
            is_text = get_link_property(link, 'link-type') == 'Text Link',
            width = int(get_link_property(link, 'creative-width')),
            height = int(get_link_property(link, 'creative-height')),
            start_date = from_cj_date(get_link_property(link, 'promotion-start-date')),
            end_date = from_cj_date(get_link_property(link, 'promotion-end-date')),
            cj_link_id = int(get_link_property(link, 'link-id')),
            cj_advertiser_id = int(get_link_property(link, 'advertiser-id')),
            cj_site_id = int(params['website-id']),

            # Values not stored by adjector
            description = get_link_property(link, 'description'),
            link_type = get_link_property(link, 'link-type'),
            advertiser_name = get_link_property(link, 'advertiser-name'),
            promo_type = get_link_property(link, 'promotion-type'),
            seven_day_epc = get_link_property(link, 'seven-day-epc'),
            three_month_epc = get_link_property(link, 'three-month-epc'),
            click_commission = get_link_property(link, 'click-commission'),
            lead_commission = get_link_property(link, 'lead-commission'),
            sale_commission = get_link_property(link, 'sale-commission'),
        ))

    return links, {'count': count, 'total': total, 'page': page}
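The response handling in `get_cj_links` boils down to walking the CJ XML with `xml.dom.minidom`: read counts off the `links` element's attributes, then pull text out of each `link`'s child tags. A minimal runnable sketch of that same pattern, using an invented sample payload (a real CJ response carries many more fields):

```python
from xml.dom.minidom import parseString

# Invented miniature of the CJ link-search response shape.
SAMPLE = '''<cj-api>
  <links total-matched="1" records-returned="1" page-number="1">
    <link>
      <link-name>Example Banner</link-name>
      <relationship-status>joined</relationship-status>
    </link>
  </links>
</cj-api>'''

def text_of(node, tag):
    # Same idea as get_link_property(): first child of the named tag, or ''.
    child = node.getElementsByTagName(tag)[0].firstChild
    return str(child.toxml()) if child else ''

doc = parseString(SAMPLE)
links_attr = doc.getElementsByTagName('links')[0]
total = int(links_attr.getAttribute('total-matched'))

link = doc.getElementsByTagName('link')[0]
name = text_of(link, 'link-name')
```

Note that `getElementsByTagName('link')` matches only elements named exactly `link`, so the enclosing `links` wrapper is not swept up in the loop.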
Adjector
/Adjector-1.0b1.tar.gz/Adjector-1.0b1/adjector/lib/cj_interface.py
cj_interface.py
from __future__ import division

import logging

from sqlalchemy import and_, func, or_
from sqlalchemy.sql import case, join, select, subquery

from adjector.core.conf import conf
import adjector.model as model

log = logging.getLogger(__name__)

def precache_zone(zone, only_if_creative_ids = None):
    '''
    Precache data for creatives for this Zone.  Access by id or name.

    Respect all zone requirements.
    Use creative weights and their containing set weights to weight randomness.
    If zone.normalize_by_container, normalize creatives by the total weight of the set they are in,
    so the total weight of the creatives directly in any set is always 1.
    If block and text ads can be shown, a decision will be made to show one or the other
    based on the total probability of each type of creative.

    Note that this function is called each time you update a relevant creative or zone.
    '''

    # Find zone site_id, if applicable.  Default to global site_id, or else None.
    cj_site_id = zone.parent_cj_site_id or conf.cj_site_id

    #print 'precaching zone %s with oici %s' % (zone.id, only_if_creative_ids)

    # FILTERING
    # Figure out what kind of creative we need

    # Size filtering
    whereclause_zone = and_(or_(and_(model.Creative.width >= zone.min_width,
                                     model.Creative.width <= zone.max_width,
                                     model.Creative.height >= zone.min_height,
                                     model.Creative.height <= zone.max_height),
                                model.Creative.is_text == True),

                            # Date filtering
                            or_(model.Creative.start_date == None, model.Creative.start_date <= func.now()),
                            or_(model.Creative.end_date == None, model.Creative.end_date >= func.now()),

                            # Site Id filtering
                            or_(model.Creative.cj_site_id == None,
                                model.Creative.cj_site_id == cj_site_id,
                                and_(conf.enable_cj_site_replacements,
                                     cj_site_id != None,
                                     model.Creative.cj_site_id != None)),

                            # Disabled?
                            model.Creative.disabled == False)

    # Sanity check - this shouldn't ever happen
    if zone.num_texts == 0:
        zone.creative_types = 2

    # Filter by text or block if needed.  If you want both we do some magic later.
    # But first we will need to find out how much of each we have, weight wise.
    # Also we delete all of the ones that we won't need
    if zone.creative_types == 1:
        zone.total_text_weight = 1.0
        whereclause_zone.append(model.Creative.is_text == True)
        [model.session.delete(pair) for pair in model.CreativeZonePair.query.filter_by(zone_id = zone.id, is_text = False)]
    elif zone.creative_types == 2:
        zone.total_text_weight = 0.0
        whereclause_zone.append(model.Creative.is_text == False)
        [model.session.delete(pair) for pair in model.CreativeZonePair.query.filter_by(zone_id = zone.id, is_text = True)]

    # Bail if the edited creative won't go in here; no sense in redoing everything...
    if only_if_creative_ids and not model.Creative.query.filter(and_(whereclause_zone, model.Creative.id in only_if_creative_ids)).first():
        return

    #print 'continuing'

    # WEIGHING

    creatives = model.Creative.table

    # First let's figure how to normalize by how many items will be displayed.  This ensures all items are displayed equally.
    # We want this to be 1 for blocks and num_texts for texts.  Also throw in the zone.weight_texts
    #items_displayed = cast(creatives.c.is_text, Integer) * (zone.num_texts - 1) + 1
    text_weight_adjust = case([(True, zone.weight_texts / zone.num_texts), (False, 1)], creatives.c.is_text)

    if zone.normalize_by_container:
        # Find the total weight of each parent in order to normalize
        parent_weights = subquery('parent_weight',
                                  [creatives.c.parent_id,
                                   func.sum(creatives.c.parent_weight * creatives.c.weight).label('pw_total')],
                                  group_by=creatives.c.parent_id)

        # Join creatives table and normalized weight table - I'm renaming fields here to make life easier down the line
        # SA was insisting on doing a full subquery anyways (I just wanted a join)
        c1 = subquery('c1',
                      [creatives.c.id.label('id'),
                       creatives.c.is_text.label('is_text'),
                       (creatives.c.weight * creatives.c.parent_weight * text_weight_adjust /
                        case([(parent_weights.c.pw_total > 0, parent_weights.c.pw_total)], else_ = None)
                        ).label('normalized_weight')],  # Make sure we can't divide by 0
                      whereclause_zone,  # here go our filters
                      from_obj=join(creatives, parent_weights,
                                    or_(creatives.c.parent_id == parent_weights.c.parent_id,
                                        and_(creatives.c.parent_id == None,
                                             parent_weights.c.parent_id == None)))).alias('c1')
    else:
        # We don't normalize weight by parent weight, so we don't need fancy joins
        c1 = subquery('c1',
                      [creatives.c.id.label('id'),
                       creatives.c.is_text.label('is_text'),
                       (creatives.c.weight * creatives.c.parent_weight * text_weight_adjust).label('normalized_weight')],
                      whereclause_zone)

    #for a in model.session.execute(c1).fetchall(): print a

    if zone.creative_types == 0:  # (Either type)
        # Now that we have our weights in order, let's figure out how many of each thing (text/block) we have, weightwise.
        # This will let us choose texts OR blocks later
        texts_weight = select([func.sum(c1.c.normalized_weight)], c1.c.is_text == True).scalar() or 0
        blocks_weight = select([func.sum(c1.c.normalized_weight)], c1.c.is_text == False).scalar() or 0

        if texts_weight + blocks_weight == 0:
            return _on_empty_zone(zone)

        total_weight = texts_weight + blocks_weight
        zone.total_text_weight = texts_weight / total_weight

        c1texts = subquery('c1', [c1.c.id, c1.c.normalized_weight], c1.c.is_text == True)
        c1blocks = subquery('c1', [c1.c.id, c1.c.normalized_weight], c1.c.is_text == False)

        _finish_precache(c1texts, texts_weight, zone, True)
        _finish_precache(c1blocks, blocks_weight, zone, False)
    else:
        # Find total normalized weight of all creatives in order to normalize *that*
        total_weight = select([func.sum(c1.c.normalized_weight)])  #.scalar() or 0
        if total_weight == 0:
            return _on_empty_zone(zone)

        _finish_precache(c1, total_weight, zone, zone.creative_types == 1)

def _finish_precache(c1, total_weight, zone, is_text):
    c2 = c1.alias('c2')

    # Find the total weight above a creative in the table in order to form weighted bins for the random number generator
    # Note that this is the upper bound, not the lower (if it was the lower it could be NULL)
    incremental_weight = select([func.sum(c1.c.normalized_weight) / total_weight], c1.c.id <= c2.c.id, from_obj=c1)

    # Get everything into one thing
    # Lower bound = inc_weight - final weight, upper_bound = inc_weight
    shmush = select([c2.c.id,
                     incremental_weight.label('inc_weight'),
                     (c2.c.normalized_weight / total_weight).label('final_weight')],
                    from_obj=c2).alias('shmush')

    #for a in model.session.execute(shmush).fetchall(): print a

    creatives = model.session.execute(shmush).fetchall()
    for creative in creatives:
        # current pair?
        pair = model.CreativeZonePair.query.filter_by(zone_id = zone.id, creative_id = creative['id']).first()
        if pair:
            pair.set(dict(is_text = is_text,
                          lower_bound = creative['inc_weight'] - creative['final_weight'],
                          upper_bound = creative['inc_weight']))
        else:
            pair = model.CreativeZonePair(dict(zone_id = zone.id,
                                               creative_id = creative['id'],
                                               is_text = is_text,
                                               lower_bound = creative['inc_weight'] - creative['final_weight'],
                                               upper_bound = creative['inc_weight']))

    # Delete old cache objects
    for pair in model.CreativeZonePair.query.filter(and_(model.CreativeZonePair.zone_id == zone.id,
                                                         model.CreativeZonePair.is_text == is_text,
                                                         model.CreativeZonePair.creative_id not in [creative['id'] for creative in creatives])):
        model.session.delete(pair)

    model.session.commit()

def _on_empty_zone(zone):
    for pair in model.CreativeZonePair.query.filter(model.Zone.id == zone.id):
        model.session.delete(pair)
    model.session.commit()
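The `lower_bound`/`upper_bound` columns that `_finish_precache` writes partition [0, 1) into weighted bins, so the ad server can select a creative with a single uniform random draw. A pure-Python sketch of the same binning idea (names and sample weights invented; the real code computes the bounds in SQL):

```python
import random

def make_bins(weights):
    # weights: {creative_id: normalized weight}.  Bins tile [0, 1) in id order,
    # each bin's width proportional to its creative's share of the total weight.
    total = float(sum(weights.values()))
    bins = []
    upper = 0.0
    for cid, w in sorted(weights.items()):
        lower = upper
        upper += w / total
        bins.append((cid, lower, upper))
    return bins

def pick(bins, r=None):
    # One uniform draw lands in exactly one bin.
    r = random.random() if r is None else r
    for cid, lower, upper in bins:
        if lower <= r < upper:
            return cid
    return bins[-1][0]  # guard against float round-off at the top edge

bins = make_bins({1: 1.0, 2: 3.0})
```

With these weights, creative 2 owns three quarters of the unit interval, so it is served three times as often as creative 1.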
Adjector
/Adjector-1.0b1.tar.gz/Adjector-1.0b1/adjector/lib/precache.py
precache.py
import logging, re, time
from datetime import datetime, timedelta

from dateutil.relativedelta import relativedelta
from paste.deploy.converters import asbool
from tw.forms.validators import FancyValidator, FormValidator, Invalid, UnicodeString, Wrapper

from adjector.core.conf import conf

AsBool = Wrapper(to_python=asbool)

log = logging.getLogger(__name__)

class DateTime(FancyValidator):
    strip = True
    end_interval = False

    messages = {
        'invalidDate': 'Enter a valid date of the form YYYY-MM-DD HH:MM:SS.  You may leave off anything but the year.'
    }

    def _to_python(self, value, state):
        formats = ['%Y-%m-%d %H:%M:%S', '%Y-%m-%d %H:%M', '%Y-%m-%d', '%Y-%m', '%Y']
        add_if_end = [None,
                      relativedelta(seconds=59),
                      relativedelta(days=1, seconds=-1),
                      relativedelta(months=1, seconds=-1),
                      relativedelta(years=1, seconds=-1)]

        for format, aie in zip(formats, add_if_end):
            try:
                dt = datetime(*(time.strptime(value, format)[0:6]))
                if self.end_interval and aie:
                    dt += aie
                return conf.timezone.localize(dt)
            except ValueError, e:
                log.debug('Validation error %s' % e)

        raise Invalid(self.message('invalidDate', state), value, state)

    def _from_python(self, value, state):
        return value.strftime('%Y-%m-%d %H:%M:%S')

class SimpleString(UnicodeString):
    messages = {
        'invalidString': 'May only contain alphanumerics, underscores, periods, and dashes.'
    }

    def validate_python(self, value, state):
        UnicodeString.validate_python(self, value, state)
        if re.search(r'[^\w\-.]', value):
            raise Invalid(self.message('invalidString', state), value, state)

# From Siafoo
class UniqueValue(FormValidator):
    validate_partial_form = True

    value_field = ''
    previous_value_field = ''
    unique_test = None  # A function that gets passed the new value to test for uniqueness.
                        # Should return trueish or falsish
    not_empty = True

    __unpackargs__ = ('unique_test', 'value_field', 'previous_value_field')

    messages = {
        'notUnique': 'You must enter a unique value'
    }

    def validate_partial(self, field_dict, state):
        for name in [self.value_field, self.previous_value_field]:
            if name and not field_dict.has_key(name):
                return
        self.validate_python(field_dict, state)

    def validate_python(self, field_dict, state):
        FormValidator.validate_python(self, field_dict, state)
        value = field_dict.get(self.value_field)
        previous_value = field_dict.get(self.previous_value_field)

        if (not self.not_empty or value == '') and value != previous_value and not self.unique_test(value):
            errors = {self.value_field: self.message('notUnique', state)}
            error_list = errors.items()
            error_list.sort()
            error_message = '<br>\n'.join(['%s: %s' % (name, value) for name, value in error_list])
            raise Invalid(error_message, field_dict, state, error_dict=errors)
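The `DateTime` validator tries successively coarser formats and, when `end_interval` is set, snaps the parsed value to the last second of the stated interval (so `2009-06` as an end date means the end of June). A stdlib-only sketch of that fallback logic (the real validator uses `dateutil.relativedelta` for the offsets and localizes the result to `conf.timezone`):

```python
import calendar
import time
from datetime import datetime, timedelta

def parse(value, end_interval=False):
    # Try the most specific format first, falling back to coarser ones.
    formats = ['%Y-%m-%d %H:%M:%S', '%Y-%m-%d %H:%M', '%Y-%m-%d', '%Y-%m', '%Y']
    for i, format in enumerate(formats):
        try:
            dt = datetime(*(time.strptime(value, format)[0:6]))
        except ValueError:
            continue
        if end_interval:
            if i == 1:    # minute precision -> last second of that minute
                dt += timedelta(seconds=59)
            elif i == 2:  # day precision -> last second of that day
                dt += timedelta(days=1, seconds=-1)
            elif i == 3:  # month precision -> last second of that month
                last_day = calendar.monthrange(dt.year, dt.month)[1]
                dt = dt.replace(day=last_day, hour=23, minute=59, second=59)
            elif i == 4:  # year precision -> last second of that year
                dt = dt.replace(month=12, day=31, hour=23, minute=59, second=59)
        return dt
    raise ValueError('unparseable date: %r' % value)
```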
Adjector
/Adjector-1.0b1.tar.gz/Adjector-1.0b1/adjector/forms/validators.py
validators.py
import tw.api as twa
import tw.forms as twf
from formencode.schema import Schema
from tw.forms.validators import Int, Number, UnicodeString

from adjector.core.conf import conf
import adjector.model as model
from adjector.forms.validators import *

class FilteringSchema(Schema):
    allow_extra_fields = True
    filter_extra_fields = True

class GenericField(twf.FormField):
    template = 'genshi:adjector.templates.form.widgets.generic_field'

# Some shortcuts
UnicodeEmptyString = UnicodeString(strip=True, not_empty=False, if_missing=None)
UnicodeNonEmptyString = UnicodeString(strip=True, not_empty=True, max=80)
IntMissing = Int(if_missing=None)
PositiveInt = Int(min=0, if_missing=None)

class CreativeForm(twf.ListForm):
    '''Creative creation form'''

    class fields(twa.WidgetsList):
        preview = GenericField(label_text='Preview', validator=None, edit_only=True)
        title = twf.TextField(label_text='Title', validator=UnicodeNonEmptyString, size=50)
        parent_id = twf.SingleSelectField(label_text='Set', validator=Int)
        is_text = twf.SingleSelectField(label_text='Type', options=[[0, 'Block'], [1, 'Text']],
                                        validator=AsBool(not_empty=True))
        weight = twf.TextField(label_text='Weight', default=1.0, validator=Number(not_empty=True, min=0.0))
        total_weight = twf.TextField(label_text='Total Weight', validator=None, disabled=True, edit_only=True)
        html = twf.TextArea(label_text='HTML', cols=100, rows=10,
                            validator=UnicodeString(strip=True, not_empty=False, if_missing=''))
        html_tracked = twf.TextArea(label_text='HTML with Tracking Code', cols=100, rows=10,
                                    validator=None, disabled=True, edit_only=True)
        add_tracking = twf.CheckBox(label_text='Add Tracking',
                                    help_text='Won\'t be used unless enable_adjector_click_tracking is true.',
                                    default=True)
        width = twf.TextField(label_text='Width', validator=PositiveInt)
        height = twf.TextField(label_text='Height', validator=PositiveInt)
        start_date = twf.TextField(label_text='Start Date', validator=DateTime(if_missing=None))
        end_date = twf.TextField(label_text='End Date', validator=DateTime(end_interval=True, if_missing=None))
        disabled = twf.CheckBox(label_text='Disabled', default=False)
        delete = twf.SubmitButton(default='Delete', named_button=True, validator=None, edit_only=True)

    validator = FilteringSchema()
    template = 'genshi:adjector.templates.form.basic'

Creative = CreativeForm('new_creative')

class LocationForm(twf.ListForm):
    '''Location creation form'''

    class fields(twa.WidgetsList):
        title = twf.TextField(label_text='Title', validator=UnicodeNonEmptyString, size=50)
        parent_id = twf.SingleSelectField(label_text='Parent Location', validator=Int)
        description = twf.TextArea(label_text='Description', cols=100, rows=5, validator=UnicodeEmptyString)
        cj_site_id = twf.TextField(label_text='CJ Site ID', validator=Int(min=0))
        delete = twf.SubmitButton(default='Delete', named_button=True, validator=None, edit_only=True)

    validator = FilteringSchema()
    template = 'genshi:adjector.templates.form.basic'

Location = LocationForm()

class SetForm(twf.ListForm):
    '''Set creation form'''

    class fields(twa.WidgetsList):
        title = twf.TextField(label_text='Title', validator=UnicodeNonEmptyString, size=50)
        parent_id = twf.SingleSelectField(label_text='Parent Set', validator=Int)
        weight = twf.TextField(label_text='Weight', default=1.0, validator=Number(not_empty=True, min=0.0))
        total_weight = twf.TextField(label_text='Total Weight', validator=None, disabled=True, edit_only=True)
        description = twf.TextArea(label_text='Description', cols=100, rows=5, validator=UnicodeEmptyString)
        delete = twf.SubmitButton(default='Delete', named_button=True, validator=None, edit_only=True)

    validator = FilteringSchema()
    template = 'genshi:adjector.templates.form.basic'

Set = SetForm()

class ZoneForm(twf.ListForm):
    '''Zone creation form'''

    class fields(twa.WidgetsList):
        preview = GenericField(label_text='Preview', validator=None, edit_only=True)
        title = twf.TextField(label_text='Title', validator=UnicodeNonEmptyString, size=50)
        name = twf.TextField(label_text='Unique Name',
                             help_text='optional; an alternate way to access the zone',
                             validator = SimpleString(strip=True, not_empty=False, max=80))
        parent_id = twf.SingleSelectField(label_text='Location', validator=Int,
                                          help_text='necessary for imported creatives to display here')
        creative_types = twf.SingleSelectField(label_text='Show Creative Types',
                                               options=[[0, 'Blocks and Text'], [1, 'Text Only'], [2, 'Blocks Only']],
                                               validator=Int(not_empty=True, min=0, max=2))
        description = twf.TextArea(label_text='Description', cols=100, rows=5, validator=UnicodeEmptyString)
        min_width = twf.TextField(label_text='Min Width', validator=PositiveInt)
        max_width = twf.TextField(label_text='Max Width', validator=PositiveInt)
        min_height = twf.TextField(label_text='Min Height', validator=PositiveInt)
        max_height = twf.TextField(label_text='Max Height', validator=PositiveInt)
        num_texts = twf.TextField(label_text='Number of Text Creatives to Show', default=1,
                                  validator=Int(not_empty=False, min=1, if_missing=1))
        before_all_text = twf.TextArea(label_text='Before All Text Creatives', validator=UnicodeEmptyString, cols=100)
        after_all_text = twf.TextArea(label_text='After All Text Creatives', validator=UnicodeEmptyString, cols=100)
        before_each_text = twf.TextArea(label_text='Before Each Text Creative', validator=UnicodeEmptyString, cols=100)
        after_each_text = twf.TextArea(label_text='After Each Text Creative', validator=UnicodeEmptyString, cols=100)
        weight_texts = twf.TextField(label_text='Adjust Weight for Text Creatives (Blocks = 1.0)', default=1.0,
                                     validator=Number(not_empty=True, min=0.0, if_missing=1.0))
        normalize_by_container = twf.CheckBox(label_text='Normalize By Container')
        previous_name = twf.HiddenField(validator=UnicodeString(strip=True, not_empty=False, max=80, if_missing=None))
        delete = twf.SubmitButton(default='Delete', named_button=True, validator=None, edit_only=True)

    validator = FilteringSchema(chained_validators=[
        UniqueValue(lambda name: model.Zone.query.filter_by(name=name).count() == 0,
                    'name', 'previous_name', not_empty=False)
    ])
    template = 'genshi:adjector.templates.form.basic'

Zone = ZoneForm('new_zone')

class CJLinkSearchFields(twa.WidgetsList):
    # Some of these don't seem to be supported by the REST interface.  Thanks CJ, I really appreciate it.
    keywords = twf.TextField(label_text='Keywords',
                             help_text = 'space separated, +keyword requires a keyword, -is a not, default is or operation',
                             validator = UnicodeString(strip=True, not_empty=False))
#    link_size = twf.SingleSelectField(label_text='Size', validator = UnicodeString(strip=True, not_empty=False),
#                                      options=['', '88x31 Micro Bar', '120x60 Button 2', '120x90 Button 1',
#                                               '150x50 Banner', '234x60 Half Banner', '468x60 Full Banner', '125x125 Square Button', '180x150 Rectangle',
#                                               '250x250 Square Pop-Up', '300x250 Medium Rectangle', '336x280 Large Rectangle', '240x400 Vertical Rectangle',
#                                               '120x240 Vertical Banner', '120x600 Skyscraper', '160x600 Wide Skyscraper', 'Other'])
    link_type = twf.SingleSelectField(label_text='Type', validator = UnicodeString(strip=True, not_empty=False),
                                      options=['', 'Banner', 'Advanced Link', 'Text Link', 'Content Link', 'SmartLink',
                                               'Product Catalog', 'Advertiser SmartZone', 'Keyword Link'])
    promotion_start_date = twf.TextField(label_text='Start Date', help_text = 'Format: MM/DD/YYYY',
                                         validator = UnicodeString(strip=True, not_empty=False))
    promotion_end_date = twf.TextField(label_text='End Date',
                                       help_text = 'Format: MM/DD/YYYY or "Ongoing" for only links with no end date',
                                       validator = UnicodeString(strip=True, not_empty=False))
    promotion_type = twf.SingleSelectField(label_text='Promotion Type',
                                           help_text = 'Required if Start or End date given',
                                           validator = UnicodeString(strip=True, not_empty=False),
                                           options = ['', ['coupon', 'Coupon'], ['sweepstakes', 'Sweepstakes'],
                                                      ['product', 'Product'], ['sale', 'Sale'],
                                                      ['free shipping', 'Free Shipping'], ['seasonal link', 'Seasonal Link']])
#    language = twf.TextField(label_text='Language', default='en', validator = UnicodeString(strip=True, not_empty=False))
#    serviceable_area = twf.TextField(label_text='Serviceable Area', default='US', validator = UnicodeString(strip=True, not_empty=False))
    records_per_page = twf.TextField(label_text='Records Per Page', default=100, validator=Int(min=0, not_empty=False))
    page_number = twf.TextField(label_text='Page Number', default=1, validator=Int(min=0, not_empty=False))
#    sort_by = twf.SingleSelectField(label_text='Sort By',
#                                    validator = UnicodeString(strip=True, not_empty=False),
#                                    options=[['', 'Relevance'],
#                                             ['link-id', 'Link ID'],
#                                             ['link-destination', 'Link Destination'],
#                                             ['link-type', 'Link Type'],
#                                             ['advertiser-id', 'Advertiser ID'],
#                                             ['advertiser-name', 'Advertiser Name'],
#                                             ['creative-width', 'Width'],
#                                             ['creative-height', 'Height'],
#                                             ['promotion-start-date', 'Start Date'],
#                                             ['promotion-end-date', 'End Date'],
#                                             ['category', 'Category']])
#    sort_order = twf.SingleSelectField(label_text='Sort Order', options=[['dec', 'Descending'], ['asc', 'Ascending']],
#                                       validator=UnicodeString(not_empty=True))
    show_ignored = twf.CheckBox(label_text='Show Ignored Links')
    show_imported = twf.CheckBox(label_text='Show Imported Links')

class CJLinkSearchForm(twf.ListForm):
    class extra_field(twa.WidgetsList):
        website_id = twf.SingleSelectField(label_text='Website',
                                           help_text='Doesn\'t matter much if enable_cj_site_replacements is true.',
                                           validator=UnicodeString(not_empty=True))

    fields = CJLinkSearchFields + extra_field
    template = 'genshi:adjector.templates.form.basic'

CJLinkSearch = CJLinkSearchForm()

class CJLinkSearchOneSiteForm(twf.ListForm):
    class extra_field(twa.WidgetsList):
        website_id = twf.HiddenField(validator=UnicodeString(not_empty=True))

    fields = CJLinkSearchFields + extra_field
    template = 'genshi:adjector.templates.form.basic'

CJLinkSearchOneSite = CJLinkSearchOneSiteForm()
Adjector
/Adjector-1.0b1.tar.gz/Adjector-1.0b1/adjector/forms/forms.py
forms.py
import logging
from datetime import datetime

from elixir import using_options, using_table_options, BLOB, Boolean, ColumnProperty, \
                   DateTime, Entity, EntityMeta, Field, Float, Integer, ManyToMany, ManyToOne, \
                   OneToMany, OneToOne, SmallInteger, String, UnicodeText
from genshi import Markup
from sqlalchemy import func, UniqueConstraint

from adjector.core.conf import conf
from adjector.core.tracking import add_tracking, remove_tracking

log = logging.getLogger(__name__)

max_int = 2147483647
tz_now = lambda: datetime.now(conf.timezone)

UnicodeText = UnicodeText(assert_unicode=False)

class CircularDependencyException(Exception):
    pass

class GenericEntity(object):
    def __init__(self, data):
        self._updated = self.set(data)

    def set(self, data):
        for field in data.keys():
            if hasattr(self, field):
                if field == 'title':
                    data[field] = data[field][:80]
                self.__setattr__(field, data[field])
            else:
                log.warning('No field: %s' % field)

    def value(self):
        return self.__dict__

class GenericListEntity(GenericEntity):
    def set(self, data):
        GenericEntity.set(self, data)

        # Detect cycles in parenting - Brent's algorithm http://www.siafoo.net/algorithm/11
        turtle = self
        rabbit = self
        steps_taken = 0
        step_limit = 2

        while True:
            if not rabbit.parent_id:
                break  # no loop
            rabbit = rabbit.query.get(rabbit.parent_id)

            steps_taken += 1
            if rabbit == turtle:  # loop!
                raise CircularDependencyException
            if steps_taken == step_limit:
                steps_taken = 0
                step_limit *= 2
                turtle = rabbit

class CJIgnoredLink(Entity):
    cj_advertiser_id = Field(Integer, required=True)
    cj_link_id = Field(Integer, required=True)

    using_options(tablename=conf.table_prefix + 'cj_ignored_links')
    using_table_options(UniqueConstraint('cj_link_id'))

    def __init__(self, link_id, advertiser_id):
        self.cj_advertiser_id = advertiser_id
        self.cj_link_id = link_id

class Click(Entity):
    time = Field(DateTime(timezone=True), required=True, default=tz_now)
    creative = ManyToOne('Creative', ondelete='set null')
    zone = ManyToOne('Zone', ondelete='set null')

    using_options(tablename=conf.table_prefix + 'clicks')

    def __init__(self, creative_id, zone_id):
        self.creative_id = creative_id
        self.zone_id = zone_id

class Creative(GenericEntity, Entity):
    parent = ManyToOne('Set', required=False, ondelete='set null')
    #zones = ManyToMany('Zone', tablename='creatives_to_zones')
    creative_zone_pairs = OneToMany('CreativeZonePair', cascade='delete')

    title = Field(String(80, convert_unicode=True), required=True)
    html = Field(UnicodeText, required=True, default='')
    is_text = Field(Boolean, required=True, default=False)
    width = Field(Integer, required=True, default=0)
    height = Field(Integer, required=True, default=0)
    start_date = Field(DateTime(timezone=True))
    end_date = Field(DateTime(timezone=True))
    weight = Field(Float, required=True, default=1.0)
    add_tracking = Field(Boolean, required=True, default=True)
    disabled = Field(Boolean, required=True, default=False)
    create_date = Field(DateTime(timezone=True), required=True, default=tz_now)

    cj_link_id = Field(Integer)
    cj_advertiser_id = Field(Integer)
    cj_site_id = Field(Integer)

    views = OneToMany('View')
    clicks = OneToMany('Click')

    # Cached Values
    html_tracked = Field(UnicodeText)  # will be overwritten on set
    parent_weight = Field(Float, required=True, default=1.0)  # overwritten on any parent weight change

    using_options(tablename=conf.table_prefix + 'creatives', order_by='title')
    using_table_options(UniqueConstraint('cj_link_id'))

    def __init__(self, data):
        GenericEntity.__init__(self, data)
        if self.parent_id:
            self.parent_weight = Set.get(self.parent_id).weight

    def get_clicks(self, start=None, end=None):
        query = Click.query.filter_by(creative_id = self.id)
        if start:
            query = query.filter(Click.time > start)
        if end:
            query = query.filter(Click.time < end)
        return query.count()

    def get_views(self, start=None, end=None):
        query = View.query.filter_by(creative_id = self.id)
        if start:
            query = query.filter(View.time > start)
        if end:
            query = query.filter(View.time < end)
        return query.count()

    @staticmethod
    def possible_parents(this=None):
        return [[set.id, set.title] for set in Set.query()]

    def set(self, data):
        old_parent_id = self.parent_id
        old_html = self.html
        old_add_tracking = self.add_tracking

        GenericEntity.set(self, data)

        if self.parent_id != old_parent_id:
            self.parent_weight = Set.get(self.parent_id).weight

        # TODO: Handle Block / Text bullshit

        # Parse html
        if self.html != old_html or self.add_tracking != old_add_tracking:
            if self.add_tracking is not False:
                self.html_tracked = add_tracking(self.html)
            else:
                self.html_tracked = None

        return [self]

    def value(self):
        value = GenericEntity.value(self)
        value['preview'] = Markup(remove_tracking(self.html, self.cj_site_id))
        value['total_weight'] = self.weight * self.parent_weight
        value['html_tracked'] = value['html_tracked'] or value['html']
        return value

    def view(self):
        return '%s/creative/%i' % (conf.admin_base_url, self.id)

class CreativeZonePair(GenericEntity, Entity):
    creative = ManyToOne('Creative', ondelete='cascade', use_alter=True)
    zone = ManyToOne('Zone', ondelete='cascade', use_alter=True)

    is_text = Field(Boolean, required=True)
    lower_bound = Field(Float, required=True)
    upper_bound = Field(Float, required=True)

    using_options(tablename=conf.table_prefix + 'creative_zone_pairs')
    using_table_options(UniqueConstraint('creative_id', 'zone_id'))

class Location(GenericListEntity, Entity):
    ''' A container for locations or zones '''

    parent = ManyToOne('Location', required=False, ondelete='set null')
    sublocations = OneToMany('Location')
    zones = OneToMany('Zone')

    title = Field(String(80, convert_unicode=True), required=True)
    description = Field(UnicodeText)
    create_date = Field(DateTime(timezone=True), required=True, default=tz_now)

    cj_site_id = Field(Integer)
    parent_cj_site_id = Field(Integer)

    using_options(tablename=conf.table_prefix + 'locations', order_by='title')

    def __init__(self, data):
        GenericEntity.__init__(self, data)
        if self.parent_id:
            self.parent_cj_site_id = Location.get(self.parent_id).cj_site_id

    def delete(self, data):
        updated = []
        for subloc in self.sublocations:
            updated.extend(subloc.set(dict(parent_cj_site_id = None)))
        for zone in self.zones:
            updated.extend(zone.set(dict(parent_cj_site_id = None)))
        Entity.delete(self)
        return updated

    @staticmethod
    def possible_parents(this = None):
        filter = None
        if this:
            filter = Location.id != this.id
        return [[location.id, location.title] for location in Location.query.filter(filter)]

    def set(self, data):
        updated = [self]
        old_parent_id = self.parent_id
        old_cj_site_id = self.cj_site_id
        old_parent_cj_site_id = self.parent_cj_site_id

        GenericEntity.set(self, data)

        if self.parent_id != old_parent_id:
            self.parent_cj_site_id = Location.get(self.parent_id).cj_site_id

        if self.cj_site_id != old_cj_site_id or self.parent_cj_site_id != old_parent_cj_site_id:
            # Only pass parent- down if we don't have our own
            for subloc in self.sublocations:
                updated.extend(subloc.set(dict(parent_cj_site_id = self.cj_site_id or self.parent_cj_site_id)))
            for zone in self.zones:
                updated.extend(zone.set(dict(parent_cj_site_id = self.cj_site_id or self.parent_cj_site_id)))

        return updated

    def view(self):
        return '%s/location/%i' % (conf.admin_base_url, self.id)

class Set(GenericListEntity, Entity):
    parent = ManyToOne('Set', required=False, ondelete='set null')
    subsets = OneToMany('Set')
    creatives = OneToMany('Creative')

    title = Field(String(80, convert_unicode=True), required=True)
    description = Field(UnicodeText)
    weight = Field(Float, required=True, default=1.0)
    parent_weight = Field(Float, required=True, default=1.0)  # overwritten on any parent weight change
    create_date = Field(DateTime(timezone=True), required=True, default=tz_now)

    cj_advertiser_id = Field(Integer)

    using_options(tablename=conf.table_prefix + 'sets', order_by='title')
    using_table_options(UniqueConstraint('cj_advertiser_id'))

    def __init__(self, data):
        GenericEntity.__init__(self, data)
        if self.parent_id:
            self.parent_weight = Set.get(self.parent_id).weight

    def delete(self, data):
        updated = []
        for subset in self.subsets:
            updated.extend(subset.set(dict(parent_weight = 1.0)))
        for creative in self.creatives:
            updated.extend(creative.set(dict(parent_weight = 1.0)))
        Entity.delete(self)
        return updated

    @staticmethod
    def possible_parents(this = None):
        filter = None
        if this:
            filter = Set.id != this.id
        return [[set.id, set.title] for set in Set.query.filter(filter)]

    def set(self, data):
        updated = [self]
        old_parent_id = self.parent_id
        old_weight = self.weight
        old_parent_weight = self.parent_weight

        GenericEntity.set(self, data)

        if self.parent_id != old_parent_id:
            self.parent_weight = Set.get(self.parent_id).weight

        if self.weight != old_weight or self.parent_weight != old_parent_weight:
            for subset in self.subsets:
                updated.extend(subset.set(dict(parent_weight = self.parent_weight * self.weight)))
            for creative in self.creatives:
                updated.extend(creative.set(dict(parent_weight = self.parent_weight * self.weight)))

        return updated

    def value(self):
        value = GenericEntity.value(self)
        value['total_weight'] = self.weight * self.parent_weight
        return value

    def view(self):
        return '%s/set/%i' % (conf.admin_base_url, self.id)

class View(GenericEntity, Entity):
    time = Field(DateTime(timezone=True), required=True, default=tz_now)
    creative = ManyToOne('Creative', ondelete='set null')
    zone = ManyToOne('Zone', ondelete='set null')

    using_options(tablename=conf.table_prefix + 'views')

    def __init__(self, creative_id, zone_id):
        self.creative_id = creative_id
        self.zone_id = zone_id

class Zone(GenericEntity, Entity):
    parent = ManyToOne('Location', required=False, ondelete='set null')
    creative_zone_pairs = OneToMany('CreativeZonePair', cascade='delete')

    name = Field(String(80, convert_unicode=True), required=False)
    title = Field(String(80, convert_unicode=True), required=True)
    description = Field(UnicodeText)
    #creatives = ManyToMany('Creative', tablename='creatives_to_zones')

    normalize_by_container = Field(Boolean, required=True, default=False)
    creative_types = Field(SmallInteger, required=True, default=0)  # 0: Both, 1: Text, 2: Blocks

    # These only matter if blocks allowed
    min_width = Field(Integer, required=True, default=0)
    max_width = Field(Integer, required=True, default=max_int)
    min_height = Field(Integer, required=True, default=0)
    max_height = Field(Integer, required=True, default=max_int)

    # These only matter if text allowed
    num_texts = Field(SmallInteger, required=True, default=1)
    weight_texts = Field(Float, required=True, default=1.0)
    before_all_text = Field(UnicodeText)
    after_all_text = Field(UnicodeText)
    before_each_text = Field(UnicodeText)
    after_each_text = Field(UnicodeText)

    create_date = Field(DateTime(timezone=True), required=True, default=tz_now)

    # Cached from parent
    parent_cj_site_id = Field(Integer)

    # Cached from creatives
    total_text_weight = Field(Float)  # i dunno, some default?  should be updated quick.

    views = OneToMany('View')
    clicks = OneToMany('Click')

    using_options(tablename=conf.table_prefix + 'zones', order_by='title')

    def __init__(self, data):
        GenericEntity.__init__(self, data)
        if self.parent_id:
            self.parent_cj_site_id = Location.get(self.parent_id).cj_site_id

    def get_clicks(self, start=None, end=None):
        query = Click.query.filter_by(zone_id = self.id)
        if start:
            query = query.filter(Click.time > start)
        if end:
            query = query.filter(Click.time < end)
        return query.count()

    def get_views(self, start=None, end=None):
        query = View.query.filter_by(zone_id = self.id)
        if start:
            query = query.filter(View.time > start)
        if end:
            query = query.filter(View.time < end)
        return query.count()

    @staticmethod
    def possible_parents(this=None):
        return [[location.id, location.title] for location in Location.query()]

    def set(self, data):
        if data.has_key('previous_name'):
            del data['previous_name']

        old_parent_id = self.parent_id
        GenericEntity.set(self, data)

        if self.parent_id != old_parent_id:
            self.parent_cj_site_id = Location.get(self.parent_id).cj_site_id

        return [self]

    def value(self):
        val = self.__dict__.copy()
        val['previous_name'] = self.name
        return val

    def view(self):
        return '%s/zone/%i' % (conf.admin_base_url, self.id)
Adjector
/Adjector-1.0b1.tar.gz/Adjector-1.0b1/adjector/model/entities.py
entities.py
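The `Set` and `Creative` entities above cache a `parent_weight` column and, in `Set.set`, recursively push any weight change down to subsets and creatives so that an item's effective weight is always `weight * parent_weight`. A minimal sketch of that propagation scheme, using a hypothetical in-memory `Node` class rather than the Elixir entities:

```python
class Node:
    """Hypothetical stand-in for Set/Creative: each child caches its
    parent's effective weight, mirroring the parent_weight column."""

    def __init__(self, weight=1.0, parent=None):
        self.weight = weight
        # Cache the parent's effective weight (weight * its own parent_weight)
        self.parent_weight = parent.parent_weight * parent.weight if parent else 1.0
        self.children = []
        if parent:
            parent.children.append(self)

    def total_weight(self):
        # Same formula Set.value()/Creative.value() expose as 'total_weight'
        return self.weight * self.parent_weight

    def set_weight(self, weight):
        # On any weight change, push the new effective weight down the
        # tree, mirroring Set.set()'s recursive update of subsets/creatives.
        self.weight = weight
        for child in self.children:
            child.parent_weight = self.parent_weight * self.weight
            child.set_weight(child.weight)  # cascade to grandchildren
```

The real entities do the same walk through the database, returning the list of updated rows so callers can flush them in one transaction.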
import sys DEFAULT_VERSION = "0.6c9" DEFAULT_URL = "http://pypi.python.org/packages/%s/s/setuptools/" % sys.version[:3] md5_data = { 'setuptools-0.6b1-py2.3.egg': '8822caf901250d848b996b7f25c6e6ca', 'setuptools-0.6b1-py2.4.egg': 'b79a8a403e4502fbb85ee3f1941735cb', 'setuptools-0.6b2-py2.3.egg': '5657759d8a6d8fc44070a9d07272d99b', 'setuptools-0.6b2-py2.4.egg': '4996a8d169d2be661fa32a6e52e4f82a', 'setuptools-0.6b3-py2.3.egg': 'bb31c0fc7399a63579975cad9f5a0618', 'setuptools-0.6b3-py2.4.egg': '38a8c6b3d6ecd22247f179f7da669fac', 'setuptools-0.6b4-py2.3.egg': '62045a24ed4e1ebc77fe039aa4e6f7e5', 'setuptools-0.6b4-py2.4.egg': '4cb2a185d228dacffb2d17f103b3b1c4', 'setuptools-0.6c1-py2.3.egg': 'b3f2b5539d65cb7f74ad79127f1a908c', 'setuptools-0.6c1-py2.4.egg': 'b45adeda0667d2d2ffe14009364f2a4b', 'setuptools-0.6c2-py2.3.egg': 'f0064bf6aa2b7d0f3ba0b43f20817c27', 'setuptools-0.6c2-py2.4.egg': '616192eec35f47e8ea16cd6a122b7277', 'setuptools-0.6c3-py2.3.egg': 'f181fa125dfe85a259c9cd6f1d7b78fa', 'setuptools-0.6c3-py2.4.egg': 'e0ed74682c998bfb73bf803a50e7b71e', 'setuptools-0.6c3-py2.5.egg': 'abef16fdd61955514841c7c6bd98965e', 'setuptools-0.6c4-py2.3.egg': 'b0b9131acab32022bfac7f44c5d7971f', 'setuptools-0.6c4-py2.4.egg': '2a1f9656d4fbf3c97bf946c0a124e6e2', 'setuptools-0.6c4-py2.5.egg': '8f5a052e32cdb9c72bcf4b5526f28afc', 'setuptools-0.6c5-py2.3.egg': 'ee9fd80965da04f2f3e6b3576e9d8167', 'setuptools-0.6c5-py2.4.egg': 'afe2adf1c01701ee841761f5bcd8aa64', 'setuptools-0.6c5-py2.5.egg': 'a8d3f61494ccaa8714dfed37bccd3d5d', 'setuptools-0.6c6-py2.3.egg': '35686b78116a668847237b69d549ec20', 'setuptools-0.6c6-py2.4.egg': '3c56af57be3225019260a644430065ab', 'setuptools-0.6c6-py2.5.egg': 'b2f8a7520709a5b34f80946de5f02f53', 'setuptools-0.6c7-py2.3.egg': '209fdf9adc3a615e5115b725658e13e2', 'setuptools-0.6c7-py2.4.egg': '5a8f954807d46a0fb67cf1f26c55a82e', 'setuptools-0.6c7-py2.5.egg': '45d2ad28f9750e7434111fde831e8372', 'setuptools-0.6c8-py2.3.egg': '50759d29b349db8cfd807ba8303f1902', 
'setuptools-0.6c8-py2.4.egg': 'cba38d74f7d483c06e9daa6070cce6de', 'setuptools-0.6c8-py2.5.egg': '1721747ee329dc150590a58b3e1ac95b', 'setuptools-0.6c9-py2.3.egg': 'a83c4020414807b496e4cfbe08507c03', 'setuptools-0.6c9-py2.4.egg': '260a2be2e5388d66bdaee06abec6342a', 'setuptools-0.6c9-py2.5.egg': 'fe67c3e5a17b12c0e7c541b7ea43a8e6', 'setuptools-0.6c9-py2.6.egg': 'ca37b1ff16fa2ede6e19383e7b59245a', } import sys, os try: from hashlib import md5 except ImportError: from md5 import md5 def _validate_md5(egg_name, data): if egg_name in md5_data: digest = md5(data).hexdigest() if digest != md5_data[egg_name]: print >>sys.stderr, ( "md5 validation of %s failed! (Possible download problem?)" % egg_name ) sys.exit(2) return data def use_setuptools( version=DEFAULT_VERSION, download_base=DEFAULT_URL, to_dir=os.curdir, download_delay=15 ): """Automatically find/download setuptools and make it available on sys.path `version` should be a valid setuptools version number that is available as an egg for download under the `download_base` URL (which should end with a '/'). `to_dir` is the directory where setuptools will be downloaded, if it is not already available. If `download_delay` is specified, it should be the number of seconds that will be paused before initiating a download, should one be required. If an older version of setuptools is installed, this routine will print a message to ``sys.stderr`` and raise SystemExit in an attempt to abort the calling script. 
""" was_imported = 'pkg_resources' in sys.modules or 'setuptools' in sys.modules def do_download(): egg = download_setuptools(version, download_base, to_dir, download_delay) sys.path.insert(0, egg) import setuptools; setuptools.bootstrap_install_from = egg try: import pkg_resources except ImportError: return do_download() try: pkg_resources.require("setuptools>="+version); return except pkg_resources.VersionConflict, e: if was_imported: print >>sys.stderr, ( "The required version of setuptools (>=%s) is not available, and\n" "can't be installed while this script is running. Please install\n" " a more recent version first, using 'easy_install -U setuptools'." "\n\n(Currently using %r)" ) % (version, e.args[0]) sys.exit(2) else: del pkg_resources, sys.modules['pkg_resources'] # reload ok return do_download() except pkg_resources.DistributionNotFound: return do_download() def download_setuptools( version=DEFAULT_VERSION, download_base=DEFAULT_URL, to_dir=os.curdir, delay = 15 ): """Download setuptools from a specified location and return its filename `version` should be a valid setuptools version number that is available as an egg for download under the `download_base` URL (which should end with a '/'). `to_dir` is the directory where the egg will be downloaded. `delay` is the number of seconds to pause before an actual download attempt. """ import urllib2, shutil egg_name = "setuptools-%s-py%s.egg" % (version,sys.version[:3]) url = download_base + egg_name saveto = os.path.join(to_dir, egg_name) src = dst = None if not os.path.exists(saveto): # Avoid repeated downloads try: from distutils import log if delay: log.warn(""" --------------------------------------------------------------------------- This script requires setuptools version %s to run (even to display help). I will attempt to download it for you (from %s), but you may need to enable firewall access for this script first. I will start the download in %d seconds. 
(Note: if this machine does not have network access, please obtain the file %s and place it in this directory before rerunning this script.) ---------------------------------------------------------------------------""", version, download_base, delay, url ); from time import sleep; sleep(delay) log.warn("Downloading %s", url) src = urllib2.urlopen(url) # Read/write all in one block, so we don't create a corrupt file # if the download is interrupted. data = _validate_md5(egg_name, src.read()) dst = open(saveto,"wb"); dst.write(data) finally: if src: src.close() if dst: dst.close() return os.path.realpath(saveto) def main(argv, version=DEFAULT_VERSION): """Install or upgrade setuptools and EasyInstall""" try: import setuptools except ImportError: egg = None try: egg = download_setuptools(version, delay=0) sys.path.insert(0,egg) from setuptools.command.easy_install import main return main(list(argv)+[egg]) # we're done here finally: if egg and os.path.exists(egg): os.unlink(egg) else: if setuptools.__version__ == '0.0.1': print >>sys.stderr, ( "You have an obsolete version of setuptools installed. Please\n" "remove it from your system entirely before rerunning this script." ) sys.exit(2) req = "setuptools>="+version import pkg_resources try: pkg_resources.require(req) except pkg_resources.VersionConflict: try: from setuptools.command.easy_install import main except ImportError: from easy_install import main main(list(argv)+[download_setuptools(delay=0)]) sys.exit(0) # try to force an exit else: if argv: from setuptools.command.easy_install import main main(argv) else: print "Setuptools version",version,"or greater has been installed." 
print '(Run "ez_setup.py -U setuptools" to reinstall or upgrade.)' def update_md5(filenames): """Update our built-in md5 registry""" import re for name in filenames: base = os.path.basename(name) f = open(name,'rb') md5_data[base] = md5(f.read()).hexdigest() f.close() data = [" %r: %r,\n" % it for it in md5_data.items()] data.sort() repl = "".join(data) import inspect srcfile = inspect.getsourcefile(sys.modules[__name__]) f = open(srcfile, 'rb'); src = f.read(); f.close() match = re.search("\nmd5_data = {\n([^}]+)}", src) if not match: print >>sys.stderr, "Internal error!" sys.exit(2) src = src[:match.start(1)] + repl + src[match.end(1):] f = open(srcfile,'w') f.write(src) f.close() if __name__=='__main__': if len(sys.argv)>2 and sys.argv[1]=='--md5update': update_md5(sys.argv[2:]) else: main(sys.argv[1:])
AdjectorClient
/AdjectorClient-1.0b1.tar.gz/AdjectorClient-1.0b1/ez_setup.py
ez_setup.py
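The `_validate_md5` helper above checks each downloaded egg against the built-in `md5_data` registry and aborts the script on a mismatch. A standalone sketch of the same check (hypothetical `validate_digest` name, raising `ValueError` instead of calling `sys.exit`):

```python
import hashlib

def validate_digest(egg_name, data, known_digests):
    """Compare a download's MD5 digest against a registry of known-good
    digests; unknown names pass through unchecked, like in ez_setup."""
    if egg_name in known_digests:
        digest = hashlib.md5(data).hexdigest()
        if digest != known_digests[egg_name]:
            raise ValueError('md5 validation of %s failed '
                             '(possible download problem?)' % egg_name)
    return data
```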
Adjector 1.0b
*************

Hi there. Thanks for using Adjector, a lightweight, flexible, open-source ad server written in Python.

Adjector is licensed under the GPL, version 2 or 3, at your option. For more information, see LICENSE.txt.

This Distribution
-----------------

This is the client-only Adjector distribution. This distribution does not have as many dependencies, but can only access a database created by the full version of Adjector. This version is intended for installation on systems that need ads served, but which are using another system to configure the ads.

A Trac plugin is also available. The full Adjector version and the Trac plugin can be downloaded at http://projects.icapsid.net/adjector/wiki/Download

Documentation
-------------

All of our documentation is online at http://projects.icapsid.net/adjector

You may wish to get started with 'Installing the Adjector Client' at http://projects.icapsid.net/adjector/wiki/ClientInstall

For questions, comments, help, or any other information, visit us online or email [email protected].
AdjectorClient
/AdjectorClient-1.0b1.tar.gz/AdjectorClient-1.0b1/README.txt
README.txt
import os.path
import random
import re

from adjector.core.conf import conf
from adjector.core.cj_util import remove_tracking_cj

def add_tracking(html):
    if re.search('google_ad_client', html):
        return add_tracking_adsense(html)
    else:
        return add_tracking_generic(html)

def add_tracking_generic(html):
    def repl(match):
        groups = match.groups()
        return groups[0] + 'ADJECTOR_TRACKING_BASE_URL/track/click_with_redirect?creative_id=ADJECTOR_CREATIVE_ID&zone_id=ADJECTOR_ZONE_ID&cache_bust=' + cache_bust() + '&url=' + groups[1] + groups[2]

    html_tracked = re.sub(r'''(.*<a[^>]+href\s*=\s*['"])([^"']+)(['"][^>]*>.*)''', repl, html)

    if html == html_tracked:  # if no change, don't touch.
        return
    else:
        return html_tracked

def add_tracking_adsense(html):
    adsense_tracking_code = open(os.path.join(conf.root, 'public', 'js', 'adsense_tracker.js')).read()
    click_track = 'ADJECTOR_TRACKING_BASE_URL/track/click_with_image?creative_id=ADJECTOR_CREATIVE_ID&zone_id=ADJECTOR_ZONE_ID&cache_bust='  # cache_bust added in js

    html_tracked = '''
<span>
%(html)s
<script type="text/javascript"><!--// <![CDATA[
/* adjector_click_track=%(click_track)s */
%(adsense_tracking_code)s
// ]]> --></script>
</span>
''' % dict(html=html, adsense_tracking_code=adsense_tracking_code, click_track=click_track)
    return html_tracked

def cache_bust():
    return str(random.random())[2:]

def remove_tracking(html, cj_site_id=None):
    if cj_site_id:
        return remove_tracking_cj(html, cj_site_id)
    elif re.search('google_ad_client', html):
        return remove_tracking_adsense(html)
    else:
        return html  # we can't do anything

def remove_tracking_adsense(html):
    html_notrack = '''
<script type='text/javascript'>
var adjector_google_adtest_backup = google_adtest;
var google_adtest='on';
</script>
%(html)s
<script type='text/javascript'>
var google_adtest=adjector_google_adtest_backup;
</script>
''' % dict(html=html)
    return html_notrack
AdjectorClient
/AdjectorClient-1.0b1.tar.gz/AdjectorClient-1.0b1/adjector/core/tracking.py
tracking.py
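`add_tracking_generic` above rewrites an anchor's `href` so the click bounces through a tracking redirect before reaching the landing page. A simplified sketch of that href-rewriting idea (hypothetical `wrap_links` helper; the real code substitutes `ADJECTOR_*` placeholders and appends a cache-bust parameter, and returns `None` when nothing was rewritten):

```python
import re

def wrap_links(html, redirect_base):
    """Rewrite each <a href="..."> so clicks pass through redirect_base,
    which is a hypothetical stand-in for the tracking redirect URL."""
    def repl(match):
        before, url, after = match.groups()
        return before + redirect_base + '?url=' + url + after

    tracked = re.sub(r'''(<a[^>]+href\s*=\s*['"])([^'"]+)(['"])''', repl, html)
    # Like the original: signal "no change" rather than return untouched html
    return tracked if tracked != html else None
```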
from __future__ import division import logging import random import re from sqlalchemy import and_, func, or_ from sqlalchemy.sql import case, join, select, subquery import adjector.model as model from adjector.core.conf import conf from adjector.core.tracking import remove_tracking log = logging.getLogger(__name__) def old_render_zone(ident, track=None, admin=False): ''' Render A Random Creative for this Zone. Access by id or name. Respect all zone requirements. Use creative weights and their containing set weights to weight randomness. If zone.normalize_by_container, normalize creatives by the total weight of the set they are in, so the total weight of the creatives directly in any set is always 1. If block and text ads can be shown, a decision will be made to show one or the other based on the total probability of each type of creative. Note that this function is called by the API function render_zone. ''' # Note that this is my first time seriously using SA, feel free to clean this up if isinstance(ident, int) or ident.isdigit(): zone = model.Zone.get(int(ident)) else: zone = model.Zone.query.filter_by(name=ident).first() if zone is None: # Fail gracefully, don't commit suicide because someone deleted a zone from the ad server log.error('Tried to render zone %s. Zone Not Found' % ident) return '' # Find zone site_id, if applicable. Default to global site_id, or else None. 
cj_site_id = zone.parent_cj_site_id or conf.cj_site_id # Figure out what kind of creative we need # Size filtering whereclause_zone = and_(or_(and_(model.Creative.width >= zone.min_width, model.Creative.width <= zone.max_width, model.Creative.height >= zone.min_height, model.Creative.height <= zone.max_height), model.Creative.is_text == True), # Date filtering or_(model.Creative.start_date == None, model.Creative.start_date <= func.now()), or_(model.Creative.end_date == None, model.Creative.end_date >= func.now()), # Site Id filtering or_(model.Creative.cj_site_id == None, model.Creative.cj_site_id == cj_site_id, and_(conf.enable_cj_site_replacements, cj_site_id != None, model.Creative.cj_site_id != None)), # Disabled? model.Creative.disabled == False) creative_types = zone.creative_types # This might change later. doing_text = None # just so it can't be undefined later # Sanity check - this shouldn't ever happen if zone.num_texts == 0: creative_types = 2 # Filter by text or block if needed. If you want both we do some magic later. But first we need to find out how much of each we have, weight wise. if creative_types == 1: whereclause_zone.append(model.Creative.is_text==True) number_needed = zone.num_texts doing_text = True elif creative_types == 2: whereclause_zone.append(model.Creative.is_text==False) number_needed = 1 doing_text = False creatives = model.Creative.table all_results = [] # Find random creatives; Loop until we have as many as we need while True: # First let's figure how to normalize by how many items will be displayed. This ensures all items are displayed equally. # We want this to be 1 for blocks and num_texts for texts. 
Also throw in the zone.weight_texts #items_displayed = cast(creatives.c.is_text, Integer) * (zone.num_texts - 1) + 1 text_weight_adjust = case([(True, zone.weight_texts / zone.num_texts), (False, 1)], creatives.c.is_text) if zone.normalize_by_container: # Find the total weight of each parent in order to normalize parent_weights = subquery('parent_weight', [creatives.c.parent_id, func.sum(creatives.c.parent_weight * creatives.c.weight).label('pw_total')], group_by=creatives.c.parent_id) # Join creatives table and normalized weight table - I'm renaming a lot of fields here to make life easier down the line # SA was insisting on doing a full subquery anyways (I just wanted a join) c1 = subquery('c1', [creatives.c.id.label('id'), creatives.c.title.label('title'), creatives.c.html.label('html'), creatives.c.html_tracked.label('html_tracked'), creatives.c.is_text.label('is_text'), creatives.c.cj_site_id.label('cj_site_id'), (creatives.c.weight * creatives.c.parent_weight * text_weight_adjust / case([(parent_weights.c.pw_total > 0, parent_weights.c.pw_total)], else_ = None)).label('normalized_weight')], # Make sure we can't divide by 0 whereclause_zone, # here go our filters from_obj=join(creatives, parent_weights, or_(creatives.c.parent_id == parent_weights.c.parent_id, and_(creatives.c.parent_id == None, parent_weights.c.parent_id == None)))).alias('c1') else: # We don't normalize weight by parent weight, so we dont' need fancy joins c1 = subquery('c1', [creatives.c.id, creatives.c.title, creatives.c.html, creatives.c.html_tracked, creatives.c.is_text, creatives.c.cj_site_id, (creatives.c.weight * creatives.c.parent_weight * text_weight_adjust).label('normalized_weight')], whereclause_zone) #for a in model.session.execute(c1).fetchall(): print a if creative_types == 0: # (Either type) # Now that we have our weights in order, let's figure out how many of each thing (text/block) we have, weightwise. 
texts_weight = select([func.sum(c1.c.normalized_weight)], c1.c.is_text == True).scalar() or 0 blocks_weight = select([func.sum(c1.c.normalized_weight)], c1.c.is_text == False).scalar() or 0 # Create weighted bins, text first (0-whatever). We are going to decide what kind of thing to make right here, right now, # based on the weights of each. Because we can't have both (yet). rand = random.random() if texts_weight + blocks_weight == 0: break if rand < texts_weight / (texts_weight + blocks_weight): c1 = c1.select().where(c1.c.is_text == True).alias('text') total_weight = texts_weight number_needed = zone.num_texts doing_text = True else: c1 = c1.select().where(c1.c.is_text == False).alias('nottext') total_weight = blocks_weight number_needed = 1 doing_text = False else: # Find total normalized weight of all creatives in order to normalize *that* total_weight = select([func.sum(c1.c.normalized_weight)])#.scalar() or 0 #if not total_weight: # break c2 = c1.alias('c2') # Find the total weight above a creative in the table in order to form weighted bins for the random number generator # Note that this is the upper bound, not the lower (if it was the lower it could be NULL) incremental_weight = select([func.sum(c1.c.normalized_weight) / total_weight], c1.c.id <= c2.c.id, from_obj=c1) # Get everything into one thing - for debugging this is a good place to select and print out stuff shmush = select([c2.c.id, c2.c.title, c2.c.html, c2.c.html_tracked, c2.c.cj_site_id, incremental_weight.label('inc_weight'), (c2.c.normalized_weight / total_weight).label('final_weight')], from_obj=c2).alias('shmush') #for a in model.session.execute(shmush).fetchall(): print a # Generate some random numbers and comparisons - sorry about the magic it saves about 10 lines # The crazy 0.9999 is to make sure we don't get a number so close to one we run into float precision errors (all the weights might not quite sum to 1, # and so we might end up falling outside the bin!) 
# Experimentally the error never seems to be worse than that, and that number is imprecise enough to be displayed exactly by python. rand = [random.random() * 0.9999999999 for i in xrange(number_needed)] whereclause_rand = or_(*[and_(shmush.c.inc_weight - shmush.c.final_weight <= rand[i], rand[i] < shmush.c.inc_weight) for i in xrange(number_needed)]) # Select only creatives where the random number falls between its cutoff and the next results = model.session.execute(select([shmush.c.id, shmush.c.title, shmush.c.html, shmush.c.html_tracked, shmush.c.cj_site_id], whereclause_rand)).fetchall() # Deal with number of results if len(results) == 0: if not doing_text or not all_results: return '' # Otherwise, we are probably just out of results. break if len(results) > number_needed: log.error('Too many results while rendering zone %i. I got %i results and wanted %i' % (zone.id, len(results), number_needed)) results = results[:number_needed] all_results.extend(results) break elif len(results) < number_needed: if not doing_text: raise Exception('Somehow we managed to get past several checks, and we have 0 < results < needed_results for block creatives.' + \ 'Since needed_results should be 1, this seems fairly difficult.') all_results.extend(results) # It looks like we need more results, this should only happen when we are doing text. Try again. number_needed -= len(results) # Exclude ones we've already got whereclause_zone.append(and_(*[model.Creative.id != result.id for result in results])) # Set to only render text this time around if creative_types == 0: creative_types = 1 whereclause_zone.append(model.Creative.is_text == True) # Continue loop... else: # we have the right number? all_results.extend(results) break if doing_text and len(all_results) < zone.num_texts: log.warn('Could only retrieve %i of %i desired creatives for zone %i. This (hopefully) means you are requesting more creatives than exist.' 
\ % (len(all_results), zone.num_texts, zone.id)) # Ok, that's done, we have our results. # Let's render some html html = '' if doing_text: html += zone.before_all_text or '' for creative in all_results: if track or (track is None and conf.enable_adjector_view_tracking): # Create a view thingy model.View(creative['id'], zone.id) model.session.commit() # Figure out the html value... # Use either click tracked or regular html if (track or (track is None and conf.enable_adjector_click_tracking)) and creative['html_tracked'] is not None: creative_html = creative['html_tracked'].replace('ADJECTOR_TRACKING_BASE_URL', conf.tracking_base_url)\ .replace('ADJECTOR_CREATIVE_ID', str(creative['id'])).replace('ADJECTOR_ZONE_ID', str(zone.id)) else: creative_html = creative['html'] # Remove or modify third party click tracking if (track is False or (track is None and not conf.enable_third_party_tracking)) and creative['cj_site_id'] is not None: creative_html = remove_tracking(creative_html, creative['cj_site_id']) elif conf.enable_cj_site_replacements: creative_html = re.sub(str(creative['cj_site_id']), str(cj_site_id), creative_html) ########### Now we can do some text assembly ########### # If text, add pre-text if doing_text: html += zone.before_each_text or '' html += creative_html # Are we in admin mode? 
if admin: html += ''' <div class='adjector_admin' style='color: red; background-color: silver'> Creative: <a href='%(admin_base_url)s%(creative_url)s'>%(creative_title)s</a> Zone: <a href='%(admin_base_url)s%(zone_url)s'>%(zone_title)s</a> </div> ''' % dict(admin_base_url = conf.admin_base_url, creative_url = '/creative/%i' % creative['id'], zone_url = zone.view(), creative_title = creative.title, zone_title = zone.title) if doing_text: html += zone.after_each_text or '' if doing_text: html += zone.after_all_text or '' # Wrap in javascript if asked if html and '<script' not in html and conf.require_javascript: wrapper = '''<script type='text/javascript'>document.write('%s')</script>''' # Do some quick substitutions to inject... #TODO there must be an existing function that does this html = re.sub(r"'", r"\'", html) # escape quotes html = re.sub(r"[\r\n]", r"", html) # remove line breaks return wrapper % html return html def render_zone(ident, track=None, admin=False): ''' Render A Random Creative for this Zone, using precached data. Access by id or name. Respect all zone requirements. Use creative weights and their containing set weights to weight randomness. If zone.normalize_by_container, normalize creatives by the total weight of the set they are in, so the total weight of the creatives directly in any set is always 1. If block and text ads can be shown, a decision will be made to show one or the other based on the total probability of each type of creative. Note that this function is called by the API function render_zone. ''' # Note that this is my first time seriously using SA, feel free to clean this up if isinstance(ident, int) or ident.isdigit(): zone = model.Zone.get(int(ident)) else: zone = model.Zone.query.filter_by(name=ident).first() if zone is None: # Fail gracefully, don't commit suicide because someone deleted a zone from the ad server log.error('Tried to render zone %s. Zone Not Found' % ident) return '' # Find zone site_id, if applicable. 
Default to global site_id, or else None. cj_site_id = zone.parent_cj_site_id or conf.cj_site_id # Texts or blocks? rand = random.random() if rand < zone.total_text_weight: # texts! number_needed = zone.num_texts doing_text = True else: # blocks! number_needed = 1 doing_text = False query = model.CreativeZonePair.query.filter_by(zone_id = zone.id, is_text = doing_text) num_pairs = query.count() if num_pairs == number_needed: pairs = query.all() else: pairs = [] # keep going until we get as many as we need still_needed = number_needed banned_ranges = [] while still_needed: # Generate some random numbers and comparisons - sorry about the magic it saves about 10 lines # The crazy 0.9999 is to make sure we don't get a number so close to one we run into float precision errors (all the weights might not quite sum to 1, # and so we might end up falling outside the bin!) # Experimentally the error never seems to be worse than that, and that number is imprecise enough to be displayed exactly by python. # Assemble random numbers rands = [] while len(rands) < still_needed: rand = random.random() * 0.9999999999 bad_rand = False for range in banned_ranges: if range[0] <= rand < range[1]: bad_rand = True break if not bad_rand: rands.append(rand) # Select only creatives where the random number falls between its cutoff and the next results = query.filter(or_(*[and_(model.CreativeZonePair.lower_bound <= rands[i], rands[i] < model.CreativeZonePair.upper_bound) for i in xrange(still_needed)])).all() # What if there are no results? if len(results) == 0: if not pairs: # I guess there are no results return '' break # or else we are just out of results still_needed -= len(results) pairs += results # Exclude ones we've already got, if we need to loop again banned_ranges.extend([pair.lower_bound, pair.upper_bound] for pair in results) #JIC if len(pairs) > number_needed: # This shouldn't be able to happen log.error('Too many results while rendering zone %i. 
I got %i results and wanted %i' % (zone.id, len(results), number_needed)) pairs = pairs[:number_needed] elif len(pairs) < number_needed: log.warn('Could only retrieve %i of %i desired creatives for zone %i. This (hopefully) means you are requesting more creatives than exist.' \ % (len(pairs), zone.num_texts, zone.id)) # Ok, that's done, we have our results. # Let's render some html html = '' if doing_text: html += zone.before_all_text or '' for pair in pairs: creative = pair.creative if track or (track is None and conf.enable_adjector_view_tracking): # Create a view thingy - this is much faster than using SA (almost instant) model.session.execute('INSERT INTO views (creative_id, zone_id, time) VALUES (%i, %i, now())' % (creative.id, zone.id)) #model.View(creative.id, zone.id) # Figure out the html value... # Use either click tracked or regular html if (track or (track is None and conf.enable_adjector_click_tracking)) and creative.html_tracked is not None: creative_html = creative.html_tracked.replace('ADJECTOR_TRACKING_BASE_URL', conf.tracking_base_url)\ .replace('ADJECTOR_CREATIVE_ID', str(creative.id)).replace('ADJECTOR_ZONE_ID', str(zone.id)) else: creative_html = creative.html # Remove or modify third party click tracking if (track is False or (track is None and not conf.enable_third_party_tracking)) and creative.cj_site_id is not None: creative_html = remove_tracking(creative_html, creative.cj_site_id) elif cj_site_id and creative.cj_site_id and conf.enable_cj_site_replacements: creative_html = re.sub(str(creative.cj_site_id), str(cj_site_id), creative_html) ########### Now we can do some text assembly ########### # If text, add pre-text if doing_text: html += zone.before_each_text or '' html += creative_html # Are we in admin mode? 
if admin: html += ''' <div class='adjector_admin' style='color: red; background-color: silver'> Creative: <a href='%(admin_base_url)s%(creative_url)s'>%(creative_title)s</a> Zone: <a href='%(admin_base_url)s%(zone_url)s'>%(zone_title)s</a> </div> ''' % dict(admin_base_url = conf.admin_base_url, creative_url = '/creative/%i' % creative.id, zone_url = zone.view(), creative_title = creative.title, zone_title = zone.title) if doing_text: html += zone.after_each_text or '' if doing_text: html += zone.after_all_text or '' model.session.commit() #having this down here saves us quite a bit of time # Wrap in javascript if asked if html and '<script' not in html and conf.require_javascript: wrapper = '''<script type='text/javascript'>document.write('%s')</script>''' # Do some quick substitutions to inject... #TODO there must be an existing function that does this html = re.sub(r"'", r"\'", html) # escape quotes html = re.sub(r"[\r\n]", r"", html) # remove line breaks return wrapper % html return html
AdjectorClient
/AdjectorClient-1.0b1.tar.gz/AdjectorClient-1.0b1/adjector/core/render.py
render.py
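Both render paths above pick creatives via "weighted bins": normalized weights partition the unit interval, a random draw lands in exactly one bin, and `CreativeZonePair` precaches each bin's `lower_bound`/`upper_bound`. A minimal sketch of the bin construction and lookup (hypothetical helper names; the SQL version computes the same cumulative sums in the database):

```python
import random

def build_bins(weights):
    """Partition [0, 1) into half-open intervals sized by normalized
    weight, mirroring CreativeZonePair.lower_bound/upper_bound."""
    total = float(sum(weights.values()))
    bins, lower = [], 0.0
    for item, weight in weights.items():
        upper = lower + weight / total
        bins.append((item, lower, upper))
        lower = upper
    return bins

def pick(bins, rand=None):
    # The 0.9999999999 scaling in render.py guards against float
    # round-off leaving the draw just outside the last bin.
    r = random.random() * 0.9999999999 if rand is None else rand
    for item, lower, upper in bins:
        if lower <= r < upper:
            return item
```

`render_zone` extends this with rejection of already-chosen bins (`banned_ranges`) so it can draw several distinct text creatives for one zone.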
import logging
from datetime import datetime

from elixir import using_options, using_table_options, BLOB, Boolean, ColumnProperty, \
    DateTime, Entity, EntityMeta, Field, Float, Integer, ManyToMany, ManyToOne, \
    OneToMany, OneToOne, SmallInteger, String, UnicodeText
from genshi import Markup
from sqlalchemy import func, UniqueConstraint

from adjector.core.conf import conf
from adjector.core.tracking import add_tracking, remove_tracking

log = logging.getLogger(__name__)

max_int = 2147483647
tz_now = lambda: datetime.now(conf.timezone)
UnicodeText = UnicodeText(assert_unicode=False)


class CircularDependencyException(Exception):
    pass


class GenericEntity(object):
    def __init__(self, data):
        self._updated = self.set(data)

    def set(self, data):
        for field in data.keys():
            if hasattr(self, field):
                if field == 'title':
                    data[field] = data[field][:80]
                self.__setattr__(field, data[field])
            else:
                log.warning('No field: %s' % field)

    def value(self):
        return self.__dict__


class GenericListEntity(GenericEntity):
    def set(self, data):
        GenericEntity.set(self, data)

        # Detect cycles in parenting - Brent's algorithm http://www.siafoo.net/algorithm/11
        turtle = self
        rabbit = self
        steps_taken = 0
        step_limit = 2

        while True:
            if not rabbit.parent_id:
                break  # no loop
            rabbit = rabbit.query.get(rabbit.parent_id)
            steps_taken += 1

            if rabbit == turtle:  # loop!
                raise CircularDependencyException

            if steps_taken == step_limit:
                steps_taken = 0
                step_limit *= 2
                turtle = rabbit


class CJIgnoredLink(Entity):
    cj_advertiser_id = Field(Integer, required=True)
    cj_link_id = Field(Integer, required=True)

    using_options(tablename=conf.table_prefix + 'cj_ignored_links')
    using_table_options(UniqueConstraint('cj_link_id'))

    def __init__(self, link_id, advertiser_id):
        self.cj_advertiser_id = advertiser_id
        self.cj_link_id = link_id


class Click(Entity):
    time = Field(DateTime(timezone=True), required=True, default=tz_now)
    creative = ManyToOne('Creative', ondelete='set null')
    zone = ManyToOne('Zone', ondelete='set null')

    using_options(tablename=conf.table_prefix + 'clicks')

    def __init__(self, creative_id, zone_id):
        self.creative_id = creative_id
        self.zone_id = zone_id


class Creative(GenericEntity, Entity):
    parent = ManyToOne('Set', required=False, ondelete='set null')
    #zones = ManyToMany('Zone', tablename='creatives_to_zones')
    creative_zone_pairs = OneToMany('CreativeZonePair', cascade='delete')

    title = Field(String(80, convert_unicode=True), required=True)
    html = Field(UnicodeText, required=True, default='')
    is_text = Field(Boolean, required=True, default=False)
    width = Field(Integer, required=True, default=0)
    height = Field(Integer, required=True, default=0)
    start_date = Field(DateTime(timezone=True))
    end_date = Field(DateTime(timezone=True))
    weight = Field(Float, required=True, default=1.0)
    add_tracking = Field(Boolean, required=True, default=True)
    disabled = Field(Boolean, required=True, default=False)
    create_date = Field(DateTime(timezone=True), required=True, default=tz_now)

    cj_link_id = Field(Integer)
    cj_advertiser_id = Field(Integer)
    cj_site_id = Field(Integer)

    views = OneToMany('View')
    clicks = OneToMany('Click')

    # Cached Values
    html_tracked = Field(UnicodeText)  # will be overwritten on set
    parent_weight = Field(Float, required=True, default=1.0)  # overwritten on any parent weight change

    using_options(tablename=conf.table_prefix + 'creatives', order_by='title')
    using_table_options(UniqueConstraint('cj_link_id'))

    def __init__(self, data):
        GenericEntity.__init__(self, data)
        if self.parent_id:
            self.parent_weight = Set.get(self.parent_id).weight

    def get_clicks(self, start=None, end=None):
        query = Click.query.filter_by(creative_id=self.id)
        if start:
            query = query.filter(Click.time > start)
        if end:
            query = query.filter(Click.time < end)
        return query.count()

    def get_views(self, start=None, end=None):
        query = View.query.filter_by(creative_id=self.id)
        if start:
            query = query.filter(View.time > start)
        if end:
            query = query.filter(View.time < end)
        return query.count()

    @staticmethod
    def possible_parents(this=None):
        return [[set.id, set.title] for set in Set.query()]

    def set(self, data):
        old_parent_id = self.parent_id
        old_html = self.html
        old_add_tracking = self.add_tracking

        GenericEntity.set(self, data)

        if self.parent_id != old_parent_id:
            self.parent_weight = Set.get(self.parent_id).weight

        # TODO: Handle Block / Text bullshit

        # Parse html
        if self.html != old_html or self.add_tracking != old_add_tracking:
            if self.add_tracking is not False:
                self.html_tracked = add_tracking(self.html)
            else:
                self.html_tracked = None
        return [self]

    def value(self):
        value = GenericEntity.value(self)
        value['preview'] = Markup(remove_tracking(self.html, self.cj_site_id))
        value['total_weight'] = self.weight * self.parent_weight
        value['html_tracked'] = value['html_tracked'] or value['html']
        return value

    def view(self):
        return '%s/creative/%i' % (conf.admin_base_url, self.id)


class CreativeZonePair(GenericEntity, Entity):
    creative = ManyToOne('Creative', ondelete='cascade', use_alter=True)
    zone = ManyToOne('Zone', ondelete='cascade', use_alter=True)

    is_text = Field(Boolean, required=True)
    lower_bound = Field(Float, required=True)
    upper_bound = Field(Float, required=True)

    using_options(tablename=conf.table_prefix + 'creative_zone_pairs')
    using_table_options(UniqueConstraint('creative_id', 'zone_id'))


class Location(GenericListEntity, Entity):
    ''' A container for locations or zones '''

    parent = ManyToOne('Location', required=False, ondelete='set null')
    sublocations = OneToMany('Location')
    zones = OneToMany('Zone')

    title = Field(String(80, convert_unicode=True), required=True)
    description = Field(UnicodeText)
    create_date = Field(DateTime(timezone=True), required=True, default=tz_now)

    cj_site_id = Field(Integer)
    parent_cj_site_id = Field(Integer)

    using_options(tablename=conf.table_prefix + 'locations', order_by='title')

    def __init__(self, data):
        GenericEntity.__init__(self, data)
        if self.parent_id:
            self.parent_cj_site_id = Location.get(self.parent_id).cj_site_id

    def delete(self, data):
        updated = []
        for subloc in self.sublocations:
            updated.extend(subloc.set(dict(parent_cj_site_id=None)))
        for zone in self.zones:
            updated.extend(zone.set(dict(parent_cj_site_id=None)))
        Entity.delete(self)
        return updated

    @staticmethod
    def possible_parents(this=None):
        filter = None
        if this:
            filter = Location.id != this.id
        return [[location.id, location.title] for location in Location.query.filter(filter)]

    def set(self, data):
        updated = [self]

        old_parent_id = self.parent_id
        old_cj_site_id = self.cj_site_id
        old_parent_cj_site_id = self.parent_cj_site_id

        GenericEntity.set(self, data)

        if self.parent_id != old_parent_id:
            self.parent_cj_site_id = Location.get(self.parent_id).cj_site_id

        if self.cj_site_id != old_cj_site_id or self.parent_cj_site_id != old_parent_cj_site_id:
            # Only pass parent- down if we don't have our own
            for subloc in self.sublocations:
                updated.extend(subloc.set(dict(parent_cj_site_id=self.cj_site_id or self.parent_cj_site_id)))
            for zone in self.zones:
                updated.extend(zone.set(dict(parent_cj_site_id=self.cj_site_id or self.parent_cj_site_id)))
        return updated

    def view(self):
        return '%s/location/%i' % (conf.admin_base_url, self.id)


class Set(GenericListEntity, Entity):
    parent = ManyToOne('Set', required=False, ondelete='set null')
    subsets = OneToMany('Set')
    creatives = OneToMany('Creative')

    title = Field(String(80, convert_unicode=True), required=True)
    description = Field(UnicodeText)
    weight = Field(Float, required=True, default=1.0)
    parent_weight = Field(Float, required=True, default=1.0)  # overwritten on any parent weight change
    create_date = Field(DateTime(timezone=True), required=True, default=tz_now)

    cj_advertiser_id = Field(Integer)

    using_options(tablename=conf.table_prefix + 'sets', order_by='title')
    using_table_options(UniqueConstraint('cj_advertiser_id'))

    def __init__(self, data):
        GenericEntity.__init__(self, data)
        if self.parent_id:
            self.parent_weight = Set.get(self.parent_id).weight

    def delete(self, data):
        updated = []
        for subset in self.subsets:
            updated.extend(subset.set(dict(parent_weight=1.0)))
        for creative in self.creatives:
            updated.extend(creative.set(dict(parent_weight=1.0)))
        Entity.delete(self)
        return updated

    @staticmethod
    def possible_parents(this=None):
        filter = None
        if this:
            filter = Set.id != this.id
        return [[set.id, set.title] for set in Set.query.filter(filter)]

    def set(self, data):
        updated = [self]

        old_parent_id = self.parent_id
        old_weight = self.weight
        old_parent_weight = self.parent_weight

        GenericEntity.set(self, data)

        if self.parent_id != old_parent_id:
            self.parent_weight = Set.get(self.parent_id).weight

        if self.weight != old_weight or self.parent_weight != old_parent_weight:
            for subset in self.subsets:
                updated.extend(subset.set(dict(parent_weight=self.parent_weight * self.weight)))
            for creative in self.creatives:
                updated.extend(creative.set(dict(parent_weight=self.parent_weight * self.weight)))
        return updated

    def value(self):
        value = GenericEntity.value(self)
        value['total_weight'] = self.weight * self.parent_weight
        return value

    def view(self):
        return '%s/set/%i' % (conf.admin_base_url, self.id)


class View(GenericEntity, Entity):
    time = Field(DateTime(timezone=True), required=True, default=tz_now)
    creative = ManyToOne('Creative', ondelete='set null')
    zone = ManyToOne('Zone', ondelete='set null')

    using_options(tablename=conf.table_prefix + 'views')

    def __init__(self, creative_id, zone_id):
        self.creative_id = creative_id
        self.zone_id = zone_id


class Zone(GenericEntity, Entity):
    parent = ManyToOne('Location', required=False, ondelete='set null')
    creative_zone_pairs = OneToMany('CreativeZonePair', cascade='delete')

    name = Field(String(80, convert_unicode=True), required=False)
    title = Field(String(80, convert_unicode=True), required=True)
    description = Field(UnicodeText)
    #creatives = ManyToMany('Creative', tablename='creatives_to_zones')

    normalize_by_container = Field(Boolean, required=True, default=False)
    creative_types = Field(SmallInteger, required=True, default=0)  # 0: Both, 1: Text, 2: Blocks

    # These only matter if blocks allowed
    min_width = Field(Integer, required=True, default=0)
    max_width = Field(Integer, required=True, default=max_int)
    min_height = Field(Integer, required=True, default=0)
    max_height = Field(Integer, required=True, default=max_int)

    # These only matter if text allowed
    num_texts = Field(SmallInteger, required=True, default=1)
    weight_texts = Field(Float, required=True, default=1.0)
    before_all_text = Field(UnicodeText)
    after_all_text = Field(UnicodeText)
    before_each_text = Field(UnicodeText)
    after_each_text = Field(UnicodeText)

    create_date = Field(DateTime(timezone=True), required=True, default=tz_now)

    # Cached from parent
    parent_cj_site_id = Field(Integer)

    # Cached from creatives
    total_text_weight = Field(Float)  # i dunno, some default? should be updated quick.

    views = OneToMany('View')
    clicks = OneToMany('Click')

    using_options(tablename=conf.table_prefix + 'zones', order_by='title')

    def __init__(self, data):
        GenericEntity.__init__(self, data)
        if self.parent_id:
            self.parent_cj_site_id = Location.get(self.parent_id).cj_site_id

    def get_clicks(self, start=None, end=None):
        query = Click.query.filter_by(zone_id=self.id)
        if start:
            query = query.filter(Click.time > start)
        if end:
            query = query.filter(Click.time < end)
        return query.count()

    def get_views(self, start=None, end=None):
        query = View.query.filter_by(zone_id=self.id)
        if start:
            query = query.filter(View.time > start)
        if end:
            query = query.filter(View.time < end)
        return query.count()

    @staticmethod
    def possible_parents(this=None):
        return [[location.id, location.title] for location in Location.query()]

    def set(self, data):
        if data.has_key('previous_name'):
            del data['previous_name']

        old_parent_id = self.parent_id

        GenericEntity.set(self, data)

        if self.parent_id != old_parent_id:
            self.parent_cj_site_id = Location.get(self.parent_id).cj_site_id
        return [self]

    def value(self):
        val = self.__dict__.copy()
        val['previous_name'] = self.name
        return val

    def view(self):
        return '%s/zone/%i' % (conf.admin_base_url, self.id)
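`GenericListEntity.set` above guards against cycles in the parent chain with Brent's algorithm. A minimal standalone sketch of the same idea, using a plain parent-id dictionary in place of the ORM (`has_cycle` and its arguments are illustrative, not part of Adjector):

```python
def has_cycle(parents, start):
    """Detect a cycle in a parent chain using Brent's algorithm.

    `parents` maps a node id to its parent id (or None at a root);
    `start` is the node whose ancestry is checked.
    """
    turtle = rabbit = start
    steps_taken, step_limit = 0, 2
    while True:
        parent = parents.get(rabbit)
        if parent is None:
            return False  # reached a root: no loop
        rabbit = parent
        steps_taken += 1
        if rabbit == turtle:
            return True  # the rabbit caught up with the turtle: loop
        if steps_taken == step_limit:
            # teleport the turtle to the rabbit and double the search window
            steps_taken = 0
            step_limit *= 2
            turtle = rabbit
```

The doubling window is what lets the algorithm detect a loop of any length while keeping only two pointers, which is why the entity code can run it against lazy ORM lookups without materializing the whole ancestor chain.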
AdjectorClient
/AdjectorClient-1.0b1.tar.gz/AdjectorClient-1.0b1/adjector/model/entities.py
entities.py
Adjector 1.0b
*************

Hi there. Thanks for using Adjector, a lightweight, flexible, open-source ad
server written in Python.

Adjector is licensed under the GPL, version 2 or 3, at your option. For more
information, see LICENSE.txt.

This Distribution
-----------------

This is the Trac plugin for Adjector. Either the full version or the
client-only version of Adjector is also required. If neither is installed on
your system, the client-only version will be installed when you install the
plugin.

Both versions can be downloaded at
http://projects.icapsid.net/adjector/wiki/Download

Documentation
-------------

All of our documentation is online at http://projects.icapsid.net/adjector

You may wish to get started with 'The Trac Plugin' at
http://projects.icapsid.net/adjector/wiki/TracPlugin

For questions, comments, help, or any other information, visit us online or
email [email protected].
AdjectorTracPlugin
/AdjectorTracPlugin-1.0b1.tar.gz/AdjectorTracPlugin-1.0b1/README.txt
README.txt
from adjector.client import initialize_adjector, render_zone

from trac.core import Component, implements
from trac.web.api import IRequestFilter


class AdjectorTracPlugin(Component):
    implements(IRequestFilter)

    # magic self variables: env, config, log

    def __init__(self):
        self.log.error('ADJECTOR')
        config = dict(self.config.options('adjector'))
        initialize_adjector(config)

    # IRequestFilter
    # Extension point interface for components that want to filter HTTP
    # requests, before and/or after they are processed by the main handler.

    def pre_process_request(self, req, handler):
        """Called after initial handler selection, and can be used to change
        the selected handler or redirect request.

        Always returns the request handler, even if unchanged.
        """
        return handler

    # for ClearSilver templates
    def post_process_request(self, req, template, content_type):
        """Do any post-processing the request might need; typically adding
        values to req.hdf, or changing template or mime type.

        Always returns a tuple of (template, content_type), even if
        unchanged.

        Note that `template`, `content_type` will be `None` if:
         - called when processing an error page
         - the default request handler did not return any result

        (for 0.10 compatibility; only used together with ClearSilver templates)
        """
        return (template, content_type)

    # for Genshi templates
    def post_process_request(self, req, template, data, content_type):
        """Do any post-processing the request might need; typically adding
        values to the template `data` dictionary, or changing template or
        mime type.

        `data` may be updated in place.

        Always returns a tuple of (template, data, content_type), even if
        unchanged.

        Note that `template`, `data`, `content_type` will be `None` if:
         - called when processing an error page
         - the default request handler did not return any result

        (Since 0.11)
        """
        data['render_zone'] = render_zone
        return (template, data, content_type)
AdjectorTracPlugin
/AdjectorTracPlugin-1.0b1.tar.gz/AdjectorTracPlugin-1.0b1/adjector/plugins/trac/adjector_trac.py
adjector_trac.py
## AdminToolsDjango

Create Django projects automatically from a template.

## Introduction:

Compatible with Linux and Windows. After creating a virtual environment, it installs django==3.2.11 and then creates the project from a template.

The template is customizable; an absolute path is recommended for the template path. The project's parent folder (project_parent_dir) is also customizable; an absolute path is recommended.

## Usage example:

```python
import AdminToolsDjango

# Create a manager object
django_project = AdminToolsDjango.ProjectManager(project_parent_dir='/obj_test/new_obj_parent',
                                                 project_name='new_obj',
                                                 )
print(django_project.cmd_activate_venv)

# Create the project
django_project.create_project()

# Configure the production environment: the nginx reverse proxy is set up
# automatically, so make sure the Linux system has an nginx server installed.
django_project.configure_production_environment()
```

## Installation:

Install AdminToolsDjango with the following command:

```shell
pip install AdminToolsDjango
```
AdminToolsDjango
/AdminToolsDjango-1.0.6.tar.gz/AdminToolsDjango-1.0.6/README.md
README.md
from copy import deepcopy
import datetime
import json
from time import time


class RequestCreator:
    """
    A class to help build a request for Adobe Analytics API 2.0 getReport
    """

    template = {
        "globalFilters": [],
        "metricContainer": {
            "metrics": [],
            "metricFilters": [],
        },
        "settings": {
            "countRepeatInstances": True,
            "limit": 20000,
            "page": 0,
            "nonesBehavior": "exclude-nones",
        },
        "statistics": {"functions": ["col-max", "col-min"]},
        "rsid": "",
    }

    def __init__(self, request: dict = None) -> None:
        """
        Instantiate the constructor.
        Arguments:
            request : OPTIONAL : overwrite the template with the definition provided.
        """
        if request is not None:
            # check the type first: membership tests on a dict would look at its keys
            if type(request) == str and '.json' in request:
                with open(request, 'r') as f:
                    request = json.load(f)
        self.__request = deepcopy(request) or deepcopy(self.template)
        self.__metricCount = len(self.__request["metricContainer"]["metrics"])
        self.__metricFilterCount = len(
            self.__request["metricContainer"].get("metricFilters", [])
        )
        self.__globalFiltersCount = len(self.__request["globalFilters"])
        ### Preparing some time statements.
        today = datetime.datetime.now()
        today_date_iso = today.isoformat().split("T")[0]  ## should give '20XX-XX-XX'
        tomorrow_date_iso = (
            (today + datetime.timedelta(days=1)).isoformat().split("T")[0]
        )
        time_start = "T00:00:00.000"
        time_end = "T23:59:59.999"
        startToday_iso = today_date_iso + time_start
        endToday_iso = today_date_iso + time_end
        startMonth_iso = f"{today_date_iso[:-2]}01{time_start}"
        tomorrow_iso = tomorrow_date_iso + time_start
        next_month = today.replace(day=28) + datetime.timedelta(days=4)
        last_day_month = next_month - datetime.timedelta(days=next_month.day)
        last_day_month_date_iso = last_day_month.isoformat().split("T")[0]
        last_day_month_iso = last_day_month_date_iso + time_end
        thirty_days_prior_date_iso = (
            (today - datetime.timedelta(days=30)).isoformat().split("T")[0]
        )
        thirty_days_prior_iso = thirty_days_prior_date_iso + time_start
        seven_days_prior_iso_date = (
            (today - datetime.timedelta(days=7)).isoformat().split("T")[0]
        )
        seven_days_prior_iso = seven_days_prior_iso_date + time_start
        ### assigning predefined dates:
        self.dates = {
            "thisMonth": f"{startMonth_iso}/{last_day_month_iso}",
            "untilToday": f"{startMonth_iso}/{startToday_iso}",
            "todayIncluded": f"{startMonth_iso}/{endToday_iso}",
            "last30daysTillToday": f"{thirty_days_prior_iso}/{startToday_iso}",
            "last30daysTodayIncluded": f"{thirty_days_prior_iso}/{tomorrow_iso}",
            "last7daysTillToday": f"{seven_days_prior_iso}/{startToday_iso}",
            "last7daysTodayIncluded": f"{seven_days_prior_iso}/{endToday_iso}",
        }
        self.today = today

    def __repr__(self):
        return json.dumps(self.__request, indent=4)

    def __str__(self):
        return json.dumps(self.__request, indent=4)

    def addMetric(self, metricId: str = None) -> None:
        """
        Add a metric to the template.
        Arguments:
            metricId : REQUIRED : The metric to add
        """
        if metricId is None:
            raise ValueError("Require a metric ID")
        columnId = self.__metricCount
        addMetric = {"columnId": str(columnId), "id": metricId}
        if columnId == 0:
            addMetric["sort"] = "desc"
        self.__request["metricContainer"]["metrics"].append(addMetric)
        self.__metricCount += 1

    def removeMetrics(self) -> None:
        """
        Remove all metrics.
        """
        self.__request["metricContainer"]["metrics"] = []
        self.__metricCount = 0

    def getMetrics(self) -> list:
        """
        Return a list of the metrics used.
        """
        return [metric["id"] for metric in self.__request["metricContainer"]["metrics"]]

    def setSearch(self, clause: str = None) -> None:
        """
        Add a search clause in the Analytics request.
        Arguments:
            clause : REQUIRED : String to tell what search clause to add.
                Examples:
                "( CONTAINS 'unspecified' ) OR ( CONTAINS 'none' ) OR ( CONTAINS '' )"
                "( MATCH 'undefined' )"
                "( NOT CONTAINS 'undefined' )"
                "( BEGINS-WITH 'undefined' )"
                "( BEGINS-WITH 'undefined' ) AND ( BEGINS-WITH 'none' )"
        """
        if clause is None:
            raise ValueError("Require a clause to add to the request")
        self.__request["search"] = {"clause": clause}

    def removeSearch(self) -> None:
        """
        Remove the search associated with the request.
        """
        del self.__request["search"]

    def addMetricFilter(
        self, metricId: str = None, filterId: str = None, metricIndex: int = None
    ) -> None:
        """
        Add a filter to a metric.
        Arguments:
            metricId : REQUIRED : metric where the filter is added
            filterId : REQUIRED : The filter to add.
                When breakdown, use the following format for the value: "dimension:::itemId"
            metricIndex : OPTIONAL : If used, set the filter to the metric located at that index.
        """
        if metricId is None:
            raise ValueError("Require a metric ID")
        if filterId is None:
            raise ValueError("Require a filter ID")
        filterIdCount = self.__metricFilterCount
        if filterId.startswith("s") and "@AdobeOrg" in filterId:
            filterType = "segment"
            filter = {
                "id": str(filterIdCount),
                "type": filterType,
                "segmentId": filterId,
            }
        elif filterId.startswith("20") and "/20" in filterId:
            filterType = "dateRange"
            filter = {
                "id": str(filterIdCount),
                "type": filterType,
                "dateRange": filterId,
            }
        elif ":::" in filterId:
            filterType = "breakdown"
            dimension, itemId = filterId.split(":::")
            filter = {
                "id": str(filterIdCount),
                "type": filterType,
                "dimension": dimension,
                "itemId": itemId,
            }
        else:  ### case when it is predefined segments like "All_Visits"
            filterType = "segment"
            filter = {
                "id": str(filterIdCount),
                "type": filterType,
                "segmentId": filterId,
            }
        if filterIdCount == 0:
            self.__request["metricContainer"]["metricFilters"] = [filter]
        else:
            self.__request["metricContainer"]["metricFilters"].append(filter)
        ### adding filter to the metric
        if metricIndex is None:
            for metric in self.__request["metricContainer"]["metrics"]:
                if metric["id"] == metricId:
                    if "filters" in metric.keys():
                        metric["filters"].append(str(filterIdCount))
                    else:
                        metric["filters"] = [str(filterIdCount)]
        else:
            metric = self.__request["metricContainer"]["metrics"][metricIndex]
            if "filters" in metric.keys():
                metric["filters"].append(str(filterIdCount))
            else:
                metric["filters"] = [str(filterIdCount)]
        ### incrementing the filter counter
        self.__metricFilterCount += 1

    def removeMetricFilter(self, filterId: str = None) -> None:
        """
        Remove a filter from a metric.
        Arguments:
            filterId : REQUIRED : The filter to remove.
                When breakdown, use the following format for the value: "dimension:::itemId"
        """
        found = False  ## flag
        if filterId is None:
            raise ValueError("Require a filter ID")
        if ":::" in filterId:
            filterId = filterId.split(":::")[1]
        list_index = []
        for metricFilter in self.__request["metricContainer"]["metricFilters"]:
            if filterId in str(metricFilter):
                list_index.append(metricFilter["id"])
                found = True
        ## decrementing the filter counter
        if found:
            for metricFilterId in reversed(list_index):
                del self.__request["metricContainer"]["metricFilters"][
                    int(metricFilterId)
                ]
                for metric in self.__request["metricContainer"]["metrics"]:
                    if metricFilterId in metric.get("filters", []):
                        metric["filters"].remove(metricFilterId)
                self.__metricFilterCount -= 1

    def setLimit(self, limit: int = 100) -> None:
        """
        Specify the number of elements to retrieve. Default is 100.
        Arguments:
            limit : OPTIONAL : number of elements to return
        """
        self.__request["settings"]["limit"] = limit

    def setRepeatInstance(self, repeat: bool = True) -> None:
        """
        Specify if repeated instances should be counted.
        Arguments:
            repeat : OPTIONAL : True or False (True by default)
        """
        self.__request["settings"]["countRepeatInstances"] = repeat

    def setNoneBehavior(self, returnNones: bool = True) -> None:
        """
        Set the behavior of the None values in that request.
        Arguments:
            returnNones : OPTIONAL : True or False (True by default)
        """
        if returnNones:
            self.__request["settings"]["nonesBehavior"] = "return-nones"
        else:
            self.__request["settings"]["nonesBehavior"] = "exclude-nones"

    def setDimension(self, dimension: str = None) -> None:
        """
        Set the dimension to be used for reporting.
        Arguments:
            dimension : REQUIRED : the dimension to build your report on
        """
        if dimension is None:
            raise ValueError("A dimension must be passed")
        self.__request["dimension"] = dimension

    def setRSID(self, rsid: str = None) -> None:
        """
        Set the reportSuite ID to be used for the reporting.
        Arguments:
            rsid : REQUIRED : The reportSuite ID to be passed.
        """
        if rsid is None:
            raise ValueError("A reportSuite ID must be passed")
        self.__request["rsid"] = rsid

    def addGlobalFilter(self, filterId: str = None) -> None:
        """
        Add a global filter to the report.
        NOTE : You need to have at least a dateRange filter in the global filters.
        Arguments:
            filterId : REQUIRED : The filter to add to the global filter.
                examples:
                "s2120430124uf03102jd8021" -> segment
                "2020-01-01T00:00:00.000/2020-02-01T00:00:00.000" -> dateRange
        """
        if filterId.startswith("s") and "@AdobeOrg" in filterId:
            filterType = "segment"
            filter = {
                "type": filterType,
                "segmentId": filterId,
            }
        elif filterId.startswith("20") and "/20" in filterId:
            filterType = "dateRange"
            filter = {
                "type": filterType,
                "dateRange": filterId,
            }
        elif ":::" in filterId:
            filterType = "breakdown"
            dimension, itemId = filterId.split(":::")
            filter = {
                "type": filterType,
                "dimension": dimension,
                "itemId": itemId,
            }
        else:  ### case when it is predefined segments like "All_Visits"
            filterType = "segment"
            filter = {
                "type": filterType,
                "segmentId": filterId,
            }
        ### incrementing the count for globalFilter
        self.__globalFiltersCount += 1
        ### adding to the globalFilter list
        self.__request["globalFilters"].append(filter)

    def updateDateRange(
        self,
        dateRange: str = None,
        shiftingDays: int = None,
        shiftingDaysEnd: int = None,
        shiftingDaysStart: int = None,
    ) -> None:
        """
        Update the dateRange filter on the globalFilter list.
        One of the elements specified below is required.
        Arguments:
            dateRange : OPTIONAL : string representing the new dateRange, such as:
                2020-01-01T00:00:00.000/2020-02-01T00:00:00.000
            shiftingDays : OPTIONAL : An integer, if you want to add or remove days from the current dateRange.
                Applies to both the end and the beginning of the dateRange.
                So 2020-01-01T00:00:00.000/2020-02-01T00:00:00.000 with +2 will give 2020-01-03T00:00:00.000/2020-02-03T00:00:00.000
            shiftingDaysEnd : OPTIONAL : An integer, if you want to add or remove days from the last part of the current dateRange.
                Applies only to the end of the dateRange.
                So 2020-01-01T00:00:00.000/2020-02-01T00:00:00.000 with +2 will give 2020-01-01T00:00:00.000/2020-02-03T00:00:00.000
            shiftingDaysStart : OPTIONAL : An integer, if you want to add or remove days from the first part of the current dateRange.
                Applies only to the beginning of the dateRange.
                So 2020-01-01T00:00:00.000/2020-02-01T00:00:00.000 with +2 will give 2020-01-03T00:00:00.000/2020-02-01T00:00:00.000
        """
        pos = -1
        for index, filter in enumerate(self.__request["globalFilters"]):
            if filter["type"] == "dateRange":
                pos = index
                curDateRange = filter["dateRange"]
                start, end = curDateRange.split("/")
                start = datetime.datetime.fromisoformat(start)
                end = datetime.datetime.fromisoformat(end)
        if dateRange is not None and type(dateRange) == str:
            newDef = {
                "type": "dateRange",
                "dateRange": dateRange,
            }
        if shiftingDays is not None and type(shiftingDays) == int:
            newStart = (start + datetime.timedelta(shiftingDays)).isoformat(
                timespec="milliseconds"
            )
            newEnd = (end + datetime.timedelta(shiftingDays)).isoformat(
                timespec="milliseconds"
            )
            newDef = {
                "type": "dateRange",
                "dateRange": f"{newStart}/{newEnd}",
            }
        elif shiftingDaysEnd is not None and type(shiftingDaysEnd) == int:
            newEnd = (end + datetime.timedelta(shiftingDaysEnd)).isoformat(
                timespec="milliseconds"
            )
            # re-serialize the untouched boundary so it stays in ISO format
            newDef = {
                "type": "dateRange",
                "dateRange": f"{start.isoformat(timespec='milliseconds')}/{newEnd}",
            }
        elif shiftingDaysStart is not None and type(shiftingDaysStart) == int:
            newStart = (start + datetime.timedelta(shiftingDaysStart)).isoformat(
                timespec="milliseconds"
            )
            newDef = {
                "type": "dateRange",
                "dateRange": f"{newStart}/{end.isoformat(timespec='milliseconds')}",
            }
        if pos > -1:
            self.__request["globalFilters"][pos] = newDef
        else:  ## in case there is no dateRange already
            self.__request["globalFilters"].append(newDef)

    def removeGlobalFilter(self, index: int = None, filterId: str = None) -> None:
        """
        Remove a specific filter from the globalFilter list.
        You can use either the index of the list or the specific Id of the filter used.
        Arguments:
            index : REQUIRED : index in the list returned
            filterId : REQUIRED : the id of the filter to be removed (ex: segmentId, dateRange)
        """
        pos = -1
        if index is not None:
            del self.__request["globalFilters"][index]
        elif filterId is not None:
            for index, filter in enumerate(self.__request["globalFilters"]):
                if filterId in str(filter):
                    pos = index
            if pos > -1:
                del self.__request["globalFilters"][pos]
        ### decrementing the count for globalFilter
        self.__globalFiltersCount -= 1

    def to_dict(self) -> dict:
        """
        Return the request definition.
        """
        return deepcopy(self.__request)

    def save(self, fileName: str = None) -> None:
        """
        Save the request definition in a JSON file.
        Argument:
            fileName : OPTIONAL : Name of the file. (default aa_request_<timestamp>.json)
        """
        fileName = fileName or f"aa_request_{int(time())}.json"
        with open(fileName, "w") as f:
            f.write(json.dumps(self.to_dict(), indent=4))
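`addMetricFilter` and `addGlobalFilter` above both infer the filter type from the shape of the id string. The dispatch rules can be isolated into a small standalone helper (a sketch for illustration only; `classify_filter` is not part of the library):

```python
def classify_filter(filter_id):
    """Mirror the id-shape heuristics used by addGlobalFilter:
    segment ids look like 's...@AdobeOrg', date ranges look like
    '20XX-.../20XX-...', breakdowns embed 'dimension:::itemId',
    and anything else is treated as a predefined segment."""
    if filter_id.startswith("s") and "@AdobeOrg" in filter_id:
        return {"type": "segment", "segmentId": filter_id}
    if filter_id.startswith("20") and "/20" in filter_id:
        return {"type": "dateRange", "dateRange": filter_id}
    if ":::" in filter_id:
        dimension, item_id = filter_id.split(":::")
        return {"type": "breakdown", "dimension": dimension, "itemId": item_id}
    # fallback: predefined segments such as "All_Visits"
    return {"type": "segment", "segmentId": filter_id}
```

Because the dispatch is purely string-based, an id that happens to start with "20" and contain "/20" will always be read as a date range, which is worth keeping in mind when naming custom segments.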
Adobe-Lib-Manual
/Adobe_Lib_Manual-4.2.tar.gz/Adobe_Lib_Manual-4.2/aanalytics2/requestCreator.py
requestCreator.py
import pandas as pd
import json
from typing import Union, IO
import time
from .requestCreator import RequestCreator
from copy import deepcopy


class Workspace:
    """
    A class to return data from the getReport method.
    """

    startDate = None
    endDate = None
    settings = None

    def __init__(
        self,
        responseData: dict,
        dataRequest: dict = None,
        columns: dict = None,
        summaryData: dict = None,
        analyticsConnector: object = None,
        reportType: str = "normal",
        metrics: Union[dict, list] = None,  ## for normal type, static report
        metricFilters: dict = None,
        resolveColumns: bool = True,
    ) -> None:
        """
        Setup the different values from the response of the getReport.
        Arguments:
            responseData : REQUIRED : data returned & predigested by the getReport method.
            dataRequest : REQUIRED : dataRequest containing the request
            columns : REQUIRED : the columns element of the response.
            summaryData : REQUIRED : summary data containing totals calculated by CJA
            analyticsConnector : REQUIRED : analytics object connector.
            reportType : OPTIONAL : define type of report retrieved. (normal, static, multi)
            metrics : OPTIONAL : dictionary of the column Ids for a normal report and list of column names for a static report
            metricFilters : OPTIONAL : Filter name for the id of the filter
            resolveColumns : OPTIONAL : If you want to resolve the column name instead of returning the ID
        """
        for filter in dataRequest["globalFilters"]:
            if filter["type"] == "dateRange":
                self.startDate = filter["dateRange"].split("/")[0]
                self.endDate = filter["dateRange"].split("/")[1]
        self.dataRequest = RequestCreator(dataRequest)
        self.requestSize = dataRequest["settings"]["limit"]
        self.settings = dataRequest["settings"]
        self.pageRequested = dataRequest["settings"]["page"] + 1
        self.summaryData = summaryData
        self.reportType = reportType
        self.analyticsObject = analyticsConnector
        ## global filters resolution
        filters = []
        for filter in dataRequest["globalFilters"]:
            if filter["type"] == "segment":
                segmentId = filter.get("segmentId", None)
                if segmentId is not None:
                    seg = self.analyticsObject.getSegment(filter["segmentId"])
                    filter["segmentName"] = seg["name"]
                else:
                    context = filter.get('segmentDefinition', {}).get('container', {}).get('context')
                    description = filter.get('segmentDefinition', {}).get('container', {}).get('pred', {}).get('description')
                    listName = ','.join(filter.get('segmentDefinition', {}).get('container', {}).get('pred', {}).get('list', []))
                    function = filter.get('segmentDefinition', {}).get('container', {}).get('pred', {}).get('func')
                    filter["segmentId"] = f"Dynamic: {context} {description} {function} {listName}"
                    filter["segmentName"] = f"{context} {description} {listName}"
            filters.append(filter)
        self.globalFilters = filters
        self.metricFilters = metricFilters
        if reportType == "normal" or reportType == "static":
            df_init = pd.DataFrame(responseData).T
            df_init = df_init.reset_index()
        elif reportType == "multi":
            df_init = responseData
        if reportType == "normal":
            columns_data = ["itemId"]
        elif reportType == "static":
            columns_data = ["SegmentName"]
        ### adding dimensions & metrics in column names when reportType is "normal"
        if "dimension" in dataRequest.keys() and reportType == "normal":
            columns_data.append(dataRequest["dimension"])
            ### adding metrics in column names
            columnIds = columns["columnIds"]
            # To get readable names of template metrics and Success Events, we need to
            # get the full list of metrics for the Report Suite first. But we won't do
            # this if there are no such metrics in the report.
            if (resolveColumns is True) & (
                len([metric for metric in metrics.values() if metric.startswith("metrics/")]) > 0
            ):
                rsMetricsList = self.analyticsObject.getMetrics(rsid=dataRequest["rsid"])
            for col in columnIds:
                metrics: dict = metrics  ## case when dict is used
                metricListName: list = metrics[col].split(":::")
                if resolveColumns:
                    metricResolvedName = []
                    for metric in metricListName:
                        if metric.startswith("cm"):
                            cm = self.analyticsObject.getCalculatedMetric(metric)
                            metricName = cm.get("name", metric)
                            metricResolvedName.append(metricName)
                        elif metric.startswith("s"):
                            seg = self.analyticsObject.getSegment(metric)
                            segName = seg.get("name", metric)
                            metricResolvedName.append(segName)
                        elif metric.startswith("metrics/"):
                            metricName = rsMetricsList[rsMetricsList["id"] == metric]["name"].iloc[0]
                            metricResolvedName.append(metricName)
                        else:
                            metricResolvedName.append(metric)
                    colName = ":::".join(metricResolvedName)
                    columns_data.append(colName)
                else:
                    columns_data.append(metrics[col])
        elif reportType == "static":
            metrics: list = metrics  ## case when a list is used
            columns_data.append("SegmentId")
            columns_data += metrics
        if df_init.empty == False and (
            reportType == "static" or reportType == "normal"
        ):
            df_init.columns = columns_data
            self.columns = list(df_init.columns)
        elif reportType == "multi":
            self.columns = list(df_init.columns)
        else:
            self.columns = list(df_init.columns)
        self.row_numbers = len(df_init)
        self.dataframe = df_init

    def __str__(self):
        return json.dumps(
            {
                "startDate": self.startDate,
                "endDate": self.endDate,
                "globalFilters": self.globalFilters,
                "totalRows": self.row_numbers,
                "columns": self.columns,
            },
            indent=4,
        )

    def __repr__(self):
        return json.dumps(
            {
                "startDate": self.startDate,
                "endDate": self.endDate,
                "globalFilters": self.globalFilters,
                "totalRows": self.row_numbers,
                "columns": self.columns,
            },
            indent=4,
        )

    def to_csv(
        self,
        filename: str = None,
        delimiter: str = ",",
        index: bool = False,
    ) -> IO:
        """
        Save the result in a CSV.
        Arguments:
            filename : OPTIONAL : name of the file
            delimiter : OPTIONAL : delimiter of the CSV
            index : OPTIONAL : should the index be included in the CSV (default False)
        """
        if filename is None:
            filename = f"cjapy_{int(time.time())}.csv"
        # the dataframe lives on self.dataframe (the original referenced a
        # non-existent self.df_init); pandas takes the delimiter as `sep`
        self.dataframe.to_csv(filename, sep=delimiter, index=index)

    def to_json(self, filename: str = None, orient: str = "index") -> IO:
        """
        Save the result to JSON.
        Arguments:
            filename : OPTIONAL : name of the file
            orient : OPTIONAL : orientation of the JSON
        """
        if filename is None:
            filename = f"cjapy_{int(time.time())}.json"
        self.dataframe.to_json(filename, orient=orient)

    def breakdown(
        self,
        index: Union[int, str] = None,
        dimension: str = None,
        n_results: Union[int, str] = 10,
    ) -> object:
        """
        Breakdown a specific index or value of the dataframe, by another dimension.
        NOTE: breakdowns are possible only from the normal reportType.
        Returns a workspace instance.
        Arguments:
            index : REQUIRED : Value to use as filter for the breakdown or index of the dataframe to use for the breakdown.
            dimension : REQUIRED : dimension to report.
            n_results : OPTIONAL : number of results you want to have on your breakdown.
                Default 10, can use "inf"
        """
        if index is None or dimension is None:
            raise ValueError(
                "Require a value to use as breakdown and dimension to request"
            )
        breakdown_dimension = list(self.dataframe.columns)[1]
        if type(index) == str:
            row: pd.Series = self.dataframe[self.dataframe.iloc[:, 1] == index]
            itemValue: str = row["itemId"].values[0]
        elif type(index) == int:
            itemValue = self.dataframe.loc[index, "itemId"]
        breakdown = f"{breakdown_dimension}:::{itemValue}"
        new_request = RequestCreator(self.dataRequest.to_dict())
        new_request.setDimension(dimension)
        metrics = new_request.getMetrics()
        for metric in metrics:
            new_request.addMetricFilter(metricId=metric, filterId=breakdown)
        # guard the numeric comparison so that n_results == "inf" does not raise
        if n_results != "inf" and n_results < 20000:
            new_request.setLimit(n_results)
        report = self.analyticsObject.getReport2(
            new_request.to_dict(), n_results=n_results
        )
        return report
# ---- end of file: aanalytics2/workspace.py (package Adobe_Lib_Manual-4.2) ----
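For reference, a minimal sketch (hypothetical helpers, stdlib only, no Adobe API calls) of the two `:::`-delimited string conventions used in `workspace.py` above: column names are built by joining the resolved metric/segment names with `:::`, and `Workspace.breakdown` composes the `filterId` it passes to `addMetricFilter` as `<dimensionId>:::<itemId>`.

```python
# Hypothetical helpers mirroring the string conventions seen in workspace.py.
# They perform no API calls; names and example ids are illustrative only.

def build_column_name(resolved_parts: list) -> str:
    # Same join Workspace uses when resolving metric/segment ids into a column name.
    return ":::".join(resolved_parts)

def build_breakdown_filter(dimension_id: str, item_id: str) -> str:
    # Same composition Workspace.breakdown uses for addMetricFilter's filterId.
    return f"{dimension_id}:::{item_id}"

print(build_column_name(["Page Views", "My Segment"]))    # Page Views:::My Segment
print(build_breakdown_filter("variables/evar1", "1234"))  # variables/evar1:::1234
```

Splitting a stored column name back into its parts is the inverse: `column.split(":::")`, which is exactly what the constructor above does with `metrics[col]`.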
import json
import os
import time
from pathlib import Path
from typing import Optional

# Non standard libraries
from .config import config_object, header


def find_path(path: str) -> Optional[Path]:
    """Checks if the file denoted by the specified `path` exists and returns the Path object
    for the file.
    If the file under the `path` does not exist and the path denotes an absolute path, tries
    to find the file by converting the absolute path to a relative path.
    If the file does not exist with either the absolute or the relative path, returns `None`.
    """
    if Path(path).exists():
        return Path(path)
    elif path.startswith('/') and Path('.' + path).exists():
        return Path('.' + path)
    elif path.startswith('\\') and Path('.' + path).exists():
        return Path('.' + path)
    else:
        return None


def createConfigFile(
    destination: str = 'config_analytics_template.json',
    auth_type: str = "oauthV2",
    verbose: bool = False,
) -> None:
    """Creates a `config_admin.json` file with the pre-defined configuration format to store
    the access data in under the specified `destination`.
    Arguments:
        destination : OPTIONAL : the name of the file (with a path if you want one)
        auth_type : OPTIONAL : The type of authentication you want to use for your config file.
            Possible values: "jwt" or "oauthV2"
        verbose : OPTIONAL : if set to True, prints the location of the created file.
    """
    json_data = {
        'org_id': '<orgID>',
        'client_id': "<APIkey>",
        'secret': "<YourSecret>",
    }
    if auth_type == 'oauthV2':
        json_data['scopes'] = "<scopes>"
    elif auth_type == 'jwt':
        json_data["tech_id"] = "<something>@techacct.adobe.com"
        json_data["pathToKey"] = "<path/to/your/privatekey.key>"
    if '.json' not in destination:
        destination += '.json'
    with open(destination, 'w') as cf:
        cf.write(json.dumps(json_data, indent=4))
    if verbose:
        # `destination` already carries the .json extension at this point.
        print(f"file created at this location : {os.getcwd()}{os.sep}{destination}")


def importConfigFile(path: str = None, auth_type: str = None) -> None:
    """Reads the file denoted by the supplied `path` and retrieves the configuration
    information from it.
    Arguments:
        path : REQUIRED : path to the configuration file. Can be either fully-qualified or relative.
        auth_type : OPTIONAL : The type of Auth to be used by default. Detected if none is passed;
            OauthV2 takes precedence. Possible values: "jwt" or "oauthV2"
    Examples of path values:
        "config.json"
        "./config.json"
        "/my-folder/config.json"
    """
    config_file_path: Optional[Path] = find_path(path)
    if config_file_path is None:
        raise FileNotFoundError(
            f"Unable to find the configuration file under path `{path}`."
        )
    with open(config_file_path, 'r') as file:
        provided_config = json.load(file)
    provided_keys = provided_config.keys()
    if 'api_key' in provided_keys:  ## old naming for client_id
        client_id = provided_config['api_key']
    elif 'client_id' in provided_keys:
        client_id = provided_config['client_id']
    else:
        raise RuntimeError("Either an `api_key` or a `client_id` should be provided.")
    if auth_type is None:
        if 'scopes' in provided_keys:
            auth_type = 'oauthV2'
        elif 'tech_id' in provided_keys and "pathToKey" in provided_keys:
            auth_type = 'jwt'
    args = {
        "org_id": provided_config['org_id'],
        "secret": provided_config['secret'],
        "client_id": client_id,
    }
    if auth_type == 'oauthV2':
        args["scopes"] = provided_config["scopes"].replace(' ', '')
    if auth_type == 'jwt':
        args["tech_id"] = provided_config["tech_id"]
        args["path_to_key"] = provided_config["pathToKey"]
    configure(**args)


def configure(
    org_id: str = None,
    tech_id: str = None,
    secret: str = None,
    client_id: str = None,
    path_to_key: str = None,
    private_key: str = None,
    oauth: bool = False,
    token: str = None,
    scopes: str = None,
):
    """Performs programmatic configuration of the API using provided values.
    Arguments:
        org_id : REQUIRED : Organization ID
        tech_id : REQUIRED : Technical Account ID
        secret : REQUIRED : secret generated for your connection
        client_id : REQUIRED : The client_id (old api_key) provided by the JWT connection.
        path_to_key : REQUIRED : If you have a file containing your private key value.
        private_key : REQUIRED : If you do not use a file but pass the variable directly.
        oauth : OPTIONAL : If you wish to pass a token generated by oauth
        token : OPTIONAL : If oauth is set to True, you need to pass the token
        scopes : OPTIONAL : If you use Oauth, you need to pass the scopes
    """
    if not org_id:
        raise ValueError("`org_id` must be specified in the configuration.")
    if not client_id:
        raise ValueError("`client_id` must be specified in the configuration.")
    if not tech_id and oauth == False and not scopes:
        raise ValueError("`tech_id` must be specified in the configuration.")
    if not secret and oauth == False:
        raise ValueError("`secret` must be specified in the configuration.")
    if (not path_to_key and not private_key and oauth == False) and not scopes:
        raise ValueError(
            "`scopes` must be specified for an Oauth setup.\n"
            "`pathToKey` or `private_key` must be specified in the configuration for a JWT setup."
        )
    config_object["org_id"] = org_id
    config_object["client_id"] = client_id
    header["x-api-key"] = client_id
    config_object["tech_id"] = tech_id
    config_object["secret"] = secret
    config_object["pathToKey"] = path_to_key
    config_object["private_key"] = private_key
    config_object["scopes"] = scopes
    # ensure the reset of the state by overwriting possible values from a previous import.
    config_object["date_limit"] = 0
    config_object["token"] = ""
    if oauth:
        date_limit = int(time.time()) + (22 * 60 * 60)
        config_object["date_limit"] = date_limit
        config_object["token"] = token
        header["Authorization"] = f"Bearer {token}"


def get_private_key_from_config(config: dict) -> str:
    """
    Returns the private key directly, or reads a file to return the private key.
    """
    private_key = config.get('private_key')
    if private_key is not None:
        return private_key
    private_key_path = find_path(config['pathToKey'])
    if private_key_path is None:
        raise FileNotFoundError(
            f'Unable to find the private key under path `{config["pathToKey"]}`.'
        )
    with open(Path(private_key_path), 'r') as f:
        private_key = f.read()
    return private_key


def generateLoggingObject(
    level: str = "WARNING",
    stream: bool = True,
    file: bool = False,
    filename: str = "aanalytics2.log",
    format: str = "%(asctime)s::%(name)s::%(funcName)s::%(levelname)s::%(message)s::%(lineno)d",
) -> dict:
    """
    Generates a dictionary for the logging object with basic configuration.
    You can find the information for the different possible values in the logging documentation:
    https://docs.python.org/3/library/logging.html
    Arguments:
        level : Level of the logger to display information (NOTSET, DEBUG, INFO, WARNING, ERROR, CRITICAL)
        stream : If the logger should display print statements
        file : If the logger should write the messages to a file
        filename : name of the file where logs are written
        format : format of the logs to be written
    """
    myObject = {
        "level": level,
        "stream": stream,
        "file": file,
        "format": format,
        "filename": filename,
    }
    return myObject
# ---- end of file: aanalytics2/configs.py (package Adobe_Lib_Manual-4.2) ----
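To make the `configs.py` flow above concrete, here is a stdlib-only sketch of the round trip between `createConfigFile` (which writes the oauthV2 template shown above) and the auth-type detection in `importConfigFile`. The temp directory and variable names are illustrative; no Adobe connection is made.

```python
import json
import os
import tempfile

# Write the oauthV2 template shape produced by createConfigFile.
template = {
    "org_id": "<orgID>",
    "client_id": "<APIkey>",
    "secret": "<YourSecret>",
    "scopes": "<scopes>",
}
path = os.path.join(tempfile.mkdtemp(), "config_analytics_template.json")
with open(path, "w") as cf:
    cf.write(json.dumps(template, indent=4))

# Read it back and detect the auth type the way importConfigFile does:
# "scopes" present -> oauthV2; "tech_id" + "pathToKey" present -> jwt.
with open(path) as f:
    provided = json.load(f)
if "scopes" in provided:
    auth_type = "oauthV2"
elif "tech_id" in provided and "pathToKey" in provided:
    auth_type = "jwt"
print(auth_type)  # oauthV2
```

The same detection explains why `importConfigFile` can be called without `auth_type`: the keys in the file are enough to pick the branch.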
import json
import os
import re
import time
import datetime
from concurrent import futures
from copy import deepcopy
from pathlib import Path
from typing import IO, Union, List
from collections import defaultdict
from itertools import tee
import logging

# Non standard libraries
import pandas as pd
from urllib import parse
from aanalytics2 import config, connector, token_provider
from .projects import *
from .requestCreator import RequestCreator
from .workspace import Workspace

JsonOrDataFrameType = Union[pd.DataFrame, dict]
JsonListOrDataFrameType = Union[pd.DataFrame, List[dict]]


def retrieveToken(verbose: bool = False, save: bool = False, **kwargs) -> str:
    """
    LEGACY. Retrieves a token directly following the importConfigFile or configure method.
    """
    token_with_expiry = token_provider.get_jwt_token_and_expiry_for_config(
        config.config_object, **kwargs
    )
    token = token_with_expiry['token']
    config.config_object['token'] = token
    config.config_object['date_limit'] = (
        time.time() + token_with_expiry['expiry'] / 1000 - 500
    )
    config.header.update({'Authorization': f'Bearer {token}'})
    if verbose:
        print(
            f"token valid till : {time.ctime(time.time() + token_with_expiry['expiry'] / 1000)}"
        )
    return token


class Login:
    """
    Class to connect to the login company.
    """
    loggingEnabled = False
    logger = None

    def __init__(
        self,
        config: dict = config.config_object,
        header: dict = config.header,
        retry: int = 0,
        loggingObject: dict = None,
    ) -> None:
        """
        Instantiate the Login class.
        Arguments:
            config : REQUIRED : dictionary with your configuration information.
            header : REQUIRED : dictionary of your header.
            retry : OPTIONAL : if you want to retry failed calls, the number of times to retry.
            loggingObject : OPTIONAL : if you want to set logging capability for your actions.
        """
        if loggingObject is not None and sorted(
            ["level", "stream", "format", "filename", "file"]
        ) == sorted(list(loggingObject.keys())):
            self.loggingEnabled = True
            self.logger = logging.getLogger(f"{__name__}.login")
            self.logger.setLevel(loggingObject["level"])
            if type(loggingObject["format"]) == str:
                formatter = logging.Formatter(loggingObject["format"])
            elif type(loggingObject["format"]) == logging.Formatter:
                formatter = loggingObject["format"]
            if loggingObject["file"]:
                fileHandler = logging.FileHandler(loggingObject["filename"])
                fileHandler.setFormatter(formatter)
                self.logger.addHandler(fileHandler)
            if loggingObject["stream"]:
                streamHandler = logging.StreamHandler()
                streamHandler.setFormatter(formatter)
                self.logger.addHandler(streamHandler)
        self.connector = connector.AdobeRequest(
            config_object=config,
            header=header,
            retry=retry,
            loggingEnabled=self.loggingEnabled,
            logger=self.logger,
        )
        self.header = self.connector.header
        self.COMPANY_IDS = {}
        self.retry = retry

    def getCompanyId(self, verbose: bool = False) -> dict:
        """
        Retrieve the company ids for later calls for the properties.
        """
        if self.loggingEnabled:
            self.logger.debug("getCompanyId start")
        res = self.connector.getData(
            "https://analytics.adobe.io/discovery/me", headers=self.header
        )
        json_res = res
        if self.loggingEnabled:
            self.logger.debug(f"getCompanyId response: {json_res}")
        try:
            companies = json_res['imsOrgs'][0]['companies']
            self.COMPANY_IDS = json_res['imsOrgs'][0]['companies']
            return companies
        except Exception:
            if verbose:
                print("exception when trying to get companies with parameter 'all'")
                print(json_res)
            if self.loggingEnabled:
                self.logger.error(f"Error trying to get companyId: {json_res}")
            return None

    def createAnalyticsConnection(
        self, companyId: str = None, loggingObject: dict = None
    ) -> object:
        """
        Returns an instance of the Analytics class so you can query the different elements
        from that instance.
        Arguments:
            companyId : REQUIRED : The globalCompanyId that you want to use in your connection.
            loggingObject : OPTIONAL : if you want to set logging capability for your actions.
        The retry parameter set in the previous class instantiation will be used here.
        """
        analytics = Analytics(
            company_id=companyId,
            config_object=self.connector.config,
            header=self.header,
            retry=self.retry,
            loggingObject=loggingObject,
        )
        return analytics


class Analytics:
    """
    Class that instantiates a connection to a single login company.
    """
    # Endpoints
    header = {
        "Accept": "application/json",
        "Content-Type": "application/json",
        "Authorization": "Bearer ",
        "X-Api-Key": "",
    }
    _endpoint = 'https://analytics.adobe.io/api'
    _getRS = '/collections/suites'
    _getDimensions = '/dimensions'
    _getMetrics = '/metrics'
    _getSegments = '/segments'
    _getCalcMetrics = '/calculatedmetrics'
    _getDateRanges = '/dateranges'
    _getReport = '/reports'
    loggingEnabled = False
    logger = None

    def __init__(
        self,
        company_id: str = None,
        config_object: dict = config.config_object,
        header: dict = config.header,
        retry: int = 0,
        loggingObject: dict = None,
    ):
        """
        Instantiate the Analytics class.
        The Analytics class will be automatically connected to the API 2.0.
        You can review the connection details by looking into the connector instance.
        "header", "company_id" and "endpoint_company" are attributes accessible for debugging.
        Arguments:
            company_id : REQUIRED : company ID retrieved by getCompanyId
            retry : OPTIONAL : number of times you want to retry failed calls
            loggingObject : OPTIONAL : logging object to log actions during runtime.
            config_object : OPTIONAL : config object to be used for setting the token (do not update if you do not know).
            header : OPTIONAL : template header used for all requests (do not update if you do not know!)
        """
        if company_id is None:
            raise AttributeError(
                'Expected "company_id" to be referenced.\n'
                'Please ensure you pass the globalCompanyId when instantiating this class.'
            )
        if loggingObject is not None and sorted(
            ["level", "stream", "format", "filename", "file"]
        ) == sorted(list(loggingObject.keys())):
            self.loggingEnabled = True
            self.logger = logging.getLogger(f"{__name__}.analytics")
            self.logger.setLevel(loggingObject["level"])
            if type(loggingObject["format"]) == str:
                formatter = logging.Formatter(loggingObject["format"])
            elif type(loggingObject["format"]) == logging.Formatter:
                formatter = loggingObject["format"]
            if loggingObject["file"]:
                fileHandler = logging.FileHandler(loggingObject["filename"])
                fileHandler.setFormatter(formatter)
                self.logger.addHandler(fileHandler)
            if loggingObject["stream"]:
                streamHandler = logging.StreamHandler()
                streamHandler.setFormatter(formatter)
                self.logger.addHandler(streamHandler)
        self.connector = connector.AdobeRequest(
            config_object=config_object,
            header=header,
            retry=retry,
            loggingEnabled=self.loggingEnabled,
            logger=self.logger,
        )
        self.header = self.connector.header
        self.connector.header['x-proxy-global-company-id'] = company_id
        self.header['x-proxy-global-company-id'] = company_id
        self.endpoint_company = f"{self._endpoint}/{company_id}"
        self.company_id = company_id
        self.listProjectIds = []
        self.projectsDetails = {}
        self.segments = []
        self.calculatedMetrics = []
        try:
            import importlib.resources as pkg_resources
            pathLOGS = pkg_resources.path(
                "aanalytics2", "eventType_usageLogs.pickle")
        except ImportError:
            try:
                # Try backported to PY<37 `importlib_resources`.
                import pkg_resources
                pathLOGS = pkg_resources.resource_filename(
                    "aanalytics2", "eventType_usageLogs.pickle")
            except Exception:
                print('Empty LOGS_EVENT_TYPE attribute')
        try:
            with pathLOGS as f:
                self.LOGS_EVENT_TYPE = pd.read_pickle(f)
        except Exception:
            self.LOGS_EVENT_TYPE = "no data"

    def __str__(self) -> str:
        obj = {
            "endpoint": self.endpoint_company,
            "companyId": self.company_id,
            "header": self.header,
            "token": self.connector.config['token'],
        }
        return json.dumps(obj, indent=4)

    def __repr__(self) -> str:
        obj = {
            "endpoint": self.endpoint_company,
            "companyId": self.company_id,
            "header": self.header,
            "token": self.connector.config['token'],
        }
        return json.dumps(obj, indent=4)

    def refreshToken(self, token: str = None):
        if token is None:
            raise AttributeError(
                'Expected "token" to be referenced.\nPlease ensure you pass the token.'
            )
        self.header['Authorization'] = "Bearer " + token

    def decodeAArequests(
        self,
        file: IO = None,
        urls: Union[list, str] = None,
        save: bool = False,
        **kwargs,
    ) -> pd.DataFrame:
        """
        Takes either parameter to load Adobe URLs and decomposes the requests into a dataframe,
        which you can save if you want.
        Arguments:
            file : OPTIONAL : file referencing the different requests saved (excel or txt)
            urls : OPTIONAL : list of requests (or a single request) that you want to decode.
            save : OPTIONAL : parameter to save your decoded list into a csv file.
        Returns a dataframe.
        Possible kwargs:
            encoding : the type of encoding used to decode the file
        """
        if self.loggingEnabled:
            self.logger.debug("Starting decodeAArequests")
        if file is None and urls is None:
            raise ValueError("Require at least a file or urls to contain data")
        if file is not None:
            if '.txt' in file:
                with open(file, 'r', encoding=kwargs.get('encoding', 'utf-8')) as f:
                    urls = f.readlines()  ## passing decoding to urls
            elif '.xlsx' in file:
                temp_df = pd.read_excel(file, header=None)
                urls = list(temp_df[0])  ## passing decoding to urls
        if urls is not None:
            if type(urls) == str:
                data = parse.parse_qsl(urls)
                df = pd.DataFrame(data)
                df.columns = ['index', 'request']
                df.set_index('index', inplace=True)
                if save:
                    df.to_csv(f'request_{int(time.time())}.csv')
                return df
            elif type(urls) == list:  ## decoding a list of strings
                tmp_list = [parse.parse_qsl(data) for data in urls]
                tmp_dfs = [pd.DataFrame(data) for data in tmp_list]
                tmp_dfs2 = []
                for df, index in zip(tmp_dfs, range(len(tmp_dfs))):
                    df.columns = ['index', f"request {index + 1}"]
                    ## cleanup timestamp from the request url
                    string = df.iloc[0, 0]
                    df.iloc[0, 0] = re.search(
                        'http.*://(.+?)/s[0-9]+.*', string
                    ).group(1)  # tracking server
                    df.set_index('index', inplace=True)
                    new_df = df
                    tmp_dfs2.append(new_df)
                df_full = pd.concat(tmp_dfs2, axis=1)
                if save:
                    df_full.to_csv(f'requests_{int(time.time())}.csv')
                return df_full

    def getReportSuites(
        self,
        txt: str = None,
        rsid_list: str = None,
        limit: int = 100,
        extended_info: bool = False,
        save: bool = False,
    ) -> list:
        """
        Get the reportSuite IDs data. Returns a dataframe of reportSuite names and report suite ids.
        Arguments:
            txt : OPTIONAL : returns the reportSuites that match a specific text field
            rsid_list : OPTIONAL : returns the reportSuites that match the list of rsids set
            limit : OPTIONAL : how many reportSuites to retrieve per server call
            extended_info : OPTIONAL : if set to True, returns additional columns for each reportSuite
            save : OPTIONAL : if set to True, saves the list in a file. (Default False)
        """
        if self.loggingEnabled:
            self.logger.debug("Starting getReportSuites")
        nb_error, nb_empty = 0, 0  # used for the multi-thread loop
        params = {}
        params.update({'limit': str(limit)})
        params.update({'page': '0'})
        if txt is not None:
            params.update({'rsidContains': str(txt)})
        if rsid_list is not None:
            params.update({'rsids': str(rsid_list)})
        params.update(
            {"expansion": "name,parentRsid,currency,calendarType,timezoneZoneinfo"}
        )
        if self.loggingEnabled:
            self.logger.debug(f"parameters : {params}")
        rsids = self.connector.getData(
            self.endpoint_company + self._getRS, params=params, headers=self.header
        )
        content = rsids['content']
        if not extended_info:
            list_content = [
                {'name': item['name'], 'rsid': item['rsid']} for item in content
            ]
            df_rsids = pd.DataFrame(list_content)
        else:
            df_rsids = pd.DataFrame(content)
        total_page = rsids['totalPages']
        last_page = rsids['lastPage']
        if not last_page:  # if last_page is False
            callsToMake = total_page
            list_params = [{**params, 'page': page} for page in range(1, callsToMake)]
            list_urls = [
                self.endpoint_company + self._getRS for x in range(1, callsToMake)
            ]
            listheaders = [self.header for x in range(1, callsToMake)]
            workers = min(10, total_page)
            with futures.ThreadPoolExecutor(workers) as executor:
                res = executor.map(
                    lambda x, y, z: self.connector.getData(x, y, headers=z),
                    list_urls,
                    list_params,
                    listheaders,
                )
            res = list(res)
            list_data = [
                val
                for sublist in [r['content'] for r in res if 'content' in r.keys()]
                for val in sublist
            ]
            nb_error = sum(1 for elem in res if 'error_code' in elem.keys())
            nb_empty = sum(
                1
                for elem in res
                if 'content' in elem.keys() and len(elem['content']) == 0
            )
            if not extended_info:
                list_append = [
                    {'name': item['name'], 'rsid': item['rsid']}
                    for item in list_data
                ]
                df_append = pd.DataFrame(list_append)
            else:
                df_append = pd.DataFrame(list_data)
            df_rsids = df_rsids.append(df_append, ignore_index=True)
        if save:
            if self.loggingEnabled:
                self.logger.debug(f"saving rsids : {params}")
            df_rsids.to_csv('RSIDS.csv', sep='\t')
if nb_error > 0 or nb_empty > 0: message = f'WARNING : Retrieved data are partial.\n{nb_error}/{len(list_urls) + 1} requests returned an error.\n{nb_empty}/{len(list_urls)} requests returned an empty response. \nTry to use filter to retrieve reportSuite or increase limit per request' print(message) if self.loggingEnabled: self.logger.warning(message) return df_rsids def getVirtualReportSuites(self, extended_info: bool = False, limit: int = 100, filterIds: str = None, idContains: str = None, segmentIds: str = None, save: bool = False) -> list: """ return a lit of virtual reportSuites and their id. It can contain more information if expansion is selected. Arguments: extended_info : OPTIONAL : boolean to retrieve the maximum of information. limit : OPTIONAL : How many reportSuite retrieves per serverCall filterIds : OPTIONAL : comma delimited list of virtual reportSuite ID to be retrieved. idContains : OPTIONAL : element that should be contained in the Virtual ReportSuite Id segmentIds : OPTIONAL : comma delimited list of segmentId contained in the VRSID save : OPTIONAL : if set to True, it will save the list in a file. 
(Default False) """ if self.loggingEnabled: self.logger.debug(f"Starting getVirtualReportSuites") expansion_values = "globalCompanyKey,parentRsid,parentRsidName,timezone,timezoneZoneinfo,currentTimezoneOffset,segmentList,description,modified,isDeleted,dataCurrentAsOf,compatibility,dataSchema,sessionDefinition,curatedComponents,type" params = {"limit": limit} nb_error = 0 nb_empty = 0 list_urls = [] if extended_info: params['expansion'] = expansion_values if filterIds is not None: params['filterByIds'] = filterIds if idContains is not None: params['idContains'] = idContains if segmentIds is not None: params['segmentIds'] = segmentIds path = f"{self.endpoint_company}/reportsuites/virtualreportsuites" if self.loggingEnabled: self.logger.debug(f"params: {params}") vrsid = self.connector.getData( path, params=params, headers=self.header) content = vrsid['content'] if not extended_info: list_content = [{'name': item['name'], 'vrsid': item['id']} for item in content] df_vrsids = pd.DataFrame(list_content) else: df_vrsids = pd.DataFrame(content) total_page = vrsid['totalPages'] last_page = vrsid['lastPage'] if not last_page: # if last_page =False callsToMake = total_page list_params = [{**params, 'page': page} for page in range(1, callsToMake)] list_urls = [path for x in range(1, callsToMake)] listheaders = [self.header for x in range(1, callsToMake)] workers = min(10, total_page) with futures.ThreadPoolExecutor(workers) as executor: res = executor.map(lambda x, y, z: self.connector.getData( x, y, headers=z), list_urls, list_params, listheaders) res = list(res) list_data = [val for sublist in [r['content'] for r in res if 'content' in r.keys()] for val in sublist] nb_error = sum(1 for elem in res if 'error_code' in elem.keys()) nb_empty = sum(1 for elem in res if 'content' in elem.keys() and len( elem['content']) == 0) if not extended_info: list_append = [{'name': item['name'], 'vrsid': item['id']} for item in list_data] df_append = pd.DataFrame(list_append) else: 
df_append = pd.DataFrame(list_data) df_vrsids = df_vrsids.append(df_append, ignore_index=True) if save: df_vrsids.to_csv('VRSIDS.csv', sep='\t') if nb_error > 0 or nb_empty > 0: message = f'WARNING : Retrieved data are partial.\n{nb_error}/{len(list_urls) + 1} requests returned an error.\n{nb_empty}/{len(list_urls)} requests returned an empty response. \nTry to use filter to retrieve reportSuite or increase limit per request' print(message) if self.loggingEnabled: self.logger.warning(message) return df_vrsids def getVirtualReportSuite(self, vrsid: str = None, extended_info: bool = False, format: str = 'df') -> JsonOrDataFrameType: """ return a single virtual report suite ID information as dataframe. Arguments: vrsid : REQUIRED : The virtual reportSuite to be retrieved extended_info : OPTIONAL : boolean to add more information format : OPTIONAL : format of the output. 2 values "df" for dataframe and "raw" for raw json. """ if vrsid is None: raise Exception("require a Virtual ReportSuite ID") if self.loggingEnabled: self.logger.debug(f"Starting getVirtualReportSuite for {vrsid}") expansion_values = "globalCompanyKey,parentRsid,parentRsidName,timezone,timezoneZoneinfo,currentTimezoneOffset,segmentList,description,modified,isDeleted,dataCurrentAsOf,compatibility,dataSchema,sessionDefinition,curatedComponents,type" params = {} if extended_info: params['expansion'] = expansion_values path = f"{self.endpoint_company}/reportsuites/virtualreportsuites/{vrsid}" data = self.connector.getData(path, params=params, headers=self.header) if format == "df": data = pd.DataFrame({vrsid: data}) return data def getVirtualReportSuiteComponents(self, vrsid: str = None, nan_value=""): """ Uses the getVirtualReportSuite function to get a VRS and returns the VRS components for a VRS as a dataframe. VRS must have Component Curation enabled. 
Arguments: vrsid : REQUIRED : Virtual Report Suite ID nan_value : OPTIONAL : how to handle empty cells, default = "" """ if self.loggingEnabled: self.logger.debug(f"Starting getVirtualReportSuiteComponents") vrs_data = self.getVirtualReportSuite(extended_info=True, vrsid=vrsid) if "curatedComponents" not in vrs_data.index: return pd.DataFrame() components_cell = vrs_data[vrs_data.index == "curatedComponents"].iloc[0, 0] return pd.DataFrame(components_cell).fillna(value=nan_value) def createVirtualReportSuite(self, name: str = None, parentRsid: str = None, segmentList: list = None, dataSchema: str = "Cache", data_dict: dict = None, **kwargs) -> dict: """ Create a new virtual report suite based on the information provided. Arguments: name : REQUIRED : name of the virtual reportSuite parentRsid : REQUIRED : Parent reportSuite ID for the VRS segmentLists : REQUIRED : list of segment id to be applied on the ReportSuite. dataSchema : REQUIRED : Type of schema used for the VRSID. (default "Cache") data_dict : OPTIONAL : you can pass directly the dictionary. 
""" if self.loggingEnabled: self.logger.debug(f"Starting createVirtualReportSuite") path = f"{self.endpoint_company}/reportsuites/virtualreportsuites" expansion_values = "globalCompanyKey,parentRsid,parentRsidName,timezone,timezoneZoneinfo,currentTimezoneOffset,segmentList,description,modified,isDeleted,dataCurrentAsOf,compatibility,dataSchema,sessionDefinition,curatedComponents,type" params = {'expansion': expansion_values} if data_dict is None: body = { "name": name, "parentRsid": parentRsid, "segmentList": segmentList, "dataSchema": dataSchema, "description": kwargs.get('description', '') } else: if 'name' not in data_dict.keys() or 'parentRsid' not in data_dict.keys() or 'segmentList' not in data_dict.keys() or 'dataSchema' not in data_dict.keys(): if self.loggingEnabled: self.logger.error(f"Missing one or more fundamental keys : name, parentRsid, segmentList, dataSchema") raise Exception("Missing one or more fundamental keys : name, parentRsid, segmentList, dataSchema") body = data_dict res = self.connector.postData( path, params=params, data=body, headers=self.header) return res def updateVirtualReportSuite(self, vrsid: str = None, data_dict: dict = None, **kwargs) -> dict: """ Updates a Virtual Report Suite based on a JSON-like dictionary (same structure as createVirtualReportSuite) Note that to update components, you need to supply ALL components currently associated with this suite. Supplying only the components you want to change will remove all others from the VR Suite! 
Arguments: vrsid : REQUIRED : The id of the virtual report suite to update data_dict : a json-like dictionary of the vrs data to update """ if vrsid is None: raise Exception("require a virtual reportSuite ID") if self.loggingEnabled: self.logger.debug(f"Starting updateVirtualReportSuite for {vrsid}") path = f"{self.endpoint_company}/reportsuites/virtualreportsuites/{vrsid}" body = data_dict res = self.connector.putData(path, data=body, headers=self.header) if self.loggingEnabled: self.logger.debug(f"updateVirtualReportSuite response : {res}") return res def deleteVirtualReportSuite(self, vrsid: str = None) -> str: """ Delete a Virtual Report Suite based on the id passed. Arguments: vrsid : REQUIRED : The id of the virtual reportSuite to delete. """ if vrsid is None: raise Exception("require a Virtual ReportSuite ID") if self.loggingEnabled: self.logger.debug(f"Starting deleteVirtualReportSuite for {vrsid}") path = f"{self.endpoint_company}/reportsuites/virtualreportsuites/{vrsid}" res = self.connector.deleteData(path, headers=self.header) if self.loggingEnabled: self.logger.debug(f"deleteVirtualReportSuite {vrsid} response : {res}") return res def validateVirtualReportSuite(self, name: str = None, parentRsid: str = None, segmentList: list = None, dataSchema: str = "Cache", data_dict: dict = None, **kwargs) -> dict: """ Validate the object to create a new virtual report suite based on the information provided. Arguments: name : REQUIRED : name of the virtual reportSuite parentRsid : REQUIRED : Parent reportSuite ID for the VRS segmentLists : REQUIRED : list of segment ids to be applied on the ReportSuite. dataSchema : REQUIRED : Type of schema used for the VRSID (default : Cache). data_dict : OPTIONAL : you can pass directly the dictionary. 
""" if self.loggingEnabled: self.logger.debug(f"Starting validateVirtualReportSuite") path = f"{self.endpoint_company}/reportsuites/virtualreportsuites/validate" expansion_values = "globalCompanyKey, parentRsid, parentRsidName, timezone, timezoneZoneinfo, currentTimezoneOffset, segmentList, description, modified, isDeleted, dataCurrentAsOf, compatibility, dataSchema, sessionDefinition, curatedComponents, type" if data_dict is None: body = { "name": name, "parentRsid": parentRsid, "segmentList": segmentList, "dataSchema": dataSchema, "description": kwargs.get('description', '') } else: if 'name' not in data_dict.keys() or 'parentRsid' not in data_dict.keys() or 'segmentList' not in data_dict.keys() or 'dataSchema' not in data_dict.keys(): raise Exception( "Missing one or more fundamental keys : name, parentRsid, segmentList, dataSchema") body = data_dict res = self.connector.postData(path, data=body, headers=self.header) if self.loggingEnabled: self.logger.debug(f"validateVirtualReportSuite response : {res}") return res def getDimensions(self, rsid: str, tags: bool = False, description:bool=False, save=False, **kwargs) -> pd.DataFrame: """ Retrieve the list of dimensions from a specific reportSuite. Shrink columns to simplify output. Returns the data frame of available dimensions. Arguments: rsid : REQUIRED : Report Suite ID from which you want the dimensions tags : OPTIONAL : If you would like to have additional information, such as tags. (bool : default False) description : OPTIONAL : Trying to add the description column. It may break the method. 
save : OPTIONAL : If set to True, it will save the info in a csv file (bool : default False) Possible kwargs: full : Boolean : Doesn't shrink the number of columns if set to true example : getDimensions(rsid,full=True) """ if self.loggingEnabled: self.logger.debug(f"Starting getDimensions") params = {} if tags: params.update({'expansion': 'tags'}) params.update({'rsid': rsid}) dims = self.connector.getData(self.endpoint_company + self._getDimensions, params=params, headers=self.header) df_dims = pd.DataFrame(dims) columns = ['id', 'name', 'category', 'type', 'parent', 'pathable'] if description: columns.append('description') if kwargs.get('full', False): new_cols = pd.DataFrame(df_dims.support.values.tolist(), columns=['support_oberon', 'support_dw']) # extract list in column new_df = df_dims.merge(new_cols, right_index=True, left_index=True) new_df.drop(['reportable', 'support'], axis=1, inplace=True) df_dims = new_df else: df_dims = df_dims[columns] if save: df_dims.to_csv(f'dimensions_{rsid}.csv') return df_dims def getMetrics(self, rsid: str, tags: bool = False, save=False, description:bool=False, dataGroup:bool=False, **kwargs) -> pd.DataFrame: """ Retrieve the list of metrics from a specific reportSuite. Shrink columns to simplify output. Returns the data frame of available metrics. Arguments: rsid : REQUIRED : Report Suite ID from which you want the dimensions (str) tags : OPTIONAL : If you would like to have additional information, such as tags.(bool : default False) dataGroup : OPTIONAL : Adding dataGroups to the column exported. Default False. May break the report. save : OPTIONAL : If set to True, it will save the info in a csv file (bool : default False) Possible kwargs: full : Boolean : Doesn't shrink the number of columns if set to true. 
""" if self.loggingEnabled: self.logger.debug(f"Starting getMetrics") params = {} if tags: params.update({'expansion': 'tags'}) params.update({'rsid': rsid}) metrics = self.connector.getData(self.endpoint_company + self._getMetrics, params=params, headers=self.header) df_metrics = pd.DataFrame(metrics) columns = ['id', 'name', 'category', 'type', 'precision', 'segmentable'] if dataGroup: columns.append('dataGroup') if description: columns.append('description') if kwargs.get('full', False): new_cols = pd.DataFrame(df_metrics.support.values.tolist(), columns=[ 'support_oberon', 'support_dw']) new_df = df_metrics.merge( new_cols, right_index=True, left_index=True) new_df.drop('support', axis=1, inplace=True) df_metrics = new_df else: df_metrics = df_metrics[columns] if save: df_metrics.to_csv(f'metrics_{rsid}.csv', sep='\t') return df_metrics def getUsers(self, save: bool = False, **kwargs) -> pd.DataFrame: """ Retrieve the list of users for a login company.Returns a data frame. Arguments: save : OPTIONAL : Save the data in a file (bool : default False). Possible kwargs: limit : Nummber of results per requests. Default 100. 
expansion : string list such as "lastAccess,createDate" """ if self.loggingEnabled: self.logger.debug(f"Starting getUsers") list_urls = [] nb_error, nb_empty = 0, 0 # use for multi-thread loop params = {'limit': kwargs.get('limit', 100)} if kwargs.get("expansion", None) is not None: params["expansion"] = kwargs.get("expansion", None) path = "/users" users = self.connector.getData(self.endpoint_company + path, params=params, headers=self.header) data = users['content'] lastPage = users['lastPage'] if not lastPage: # lastPage is False when more pages remain callsToMake = users['totalPages'] list_params = [{'limit': params['limit'], 'page': page} for page in range(1, callsToMake)] list_urls = [self.endpoint_company + "/users" for x in range(1, callsToMake)] listheaders = [self.header for x in range(1, callsToMake)] workers = min(10, len(list_params)) with futures.ThreadPoolExecutor(workers) as executor: res = executor.map(lambda x, y, z: self.connector.getData(x, y, headers=z), list_urls, list_params, listheaders) res = list(res) users_lists = [elem['content'] for elem in res if 'content' in elem.keys()] nb_error = sum(1 for elem in res if 'error_code' in elem.keys()) nb_empty = sum(1 for elem in res if 'content' in elem.keys() and len(elem['content']) == 0) append_data = [val for sublist in [data for data in users_lists] for val in sublist] # flatten list of lists data = data + append_data df_users = pd.DataFrame(data) columns = ['email', 'login', 'fullName', 'firstName', 'lastName', 'admin', 'loginId', 'imsUserId', 'createDate', 'lastAccess', 'title', 'disabled', 'phoneNumber', 'companyid'] df_users = df_users[columns] df_users['createDate'] = pd.to_datetime(df_users['createDate']) df_users['lastAccess'] = pd.to_datetime(df_users['lastAccess']) if save: df_users.to_csv(f'users_{int(time.time())}.csv', sep='\t') if nb_error > 0 or nb_empty > 0: print( f'WARNING : Retrieved data are partial.\n{nb_error}/{len(list_urls) + 1} requests returned an 
error.\n{nb_empty}/{len(list_urls)} requests returned an empty response. \nTry to use filter to retrieve users or increase limit') return df_users def getUserMe(self,loginId:str=None)->dict: """ Retrieve the user currently authenticated (me). The loginId argument is not used: the endpoint always returns the authenticated user. Argument: loginId : OPTIONAL : Login ID for the user (currently ignored) """ path = f"/users/me" res = self.connector.getData(self.endpoint_company + path) return res def getSegments(self, name: str = None, tagNames: str = None, inclType: str = 'all', rsids_list: list = None, sidFilter: list = None, extended_info: bool = False, format: str = "df", save: bool = False, verbose: bool = False, **kwargs) -> JsonListOrDataFrameType: """ Retrieve the list of segments. Returns a data frame. Arguments: name : OPTIONAL : Filter to only include segments that contain the name (str) tagNames : OPTIONAL : Filter list to only include segments that contain one of the tags (string delimited with comma, can be list as well) inclType : OPTIONAL : type of segments to be retrieved. (str) Possible values: - all : Default value (all possible segments) - shared : shared segments - template : template segments - deleted : deleted segments - internal : internal segments - curatedItem : curated segments rsid_list : OPTIONAL : Filter list to only include segments tied to specified RSID list (list) sidFilter : OPTIONAL : Filter list to only include segments in the specified list (list) extended_info : OPTIONAL : additional segment metadata fields to include on response (bool : default False) if set to true, returns reportSuiteName, ownerFullName, modified, tags, compatibility, definition format : OPTIONAL : defines the format returned by the query. (Default df) possible values : "df" : default value that returns a dataframe "raw" : returns a list of values, more or less as returned by the server. 
save : OPTIONAL : If set to True, it will save the info in a csv file (bool : default False) verbose : OPTIONAL : If set to True, print some information Possible kwargs: limit : number of segments retrieved by request. default 500: Limited to 1000 by the AnalyticsAPI. NOTE : Segment Endpoint doesn't support multi-threading. Default to 500. """ if self.loggingEnabled: self.logger.debug(f"Starting getSegments") limit = int(kwargs.get('limit', 500)) params = {'includeType': 'all', 'limit': limit} if extended_info: params.update( {'expansion': 'reportSuiteName,ownerFullName,created,modified,tags,compatibility,definition,shares'}) if name is not None: params.update({'name': str(name)}) if tagNames is not None: if type(tagNames) == list: tagNames = ','.join(tagNames) params.update({'tagNames': tagNames}) if inclType != 'all': params['includeType'] = inclType if rsids_list is not None: if type(rsids_list) == list: rsids_list = ','.join(rsids_list) params.update({'rsids': rsids_list}) if sidFilter is not None: if type(sidFilter) == list: sidFilter = ','.join(sidFilter) params.update({'rsids': sidFilter}) data = [] lastPage = False page_nb = 0 if verbose: print("Starting requesting segments") while not lastPage: params['page'] = page_nb segs = self.connector.getData(self.endpoint_company + self._getSegments, params=params, headers=self.header) data += segs['content'] lastPage = segs['lastPage'] page_nb += 1 if verbose and page_nb % 10 == 0: print(f"request #{page_nb / 10}") if format == "df": segments = pd.DataFrame(data) else: segments = data if save and format == "df": segments.to_csv(f'segments_{int(time.time())}.csv', sep='\t') if verbose: print( f'Saving data in file : {os.getcwd()}{os.sep}segments_{int(time.time())}.csv') elif save and format == "raw": with open(f"segments_{int(time.time())}.csv","w") as f: f.write(json.dumps(segments,indent=4)) return segments def getSegment(self, segment_id: str = None,full:bool=False, *args) -> dict: """ Get a specific segment from 
the ID. Returns the object of the segment. Arguments: segment_id : REQUIRED : the segment id to retrieve. full : OPTIONAL : Add all possible options Possible args: - "reportSuiteName" : string : to retrieve reportSuite attached to the segment - "ownerFullName" : string : to retrieve ownerFullName attached to the segment - "modified" : string : to retrieve when segment was modified - "tags" : string : to retrieve tags attached to the segment - "compatibility" : string : to retrieve which tool is compatible - "definition" : string : definition of the segment - "publishingStatus" : string : status for the segment - "definitionLastModified" : string : last definition of the segment - "categories" : string : categories of the segment """ ValidArgs = ["reportSuiteName", "ownerFullName", "modified", "tags", "compatibility", "definition", "publishingStatus", "definitionLastModified", "categories"] if segment_id is None: raise Exception("Expected a segment id") if self.loggingEnabled: self.logger.debug(f"Starting getSegment for {segment_id}") path = f"/segments/{segment_id}" args = [element for element in args if element in ValidArgs] # keep only valid expansion values (args is a tuple; removing in place would fail) params = {'expansion': ','.join(args)} if full: params = {'expansion': ','.join(ValidArgs)} res = self.connector.getData(self.endpoint_company + path, params=params, headers=self.header) return res def scanSegment(self,segment:Union[str,dict],verbose:bool=False)->dict: """ Return the dimensions, metrics and reportSuite used and the main scope of the segment. Arguments: segment : REQUIRED : either the ID of the segment or the full definition. verbose : OPTIONAL : print some comments. 
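The scan works by running plain regular expressions over the stringified definition. A minimal sketch of that extraction step, using a made-up, simplified definition (not a real API payload):

```python
import re

# Hypothetical, simplified segment definition (shape only; not a real API payload)
definition = str({
    "func": "segment",
    "container": {
        "context": "visits",
        "pred": {"val": {"name": "variables/evar1"}, "num": {"name": "metrics/visits"}},
    },
})

# Same quoting convention the method relies on: component IDs appear in single quotes
dimensions = re.findall(r"'(variables/.+?)'", definition)
metrics = re.findall(r"'(metrics/.+?)'", definition)
print(set(dimensions), set(metrics))
```

The real definitions returned by the API are more deeply nested, but the quoting convention the regexes rely on is the same.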
""" if self.loggingEnabled: self.logger.debug(f"Starting scanSegment") if type(segment) == str: if verbose: print('retrieving segment definition') defSegment = self.getSegment(segment,full=True) elif type(segment) == dict: defSegment = deepcopy(segment) if 'definition' not in defSegment.keys(): raise KeyError('missing "definition" key ') if verbose: print('copied segment definition') mydef = str(defSegment['definition']) dimensions : list = re.findall("'(variables/.+?)'",mydef) metrics : list = re.findall("'(metrics/.+?)'",mydef) reportSuite = defSegment['rsid'] scope = re.search("'context': '(.+)'}[^'context']+",mydef) res = { 'dimensions' : set(dimensions) if len(dimensions)>0 else {}, 'metrics' : set(metrics) if len(metrics)>0 else {}, 'rsid' : reportSuite, 'scope' : scope.group(1) } return res def createSegment(self, segmentJSON: dict = None) -> dict: """ Method that creates a new segment based on the dictionary passed to it. Arguments: segmentJSON : REQUIRED : the dictionary that represents the JSON statement for the segment. More information at this address <https://adobedocs.github.io/analytics-2.0-apis/#/segments/segments_createSegment> """ if self.loggingEnabled: self.logger.debug(f"starting createSegment") if segmentJSON is None: print('No segment data has been pushed') return None data = deepcopy(segmentJSON) seg = self.connector.postData( self.endpoint_company + self._getSegments, data=data, headers=self.header ) return seg def createSegmentValidate(self, segmentJSON: dict = None) -> object: """ Method that validate a new segment based on the dictionary passed to it. Arguments: segmentJSON : REQUIRED : the dictionary that represents the JSON statement for the segment. 
More information at this address <https://adobedocs.github.io/analytics-2.0-apis/#/segments/segments_createSegment> """ if self.loggingEnabled: self.logger.debug(f"starting createSegmentValidate") if segmentJSON is None: print('No segment data has been pushed') return None data = deepcopy(segmentJSON) path = "/segments/validate" seg = self.connector.postData(self.endpoint_company +path,data=data) return seg def updateSegment(self, segmentID: str = None, segmentJSON: dict = None) -> object: """ Method that updates a specific segment based on the dictionary passed to it. Arguments: segmentID : REQUIRED : Segment ID to be updated segmentJSON : REQUIRED : the dictionary that represents the JSON statement for the segment. """ if self.loggingEnabled: self.logger.debug(f"starting updateSegment") if segmentJSON is None or segmentID is None: print('No segment or segmentID data has been pushed') if self.loggingEnabled: self.logger.error(f"No segment or segmentID data has been pushed") return None data = deepcopy(segmentJSON) seg = self.connector.putData( self.endpoint_company + self._getSegments + '/' + segmentID, data=data, headers=self.header ) return seg def deleteSegment(self, segmentID: str = None) -> object: """ Method that deletes a specific segment based on the ID passed. Arguments: segmentID : REQUIRED : Segment ID to be deleted """ if segmentID is None: print('No segmentID data has been pushed') return None if self.loggingEnabled: self.logger.debug(f"starting deleteSegment for {segmentID}") seg = self.connector.deleteData(self.endpoint_company + self._getSegments + '/' + segmentID, headers=self.header) return seg def getCalculatedMetrics( self, name: str = None, tagNames: str = None, inclType: str = 'all', rsids_list: list = None, extended_info: bool = False, save=False, format:str='df', **kwargs ) -> pd.DataFrame: """ Retrieve the list of calculated metrics. Returns a data frame. 
Arguments: name : OPTIONAL : Filter to only include calculated metrics that contains the name (str) tagNames : OPTIONAL : Filter list to only include calculated metrics that contains one of the tags (string delimited with comma, can be list as well) inclType : OPTIONAL : type of calculated Metrics to be retrieved. (str) Possible values: - all : Default value (all calculated metrics possibles) - shared : shared calculated metrics - template : template calculated metrics rsid_list : OPTIONAL : Filter list to only include segments tied to specified RSID list (list) extended_info : OPTIONAL : additional segment metadata fields to include on response (list) additional infos: reportSuiteName,definition, ownerFullName, modified, tags, compatibility save : OPTIONAL : If set to True, it will save the info in a csv file (Default False) format : OPTIONAL : format of the output. 2 values "df" for dataframe and "raw" for raw json. Possible kwargs: limit : number of segments retrieved by request. default 500: Limited to 1000 by the AnalyticsAPI.(int) """ if self.loggingEnabled: self.logger.debug(f"starting getCalculatedMetrics") limit = int(kwargs.get('limit', 500)) params = {'includeType': inclType, 'limit': limit} if name is not None: params.update({'name': str(name)}) if tagNames is not None: if type(tagNames) == list: tagNames = ','.join(tagNames) params.update({'tagNames': tagNames}) if inclType != 'all': params['includeType'] = inclType if rsids_list is not None: if type(rsids_list) == list: rsids_list = ','.join(rsids_list) params.update({'rsids': rsids_list}) if extended_info: params.update( {'expansion': 'reportSuiteName,definition,ownerFullName,modified,tags,categories,compatibility,shares'}) metrics = self.connector.getData(self.endpoint_company + self._getCalcMetrics, params=params) data = metrics['content'] lastPage = metrics['lastPage'] if not lastPage: # check if lastpage is inversed of False page_nb = 0 while not lastPage: page_nb += 1 params['page'] = page_nb 
metrics = self.connector.getData(self.endpoint_company + self._getCalcMetrics, params=params, headers=self.header) data += metrics['content'] lastPage = metrics['lastPage'] if format == "raw": if save: with open(f'calculated_metrics_{int(time.time())}.json','w') as f: f.write(json.dumps(data,indent=4)) return data df_calc_metrics = pd.DataFrame(data) if save: df_calc_metrics.to_csv(f'calculated_metrics_{int(time.time())}.csv', sep='\t') return df_calc_metrics def getCalculatedMetric(self,calculatedMetricId:str=None,full:bool=True)->dict: """ Return a dictionary on the calculated metrics requested. Arguments: calculatedMetricId : REQUIRED : The calculated metric ID to be retrieved. full : OPTIONAL : additional segment metadata fields to include on response (list) additional infos: reportSuiteName,definition, ownerFullName, modified, tags, compatibility """ if calculatedMetricId is None: raise ValueError("Require a calculated metrics ID") if self.loggingEnabled: self.logger.debug(f"starting getCalculatedMetric for {calculatedMetricId}") params = {} if full: params.update({'expansion': 'reportSuiteName,definition,ownerFullName,modified,tags,categories,compatibility'}) path = f"/calculatedmetrics/{calculatedMetricId}" res = self.connector.getData(self.endpoint_company+path,params=params) return res def scanCalculatedMetric(self,calculatedMetric:Union[str,dict],verbose:bool=False)->dict: """ Return a dictionary of metrics and dimensions used in the calculated metrics. 
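Metric references are pulled out of the stringified definition with a regular expression; a small sketch with an invented calculated metric definition (not a real API payload):

```python
import re

# Made-up calculated metric definition, stringified the same way the method does
mydef = str({
    "formula": {
        "func": "divide",
        "col1": {"name": "metrics/orders"},
        "col2": {"name": "metrics/visits"},
    }
})
metrics = re.findall(r"'(metrics/.+?)'", mydef)
print(set(metrics))
```

Segments listed under the definition's compatibility are additionally scanned with scanSegment, and their dimensions and metrics are merged into the result.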
""" if self.loggingEnabled: self.logger.debug(f"starting scanCalculatedMetric") if type(calculatedMetric) == str: if verbose: print('retrieving calculated metrics definition') cm = self.getCalculatedMetric(calculatedMetric,full=True) elif type(calculatedMetric) == dict: cm = deepcopy(calculatedMetric) if 'definition' not in cm.keys(): raise KeyError('missing "definition" key') if verbose: print('copied calculated metrics definition') mydef = str(cm['definition']) segments:list = cm['compatibility'].get('segments',[]) res = {"dimensions":[],'metrics':[]} for segment in segments: if verbose: print(f"retrieving segment {segment} definition") tmp:dict = self.scanSegment(segment) res['dimensions'] += [dim for dim in tmp['dimensions']] res['metrics'] += [met for met in tmp['metrics']] metrics : list = re.findall("'(metrics/.+?)'",mydef) res['metrics'] += metrics res['rsid'] = cm['rsid'] res['metrics'] = set(res['metrics']) if len(res['metrics'])>0 else {} res['dimensions'] = set(res['dimensions']) if len(res['dimensions'])>0 else {} return res def createCalculatedMetric(self, metricJSON: dict = None) -> dict: """ Method that create a specific calculated metric based on the dictionary passed to it. Arguments: metricJSON : REQUIRED : Calculated Metrics information to create. 
(Required: name, definition, rsid) More information can be found at this address https://adobedocs.github.io/analytics-2.0-apis/#/calculatedmetrics/calculatedmetrics_createCalculatedMetric """ if self.loggingEnabled: self.logger.debug(f"starting createCalculatedMetric") if metricJSON is None or type(metricJSON) != dict: if self.loggingEnabled: self.logger.error(f'Expected a dictionary to create the calculated metrics') raise Exception( "Expected a dictionary to create the calculated metrics") if 'name' not in metricJSON.keys() or 'definition' not in metricJSON.keys() or 'rsid' not in metricJSON.keys(): if self.loggingEnabled: self.logger.error(f'Expected "name", "definition" and "rsid" in the data') raise KeyError( 'Expected "name", "definition" and "rsid" in the data') cm = self.connector.postData(self.endpoint_company + self._getCalcMetrics, headers=self.header, data=metricJSON) return cm def createCalculatedMetricValidate(self,metricJSON: dict=None)->dict: """ Method that validates a specific calculated metric definition based on the dictionary passed to it. Arguments: metricJSON : REQUIRED : Calculated Metrics information to create. 
(Required: name, definition, rsid) More information can be found at this address https://adobedocs.github.io/analytics-2.0-apis/#/calculatedmetrics/calculatedmetrics_createCalculatedMetric """ if self.loggingEnabled: self.logger.debug(f"starting createCalculatedMetricValidate") if metricJSON is None or type(metricJSON) != dict: raise Exception( "Expected a dictionary to create the calculated metrics") if 'name' not in metricJSON.keys() or 'definition' not in metricJSON.keys() or 'rsid' not in metricJSON.keys(): if self.loggingEnabled: self.logger.error(f'Expected "name", "definition" and "rsid" in the data') raise KeyError( 'Expected "name", "definition" and "rsid" in the data') path = "/calculatedmetrics/validate" cm = self.connector.postData(self.endpoint_company+path, data=metricJSON) return cm def updateCalculatedMetric(self, calcID: str = None, calcJSON: dict = None) -> object: """ Method that updates a specific Calculated Metrics based on the dictionary passed to it. Arguments: calcID : REQUIRED : Calculated Metric ID to be updated calcJSON : REQUIRED : the dictionary that represents the JSON statement for the calculated metric. """ if calcJSON is None or calcID is None: print('No calcMetric or calcMetric JSON data has been passed') return None if self.loggingEnabled: self.logger.debug(f"starting updateCalculatedMetric for {calcID}") data = deepcopy(calcJSON) cm = self.connector.putData( self.endpoint_company + self._getCalcMetrics + '/' + calcID, data=data, headers=self.header ) return cm def deleteCalculatedMetric(self, calcID: str = None) -> object: """ Method that deletes a specific calculated metric based on the ID passed. 
Arguments: calcID : REQUIRED : Calculated Metrics ID to be deleted """ if calcID is None: print('No calculated metrics data has been passed') return None if self.loggingEnabled: self.logger.debug(f"starting deleteCalculatedMetric for {calcID}") cm = self.connector.deleteData( self.endpoint_company + self._getCalcMetrics + '/' + calcID, headers=self.header ) return cm def getDateRanges(self, extended_info: bool = False, save: bool = False, includeType: str = 'all',verbose:bool=False, **kwargs) -> pd.DataFrame: """ Get the list of date ranges available for the user. Arguments: extended_info : OPTIONAL : additional segment metadata fields to include on response additional infos: reportSuiteName, ownerFullName, modified, tags, compatibility, definition save : OPTIONAL : If set to True, it will save the info in a csv file (Default False) includeType : Include additional date ranges not owned by user. The "all" option takes precedence over "shared" Possible values are all, shared, templates. You can add all of them as comma separated string. Possible kwargs: limit : number of segments retrieved by request. default 500: Limited to 1000 by the AnalyticsAPI. full : Boolean : Doesn't shrink the number of columns if set to true """ if self.loggingEnabled: self.logger.debug(f"starting getDateRanges") limit = int(kwargs.get('limit', 500)) includeType = includeType.split(',') params = {'limit': limit, 'includeType': includeType} if extended_info: params.update( {'expansion': 'definition,ownerFullName,modified,tags'}) dateRanges = self.connector.getData( self.endpoint_company + self._getDateRanges, params=params, headers=self.header, verbose=verbose ) data = dateRanges['content'] df_dates = pd.DataFrame(data) if save: df_dates.to_csv('date_range.csv', index=False) return df_dates def getDateRange(self,dateRangeID:str=None)->dict: """ Get a specific date range based on the ID. Arguments: dateRangeID : REQUIRED : the date range ID to be retrieved. 
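The request appends the ID to the date-ranges endpoint and asks for a fixed expansion. A sketch of the URL construction (endpoint, path, and ID values here are illustrative, not real):

```python
# Illustrative values; the real endpoint_company and path come from the configured connector
endpoint_company = "https://analytics.adobe.io/api/mycompany"
_getDateRanges = "/dateranges"  # hypothetical path constant
dateRangeID = "5a1234abcdef"    # hypothetical date range ID

params = {"expansion": "definition,ownerFullName,modified,tags"}
url = endpoint_company + f"{_getDateRanges}/{dateRangeID}"
print(url)
```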
""" if dateRangeID is None: raise ValueError("No date range ID has been passed") if self.loggingEnabled: self.logger.debug(f"starting getDateRange with ID: {dateRangeID}") params ={ "expansion":"definition,ownerFullName,modified,tags" } dr = self.connector.getData( self.endpoint_company + f"{self._getDateRanges}/{dateRangeID}", params=params ) return dr def updateDateRange(self, dateRangeID: str = None, dateRangeJSON: dict = None) -> dict: """ Method that updates a specific Date Range based on the dictionary passed to it. Arguments: dateRangeID : REQUIRED : Date Range ID to be updated dateRangeJSON : REQUIRED : the dictionary that represents the JSON statement for the date Range. """ if dateRangeJSON is None or dateRangeID is None: raise ValueError("No date range or date range JSON data have been passed") if self.loggingEnabled: self.logger.debug(f"starting updateDateRange") data = deepcopy(dateRangeJSON) dr = self.connector.putData( self.endpoint_company + self._getDateRanges + '/' + dateRangeID, data=data, headers=self.header ) return dr def deleteDateRange(self, dateRangeID: str = None) -> object: """ Method that deletes a specific date Range based on the id passed. Arguments: dateRangeID : REQUIRED : ID of Date Range to be deleted """ if dateRangeID is None: print('No Date Range ID has been pushed') return None if self.loggingEnabled: self.logger.debug(f"starting deleteDateRange for {dateRangeID}") response = self.connector.deleteData( self.endpoint_company + self._getDateRanges + '/' + dateRangeID, headers=self.header ) return response def getCalculatedFunctions(self, **kwargs) -> pd.DataFrame: """ Returns the calculated metrics functions. 
""" if self.loggingEnabled: self.logger.debug(f"starting getCalculatedFunctions") path = "/calculatedmetrics/functions" limit = int(kwargs.get('limit', 500)) params = {'limit': limit} funcs = self.connector.getData( self.endpoint_company + path, params=params, headers=self.header ) df = pd.DataFrame(funcs) return df def getTags(self, limit: int = 100, **kwargs) -> list: """ Return the list of tags Arguments: limit : OPTIONAL : Amount of tag to be returned by request. Default 100 """ if self.loggingEnabled: self.logger.debug(f"starting getTags") path = "/componentmetadata/tags" params = {'limit': limit} if kwargs.get('page', False): params['page'] = kwargs.get('page', 0) res = self.connector.getData(self.endpoint_company + path, params=params, headers=self.header) data = res['content'] if not res['lastPage']: page = res['number'] + 1 data += self.getTags(limit=limit, page=page) return data def getTag(self, tagId: str = None) -> dict: """ Return the a tag by its ID. Arguments: tagId : REQUIRED : the Tag ID to be retrieved. """ if tagId is None: raise Exception("Require a tag ID for this method.") if self.loggingEnabled: self.logger.debug(f"starting getTag for {tagId}") path = f"/componentmetadata/tags/{tagId}" res = self.connector.getData(self.endpoint_company + path, headers=self.header) return res def getComponentTagName(self, tagNames: str = None, componentType: str = None) -> dict: """ Given a comma separated list of tag names, return component ids associated with them. Arguments: tagNames : REQUIRED : Comma separated list of tag names. componentType : REQUIRED : The component type to operate on. 
Available values : segment, dashboard, bookmark, calculatedMetric, project, dateRange, metric, dimension, virtualReportSuite, scheduledJob, alert, classificationSet """ path = "/componentmetadata/tags/tagnames" if tagNames is None: raise Exception("Requires tag names to be provided") if self.loggingEnabled: self.logger.debug(f"starting getComponentTagName for {tagNames}") if componentType is None: raise Exception("Requires a Component Type to be provided") params = { "tagNames": tagNames, "componentType": componentType } res = self.connector.getData(self.endpoint_company + path, params=params, headers=self.header) return res def searchComponentsTags(self, componentType: str = None, componentIds: list = None) -> dict: """ Search for the tags of a list of component by their ids. Arguments: componentType : REQUIRED : The component type to use in the search. Available values : segment, dashboard, bookmark, calculatedMetric, project, dateRange, metric, dimension, virtualReportSuite, scheduledJob, alert, classificationSet componentIds : REQUIRED : List of components Ids to use. """ if self.loggingEnabled: self.logger.debug(f"starting searchComponentsTags") if componentType is None: raise Exception("ComponentType is required") if componentIds is None or type(componentIds) != list: raise Exception("componentIds is required as a list of ids") path = "/componentmetadata/tags/component/search" obj = { "componentType": componentType, "componentIds": componentIds } if self.loggingEnabled: self.logger.debug(f"params {obj}") res = self.connector.postData(self.endpoint_company + path, data=obj, headers=self.header) return res def createTags(self, data: list = None) -> dict: """ Create a new tag and applies that new tag to the passed components. Arguments: data : REQUIRED : list of the tag to be created with their component relation. 
Example of data : [ { "id": 0, "name": "string", "description": "string", "components": [ { "componentType": "string", "componentId": "string", "tags": [ "Unknown Type: Tag" ] } ] } ] """ if self.loggingEnabled: self.logger.debug(f"starting createTags") if data is None: raise Exception("Requires a list of tags to be created") path = "/componentmetadata/tags" if self.loggingEnabled: self.logger.debug(f"data: {data}") res = self.connector.postData(self.endpoint_company + path, data=data, headers=self.header) return res def deleteTags(self, componentType: str = None, componentIds: str = None) -> str: """ Delete all tags from the component Type and the component ids specified. Arguments: componentIds : REQUIRED : the Comma-separated list of componentIds to operate on. componentType : REQUIRED : The component type to operate on. Available values : segment, dashboard, bookmark, calculatedMetric, project, dateRange, metric, dimension, virtualReportSuite, scheduledJob, alert, classificationSet """ if self.loggingEnabled: self.logger.debug(f"starting deleteTags") if componentType is None: raise Exception("require a component type") if componentIds is None: raise Exception("require component ID(s)") path = "/componentmetadata/tags" params = { "componentType": componentType, "componentIds": componentIds } res = self.connector.deleteData(self.endpoint_company + path, params=params, headers=self.header) return res def deleteTag(self, tagId: str = None) -> str: """ Delete a tag based on its ID. Arguments: tagId : REQUIRED : The tag ID to be deleted. 
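The endpoint path embeds the tag ID, so it has to be built with an f-string; a sketch with a hypothetical ID:

```python
tagId = "12345"  # hypothetical tag ID
path = f"/componentmetadata/tags/{tagId}"  # f-string so the ID is interpolated, not left literal
print(path)
```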
""" if tagId is None: raise Exception("A tag ID is required") if self.loggingEnabled: self.logger.debug(f"starting deleteTag for {tagId}") path = "โ€‹/componentmetadataโ€‹/tagsโ€‹/{tagId}" res = self.connector.deleteData(self.endpoint_company + path, headers=self.header) return res def getComponentTags(self, componentId: str = None, componentType: str = None) -> list: """ Given a componentId, return all tags associated with that component. Arguments: componentId : REQUIRED : The componentId to operate on. Currently this is just the segmentId. componentType : REQUIRED : The component type to operate on. segment, dashboard, bookmark, calculatedMetric, project, dateRange, metric, dimension, virtualReportSuite, scheduledJob, alert, classificationSet """ if self.loggingEnabled: self.logger.debug(f"starting getComponentTags") path = "/componentmetadata/tags/search" if componentType is None: raise Exception("require a component type") if componentId is None: raise Exception("require a component ID") params = {"componentId": componentId, "componentType": componentType} res = self.connector.getData(self.endpoint_company + path, params=params, headers=self.header) return res def updateComponentTags(self, data: list = None): """ Overwrite the component Tags with the list send. Arguments: data : REQUIRED : list of the components to be udpated with their respective list of tag names. 
Object looks like the following: [ { "componentType": "string", "componentId": "string", "tags": [ "Unknown Type: Tag" ] } ] """ if self.loggingEnabled: self.logger.debug(f"starting updateComponentTags") if data is None or type(data) != list: raise Exception("Requires a list of updates to be sent.") path = "/componentmetadata/tags/tagitems" res = self.connector.putData(self.endpoint_company + path, data=data, headers=self.header) return res def getScheduledJobs(self, includeType: str = "all", full: bool = True,limit:int=1000,format:str="df",verbose: bool = False) -> JsonListOrDataFrameType: """ Get Scheduled Projects. You can retrieve the projectID out of the tasks column to see which workspace project a schedule applies to. Arguments: includeType : OPTIONAL : By default gets all non-expired or deleted projects. (default "all") You can specify e.g. "all,shared,expired,deleted" to get more. Active schedules always get exported, so you need to use the `rsLocalExpirationTime` parameter in the `schedule` column to e.g. see which schedules are expired full : OPTIONAL : By default True. It returns the following additional information "ownerFullName,groups,tags,sharesFullName,modified,favorite,approved,scheduledItemName,scheduledUsersFullNames,deletedReason" limit : OPTIONAL : Number of elements retrieved per request (default max 1000) format : OPTIONAL : Define the format you want to output the result. 
Default "df" for dataframe, other option "raw" verbose: OPTIONAL : set to True for debug output """ if self.loggingEnabled: self.logger.debug(f"starting getScheduledJobs") params = {"includeType": includeType, "pagination": True, "locale": "en_US", "page": 0, "limit": limit } if full is True: params["expansion"] = "ownerFullName,groups,tags,sharesFullName,modified,favorite,approved,scheduledItemName,scheduledUsersFullNames,deletedReason" path = "/scheduler/scheduler/scheduledjobs/" if verbose: print(f"Getting Scheduled Jobs with Parameters {params}") res = self.connector.getData(self.endpoint_company + path, params=params, headers=self.header) if res.get("content") is None: raise Exception(f"Scheduled Job had no content in response. Parameters were: {params}") # get Scheduled Jobs data into Data Frame data = res.get("content") last_page = res.get("lastPage",True) total_el = res.get("totalElements") number_el = res.get("numberOfElements") if verbose: print(f"Last Page {last_page}, total elements: {total_el}, number_el: {number_el}") # iterate through pages if not on last page yet while last_page == False: if verbose: print(f"last_page is {last_page}, next round") params["page"] += 1 res = self.connector.getData(self.endpoint_company + path, params=params, headers=self.header) data += res.get("content") last_page = res.get("lastPage",True) if format == "df": df = pd.DataFrame(data) return df return data def getScheduledJob(self,scheduleId:str=None)->dict: """ Return a scheduled project definition. 
        Arguments:
            scheduleId : REQUIRED : Schedule project ID
        """
        if scheduleId is None:
            raise ValueError("A schedule ID is required")
        if self.loggingEnabled:
            self.logger.debug(f"starting getScheduledJob with ID: {scheduleId}")
        path = f"/scheduler/scheduler/scheduledjobs/{scheduleId}"
        params = {'expansion': 'modified,favorite,approved,tags,shares,sharesFullName,reportSuiteName,schedule,triggerObject,tasks,deliverySetting'}
        res = self.connector.getData(self.endpoint_company + path, params=params)
        return res

    def createScheduledJob(self, projectId: str = None, type: str = "pdf", schedule: dict = None, loginIds: list = None, emails: list = None, groupIds: list = None, width: int = None) -> dict:
        """
        Creates a schedule job based on the information provided as arguments.
        Expiration will be in one year by default.
        Arguments:
            projectId : REQUIRED : The workspace project ID to send.
            type : REQUIRED : how to send the project, default "pdf"
            schedule : REQUIRED : object to specify the schedule used.
                example: {
                    "hour": 10,
                    "minute": 45,
                    "second": 25,
                    "interval": 1,
                    "type": "daily"
                }
                {
                    'type': 'weekly',
                    'second': 53,
                    'minute': 0,
                    'hour': 8,
                    'daysOfWeek': [2],
                    'interval': 1
                }
                {
                    'type': 'monthly',
                    'second': 53,
                    'minute': 30,
                    'hour': 16,
                    'dayOfMonth': 21,
                    'interval': 1
                }
            loginIds : REQUIRED : A list of login ID of the users that are recipient of the report. It can be retrieved by the getUsers method.
            emails : OPTIONAL : If users are not registered in AA, you can specify a list of email addresses.
            groupIds : OPTIONAL : Group Id to send the report to.
            width : OPTIONAL : width of the report to be sent.
                (Minimum 800)
        """
        if self.loggingEnabled:
            self.logger.debug(f"starting createScheduleJob")
        path = f"/scheduler/scheduler/scheduledjobs/"
        dateNow = datetime.datetime.now()
        nowDateTime = datetime.datetime.isoformat(dateNow, timespec='seconds')
        futureDate = datetime.datetime.isoformat(dateNow.replace(dateNow.year + 1), timespec='seconds')
        deliveryId_res = self.createDeliverySetting(loginIds=loginIds, emails=emails, groupIds=groupIds)
        deliveryId = deliveryId_res.get('id', '')
        if deliveryId == "":
            if self.loggingEnabled:
                self.logger.error(f"error creating the delivery ID")
                self.logger.error(json.dumps(deliveryId_res))
            raise Exception("Error creating the delivery ID")
        me = self.getUserMe()
        projectDetail = self.getProject(projectId)
        data = {
            "approved": False,
            "complexity": {},
            "curatedItem": False,
            "description": "",
            "favorite": False,
            "hidden": False,
            "internal": False,
            "intrinsicIdentity": False,
            "isDeleted": False,
            "isDisabled": False,
            "locale": "en_US",
            "noAccess": False,
            "template": False,
            "version": "1.0.1",
            "rsid": projectDetail.get('rsid', ''),
            "schedule": {
                "rsLocalStartTime": nowDateTime,
                "rsLocalExpirationTime": futureDate,
                "triggerObject": schedule
            },
            "tasks": [
                {
                    "tasktype": "generate",
                    "tasksubtype": "analysisworkspace",
                    "requestParams": {
                        "artifacts": [type],
                        "imsOrgId": self.connector.config['org_id'],
                        "imsUserId": me.get('imsUserId', ''),
                        "imsUserName": "API",
                        "projectId": projectDetail.get('id'),
                        "projectName": projectDetail.get('name')
                    }
                },
                {
                    "tasktype": "deliver",
                    "artifactType": type,
                    "deliverySettingId": deliveryId,
                }
            ]
        }
        if width is not None and width >= 800:
            data['tasks'][0]['requestParams']['width'] = width
        res = self.connector.postData(self.endpoint_company + path, data=data)
        return res

    def updateScheduledJob(self, scheduleId: str = None, scheduleObj: dict = None) -> dict:
        """
        Update a schedule Job based on its id and the definition attached to it.
        Arguments:
            scheduleId : REQUIRED : the jobs to be updated.
            scheduleObj : REQUIRED : The object to replace the current definition.
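The start and expiration timestamps built in `createScheduledJob` come from `datetime.isoformat` with the year bumped by one via `replace`. A standalone sketch with a fixed date for reproducibility (note that `replace(year + 1)` would raise `ValueError` on Feb 29):

```python
import datetime

# Fixed "now" so the output is deterministic (the real method uses datetime.datetime.now()).
dateNow = datetime.datetime(2023, 3, 1, 12, 30, 45)
nowDateTime = datetime.datetime.isoformat(dateNow, timespec='seconds')
# replace() takes the year as its first positional argument; Feb 29 would raise ValueError here.
futureDate = datetime.datetime.isoformat(dateNow.replace(dateNow.year + 1), timespec='seconds')
print(nowDateTime, futureDate)  # 2023-03-01T12:30:45 2024-03-01T12:30:45
```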
        """
        if scheduleId is None:
            raise ValueError("A schedule ID is required")
        if scheduleObj is None:
            raise ValueError('A schedule Object is required')
        if self.loggingEnabled:
            self.logger.debug(f"starting updateScheduleJob with ID: {scheduleId}")
        path = f"/scheduler/scheduler/scheduledjobs/{scheduleId}"
        res = self.connector.putData(self.endpoint_company + path, data=scheduleObj)
        return res

    def deleteScheduledJob(self, scheduleId: str = None) -> dict:
        """
        Delete a schedule project based on its ID.
        Arguments:
            scheduleId : REQUIRED : the schedule ID to be deleted.
        """
        if scheduleId is None:
            raise Exception("A schedule ID is required for deletion")
        if self.loggingEnabled:
            self.logger.debug(f"starting deleteScheduleJob with ID: {scheduleId}")
        path = f"/scheduler/scheduler/scheduledjobs/{scheduleId}"
        res = self.connector.deleteData(self.endpoint_company + path)
        return res

    def getDeliverySettings(self) -> list:
        """
        Return a list of delivery settings.
        """
        path = f"/scheduler/scheduler/deliverysettings/"
        params = {'expansion': 'definition', "limit": 2000}
        lastPage = False
        page_nb = 0
        data = []
        while lastPage != True:
            params['page'] = page_nb
            res = self.connector.getData(self.endpoint_company + path, params=params)
            data += res.get('content', [])
            if len(res.get('content', [])) == params["limit"]:
                lastPage = False
            else:
                lastPage = True
            page_nb += 1
        return data

    def getDeliverySetting(self, deliverySettingId: str = None) -> dict:
        """
        Retrieve the delivery setting from a scheduled project.
        Argument:
            deliverySettingId : REQUIRED : The delivery setting ID of the scheduled project.
        """
        path = f"/scheduler/scheduler/deliverysettings/{deliverySettingId}/"
        params = {'expansion': 'definition'}
        res = self.connector.getData(self.endpoint_company + path, params=params)
        return res

    def createDeliverySetting(self, loginIds: list = None, emails: list = None, groupIds: list = None) -> dict:
        """
        Create a delivery setting for a specific scheduled project.
        Automatically used when using `createScheduleJob`.
        Arguments:
            loginIds : REQUIRED : List of login ID to send the scheduled project to. Can be retrieved by the getUsers method.
            emails : OPTIONAL : In case the recipients are not in the analytics interface.
            groupIds : OPTIONAL : List of group ID to send the scheduled project to.
        """
        path = f"/scheduler/scheduler/deliverysettings/"
        if loginIds is None:
            loginIds = []
        if emails is None:
            emails = []
        if groupIds is None:
            groupIds = []
        data = {
            "definition": {
                "allAdmins": False,
                "emailAddresses": emails,
                "groupIds": groupIds,
                "loginIds": loginIds,
                "type": "email"
            },
            "name": "email-aanalytics2"
        }
        res = self.connector.postData(self.endpoint_company + path, data=data)
        return res

    def updateDeliverySetting(self, deliveryId: str = None, loginIds: list = None, emails: list = None, groupIds: list = None) -> dict:
        """
        Update a delivery setting for a specific scheduled project.
        Automatically created for email setting.
        Arguments:
            deliveryId : REQUIRED : the delivery setting ID to be updated
            loginIds : REQUIRED : List of login ID to send the scheduled project to. Can be retrieved by the getUsers method.
            emails : OPTIONAL : In case the recipients are not in the analytics interface.
            groupIds : OPTIONAL : List of group ID to send the scheduled project to.
        """
        if deliveryId is None:
            raise ValueError("Require a delivery setting ID")
        path = f"/scheduler/scheduler/deliverysettings/{deliveryId}"
        if loginIds is None:
            loginIds = []
        if emails is None:
            emails = []
        if groupIds is None:
            groupIds = []
        data = {
            "definition": {
                "allAdmins": False,
                "emailAddresses": emails,
                "groupIds": groupIds,
                "loginIds": loginIds,
                "type": "email"
            },
            "name": "email-aanalytics2"
        }
        res = self.connector.putData(self.endpoint_company + path, data=data)
        return res

    def deleteDeliverySetting(self, deliveryId: str = None) -> dict:
        """
        Delete a delivery setting based on the ID passed.
        Arguments:
            deliveryId : REQUIRED : The delivery setting ID to be deleted.
        """
        if deliveryId is None:
            raise ValueError("Require a delivery setting ID")
        path = f"/scheduler/scheduler/deliverysettings/{deliveryId}"
        res = self.connector.deleteData(self.endpoint_company + path)
        return res

    def getProjects(self, includeType: str = 'all', full: bool = False, limit: int = None, includeShared: bool = False, includeTemplate: bool = False, format: str = 'df', cache: bool = False, save: bool = False) -> JsonListOrDataFrameType:
        """
        Returns the list of projects through either a dataframe or a list.
        Arguments:
            includeType : OPTIONAL : type of projects to be retrieved. (str)
                Possible values:
                    - all : Default value (all possible projects)
                    - shared : shared projects
            full : OPTIONAL : if set to True, returns all information about projects.
            limit : OPTIONAL : Limit the number of result returned.
            includeShared : OPTIONAL : If full is set to False, you can retrieve only information about sharing.
            includeTemplate : OPTIONAL : If full is set to False, you can add information about template here.
            format : OPTIONAL : format of the output. 2 values "df" for dataframe (default) and "raw" for raw json.
            cache : OPTIONAL : Boolean in case you want to cache the result in the "listProjectIds" attribute.
            save : OPTIONAL : If set to True, it will save the info in a csv file (bool : default False)
        """
        if self.loggingEnabled:
            self.logger.debug(f"starting getProjects")
        path = "/projects"
        params = {"includeType": includeType}
        if full:
            params["expansion"] = 'reportSuiteName,ownerFullName,tags,shares,sharesFullName,modified,favorite,approved,companyTemplate,externalReferences,accessLevel'
        else:
            params["expansion"] = "ownerFullName,modified"
            if includeShared:
                params["expansion"] += ',shares,sharesFullName'
            if includeTemplate:
                params["expansion"] += ',companyTemplate'
        if limit is not None:
            params['limit'] = limit
        if self.loggingEnabled:
            self.logger.debug(f"params: {params}")
        res = self.connector.getData(self.endpoint_company + path, params=params, headers=self.header)
        if cache:
            self.listProjectIds = res
        if format == "raw":
            if save:
                with open('projects.json', 'w') as f:
                    f.write(json.dumps(res, indent=2))
            return res
        df = pd.DataFrame(res)
        if df.empty == False:
            df['created'] = pd.to_datetime(df['created'], format='%Y-%m-%dT%H:%M:%SZ')
            df['modified'] = pd.to_datetime(df['modified'], format='%Y-%m-%dT%H:%M:%SZ')
        if save:
            df.to_csv(f'projects_{int(time.time())}.csv', index=False)
        return df

    def getProject(self, projectId: str = None, projectClass: bool = False, rsidSuffix: bool = False, retry: int = 0, cache: bool = False, verbose: bool = False) -> Union[dict, Project]:
        """
        Return the dictionary of the project information and its definition.
        It will return a dictionary or a Project class.
        The project detail will be saved as Project class in the projectsDetails class attribute.
        Arguments:
            projectId : REQUIRED : the project ID to be retrieved.
            projectClass : OPTIONAL : if set to True, returns a class of the project with prefiltered information
            rsidSuffix : OPTIONAL : if set to True, returns project class with rsid as suffix to dimensions and metrics.
            retry : OPTIONAL : If you want to retry the request if it fails.
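`getProjects` parses the `created` and `modified` columns with the format string `%Y-%m-%dT%H:%M:%SZ`. The same pattern works with the stdlib, sketched here without pandas (the sample timestamp is made up):

```python
from datetime import datetime

# Same format string getProjects passes to pd.to_datetime for 'created'/'modified'.
ts = datetime.strptime("2023-05-17T09:41:00Z", '%Y-%m-%dT%H:%M:%SZ')
print(ts.year, ts.month, ts.hour)  # 2023 5 9
```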
                Specify number of retry (0 default)
            cache : OPTIONAL : If you want to cache the result as Project class in the "projectsDetails" attribute.
            verbose : OPTIONAL : If you wish to have logs of status
        """
        if projectId is None:
            raise Exception("Requires a projectId parameter")
        params = {'expansion': 'definition,ownerFullName,modified,favorite,approved,tags,shares,sharesFullName,reportSuiteName,companyTemplate,accessLevel'}
        path = f"/projects/{projectId}"
        if self.loggingEnabled:
            self.logger.debug(f"starting getProject for {projectId}")
        res = self.connector.getData(self.endpoint_company + path, params=params, headers=self.header, retry=retry, verbose=verbose)
        if projectClass:
            if self.loggingEnabled:
                self.logger.info(f"building an instance of Project class")
            myProject = Project(res, rsidSuffix=rsidSuffix)
            return myProject
        if cache:
            if self.loggingEnabled:
                self.logger.info(f"caching the project as Project class")
            try:
                self.projectsDetails[projectId] = Project(res)
            except:
                if verbose:
                    print('WARNING : Cannot convert Project to Project class')
                if self.loggingEnabled:
                    self.logger.warning(f"Cannot convert Project to Project class")
        return res

    def getAllProjectDetails(self, projects: JsonListOrDataFrameType = None, filterNameProject: str = None, filterNameOwner: str = None, useAttribute: bool = True, cache: bool = False, rsidSuffix: bool = False, output: str = "dict", verbose: bool = False) -> dict:
        """
        Retrieve all projects details. You can either pass the list or dataframe returned from the getProjects method, and some filters.
        Returns a dict where the key is the projectId and the value is the Project class for analysis.
        Arguments:
            projects : OPTIONAL : Takes the type of object returned from getProjects (all data - not only the ID).
                If None is provided and you never ran the getProjects method, we will call the getProjects method and retrieve the elements.
                Otherwise you can pass either a limited list of elements that you want to check details for.
            filterNameProject : OPTIONAL : If you want to retrieve project details for projects with a specific string in their name.
            filterNameOwner : OPTIONAL : If you want to retrieve project details for projects with an owner having a specific name.
            useAttribute : OPTIONAL : True by default, it will use the projectList saved in the listProjectIds attribute.
                Set to False if you want to start from scratch on the retrieval process of your projects.
            rsidSuffix : OPTIONAL : If you want to add rsid as suffix of metrics and dimensions (::rsid)
            cache : OPTIONAL : If you want to cache the different elements retrieved for future usage.
            output : OPTIONAL : If you want to return a "list" or "dict" from this method. (default "dict")
            verbose : OPTIONAL : Set to True to print information.
        Not using a filter may end up taking a while to retrieve the information.
        """
        if self.loggingEnabled:
            self.logger.debug(f"starting getAllProjectDetails")
        ## if no project data
        if projects is None:
            if self.loggingEnabled:
                self.logger.debug(f"No projects passed")
            if len(self.listProjectIds) > 0 and useAttribute:
                fullProjectIds = self.listProjectIds
            else:
                fullProjectIds = self.getProjects(format='raw', cache=cache)
        ## if project data is passed
        elif projects is not None:
            if self.loggingEnabled:
                self.logger.debug(f"projects passed")
            if isinstance(projects, pd.DataFrame):
                fullProjectIds = projects.to_dict(orient='records')
            elif isinstance(projects, list):
                # keep the full project records so the name/owner filters below can be applied
                fullProjectIds = projects
        if filterNameProject is not None:
            if self.loggingEnabled:
                self.logger.debug(f"filterNameProject passed")
            fullProjectIds = [project for project in fullProjectIds if filterNameProject in project['name']]
        if filterNameOwner is not None:
            if self.loggingEnabled:
                self.logger.debug(f"filterNameOwner passed")
            fullProjectIds = [project for project in fullProjectIds if filterNameOwner in project['owner'].get('name', '')]
        if verbose:
            print(f'{len(fullProjectIds)} project details to retrieve')
            print(f"estimated time required : {int(len(fullProjectIds)/60)} minutes")
        if self.loggingEnabled:
            self.logger.debug(f'{len(fullProjectIds)} project details to retrieve')
        projectIds = (project['id'] for project in fullProjectIds)
        projectsDetails = {projectId: self.getProject(projectId, projectClass=True, rsidSuffix=rsidSuffix) for projectId in projectIds}
        if filterNameProject is None and filterNameOwner is None:
            self.projectsDetails = projectsDetails
        if output == "list":
            list_projectsDetails = [projectsDetails[key] for key in projectsDetails]
            return list_projectsDetails
        return projectsDetails

    def deleteProject(self, projectId: str = None) -> dict:
        """
        Delete the project specified by its ID.
        Arguments:
            projectId : REQUIRED : the project ID to be deleted.
        """
        if self.loggingEnabled:
            self.logger.debug(f"starting deleteProject")
        if projectId is None:
            raise Exception("Requires a projectId parameter")
        path = f"/projects/{projectId}"
        res = self.connector.deleteData(self.endpoint_company + path, headers=self.header)
        return res

    def validateProject(self, projectObj: dict = None) -> dict:
        """
        Validate a project definition based on the definition passed.
        Arguments:
            projectObj : REQUIRED : the dictionary that represents the Workspace definition.
                requires the following elements: name, description, rsid, definition, owner
        """
        if self.loggingEnabled:
            self.logger.debug(f"starting validateProject")
        if projectObj is None or type(projectObj) != dict:
            raise Exception("Requires a projectObj data to be sent to the server.")
        if 'project' in projectObj.keys():
            rsid = projectObj['project'].get('rsid', None)
        else:
            rsid = projectObj.get('rsid', None)
            projectObj = {'project': projectObj}
        if rsid is None:
            raise Exception("Could not find a rsid parameter in your project definition")
        path = "/projects/validate"
        params = {'rsid': rsid}
        res = self.connector.postData(self.endpoint_company + path, data=projectObj, headers=self.header, params=params)
        return res

    def updateProject(self, projectId: str = None, projectObj: dict = None) -> dict:
        """
        Update your project with the new object placed as parameter.
        Arguments:
            projectId : REQUIRED : the project ID to be updated.
            projectObj : REQUIRED : the dictionary to replace the previous Workspace.
                requires the following elements: name, description, rsid, definition, owner
        """
        if self.loggingEnabled:
            self.logger.debug(f"starting updateProject")
        if projectId is None:
            raise Exception("Requires a projectId parameter")
        path = f"/projects/{projectId}"
        if projectObj is None:
            raise Exception("Requires a projectObj parameter")
        if 'name' not in projectObj.keys():
            raise KeyError("Requires name key in the project object")
        if 'description' not in projectObj.keys():
            raise KeyError("Requires description key in the project object")
        if 'rsid' not in projectObj.keys():
            raise KeyError("Requires rsid key in the project object")
        if 'owner' not in projectObj.keys():
            raise KeyError("Requires owner key in the project object")
        if type(projectObj['owner']) != dict:
            raise ValueError("Requires owner key to be a dictionary")
        if 'definition' not in projectObj.keys():
            raise KeyError("Requires definition key in the project object")
        if type(projectObj['definition']) != dict:
            raise ValueError("Requires definition key to be a dictionary")
        res = self.connector.putData(self.endpoint_company + path, data=projectObj, headers=self.header)
        return res

    def createProject(self, projectObj: dict = None) -> dict:
        """
        Create a project based on the definition you have set.
        Arguments:
            projectObj : REQUIRED : the dictionary to create a new Workspace.
                requires the following elements: name, description, rsid, definition, owner
        """
        if self.loggingEnabled:
            self.logger.debug(f"starting createProject")
        path = "/projects/"
        if projectObj is None:
            raise Exception("Requires a projectObj parameter")
        if 'name' not in projectObj.keys():
            raise KeyError("Requires name key in the project object")
        if 'description' not in projectObj.keys():
            raise KeyError("Requires description key in the project object")
        if 'rsid' not in projectObj.keys():
            raise KeyError("Requires rsid key in the project object")
        if 'owner' not in projectObj.keys():
            raise KeyError("Requires owner key in the project object")
        if type(projectObj['owner']) != dict:
            raise ValueError("Requires owner key to be a dictionary")
        if 'definition' not in projectObj.keys():
            raise KeyError("Requires definition key in the project object")
        if type(projectObj['definition']) != dict:
            raise ValueError("Requires definition key to be a dictionary")
        res = self.connector.postData(self.endpoint_company + path, data=projectObj, headers=self.header)
        return res

    def findComponentsUsage(self,
                            components: list = None,
                            projectDetails: list = None,
                            segments: Union[list, pd.DataFrame] = None,
                            calculatedMetrics: Union[list, pd.DataFrame] = None,
                            recursive: bool = False,
                            regexUsed: bool = False,
                            verbose: bool = False,
                            resetProjectDetails: bool = False,
                            rsidSuffix: bool = False,
                            ) -> dict:
        """
        Find the usage of components in the different parts of an Adobe Analytics setup:
        Projects, Segments, Calculated metrics.
        Arguments:
            components : REQUIRED : list of components to look for.
                Example : evar10, event1, prop3, segmentId, calculatedMetricsId
            projectDetails : OPTIONAL : list of instances of Project class.
            segments : OPTIONAL : If you wish to pass the segments to look for. (should contain definition)
            calculatedMetrics : OPTIONAL : If you wish to pass the calculated metrics to look for. (should contain definition)
            recursive : OPTIONAL : if set to True, will also find the reference where the meta components are used.
                Segments based on your elements will also be searched to see where they are located.
            regexUsed : OPTIONAL : If set to True, the elements are defined as a regex and some default setup is turned off.
            resetProjectDetails : OPTIONAL : Set to False by default. If set to True, it will NOT use the cache.
            rsidSuffix : OPTIONAL : If you do not give projectDetails and you want to look for rsid usage in report for dimensions and metrics.
        """
        if components is None or type(components) != list:
            raise ValueError("components must be present as a list")
        if self.loggingEnabled:
            self.logger.debug(f"starting findComponentsUsage for {components}")
        listComponentProp = [comp for comp in components if 'prop' in comp]
        listComponentVar = [comp for comp in components if 'evar' in comp]
        listComponentEvent = [comp for comp in components if 'event' in comp]
        listComponentSegs = [comp for comp in components if comp.startswith('s')]
        listComponentCalcs = [comp for comp in components if comp.startswith('cm')]
        restComponents = set(components) - set(listComponentProp + listComponentVar + listComponentEvent + listComponentSegs + listComponentCalcs)
        listDefaultElements = [comp for comp in restComponents]
        listRecusion = []
        ## adding irregular ones
        regPartSeg = r"('|\.)"  ## ensure to not catch evar100 for evar10
        regPartProj = r"($|\.|\::)"  ## ensure to not catch evar100 for evar10
        if regexUsed:
            if self.loggingEnabled:
                self.logger.debug(f"regex is used")
            regPartSeg = ""
            regPartProj = ""
        ## Segments
        if verbose:
            print('retrieving segments')
        if self.loggingEnabled:
            self.logger.debug(f"retrieving segments")
        if len(self.segments) == 0 and segments is None:
            self.segments = self.getSegments(extended_info=True)
            mySegments = self.segments
        elif len(self.segments) > 0 and segments is None:
            mySegments = self.segments
        elif segments is not None:
            if type(segments) == list:
                mySegments = pd.DataFrame(segments)
            elif type(segments) == pd.DataFrame:
                mySegments = segments
            else:
                mySegments = segments
        ### Calculated Metrics
        if verbose:
            print('retrieving calculated metrics')
        if self.loggingEnabled:
            self.logger.debug(f"retrieving calculated metrics")
        if len(self.calculatedMetrics) == 0 and calculatedMetrics is None:
            self.calculatedMetrics = self.getCalculatedMetrics(extended_info=True)
            myMetrics = self.calculatedMetrics
        elif len(self.calculatedMetrics) > 0 and calculatedMetrics is None:
            myMetrics = self.calculatedMetrics
        elif calculatedMetrics is not None:
            if type(calculatedMetrics) == list:
                myMetrics = pd.DataFrame(calculatedMetrics)
            elif type(calculatedMetrics) == pd.DataFrame:
                myMetrics = calculatedMetrics
            else:
                myMetrics = calculatedMetrics
        ### Projects
        if (len(self.projectsDetails) == 0 and projectDetails is None) or resetProjectDetails:
            if self.loggingEnabled:
                self.logger.debug(f"retrieving projects details")
            self.projectsDetails = self.getAllProjectDetails(verbose=verbose, rsidSuffix=rsidSuffix)
            myProjectDetails = (self.projectsDetails[key].to_dict() for key in self.projectsDetails)
        elif len(self.projectsDetails) > 0 and projectDetails is None and resetProjectDetails == False:
            if self.loggingEnabled:
                self.logger.debug(f"transforming projects details")
            myProjectDetails = (self.projectsDetails[key].to_dict() for key in self.projectsDetails)
        elif projectDetails is not None:
            if self.loggingEnabled:
                self.logger.debug(f"setting the project details")
            if isinstance(projectDetails[0], Project):
                myProjectDetails = (item.to_dict() for item in projectDetails)
            elif isinstance(projectDetails[0], dict):
                myProjectDetails = (Project(item).to_dict() for item in projectDetails)
        else:
            raise Exception("Project details were not able to be processed")
        teeProjects: tuple = tee(myProjectDetails)  ## duplicating the project generator for recursive pass (low memory - intensive computation)
        returnObj = {element: {'segments': [], 'calculatedMetrics': [], 'projects': []} for element in components}
        recurseObj = defaultdict(list)
        if verbose:
            print('search started')
            print(f'recursive option : {recursive}')
            print('start looking into segments')
        if self.loggingEnabled:
            self.logger.debug(f"Analyzing segments")
        for _, seg in mySegments.iterrows():
            for prop in listComponentProp:
                if re.search(f"{prop + regPartSeg}", str(seg['definition'])):
                    returnObj[prop]['segments'].append({seg['name']: seg['id']})
                    if recursive:
                        listRecusion.append(seg['id'])
            for var in listComponentVar:
                if re.search(f"{var + regPartSeg}", str(seg['definition'])):
                    returnObj[var]['segments'].append({seg['name']: seg['id']})
                    if recursive:
                        listRecusion.append(seg['id'])
            for event in listComponentEvent:
                if re.search(f"{event}'", str(seg['definition'])):
                    returnObj[event]['segments'].append({seg['name']: seg['id']})
                    if recursive:
                        listRecusion.append(seg['id'])
            for element in listDefaultElements:
                if re.search(f"{element}", str(seg['definition'])):
                    returnObj[element]['segments'].append({seg['name']: seg['id']})
                    if recursive:
                        listRecusion.append(seg['id'])
        if self.loggingEnabled:
            self.logger.debug(f"Analyzing calculated metrics")
        if verbose:
            print('start looking into calculated metrics')
        for _, met in myMetrics.iterrows():
            for prop in listComponentProp:
                if re.search(f"{prop + regPartSeg}", str(met['definition'])):
                    returnObj[prop]['calculatedMetrics'].append({met['name']: met['id']})
                    if recursive:
                        listRecusion.append(met['id'])
            for var in listComponentVar:
                if re.search(f"{var + regPartSeg}", str(met['definition'])):
                    returnObj[var]['calculatedMetrics'].append({met['name']: met['id']})
                    if recursive:
                        listRecusion.append(met['id'])
            for event in listComponentEvent:
                if re.search(f"{event}'", str(met['definition'])):
                    returnObj[event]['calculatedMetrics'].append({met['name']: met['id']})
                    if recursive:
                        listRecusion.append(met['id'])
            for element in listDefaultElements:
                if re.search(f"{element}'", str(met['definition'])):
                    returnObj[element]['calculatedMetrics'].append({met['name']: met['id']})
                    if recursive:
                        listRecusion.append(met['id'])
        if verbose:
            print('start looking into projects')
        if self.loggingEnabled:
            self.logger.debug(f"Analyzing projects")
        for proj in teeProjects[0]:
            ## mobile reports don't have dimensions.
            if proj['reportType'] == "desktop":
                for prop in listComponentProp:
                    for element in proj['dimensions']:
                        if re.search(f"{prop + regPartProj}", element):
                            returnObj[prop]['projects'].append({proj['name']: proj['id']})
                for var in listComponentVar:
                    for element in proj['dimensions']:
                        if re.search(f"{var + regPartProj}", element):
                            returnObj[var]['projects'].append({proj['name']: proj['id']})
                for event in listComponentEvent:
                    for element in proj['metrics']:
                        if re.search(f"{event}", element):
                            returnObj[event]['projects'].append({proj['name']: proj['id']})
                for seg in listComponentSegs:
                    for element in proj.get('segments', []):
                        if re.search(f"{seg}", element):
                            returnObj[seg]['projects'].append({proj['name']: proj['id']})
                for met in listComponentCalcs:
                    for element in proj.get('calculatedMetrics', []):
                        if re.search(f"{met}", element):
                            returnObj[met]['projects'].append({proj['name']: proj['id']})
                for element in listDefaultElements:
                    for met in proj['calculatedMetrics']:
                        if re.search(f"{element}", met):
                            returnObj[element]['projects'].append({proj['name']: proj['id']})
                    for dim in proj['dimensions']:
                        if re.search(f"{element}", dim):
                            returnObj[element]['projects'].append({proj['name']: proj['id']})
                    for rsid in proj['rsids']:
                        if re.search(f"{element}", rsid):
                            returnObj[element]['projects'].append({proj['name']: proj['id']})
                    for event in proj['metrics']:
                        if re.search(f"{element}", event):
                            returnObj[element]['projects'].append({proj['name']: proj['id']})
        if recursive:
            if verbose:
                print('start looking into recursive elements')
            if self.loggingEnabled:
                self.logger.debug(f"recursive option checked")
            for proj in teeProjects[1]:
                for rec in listRecusion:
                    for element in proj.get('segments', []):
                        if re.search(f"{rec}", element):
                            recurseObj[rec].append({proj['name']: proj['id']})
                    for element in proj.get('calculatedMetrics', []):
                        if re.search(f"{rec}", element):
                            recurseObj[rec].append({proj['name']: proj['id']})
            returnObj['recursion'] = recurseObj
        if verbose:
            print('done')
        return returnObj

    def getUsageLogs(self,
                     startDate: str = None,
                     endDate: str = None,
                     eventType: str = None,
                     event: str = None,
                     rsid: str = None,
                     login: str = None,
                     ip: str = None,
                     limit: int = 100,
                     max_result: int = None,
                     format: str = "df",
                     verbose: bool = False,
                     **kwargs) -> dict:
        """
        Returns the Audit Usage Logs from your company analytics setup.
        Arguments:
            startDate : REQUIRED : Start date, format : 2020-12-01T00:00:00-07. (default 60 days prior today)
            endDate : REQUIRED : End date, format : 2020-12-15T14:32:33-07. (default today)
                Should be a maximum of a 3 month period between startDate and endDate.
            eventType : OPTIONAL : The numeric id for the event type you want to filter logs by.
                Please reference the lookup table in the LOGS_EVENT_TYPE
            event : OPTIONAL : The event description you want to filter logs by. No wildcards are permitted, but this filter is case insensitive and supports partial matches.
            rsid : OPTIONAL : ReportSuite ID to filter on.
            login : OPTIONAL : The login value of the user you want to filter logs by. This filter functions as an exact match.
            ip : OPTIONAL : The IP address you want to filter logs by. This filter supports a partial match.
            limit : OPTIONAL : Number of results per page.
            max_result : OPTIONAL : Maximum number of results if you want to cap the process. Ex : max_result=1000
            format : OPTIONAL : If you wish to have a DataFrame ("df" - default) or list ("raw") as output.
            verbose : OPTIONAL : Set it to True if you want to have console info.
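The `regPartSeg` suffix appended to each component in `findComponentsUsage` exists so that searching for `evar10` inside a stringified definition does not also match `evar100`. A standalone check of that boundary behavior:

```python
import re

regPartSeg = r"('|\.)"  # same suffix findComponentsUsage appends to each component name
definition = str({'val': 'evar100'})  # "{'val': 'evar100'}"

# 'evar10' is followed by '0' inside 'evar100', not by a quote or dot, so no false positive:
print(re.search("evar10" + regPartSeg, definition))               # None
print(re.search("evar100" + regPartSeg, definition) is not None)  # True
```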
            possible kwargs:
                page : page number (default 0)
        """
        if self.loggingEnabled:
            self.logger.debug(f"starting getUsageLogs")
        import datetime
        now = datetime.datetime.now()
        if startDate is None:
            startDate = datetime.datetime.isoformat(now - datetime.timedelta(days=60)).split('.')[0]
        if endDate is None:
            endDate = datetime.datetime.isoformat(now).split('.')[0]
        path = "/auditlogs/usage"
        params = {"page": kwargs.get('page', 0), "limit": limit, "startDate": startDate, "endDate": endDate}
        if eventType is not None:
            params['eventType'] = eventType
        if event is not None:
            params['event'] = event
        if rsid is not None:
            params['rsid'] = rsid
        if login is not None:
            params['login'] = login
        if ip is not None:
            params['ip'] = ip
        if self.loggingEnabled:
            self.logger.debug(f"params: {params}")
        res = self.connector.getData(self.endpoint_company + path, params=params, verbose=verbose)
        data = res['content']
        lastPage = res['lastPage']
        while lastPage == False:
            params["page"] += 1
            res = self.connector.getData(self.endpoint_company + path, params=params, verbose=verbose)
            data += res['content']
            lastPage = res['lastPage']
            if max_result is not None:
                if len(data) >= max_result:
                    lastPage = True
        if format == "df":
            df = pd.DataFrame(data)
            return df
        return data

    def getTopItems(self, rsid: str = None, dimension: str = None, dateRange: str = None, searchClause: str = None, lookupNoneValues: bool = True, limit: int = 10, verbose: bool = False, **kwargs) -> object:
        """
        Returns the top items of a request.
        Arguments:
            rsid : REQUIRED : ReportSuite ID of the data
            dimension : REQUIRED : The dimension to retrieve
            dateRange : OPTIONAL : Format YYYY-MM-DD/YYYY-MM-DD (default 90 days)
            searchClause : OPTIONAL : General search string; wrap with single quotes. Example: 'PageABC'
            lookupNoneValues : OPTIONAL : None values to be included (default True)
            limit : OPTIONAL : Number of items to be returned per page.
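`getUsageLogs` defaults to a 60-day window ending now, trimming sub-second precision by splitting the ISO string on the dot. A standalone sketch with a fixed timestamp for reproducibility:

```python
import datetime

# Fixed "now" so the output is deterministic (the real method uses datetime.datetime.now()).
now = datetime.datetime(2023, 6, 30, 8, 15, 0, 123456)
startDate = datetime.datetime.isoformat(now - datetime.timedelta(days=60)).split('.')[0]
endDate = datetime.datetime.isoformat(now).split('.')[0]
print(startDate, endDate)  # 2023-05-01T08:15:00 2023-06-30T08:15:00
```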
            verbose : OPTIONAL : If you want to have comments displayed (default False)
            possible kwargs:
                page : page to look for
                startDate : start date with format YYYY-MM-DD
                endDate : end date with format YYYY-MM-DD
                searchAnd, searchOr, searchNot, searchPhrase : Search element to be included (or not), partial match or not.
        """
        if self.loggingEnabled:
            self.logger.debug(f"starting getTopItems")
        path = "/reports/topItems"
        page = kwargs.get("page", 0)
        if rsid is None:
            raise ValueError("Require a reportSuite ID")
        if dimension is None:
            raise ValueError("Require a dimension")
        params = {"rsid": rsid, "dimension": dimension, "lookupNoneValues": lookupNoneValues, "limit": limit, "page": page}
        if searchClause is not None:
            params["search-clause"] = searchClause
        if dateRange is not None and '/' in dateRange:
            params["dateRange"] = dateRange
        if kwargs.get('page', None) is not None:
            params["page"] = kwargs.get('page')
        if kwargs.get("startDate", None) is not None:
            params["startDate"] = kwargs.get("startDate")
        if kwargs.get("endDate", None) is not None:
            params["endDate"] = kwargs.get("endDate")
        if kwargs.get("searchAnd", None) is not None:
            params["searchAnd"] = kwargs.get("searchAnd")
        if kwargs.get("searchOr", None) is not None:
            params["searchOr"] = kwargs.get("searchOr")
        if kwargs.get("searchNot", None) is not None:
            params["searchNot"] = kwargs.get("searchNot")
        if kwargs.get("searchPhrase", None) is not None:
            params["searchPhrase"] = kwargs.get("searchPhrase")
        last_page = False
        if verbose:
            print('Starting to fetch the data...')
        data = []
        while not last_page:
            if verbose:
                print(f'request page : {page}')
            res = self.connector.getData(self.endpoint_company + path, params=params)
            last_page = res.get("lastPage", True)
            data += res["rows"]
            page += 1
            params["page"] = page
        df = pd.DataFrame(data)
        return df

    def getAnnotations(self, full: bool = True, includeType: str = 'all', limit: int = 1000, page: int = 0) -> list:
        """
        Returns a list of the available annotations
        Arguments:
            full : OPTIONAL : If set to True (default), returns all
            available information of the annotation.
            includeType : OPTIONAL : use to return only "shared" or "all" (default) annotations available.
            limit : OPTIONAL : number of results per page (default 1000)
            page : OPTIONAL : page used for pagination
        """
        params = {"includeType": includeType, "page": page}
        if full:
            params['expansion'] = "name,description,dateRange,color,applyToAllReports,scope,createdDate,modifiedDate,modifiedById,tags,shares,approved,favorite,owner,usageSummary,companyId,reportSuiteName,rsid"
        path = f"/annotations"
        lastPage = False
        data = []
        while not lastPage:
            res = self.connector.getData(self.endpoint_company + path, params=params)
            data += res.get('content', [])
            lastPage = res.get('lastPage', True)
            params['page'] += 1
        return data

    def getAnnotation(self, annotationId: str = None) -> dict:
        """
        Return a specific annotation definition.
        Arguments:
            annotationId : REQUIRED : The annotation ID
        """
        if annotationId is None:
            raise ValueError("Require an annotation ID")
        path = f"/annotations/{annotationId}"
        params = {
            "expansion": "name,description,dateRange,color,applyToAllReports,scope,createdDate,modifiedDate,modifiedById,tags,shares,approved,favorite,owner,usageSummary,companyId,reportSuiteName,rsid"
        }
        res = self.connector.getData(self.endpoint_company + path, params=params)
        return res

    def deleteAnnotation(self, annotationId: str = None) -> dict:
        """
        Delete a specific annotation definition.
        Arguments:
            annotationId : REQUIRED : The annotation ID to be deleted
        """
        if annotationId is None:
            raise ValueError("Require an annotation ID")
        path = f"/annotations/{annotationId}"
        res = self.connector.deleteData(self.endpoint_company + path)
        return res

    def createAnnotation(self,
                         name: str = None,
                         dateRange: str = None,
                         rsid: str = None,
                         metricIds: list = None,
                         dimensionObj: list = None,
                         description: str = None,
                         filterIds: list = None,
                         applyToAllReports: bool = False,
                         **kwargs) -> dict:
        """
        Create an Annotation.
        Arguments:
            name : REQUIRED : Name of the annotation
            dateRange : REQUIRED : Date range of the annotation to be used.
                Example: 2022-04-19T00:00:00/2022-04-19T23:59:59
            rsid : REQUIRED : ReportSuite ID
            metricIds : OPTIONAL : List of metric IDs to be annotated
            filterIds : OPTIONAL : List of segment IDs to apply to the annotation for context.
            dimensionObj : OPTIONAL : List of dimension object specifications:
                {
                    componentType: "dimension"
                    dimensionType: "string"
                    id: "variables/product"
                    operator: "streq"
                    terms: ["unknown"]
                }
            applyToAllReports : OPTIONAL : If the annotation applies to all ReportSuites.
        possible kwargs:
            colors: Color to be used, example: "STANDARD1"
            shares: List of userIds for sharing the annotation
            tags: List of tagIds to be applied
            favorite: boolean to set the annotation as favorite (False by default)
            approved: boolean to set the annotation as approved (False by default)
        """
        path = f"/annotations"
        if name is None:
            raise ValueError("A name must be specified")
        if dateRange is None:
            raise ValueError("A dateRange must be specified")
        if rsid is None:
            raise ValueError("a master ReportSuite ID must be specified")
        description = description or "api generated"
        data = {
            "name": name,
            "description": description,
            "dateRange": dateRange,
            "color": kwargs.get('colors', "STANDARD1"),
            "applyToAllReports": applyToAllReports,
            "scope": {
                "metrics": [],
                "filters": []
            },
            "tags": [],
            "approved": kwargs.get('approved', False),
            "favorite": kwargs.get('favorite', False),
            "rsid": rsid
        }
        if metricIds is not None and type(metricIds) == list:
            for metric in metricIds:
                # "scope" (not "scopes") matches the key defined in data above
                data['scope']['metrics'].append({
                    "id": metric,
                    "componentType": "metric"
                })
        if filterIds is not None and type(filterIds) == list:  # fixed: original tested "is None"
            for filter in filterIds:
                data['scope']['filters'].append({
                    "id": filter,
                    "componentType": "segment"
                })
        if dimensionObj is not None and type(dimensionObj) == list:
            for obj in dimensionObj:
                data['scope']['filters'].append(obj)
        if kwargs.get("shares", None) is not None:
            data['shares'] = []
            for user in kwargs.get("shares", []):
                data['shares'].append({
                    "shareToId": user,
                    "shareToType": "user"
                })
        if kwargs.get('tags', None) is not None:
            for tag in kwargs.get('tags'):
                res = self.getTag(tag)
                data['tags'].append({
                    "id": tag,
                    "name": res['name']
                })
        res = self.connector.postData(self.endpoint_company + path, data=data)
        return res

    def updateAnnotation(self, annotationId: str = None, annotationObj: dict = None) -> dict:
        """
        Update an annotation based on its ID. PUT method.
        Arguments:
            annotationId : REQUIRED : The annotation ID to be updated
            annotationObj : REQUIRED : The object to replace the annotation.
        """
        if annotationObj is None or type(annotationObj) != dict:
            raise ValueError('Require a dictionary representing the annotation definition')
        if annotationId is None:
            raise ValueError('Require the annotation ID')
        path = f"/annotations/{annotationId}"
        res = self.connector.putData(self.endpoint_company + path, data=annotationObj)
        return res

    # def getDataWarehouseReports(self, reportSuite: str = None, reportName: str = None, deliveryUUID: str = None, status: str = None,
    #                             ScheduledRequestUUID: str = None, limit: int = 1000) -> dict:
    #     """
    #     Get all DW reports that matched filter parameters.
    #     Arguments:
    #         reportSuite : OPTIONAL : The name of the reportSuite
    #         reportName : OPTIONAL : The name of the report
    #         deliveryUUID : OPTIONAL : the UUID generated for that report
    #         status : OPTIONAL : Status of the report generation, can be any of [COMPLETED, CANCELED, ERROR_DELIVERY, ERROR_PROCESSING, CREATED, PROCESSING, PENDING]
    #         scheduledRequestUUID : OPTIONAL : The scheduled report UUID generated by this report
    #         limit : OPTIONAL : Maximum amount of data returned
    #     """
    #     path = '/data_warehouse/report'
    #     params = {"limit": limit}
    #     if reportSuite is not None:
    #         params['ReportSuite'] = reportSuite
    #     if reportName is not None:
    #         params['ReportName'] = reportName
    #     if deliveryUUID is not None:
    #         params['DeliveryProfileUUID'] = deliveryUUID
    #     if status is not None and status in ["COMPLETED", "CANCELED", "ERROR_DELIVERY", "ERROR_PROCESSING", "CREATED", "PROCESSING", "PENDING"]:
    #         params["Status"] = status
    #     if ScheduledRequestUUID is not None:
    #         params['ScheduledRequestUUID'] = ScheduledRequestUUID
    #     res = self.connector.getData('https://analytics.adobe.io/api' + path, params=params)
    #     return res

    # def getDataWarehouseReport(self, reportUUID: str = None) -> dict:
    #     """
    #     Return a single report information out of the report UUID.
    #     Arguments:
    #         reportUUID : REQUIRED : the report UUID
    #     """
    #     if reportUUID is None:
    #         raise ValueError("Require a report UUID")
    #     path = f'/data_warehouse/report/{reportUUID}'
    #     res = self.connector.getData('https://analytics.adobe.io/api' + path)
    #     return res

    # def getDataWarehouseRequests(self, reportSuite: str = None, reportName: str = None, status: str = None, limit: int = 1000) -> dict:
    #     """
    #     Get all DW requests that matched filter parameters.
    #     Arguments:
    #         reportSuite : OPTIONAL : The name of the reportSuite
    #         reportName : OPTIONAL : The name of the report
    #         status : OPTIONAL : Status of the report generation, can be any of [COMPLETED, CANCELED, ERROR_DELIVERY, ERROR_PROCESSING, CREATED, PROCESSING, PENDING]
    #         scheduledRequestUUID : OPTIONAL : The scheduled report UUID generated by this report
    #         limit : OPTIONAL : Maximum amount of data returned
    #     """
    #     path = '/data_warehouse/scheduled'
    #     params = {"limit": limit}
    #     if reportSuite is not None:
    #         params['ReportSuite'] = reportSuite
    #     if reportName is not None:
    #         params['ReportName'] = reportName
    #     if status is not None and status in ["COMPLETED", "CANCELED", "ERROR_DELIVERY", "ERROR_PROCESSING", "CREATED", "PROCESSING", "PENDING"]:
    #         params["Status"] = status
    #     res = self.connector.getData('https://analytics.adobe.io/api' + path, params=params)
    #     return res

    # def getDataWarehouseRequest(self, scheduleUUID: str = None) -> dict:
    #     """
    #     Return a single request information out of the schedule UUID.
    #     Arguments:
    #         scheduleUUID : REQUIRED : the schedule UUID
    #     """
    #     if scheduleUUID is None:
    #         raise ValueError("Require a report UUID")
    #     path = f'/data_warehouse/scheduled/{scheduleUUID}'
    #     res = self.connector.getData('https://analytics.adobe.io' + path)
    #     return res

    # def createDataWarehouseRequest(self,
    #                                requestDict: dict = None,
    #                                reportName: str = None,
    #                                login: str = None,
    #                                emails: list = None,
    #                                emailNote: str = None,
    #                                ) -> dict:
    #     """
    #     Create a Data Warehouse request based on either the dictionary provided or the parameters filled.
    #     Arguments:
    #         requestDict : OPTIONAL : The complete dictionary definition for a datawarehouse export.
    #             If not provided, require the other parameters to be used.
    #         reportName : OPTIONAL : The name of the report
    #         login : OPTIONAL : The login Id of the user
    #         emails : OPTIONAL : List of emails for notification.
    #             example : ['[email protected]']
    #         dimensions : OPTIONAL : List of dimensions to use, example : ['prop1']
    #         metrics : OPTIONAL : List of metrics to use, example : ['event1','event2']
    #         segments : OPTIONAL : List of segments to use, example : ['seg1','seg2']
    #         dateGranularity : OPTIONAL :
    #         reportPeriod : OPTIONAL :
    #         emailNote : OPTIONAL : Note for the email
    #     """
    #     f'/data_warehouse/scheduled/'

    # def getDataWarehouseDeliveryAccounts(self) -> dict:
    #     """
    #     Get All delivery Account used by a company.
    #     """
    #     path = f'/data_warehouse/delivery/account'
    #     res = self.connector.getData('https://analytics.adobe.io' + path)
    #     return res

    # def getDataWarehouseDeliveryProfile(self) -> dict:
    #     """
    #     Get all Delivery Profile for a given global company id
    #     """
    #     path = f'/data_warehouse/delivery/profile'
    #     res = self.connector.getData('https://analytics.adobe.io' + path)
    #     return res

    def compareReportSuites(self, listRsids: list = None, element: str = 'dimensions', comparison: str = "full",
                            save: bool = False) -> pd.DataFrame:
        """
        Compare reportSuites on dimensions (default) or metrics based on the comparison selected.
        Returns a dataframe with a multi-index and a column telling which elements are different.
        Arguments:
            listRsids : REQUIRED : list of report suite IDs to compare
            element : REQUIRED : Elements to compare. 2 possible choices:
                dimensions (default)
                metrics
            comparison : REQUIRED : Type of comparison to do:
                full (default) : compare name and settings
                name : compare only names
            save : OPTIONAL : if you want to save in a csv.
""" if self.loggingEnabled: self.logger.debug(f"starting compareReportSuites") if listRsids is None or type(listRsids) != list: raise ValueError("Require a list of rsids") if element=="dimensions": if self.loggingEnabled: self.logger.debug(f"dimensions selected") listDFs = [self.getDimensions(rsid,full=True) for rsid in listRsids] elif element == "metrics": listDFs = [self.getMetrics(rsid,full=True) for rsid in listRsids] if self.loggingEnabled: self.logger.debug(f"metrics selected") for df,rsid in zip(listDFs, listRsids): df['rsid']=rsid df.set_index('id',inplace=True) df.set_index('rsid',append=True,inplace=True) df = pd.concat(listDFs) df = df.unstack() if comparison=='name': df_name = df['name'].copy() ## transforming to a new df with boolean value comparison to col 0 temp_df = df_name.eq(df_name.iloc[:, 0], axis=0) ## now doing a complete comparison of all boolean with all df_name['different'] = ~temp_df.eq(temp_df.iloc[:,0],axis=0).all(1) if save: df_name.to_csv(f'comparison_name_{int(time.time())}.csv') if self.loggingEnabled: self.logger.debug(f'Name only comparison, file : comparison_name_{int(time.time())}.csv') return df_name ## retrieve main indexes from multi level indexes mainIndex = set([val[0] for val in list(df.columns)]) dict_temp = {} for index in mainIndex: temp_df = df[index].copy() temp_df.fillna('',inplace=True) ## transforming to a new df with boolean value comparison to col 0 temp_df.eq(temp_df.iloc[:, 0], axis=0) ## now doing a complete comparison of all boolean with all dict_temp[index] = list(temp_df.eq(temp_df.iloc[:,0],axis=0).all(1)) df_bool = pd.DataFrame(dict_temp) df['different'] = list(~df_bool.eq(df_bool.iloc[:,0],axis=0).all(1)) if save: df.to_csv(f'comparison_full_{element}_{int(time.time())}.csv') if self.loggingEnabled: self.logger.debug(f'Full comparison, file : comparison_full_{element}_{int(time.time())}.csv') return df def shareComponent(self, componentId: str = None, componentType: str = None, shareToId: int = None, 
                       shareToImsId: int = None,
                       shareToType: str = None,
                       shareToLogin: str = None,
                       accessLevel: str = None,
                       shareFromImsId: str = None) -> dict:
        """
        Shares a component with an individual or a group (product profile ID).
        Returns the JSON response from the API.
        Arguments:
            componentId : REQUIRED : The component ID to share.
            componentType : REQUIRED : The component Type ("calculatedMetric", "segment", "project", "dateRange")
            shareToId : ID of the user or the group to share to
            shareToImsId : IMS ID of the user to share to (alternative to ID)
            shareToLogin : Login of the user to share to (alternative to ID)
            shareToType : "group" => share to a group (product profile), "user" => share to a user,
                "all" => share to all users (in this case, no shareToId or shareToImsId is needed)
        """
        if self.loggingEnabled:
            self.logger.debug(f"Starting to share component ID {componentId} with parameters: {locals()}")
        path = f"/componentmetadata/shares/"
        data = {
            "accessLevel": accessLevel,
            "componentId": componentId,
            "componentType": componentType,
            "shareToId": shareToId,
            "shareToImsId": shareToImsId,
            "shareToLogin": shareToLogin,
            "shareToType": shareToType
        }
        res = self.connector.postData(self.endpoint_company + path, data=data)
        return res

    def _dataDescriptor(self, json_request: dict):
        """
        Read the request and return an object with information about the request.
        It will be used in order to build the dataclass and the dataframe.
""" if self.loggingEnabled: self.logger.debug(f"starting _dataDescriptor") obj = {} if json_request.get('dimension',None) is not None: obj['dimension'] = json_request.get('dimension') obj['filters'] = {'globalFilters': [], 'metricsFilters': {}} obj['rsid'] = json_request['rsid'] metrics_info = json_request['metricContainer'] obj['metrics'] = [metric['id'] for metric in metrics_info['metrics']] if 'metricFilters' in metrics_info.keys(): metricsFilter = {metric['id']: metric['filters'] for metric in metrics_info['metrics'] if len(metric.get('filters', [])) > 0} filters = [] for metric in metricsFilter: for item in metricsFilter[metric]: if 'segmentId' in metrics_info['metricFilters'][int(item)].keys(): filters.append( metrics_info['metricFilters'][int(item)]['segmentId']) if 'dimension' in metrics_info['metricFilters'][int(item)].keys(): filters.append( metrics_info['metricFilters'][int(item)]['dimension']) obj['filters']['metricsFilters'][metric] = set(filters) for fil in json_request['globalFilters']: if 'dateRange' in fil.keys(): obj['filters']['globalFilters'].append(fil['dateRange']) if 'dimension' in fil.keys(): obj['filters']['globalFilters'].append(fil['dimension']) if 'segmentId' in fil.keys(): obj['filters']['globalFilters'].append(fil['segmentId']) return obj def _readData( self, data_rows: list, anomaly: bool = False, cols: list = None, item_id: bool = False ) -> pd.DataFrame: """ read the data from the requests and returns a dataframe. Parameters: data_rows : REQUIRED : Rows that have been returned by the request. anomaly : OPTIONAL : Boolean to tell if the anomaly detection has been used. 
            cols : OPTIONAL : list of column names
        """
        if self.loggingEnabled:
            self.logger.debug(f"starting _readData")
        if cols is None:
            raise ValueError("list of columns must be specified")
        data_rows = deepcopy(data_rows)
        dict_data = {row.get('value', 'missing_value'): row['data'] for row in data_rows}
        if cols is not None:
            n_metrics = len(cols) - 1
        if item_id:  # adding the itemId in the data returned
            cols.append('item_id')
            for row in data_rows:
                dict_data[row.get('value', 'missing_value')].append(row['itemId'])
        if anomaly:
            # set full columns
            cols = cols + [f'{metric}-{suffix}' for metric in cols[1:]
                           for suffix in ['expected', 'UpperBound', 'LowerBound']]
            # add data to the dictionary
            for row in data_rows:
                for item in range(n_metrics):
                    dict_data[row['value']].append(
                        row.get('dataExpected', [0 for i in range(n_metrics)])[item])
                    dict_data[row['value']].append(
                        row.get('dataUpperBound', [0 for i in range(n_metrics)])[item])
                    dict_data[row['value']].append(
                        row.get('dataLowerBound', [0 for i in range(n_metrics)])[item])
        df = pd.DataFrame(dict_data).T  # require to transform the data
        df.reset_index(inplace=True, )
        df.columns = cols
        return df

    def getReport(
        self,
        json_request: Union[dict, str, IO, RequestCreator],
        limit: int = 1000,
        n_results: Union[int, str] = 1000,
        save: bool = False,
        item_id: bool = False,
        unsafe: bool = False,
        verbose: bool = False,
        debug=False,
        **kwargs,
    ) -> object:
        """
        Retrieve data from a JSON request. Returns an object containing meta info and dataframe.
        Arguments:
            json_request : REQUIRED : JSON statement that contains your request for Analytics API 2.0.
                The argument can be :
                - a dictionary : It will be used as it is.
                - a string that is a dictionary : It will be transformed to a dictionary / JSON.
                - a path to a JSON file that contains the statement (must end with ".json").
                - an instance of the RequestCreator class
            limit : OPTIONAL : number of results per request (default 1000)
            n_results : OPTIONAL : Number of results that you would like to retrieve.
                (default 1000)
                If you want to have all possible data, use "inf".
            item_id : OPTIONAL : Boolean to define if you want to return the item id for sub requests (default False)
            unsafe : OPTIONAL : If set to True, it will not check the "lastPage" parameter and assume the first request is complete.
                This may break the script or return incomplete data. (default False)
            save : OPTIONAL : If you would like to save the data within a CSV file. (default False)
            verbose : OPTIONAL : If you want to have comments displayed (default False)
        """
        if unsafe and verbose:
            print('---- running the getReport in "unsafe" mode ----')
        obj = {}
        if isinstance(json_request, RequestCreator):
            request = json_request.to_dict()
        elif type(json_request) == dict:
            request = json_request
        elif type(json_request) == str and '.json' not in json_request:
            try:
                request = json.loads(json_request)
            except:
                raise TypeError("expected a parsable string")
        elif '.json' in json_request:
            try:
                with open(Path(json_request), 'r') as file:
                    file_string = file.read()
                request = json.loads(file_string)
            except:
                raise TypeError("expected a parsable string")
        request['settings']['limit'] = limit
        # info for creating report
        data_info = self._dataDescriptor(request)
        if verbose:
            print('Request decrypted')
        obj.update(data_info)
        anomaly = request['settings'].get('includeAnomalyDetection', False)
        columns = [data_info['dimension']] + data_info['metrics']
        # preparing for the loop
        # in case "inf" has been used.
        # Turn it to a number
        n_results = kwargs.get('n_result', n_results)
        n_results = float(n_results)
        if n_results != float('inf') and n_results < request['settings']['limit']:
            # making sure we don't call more than set in wrapper
            request['settings']['limit'] = n_results
        data_list = []
        last_page = False
        page_nb, count_elements, total_elements = 0, 0, 0
        if verbose:
            print('Starting to fetch the data...')
        while not last_page:
            timestamp = round(time.time())
            request['settings']['page'] = page_nb
            report = self.connector.postData(self.endpoint_company + self._getReport,
                                             data=request, headers=self.header)
            if verbose:
                print('Data received.')
            # Recursion to take care of throttling limit
            while report.get('status_code', 200) == 429 or report.get('error_code', None) == "429050":
                if verbose:
                    print('reaching the limit : pause for 50 s and entering recursion.')
                if debug:
                    with open(f'limit_reach_{timestamp}.json', 'w') as f:
                        f.write(json.dumps(report, indent=4))
                time.sleep(50)
                report = self.connector.postData(self.endpoint_company + self._getReport,
                                                 data=request, headers=self.header)
            if 'lastPage' not in report and unsafe == False:  # checking error when no lastPage key in report
                if verbose:
                    print(json.dumps(report, indent=2))
                print('Warning : Server Error')
                print(json.dumps(report))
                if debug:
                    with open(f'server_failure_request_{timestamp}.json', 'w') as f:
                        f.write(json.dumps(request, indent=4))
                    with open(f'server_failure_response_{timestamp}.json', 'w') as f:
                        f.write(json.dumps(report, indent=4))
                    print(
                        f'Warning : Save JSON request : server_failure_request_{timestamp}.json')
                    print(
                        f'Warning : Save JSON response : server_failure_response_{timestamp}.json')
                obj['data'] = pd.DataFrame()
                return obj
            # fallback when no lastPage in report
            last_page = report.get('lastPage', True)
            if verbose:
                print(f'last page status : {last_page}')
            if 'errorCode' in report.keys():
                print('Error with your statement \n' + report['errorDescription'])
                return {report['errorCode']: report['errorDescription']}
            count_elements += report.get('numberOfElements', 0)
            total_elements = report.get(
                'totalElements', request['settings']['limit'])
            if total_elements == 0:
                obj['data'] = pd.DataFrame()
                print(
                    'Warning : No data returned & lastPage is False.\nExit the loop - no save file & empty dataframe.')
                if debug:
                    with open(f'report_no_element_{timestamp}.json', 'w') as f:
                        f.write(json.dumps(report, indent=4))
                if verbose:
                    print(
                        f'% of total elements retrieved. TotalElements: {report.get("totalElements", "no data")}')
                return obj  # in case loop happening with empty data, returns empty data
            if verbose and total_elements != 0:
                print(
                    f'% of total elements retrieved: {round((count_elements / total_elements) * 100, 2)} %')
            if last_page == False and n_results != float('inf'):
                if count_elements >= n_results:
                    last_page = True
            data = report['rows']
            data_list += deepcopy(data)  # do a deepcopy
            page_nb += 1
            if verbose:
                print(f'# of requests : {page_nb}')
        # return report
        df = self._readData(data_list, anomaly=anomaly,
                            cols=columns, item_id=item_id)
        if save:
            timestampReport = round(time.time())
            df.to_csv(f'report-{timestampReport}.csv', index=False)
            if verbose:
                print(
                    f'Saving data in file : {os.getcwd()}{os.sep}report-{timestampReport}.csv')
        obj['data'] = df
        if verbose:
            print(
                f'Report contains {(count_elements / total_elements) * 100} % of the available dimensions')
        return obj

    def _prepareData(
        self,
        dataRows: list = None,
        reportType: str = "normal",
    ) -> dict:
        """
        Read the data returned by the getReport and return a dictionary used by the Workspace class.
        Arguments:
            dataRows : REQUIRED : data rows from the CJA API getReport
            reportType : REQUIRED : "normal" or "static"
        """
        if dataRows is None:
            raise ValueError("Require dataRows")
        data_rows = deepcopy(dataRows)
        expanded_rows = {}
        if reportType == "normal":
            for row in data_rows:
                expanded_rows[row["itemId"]] = [row["value"]]
                expanded_rows[row["itemId"]] += row["data"]
        elif reportType == "static":
            expanded_rows = data_rows
        return expanded_rows

    def _decrypteStaticData(
        self, dataRequest: dict = None, response: dict = None, resolveColumns: bool = False
    ) -> dict:
        """
        From the request dictionary and the response, decrypt the data to standardise the reading.
        """
        dataRows = []
        ## retrieve StaticRow ID and segmentID
        if len([metric for metric in dataRequest['metricContainer'].get('metricFilters', [])
                if metric.get('id', '').startswith("STATIC_ROW_COMPONENT")]) > 0:
            if "dateRange" in list(dataRequest['metricContainer'].get('metricFilters', [])[0].keys()):
                tableSegmentsRows = {
                    obj["id"]: obj["dateRange"]
                    for obj in dataRequest["metricContainer"]["metricFilters"]
                    if obj["id"].startswith("STATIC_ROW_COMPONENT")
                }
            elif "segmentId" in list(dataRequest['metricContainer'].get('metricFilters', [])[0].keys()):
                tableSegmentsRows = {
                    obj["id"]: obj["segmentId"]
                    for obj in dataRequest["metricContainer"]["metricFilters"]
                    if obj["id"].startswith("STATIC_ROW_COMPONENT")
                }
        else:
            tableSegmentsRows = {
                obj["id"]: obj["segmentId"]
                for obj in dataRequest["metricContainer"]["metricFilters"]
            }
        ## retrieve place and segmentID
        segmentApplied = {}
        for obj in dataRequest["metricContainer"]["metricFilters"]:
            if obj["id"].startswith("STATIC_ROW") == False:
                if obj["type"] == "breakdown":
                    segmentApplied[obj["id"]] = f"{obj['dimension']}:::{obj['itemId']}"
                elif obj["type"] == "segment":
                    segmentApplied[obj["id"]] = obj["segmentId"]
                elif obj["type"] == "dateRange":
                    segmentApplied[obj["id"]] = obj["dateRange"]
        ### table columnIds and StaticRow IDs
        tableColumnIds = {
            obj["columnId"]: obj["filters"][0]
            for obj in
dataRequest["metricContainer"]["metrics"] } ### create relations for metrics with Filter on top filterRelations = { obj["filters"][0]: obj["filters"][1:] for obj in dataRequest["metricContainer"]["metrics"] if len(obj["filters"]) > 1 } staticRows = set(val for val in tableSegmentsRows.values()) nb_rows = len(staticRows) ## define how many segment used as rows nb_columns = int( len(dataRequest["metricContainer"]["metrics"]) / nb_rows ) ## use to detect rows staticRows = set(val for val in tableSegmentsRows.values()) staticRowsNames = [] for row in staticRows: if row.startswith("s") and "@AdobeOrg" in row: filter = self.Segment(row) staticRowsNames.append(filter["name"]) else: staticRowsNames.append(row) if resolveColumns: staticRowDict = { row: self.getSegment(rowName).get('name',rowName) for row, rowName in zip(staticRows, staticRowsNames) } else: staticRowDict = { row: rowName for row, rowName in zip(staticRows, staticRowsNames) } ### metrics dataRows = defaultdict(list) for row in staticRowDict: ## iter on the different static rows for column, data in zip( response["columns"]["columnIds"], response["summaryData"]["totals"] ): if tableSegmentsRows[tableColumnIds[column]] == row: ## check translation of metricId with Static Row ID if row not in dataRows[staticRowDict[row]]: dataRows[staticRowDict[row]].append(row) dataRows[staticRowDict[row]].append(data) ## should ends like : {'segmentName' : ['STATIC',123,456]} return nb_columns, tableColumnIds, segmentApplied, filterRelations, dataRows def getReport2( self, request: Union[dict, IO,RequestCreator] = None, limit: int = 20000, n_results: Union[int, str] = "inf", allowRemoteLoad: str = "default", useCache: bool = True, useResultsCache: bool = False, includeOberonXml: bool = False, includePredictiveObjects: bool = False, returnsNone: bool = None, countRepeatInstances: bool = None, ignoreZeroes: bool = None, rsid: str = None, resolveColumns: bool = True, save: bool = False, returnClass: bool = True, ) -> 
Union[Workspace, dict]: """ Return an instance of Workspace that contains the data requested. Argumnents: request : REQUIRED : either a dictionary of a JSON file that contains the request information. limit : OPTIONAL : number of results per request (default 1000) n_results : OPTIONAL : total number of results returns. Use "inf" to return everything (default "inf") allowRemoteLoad : OPTIONAL : Controls if Oberon should remote load data. Default behavior is true with fallback to false if remote data does not exist useCache : OPTIONAL : Use caching for faster requests (Do not do any report caching) useResultsCache : OPTIONAL : Use results caching for faster reporting times (This is a pass through to Oberon which manages the Cache) includeOberonXml : OPTIONAL : Controls if Oberon XML should be returned in the response - DEBUG ONLY includePredictiveObjects : OPTIONAL : Controls if platform Predictive Objects should be returned in the response. Only available when using Anomaly Detection or Forecasting- DEBUG ONLY returnsNone : OPTIONAL: Overwritte the request setting to return None values. countRepeatInstances : OPTIONAL: Overwrite the request setting to count repeatInstances values. ignoreZeroes : OPTIONAL : Ignore zeros in the results rsid : OPTIONAL : Overwrite the ReportSuiteId used for report. Only works if the same components are presents. resolveColumns: OPTIONAL : automatically resolve columns from ID to name for calculated metrics & segments. Default True. (works on returnClass only) save : OPTIONAL : If you want to save the data (in JSON or CSV, depending the class is used or not) returnClass : OPTIONAL : return the class building dataframe and better comprehension of data. 
                (default True)
        """
        if self.loggingEnabled:
            self.logger.debug(f"Start getReport")
        path = "/reports"
        params = {
            "allowRemoteLoad": allowRemoteLoad,
            "useCache": useCache,
            "useResultsCache": useResultsCache,
            "includeOberonXml": includeOberonXml,
            "includePlatformPredictiveObjects": includePredictiveObjects,
        }
        if type(request) == dict:
            dataRequest = request
        elif isinstance(request, RequestCreator):
            dataRequest = request.to_dict()
        elif ".json" in request:
            with open(request, "r") as f:
                dataRequest = json.load(f)
        else:
            raise ValueError("Require a JSON or Dictionary to request data")
        ### Settings
        dataRequest = deepcopy(dataRequest)
        dataRequest["settings"]["page"] = 0
        dataRequest["settings"]["limit"] = limit
        if returnsNone:
            dataRequest["settings"]["nonesBehavior"] = "return-nones"
        elif dataRequest['settings'].get('nonesBehavior', False) != False:
            pass  ## keeping current settings
        else:
            dataRequest["settings"]["nonesBehavior"] = "exclude-nones"
        if countRepeatInstances:
            dataRequest["settings"]["countRepeatInstances"] = True
        elif dataRequest["settings"].get("countRepeatInstances", False) != False:
            pass  ## keeping current settings
        else:
            dataRequest["settings"]["countRepeatInstances"] = False
        if rsid is not None:
            dataRequest["rsid"] = rsid
        if ignoreZeroes:
            # setdefault so the mutation is kept even when "statistics" was absent
            # (the original dict.get(...) pattern discarded the new sub-dict)
            dataRequest.setdefault("statistics", {})["ignoreZeroes"] = True
        deepCopyRequest = deepcopy(dataRequest)
        ### Request data
        if self.loggingEnabled:
            self.logger.debug(f"getReport request: {json.dumps(dataRequest, indent=4)}")
        res = self.connector.postData(
            self.endpoint_company + path, data=dataRequest, params=params
        )
        if "rows" in res.keys():
            reportType = "normal"
            if self.loggingEnabled:
                self.logger.debug(f"reportType: {reportType}")
            dataRows = res.get("rows")
            columns = res.get("columns")
            summaryData = res.get("summaryData")
            totalElements = res.get("numberOfElements")
            lastPage = res.get("lastPage", True)
            if float(len(dataRows)) >= float(n_results):
                ## force end of loop when a limit is set on n_results
                lastPage = True
            while not lastPage:
                dataRequest["settings"]["page"] += 1
                res = self.connector.postData(
                    self.endpoint_company + path, data=dataRequest, params=params
                )
                dataRows += res.get("rows")
                lastPage = res.get("lastPage", True)
                totalElements += res.get("numberOfElements")
                if float(len(dataRows)) >= float(n_results):
                    ## force end of loop when a limit is set on n_results
                    lastPage = True
            if self.loggingEnabled:
                self.logger.debug(f"loop for report over: {len(dataRows)} results")
            if returnClass == False:
                return dataRows
            ### create relation between metrics and filters applied
            columnIdRelations = {
                obj["columnId"]: obj["id"]
                for obj in dataRequest["metricContainer"]["metrics"]
            }
            filterRelations = {
                obj["columnId"]: obj["filters"]
                for obj in dataRequest["metricContainer"]["metrics"]
                if len(obj.get("filters", [])) > 0
            }
            metricFilters = {}
            metricFilterTranslation = {}
            for filter in dataRequest["metricContainer"].get("metricFilters", []):
                filterId = filter["id"]
                if filter["type"] == "breakdown":
                    filterValue = f"{filter['dimension']}:{filter['itemId']}"
                    metricFilters[filter["dimension"]] = filter["itemId"]
                if filter["type"] == "dateRange":
                    filterValue = f"{filter['dateRange']}"
                    metricFilters[filterValue] = filterValue
                if filter["type"] == "segment":
                    filterValue = f"{filter['segmentId']}"
                    if filterValue.startswith("s") and "@AdobeOrg" in filterValue:
                        seg = self.getSegment(filterValue)
                        metricFilters[filterValue] = seg["name"]
                metricFilterTranslation[filterId] = filterValue
            metricColumns = {}
            for colId in columnIdRelations.keys():
                metricColumns[colId] = columnIdRelations[colId]
                for element in filterRelations.get(colId, []):
                    metricColumns[colId] += f":::{metricFilterTranslation[element]}"
        else:
            if returnClass == False:
                return res
            reportType = "static"
            if self.loggingEnabled:
                self.logger.debug(f"reportType: {reportType}")
            columns = None  ## no "columns" key in response
            summaryData = res.get("summaryData")
            (
                nb_columns,
                tableColumnIds,
                segmentApplied,
                filterRelations,
                dataRows,
            ) = self._decrypteStaticData(dataRequest=dataRequest, response=res, resolveColumns=resolveColumns)
            ### Finding metrics
            metricFilters = {}
            metricColumns = []
            for i in range(nb_columns):
                metric: str = res["columns"]["columnIds"][i]
                metricName = metric.split(":::")[0]
                if metricName.startswith("cm"):
                    calcMetric = self.getCalculatedMetric(metricName)
                    metricName = calcMetric["name"]
                correspondingStatic = tableColumnIds[metric]
                ## if the static row has a filter
                if correspondingStatic in list(filterRelations.keys()):
                    ## finding segment applied to metrics
                    for element in filterRelations[correspondingStatic]:
                        segId: str = segmentApplied[element]
                        metricName += f":::{segId}"
                        metricFilters[segId] = segId
                        if segId.startswith("s") and "@AdobeOrg" in segId:
                            seg = self.getSegment(segId)
                            metricFilters[segId] = seg["name"]
                metricColumns.append(metricName)
                ### ending with ['metric1','metric2 + segId',...]
        ### preparing data points
        if self.loggingEnabled:
            self.logger.debug(f"preparing data")
        preparedData = self._prepareData(dataRows, reportType=reportType)
        if returnClass:
            if self.loggingEnabled:
                self.logger.debug(f"returning Workspace class")
            ## Using the class
            data = Workspace(
                responseData=preparedData,
                dataRequest=deepCopyRequest,
                columns=columns,
                summaryData=summaryData,
                analyticsConnector=self,
                reportType=reportType,
                metrics=metricColumns,  ## for normal type
                ## for staticReport
                metricFilters=metricFilters,
                resolveColumns=resolveColumns,
            )
            if save:
                data.to_csv()
            return data
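
# ------------------------------------------------------------------
# Illustrative sketch (not part of the library): getReport2 above, like
# getTopItems and getAnnotations, drains the API with the same
# "while not lastPage" loop. The helper and the mock page source below
# are hypothetical and only stand in for self.connector.getData/postData;
# they show the pattern in isolation.
# ------------------------------------------------------------------
def _paginate_all(fetch_page, start_page: int = 0) -> list:
    """Collect 'content' from successive pages until 'lastPage' is True."""
    data = []
    page = start_page
    last_page = False
    while not last_page:
        res = fetch_page(page)          # one API call per page
        data += res.get('content', [])  # accumulate this page's rows
        last_page = res.get('lastPage', True)  # missing key ends the loop
        page += 1
    return data

# usage sketch with a fake 3-page response:
# _pages = [{'content': [1, 2], 'lastPage': False},
#           {'content': [3], 'lastPage': False},
#           {'content': [4], 'lastPage': True}]
# _paginate_all(lambda p: _pages[p])  # -> [1, 2, 3, 4]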
# Source file: aanalytics2/aanalytics2.py (package: Adobe-Lib-Manual, archive: Adobe_Lib_Manual-4.2.tar.gz)
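The report-parsing fragment above maps each `columnId` back to its metric id and appends any applied filter with a `:::` separator, ending with names like `'metric1'` or `'metric2:::segmentId'`. A minimal sketch of that mapping logic, using a simplified, hypothetical stand-in for the real `metricContainer` structure of a Workspace dataRequest:

```python
# Sketch of the columnId -> "metricId[:::filterValue]" mapping built in the
# report parser above. The input shape is a simplified stand-in for the real
# dataRequest["metricContainer"]; ids and segment names are hypothetical.

def build_metric_columns(metric_container: dict) -> dict:
    """Return {columnId: 'metricId' or 'metricId:::filterValue'}."""
    column_id_relations = {
        m["columnId"]: m["id"] for m in metric_container["metrics"]
    }
    filter_relations = {
        m["columnId"]: m["filters"]
        for m in metric_container["metrics"]
        if len(m.get("filters", [])) > 0
    }
    # translate each metricFilter definition into a printable value,
    # mirroring the breakdown / dateRange / segment cases above
    translation = {}
    for flt in metric_container.get("metricFilters", []):
        if flt["type"] == "breakdown":
            translation[flt["id"]] = f"{flt['dimension']}:{flt['itemId']}"
        elif flt["type"] == "dateRange":
            translation[flt["id"]] = flt["dateRange"]
        elif flt["type"] == "segment":
            translation[flt["id"]] = flt["segmentId"]
    metric_columns = {}
    for col_id, metric_id in column_id_relations.items():
        name = metric_id
        for filter_id in filter_relations.get(col_id, []):
            name += f":::{translation[filter_id]}"
        metric_columns[col_id] = name
    return metric_columns


sample = {
    "metrics": [
        {"columnId": "0", "id": "metrics/visits"},
        {"columnId": "1", "id": "metrics/pageviews", "filters": ["f1"]},
    ],
    "metricFilters": [
        {"id": "f1", "type": "segment", "segmentId": "s1234_abc"},
    ],
}
print(build_metric_columns(sample))
# {'0': 'metrics/visits', '1': 'metrics/pageviews:::s1234_abc'}
```

The real method additionally resolves segment ids to display names via `getSegment`, which requires an API round trip and is omitted here.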
import json
import time
from copy import deepcopy

# Non standard libraries
import requests

from aanalytics2 import config, token_provider


class AdobeRequest:
    """
    Handles requests to Audience Manager, making sure each request carries a valid token.
    Attributes:
        restTime : Time to rest before sending a new request when reaching the "too many requests" status code.
    """
    loggingEnabled = False

    def __init__(self,
                 config_object: dict = config.config_object,
                 header: dict = config.header,
                 verbose: bool = False,
                 retry: int = 0,
                 loggingEnabled: bool = False,
                 logger: object = None
                 ) -> None:
        """
        Set the connector to be used for handling requests to AAM.
        Arguments:
            config_object : OPTIONAL : Requires the importConfigFile method to have been used.
            header : OPTIONAL : header of the config modules
            verbose : OPTIONAL : display comments on the request.
            retry : OPTIONAL : If you wish to retry failed GET requests
            loggingEnabled : OPTIONAL : if logging is enabled for that instance.
            logger : OPTIONAL : instance of the logger created
        """
        if config_object['org_id'] == '':
            raise Exception(
                'You have to upload the configuration file with importConfigFile method.')
        self.config = deepcopy(config_object)
        self.header = deepcopy(header)
        self.loggingEnabled = loggingEnabled
        self.logger = logger
        self.restTime = 30
        self.retry = retry
        if self.config['token'] == '' or time.time() > self.config['date_limit']:
            if 'scopes' in self.config.keys() and self.config.get('scopes', None) is not None:
                self.connectionType = 'oauthV2'
                token_and_expiry = token_provider.get_oauth_token_and_expiry_for_config(
                    config=self.config, verbose=verbose)
            elif self.config.get("private_key", None) is not None or self.config.get("pathToKey", None) is not None:
                self.connectionType = 'jwt'
                token_and_expiry = token_provider.get_jwt_token_and_expiry_for_config(
                    config=self.config, verbose=verbose)
            token = token_and_expiry['token']
            expiry = token_and_expiry['expiry']
            self.token = token
            if self.loggingEnabled:
                self.logger.info(f"token retrieved : {token}")
            self.config['token'] = token
            self.config['date_limit'] = time.time() + expiry - 500
            self.header.update({'Authorization': f'Bearer {token}'})

    def _checkingDate(self) -> None:
        """
        Check if the token is still valid.
        """
        now = time.time()
        if now > self.config['date_limit']:
            if self.loggingEnabled:
                self.logger.warning("token expired. Trying to retrieve a new token")
            if self.connectionType == 'oauthV2':
                token_and_expiry = token_provider.get_oauth_token_and_expiry_for_config(config=self.config)
            elif self.connectionType == 'jwt':
                token_and_expiry = token_provider.get_jwt_token_and_expiry_for_config(config=self.config)
            token = token_and_expiry['token']
            if self.loggingEnabled:
                self.logger.info(f"new token retrieved : {token}")
            self.config['token'] = token
            self.config['date_limit'] = time.time() + token_and_expiry['expiry'] - 500
            self.header.update({'Authorization': f'Bearer {token}'})

    def getData(self, endpoint: str, params: dict = None, data: dict = None, headers: dict = None, *args, **kwargs):
        """
        Abstraction for getting data.
        """
        # pop "retry" so the recursive call below does not receive it twice via **kwargs
        internRetry = kwargs.pop("retry", self.retry)
        self._checkingDate()
        if self.loggingEnabled:
            self.logger.info(f"endpoint: {endpoint}")
            self.logger.info(f"params: {params}")
        if headers is None:
            headers = self.header
        if params is None and data is None:
            res = requests.get(endpoint, headers=headers)
        elif params is not None and data is None:
            res = requests.get(endpoint, headers=headers, params=params)
        elif params is None and data is not None:
            res = requests.get(endpoint, headers=headers, data=data)
        elif params is not None and data is not None:
            res = requests.get(endpoint, headers=headers, params=params, data=data)
        if kwargs.get("verbose", False):
            print(f"request URL : {res.request.url}")
            print(f"status_code : {res.status_code}")
        try:
            while str(res.status_code) == "429":
                if kwargs.get("verbose", False):
                    print(f'Too many requests: retrying in {self.restTime} seconds')
                if self.loggingEnabled:
                    self.logger.info(f"Too many requests: retrying in {self.restTime} seconds")
                time.sleep(self.restTime)
                res = requests.get(endpoint, headers=headers, params=params, data=data)
            res_json = res.json()
        except Exception:  # handling 1.4
            if self.loggingEnabled:
                self.logger.warning("handling exception as res.json() cannot be managed")
                self.logger.warning(f"status code: {res.status_code}")
            if kwargs.get('legacy', False):
                try:
                    return json.loads(res.text)
                except Exception:
                    if self.loggingEnabled:
                        self.logger.error(f"GET method failed: {res.status_code}, {res.text}")
                    return res.text
            else:
                if self.loggingEnabled:
                    self.logger.error(f"text: {res.text}")
                res_json = {'error': 'Request Error'}
        while internRetry > 0:
            if self.loggingEnabled:
                self.logger.warning("Trying again with internal retry")
            if kwargs.get("verbose", False):
                print('Retry parameter activated')
                print(f'{internRetry} retry left')
            if 'error' in res_json.keys():
                time.sleep(30)
                res_json = self.getData(endpoint, params=params, data=data, headers=headers,
                                        retry=internRetry - 1, **kwargs)
            return res_json
        return res_json

    def postData(self, endpoint: str, params: dict = None, data: dict = None, headers: dict = None, *args, **kwargs):
        """
        Abstraction for posting data.
        """
        self._checkingDate()
        if headers is None:
            headers = self.header
        if params is None and data is None:
            res = requests.post(endpoint, headers=headers)
        elif params is not None and data is None:
            res = requests.post(endpoint, headers=headers, params=params)
        elif params is None and data is not None:
            res = requests.post(endpoint, headers=headers, data=json.dumps(data))
        elif params is not None and data is not None:
            res = requests.post(endpoint, headers=headers, params=params, data=json.dumps(data))
        try:
            res_json = res.json()
            if res.status_code == 429 or res_json.get('error_code', None) == "429050":
                res_json['status_code'] = 429
        except Exception:  # handling 1.4
            if kwargs.get('legacy', False):
                try:
                    return json.loads(res.text)
                except Exception:
                    if self.loggingEnabled:
                        self.logger.error(f"POST method failed: {res.status_code}, {res.text}")
                    return res.text
            # requests.Response has no .get(); read the status code with getattr
            res_json = {'error': getattr(res, 'status_code', 'Request Error')}
        return res_json

    def patchData(self, endpoint: str, params: dict = None, data=None, headers: dict = None, *args, **kwargs):
        """
        Abstraction for patching data.
        """
        self._checkingDate()
        if headers is None:
            headers = self.header
        if params is not None and data is None:
            res = requests.patch(endpoint, headers=headers, params=params)
        elif params is None and data is not None:
            res = requests.patch(endpoint, headers=headers, data=json.dumps(data))
        elif params is not None and data is not None:
            res = requests.patch(endpoint, headers=headers, params=params, data=json.dumps(data))
        try:
            while str(res.status_code) == "429":
                if kwargs.get("verbose", False):
                    print(f'Too many requests: retrying in {self.restTime} seconds')
                time.sleep(self.restTime)
                res = requests.patch(endpoint, headers=headers, params=params, data=json.dumps(data))
            res_json = res.json()
        except Exception:
            if self.loggingEnabled:
                self.logger.error(f"PATCH method failed: {res.status_code}, {res.text}")
            res_json = {'error': getattr(res, 'status_code', 'Request Error')}
        return res_json

    def putData(self, endpoint: str, params: dict = None, data=None, headers: dict = None, *args, **kwargs):
        """
        Abstraction for putting data.
        """
        self._checkingDate()
        if headers is None:
            headers = self.header
        if params is not None and data is None:
            res = requests.put(endpoint, headers=headers, params=params)
        elif params is None and data is not None:
            res = requests.put(endpoint, headers=headers, data=json.dumps(data))
        elif params is not None and data is not None:
            res = requests.put(endpoint, headers=headers, params=params, data=json.dumps(data))
        try:
            status_code = res.json()
        except Exception:
            if self.loggingEnabled:
                self.logger.error(f"PUT method failed: {res.status_code}, {res.text}")
            status_code = {'error': getattr(res, 'status_code', 'Request Error')}
        return status_code

    def deleteData(self, endpoint: str, params: dict = None, headers: dict = None, *args, **kwargs):
        """
        Abstraction for deleting data.
        """
        self._checkingDate()
        if headers is None:
            headers = self.header
        if params is None:
            res = requests.delete(endpoint, headers=headers)
        elif params is not None:
            res = requests.delete(endpoint, headers=headers, params=params)
        try:
            while str(res.status_code) == "429":
                if kwargs.get("verbose", False):
                    print(f'Too many requests: retrying in {self.restTime} seconds')
                time.sleep(self.restTime)
                res = requests.delete(endpoint, headers=headers, params=params)
            status_code = res.status_code
        except Exception:
            if self.loggingEnabled:
                self.logger.error(f"DELETE method failed: {res.status_code}, {res.text}")
            status_code = {'error': 'Request Error'}
        return status_code
# Source file: aanalytics2/connector.py (package: Adobe-Lib-Manual, archive: Adobe_Lib_Manual-4.2.tar.gz)
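The core retry pattern in `AdobeRequest.getData` is a loop that re-issues the request while the server answers with HTTP 429, sleeping `self.restTime` seconds between attempts. A self-contained sketch of that loop, with `FakeResponse` and `send` as stand-ins for `requests.get` and its `Response` object:

```python
import time

# Sketch of the 429 handling loop in AdobeRequest.getData: keep re-issuing
# the request while the server returns "Too Many Requests", sleeping between
# attempts. FakeResponse and send() are hypothetical stand-ins for the
# requests library; the real connector waits self.restTime (30s) per retry.

class FakeResponse:
    def __init__(self, status_code, payload):
        self.status_code = status_code
        self._payload = payload

    def json(self):
        return self._payload


def fetch_with_backoff(send, rest_time=0):
    """Mirror the `while str(res.status_code) == "429"` loop from getData."""
    res = send()
    while str(res.status_code) == "429":
        time.sleep(rest_time)
        res = send()
    return res.json()


# Simulate two throttled responses followed by a success.
responses = iter([
    FakeResponse(429, {}),
    FakeResponse(429, {}),
    FakeResponse(200, {"rows": [1, 2, 3]}),
])
print(fetch_with_backoff(lambda: next(responses)))
# {'rows': [1, 2, 3]}
```

Note that the real method layers a second mechanism on top of this: the `retry` parameter re-runs the whole request after a fixed 30-second sleep when the parsed response contains an `'error'` key.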
import os
import time
from typing import Dict, Union
import json

# Non standard libraries
import jwt
import requests

from aanalytics2 import configs


def get_jwt_token_and_expiry_for_config(config: dict, verbose: bool = False, save: bool = False,
                                        *args, **kwargs) -> Dict[str, str]:
    """
    Retrieve the token by using the information provided by the user during the importConfigFile function.
    Arguments:
        config : REQUIRED : Configuration dictionary.
        verbose : OPTIONAL : Default False. If set to True, print information.
        save : OPTIONAL : Default False. If set to True, save the token in a "token.txt" file.
    """
    private_key = configs.get_private_key_from_config(config)
    header_jwt = {
        'cache-control': 'no-cache',
        'content-type': 'application/x-www-form-urlencoded'
    }
    # Despite the variable name, this sets the expiry 8760 hours (one year) ahead.
    now_plus_24h = int(time.time()) + 8760 * 60 * 60
    jwt_payload = {
        'exp': now_plus_24h,
        'iss': config['org_id'],
        'sub': config['tech_id'],
        'https://ims-na1.adobelogin.com/s/ent_analytics_bulk_ingest_sdk': True,
        'aud': f'https://ims-na1.adobelogin.com/c/{config["client_id"]}'
    }
    encoded_jwt = _get_jwt(payload=jwt_payload, private_key=private_key)
    payload = {
        'client_id': config['client_id'],
        'client_secret': config['secret'],
        'jwt_token': encoded_jwt
    }
    response = requests.post(config['jwtTokenEndpoint'], headers=header_jwt, data=payload)
    json_response = response.json()
    try:
        token = json_response['access_token']
    except KeyError:
        print('Issue retrieving token')
        print(json_response)
        raise Exception(json.dumps(json_response, indent=2))
    expiry = json_response['expires_in'] / 1000  # the response returns milliseconds
    if save:
        with open('token.txt', 'w') as f:
            f.write(token)
        print(f'token has been saved here: {os.getcwd()}{os.sep}token.txt')
    if verbose:
        print('token valid till : ' + time.ctime(time.time() + expiry))
    return {'token': token, 'expiry': expiry}


def get_oauth_token_and_expiry_for_config(config: dict, verbose: bool = False, save: bool = False) -> Dict[str, str]:
    """
    Retrieve the access token by using the OAuth information provided by the user during the importConfigFile function.
    Arguments:
        config : REQUIRED : Configuration object.
        verbose : OPTIONAL : Default False. If set to True, print information.
        save : OPTIONAL : Default False. If set to True, save the token in a "token.txt" file.
    """
    if config is None:
        raise ValueError("config dictionary is required")
    oauth_payload = {
        "grant_type": "client_credentials",
        "client_id": config["client_id"],
        "client_secret": config["secret"],
        "scope": config["scopes"]
    }
    response = requests.post(config["oauthTokenEndpointV2"], data=oauth_payload)
    json_response = response.json()
    if 'access_token' in json_response.keys():
        token = json_response['access_token']
        expiry = json_response["expires_in"]
    else:
        return json.dumps(json_response, indent=2)
    if save:
        with open('token.txt', 'w') as f:
            f.write(token)
    if verbose:
        print('token valid till : ' + time.ctime(time.time() + expiry))
    return {'token': token, 'expiry': expiry}


def _get_jwt(payload: dict, private_key: str) -> str:
    """
    Ensure that jwt encoding returns the same type (str), as versions < 2.0.0
    returned bytes and >= 2.0.0 return strings.
    """
    token: Union[str, bytes] = jwt.encode(payload, private_key, algorithm='RS256')
    if isinstance(token, bytes):
        return token.decode('utf-8')
    return token
# Source file: aanalytics2/token_provider.py (package: Adobe-Lib-Manual, archive: Adobe_Lib_Manual-4.2.tar.gz)
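The JWT flow above revolves around a fixed claim set: issuer, subject, audience, a metascope flag, and an expiry timestamp. A small sketch of that payload construction, with stdlib code only (no PyJWT signing) and hypothetical `org_id`/`tech_id`/`client_id` values:

```python
import time

# Sketch of the JWT claim set built by get_jwt_token_and_expiry_for_config.
# Field names and the metascope URL match the code above; the config values
# below are hypothetical placeholders. Signing with RS256 (jwt.encode) is
# omitted so the sketch stays stdlib-only.

def build_jwt_payload(config: dict, lifetime_seconds: int = 8760 * 60 * 60) -> dict:
    return {
        'exp': int(time.time()) + lifetime_seconds,  # the library uses ~1 year
        'iss': config['org_id'],
        'sub': config['tech_id'],
        'https://ims-na1.adobelogin.com/s/ent_analytics_bulk_ingest_sdk': True,
        'aud': f"https://ims-na1.adobelogin.com/c/{config['client_id']}",
    }


cfg = {
    'org_id': 'ORG@AdobeOrg',
    'tech_id': 'TECH@techacct.adobe.com',
    'client_id': 'abc123',
}
payload = build_jwt_payload(cfg)
print(sorted(k for k in payload if k != 'exp'))
```

This payload is then signed with the private key and exchanged for an access token at `jwtTokenEndpoint`.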
from dataclasses import dataclass import json @dataclass class Project: """ This dataclass extract the information retrieved from the getProjet method. It flatten the elements and gives you insights on what your project contains. """ def __init__(self, projectDict: dict = None,rsidSuffix:bool=False): """ Instancialize the class. Arguments: projectDict : REQUIRED : the dictionary of the project (returned by getProject method) rsidSuffix : OPTIONAL : If you want to have the rsid suffix to dimension and metrics. """ if projectDict is None: raise Exception("require a dictionary with project information. Retrievable via getProject") self.id: str = projectDict.get('id', '') self.name: str = projectDict.get('name', '') self.description: str = projectDict.get('description', '') self.rsid: str = projectDict.get('rsid', '') self.ownerName: str = projectDict['owner'].get('name', '') self.ownerId: int = projectDict['owner'].get('id', '') self.ownerEmail: int = projectDict['owner'].get('login', '') self.template: bool = projectDict.get('companyTemplate', False) self.version: str = None if 'definition' in projectDict.keys(): definition: dict = projectDict['definition'] self.version: str = definition.get('version',None) self.curation: bool = definition.get('isCurated', False) if definition.get('device', 'desktop') != 'cell': self.reportType = "desktop" infos = self._findPanelsInfos(definition['workspaces'][0]) self.nbPanels: int = infos["nb_Panels"] self.nbSubPanels: int = 0 self.subPanelsTypes: list = [] for panel in infos["panels"]: self.nbSubPanels += infos["panels"][panel]['nb_subPanels'] self.subPanelsTypes += infos["panels"][panel]['subPanels_types'] self.elementsUsed: dict = self._findElements(definition['workspaces'][0],rsidSuffix=rsidSuffix) self.nbElementsUsed: int = len(self.elementsUsed['dimensions']) + len( self.elementsUsed['metrics']) + len(self.elementsUsed['segments']) + len( self.elementsUsed['calculatedMetrics']) else: self.reportType = "mobile" def 
__str__(self)->str: return json.dumps(self.to_dict(),indent=4) def __repr__(self)->str: return json.dumps(self.to_dict(),indent=4) def _findPanelsInfos(self, workspace: dict = None) -> dict: """ Return a dict of the different information for each Panel. Arguments: workspace : REQUIRED : the workspace dictionary. """ dict_data = {'workspace_id': workspace['id']} dict_data['nb_Panels'] = len(workspace['panels']) dict_data['panels'] = {} for panel in workspace['panels']: dict_data["panels"][panel['id']] = {} dict_data["panels"][panel['id']]['name'] = panel.get('name', 'Default Name') dict_data["panels"][panel['id']]['nb_subPanels'] = len(panel['subPanels']) dict_data["panels"][panel['id']]['subPanels_types'] = [subPanel['reportlet']['type'] for subPanel in panel['subPanels']] return dict_data def _findElements(self, workspace: dict,rsidSuffix:bool=False) -> list: """ Returns the list of dimensions used in the FreeformReportlet. Arguments : workspace : REQUIRED : the workspace dictionary. """ dict_elements: dict = {'dimensions': [], "metrics": [], 'segments': [], "reportSuites": [], 'calculatedMetrics': []} tmp_rsid = "" # default empty value for panel in workspace['panels']: if "reportSuite" in panel.keys(): dict_elements['reportSuites'].append(panel['reportSuite']['id']) if rsidSuffix: tmp_rsid = f"::{panel['reportSuite']['id']}" elif "rsid" in panel.keys(): dict_elements['reportSuites'].append(panel['rsid']) if rsidSuffix: tmp_rsid = f"::{panel['rsid']}" filters: list = panel.get('segmentGroups',[]) if len(filters) > 0: for element in filters: typeElement = element['componentOptions'][0].get('component',{}).get('type','') idElement = element['componentOptions'][0].get('component',{}).get('id','') if typeElement == "Segment": dict_elements['segments'].append(idElement) if typeElement == "DimensionItem": clean_id: str = idElement[:idElement.find( '::')] ## cleaning this type of element : 'variables/evar7.6::3000623228' dict_elements['dimensions'].append(clean_id) for 
subPanel in panel['subPanels']: if subPanel['reportlet']['type'] == "FreeformReportlet": reportlet = subPanel['reportlet'] rows = reportlet['freeformTable'] if 'dimension' in rows.keys(): dict_elements['dimensions'].append(f"{rows['dimension']['id']}{tmp_rsid}") if len(rows["staticRows"]) > 0: for row in rows["staticRows"]: ## I have to get a temp dimension to clean them before loading them in order to avoid counting them multiple time for each rows. temp_list_dim = [] componentType: str = row.get('component',{}).get('type','') if componentType == "DimensionItem": temp_list_dim.append(f"{row['component']['id']}{tmp_rsid}") elif componentType == "Segments" or componentType == "Segment": dict_elements['segments'].append(row['component']['id']) elif componentType == "Metric": dict_elements['metrics'].append(f"{row['component']['id']}{tmp_rsid}") elif componentType == "CalculatedMetric": dict_elements['calculatedMetrics'].append(row['component']['id']) if len(temp_list_dim) > 0: temp_list_dim = list(set([el[:el.find('::')] for el in temp_list_dim])) for dim in temp_list_dim: dict_elements['dimensions'].append(f"{dim}{tmp_rsid}") columns = reportlet['columnTree'] for node in columns['nodes']: temp_data = self._recursiveColumn(node,tmp_rsid=tmp_rsid) dict_elements['calculatedMetrics'] += temp_data['calculatedMetrics'] dict_elements['segments'] += temp_data['segments'] dict_elements['metrics'] += temp_data['metrics'] if len(temp_data['dimensions']) > 0: for dim in set(temp_data['dimensions']): dict_elements['dimensions'].append(dim) dict_elements['metrics'] = list(set(dict_elements['metrics'])) dict_elements['segments'] = list(set(dict_elements['segments'])) dict_elements['dimensions'] = list(set(dict_elements['dimensions'])) dict_elements['calculatedMetrics'] = list(set(dict_elements['calculatedMetrics'])) return dict_elements def _recursiveColumn(self, node: dict = None, temp_data: dict = None,tmp_rsid:str=""): """ recursive function to fetch elements in column stack 
tmp_rsid : OPTIONAL : empty by default, if rsid is pass, it will add the value to dimension and metrics """ if temp_data is None: temp_data: dict = {'dimensions': [], "metrics": [], 'segments': [], "reportSuites": [], 'calculatedMetrics': []} componentType: str = node.get('component',{}).get('type','') if componentType == "Metric": temp_data['metrics'].append(f"{node['component']['id']}{tmp_rsid}") elif componentType == "CalculatedMetric": temp_data['calculatedMetrics'].append(node['component']['id']) elif componentType == "Segment": temp_data['segments'].append(node['component']['id']) elif componentType == "DimensionItem": old_id: str = node['component']['id'] new_id: str = old_id[:old_id.find('::')] temp_data['dimensions'].append(f"{new_id}{tmp_rsid}") if len(node['nodes']) > 0: for new_node in node['nodes']: temp_data = self._recursiveColumn(new_node, temp_data=temp_data,tmp_rsid=tmp_rsid) return temp_data def to_dict(self) -> dict: """ transform the class into a dictionary """ obj = { 'id': self.id, 'name': self.name, 'description': self.description, 'rsid': self.rsid, 'ownerName': self.ownerName, 'ownerId': self.ownerId, 'ownerEmail': self.ownerEmail, 'template': self.template, 'reportType':self.reportType, 'curation': self.curation or False, 'version': self.version or None, } add_object = {} if hasattr(self, 'nbPanels'): add_object = { 'curation': self.curation, 'version': self.version, 'nbPanels': self.nbPanels, 'nbSubPanels': self.nbSubPanels, 'subPanelsTypes': self.subPanelsTypes, 'nbElementsUsed': self.nbElementsUsed, 'dimensions': self.elementsUsed['dimensions'], 'metrics': self.elementsUsed['metrics'], 'segments': self.elementsUsed['segments'], 'calculatedMetrics': self.elementsUsed['calculatedMetrics'], 'rsids': self.elementsUsed['reportSuites'], } full_obj = {**obj, **add_object} return full_obj
# Source file: aanalytics2/projects.py (package: Adobe-Lib-Manual, archive: Adobe_Lib_Manual-4.2.tar.gz)
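A recurring detail in `Project._findElements` and `_recursiveColumn` is the cleaning of `DimensionItem` ids: an id such as `'variables/evar7.6::3000623228'` carries the selected item after `::`, and only the dimension part before it is kept, with duplicates then removed via `set()`. A minimal sketch with hypothetical sample ids:

```python
# Sketch of the DimensionItem id cleaning used in Project._findElements:
# keep only the part before '::' and deduplicate. Sample ids are hypothetical.

def clean_dimension_ids(item_ids: list) -> list:
    cleaned = [i[:i.find('::')] for i in item_ids]
    return sorted(set(cleaned))


ids = [
    'variables/evar7.6::3000623228',
    'variables/evar7.6::3000623229',
    'variables/page::12345',
]
print(clean_dimension_ids(ids))
# ['variables/evar7.6', 'variables/page']
```

The class applies the same dedup step to metrics, segments, and calculated metrics before counting `nbElementsUsed`, which is why the comment in the source warns about counting static rows multiple times.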
import gzip import io from concurrent import futures from pathlib import Path from typing import IO, Union # Non standard libraries import pandas as pd import requests from aanalytics2 import config, connector class DIAPI: """ This class provide an easy way to use the Data Insertion API. You can initialize it with the required information to be present in the request and then select to send POST or GET request. Arguments to instantiate: rsid : REQUIRED : Report Suite ID tracking_server : REQUIRED : tracking server for tracking. example : "xxxx.sc.omtrdc.net" """ def __init__(self, rsid: str = None, tracking_server: str = None): """ Arguments: rsid : REQUIRED : Report Suite ID tracking_server : REQUIRED : tracking server for tracking. """ if rsid is None: raise Exception("Expecting a ReportSuite ID (rsid)") self.rsid = rsid if tracking_server is None: raise Exception("Expecting a tracking server") self.tracking_server = tracking_server try: import importlib.resources as pkg_resources path = pkg_resources.path("aanalytics2", "supported_tags.pickle") except ImportError: # Try backported to PY<37 with pkg_resources. try: import pkg_resources path = pkg_resources.resource_filename( "aanalytics2", "supported_tags.pickle") except: print('no supported_tags file') try: with path as f: self.REFERENCE = pd.read_pickle(f) except: self.REFERENCE = None def getMethod(self, pageName: str = None, g: str = None, pe: str = None, pev1: str = None, pev2: str = None, events: str = None, **kwargs): """ Use the GET method to send information to Adobe Analytics Arguments: pageName : REQUIRED : The Web page name. g : REQUIRED : The Web page URL pe : OPTIONAL : For custom link tracking (Type of link ("d", "e", or "o")) if selected, require "pev1" or "pev2", additionally pageName is set to Null pev1 : OPTIONAL : The link's HREF. For custom links, page values are ignored. pev2 : OPTIONAL : Name of link. 
events : OPTIONAL : If you want to pass some events Possible kwargs: - see the SUPPORTED_TAGS attributes. Tags should be in the supported format. """ if pageName is None and g is None: raise Exception("Expecting a pageName or g arguments") if pe is not None and pe not in ["d", "e", "o"]: raise Exception('Expecting pe argument to be ("d", "e", or "o")') header = {'Content-Type': 'application/json'} endpoint = f"https://{self.tracking_server}/b/ss/{self.rsid}/0" params = {"pageName": pageName, "g": g, "pe": pe, "pev1": pev1, "pev2": pev2, "events": events, **kwargs} res = requests.get(endpoint, params=params, headers=header) return res def postMethod(self, pageName: str = None, pageURL: str = None, linkType: str = None, linkURL: str = None, linkName: str = None, events: str = None, **kwargs): """ Use the POST method to send information to Adobe Analytics Arguments: pageName : REQUIRED : The Web page name. pageURL : REQUIRED : The Web page URL linkType : OPTIONAL : For custom link tracking (Type of link ("d", "e", or "o")) if selected, require "pev1" or "pev2", additionally pageName is set to Null linkURL : OPTIONAL : The link's HREF. For custom links, page values are ignored. linkName : OPTIONAL : Name of link. events : OPTIONAL : If you want to pass some events Possible kwargs: - see the SUPPORTED_TAGS attributes. Tags should be in the supported format. 
""" if pageName is None and pageURL is None: raise Exception("Expecting a pageName or pageURL argument") if linkType is not None and linkType not in ["d", "e", "o"]: raise Exception('Expecting pe argument to be ("d", "e", or "o")') header = {'Content-Type': 'application/xml'} endpoint = f"https://{self.tracking_server}/b/ss//6" dictionary = {"pageName": pageName, "pageURL": pageURL, "linkType": linkType, "linkURL": linkURL, "linkName": linkName, "events": events, "reportSuite": self.rsid, **kwargs} import dicttoxml as dxml myxml = dxml.dicttoxml( dictionary, custom_root='request', attr_type=False) xml_data = myxml.decode() res = requests.post(endpoint, data=xml_data, headers=header) return res class Bulkapi: """ This is the bulk API from Adobe Analytics. By default, the file are sent to the global endpoints for auto-routing. If you wish to select a specific endpoint, you can modify it during instantiation. It requires you to upload some adobeio configuration file through the main aanalytics2 module. Arguments: endpoint : OPTIONAL : by default using https://analytics-collection.adobe.io """ def __init__(self, endpoint: str = "https://analytics-collection.adobe.io", config_object: dict = config.config_object): """ Initialize the Bulk API connection. Returns an object with methods to send data to Analytics. Arguments: endpoint : REQUIRED : Endpoint to send data to. Default to analytics-collection.adobe.io possible values, on top of the default choice are: - https://analytics-collection-va7.adobe.io (US) - https://analytics-collection-nld2.adobe.io (EU) config_object : REQUIRED : config object containing the different information to send data. """ self.endpoint = endpoint try: import importlib.resources as pkg_resources path = pkg_resources.path( "aanalytics2", "CSV_Column_and_Query_String_Reference.pickle") except ImportError: try: # Try backported to PY<37 `importlib_resources`. 
import pkg_resources path = pkg_resources.resource_filename( "aanalytics2", "CSV_Column_and_Query_String_Reference.pickle") except: print('no CSV_Column_and_Query_string_Reference file') try: with path as f: self.REFERENCE = pd.read_pickle(f) except: self.REFERENCE = None # if no token has been generated. self.connector = connector.AdobeRequest() self.header = self.connector.header self.header["x-adobe-vgid"] = "ingestion" del self.header["Content-Type"] self._createdFiles = [] def validation(self, file: IO = None,encoding:str='utf-8', **kwargs): """ Send the file to a validation endpoint. Return the response object from requests. Argument: file : REQUIRED : File in a string of byte format. encoding : OPTIONAL : type of encoding used for the file. Possible kwargs: compress_level : handle the compression level, from 0 (no compression) to 9 (slow but more compressed). default 5. """ compress_level = kwargs.get("compress_level", 5) if file is None: raise Exception("Expecting a file") path = "/aa/collect/v1/events/validate" if file.endswith(".gz") == False: with open(file, "r",encoding=encoding) as f: content = f.read() data = gzip.compress(content.encode('utf-8'), compresslevel=compress_level) filename = f"{file}.gz" elif file.endswith(".gz"): filename = file with open(file, "rb") as f: data = f.read() res = requests.post(self.endpoint + path, files={"file": (None, data)}, headers=self.header) return res def generateTemplate(self, includeAdv: bool = False, returnDF: bool = False, save: bool = True): """ Generate a CSV file with minimum fields. Arguments: includeAdv : OPTIONAL : Include advanced fields in the csv (pe & queryString). Not included by default to avoid confusion for new users. (Default False) returnDF : OPTIONAL : Return a pandas dataFrame if you want to work directly with a data frame.(default False) save : OPTIONAL : Save the file created directly in your working folder. 
""" ## 2 rows being created string = """timestamp,marketingCloudVisitorID,events,pageName,pageURL,reportSuiteID,userAgent,pe,queryString\ntimestampValuePOSIX/Epoch Time (e.g. 1486769029) or ISO-8601 (e.g. 2017-02-10T16:23:49-07:00),marketingCloudVisitorIDValue,eventsValue,pageNameValue,pageURLValue,reportSuiteIDValue,userAgentValue,peValue,queryStringValue """ data = io.StringIO(string) df = pd.read_csv(data, sep=',') if includeAdv == False: df.drop(["pe", "queryString"], axis=1, inplace=True) if save: df.to_csv('template.csv', index=False) if returnDF: return df def _checkFiles(self, file: str = None,encoding:str = "utf-8"): """ Internal method that check content and format of the file """ if file.endswith(".gz"): return file else: # if sending not gzipped file. new_folder = Path('tmp/') new_folder.mkdir(exist_ok=True) with open(file, "r",encoding=encoding) as f: content = f.read() new_path = new_folder / f"{file}.gz" with gzip.open(Path(new_path), 'wb') as f: f.write(content.encode('utf-8')) # save the filename to delete self._createdFiles.append(new_path) return new_path def sendFiles(self, files: Union[list, IO] = None,encoding:str='utf-8',**kwargs): """ Method to send the file(s) through the Bulk API. Returns a list with the different status file sent. Arguments: files : REQUIRED : file to be send to the aalytics collection server. It can be a list or the name of the file to be send. If list is being send, we assume that each file are to be sent in different visitor groups. If file are not gzipped, we will compress the file and saved it as gz in the folder. encoding : OPTIONAL : if encoding is different that default utf-8. possible kwargs: workers : maximum amount of worker for parallele processing. 
(default 4) """ path = "/aa/collect/v1/events" if files is None: raise Exception("Expecting a file") compress_level = kwargs.get("compress_level", 5) files_gz = list() if type(files) == list: for file in files: fileName = self._checkFiles(file,encoding=encoding) files_gz.append(fileName) elif type(files) == str: fileName = self._checkFiles(files,encoding=encoding) files_gz.append(fileName) vgid_headers = [f"ingestion_{x}" for x in range(len(files_gz))] list_headers = [{**self.header, 'x-adobe-vgid': vgid} for vgid in vgid_headers] list_urls = [self.endpoint + path for x in range(len(files_gz))] list_files = ({"file": (None, open(Path(file), "rb").read())} for file in files_gz) # generator for files workers_input = kwargs.get("workers", 4) workers = max(1, workers_input) with futures.ThreadPoolExecutor(workers) as executor: res = executor.map(lambda x, y, z: requests.post( x, headers=y, files=z), list_urls, list_headers, list_files) list_res = [response.json() for response in res] # cleaning temp folder if len(self._createdFiles) > 0: for file in self._createdFiles: file_path = Path(file) file_path.unlink() self._createdFiles = [] tmp = Path('tmp/') tmp.rmdir() return list_res
# Source file: aanalytics2/ingestion.py (package: Adobe-Lib-Manual, archive: Adobe_Lib_Manual-4.2.tar.gz)
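Before uploading, `Bulkapi.validation` and `_checkFiles` gzip any file that is not already a `.gz`: the text content is read, encoded as UTF-8, and compressed (compresslevel 5 by default in `validation`). The same step can be shown in memory with the stdlib alone, using a CSV string shaped like the `generateTemplate` output:

```python
import gzip

# Sketch of the compression step Bulkapi applies before upload: non-.gz
# content is UTF-8 encoded and gzip-compressed (compresslevel 5 by default
# in validation()). Here the CSV lives in memory instead of on disk.

csv_content = (
    "timestamp,marketingCloudVisitorID,events,pageName\n"
    "1486769029,visitorA,event1,Home\n"
)
data = gzip.compress(csv_content.encode('utf-8'), compresslevel=5)

# round-trip to show the payload is recoverable
assert gzip.decompress(data).decode('utf-8') == csv_content
print(len(csv_content.encode('utf-8')), '->', len(data), 'bytes (gzipped)')
```

In the library the compressed payload is then posted as a multipart file to `/aa/collect/v1/events` (or the `/validate` variant) with the `x-adobe-vgid` visitor-group header.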
import pandas as pd
from typing import Union

from aanalytics2 import config, connector


class LegacyAnalytics:
    """
    Class that will help you realize basic requests to the old API 1.4 endpoints.
    """

    def __init__(self, company_name: str = None, config: dict = config.config_object) -> None:
        """
        Instantiate the Legacy Analytics wrapper.
        """
        if company_name is None:
            raise Exception("Require a company name")
        self.connector = connector.AdobeRequest(config_object=config)
        self.token = self.connector.token
        self.endpoint = "https://api.omniture.com/admin/1.4/rest"
        self.header = {
            'Accept': 'application/json',
            'Authorization': f'Bearer {self.token}',
            'X-ADOBE-DMA-COMPANY': company_name
        }

    def getData(self, path: str = "/", method: str = None, params: dict = None) -> dict:
        """
        Use the GET method with the parameters provided.
        Arguments:
            path : REQUIRED : If you need a specific path (default "/")
            method : OPTIONAL : if you want to pass the method directly there as a parameter.
            params : OPTIONAL : If you need to pass parameters to your url, use a dictionary, i.e. {"param": "value"}
        """
        if params is not None and type(params) != dict:
            raise TypeError("Require a dictionary")
        myParams = {}
        myParams.update(**params or {})
        if method is not None:
            myParams['method'] = method
        res = self.connector.getData(self.endpoint + path, params=myParams, headers=self.header, legacy=True)
        return res

    def postData(self, path: str = "/", method: str = None, params: dict = None,
                 data: Union[dict, list] = None) -> dict:
        """
        Use the POST method with the parameters provided.
        Arguments:
            path : REQUIRED : If you need a specific path (default "/")
            method : OPTIONAL : if you want to pass the method directly there as a parameter.
            params : OPTIONAL : If you need to pass parameters to your url, use a dictionary, i.e. {"param": "value"}
            data : OPTIONAL : Usually required to pass the dictionary or list to the request
        """
        if params is not None and type(params) != dict:
            raise TypeError("Require a dictionary")
        if data is not None and (type(data) != dict and type(data) != list):
            raise TypeError("data should be dictionary or list")
        myParams = {}
        myParams.update(**params or {})
        if method is not None:
            myParams['method'] = method
        res = self.connector.postData(self.endpoint + path, params=myParams, data=data,
                                      headers=self.header, legacy=True)
        return res
Adobe-Lib-Manual
/Adobe_Lib_Manual-4.2.tar.gz/Adobe_Lib_Manual-4.2/aanalytics2/aanalytics14.py
aanalytics14.py
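The class above delegates the actual HTTP call to `connector.AdobeRequest`, but the interesting part of the 1.4 wrapper is how the request URL is assembled: base endpoint plus path, with the 1.4 API method passed as a query parameter. Below is a minimal, dependency-free sketch of that composition; `build_legacy_url` is a hypothetical helper written for illustration, not part of the library.

```python
from urllib.parse import urlencode

ENDPOINT = "https://api.omniture.com/admin/1.4/rest"

def build_legacy_url(path: str = "/", method: str = None, params: dict = None) -> str:
    """Compose a 1.4 API URL the way LegacyAnalytics.getData assembles its request:
    endpoint + path, with the legacy 'method' carried as a query parameter."""
    if params is not None and not isinstance(params, dict):
        raise TypeError("Require a dictionary")
    query = dict(params or {})
    if method is not None:
        query["method"] = method
    url = ENDPOINT + path
    if query:
        url += "?" + urlencode(query)
    return url

url = build_legacy_url(method="Company.GetReportSuites")
```

Authentication headers (`Authorization`, `X-ADOBE-DMA-COMPANY`) are sent separately by the connector, so they never appear in the URL.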
import struct

from _color import Color
import _helper


class _ColorReader(object):
    def __init__(self, stream, offset, count):
        self._stream = stream
        self._offset = offset
        self._count = count

    def __iter__(self):
        for i in range(self._count):
            self._offset, color_space = _helper.get_ushort(self._stream, self._offset)
            self._offset, w = _helper.get_ushort(self._stream, self._offset)
            self._offset, x = _helper.get_ushort(self._stream, self._offset)
            self._offset, y = _helper.get_ushort(self._stream, self._offset)
            self._offset, z = _helper.get_ushort(self._stream, self._offset)
            yield "Unnamed Color {0}".format(i + 1), Color.from_adobe(color_space, w, x, y, z)


class _ColorReaderWithName(_ColorReader):
    def _read_name(self):
        """
        Word size = 2
        """
        # Marks the start of the string
        self._offset, _ = _helper.validate_ushort_is_any(self._stream, (0, ), self._offset)
        self._offset, length = _helper.get_ushort(self._stream, self._offset)
        data = self._stream[self._offset:self._offset + (length - 1) * 2]
        name = data.decode('utf-16-be')
        self._offset += (length - 1) * 2
        self._offset, _ = _helper.validate_ushort_is_any(self._stream, (0, ), self._offset)
        return name

    def __iter__(self):
        colors = tuple(super(_ColorReaderWithName, self).__iter__())
        # Version 1 information can be ignored
        self._offset, _ = _helper.validate_ushort_is_any(self._stream, (2, ), self._offset)
        self._offset, length = _helper.get_ushort(self._stream, self._offset)
        if length != len(colors):
            raise ValueError("Length of names is not the same as the length of colors")
        for name, color in super(_ColorReaderWithName, self).__iter__():
            name = self._read_name()
            yield name, color


class Aco(object):
    READERS = [
        _ColorReader,
        _ColorReaderWithName
    ]

    def __init__(self, stream):
        offset, self._version = _helper.validate_ushort_is_any(stream, (0, 1))
        offset, self._color_count = _helper.get_ushort(stream, offset)
        self._colors = []
        self._key_mapping = {}
        self._read_colors(stream, offset)

    def _read_colors(self, stream, offset):
        reader = self.READERS[self._version](stream, offset, self._color_count)
        index = 0
        for name, color in reader:
            self._colors.append(color)
            self._key_mapping[name] = index
            index += 1

    def keys(self):
        return self._key_mapping.keys()

    @property
    def length(self):
        return self._color_count

    def __getitem__(self, value):
        # if we are not a number, assume a key, and look it up
        if not isinstance(value, (int, long)):
            value = self._key_mapping[value]
        return self._colors[value]
AdobeColor
/AdobeColor-0.1.tar.gz/AdobeColor-0.1/adobecolor/_aco.py
_aco.py
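The `_helper` module is not part of this listing, but the way `_ColorReader` uses it implies a simple contract: `get_ushort(stream, offset)` reads one big-endian unsigned 16-bit word and returns the advanced offset together with the value. A `struct`-based sketch of that assumed contract, exercised against a hand-packed one-color palette header:

```python
import struct

def get_ushort(stream: bytes, offset: int):
    """Read one big-endian unsigned 16-bit word.
    Returns (new_offset, value), matching the (offset, value) pattern
    _ColorReader relies on. This signature is inferred, not copied from _helper."""
    (value,) = struct.unpack_from(">H", stream, offset)
    return offset + 2, value

# A version-0 ACO header followed by one color record:
# version=0, count=1, then color_space=0 (RGB) and w,x,y,z (pure red).
data = struct.pack(">7H", 0, 1, 0, 0xFFFF, 0, 0, 0)
offset, version = get_ushort(data, 0)
offset, count = get_ushort(data, offset)
offset, color_space = get_ushort(data, offset)
```

Reading the remaining four words with the same helper yields the `w, x, y, z` channel values fed to `Color.from_adobe`.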
import _helper
import colour


class Color(object):
    def __init__(self, w, x, y, z):
        self._w = w
        self._x = x
        self._y = y
        self._z = z
        self._convert()

    def _convert(self):
        """
        Override this to convert w,x,y,z values into color local values
        """
        pass

    @property
    def hex(self):
        """
        Override this to convert to an RGB hex string
        """
        if hasattr(self, "_rgb"):
            (r, g, b) = self._rgb
            return "{0:02X}{1:02X}{2:02X}".format(r, g, b)
        return "Unknown!"

    @classmethod
    def from_adobe(cls, color_space, w, x, y, z):
        if color_space in _SPACE_MAPPER:
            return _SPACE_MAPPER[color_space](w, x, y, z)

    @property
    def colorspace(self):
        return type(self).__name__[1:-5]

    @property
    def value(self):
        """
        Override to display color value
        """
        return ""

    def __repr__(self):
        return "<Color colorspace={0} value={1}>".format(
            self.colorspace, self.value
        )


class _RGBColor(Color):
    def _convert(self):
        self._r = int(self._w / 256)
        self._g = int(self._x / 256)
        self._b = int(self._y / 256)

    @property
    def value(self):
        mapping = (
            ("r", self._r),
            ("g", self._g),
            ("b", self._b),
        )
        return " ".join(("{0}={1:02X}".format(x, y) for x, y in mapping))

    @property
    def hex(self):
        return "".join(map("{0:02X}".format, (self._r, self._g, self._b)))


class _HSBColor(Color):
    def _convert(self):
        self._h = int(self._w / 182.04)
        self._s = int(self._x / 655.35)
        self._b = int(self._y / 655.35)

    @property
    def value(self):
        mapping = (
            ("h", self._h),
            ("s", self._s),
            ("b", self._b),
        )
        return " ".join(("{0}={1:d}".format(x, y) for x, y in mapping))

    def _map(self, i, t, p, q, brightness):
        mapper = (
            (brightness, t, p),
            (q, brightness, p),
            (p, brightness, t),
            (p, q, brightness),
            (t, p, brightness),
            (brightness, p, q)
        )
        if i > len(mapper):
            data = mapper[-1]
        else:
            data = mapper[i]
        return data

    @property
    def _rgb(self):
        h = self._h / 360.0
        s = self._s / 100.0
        b = self._b / 100.0
        return map(lambda x: int(x * 255.0), colour.hsl2rgb((h, s, b)))


class _CMYKColor(Color):
    pass


class _LabColor(Color):
    pass


class _GrayColor(Color):
    pass


class _WideCMYKColor(Color):
    pass


_SPACE_MAPPER = {
    0: _RGBColor,
    1: _HSBColor,
    2: _CMYKColor,
    7: _LabColor,
    8: _GrayColor,
    9: _WideCMYKColor
}
AdobeColor
/AdobeColor-0.1.tar.gz/AdobeColor-0.1/adobecolor/_color.py
_color.py
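`_RGBColor` stores Adobe's 16-bit channel values (0-65535) and collapses each to 8 bits by integer-dividing by 256 before hex formatting. That conversion is self-contained and easy to verify in isolation; `rgb16_to_hex` below is a standalone restatement of the same arithmetic, not a function from the module:

```python
def rgb16_to_hex(w: int, x: int, y: int) -> str:
    """Collapse Adobe's 16-bit RGB channels (0-65535) to 8-bit values and
    format them as an RRGGBB hex string, mirroring what _RGBColor._convert
    and its .hex property do together."""
    r, g, b = (int(v / 256) for v in (w, x, y))
    return "{0:02X}{1:02X}{2:02X}".format(r, g, b)

hex_red = rgb16_to_hex(0xFFFF, 0, 0x8080)  # full red, no green, mid blue
```

Note that 0xFFFF // 256 is 255, so the full-scale 16-bit value maps cleanly to the 8-bit maximum.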
# AdobeConnect2Video

![PyPI](https://img.shields.io/pypi/v/AdobeConnect2Video?style=for-the-badge) [![MIT License](https://img.shields.io/badge/License-MIT-yellow.svg?style=for-the-badge)](https://github.com/AliRezaBeigy/AdobeConnect2Video/blob/master/LICENSE) [![PR's Welcome](https://img.shields.io/badge/PRs-welcome-brightgreen.svg?style=for-the-badge)](http://makeapullrequest.com) ![GitHub Repo stars](https://img.shields.io/github/stars/AliRezaBeigy/AdobeConnect2Video?style=for-the-badge) ![PyPI - Downloads](https://img.shields.io/pypi/dm/AdobeConnect2Video?style=for-the-badge)

A handy tool to convert Adobe Connect zip data into a single video file.

## Requirement

- Python 3
- FFmpeg
  - Download [ffmpeg](https://www.ffmpeg.org/download.html) and put the installation path into the PATH environment variable

## Quick Start

You need [ffmpeg](https://www.ffmpeg.org) to use this app, so simply download ffmpeg from the [official site](https://www.ffmpeg.org/download.html), then put the installation path into the PATH environment variable.

Now install AdobeConnect2Video as a global app:

```shell
$ pip install -U AdobeConnect2Video
or
$ python -m pip install -U AdobeConnect2Video
```

**Use the `-U` option to update AdobeConnect2Video to the latest version.**

## Usage

Download the Adobe Connect zip data by appending the following path to the address of the online recorded class:

```url
output/ClassName.zip?download=zip
```

For example, if your online recorded class link is

```url
http://online.GGGGG.com/p81var0hcdk5/
```

you can download the zip data from the following link:

```url
http://online.GGGGG.com/p81var0hcdk5/output/ClassName.zip?download=zip
```

```shell
$ AdobeConnect2Video
$ AdobeConnect2Video -i [classId]
```

Example: if the data is extracted into a 'Course1' directory inside a 'data' directory and you want the output in an 'output' directory with 480x470 resolution, you can use the following command:

```shell
$ AdobeConnect2Video -i Course1 -d data -o output -r 480x470
```

For more details:

```text
$ AdobeConnect2Video -h
usage: AdobeConnect2Video [-h] -i ID [-d DATA_PATH] [-o OUTPUT_PATH] [-r RESOLUTION]

options:
  -h --help         show this help message and exit
  -i --id ID        the name of directory data is available
  -d --data-path    the path extracted data must be available as directory
  -o --output-path  the output path that generated data saved
  -r --resolution   the resolution of output video
```

## Contributions

If you're interested in contributing to this project, first of all I would like to extend my heartfelt gratitude. Please feel free to reach out to me if you need help.

My Email: [email protected]

Telegram: [@AliRezaBeigy](https://t.me/AliRezaBeigyKhu)

## LICENSE

MIT
AdobeConnect2Video
/AdobeConnect2Video-1.0.0.tar.gz/AdobeConnect2Video-1.0.0/README.md
README.md
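The README's URL recipe (class link + `output/ClassName.zip?download=zip`) is mechanical enough to script when you have many recordings to fetch. A small Python sketch of that string composition; `recording_zip_url` is an illustrative helper, not part of the tool:

```python
def recording_zip_url(class_url: str, class_name: str = "ClassName") -> str:
    """Build the zip-download URL by appending the suffix documented in the
    README to a recorded-class link. Handles links with or without a
    trailing slash."""
    if not class_url.endswith("/"):
        class_url += "/"
    return f"{class_url}output/{class_name}.zip?download=zip"

url = recording_zip_url("http://online.GGGGG.com/p81var0hcdk5/")
```

The resulting archive is what you extract into the tool's `--data-path` directory before running the conversion.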
from copy import deepcopy
import datetime
import json
from time import time


class RequestCreator:
    """
    A class to help build a request for Adobe Analytics API 2.0 getReport
    """

    template = {
        "globalFilters": [],
        "metricContainer": {
            "metrics": [],
            "metricFilters": [],
        },
        "settings": {
            "countRepeatInstances": True,
            "limit": 20000,
            "page": 0,
            "nonesBehavior": "exclude-nones",
        },
        "statistics": {"functions": ["col-max", "col-min"]},
        "rsid": "",
    }

    def __init__(self, request: dict = None) -> None:
        """
        Instantiate the constructor.
        Arguments:
            request : OPTIONAL : overwrite the template with the definition provided.
        """
        if request is not None:
            if type(request) == str and '.json' in request:
                with open(request, 'r') as f:
                    request = json.load(f)
        self.__request = deepcopy(request) or deepcopy(self.template)
        self.__metricCount = len(self.__request["metricContainer"]["metrics"])
        self.__metricFilterCount = len(
            self.__request["metricContainer"].get("metricFilters", [])
        )
        self.__globalFiltersCount = len(self.__request["globalFilters"])
        ### Preparing some time statements.
        today = datetime.datetime.now()
        today_date_iso = today.isoformat().split("T")[0]  ## should give '20XX-XX-XX'
        tomorrow_date_iso = (
            (today + datetime.timedelta(days=1)).isoformat().split("T")[0]
        )
        time_start = "T00:00:00.000"
        time_end = "T23:59:59.999"
        startToday_iso = today_date_iso + time_start
        endToday_iso = today_date_iso + time_end
        startMonth_iso = f"{today_date_iso[:-2]}01{time_start}"
        tomorrow_iso = tomorrow_date_iso + time_start
        next_month = today.replace(day=28) + datetime.timedelta(days=4)
        last_day_month = next_month - datetime.timedelta(days=next_month.day)
        last_day_month_date_iso = last_day_month.isoformat().split("T")[0]
        last_day_month_iso = last_day_month_date_iso + time_end
        thirty_days_prior_date_iso = (
            (today - datetime.timedelta(days=30)).isoformat().split("T")[0]
        )
        thirty_days_prior_iso = thirty_days_prior_date_iso + time_start
        seven_days_prior_iso_date = (
            (today - datetime.timedelta(days=7)).isoformat().split("T")[0]
        )
        seven_days_prior_iso = seven_days_prior_iso_date + time_start
        ### assigning predefined dates:
        self.dates = {
            "thisMonth": f"{startMonth_iso}/{last_day_month_iso}",
            "untilToday": f"{startMonth_iso}/{startToday_iso}",
            "todayIncluded": f"{startMonth_iso}/{endToday_iso}",
            "last30daysTillToday": f"{thirty_days_prior_iso}/{startToday_iso}",
            "last30daysTodayIncluded": f"{thirty_days_prior_iso}/{tomorrow_iso}",
            "last7daysTillToday": f"{seven_days_prior_iso}/{startToday_iso}",
            "last7daysTodayIncluded": f"{seven_days_prior_iso}/{endToday_iso}",
        }
        self.today = today

    def __repr__(self):
        return json.dumps(self.__request, indent=4)

    def __str__(self):
        return json.dumps(self.__request, indent=4)

    def addMetric(self, metricId: str = None) -> None:
        """
        Add a metric to the template.
        Arguments:
            metricId : REQUIRED : The metric to add
        """
        if metricId is None:
            raise ValueError("Require a metric ID")
        columnId = self.__metricCount
        addMetric = {"columnId": str(columnId), "id": metricId}
        if columnId == 0:
            addMetric["sort"] = "desc"
        self.__request["metricContainer"]["metrics"].append(addMetric)
        self.__metricCount += 1

    def removeMetrics(self) -> None:
        """
        Remove all metrics.
        """
        self.__request["metricContainer"]["metrics"] = []
        self.__metricCount = 0

    def getMetrics(self) -> list:
        """
        Return a list of the metrics used.
        """
        return [metric["id"] for metric in self.__request["metricContainer"]["metrics"]]

    def setSearch(self, clause: str = None) -> None:
        """
        Add a search clause in the Analytics request.
        Arguments:
            clause : REQUIRED : String to tell what search clause to add.
                Examples:
                "( CONTAINS 'unspecified' ) OR ( CONTAINS 'none' ) OR ( CONTAINS '' )"
                "( MATCH 'undefined' )"
                "( NOT CONTAINS 'undefined' )"
                "( BEGINS-WITH 'undefined' )"
                "( BEGINS-WITH 'undefined' ) AND ( BEGINS-WITH 'none' )"
        """
        if clause is None:
            raise ValueError("Require a clause to add to the request")
        self.__request["search"] = {
            "clause": clause
        }

    def removeSearch(self) -> None:
        """
        Remove the search associated with the request.
        """
        del self.__request["search"]

    def addMetricFilter(self, metricId: str = None, filterId: str = None, metricIndex: int = None) -> None:
        """
        Add a filter to a metric.
        Arguments:
            metricId : REQUIRED : metric where the filter is added
            filterId : REQUIRED : The filter to add.
                When breakdown, use the following format for the value: "dimension:::itemId"
            metricIndex : OPTIONAL : If used, set the filter to the metric located on that index.
        """
        if metricId is None:
            raise ValueError("Require a metric ID")
        if filterId is None:
            raise ValueError("Require a filter ID")
        filterIdCount = self.__metricFilterCount
        if filterId.startswith("s") and "@AdobeOrg" in filterId:
            filterType = "segment"
            filter = {
                "id": str(filterIdCount),
                "type": filterType,
                "segmentId": filterId,
            }
        elif filterId.startswith("20") and "/20" in filterId:
            filterType = "dateRange"
            filter = {
                "id": str(filterIdCount),
                "type": filterType,
                "dateRange": filterId,
            }
        elif ":::" in filterId:
            filterType = "breakdown"
            dimension, itemId = filterId.split(":::")
            filter = {
                "id": str(filterIdCount),
                "type": filterType,
                "dimension": dimension,
                "itemId": itemId,
            }
        else:  ### case when it is predefined segments like "All_Visits"
            filterType = "segment"
            filter = {
                "id": str(filterIdCount),
                "type": filterType,
                "segmentId": filterId,
            }
        if filterIdCount == 0:
            self.__request["metricContainer"]["metricFilters"] = [filter]
        else:
            self.__request["metricContainer"]["metricFilters"].append(filter)
        ### adding filter to the metric
        if metricIndex is None:
            for metric in self.__request["metricContainer"]["metrics"]:
                if metric["id"] == metricId:
                    if "filters" in metric.keys():
                        metric["filters"].append(str(filterIdCount))
                    else:
                        metric["filters"] = [str(filterIdCount)]
        else:
            metric = self.__request["metricContainer"]["metrics"][metricIndex]
            if "filters" in metric.keys():
                metric["filters"].append(str(filterIdCount))
            else:
                metric["filters"] = [str(filterIdCount)]
        ### incrementing the filter counter
        self.__metricFilterCount += 1

    def removeMetricFilter(self, filterId: str = None) -> None:
        """
        Remove a filter from a metric.
        Arguments:
            filterId : REQUIRED : The filter to remove.
                When breakdown, use the following format for the value: "dimension:::itemId"
        """
        found = False  ## flag
        if filterId is None:
            raise ValueError("Require a filter ID")
        if ":::" in filterId:
            filterId = filterId.split(":::")[1]
        list_index = []
        for metricFilter in self.__request["metricContainer"]["metricFilters"]:
            if filterId in str(metricFilter):
                list_index.append(metricFilter["id"])
                found = True
        ## decrementing the filter counter
        if found:
            for metricFilterId in reversed(list_index):
                del self.__request["metricContainer"]["metricFilters"][int(metricFilterId)]
                for metric in self.__request["metricContainer"]["metrics"]:
                    if metricFilterId in metric.get("filters", []):
                        metric["filters"].remove(metricFilterId)
                self.__metricFilterCount -= 1

    def setLimit(self, limit: int = 100) -> None:
        """
        Specify the number of elements to retrieve. Default is 100.
        Arguments:
            limit : OPTIONAL : number of elements to return
        """
        self.__request["settings"]["limit"] = limit

    def setRepeatInstance(self, repeat: bool = True) -> None:
        """
        Specify if repeated instances should be counted.
        Arguments:
            repeat : OPTIONAL : True or False (True by default)
        """
        self.__request["settings"]["countRepeatInstances"] = repeat

    def setNoneBehavior(self, returnNones: bool = True) -> None:
        """
        Set the behavior of the None values in that request.
        Arguments:
            returnNones : OPTIONAL : True or False (True by default)
        """
        if returnNones:
            self.__request["settings"]["nonesBehavior"] = "return-nones"
        else:
            self.__request["settings"]["nonesBehavior"] = "exclude-nones"

    def setDimension(self, dimension: str = None) -> None:
        """
        Set the dimension to be used for reporting.
        Arguments:
            dimension : REQUIRED : the dimension to build your report on
        """
        if dimension is None:
            raise ValueError("A dimension must be passed")
        self.__request["dimension"] = dimension

    def setRSID(self, rsid: str = None) -> None:
        """
        Set the reportSuite ID to be used for the reporting.
        Arguments:
            rsid : REQUIRED : The reportSuite ID to be passed.
        """
        if rsid is None:
            raise ValueError("A reportSuite ID must be passed")
        self.__request["rsid"] = rsid

    def addGlobalFilter(self, filterId: str = None) -> None:
        """
        Add a global filter to the report.
        NOTE: You need to have at least a dateRange filter in the global filters.
        Arguments:
            filterId : REQUIRED : The filter to add to the global filter.
                Examples:
                "s2120430124uf03102jd8021" -> segment
                "2020-01-01T00:00:00.000/2020-02-01T00:00:00.000" -> dateRange
        """
        if filterId.startswith("s") and "@AdobeOrg" in filterId:
            filterType = "segment"
            filter = {
                "type": filterType,
                "segmentId": filterId,
            }
        elif filterId.startswith("20") and "/20" in filterId:
            filterType = "dateRange"
            filter = {
                "type": filterType,
                "dateRange": filterId,
            }
        elif ":::" in filterId:
            filterType = "breakdown"
            dimension, itemId = filterId.split(":::")
            filter = {
                "type": filterType,
                "dimension": dimension,
                "itemId": itemId,
            }
        else:  ### case when it is predefined segments like "All_Visits"
            filterType = "segment"
            filter = {
                "type": filterType,
                "segmentId": filterId,
            }
        ### incrementing the count for globalFilter
        self.__globalFiltersCount += 1
        ### adding to the globalFilter list
        self.__request["globalFilters"].append(filter)

    def updateDateRange(
        self,
        dateRange: str = None,
        shiftingDays: int = None,
        shiftingDaysEnd: int = None,
        shiftingDaysStart: int = None,
    ) -> None:
        """
        Update the dateRange filter on the globalFilter list.
        One of the elements specified below is required.
        Arguments:
            dateRange : OPTIONAL : string representing the new dateRange, such as:
                2020-01-01T00:00:00.000/2020-02-01T00:00:00.000
            shiftingDays : OPTIONAL : An integer, if you want to add or remove days from the current dateRange.
                Applies to both the beginning and the end of the dateRange.
                So 2020-01-01T00:00:00.000/2020-02-01T00:00:00.000 with +2 will give
                2020-01-03T00:00:00.000/2020-02-03T00:00:00.000
            shiftingDaysEnd : OPTIONAL : An integer, if you want to add or remove days from the end of the current dateRange.
                Applies only to the end of the dateRange.
                So 2020-01-01T00:00:00.000/2020-02-01T00:00:00.000 with +2 will give
                2020-01-01T00:00:00.000/2020-02-03T00:00:00.000
            shiftingDaysStart : OPTIONAL : An integer, if you want to add or remove days from the beginning of the current dateRange.
                Applies only to the beginning of the dateRange.
                So 2020-01-01T00:00:00.000/2020-02-01T00:00:00.000 with +2 will give
                2020-01-03T00:00:00.000/2020-02-01T00:00:00.000
        """
        pos = -1
        for index, filter in enumerate(self.__request["globalFilters"]):
            if filter["type"] == "dateRange":
                pos = index
                curDateRange = filter["dateRange"]
                start, end = curDateRange.split("/")
                start = datetime.datetime.fromisoformat(start)
                end = datetime.datetime.fromisoformat(end)
        if dateRange is not None and type(dateRange) == str:
            newDef = {
                "type": "dateRange",
                "dateRange": dateRange,
            }
        if shiftingDays is not None and type(shiftingDays) == int:
            newStart = (start + datetime.timedelta(shiftingDays)).isoformat(timespec="milliseconds")
            newEnd = (end + datetime.timedelta(shiftingDays)).isoformat(timespec="milliseconds")
            newDef = {
                "type": "dateRange",
                "dateRange": f"{newStart}/{newEnd}",
            }
        elif shiftingDaysEnd is not None and type(shiftingDaysEnd) == int:
            newEnd = (end + datetime.timedelta(shiftingDaysEnd)).isoformat(timespec="milliseconds")
            newDef = {
                "type": "dateRange",
                "dateRange": f"{start.isoformat(timespec='milliseconds')}/{newEnd}",
            }
        elif shiftingDaysStart is not None and type(shiftingDaysStart) == int:
            newStart = (start + datetime.timedelta(shiftingDaysStart)).isoformat(timespec="milliseconds")
            newDef = {
                "type": "dateRange",
                "dateRange": f"{newStart}/{end.isoformat(timespec='milliseconds')}",
            }
        if pos > -1:
            self.__request["globalFilters"][pos] = newDef
        else:  ## in case there is no dateRange already
            self.__request["globalFilters"].append(newDef)

    def removeGlobalFilter(self, index: int = None, filterId: str = None) -> None:
        """
        Remove a specific filter from the globalFilter list.
        You can use either the index of the list or the specific Id of the filter used.
        Arguments:
            index : REQUIRED : index in the list returned
            filterId : REQUIRED : the id of the filter to be removed (ex: segmentId, dateRange)
        """
        pos = -1
        if index is not None:
            del self.__request["globalFilters"][index]
        elif filterId is not None:
            for index, filter in enumerate(self.__request["globalFilters"]):
                if filterId in str(filter):
                    pos = index
            if pos > -1:
                del self.__request["globalFilters"][pos]
        ### decrementing the count for globalFilter
        self.__globalFiltersCount -= 1

    def to_dict(self) -> dict:
        """
        Return the request definition.
        """
        return deepcopy(self.__request)

    def save(self, fileName: str = None) -> None:
        """
        Save the request definition in a JSON file.
        Argument:
            fileName : OPTIONAL : Name of the file. (default aa_request_<timestamp>.json)
        """
        fileName = fileName or f"aa_request_{int(time())}.json"
        with open(fileName, "w") as f:
            f.write(json.dumps(self.to_dict(), indent=4))
AdobeLibManual678
/AdobeLibManual678-4.3.tar.gz/AdobeLibManual678-4.3/aanalytics2/requestCreator.py
requestCreator.py
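The date bookkeeping in `RequestCreator.__init__` relies on a compact last-day-of-month trick: jumping from the 28th by four days always lands in the next month (no month has fewer than 28 or more than 31 days), and stepping back by that landing day's number gives the final day of the original month. Isolated as a standalone function for clarity (this helper name is illustrative, not from the module):

```python
import datetime

def last_day_of_month(day: datetime.date) -> datetime.date:
    """Same arithmetic RequestCreator.__init__ uses to build 'thisMonth':
    the 28th + 4 days is guaranteed to fall in the next month; subtracting
    that date's day-of-month steps back to the last day of the original month."""
    next_month = day.replace(day=28) + datetime.timedelta(days=4)
    return next_month - datetime.timedelta(days=next_month.day)

leap_feb = last_day_of_month(datetime.date(2020, 2, 10))
```

This avoids `calendar.monthrange` and handles leap years and December's year rollover with no special cases.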
import pandas as pd
import json
from typing import Union, IO
import time
from .requestCreator import RequestCreator
from copy import deepcopy


class Workspace:
    """
    A class to return data from the getReport method.
    """

    startDate = None
    endDate = None
    settings = None

    def __init__(
        self,
        responseData: dict,
        dataRequest: dict = None,
        columns: dict = None,
        summaryData: dict = None,
        analyticsConnector: object = None,
        reportType: str = "normal",
        metrics: Union[dict, list] = None,  ## for normal type, static report
        metricFilters: dict = None,
        resolveColumns: bool = True,
    ) -> None:
        """
        Setup the different values from the response of the getReport.
        Arguments:
            responseData : REQUIRED : data returned & predigested by the getReport method.
            dataRequest : REQUIRED : dataRequest containing the request
            columns : REQUIRED : the columns element of the response.
            summaryData : REQUIRED : summary data containing the totals calculated by CJA
            analyticsConnector : REQUIRED : analytics object connector.
            reportType : OPTIONAL : define the type of report retrieved (normal, static, multi)
            metrics : OPTIONAL : dictionary of the column Ids for a normal report and list of column names for a static report
            metricFilters : OPTIONAL : filter name for the id of the filter
            resolveColumns : OPTIONAL : if you want to resolve the column names instead of returning the IDs
        """
        for filter in dataRequest["globalFilters"]:
            if filter["type"] == "dateRange":
                self.startDate = filter["dateRange"].split("/")[0]
                self.endDate = filter["dateRange"].split("/")[1]
        self.dataRequest = RequestCreator(dataRequest)
        self.requestSize = dataRequest["settings"]["limit"]
        self.settings = dataRequest["settings"]
        self.pageRequested = dataRequest["settings"]["page"] + 1
        self.summaryData = summaryData
        self.reportType = reportType
        self.analyticsObject = analyticsConnector
        ## global filters resolution
        filters = []
        for filter in dataRequest["globalFilters"]:
            if filter["type"] == "segment":
                segmentId = filter.get("segmentId", None)
                if segmentId is not None:
                    seg = self.analyticsObject.getSegment(filter["segmentId"])
                    filter["segmentName"] = seg["name"]
                else:
                    context = filter.get('segmentDefinition', {}).get('container', {}).get('context')
                    description = filter.get('segmentDefinition', {}).get('container', {}).get('pred', {}).get('description')
                    listName = ','.join(filter.get('segmentDefinition', {}).get('container', {}).get('pred', {}).get('list', []))
                    function = filter.get('segmentDefinition', {}).get('container', {}).get('pred', {}).get('func')
                    filter["segmentId"] = f"Dynamic: {context} {description} {function} {listName}"
                    filter["segmentName"] = f"{context} {description} {listName}"
            filters.append(filter)
        self.globalFilters = filters
        self.metricFilters = metricFilters
        if reportType == "normal" or reportType == "static":
            df_init = pd.DataFrame(responseData).T
            df_init = df_init.reset_index()
        elif reportType == "multi":
            df_init = responseData
        if reportType == "normal":
            columns_data = ["itemId"]
        elif reportType == "static":
            columns_data = ["SegmentName"]
        ### adding dimensions & metrics in columns names when reportType is "normal"
        if "dimension" in dataRequest.keys() and reportType == "normal":
            columns_data.append(dataRequest["dimension"])
            ### adding metrics in columns names
            columnIds = columns["columnIds"]
            # To get readable names of template metrics and Success Events, we need to
            # get the full list of metrics for the Report Suite first.
            # But we won't do this if there are no such metrics in the report.
            if (resolveColumns is True) & (
                len([metric for metric in metrics.values() if metric.startswith("metrics/")]) > 0
            ):
                rsMetricsList = self.analyticsObject.getMetrics(rsid=dataRequest["rsid"])
            for col in columnIds:
                metrics: dict = metrics  ## case when dict is used
                metricListName: list = metrics[col].split(":::")
                if resolveColumns:
                    metricResolvedName = []
                    for metric in metricListName:
                        if metric.startswith("cm"):
                            cm = self.analyticsObject.getCalculatedMetric(metric)
                            metricName = cm.get("name", metric)
                            metricResolvedName.append(metricName)
                        elif metric.startswith("s"):
                            seg = self.analyticsObject.getSegment(metric)
                            segName = seg.get("name", metric)
                            metricResolvedName.append(segName)
                        elif metric.startswith("metrics/"):
                            metricName = rsMetricsList[rsMetricsList["id"] == metric]["name"].iloc[0]
                            metricResolvedName.append(metricName)
                        else:
                            metricResolvedName.append(metric)
                    colName = ":::".join(metricResolvedName)
                    columns_data.append(colName)
                else:
                    columns_data.append(metrics[col])
        elif reportType == "static":
            metrics: list = metrics  ## case when a list is used
            columns_data.append("SegmentId")
            columns_data += metrics
        if df_init.empty == False and (reportType == "static" or reportType == "normal"):
            df_init.columns = columns_data
            self.columns = list(df_init.columns)
        elif reportType == "multi":
            self.columns = list(df_init.columns)
        else:
            self.columns = list(df_init.columns)
        self.row_numbers = len(df_init)
        self.dataframe = df_init

    def __str__(self):
        return json.dumps(
            {
                "startDate": self.startDate,
                "endDate": self.endDate,
                "globalFilters": self.globalFilters,
                "totalRows": self.row_numbers,
                "columns": self.columns,
            },
            indent=4,
        )

    def __repr__(self):
        return json.dumps(
            {
                "startDate": self.startDate,
                "endDate": self.endDate,
                "globalFilters": self.globalFilters,
                "totalRows": self.row_numbers,
                "columns": self.columns,
            },
            indent=4,
        )

    def to_csv(
        self,
        filename: str = None,
        delimiter: str = ",",
        index: bool = False,
    ) -> IO:
        """
        Save the result in a CSV.
        Arguments:
            filename : OPTIONAL : name of the file
            delimiter : OPTIONAL : delimiter of the CSV
            index : OPTIONAL : should the index be included in the CSV (default False)
        """
        if filename is None:
            filename = f"cjapy_{int(time.time())}.csv"
        self.dataframe.to_csv(filename, sep=delimiter, index=index)

    def to_json(self, filename: str = None, orient: str = "index") -> IO:
        """
        Save the result to JSON.
        Arguments:
            filename : OPTIONAL : name of the file
            orient : OPTIONAL : orientation of the JSON
        """
        if filename is None:
            filename = f"cjapy_{int(time.time())}.json"
        self.dataframe.to_json(filename, orient=orient)

    def breakdown(
        self,
        index: Union[int, str] = None,
        dimension: str = None,
        n_results: Union[int, str] = 10,
    ) -> object:
        """
        Breakdown a specific index or value of the dataframe, by another dimension.
        NOTE: breakdowns are possible only from the normal reportType.
        Returns a Workspace instance.
        Arguments:
            index : REQUIRED : value to use as filter for the breakdown or index of the dataframe to use for the breakdown.
            dimension : REQUIRED : dimension to report.
            n_results : OPTIONAL : number of results you want to have on your breakdown.
                Default 10, can use "inf" to return everything.
        """
        if index is None or dimension is None:
            raise ValueError("Require a value to use as breakdown and a dimension to request")
        breakdown_dimension = list(self.dataframe.columns)[1]
        if type(index) == str:
            row: pd.Series = self.dataframe[self.dataframe.iloc[:, 1] == index]
            itemValue: str = row["itemId"].values[0]
        elif type(index) == int:
            itemValue = self.dataframe.loc[index, "itemId"]
        breakdown = f"{breakdown_dimension}:::{itemValue}"
        new_request = RequestCreator(self.dataRequest.to_dict())
        new_request.setDimension(dimension)
        metrics = new_request.getMetrics()
        for metric in metrics:
            new_request.addMetricFilter(metricId=metric, filterId=breakdown)
        if n_results != "inf" and n_results < 20000:
            new_request.setLimit(n_results)
        report = self.analyticsObject.getReport2(
            new_request.to_dict(), n_results=n_results
        )
        return report
AdobeLibManual678
/AdobeLibManual678-4.3.tar.gz/AdobeLibManual678-4.3/aanalytics2/workspace.py
workspace.py
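`Workspace.breakdown` and `RequestCreator.addMetricFilter` communicate through a tiny string convention: a breakdown filter is encoded as `"dimension:::itemId"` and detected later by the `":::"` substring. That round trip is easy to demonstrate without the library; both helpers below are illustrative restatements, not library functions:

```python
def make_breakdown_filter(dimension: str, item_id: str) -> str:
    """Encode a breakdown filter the way Workspace.breakdown builds it
    before handing it to RequestCreator.addMetricFilter."""
    return f"{dimension}:::{item_id}"

def parse_breakdown_filter(filter_id: str) -> dict:
    """Mirror of addMetricFilter's ':::' branch: split the encoded string
    back into the dimension and itemId fields of the request filter."""
    dimension, item_id = filter_id.split(":::")
    return {"type": "breakdown", "dimension": dimension, "itemId": item_id}

encoded = make_breakdown_filter("variables/page", "1234567890")
decoded = parse_breakdown_filter(encoded)
```

Because `":::"` is the delimiter, the convention only works as long as neither the dimension name nor the itemId contains that sequence, which holds for Adobe Analytics identifiers.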
import json
import os
from pathlib import Path
from typing import Optional
import time

# Non standard libraries
from .config import config_object, header


def find_path(path: str) -> Optional[Path]:
    """Checks if the file denoted by the specified `path` exists and returns the Path object
    for the file.

    If the file under the `path` does not exist and the path denotes an absolute path, tries
    to find the file by converting the absolute path to a relative path.

    If the file does not exist with either the absolute or the relative path, returns `None`.
    """
    if Path(path).exists():
        return Path(path)
    elif path.startswith('/') and Path('.' + path).exists():
        return Path('.' + path)
    elif path.startswith('\\') and Path('.' + path).exists():
        return Path('.' + path)
    else:
        return None


def createConfigFile(destination: str = 'config_analytics_template.json', auth_type: str = "oauthV2", verbose: bool = False) -> None:
    """Creates a `config_admin.json` file with the pre-defined configuration format to store
    the access data in under the specified `destination`.

    Arguments:
        destination : OPTIONAL : the name of the file + path if you want
        auth_type : OPTIONAL : The type of Oauth you want to use for your config file.
            Possible values: "jwt" or "oauthV2"
    """
    json_data = {
        'org_id': '<orgID>',
        'client_id': "<APIkey>",
        'secret': "<YourSecret>",
    }
    if auth_type == 'oauthV2':
        json_data['scopes'] = "<scopes>"
    elif auth_type == 'jwt':
        json_data["tech_id"] = "<something>@techacct.adobe.com"
        json_data["pathToKey"] = "<path/to/your/privatekey.key>"
    if '.json' not in destination:
        destination += '.json'
    with open(destination, 'w') as cf:
        cf.write(json.dumps(json_data, indent=4))
    if verbose:
        print(f"file created at this location: {os.getcwd()}{os.sep}{destination}")


def importConfigFile(path: str = None, auth_type: str = None) -> None:
    """Reads the file denoted by the supplied `path` and retrieves the configuration
    information from it.

    Arguments:
        path : REQUIRED : path to the configuration file. Can be either fully-qualified or relative.
        auth_type : OPTIONAL : The type of Auth to be used by default. Detected if none is passed,
            OauthV2 takes precedence. Possible values: "jwt" or "oauthV2"

    Example of path value:
        "config.json"
        "./config.json"
        "/my-folder/config.json"
    """
    config_file_path: Optional[Path] = find_path(path)
    if config_file_path is None:
        raise FileNotFoundError(
            f"Unable to find the configuration file under path `{path}`."
        )
    with open(config_file_path, 'r') as file:
        provided_config = json.load(file)
    provided_keys = provided_config.keys()
    if 'api_key' in provided_keys:
        ## old naming for client_id
        client_id = provided_config['api_key']
    elif 'client_id' in provided_keys:
        client_id = provided_config['client_id']
    else:
        raise RuntimeError("Either an `api_key` or a `client_id` should be provided.")
    if auth_type is None:
        if 'scopes' in provided_keys:
            auth_type = 'oauthV2'
        elif 'tech_id' in provided_keys and "pathToKey" in provided_keys:
            auth_type = 'jwt'
    args = {
        "org_id": provided_config['org_id'],
        "secret": provided_config['secret'],
        "client_id": client_id,
    }
    if auth_type == 'oauthV2':
        args["scopes"] = provided_config["scopes"].replace(' ', '')
    if auth_type == 'jwt':
        args["tech_id"] = provided_config["tech_id"]
        args["path_to_key"] = provided_config["pathToKey"]
    configure(**args)


def configure(org_id: str = None,
              tech_id: str = None,
              secret: str = None,
              client_id: str = None,
              path_to_key: str = None,
              private_key: str = None,
              oauth: bool = False,
              token: str = None,
              scopes: str = None):
    """Performs programmatic configuration of the API using provided values.

    Arguments:
        org_id : REQUIRED : Organization ID
        tech_id : REQUIRED : Technical Account ID
        secret : REQUIRED : secret generated for your connection
        client_id : REQUIRED : The client_id (old api_key) provided by the JWT connection.
        path_to_key : REQUIRED : If you have a file containing your private key value.
        private_key : REQUIRED : If you do not use a file but pass a variable directly.
        oauth : OPTIONAL : If you wish to pass the token generated by oauth
        token : OPTIONAL : If oauth is set to True, you need to pass the token
        scopes : OPTIONAL : If you use Oauth, you need to pass the scopes
    """
    if not org_id:
        raise ValueError("`org_id` must be specified in the configuration.")
    if not client_id:
        raise ValueError("`client_id` must be specified in the configuration.")
    if not tech_id and oauth == False and not scopes:
        raise ValueError("`tech_id` must be specified in the configuration.")
    if not secret and oauth == False:
        raise ValueError("`secret` must be specified in the configuration.")
    if (not path_to_key and not private_key and oauth == False) and not scopes:
        raise ValueError(
            "scopes must be specified if Oauth setup.\n"
            "`pathToKey` or `private_key` must be specified in the configuration if JWT setup."
        )
    config_object["org_id"] = org_id
    config_object["client_id"] = client_id
    header["x-api-key"] = client_id
    config_object["tech_id"] = tech_id
    config_object["secret"] = secret
    config_object["pathToKey"] = path_to_key
    config_object["private_key"] = private_key
    config_object["scopes"] = scopes
    # ensure the reset of the state by overwriting possible values from previous import.
    config_object["date_limit"] = 0
    config_object["token"] = ""
    if oauth:
        date_limit = int(time.time()) + (22 * 60 * 60)
        config_object["date_limit"] = date_limit
        config_object["token"] = token
        header["Authorization"] = f"Bearer {token}"


def get_private_key_from_config(config: dict) -> str:
    """
    Returns the private key directly or reads a file to return the private key.
    """
    private_key = config.get('private_key')
    if private_key is not None:
        return private_key
    private_key_path = find_path(config['pathToKey'])
    if private_key_path is None:
        raise FileNotFoundError(f'Unable to find the private key under path `{config["pathToKey"]}`.')
    with open(Path(private_key_path), 'r') as f:
        private_key = f.read()
    return private_key


def generateLoggingObject(
    level: str = "WARNING",
    stream: bool = True,
    file: bool = False,
    filename: str = "aanalytics2.log",
    format: str = "%(asctime)s::%(name)s::%(funcName)s::%(levelname)s::%(message)s::%(lineno)d"
) -> dict:
    """
    Generates a dictionary for the logging object with basic configuration.
    You can find the information for the different possible values on the logging documentation.
        https://docs.python.org/3/library/logging.html
    Arguments:
        level : Level of the logger to display information (NOTSET, DEBUG, INFO, WARNING, ERROR, CRITICAL)
        stream : If the logger should display print statements
        file : If the logger should write the messages to a file
        filename : name of the file where logs are written
        format : format of the logs to be written.
    """
    myObject = {
        "level": level,
        "stream": stream,
        "file": file,
        "format": format,
        "filename": filename,
    }
    return myObject
# Package: AdobeLibManual678 — AdobeLibManual678-4.3.tar.gz/AdobeLibManual678-4.3/aanalytics2/configs.py
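The `Login` and `Analytics` classes in the module below only enable logging when the supplied `loggingObject` dictionary carries exactly the five keys produced by `generateLoggingObject` above. A stdlib-only sketch of that handshake (the helper name `make_logging_object` is hypothetical, mirroring `generateLoggingObject`):

```python
import logging

def make_logging_object(level="WARNING", stream=True, file=False,
                        filename="aanalytics2.log",
                        fmt="%(asctime)s::%(levelname)s::%(message)s"):
    # Mirrors the return shape of generateLoggingObject in configs.py.
    return {"level": level, "stream": stream, "file": file,
            "format": fmt, "filename": filename}

obj = make_logging_object(level="DEBUG")

# Same key check the Login / Analytics constructors perform before
# wiring up handlers:
expected = sorted(["level", "stream", "format", "filename", "file"])
logging_enabled = sorted(obj.keys()) == expected
assert logging_enabled

logger = logging.getLogger("aanalytics2.sketch")
logger.setLevel(obj["level"])
if obj["stream"]:
    handler = logging.StreamHandler()
    handler.setFormatter(logging.Formatter(obj["format"]))
    logger.addHandler(handler)
```

Because the check compares the full sorted key list, passing a dictionary with extra or missing keys silently leaves logging disabled rather than raising.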
import json
import os
import re
import time
import datetime
import logging
from concurrent import futures
from copy import deepcopy
from pathlib import Path
from typing import IO, Union, List
from collections import defaultdict
from itertools import tee
from urllib import parse

# Non standard libraries
import pandas as pd

from aanalytics2 import config, connector, token_provider
from .projects import *
from .requestCreator import RequestCreator
from .workspace import Workspace

JsonOrDataFrameType = Union[pd.DataFrame, dict]
JsonListOrDataFrameType = Union[pd.DataFrame, List[dict]]


def retrieveToken(verbose: bool = False, save: bool = False, **kwargs) -> str:
    """LEGACY : retrieves a token directly following the importConfigFile or configure method."""
    token_with_expiry = token_provider.get_jwt_token_and_expiry_for_config(
        config.config_object, **kwargs)
    token = token_with_expiry['token']
    config.config_object['token'] = token
    config.config_object['date_limit'] = time.time() + token_with_expiry['expiry'] / 1000 - 500
    config.header.update({'Authorization': f'Bearer {token}'})
    if verbose:
        print(f"token valid till : {time.ctime(time.time() + token_with_expiry['expiry'] / 1000)}")
    return token


class Login:
    """Class to connect to the login company."""
    loggingEnabled = False
    logger = None

    def __init__(self, config: dict = config.config_object, header: dict = config.header,
                 retry: int = 0, loggingObject: dict = None) -> None:
        """Instantiate the Login class.
        Arguments:
            config : REQUIRED : dictionary with your configuration information.
            header : REQUIRED : dictionary of your header.
            retry : OPTIONAL : number of times to retry failed calls.
            loggingObject : OPTIONAL : logging object to set logging capability for your actions.
        """
        if loggingObject is not None and sorted(["level", "stream", "format", "filename", "file"]) == sorted(list(loggingObject.keys())):
            self.loggingEnabled = True
            self.logger = logging.getLogger(f"{__name__}.login")
            self.logger.setLevel(loggingObject["level"])
            if type(loggingObject["format"]) == str:
                formatter = logging.Formatter(loggingObject["format"])
            elif type(loggingObject["format"]) == logging.Formatter:
                formatter = loggingObject["format"]
            if loggingObject["file"]:
                fileHandler = logging.FileHandler(loggingObject["filename"])
                fileHandler.setFormatter(formatter)
                self.logger.addHandler(fileHandler)
            if loggingObject["stream"]:
                streamHandler = logging.StreamHandler()
                streamHandler.setFormatter(formatter)
                self.logger.addHandler(streamHandler)
        self.connector = connector.AdobeRequest(
            config_object=config, header=header, retry=retry,
            loggingEnabled=self.loggingEnabled, logger=self.logger)
        self.header = self.connector.header
        self.COMPANY_IDS = {}
        self.retry = retry

    def getCompanyId(self, verbose: bool = False) -> dict:
        """Retrieve the company ids for later calls for the properties."""
        if self.loggingEnabled:
            self.logger.debug("getCompanyId start")
        res = self.connector.getData(
            "https://analytics.adobe.io/discovery/me", headers=self.header)
        json_res = res
        if self.loggingEnabled:
            self.logger.debug(f"getCompanyId response: {json_res}")
        try:
            companies = json_res['imsOrgs'][0]['companies']
            self.COMPANY_IDS = json_res['imsOrgs'][0]['companies']
            return companies
        except Exception:
            if verbose:
                print("exception when trying to get companies with parameter 'all'")
                print(json_res)
            if self.loggingEnabled:
                self.logger.error(f"Error trying to get companyId: {json_res}")
            return None

    def createAnalyticsConnection(self, companyId: str = None, loggingObject: dict = None) -> object:
        """Returns an instance of the Analytics class so you can query the different
        elements from that instance.
        Arguments:
            companyId : REQUIRED : the globalCompanyId that you want to use in your connection.
            loggingObject : OPTIONAL : logging object to set logging capability for your actions.
        The retry parameter set in the Login class instantiation will be used here.
        """
        analytics = Analytics(company_id=companyId,
                              config_object=self.connector.config,
                              header=self.header, retry=self.retry,
                              loggingObject=loggingObject)
        return analytics


class Analytics:
    """Class that instantiates a connection to a single login company."""
    # Endpoints
    header = {"Accept": "application/json",
              "Content-Type": "application/json",
              "Authorization": "Bearer ",
              "X-Api-Key": ""
              }
    _endpoint = 'https://analytics.adobe.io/api'
    _getRS = '/collections/suites'
    _getDimensions = '/dimensions'
    _getMetrics = '/metrics'
    _getSegments = '/segments'
    _getCalcMetrics = '/calculatedmetrics'
    _getDateRanges = '/dateranges'
    _getReport = '/reports'
    loggingEnabled = False
    logger = None

    def __init__(self, company_id: str = None, config_object: dict = config.config_object,
                 header: dict = config.header, retry: int = 0, loggingObject: dict = None):
        """Instantiate the Analytics class.
        The Analytics class will be automatically connected to the API 2.0.
        You can review the connection details by looking into the connector instance.
        "header", "company_id" and "endpoint_company" are attributes accessible for debugging.
        Arguments:
            company_id : REQUIRED : company ID retrieved by getCompanyId.
            retry : OPTIONAL : number of times to retry failed calls.
            loggingObject : OPTIONAL : logging object to log actions during runtime.
            config_object : OPTIONAL : config object used for setting the token (do not update if you are unsure).
            header : OPTIONAL : template header used for all requests (do not update if you are unsure!).
""" if company_id is None: raise AttributeError( 'Expected "company_id" to be referenced.\nPlease ensure you pass the globalCompanyId when instantiating this class.') if loggingObject is not None and sorted(["level","stream","format","filename","file"]) == sorted(list(loggingObject.keys())): self.loggingEnabled = True self.logger = logging.getLogger(f"{__name__}.analytics") self.logger.setLevel(loggingObject["level"]) if type(loggingObject["format"]) == str: formatter = logging.Formatter(loggingObject["format"]) elif type(loggingObject["format"]) == logging.Formatter: formatter = loggingObject["format"] if loggingObject["file"]: fileHandler = logging.FileHandler(loggingObject["filename"]) fileHandler.setFormatter(formatter) self.logger.addHandler(fileHandler) if loggingObject["stream"]: streamHandler = logging.StreamHandler() streamHandler.setFormatter(formatter) self.logger.addHandler(streamHandler) self.connector = connector.AdobeRequest( config_object=config_object, header=header, retry=retry,loggingEnabled=self.loggingEnabled,logger=self.logger) self.header = self.connector.header self.connector.header['x-proxy-global-company-id'] = company_id self.header['x-proxy-global-company-id'] = company_id self.endpoint_company = f"{self._endpoint}/{company_id}" self.company_id = company_id self.listProjectIds = [] self.projectsDetails = {} self.segments = [] self.calculatedMetrics = [] try: import importlib.resources as pkg_resources pathLOGS = pkg_resources.path( "aanalytics2", "eventType_usageLogs.pickle") except ImportError: try: # Try backported to PY<37 `importlib_resources`. 
import pkg_resources pathLOGS = pkg_resources.resource_filename( "aanalytics2", "eventType_usageLogs.pickle") except: print('Empty LOGS_EVENT_TYPE attribute') try: with pathLOGS as f: self.LOGS_EVENT_TYPE = pd.read_pickle(f) except: self.LOGS_EVENT_TYPE = "no data" def __str__(self)->str: obj = { "endpoint" : self.endpoint_company, "companyId" : self.company_id, "header" : self.header, "token" : self.connector.config['token'] } return json.dumps(obj,indent=4) def __repr__(self)->str: obj = { "endpoint" : self.endpoint_company, "companyId" : self.company_id, "header" : self.header, "token" : self.connector.config['token'] } return json.dumps(obj,indent=4) def refreshToken(self, token: str = None): if token is None: raise AttributeError( 'Expected "token" to be referenced.\nPlease ensure you pass the token.') self.header['Authorization'] = "Bearer " + token def decodeAArequests(self,file:IO=None,urls:Union[list,str]=None,save:bool=False,**kwargs)->pd.DataFrame: """ Takes any of the parameter to load adobe url and decompose the requests into a dataframe, that you can save if you want. Arguments: file : OPTIONAL : file referencing the different requests saved (excel, or txt) urls : OPTIONAL : list of requests (or a single request) that you want to decode. save : OPTIONAL : parameter to save your decode list into a csv file. Returns a dataframe. 
        Possible kwargs:
            encoding : the type of encoding used to decode the file.
        """
        if self.loggingEnabled:
            self.logger.debug(f"Starting decodeAArequests")
        if file is None and urls is None:
            raise ValueError("Require at least file or urls to contain data")
        if file is not None:
            if '.txt' in file:
                with open(file, 'r', encoding=kwargs.get('encoding', 'utf-8')) as f:
                    urls = f.readlines()  # passing decoding to urls
            elif '.xlsx' in file:
                temp_df = pd.read_excel(file, header=None)
                urls = list(temp_df[0])  # passing decoding to urls
        if urls is not None:
            if type(urls) == str:
                data = parse.parse_qsl(urls)
                df = pd.DataFrame(data)
                df.columns = ['index', 'request']
                df.set_index('index', inplace=True)
                if save:
                    df.to_csv(f'request_{int(time.time())}.csv')
                return df
            elif type(urls) == list:  # decoding list of strings
                tmp_list = [parse.parse_qsl(data) for data in urls]
                tmp_dfs = [pd.DataFrame(data) for data in tmp_list]
                tmp_dfs2 = []
                for df, index in zip(tmp_dfs, range(len(tmp_dfs))):
                    df.columns = ['index', f"request {index + 1}"]
                    # cleanup timestamp from the request url
                    string = df.iloc[0, 0]
                    df.iloc[0, 0] = re.search('http.*://(.+?)/s[0-9]+.*', string).group(1)  # tracking server
                    df.set_index('index', inplace=True)
                    new_df = df
                    tmp_dfs2.append(new_df)
                df_full = pd.concat(tmp_dfs2, axis=1)
                if save:
                    df_full.to_csv(f'requests_{int(time.time())}.csv')
                return df_full

    def getReportSuites(self, txt: str = None, rsid_list: str = None, limit: int = 100,
                        extended_info: bool = False, save: bool = False) -> list:
        """Get the reportSuite IDs data. Returns a dataframe of reportSuite names
        and report suite ids.
        Arguments:
            txt : OPTIONAL : returns the reportSuites that match a specific text field.
            rsid_list : OPTIONAL : returns the reportSuites that match the list of rsids set.
            limit : OPTIONAL : how many reportSuites to retrieve per server call.
            save : OPTIONAL : if set to True, saves the list in a file.
                (default False)
        """
        if self.loggingEnabled:
            self.logger.debug(f"Starting getReportSuites")
        nb_error, nb_empty = 0, 0  # used for the multi-thread loop
        params = {}
        params.update({'limit': str(limit)})
        params.update({'page': '0'})
        if txt is not None:
            params.update({'rsidContains': str(txt)})
        if rsid_list is not None:
            params.update({'rsids': str(rsid_list)})
        params.update(
            {"expansion": "name,parentRsid,currency,calendarType,timezoneZoneinfo"})
        if self.loggingEnabled:
            self.logger.debug(f"parameters : {params}")
        rsids = self.connector.getData(self.endpoint_company + self._getRS,
                                       params=params, headers=self.header)
        content = rsids['content']
        if not extended_info:
            list_content = [{'name': item['name'], 'rsid': item['rsid']} for item in content]
            df_rsids = pd.DataFrame(list_content)
        else:
            df_rsids = pd.DataFrame(content)
        total_page = rsids['totalPages']
        last_page = rsids['lastPage']
        if not last_page:  # if last_page is False
            callsToMake = total_page
            list_params = [{**params, 'page': page} for page in range(1, callsToMake)]
            list_urls = [self.endpoint_company + self._getRS for x in range(1, callsToMake)]
            listheaders = [self.header for x in range(1, callsToMake)]
            workers = min(10, total_page)
            with futures.ThreadPoolExecutor(workers) as executor:
                res = executor.map(lambda x, y, z: self.connector.getData(
                    x, y, headers=z), list_urls, list_params, listheaders)
            res = list(res)
            list_data = [val for sublist in [r['content'] for r in res if 'content' in r.keys()]
                         for val in sublist]
            nb_error = sum(1 for elem in res if 'error_code' in elem.keys())
            nb_empty = sum(1 for elem in res
                           if 'content' in elem.keys() and len(elem['content']) == 0)
            if not extended_info:
                list_append = [{'name': item['name'], 'rsid': item['rsid']} for item in list_data]
                df_append = pd.DataFrame(list_append)
            else:
                df_append = pd.DataFrame(list_data)
            # DataFrame.append was removed in pandas 2.0; pd.concat is the equivalent.
            df_rsids = pd.concat([df_rsids, df_append], ignore_index=True)
        if save:
            if self.loggingEnabled:
                self.logger.debug(f"saving rsids : {params}")
            df_rsids.to_csv('RSIDS.csv', sep='\t')
        if nb_error > 0 or nb_empty > 0:
            message = (f'WARNING : Retrieved data are partial.\n'
                       f'{nb_error}/{len(list_urls) + 1} requests returned an error.\n'
                       f'{nb_empty}/{len(list_urls)} requests returned an empty response.\n'
                       f'Try to use a filter to retrieve reportSuites or increase the limit per request.')
            print(message)
            if self.loggingEnabled:
                self.logger.warning(message)
        return df_rsids

    def getVirtualReportSuites(self, extended_info: bool = False, limit: int = 100,
                               filterIds: str = None, idContains: str = None,
                               segmentIds: str = None, save: bool = False) -> list:
        """Returns a list of virtual reportSuites and their ids. It can contain
        more information if expansion is selected.
        Arguments:
            extended_info : OPTIONAL : boolean to retrieve the maximum of information.
            limit : OPTIONAL : how many reportSuites to retrieve per server call.
            filterIds : OPTIONAL : comma-delimited list of virtual reportSuite IDs to be retrieved.
            idContains : OPTIONAL : element that should be contained in the virtual reportSuite id.
            segmentIds : OPTIONAL : comma-delimited list of segmentIds contained in the VRSID.
            save : OPTIONAL : if set to True, saves the list in a file.
                (default False)
        """
        if self.loggingEnabled:
            self.logger.debug(f"Starting getVirtualReportSuites")
        expansion_values = "globalCompanyKey,parentRsid,parentRsidName,timezone,timezoneZoneinfo,currentTimezoneOffset,segmentList,description,modified,isDeleted,dataCurrentAsOf,compatibility,dataSchema,sessionDefinition,curatedComponents,type"
        params = {"limit": limit}
        nb_error = 0
        nb_empty = 0
        list_urls = []
        if extended_info:
            params['expansion'] = expansion_values
        if filterIds is not None:
            params['filterByIds'] = filterIds
        if idContains is not None:
            params['idContains'] = idContains
        if segmentIds is not None:
            params['segmentIds'] = segmentIds
        path = f"{self.endpoint_company}/reportsuites/virtualreportsuites"
        if self.loggingEnabled:
            self.logger.debug(f"params: {params}")
        vrsid = self.connector.getData(path, params=params, headers=self.header)
        content = vrsid['content']
        if not extended_info:
            list_content = [{'name': item['name'], 'vrsid': item['id']} for item in content]
            df_vrsids = pd.DataFrame(list_content)
        else:
            df_vrsids = pd.DataFrame(content)
        total_page = vrsid['totalPages']
        last_page = vrsid['lastPage']
        if not last_page:  # if last_page is False
            callsToMake = total_page
            list_params = [{**params, 'page': page} for page in range(1, callsToMake)]
            list_urls = [path for x in range(1, callsToMake)]
            listheaders = [self.header for x in range(1, callsToMake)]
            workers = min(10, total_page)
            with futures.ThreadPoolExecutor(workers) as executor:
                res = executor.map(lambda x, y, z: self.connector.getData(
                    x, y, headers=z), list_urls, list_params, listheaders)
            res = list(res)
            list_data = [val for sublist in [r['content'] for r in res if 'content' in r.keys()]
                         for val in sublist]
            nb_error = sum(1 for elem in res if 'error_code' in elem.keys())
            nb_empty = sum(1 for elem in res
                           if 'content' in elem.keys() and len(elem['content']) == 0)
            if not extended_info:
                list_append = [{'name': item['name'], 'vrsid': item['id']} for item in list_data]
                df_append = pd.DataFrame(list_append)
            else:
                df_append = pd.DataFrame(list_data)
            # DataFrame.append was removed in pandas 2.0; pd.concat is the equivalent.
            df_vrsids = pd.concat([df_vrsids, df_append], ignore_index=True)
        if save:
            df_vrsids.to_csv('VRSIDS.csv', sep='\t')
        if nb_error > 0 or nb_empty > 0:
            message = (f'WARNING : Retrieved data are partial.\n'
                       f'{nb_error}/{len(list_urls) + 1} requests returned an error.\n'
                       f'{nb_empty}/{len(list_urls)} requests returned an empty response.\n'
                       f'Try to use a filter to retrieve reportSuites or increase the limit per request.')
            print(message)
            if self.loggingEnabled:
                self.logger.warning(message)
        return df_vrsids

    def getVirtualReportSuite(self, vrsid: str = None, extended_info: bool = False,
                              format: str = 'df') -> JsonOrDataFrameType:
        """Returns a single virtual reportSuite's information as a dataframe.
        Arguments:
            vrsid : REQUIRED : the virtual reportSuite to be retrieved.
            extended_info : OPTIONAL : boolean to add more information.
            format : OPTIONAL : format of the output. 2 values: "df" for dataframe and "raw" for raw json.
        """
        if vrsid is None:
            raise Exception("require a Virtual ReportSuite ID")
        if self.loggingEnabled:
            self.logger.debug(f"Starting getVirtualReportSuite for {vrsid}")
        expansion_values = "globalCompanyKey,parentRsid,parentRsidName,timezone,timezoneZoneinfo,currentTimezoneOffset,segmentList,description,modified,isDeleted,dataCurrentAsOf,compatibility,dataSchema,sessionDefinition,curatedComponents,type"
        params = {}
        if extended_info:
            params['expansion'] = expansion_values
        path = f"{self.endpoint_company}/reportsuites/virtualreportsuites/{vrsid}"
        data = self.connector.getData(path, params=params, headers=self.header)
        if format == "df":
            data = pd.DataFrame({vrsid: data})
        return data

    def getVirtualReportSuiteComponents(self, vrsid: str = None, nan_value=""):
        """Uses the getVirtualReportSuite method to get a VRS and returns the
        components of that VRS as a dataframe. The VRS must have Component Curation enabled.
        Arguments:
            vrsid : REQUIRED : Virtual Report Suite ID
            nan_value : OPTIONAL : how to handle empty cells. (default "")
        """
        if self.loggingEnabled:
            self.logger.debug(f"Starting getVirtualReportSuiteComponents")
        vrs_data = self.getVirtualReportSuite(extended_info=True, vrsid=vrsid)
        if "curatedComponents" not in vrs_data.index:
            return pd.DataFrame()
        components_cell = vrs_data[vrs_data.index == "curatedComponents"].iloc[0, 0]
        return pd.DataFrame(components_cell).fillna(value=nan_value)

    def createVirtualReportSuite(self, name: str = None, parentRsid: str = None,
                                 segmentList: list = None, dataSchema: str = "Cache",
                                 data_dict: dict = None, **kwargs) -> dict:
        """Creates a new virtual reportSuite based on the information provided.
        Arguments:
            name : REQUIRED : name of the virtual reportSuite.
            parentRsid : REQUIRED : parent reportSuite ID for the VRS.
            segmentList : REQUIRED : list of segment ids to be applied on the reportSuite.
            dataSchema : REQUIRED : type of schema used for the VRSID. (default "Cache")
            data_dict : OPTIONAL : you can pass the dictionary directly.
        """
        if self.loggingEnabled:
            self.logger.debug(f"Starting createVirtualReportSuite")
        path = f"{self.endpoint_company}/reportsuites/virtualreportsuites"
        expansion_values = "globalCompanyKey,parentRsid,parentRsidName,timezone,timezoneZoneinfo,currentTimezoneOffset,segmentList,description,modified,isDeleted,dataCurrentAsOf,compatibility,dataSchema,sessionDefinition,curatedComponents,type"
        params = {'expansion': expansion_values}
        if data_dict is None:
            body = {
                "name": name,
                "parentRsid": parentRsid,
                "segmentList": segmentList,
                "dataSchema": dataSchema,
                "description": kwargs.get('description', '')
            }
        else:
            if 'name' not in data_dict.keys() or 'parentRsid' not in data_dict.keys() \
                    or 'segmentList' not in data_dict.keys() or 'dataSchema' not in data_dict.keys():
                if self.loggingEnabled:
                    self.logger.error("Missing one or more fundamental keys : name, parentRsid, segmentList, dataSchema")
                raise Exception("Missing one or more fundamental keys : name, parentRsid, segmentList, dataSchema")
            body = data_dict
        res = self.connector.postData(path, params=params, data=body, headers=self.header)
        return res

    def updateVirtualReportSuite(self, vrsid: str = None, data_dict: dict = None, **kwargs) -> dict:
        """Updates a virtual reportSuite based on a JSON-like dictionary (same
        structure as createVirtualReportSuite).
        Note that to update components, you need to supply ALL components currently
        associated with this suite. Supplying only the components you want to change
        will remove all others from the VR Suite!
        Arguments:
            vrsid : REQUIRED : the id of the virtual reportSuite to update.
            data_dict : REQUIRED : a JSON-like dictionary of the VRS data to update.
        """
        if vrsid is None:
            raise Exception("require a virtual reportSuite ID")
        if self.loggingEnabled:
            self.logger.debug(f"Starting updateVirtualReportSuite for {vrsid}")
        path = f"{self.endpoint_company}/reportsuites/virtualreportsuites/{vrsid}"
        body = data_dict
        res = self.connector.putData(path, data=body, headers=self.header)
        if self.loggingEnabled:
            self.logger.debug(f"updateVirtualReportSuite response : {res}")
        return res

    def deleteVirtualReportSuite(self, vrsid: str = None) -> str:
        """Deletes a virtual reportSuite based on the id passed.
        Arguments:
            vrsid : REQUIRED : the id of the virtual reportSuite to delete.
        """
        if vrsid is None:
            raise Exception("require a Virtual ReportSuite ID")
        if self.loggingEnabled:
            self.logger.debug(f"Starting deleteVirtualReportSuite for {vrsid}")
        path = f"{self.endpoint_company}/reportsuites/virtualreportsuites/{vrsid}"
        res = self.connector.deleteData(path, headers=self.header)
        if self.loggingEnabled:
            self.logger.debug(f"deleteVirtualReportSuite {vrsid} response : {res}")
        return res

    def validateVirtualReportSuite(self, name: str = None, parentRsid: str = None,
                                   segmentList: list = None, dataSchema: str = "Cache",
                                   data_dict: dict = None, **kwargs) -> dict:
        """Validates the object used to create a new virtual reportSuite based on
        the information provided.
        Arguments:
            name : REQUIRED : name of the virtual reportSuite.
            parentRsid : REQUIRED : parent reportSuite ID for the VRS.
            segmentList : REQUIRED : list of segment ids to be applied on the reportSuite.
            dataSchema : REQUIRED : type of schema used for the VRSID. (default "Cache")
            data_dict : OPTIONAL : you can pass the dictionary directly.
""" if self.loggingEnabled: self.logger.debug(f"Starting validateVirtualReportSuite") path = f"{self.endpoint_company}/reportsuites/virtualreportsuites/validate" expansion_values = "globalCompanyKey, parentRsid, parentRsidName, timezone, timezoneZoneinfo, currentTimezoneOffset, segmentList, description, modified, isDeleted, dataCurrentAsOf, compatibility, dataSchema, sessionDefinition, curatedComponents, type" if data_dict is None: body = { "name": name, "parentRsid": parentRsid, "segmentList": segmentList, "dataSchema": dataSchema, "description": kwargs.get('description', '') } else: if 'name' not in data_dict.keys() or 'parentRsid' not in data_dict.keys() or 'segmentList' not in data_dict.keys() or 'dataSchema' not in data_dict.keys(): raise Exception( "Missing one or more fundamental keys : name, parentRsid, segmentList, dataSchema") body = data_dict res = self.connector.postData(path, data=body, headers=self.header) if self.loggingEnabled: self.logger.debug(f"validateVirtualReportSuite response : {res}") return res def getDimensions(self, rsid: str, tags: bool = False, description:bool=False, save=False, **kwargs) -> pd.DataFrame: """ Retrieve the list of dimensions from a specific reportSuite. Shrink columns to simplify output. Returns the data frame of available dimensions. Arguments: rsid : REQUIRED : Report Suite ID from which you want the dimensions tags : OPTIONAL : If you would like to have additional information, such as tags. (bool : default False) description : OPTIONAL : Trying to add the description column. It may break the method. 
save : OPTIONAL : If set to True, it will save the info in a csv file (bool : default False) Possible kwargs: full : Boolean : Doesn't shrink the number of columns if set to true example : getDimensions(rsid,full=True) """ if self.loggingEnabled: self.logger.debug(f"Starting getDimensions") params = {} if tags: params.update({'expansion': 'tags'}) params.update({'rsid': rsid}) dims = self.connector.getData(self.endpoint_company + self._getDimensions, params=params, headers=self.header) df_dims = pd.DataFrame(dims) columns = ['id', 'name', 'category', 'type', 'parent', 'pathable'] if description: columns.append('description') if kwargs.get('full', False): new_cols = pd.DataFrame(df_dims.support.values.tolist(), columns=['support_oberon', 'support_dw']) # extract list in column new_df = df_dims.merge(new_cols, right_index=True, left_index=True) new_df.drop(['reportable', 'support'], axis=1, inplace=True) df_dims = new_df else: df_dims = df_dims[columns] if save: df_dims.to_csv(f'dimensions_{rsid}.csv') return df_dims def getMetrics(self, rsid: str, tags: bool = False, save=False, description:bool=False, dataGroup:bool=False, **kwargs) -> pd.DataFrame: """ Retrieve the list of metrics from a specific reportSuite. Shrink columns to simplify output. Returns the data frame of available metrics. Arguments: rsid : REQUIRED : Report Suite ID from which you want the dimensions (str) tags : OPTIONAL : If you would like to have additional information, such as tags.(bool : default False) dataGroup : OPTIONAL : Adding dataGroups to the column exported. Default False. May break the report. save : OPTIONAL : If set to True, it will save the info in a csv file (bool : default False) Possible kwargs: full : Boolean : Doesn't shrink the number of columns if set to true. 
""" if self.loggingEnabled: self.logger.debug(f"Starting getMetrics") params = {} if tags: params.update({'expansion': 'tags'}) params.update({'rsid': rsid}) metrics = self.connector.getData(self.endpoint_company + self._getMetrics, params=params, headers=self.header) df_metrics = pd.DataFrame(metrics) columns = ['id', 'name', 'category', 'type', 'precision', 'segmentable'] if dataGroup: columns.append('dataGroup') if description: columns.append('description') if kwargs.get('full', False): new_cols = pd.DataFrame(df_metrics.support.values.tolist(), columns=[ 'support_oberon', 'support_dw']) new_df = df_metrics.merge( new_cols, right_index=True, left_index=True) new_df.drop('support', axis=1, inplace=True) df_metrics = new_df else: df_metrics = df_metrics[columns] if save: df_metrics.to_csv(f'metrics_{rsid}.csv', sep='\t') return df_metrics def getUsers(self, save: bool = False, **kwargs) -> pd.DataFrame: """ Retrieve the list of users for a login company.Returns a data frame. Arguments: save : OPTIONAL : Save the data in a file (bool : default False). Possible kwargs: limit : Nummber of results per requests. Default 100. 
expansion : string list such as "lastAccess,createDate" """ if self.loggingEnabled: self.logger.debug(f"Starting getUsers") list_urls = [] nb_error, nb_empty = 0, 0 # use for multi-thread loop params = {'limit': kwargs.get('limit', 100)} if kwargs.get("expansion", None) is not None: params["expansion"] = kwargs.get("expansion", None) path = "/users" users = self.connector.getData(self.endpoint_company + path, params=params, headers=self.header) data = users['content'] lastPage = users['lastPage'] if not lastPage: # check if lastpage is inversed of False callsToMake = users['totalPages'] list_params = [{'limit': params['limit'], 'page': page} for page in range(1, callsToMake)] list_urls = [self.endpoint_company + "/users" for x in range(1, callsToMake)] listheaders = [self.header for x in range(1, callsToMake)] workers = min(10, len(list_params)) with futures.ThreadPoolExecutor(workers) as executor: res = executor.map(lambda x, y, z: self.connector.getData(x, y, headers=z), list_urls, list_params, listheaders) res = list(res) users_lists = [elem['content'] for elem in res if 'content' in elem.keys()] nb_error = sum(1 for elem in res if 'error_code' in elem.keys()) nb_empty = sum(1 for elem in res if 'content' in elem.keys() and len(elem['content']) == 0) append_data = [val for sublist in [data for data in users_lists] for val in sublist] # flatten list of list data = data + append_data df_users = pd.DataFrame(data) columns = ['email', 'login', 'fullName', 'firstName', 'lastName', 'admin', 'loginId', 'imsUserId', 'login', 'createDate', 'lastAccess', 'title', 'disabled', 'phoneNumber', 'companyid'] df_users = df_users[columns] df_users['createDate'] = pd.to_datetime(df_users['createDate']) df_users['lastAccess'] = pd.to_datetime(df_users['lastAccess']) if save: df_users.to_csv(f'users_{int(time.time())}.csv', sep='\t') if nb_error > 0 or nb_empty > 0: print( f'WARNING : Retrieved data are partial.\n{nb_error}/{len(list_urls) + 1} requests returned an 
error.\n{nb_empty}/{len(list_urls)} requests returned an empty response. \nTry to use filter to retrieve users or increase limit') return df_users def getUserMe(self,loginId:str=None)->dict: """ Retrieve a single user based on its loginId Argument: loginId : REQUIRED : Login ID for the user """ path = f"/users/me" res = self.connector.getData(self.endpoint_company + path) return res def getSegments(self, name: str = None, tagNames: str = None, inclType: str = 'all', rsids_list: list = None, sidFilter: list = None, extended_info: bool = False, format: str = "df", save: bool = False, verbose: bool = False, **kwargs) -> JsonListOrDataFrameType: """ Retrieve the list of segments. Returns a data frame. Arguments: name : OPTIONAL : Filter to only include segments that contains the name (str) tagNames : OPTIONAL : Filter list to only include segments that contains one of the tags (string delimited with comma, can be list as well) inclType : OPTIONAL : type of segments to be retrieved.(str) Possible values: - all : Default value (all segments possibles) - shared : shared segments - template : template segments - deleted : deleted segments - internal : internal segments - curatedItem : curated segments rsid_list : OPTIONAL : Filter list to only include segments tied to specified RSID list (list) sidFilter : OPTIONAL : Filter list to only include segments in the specified list (list) extended_info : OPTIONAL : additional segment metadata fields to include on response (bool : default False) if set to true, returns reportSuiteName, ownerFullName, modified, tags, compatibility, definition format : OPTIONAL : defined the format returned by the query. (Default df) possibe values : "df" : default value that return a dataframe "raw": return a list of value. More or less what is return from server. 
save : OPTIONAL : If set to True, it will save the info in a csv file (bool : default False) verbose : OPTIONAL : If set to True, print some information Possible kwargs: limit : number of segments retrieved by request. default 500: Limited to 1000 by the AnalyticsAPI. NOTE : Segment Endpoint doesn't support multi-threading. Default to 500. """ if self.loggingEnabled: self.logger.debug(f"Starting getSegments") limit = int(kwargs.get('limit', 500)) params = {'includeType': 'all', 'limit': limit} if extended_info: params.update( {'expansion': 'reportSuiteName,ownerFullName,created,modified,tags,compatibility,definition,shares'}) if name is not None: params.update({'name': str(name)}) if tagNames is not None: if type(tagNames) == list: tagNames = ','.join(tagNames) params.update({'tagNames': tagNames}) if inclType != 'all': params['includeType'] = inclType if rsids_list is not None: if type(rsids_list) == list: rsids_list = ','.join(rsids_list) params.update({'rsids': rsids_list}) if sidFilter is not None: if type(sidFilter) == list: sidFilter = ','.join(sidFilter) params.update({'segmentFilter': sidFilter}) # filter by segment IDs; using 'rsids' here would overwrite the RSID filter data = [] lastPage = False page_nb = 0 if verbose: print("Starting requesting segments") while not lastPage: params['page'] = page_nb segs = self.connector.getData(self.endpoint_company + self._getSegments, params=params, headers=self.header) data += segs['content'] lastPage = segs['lastPage'] page_nb += 1 if verbose and page_nb % 10 == 0: print(f"request #{page_nb}") if format == "df": segments = pd.DataFrame(data) else: segments = data if save and format == "df": segments.to_csv(f'segments_{int(time.time())}.csv', sep='\t') if verbose: print( f'Saving data in file : {os.getcwd()}{os.sep}segments_{int(time.time())}.csv') elif save and format == "raw": with open(f"segments_{int(time.time())}.csv","w") as f: f.write(json.dumps(segments,indent=4)) return segments def getSegment(self, segment_id: str = None,full:bool=False, *args) -> dict: """ Get a specific segment from 
the ID. Returns the object of the segment. Arguments: segment_id : REQUIRED : the segment id to retrieve. full : OPTIONAL : Add all possible options Possible args: - "reportSuiteName" : string : to retrieve reportSuite attached to the segment - "ownerFullName" : string : to retrieve ownerFullName attached to the segment - "modified" : string : to retrieve when segment was modified - "tags" : string : to retrieve tags attached to the segment - "compatibility" : string : to retrieve which tool is compatible - "definition" : string : definition of the segment - "publishingStatus" : string : status for the segment - "definitionLastModified" : string : last definition of the segment - "categories" : string : categories of the segment """ ValidArgs = ["reportSuiteName", "ownerFullName", "modified", "tags", "compatibility", "definition", "publishingStatus", "definitionLastModified", "categories"] if segment_id is None: raise Exception("Expected a segment id") if self.loggingEnabled: self.logger.debug(f"Starting getSegment for {segment_id}") path = f"/segments/{segment_id}" args = [element for element in args if element in ValidArgs] # build a filtered list; args is a tuple, and removing while iterating skips elements params = {'expansion': ','.join(args)} if full: params = {'expansion': ','.join(ValidArgs)} res = self.connector.getData(self.endpoint_company + path, params=params, headers=self.header) return res def scanSegment(self,segment:Union[str,dict],verbose:bool=False)->dict: """ Return the dimensions, metrics and reportSuite used and the main scope of the segment. Arguments: segment : REQUIRED : either the ID of the segment or the full definition. verbose : OPTIONAL : print some comment. 
""" if self.loggingEnabled: self.logger.debug(f"Starting scanSegment") if type(segment) == str: if verbose: print('retrieving segment definition') defSegment = self.getSegment(segment,full=True) elif type(segment) == dict: defSegment = deepcopy(segment) if 'definition' not in defSegment.keys(): raise KeyError('missing "definition" key ') if verbose: print('copied segment definition') mydef = str(defSegment['definition']) dimensions : list = re.findall("'(variables/.+?)'",mydef) metrics : list = re.findall("'(metrics/.+?)'",mydef) reportSuite = defSegment['rsid'] scope = re.search("'context': '(.+)'}[^'context']+",mydef) res = { 'dimensions' : set(dimensions) if len(dimensions)>0 else {}, 'metrics' : set(metrics) if len(metrics)>0 else {}, 'rsid' : reportSuite, 'scope' : scope.group(1) } return res def createSegment(self, segmentJSON: dict = None) -> dict: """ Method that creates a new segment based on the dictionary passed to it. Arguments: segmentJSON : REQUIRED : the dictionary that represents the JSON statement for the segment. More information at this address <https://adobedocs.github.io/analytics-2.0-apis/#/segments/segments_createSegment> """ if self.loggingEnabled: self.logger.debug(f"starting createSegment") if segmentJSON is None: print('No segment data has been pushed') return None data = deepcopy(segmentJSON) seg = self.connector.postData( self.endpoint_company + self._getSegments, data=data, headers=self.header ) return seg def createSegmentValidate(self, segmentJSON: dict = None) -> object: """ Method that validate a new segment based on the dictionary passed to it. Arguments: segmentJSON : REQUIRED : the dictionary that represents the JSON statement for the segment. 
More information at this address <https://adobedocs.github.io/analytics-2.0-apis/#/segments/segments_createSegment> """ if self.loggingEnabled: self.logger.debug(f"starting createSegmentValidate") if segmentJSON is None: print('No segment data has been pushed') return None data = deepcopy(segmentJSON) path = "/segments/validate" seg = self.connector.postData(self.endpoint_company +path,data=data) return seg def updateSegment(self, segmentID: str = None, segmentJSON: dict = None) -> object: """ Method that updates a specific segment based on the dictionary passed to it. Arguments: segmentID : REQUIRED : Segment ID to be updated segmentJSON : REQUIRED : the dictionary that represents the JSON statement for the segment. """ if self.loggingEnabled: self.logger.debug(f"starting updateSegment") if segmentJSON is None or segmentID is None: print('No segment or segmentID data has been pushed') if self.loggingEnabled: self.logger.error(f"No segment or segmentID data has been pushed") return None data = deepcopy(segmentJSON) seg = self.connector.putData( self.endpoint_company + self._getSegments + '/' + segmentID, data=data, headers=self.header ) return seg def deleteSegment(self, segmentID: str = None) -> object: """ Method that deletes a specific segment based on the ID passed. Arguments: segmentID : REQUIRED : Segment ID to be deleted """ if segmentID is None: print('No segmentID data has been pushed') return None if self.loggingEnabled: self.logger.debug(f"starting deleteSegment for {segmentID}") seg = self.connector.deleteData(self.endpoint_company + self._getSegments + '/' + segmentID, headers=self.header) return seg def getCalculatedMetrics( self, name: str = None, tagNames: str = None, inclType: str = 'all', rsids_list: list = None, extended_info: bool = False, save=False, format:str='df', **kwargs ) -> pd.DataFrame: """ Retrieve the list of calculated metrics. Returns a data frame. 
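Several of the listing methods above (`getSegments`, `getCalculatedMetrics`) accept `tagNames` and `rsids_list` either as a Python list or as an already comma-delimited string, and normalize with `','.join(...)` when a list is passed. A small helper makes the convention explicit; the function name is ours, not part of the library.

```python
# Sketch of the list-or-string normalization used for tagNames / rsids
# query parameters (helper name is hypothetical).
def to_comma_string(value):
    """Return a comma-delimited string from a list; pass strings through."""
    if isinstance(value, list):
        return ','.join(value)
    return value

print(to_comma_string(['tagA', 'tagB']))  # 'tagA,tagB'
print(to_comma_string('tagA,tagB'))       # unchanged
```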
Arguments: name : OPTIONAL : Filter to only include calculated metrics that contains the name (str) tagNames : OPTIONAL : Filter list to only include calculated metrics that contains one of the tags (string delimited with comma, can be list as well) inclType : OPTIONAL : type of calculated Metrics to be retrieved. (str) Possible values: - all : Default value (all calculated metrics possibles) - shared : shared calculated metrics - template : template calculated metrics rsid_list : OPTIONAL : Filter list to only include segments tied to specified RSID list (list) extended_info : OPTIONAL : additional segment metadata fields to include on response (list) additional infos: reportSuiteName,definition, ownerFullName, modified, tags, compatibility save : OPTIONAL : If set to True, it will save the info in a csv file (Default False) format : OPTIONAL : format of the output. 2 values "df" for dataframe and "raw" for raw json. Possible kwargs: limit : number of segments retrieved by request. default 500: Limited to 1000 by the AnalyticsAPI.(int) """ if self.loggingEnabled: self.logger.debug(f"starting getCalculatedMetrics") limit = int(kwargs.get('limit', 500)) params = {'includeType': inclType, 'limit': limit} if name is not None: params.update({'name': str(name)}) if tagNames is not None: if type(tagNames) == list: tagNames = ','.join(tagNames) params.update({'tagNames': tagNames}) if inclType != 'all': params['includeType'] = inclType if rsids_list is not None: if type(rsids_list) == list: rsids_list = ','.join(rsids_list) params.update({'rsids': rsids_list}) if extended_info: params.update( {'expansion': 'reportSuiteName,definition,ownerFullName,modified,tags,categories,compatibility,shares'}) metrics = self.connector.getData(self.endpoint_company + self._getCalcMetrics, params=params) data = metrics['content'] lastPage = metrics['lastPage'] if not lastPage: # check if lastpage is inversed of False page_nb = 0 while not lastPage: page_nb += 1 params['page'] = page_nb 
metrics = self.connector.getData(self.endpoint_company + self._getCalcMetrics, params=params, headers=self.header) data += metrics['content'] lastPage = metrics['lastPage'] if format == "raw": if save: with open(f'calculated_metrics_{int(time.time())}.json','w') as f: f.write(json.dumps(data,indent=4)) return data df_calc_metrics = pd.DataFrame(data) if save: df_calc_metrics.to_csv(f'calculated_metrics_{int(time.time())}.csv', sep='\t') return df_calc_metrics def getCalculatedMetric(self,calculatedMetricId:str=None,full:bool=True)->dict: """ Return a dictionary on the calculated metrics requested. Arguments: calculatedMetricId : REQUIRED : The calculated metric ID to be retrieved. full : OPTIONAL : additional segment metadata fields to include on response (list) additional infos: reportSuiteName,definition, ownerFullName, modified, tags, compatibility """ if calculatedMetricId is None: raise ValueError("Require a calculated metrics ID") if self.loggingEnabled: self.logger.debug(f"starting getCalculatedMetric for {calculatedMetricId}") params = {} if full: params.update({'expansion': 'reportSuiteName,definition,ownerFullName,modified,tags,categories,compatibility'}) path = f"/calculatedmetrics/{calculatedMetricId}" res = self.connector.getData(self.endpoint_company+path,params=params) return res def scanCalculatedMetric(self,calculatedMetric:Union[str,dict],verbose:bool=False)->dict: """ Return a dictionary of metrics and dimensions used in the calculated metrics. 
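The `lastPage` pagination pattern used by `getSegments` and `getCalculatedMetrics` — request page 0, then keep incrementing `page` until the response reports `lastPage: True` — can be sketched against a stubbed endpoint. The `PAGED` responses below are hypothetical.

```python
# Sketch of the lastPage pagination loop, with a stub in place of
# connector.getData (hypothetical page data for illustration).
PAGED = [
    {'content': [1, 2], 'lastPage': False},
    {'content': [3, 4], 'lastPage': False},
    {'content': [5], 'lastPage': True},
]

def get_page(params):
    return PAGED[params['page']]

def get_all(limit=500):
    params = {'limit': limit, 'page': 0}
    data = []
    lastPage = False
    while not lastPage:
        res = get_page(params)
        data += res['content']
        lastPage = res['lastPage']
        params['page'] += 1
    return data

print(get_all())  # [1, 2, 3, 4, 5]
```

Because the server flags the final page itself, no `totalPages` value is needed — unlike the thread-pool fan-out in `getUsers`, this loop is inherently sequential.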
""" if self.loggingEnabled: self.logger.debug(f"starting scanCalculatedMetric") if type(calculatedMetric) == str: if verbose: print('retrieving calculated metrics definition') cm = self.getCalculatedMetric(calculatedMetric,full=True) elif type(calculatedMetric) == dict: cm = deepcopy(calculatedMetric) if 'definition' not in cm.keys(): raise KeyError('missing "definition" key') if verbose: print('copied calculated metrics definition') mydef = str(cm['definition']) segments:list = cm['compatibility'].get('segments',[]) res = {"dimensions":[],'metrics':[]} for segment in segments: if verbose: print(f"retrieving segment {segment} definition") tmp:dict = self.scanSegment(segment) res['dimensions'] += [dim for dim in tmp['dimensions']] res['metrics'] += [met for met in tmp['metrics']] metrics : list = re.findall("'(metrics/.+?)'",mydef) res['metrics'] += metrics res['rsid'] = cm['rsid'] res['metrics'] = set(res['metrics']) if len(res['metrics'])>0 else {} res['dimensions'] = set(res['dimensions']) if len(res['dimensions'])>0 else {} return res def createCalculatedMetric(self, metricJSON: dict = None) -> dict: """ Method that create a specific calculated metric based on the dictionary passed to it. Arguments: metricJSON : REQUIRED : Calculated Metrics information to create. 
(Required: name, definition, rsid) More information can be found at this address https://adobedocs.github.io/analytics-2.0-apis/#/calculatedmetrics/calculatedmetrics_createCalculatedMetric """ if self.loggingEnabled: self.logger.debug(f"starting createCalculatedMetric") if metricJSON is None or type(metricJSON) != dict: if self.loggingEnabled: self.logger.error(f'Expected a dictionary to create the calculated metrics') raise Exception( "Expected a dictionary to create the calculated metrics") if 'name' not in metricJSON.keys() or 'definition' not in metricJSON.keys() or 'rsid' not in metricJSON.keys(): if self.loggingEnabled: self.logger.error(f'Expected "name", "definition" and "rsid" in the data') raise KeyError( 'Expected "name", "definition" and "rsid" in the data') cm = self.connector.postData(self.endpoint_company + self._getCalcMetrics, headers=self.header, data=metricJSON) return cm def createCalculatedMetricValidate(self,metricJSON: dict=None)->dict: """ Method that validate a specific calculated metrics definition based on the dictionary passed to it. Arguments: metricJSON : REQUIRED : Calculated Metrics information to create. 
(Required: name, definition, rsid) More information can be found at this address https://adobedocs.github.io/analytics-2.0-apis/#/calculatedmetrics/calculatedmetrics_createCalculatedMetric """ if self.loggingEnabled: self.logger.debug(f"starting createCalculatedMetricValidate") if metricJSON is None or type(metricJSON) != dict: raise Exception( "Expected a dictionary to create the calculated metrics") if 'name' not in metricJSON.keys() or 'definition' not in metricJSON.keys() or 'rsid' not in metricJSON.keys(): if self.loggingEnabled: self.logger.error(f'Expected "name", "definition" and "rsid" in the data') raise KeyError( 'Expected "name", "definition" and "rsid" in the data') path = "/calculatedmetrics/validate" cm = self.connector.postData(self.endpoint_company+path, data=metricJSON) return cm def updateCalculatedMetric(self, calcID: str = None, calcJSON: dict = None) -> object: """ Method that updates a specific Calculated Metrics based on the dictionary passed to it. Arguments: calcID : REQUIRED : Calculated Metric ID to be updated calcJSON : REQUIRED : the dictionary that represents the JSON statement for the calculated metric. """ if calcJSON is None or calcID is None: print('No calcMetric or calcMetric JSON data has been passed') return None if self.loggingEnabled: self.logger.debug(f"starting updateCalculatedMetric for {calcID}") data = deepcopy(calcJSON) cm = self.connector.putData( self.endpoint_company + self._getCalcMetrics + '/' + calcID, data=data, headers=self.header ) return cm def deleteCalculatedMetric(self, calcID: str = None) -> object: """ Method that delete a specific calculated metrics based on the id passed.. 
Arguments: calcID : REQUIRED : Calculated Metrics ID to be deleted """ if calcID is None: print('No calculated metrics data has been passed') return None if self.loggingEnabled: self.logger.debug(f"starting deleteCalculatedMetric for {calcID}") cm = self.connector.deleteData( self.endpoint_company + self._getCalcMetrics + '/' + calcID, headers=self.header ) return cm def getDateRanges(self, extended_info: bool = False, save: bool = False, includeType: str = 'all',verbose:bool=False, **kwargs) -> pd.DataFrame: """ Get the list of date ranges available for the user. Arguments: extended_info : OPTIONAL : additional segment metadata fields to include on response additional infos: reportSuiteName, ownerFullName, modified, tags, compatibility, definition save : OPTIONAL : If set to True, it will save the info in a csv file (Default False) includeType : Include additional date ranges not owned by user. The "all" option takes precedence over "shared" Possible values are all, shared, templates. You can add all of them as comma separated string. Possible kwargs: limit : number of segments retrieved by request. default 500: Limited to 1000 by the AnalyticsAPI. full : Boolean : Doesn't shrink the number of columns if set to true """ if self.loggingEnabled: self.logger.debug(f"starting getDateRanges") limit = int(kwargs.get('limit', 500)) includeType = includeType.split(',') params = {'limit': limit, 'includeType': includeType} if extended_info: params.update( {'expansion': 'definition,ownerFullName,modified,tags'}) dateRanges = self.connector.getData( self.endpoint_company + self._getDateRanges, params=params, headers=self.header, verbose=verbose ) data = dateRanges['content'] df_dates = pd.DataFrame(data) if save: df_dates.to_csv('date_range.csv', index=False) return df_dates def getDateRange(self,dateRangeID:str=None)->dict: """ Get a specific Data Range based on the ID Arguments: dateRangeID : REQUIRED : the date range ID to be retrieved. 
""" if dateRangeID is None: raise ValueError("No date range ID has been passed") if self.loggingEnabled: self.logger.debug(f"starting getDateRange with ID: {dateRangeID}") params ={ "expansion":"definition,ownerFullName,modified,tags" } dr = self.connector.getData( self.endpoint_company + f"{self._getDateRanges}/{dateRangeID}", params=params ) return dr def updateDateRange(self, dateRangeID: str = None, dateRangeJSON: dict = None) -> dict: """ Method that updates a specific Date Range based on the dictionary passed to it. Arguments: dateRangeID : REQUIRED : Date Range ID to be updated dateRangeJSON : REQUIRED : the dictionary that represents the JSON statement for the date Range. """ if dateRangeJSON is None or dateRangeID is None: raise ValueError("No date range or date range JSON data have been passed") if self.loggingEnabled: self.logger.debug(f"starting updateDateRange") data = deepcopy(dateRangeJSON) dr = self.connector.putData( self.endpoint_company + self._getDateRanges + '/' + dateRangeID, data=data, headers=self.header ) return dr def deleteDateRange(self, dateRangeID: str = None) -> object: """ Method that deletes a specific date Range based on the id passed. Arguments: dateRangeID : REQUIRED : ID of Date Range to be deleted """ if dateRangeID is None: print('No Date Range ID has been pushed') return None if self.loggingEnabled: self.logger.debug(f"starting deleteDateRange for {dateRangeID}") response = self.connector.deleteData( self.endpoint_company + self._getDateRanges + '/' + dateRangeID, headers=self.header ) return response def getCalculatedFunctions(self, **kwargs) -> pd.DataFrame: """ Returns the calculated metrics functions. 
""" if self.loggingEnabled: self.logger.debug(f"starting getCalculatedFunctions") path = "/calculatedmetrics/functions" limit = int(kwargs.get('limit', 500)) params = {'limit': limit} funcs = self.connector.getData( self.endpoint_company + path, params=params, headers=self.header ) df = pd.DataFrame(funcs) return df def getTags(self, limit: int = 100, **kwargs) -> list: """ Return the list of tags Arguments: limit : OPTIONAL : Amount of tag to be returned by request. Default 100 """ if self.loggingEnabled: self.logger.debug(f"starting getTags") path = "/componentmetadata/tags" params = {'limit': limit} if kwargs.get('page', False): params['page'] = kwargs.get('page', 0) res = self.connector.getData(self.endpoint_company + path, params=params, headers=self.header) data = res['content'] if not res['lastPage']: page = res['number'] + 1 data += self.getTags(limit=limit, page=page) return data def getTag(self, tagId: str = None) -> dict: """ Return the a tag by its ID. Arguments: tagId : REQUIRED : the Tag ID to be retrieved. """ if tagId is None: raise Exception("Require a tag ID for this method.") if self.loggingEnabled: self.logger.debug(f"starting getTag for {tagId}") path = f"/componentmetadata/tags/{tagId}" res = self.connector.getData(self.endpoint_company + path, headers=self.header) return res def getComponentTagName(self, tagNames: str = None, componentType: str = None) -> dict: """ Given a comma separated list of tag names, return component ids associated with them. Arguments: tagNames : REQUIRED : Comma separated list of tag names. componentType : REQUIRED : The component type to operate on. 
Available values : segment, dashboard, bookmark, calculatedMetric, project, dateRange, metric, dimension, virtualReportSuite, scheduledJob, alert, classificationSet """ path = "/componentmetadata/tags/tagnames" if tagNames is None: raise Exception("Requires tag names to be provided") if self.loggingEnabled: self.logger.debug(f"starting getComponentTagName for {tagNames}") if componentType is None: raise Exception("Requires a Component Type to be provided") params = { "tagNames": tagNames, "componentType": componentType } res = self.connector.getData(self.endpoint_company + path, params=params, headers=self.header) return res def searchComponentsTags(self, componentType: str = None, componentIds: list = None) -> dict: """ Search for the tags of a list of component by their ids. Arguments: componentType : REQUIRED : The component type to use in the search. Available values : segment, dashboard, bookmark, calculatedMetric, project, dateRange, metric, dimension, virtualReportSuite, scheduledJob, alert, classificationSet componentIds : REQUIRED : List of components Ids to use. """ if self.loggingEnabled: self.logger.debug(f"starting searchComponentsTags") if componentType is None: raise Exception("ComponentType is required") if componentIds is None or type(componentIds) != list: raise Exception("componentIds is required as a list of ids") path = "/componentmetadata/tags/component/search" obj = { "componentType": componentType, "componentIds": componentIds } if self.loggingEnabled: self.logger.debug(f"params {obj}") res = self.connector.postData(self.endpoint_company + path, data=obj, headers=self.header) return res def createTags(self, data: list = None) -> dict: """ Create a new tag and applies that new tag to the passed components. Arguments: data : REQUIRED : list of the tag to be created with their component relation. 
Example of data : [ { "id": 0, "name": "string", "description": "string", "components": [ { "componentType": "string", "componentId": "string", "tags": [ "Unknown Type: Tag" ] } ] } ] """ if self.loggingEnabled: self.logger.debug(f"starting createTags") if data is None: raise Exception("Requires a list of tags to be created") path = "/componentmetadata/tags" if self.loggingEnabled: self.logger.debug(f"data: {data}") res = self.connector.postData(self.endpoint_company + path, data=data, headers=self.header) return res def deleteTags(self, componentType: str = None, componentIds: str = None) -> str: """ Delete all tags from the component Type and the component ids specified. Arguments: componentIds : REQUIRED : the Comma-separated list of componentIds to operate on. componentType : REQUIRED : The component type to operate on. Available values : segment, dashboard, bookmark, calculatedMetric, project, dateRange, metric, dimension, virtualReportSuite, scheduledJob, alert, classificationSet """ if self.loggingEnabled: self.logger.debug(f"starting deleteTags") if componentType is None: raise Exception("require a component type") if componentIds is None: raise Exception("require component ID(s)") path = "/componentmetadata/tags" params = { "componentType": componentType, "componentIds": componentIds } res = self.connector.deleteData(self.endpoint_company + path, params=params, headers=self.header) return res def deleteTag(self, tagId: str = None) -> str: """ Delete a Tag based on its id. Arguments: tagId : REQUIRED : The tag ID to be deleted. 
""" if tagId is None: raise Exception("A tag ID is required") if self.loggingEnabled: self.logger.debug(f"starting deleteTag for {tagId}") path = "โ€‹/componentmetadataโ€‹/tagsโ€‹/{tagId}" res = self.connector.deleteData(self.endpoint_company + path, headers=self.header) return res def getComponentTags(self, componentId: str = None, componentType: str = None) -> list: """ Given a componentId, return all tags associated with that component. Arguments: componentId : REQUIRED : The componentId to operate on. Currently this is just the segmentId. componentType : REQUIRED : The component type to operate on. segment, dashboard, bookmark, calculatedMetric, project, dateRange, metric, dimension, virtualReportSuite, scheduledJob, alert, classificationSet """ if self.loggingEnabled: self.logger.debug(f"starting getComponentTags") path = "/componentmetadata/tags/search" if componentType is None: raise Exception("require a component type") if componentId is None: raise Exception("require a component ID") params = {"componentId": componentId, "componentType": componentType} res = self.connector.getData(self.endpoint_company + path, params=params, headers=self.header) return res def updateComponentTags(self, data: list = None): """ Overwrite the component Tags with the list send. Arguments: data : REQUIRED : list of the components to be udpated with their respective list of tag names. 
Object looks like the following: [ { "componentType": "string", "componentId": "string", "tags": [ "Unknown Type: Tag" ] } ] """ if self.loggingEnabled: self.logger.debug(f"starting updateComponentTags") if data is None or type(data) != list: raise Exception("require list of update to be sent.") path = "/componentmetadata/tags/tagitems" res = self.connector.putData(self.endpoint_company + path, data=data, headers=self.header) return res def getScheduledJobs(self, includeType: str = "all", full: bool = True,limit:int=1000,format:str="df",verbose: bool = False) -> JsonListOrDataFrameType: """ Get Scheduled Projects. You can retrieve the projectID out of the tasks column to see for which workspace a schedule Arguments: includeType : OPTIONAL : By default gets all non-expired or deleted projects. (default "all") You can specify e.g. "all,shared,expired,deleted" to get more. Active schedules always get exported,so you need to use the `rsLocalExpirationTime` parameter in the `schedule` column to e.g. see which schedules are expired full : OPTIONAL : By default True. It returns the following additional information "ownerFullName,groups,tags,sharesFullName,modified,favorite,approved,scheduledItemName,scheduledUsersFullNames,deletedReason" limit : OPTIONAL : Number of element retrieved by request (default max 1000) format : OPTIONAL : Define the format you want to output the result. 
Default "df" for dataframe, other option "raw" verbose: OPTIONAL : set to True for debug output """ if self.loggingEnabled: self.logger.debug(f"starting getScheduledJobs") params = {"includeType": includeType, "pagination": True, "locale": "en_US", "page": 0, "limit": limit } if full is True: params["expansion"] = "ownerFullName,groups,tags,sharesFullName,modified,favorite,approved,scheduledItemName,scheduledUsersFullNames,deletedReason" path = "/scheduler/scheduler/scheduledjobs/" if verbose: print(f"Getting Scheduled Jobs with Parameters {params}") res = self.connector.getData(self.endpoint_company + path, params=params, headers=self.header) if res.get("content") is None: raise Exception(f"Scheduled Job had no content in response. Parameters were: {params}") # get Scheduled Jobs data into Data Frame data = res.get("content") last_page = res.get("lastPage",True) total_el = res.get("totalElements") number_el = res.get("numberOfElements") if verbose: print(f"Last Page {last_page}, total elements: {total_el}, number_el: {number_el}") # iterate through pages if not on last page yet while last_page == False: if verbose: print(f"last_page is {last_page}, next round") params["page"] += 1 res = self.connector.getData(self.endpoint_company + path, params=params, headers=self.header) data += res.get("content") last_page = res.get("lastPage",True) if format == "df": df = pd.DataFrame(data) return df return data def getScheduledJob(self,scheduleId:str=None)->dict: """ Return a scheduled project definition. 
Arguments: scheduleId : REQUIRED : Schedule project ID """ if scheduleId is None: raise ValueError("A schedule ID is required") if self.loggingEnabled: self.logger.debug(f"starting getScheduledJob with ID: {scheduleId}") path = f"/scheduler/scheduler/scheduledjobs/{scheduleId}" params = { 'expansion': 'modified,favorite,approved,tags,shares,sharesFullName,reportSuiteName,schedule,triggerObject,tasks,deliverySetting'} res = self.connector.getData(self.endpoint_company + path, params=params) return res def createScheduledJob(self,projectId:str=None,type:str="pdf",schedule:dict=None,loginIds:list=None,emails:list=None,groupIds:list=None,width:int=None)->dict: """ Creates a schedule job based on the information provided as arguments. Expiration will be in one year by default. Arguments: projectId : REQUIRED : The workspace project ID to send. type : REQUIRED : how to send the project, default "pdf" schedule : REQUIRED : object to specify the schedule used. example: { "hour": 10, "minute": 45, "second": 25, "interval": 1, "type": "daily" } { 'type': 'weekly', 'second': 53, 'minute': 0, 'hour': 8, 'daysOfWeek': [2], 'interval': 1 } { 'type': 'monthly', 'second': 53, 'minute': 30, 'hour': 16, 'dayOfMonth': 21, 'interval': 1 } loginIds : REQUIRED : A list of login ID of the users that are recipient of the report. It can be retrieved by the getUsers method. emails : OPTIONAL : If users are not registered in AA, you can specify a list of email addresses. groupIds : OPTIONAL : Group Id to send the report to. width : OPTIONAL : width of the report to be sent. 
(Minimum 800) """ if self.loggingEnabled: self.logger.debug(f"starting createScheduledJob") path = "/scheduler/scheduler/scheduledjobs/" dateNow = datetime.datetime.now() nowDateTime = datetime.datetime.isoformat(dateNow,timespec='seconds') futureDate = datetime.datetime.isoformat(dateNow.replace(dateNow.year + 1),timespec='seconds') deliveryId_res = self.createDeliverySetting(loginIds=loginIds, emails=emails,groupIds=groupIds) deliveryId = deliveryId_res.get('id','') if deliveryId == "": if self.loggingEnabled: self.logger.error(f"error creating the delivery ID") self.logger.error(json.dumps(deliveryId_res)) raise Exception("Error creating the delivery ID") me = self.getUserMe() projectDetail = self.getProject(projectId) data = { "approved" : False, "complexity":{}, "curatedItem":False, "description" : "", "favorite" : False, "hidden":False, "internal":False, "intrinsicIdentity" : False, "isDeleted":False, "isDisabled":False, "locale":"en_US", "noAccess":False, "template":False, "version":"1.0.1", "rsid":projectDetail.get('rsid',''), "schedule":{ "rsLocalStartTime":nowDateTime, "rsLocalExpirationTime":futureDate, "triggerObject":schedule }, "tasks":[ { "tasktype":"generate", "tasksubtype":"analysisworkspace", "requestParams":{ "artifacts":[type], "imsOrgId": self.connector.config['org_id'], "imsUserId": me.get('imsUserId',''), "imsUserName":"API", "projectId" : projectDetail.get('id'), "projectName" : projectDetail.get('name') } }, { "tasktype":"deliver", "artifactType":type, "deliverySettingId": deliveryId, } ] } if width is not None and width >= 800: data['tasks'][0]['requestParams']['width'] = width res = self.connector.postData(self.endpoint_company+path,data=data) return res def updateScheduledJob(self,scheduleId:str=None,scheduleObj:dict=None)->dict: """ Update a schedule Job based on its id and the definition attached to it. Arguments: scheduleId : REQUIRED : the job to be updated. scheduleObj : REQUIRED : The object to replace the current definition. 
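The `rsLocalStartTime` / `rsLocalExpirationTime` pair built by `createScheduledJob` is simply "now" and "now plus one year" in second-resolution ISO format. A minimal sketch of that computation:

```python
# Sketch of the schedule timestamps used by createScheduledJob:
# expiration defaults to one year after the start time.
import datetime

now = datetime.datetime.now()
start = datetime.datetime.isoformat(now, timespec='seconds')
# Note: replace(year=...) raises ValueError when run on Feb 29 of a
# leap year; the library's positional form dateNow.replace(dateNow.year + 1)
# has the same behavior.
expiration = datetime.datetime.isoformat(now.replace(year=now.year + 1),
                                         timespec='seconds')
print(start, expiration)
```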
""" if scheduleId is None: raise ValueError("A schedule ID is required") if scheduleObj is None: raise ValueError('A schedule Object is required') if self.loggingEnabled: self.logger.debug(f"starting updateScheduleJob with ID: {scheduleId}") path = f"/scheduler/scheduler/scheduledjobs/{scheduleId}" res = self.connector.putData(self.endpoint_company+path,data=scheduleObj) return res def deleteScheduledJob(self,scheduleId:str=None)->dict: """ Delete a schedule project based on its ID. Arguments: scheduleId : REQUIRED : the schedule ID to be deleted. """ if scheduleId is None: raise Exception("A schedule ID is required for deletion") if self.loggingEnabled: self.logger.debug(f"starting deleteScheduleJob with ID: {scheduleId}") path = f"/scheduler/scheduler/scheduledjobs/{scheduleId}" res = self.connector.deleteData(self.endpoint_company + path) return res def getDeliverySettings(self)->list: """ Return a list of delivery settings. """ path = f"/scheduler/scheduler/deliverysettings/" params = {'expansion': 'definition',"limit" : 2000} lastPage = False page_nb = 0 data = [] while lastPage != True: params['page'] = page_nb res = self.connector.getData(self.endpoint_company + path, params=params) data += res.get('content',[]) if len(res.get('content',[]))==params["limit"]: lastPage = False else: lastPage = True page_nb += 1 return data def getDeliverySetting(self,deliverySettingId:str=None)->dict: """ Retrieve the delivery setting from a scheduled project. Argument: deliverySettingId : REQUIRED : The delivery setting ID of the scheduled project. """ path = f"/scheduler/scheduler/deliverysettings/{deliverySettingId}/" params = {'expansion': 'definition'} res = self.connector.getData(self.endpoint_company + path, params=params) return res def createDeliverySetting(self,loginIds:list=None,emails:list=None,groupIds:list=None)->dict: """ Create a delivery setting for a specific scheduled project. Automatically used when using `createScheduleJob`. 
        Arguments:
            loginIds : REQUIRED : List of login IDs to send the scheduled project to.
                Can be retrieved by the getUsers method.
            emails : OPTIONAL : In case the recipients are not in the Analytics interface.
            groupIds : OPTIONAL : List of group IDs to send the scheduled project to.
        """
        path = f"/scheduler/scheduler/deliverysettings/"
        if loginIds is None:
            loginIds = []
        if emails is None:
            emails = []
        if groupIds is None:
            groupIds = []
        data = {
            "definition": {
                "allAdmins": False,
                "emailAddresses": emails,
                "groupIds": groupIds,
                "loginIds": loginIds,
                "type": "email"
            },
            "name": "email-aanalytics2"
        }
        res = self.connector.postData(self.endpoint_company + path, data=data)
        return res

    def updateDeliverySetting(self, deliveryId: str = None, loginIds: list = None, emails: list = None, groupIds: list = None) -> dict:
        """
        Update a delivery setting for a specific scheduled project.
        Automatically created for email setting.
        Arguments:
            deliveryId : REQUIRED : the delivery setting ID to be updated
            loginIds : REQUIRED : List of login IDs to send the scheduled project to.
                Can be retrieved by the getUsers method.
            emails : OPTIONAL : In case the recipients are not in the Analytics interface.
            groupIds : OPTIONAL : List of group IDs to send the scheduled project to.
        """
        if deliveryId is None:
            raise ValueError("Require a delivery setting ID")
        path = f"/scheduler/scheduler/deliverysettings/{deliveryId}"
        if loginIds is None:
            loginIds = []
        if emails is None:
            emails = []
        if groupIds is None:
            groupIds = []
        data = {
            "definition": {
                "allAdmins": False,
                "emailAddresses": emails,
                "groupIds": groupIds,
                "loginIds": loginIds,
                "type": "email"
            },
            "name": "email-aanalytics2"
        }
        res = self.connector.putData(self.endpoint_company + path, data=data)
        return res

    def deleteDeliverySetting(self, deliveryId: str = None) -> dict:
        """
        Delete a delivery setting based on the ID passed.
        Arguments:
            deliveryId : REQUIRED : The delivery setting ID to be deleted.
        """
        if deliveryId is None:
            raise ValueError("Require a delivery setting ID")
        path = f"/scheduler/scheduler/deliverysettings/{deliveryId}"
        res = self.connector.deleteData(self.endpoint_company + path)
        return res

    def getProjects(self, includeType: str = 'all', full: bool = False, limit: int = None, includeShared: bool = False,
                    includeTemplate: bool = False, format: str = 'df', cache: bool = False, save: bool = False) -> JsonListOrDataFrameType:
        """
        Returns the list of projects through either a dataframe or a list.
        Arguments:
            includeType : OPTIONAL : type of projects to be retrieved. (str)
                Possible values:
                    - all : Default value (all possible projects)
                    - shared : shared projects
            full : OPTIONAL : if set to True, returns all information about projects.
            limit : OPTIONAL : Limit the number of results returned.
            includeShared : OPTIONAL : If full is set to False, you can retrieve only information about sharing.
            includeTemplate : OPTIONAL : If full is set to False, you can add information about templates here.
            format : OPTIONAL : format of the output. 2 values: "df" for dataframe (default) and "raw" for raw json.
            cache : OPTIONAL : Boolean in case you want to cache the result in the "listProjectIds" attribute.
            save : OPTIONAL : If set to True, it will save the info in a csv file (bool : default False)
        """
        if self.loggingEnabled:
            self.logger.debug(f"starting getProjects")
        path = "/projects"
        params = {"includeType": includeType}
        if full:
            params["expansion"] = 'reportSuiteName,ownerFullName,tags,shares,sharesFullName,modified,favorite,approved,companyTemplate,externalReferences,accessLevel'
        else:
            params["expansion"] = "ownerFullName,modified"
            if includeShared:
                params["expansion"] += ',shares,sharesFullName'
            if includeTemplate:
                params["expansion"] += ',companyTemplate'
        if limit is not None:
            params['limit'] = limit
        if self.loggingEnabled:
            self.logger.debug(f"params: {params}")
        res = self.connector.getData(self.endpoint_company + path, params=params, headers=self.header)
        if cache:
            self.listProjectIds = res
        if format == "raw":
            if save:
                with open('projects.json', 'w') as f:
                    f.write(json.dumps(res, indent=2))
            return res
        df = pd.DataFrame(res)
        if not df.empty:
            df['created'] = pd.to_datetime(df['created'], format='%Y-%m-%dT%H:%M:%SZ')
            df['modified'] = pd.to_datetime(df['modified'], format='%Y-%m-%dT%H:%M:%SZ')
        if save:
            df.to_csv(f'projects_{int(time.time())}.csv', index=False)
        return df

    def getProject(self, projectId: str = None, projectClass: bool = False, rsidSuffix: bool = False, retry: int = 0,
                   cache: bool = False, verbose: bool = False) -> Union[dict, Project]:
        """
        Return the dictionary of the project information and its definition.
        It will return a dictionary or a Project class.
        The project detail will be saved as Project class in the projectsDetails class attribute.
        Arguments:
            projectId : REQUIRED : the project ID to be retrieved.
            projectClass : OPTIONAL : if set to True, returns a class of the project with prefiltered information.
            rsidSuffix : OPTIONAL : if set to True, returns project class with rsid as suffix to dimensions and metrics.
            retry : OPTIONAL : If you want to retry the request if it fails.
                Specify the number of retries (0 default).
            cache : OPTIONAL : If you want to cache the result as Project class in the "projectsDetails" attribute.
            verbose : OPTIONAL : If you wish to have logs of status
        """
        if projectId is None:
            raise Exception("Requires a projectId parameter")
        params = {
            'expansion': 'definition,ownerFullName,modified,favorite,approved,tags,shares,sharesFullName,reportSuiteName,companyTemplate,accessLevel'}
        path = f"/projects/{projectId}"
        if self.loggingEnabled:
            self.logger.debug(f"starting getProject for {projectId}")
        res = self.connector.getData(self.endpoint_company + path, params=params, headers=self.header, retry=retry, verbose=verbose)
        if projectClass:
            if self.loggingEnabled:
                self.logger.info(f"building an instance of Project class")
            myProject = Project(res, rsidSuffix=rsidSuffix)
            return myProject
        if cache:
            if self.loggingEnabled:
                self.logger.info(f"caching the project as Project class")
            try:
                self.projectsDetails[projectId] = Project(res)
            except Exception:
                if verbose:
                    print('WARNING : Cannot convert Project to Project class')
                if self.loggingEnabled:
                    self.logger.warning(f"Cannot convert Project to Project class")
        return res

    def getAllProjectDetails(self, projects: JsonListOrDataFrameType = None, filterNameProject: str = None,
                             filterNameOwner: str = None, useAttribute: bool = True, cache: bool = False,
                             rsidSuffix: bool = False, output: str = "dict", verbose: bool = False) -> dict:
        """
        Retrieve all project details. You can either pass the list or dataframe returned from the getProjects method, plus some filters.
        Returns a dict with projectId as key and the Project class as value, for analysis.
        Arguments:
            projects : OPTIONAL : Takes the type of object returned from getProjects (all data - not only the ID).
                If None is provided and you never ran the getProjects method, we will call the getProjects method and retrieve the elements.
                Otherwise you can pass a limited list of elements that you want to check details for.
            filterNameProject : OPTIONAL : If you want to retrieve project details for projects with a specific string in their name.
            filterNameOwner : OPTIONAL : If you want to retrieve project details for projects with an owner having a specific name.
            useAttribute : OPTIONAL : True by default, it will use the project list saved in the listProjectIds attribute.
                Set to False if you want to start the retrieval process of your projects from scratch.
            rsidSuffix : OPTIONAL : If you want to add rsid as suffix of metrics and dimensions (::rsid)
            cache : OPTIONAL : If you want to cache the different elements retrieved for future usage.
            output : OPTIONAL : If you want to return a "list" or a "dict" from this method. (default "dict")
            verbose : OPTIONAL : Set to True to print information.
        Not using a filter may end up taking a while to retrieve the information.
        """
        if self.loggingEnabled:
            self.logger.debug(f"starting getAllProjectDetails")
        ## if no project data
        if projects is None:
            if self.loggingEnabled:
                self.logger.debug(f"No projects passed")
            if len(self.listProjectIds) > 0 and useAttribute:
                fullProjectIds = self.listProjectIds
            else:
                fullProjectIds = self.getProjects(format='raw', cache=cache)
        ## if project data is passed
        elif projects is not None:
            if self.loggingEnabled:
                self.logger.debug(f"projects passed")
            if isinstance(projects, pd.DataFrame):
                fullProjectIds = projects.to_dict(orient='records')
            elif isinstance(projects, list):
                fullProjectIds = projects  ## keep the full records: names, owners and IDs are read below
        if filterNameProject is not None:
            if self.loggingEnabled:
                self.logger.debug(f"filterNameProject passed")
            fullProjectIds = [project for project in fullProjectIds if filterNameProject in project['name']]
        if filterNameOwner is not None:
            if self.loggingEnabled:
                self.logger.debug(f"filterNameOwner passed")
            fullProjectIds = [project for project in fullProjectIds if filterNameOwner in project['owner'].get('name', '')]
        if verbose:
            print(f'{len(fullProjectIds)} project details to retrieve')
            print(f"estimated time required : {int(len(fullProjectIds)/60)} minutes")
        if self.loggingEnabled:
            self.logger.debug(f'{len(fullProjectIds)} project details to retrieve')
        projectIds = (project['id'] for project in fullProjectIds)
        projectsDetails = {projectId: self.getProject(projectId, projectClass=True, rsidSuffix=rsidSuffix) for projectId in projectIds}
        if filterNameProject is None and filterNameOwner is None:
            self.projectsDetails = projectsDetails
        if output == "list":
            list_projectsDetails = [projectsDetails[key] for key in projectsDetails]
            return list_projectsDetails
        return projectsDetails

    def deleteProject(self, projectId: str = None) -> dict:
        """
        Delete the project specified by its ID.
        Arguments:
            projectId : REQUIRED : the project ID to be deleted.
        """
        if self.loggingEnabled:
            self.logger.debug(f"starting deleteProject")
        if projectId is None:
            raise Exception("Requires a projectId parameter")
        path = f"/projects/{projectId}"
        res = self.connector.deleteData(self.endpoint_company + path, headers=self.header)
        return res

    def validateProject(self, projectObj: dict = None) -> dict:
        """
        Validate a project definition based on the definition passed.
        Arguments:
            projectObj : REQUIRED : the dictionary that represents the Workspace definition.
                requires the following elements: name, description, rsid, definition, owner
        """
        if self.loggingEnabled:
            self.logger.debug(f"starting validateProject")
        if projectObj is None or type(projectObj) != dict:
            raise Exception("Requires a projectObj data to be sent to the server.")
        if 'project' in projectObj.keys():
            rsid = projectObj['project'].get('rsid', None)
        else:
            rsid = projectObj.get('rsid', None)
            projectObj = {'project': projectObj}
        if rsid is None:
            raise Exception("Could not find a rsid parameter in your project definition")
        path = "/projects/validate"
        params = {'rsid': rsid}
        res = self.connector.postData(self.endpoint_company + path, data=projectObj, headers=self.header, params=params)
        return res

    def updateProject(self, projectId: str = None, projectObj: dict = None) -> dict:
        """
        Update your project with the new object placed as parameter.
        Arguments:
            projectId : REQUIRED : the project ID to be updated.
            projectObj : REQUIRED : the dictionary to replace the previous Workspace.
                requires the following elements: name, description, rsid, definition, owner
        """
        if self.loggingEnabled:
            self.logger.debug(f"starting updateProject")
        if projectId is None:
            raise Exception("Requires a projectId parameter")
        path = f"/projects/{projectId}"
        if projectObj is None:
            raise Exception("Requires a projectObj parameter")
        if 'name' not in projectObj.keys():
            raise KeyError("Requires name key in the project object")
        if 'description' not in projectObj.keys():
            raise KeyError("Requires description key in the project object")
        if 'rsid' not in projectObj.keys():
            raise KeyError("Requires rsid key in the project object")
        if 'owner' not in projectObj.keys():
            raise KeyError("Requires owner key in the project object")
        if type(projectObj['owner']) != dict:
            raise ValueError("Requires owner key to be a dictionary")
        if 'definition' not in projectObj.keys():
            raise KeyError("Requires definition key in the project object")
        if type(projectObj['definition']) != dict:
            raise ValueError("Requires definition key to be a dictionary")
        res = self.connector.putData(self.endpoint_company + path, data=projectObj, headers=self.header)
        return res

    def createProject(self, projectObj: dict = None) -> dict:
        """
        Create a project based on the definition you have set.
        Arguments:
            projectObj : REQUIRED : the dictionary to create a new Workspace.
                requires the following elements: name, description, rsid, definition, owner
        """
        if self.loggingEnabled:
            self.logger.debug(f"starting createProject")
        path = "/projects/"
        if projectObj is None:
            raise Exception("Requires a projectObj parameter")
        if 'name' not in projectObj.keys():
            raise KeyError("Requires name key in the project object")
        if 'description' not in projectObj.keys():
            raise KeyError("Requires description key in the project object")
        if 'rsid' not in projectObj.keys():
            raise KeyError("Requires rsid key in the project object")
        if 'owner' not in projectObj.keys():
            raise KeyError("Requires owner key in the project object")
        if type(projectObj['owner']) != dict:
            raise ValueError("Requires owner key to be a dictionary")
        if 'definition' not in projectObj.keys():
            raise KeyError("Requires definition key in the project object")
        if type(projectObj['definition']) != dict:
            raise ValueError("Requires definition key to be a dictionary")
        res = self.connector.postData(self.endpoint_company + path, data=projectObj, headers=self.header)
        return res

    def findComponentsUsage(self, components: list = None,
                            projectDetails: list = None,
                            segments: Union[list, pd.DataFrame] = None,
                            calculatedMetrics: Union[list, pd.DataFrame] = None,
                            recursive: bool = False,
                            regexUsed: bool = False,
                            verbose: bool = False,
                            resetProjectDetails: bool = False,
                            rsidSuffix: bool = False,
                            ) -> dict:
        """
        Find the usage of components in the different parts of an Adobe Analytics setup: Projects, Segments, Calculated metrics.
        Arguments:
            components : REQUIRED : list of components to look for.
                Example : evar10, event1, prop3, segmentId, calculatedMetricsId
            projectDetails : OPTIONAL : list of instances of Project class.
            segments : OPTIONAL : If you wish to pass the segments to look for. (should contain definition)
            calculatedMetrics : OPTIONAL : If you wish to pass the calculated metrics to look for. (should contain definition)
            recursive : OPTIONAL : if set to True, will also find the references where the meta components are used.
                Segments based on your elements will also be searched to see where they are located.
            regexUsed : OPTIONAL : If set to True, the elements are defined as a regex and some default setup is turned off.
            resetProjectDetails : OPTIONAL : Set to False by default. If set to True, it will NOT use the cache.
            rsidSuffix : OPTIONAL : If you do not give projectDetails and you want to look for rsid usage in reports for dimensions and metrics.
        """
        if components is None or type(components) != list:
            raise ValueError("components must be present as a list")
        if self.loggingEnabled:
            self.logger.debug(f"starting findComponentsUsage for {components}")
        listComponentProp = [comp for comp in components if 'prop' in comp]
        listComponentVar = [comp for comp in components if 'evar' in comp]
        listComponentEvent = [comp for comp in components if 'event' in comp]
        listComponentSegs = [comp for comp in components if comp.startswith('s')]
        listComponentCalcs = [comp for comp in components if comp.startswith('cm')]
        restComponents = set(components) - set(listComponentProp + listComponentVar + listComponentEvent + listComponentSegs + listComponentCalcs)
        listDefaultElements = [comp for comp in restComponents]
        listRecursion = []
        ## adding the irregular ones
        regPartSeg = r"('|\.)"  ## ensure to not catch evar100 for evar10
        regPartProj = r"($|\.|\::)"  ## ensure to not catch evar100 for evar10
        if regexUsed:
            if self.loggingEnabled:
                self.logger.debug(f"regex is used")
            regPartSeg = ""
            regPartProj = ""
        ## Segments
        if verbose:
            print('retrieving segments')
        if self.loggingEnabled:
            self.logger.debug(f"retrieving segments")
        if len(self.segments) == 0 and segments is None:
            self.segments = self.getSegments(extended_info=True)
            mySegments = self.segments
        elif len(self.segments) > 0 and segments is None:
            mySegments = self.segments
        elif segments is not None:
            if type(segments) == list:
                mySegments = pd.DataFrame(segments)
            elif type(segments) == pd.DataFrame:
                mySegments = segments
            else:
                mySegments = segments
        ### Calculated Metrics
        if verbose:
            print('retrieving calculated metrics')
        if self.loggingEnabled:
            self.logger.debug(f"retrieving calculated metrics")
        if len(self.calculatedMetrics) == 0 and calculatedMetrics is None:
            self.calculatedMetrics = self.getCalculatedMetrics(extended_info=True)
            myMetrics = self.calculatedMetrics
        elif len(self.calculatedMetrics) > 0 and calculatedMetrics is None:
            myMetrics = self.calculatedMetrics
        elif calculatedMetrics is not None:
            if type(calculatedMetrics) == list:
                myMetrics = pd.DataFrame(calculatedMetrics)
            elif type(calculatedMetrics) == pd.DataFrame:
                myMetrics = calculatedMetrics
            else:
                myMetrics = calculatedMetrics
        ### Projects
        if (len(self.projectsDetails) == 0 and projectDetails is None) or resetProjectDetails:
            if self.loggingEnabled:
                self.logger.debug(f"retrieving projects details")
            self.projectsDetails = self.getAllProjectDetails(verbose=verbose, rsidSuffix=rsidSuffix)
            myProjectDetails = (self.projectsDetails[key].to_dict() for key in self.projectsDetails)
        elif len(self.projectsDetails) > 0 and projectDetails is None and resetProjectDetails == False:
            if self.loggingEnabled:
                self.logger.debug(f"transforming projects details")
            myProjectDetails = (self.projectsDetails[key].to_dict() for key in self.projectsDetails)
        elif projectDetails is not None:
            if self.loggingEnabled:
                self.logger.debug(f"setting the project details")
            if isinstance(projectDetails[0], Project):
                myProjectDetails = (item.to_dict() for item in projectDetails)
            elif isinstance(projectDetails[0], dict):
                myProjectDetails = (Project(item).to_dict() for item in projectDetails)
            else:
                raise Exception("Project details were not able to be processed")
        teeProjects: tuple = tee(myProjectDetails)  ## duplicating the project generator for recursive pass (low memory - intensive computation)
        returnObj = {element: {'segments': [], 'calculatedMetrics': [], 'projects': []} for element in components}
        recurseObj = defaultdict(list)
        if verbose:
            print('search started')
            print(f'recursive option : {recursive}')
            print('start looking into segments')
        if self.loggingEnabled:
            self.logger.debug(f"Analyzing segments")
        for _, seg in mySegments.iterrows():
            for prop in listComponentProp:
                if re.search(f"{prop + regPartSeg}", str(seg['definition'])):
                    returnObj[prop]['segments'].append({seg['name']: seg['id']})
                    if recursive:
                        listRecursion.append(seg['id'])
            for var in listComponentVar:
                if re.search(f"{var + regPartSeg}", str(seg['definition'])):
                    returnObj[var]['segments'].append({seg['name']: seg['id']})
                    if recursive:
                        listRecursion.append(seg['id'])
            for event in listComponentEvent:
                if re.search(f"{event}'", str(seg['definition'])):
                    returnObj[event]['segments'].append({seg['name']: seg['id']})
                    if recursive:
                        listRecursion.append(seg['id'])
            for element in listDefaultElements:
                if re.search(f"{element}", str(seg['definition'])):
                    returnObj[element]['segments'].append({seg['name']: seg['id']})
                    if recursive:
                        listRecursion.append(seg['id'])
        if self.loggingEnabled:
            self.logger.debug(f"Analyzing calculated metrics")
        if verbose:
            print('start looking into calculated metrics')
        for _, met in myMetrics.iterrows():
            for prop in listComponentProp:
                if re.search(f"{prop + regPartSeg}", str(met['definition'])):
                    returnObj[prop]['calculatedMetrics'].append({met['name']: met['id']})
                    if recursive:
                        listRecursion.append(met['id'])
            for var in listComponentVar:
                if re.search(f"{var + regPartSeg}", str(met['definition'])):
                    returnObj[var]['calculatedMetrics'].append({met['name']: met['id']})
                    if recursive:
                        listRecursion.append(met['id'])
            for event in listComponentEvent:
                if re.search(f"{event}'", str(met['definition'])):
                    returnObj[event]['calculatedMetrics'].append({met['name']: met['id']})
                    if recursive:
                        listRecursion.append(met['id'])
            for element in listDefaultElements:
                if re.search(f"{element}'", str(met['definition'])):
                    returnObj[element]['calculatedMetrics'].append({met['name']: met['id']})
                    if recursive:
                        listRecursion.append(met['id'])
        if verbose:
            print('start looking into projects')
        if self.loggingEnabled:
            self.logger.debug(f"Analyzing projects")
        for proj in teeProjects[0]:
            ## mobile reports don't have dimensions.
            if proj['reportType'] == "desktop":
                for prop in listComponentProp:
                    for element in proj['dimensions']:
                        if re.search(f"{prop + regPartProj}", element):
                            returnObj[prop]['projects'].append({proj['name']: proj['id']})
                for var in listComponentVar:
                    for element in proj['dimensions']:
                        if re.search(f"{var + regPartProj}", element):
                            returnObj[var]['projects'].append({proj['name']: proj['id']})
                for event in listComponentEvent:
                    for element in proj['metrics']:
                        if re.search(f"{event}", element):
                            returnObj[event]['projects'].append({proj['name']: proj['id']})
                for seg in listComponentSegs:
                    for element in proj.get('segments', []):
                        if re.search(f"{seg}", element):
                            returnObj[seg]['projects'].append({proj['name']: proj['id']})
                for met in listComponentCalcs:
                    for element in proj.get('calculatedMetrics', []):
                        if re.search(f"{met}", element):
                            returnObj[met]['projects'].append({proj['name']: proj['id']})
                for element in listDefaultElements:
                    for met in proj['calculatedMetrics']:
                        if re.search(f"{element}", met):
                            returnObj[element]['projects'].append({proj['name']: proj['id']})
                    for dim in proj['dimensions']:
                        if re.search(f"{element}", dim):
                            returnObj[element]['projects'].append({proj['name']: proj['id']})
                    for rsid in proj['rsids']:
                        if re.search(f"{element}", rsid):
                            returnObj[element]['projects'].append({proj['name']: proj['id']})
                    for event in proj['metrics']:
                        if re.search(f"{element}", event):
                            returnObj[element]['projects'].append({proj['name']: proj['id']})
        if recursive:
            if verbose:
                print('start looking into recursive elements')
            if self.loggingEnabled:
                self.logger.debug(f"recursive option checked")
            for proj in teeProjects[1]:
                for rec in listRecursion:
                    for element in proj.get('segments', []):
                        if re.search(f"{rec}", element):
                            recurseObj[rec].append({proj['name']: proj['id']})
                    for element in proj.get('calculatedMetrics', []):
                        if re.search(f"{rec}", element):
                            recurseObj[rec].append({proj['name']: proj['id']})
            returnObj['recursion'] = recurseObj
        if verbose:
            print('done')
        return returnObj

    def getUsageLogs(self,
                     startDate: str = None,
                     endDate: str = None,
                     eventType: str = None,
                     event: str = None,
                     rsid: str = None,
                     login: str = None,
                     ip: str = None,
                     limit: int = 100,
                     max_result: int = None,
                     format: str = "df",
                     verbose: bool = False,
                     **kwargs) -> dict:
        """
        Returns the Audit Usage Logs from your company analytics setup.
        Arguments:
            startDate : REQUIRED : Start date, format : 2020-12-01T00:00:00-07. (default 60 days prior today)
            endDate : REQUIRED : End date, format : 2020-12-15T14:32:33-07. (default today)
                Should be a maximum of a 3 month period between startDate and endDate.
            eventType : OPTIONAL : The numeric id for the event type you want to filter logs by.
                Please reference the lookup table in the LOGS_EVENT_TYPE
            event : OPTIONAL : The event description you want to filter logs by.
                No wildcards are permitted, but this filter is case insensitive and supports partial matches.
            rsid : OPTIONAL : ReportSuite ID to filter on.
            login : OPTIONAL : The login value of the user you want to filter logs by. This filter functions as an exact match.
            ip : OPTIONAL : The IP address you want to filter logs by. This filter supports a partial match.
            limit : OPTIONAL : Number of results per page.
            max_result : OPTIONAL : Maximum number of results, if you want to cap the process.
                Ex : max_result=1000
            format : OPTIONAL : If you wish to have a DataFrame ("df" - default) or a list ("raw") as output.
            verbose : OPTIONAL : Set it to True if you want to have console info.
        possible kwargs:
            page : page number (default 0)
        """
        if self.loggingEnabled:
            self.logger.debug(f"starting getUsageLogs")
        import datetime
        now = datetime.datetime.now()
        if startDate is None:
            startDate = datetime.datetime.isoformat(now - datetime.timedelta(days=60)).split('.')[0]
        if endDate is None:
            endDate = datetime.datetime.isoformat(now).split('.')[0]
        path = "/auditlogs/usage"
        params = {"page": kwargs.get('page', 0), "limit": limit, "startDate": startDate, "endDate": endDate}
        if eventType is not None:
            params['eventType'] = eventType
        if event is not None:
            params['event'] = event
        if rsid is not None:
            params['rsid'] = rsid
        if login is not None:
            params['login'] = login
        if ip is not None:
            params['ip'] = ip
        if self.loggingEnabled:
            self.logger.debug(f"params: {params}")
        res = self.connector.getData(self.endpoint_company + path, params=params, verbose=verbose)
        data = res['content']
        lastPage = res['lastPage']
        while not lastPage:
            params["page"] += 1
            res = self.connector.getData(self.endpoint_company + path, params=params, verbose=verbose)
            data += res['content']
            lastPage = res['lastPage']
            if max_result is not None:
                if len(data) >= max_result:
                    lastPage = True
        if format == "df":
            df = pd.DataFrame(data)
            return df
        return data

    def getTopItems(self, rsid: str = None, dimension: str = None, dateRange: str = None, searchClause: str = None,
                    lookupNoneValues: bool = True, limit: int = 10, verbose: bool = False, **kwargs) -> object:
        """
        Returns the top items of a request.
        Arguments:
            rsid : REQUIRED : ReportSuite ID of the data
            dimension : REQUIRED : The dimension to retrieve
            dateRange : OPTIONAL : Format YYYY-MM-DD/YYYY-MM-DD (default 90 days)
            searchClause : OPTIONAL : General search string; wrap with single quotes. Example: 'PageABC'
            lookupNoneValues : OPTIONAL : None values to be included (default True)
            limit : OPTIONAL : Number of items to be returned per page.
            verbose : OPTIONAL : If you want to have comments displayed (default False)
        possible kwargs:
            page : page to look for
            startDate : start date with format YYYY-MM-DD
            endDate : end date with format YYYY-MM-DD
            searchAnd, searchOr, searchNot, searchPhrase : Search elements to be included (or not), partial match or not.
        """
        if self.loggingEnabled:
            self.logger.debug(f"starting getTopItems")
        path = "/reports/topItems"
        page = kwargs.get("page", 0)
        if rsid is None:
            raise ValueError("Require a reportSuite ID")
        if dimension is None:
            raise ValueError("Require a dimension")
        params = {"rsid": rsid, "dimension": dimension, "lookupNoneValues": lookupNoneValues, "limit": limit, "page": page}
        if searchClause is not None:
            params["search-clause"] = searchClause
        if dateRange is not None and '/' in dateRange:
            params["dateRange"] = dateRange
        if kwargs.get('page', None) is not None:
            params["page"] = kwargs.get('page')
        if kwargs.get("startDate", None) is not None:
            params["startDate"] = kwargs.get("startDate")
        if kwargs.get("endDate", None) is not None:
            params["endDate"] = kwargs.get("endDate")
        if kwargs.get("searchAnd", None) is not None:
            params["searchAnd"] = kwargs.get("searchAnd")
        if kwargs.get("searchOr", None) is not None:
            params["searchOr"] = kwargs.get("searchOr")
        if kwargs.get("searchNot", None) is not None:
            params["searchNot"] = kwargs.get("searchNot")
        if kwargs.get("searchPhrase", None) is not None:
            params["searchPhrase"] = kwargs.get("searchPhrase")
        last_page = False
        if verbose:
            print('Starting to fetch the data...')
        data = []
        while not last_page:
            if verbose:
                print(f'request page : {page}')
            res = self.connector.getData(self.endpoint_company + path, params=params)
            last_page = res.get("lastPage", True)
            data += res["rows"]
            page += 1
            params["page"] = page
        df = pd.DataFrame(data)
        return df

    def getAnnotations(self, full: bool = True, includeType: str = 'all', limit: int = 1000, page: int = 0) -> list:
        """
        Returns a list of the available annotations
        Arguments:
            full : OPTIONAL : If set to True (default), returns all available information of the annotation.
            includeType : OPTIONAL : use to return only "shared" or "all" (default) annotations available.
            limit : OPTIONAL : number of results per page (default 1000)
            page : OPTIONAL : page used for pagination
        """
        params = {"includeType": includeType, "page": page}
        if full:
            params['expansion'] = "name,description,dateRange,color,applyToAllReports,scope,createdDate,modifiedDate,modifiedById,tags,shares,approved,favorite,owner,usageSummary,companyId,reportSuiteName,rsid"
        path = f"/annotations"
        lastPage = False
        data = []
        while not lastPage:
            res = self.connector.getData(self.endpoint_company + path, params=params)
            data += res.get('content', [])
            lastPage = res.get('lastPage', True)
            params['page'] += 1
        return data

    def getAnnotation(self, annotationId: str = None) -> dict:
        """
        Return a specific annotation definition.
        Arguments:
            annotationId : REQUIRED : The annotation ID
        """
        if annotationId is None:
            raise ValueError("Require an annotation ID")
        path = f"/annotations/{annotationId}"
        params = {
            "expansion": "name,description,dateRange,color,applyToAllReports,scope,createdDate,modifiedDate,modifiedById,tags,shares,approved,favorite,owner,usageSummary,companyId,reportSuiteName,rsid"
        }
        res = self.connector.getData(self.endpoint_company + path, params=params)
        return res

    def deleteAnnotation(self, annotationId: str = None) -> dict:
        """
        Delete a specific annotation definition.
        Arguments:
            annotationId : REQUIRED : The annotation ID to be deleted
        """
        if annotationId is None:
            raise ValueError("Require an annotation ID")
        path = f"/annotations/{annotationId}"
        res = self.connector.deleteData(self.endpoint_company + path)
        return res

    def createAnnotation(self,
                         name: str = None,
                         dateRange: str = None,
                         rsid: str = None,
                         metricIds: list = None,
                         dimensionObj: list = None,
                         description: str = None,
                         filterIds: list = None,
                         applyToAllReports: bool = False,
                         **kwargs) -> dict:
        """
        Create an Annotation.
        Arguments:
            name : REQUIRED : Name of the annotation
            dateRange : REQUIRED : Date range of the annotation to be used.
                Example: 2022-04-19T00:00:00/2022-04-19T23:59:59
            rsid : REQUIRED : ReportSuite ID
            metricIds : OPTIONAL : List of metric IDs to be annotated
            filterIds : OPTIONAL : List of Segment IDs to apply for annotation, for context.
            dimensionObj : OPTIONAL : List of dimension object specifications:
                {
                    componentType: "dimension"
                    dimensionType: "string"
                    id: "variables/product"
                    operator: "streq"
                    terms: ["unknown"]
                }
            applyToAllReports : OPTIONAL : If the annotation applies to all ReportSuites.
        possible kwargs:
            colors : Color to be used, examples: "STANDARD1"
            shares : List of userIds for sharing the annotation
            tags : List of tagIds to be applied
            favorite : boolean to set the annotation as favorite (False by default)
            approved : boolean to set the annotation as approved (False by default)
        """
        path = f"/annotations"
        if name is None:
            raise ValueError("A name must be specified")
        if dateRange is None:
            raise ValueError("A dateRange must be specified")
        if rsid is None:
            raise ValueError("a master ReportSuite ID must be specified")
        description = description or "api generated"
        data = {
            "name": name,
            "description": description,
            "dateRange": dateRange,
            "color": kwargs.get('colors', "STANDARD1"),
            "applyToAllReports": applyToAllReports,
            "scope": {
                "metrics": [],
                "filters": []
            },
            "tags": [],
            "approved": kwargs.get('approved', False),
            "favorite": kwargs.get('favorite', False),
            "rsid": rsid
        }
        if metricIds is not None and type(metricIds) == list:
            for metric in metricIds:
                data['scope']['metrics'].append({
                    "id": metric,
                    "componentType": "metric"
                })
        if filterIds is not None and type(filterIds) == list:
            for filter in filterIds:
                data['scope']['filters'].append({
                    "id": filter,
                    "componentType": "segment"
                })
        if dimensionObj is not None and type(dimensionObj) == list:
            for obj in dimensionObj:
                data['scope']['filters'].append(obj)
        if kwargs.get("shares", None) is not None:
            data['shares'] = []
            for user in kwargs.get("shares", []):
                data['shares'].append({
                    "shareToId": user,
                    "shareToType": "user"
                })
        if kwargs.get('tags', None) is not None:
            for tag in kwargs.get('tags'):
                res = self.getTag(tag)
                data['tags'].append({
                    "id": tag,
                    "name": res['name']
                })
        res = self.connector.postData(self.endpoint_company + path, data=data)
        return res

    def updateAnnotation(self, annotationId: str = None, annotationObj: dict = None) -> dict:
        """
        Update an annotation based on its ID. PUT method.
        Arguments:
            annotationId : REQUIRED : The annotation ID to be updated
            annotationObj : REQUIRED : The object to replace the annotation.
        """
        if annotationObj is None or type(annotationObj) != dict:
            raise ValueError('Require a dictionary representing the annotation definition')
        if annotationId is None:
            raise ValueError('Require the annotation ID')
        path = f"/annotations/{annotationId}"
        res = self.connector.putData(self.endpoint_company + path, data=annotationObj)
        return res

    # def getDataWarehouseReports(self, reportSuite: str = None, reportName: str = None, deliveryUUID: str = None, status: str = None,
    #                             ScheduledRequestUUID: str = None, limit: int = 1000) -> dict:
    #     """
    #     Get all DW reports that matched filter parameters.
    #     Arguments:
    #         reportSuite : OPTIONAL : The name of the reportSuite
    #         reportName : OPTIONAL : The name of the report
    #         deliveryUUID : OPTIONAL : the UUID generated for that report
    #         status : OPTIONAL : Status of the report generation, can be any of [COMPLETED, CANCELED, ERROR_DELIVERY, ERROR_PROCESSING, CREATED, PROCESSING, PENDING]
    #         ScheduledRequestUUID : OPTIONAL : The scheduled report UUID generated by this report
    #         limit : OPTIONAL : Maximum amount of data returned
    #     """
    #     path = '/data_warehouse/report'
    #     params = {"limit": limit}
    #     if reportSuite is not None:
    #         params['ReportSuite'] = reportSuite
    #     if reportName is not None:
    #         params['ReportName'] = reportName
    #     if deliveryUUID is not None:
    #         params['DeliveryProfileUUID'] = deliveryUUID
    #     if status is not None and status in ["COMPLETED", "CANCELED", "ERROR_DELIVERY", "ERROR_PROCESSING", "CREATED", "PROCESSING", "PENDING"]:
    #         params["Status"] = status
    #     if ScheduledRequestUUID is not None:
    #         params['ScheduledRequestUUID'] = ScheduledRequestUUID
    #     res = self.connector.getData('https://analytics.adobe.io/api' + path, params=params)
    #     return res

    # def getDataWarehouseReport(self, reportUUID: str = None) -> dict:
    #     """
    #     Return a single report information out of the report UUID.
    #     Arguments:
    #         reportUUID : REQUIRED : the report UUID
    #     """
    #     if reportUUID is None:
    #         raise ValueError("Require a report UUID")
    #     path = f'/data_warehouse/report/{reportUUID}'
    #     res = self.connector.getData('https://analytics.adobe.io/api' + path)
    #     return res

    # def getDataWarehouseRequests(self, reportSuite: str = None, reportName: str = None, status: str = None, limit: int = 1000) -> dict:
    #     """
    #     Get all DW requests that matched filter parameters.
    #     Arguments:
    #         reportSuite : OPTIONAL : The name of the reportSuite
    #         reportName : OPTIONAL : The name of the report
    #         status : OPTIONAL : Status of the report generation, can be any of [COMPLETED, CANCELED, ERROR_DELIVERY, ERROR_PROCESSING, CREATED, PROCESSING, PENDING]
    #         scheduledRequestUUID : OPTIONAL : The scheduled report UUID generated by this report
    #         limit : OPTIONAL : Maximum amount of data returned
    #     """
    #     path = '/data_warehouse/scheduled'
    #     params = {"limit": limit}
    #     if reportSuite is not None:
    #         params['ReportSuite'] = reportSuite
    #     if reportName is not None:
    #         params['ReportName'] = reportName
    #     if status is not None and status in ["COMPLETED", "CANCELED", "ERROR_DELIVERY", "ERROR_PROCESSING", "CREATED", "PROCESSING", "PENDING"]:
    #         params["Status"] = status
    #     res = self.connector.getData('https://analytics.adobe.io/api' + path, params=params)
    #     return res

    # def getDataWarehouseRequest(self, scheduleUUID: str = None) -> dict:
    #     """
    #     Return a single request information out of the schedule UUID.
    #     Arguments:
    #         scheduleUUID : REQUIRED : the schedule UUID
    #     """
    #     if scheduleUUID is None:
    #         raise ValueError("Require a report UUID")
    #     path = f'/data_warehouse/scheduled/{scheduleUUID}'
    #     res = self.connector.getData('https://analytics.adobe.io' + path)
    #     return res

    # def createDataWarehouseRequest(self,
    #                                requestDict: dict = None,
    #                                reportName: str = None,
    #                                login: str = None,
    #                                emails: list = None,
    #                                emailNote: str = None,
    #                                ) -> dict:
    #     """
    #     Create a Data Warehouse request based on either the dictionary provided or the parameters filled.
    #     Arguments:
    #         requestDict : OPTIONAL : The complete dictionary definition for a datawarehouse export.
    #             If not provided, require the other parameters to be used.
    #         reportName : OPTIONAL : The name of the report
    #         login : OPTIONAL : The login Id of the user
    #         emails : OPTIONAL : List of emails for notification.
    #             example : ['[email protected]']
    #         dimensions : OPTIONAL : List of dimensions to use, example : ['prop1']
    #         metrics : OPTIONAL : List of metrics to use, example : ['event1','event2']
    #         segments : OPTIONAL : List of segments to use, example : ['seg1','seg2']
    #         dateGranularity : OPTIONAL :
    #         reportPeriod : OPTIONAL :
    #         emailNote : OPTIONAL : Note for the email
    #     """
    #     f'/data_warehouse/scheduled/'

    # def getDataWarehouseDeliveryAccounts(self) -> dict:
    #     """
    #     Get all delivery Accounts used by a company.
    #     """
    #     path = f'/data_warehouse/delivery/account'
    #     res = self.connector.getData('https://analytics.adobe.io' + path)
    #     return res

    # def getDataWarehouseDeliveryProfile(self) -> dict:
    #     """
    #     Get all Delivery Profiles for a given global company id
    #     """
    #     path = f'/data_warehouse/delivery/profile'
    #     res = self.connector.getData('https://analytics.adobe.io' + path)
    #     return res

    def compareReportSuites(self, listRsids: list = None, element: str = 'dimensions', comparison: str = "full", save: bool = False) -> pd.DataFrame:
        """
        Compare reportSuites on dimensions (default) or metrics, based on the comparison selected.
        Returns a dataframe with a multi-index and a column telling which elements are different.
        Arguments:
            listRsids : REQUIRED : list of report suite IDs to compare
            element : REQUIRED : Elements to compare. 2 possible choices:
                dimensions (default)
                metrics
            comparison : REQUIRED : Type of comparison to do:
                full (default) : compare name and settings
                name : compare only names
            save : OPTIONAL : if you want to save in a csv.
""" if self.loggingEnabled: self.logger.debug(f"starting compareReportSuites") if listRsids is None or type(listRsids) != list: raise ValueError("Require a list of rsids") if element=="dimensions": if self.loggingEnabled: self.logger.debug(f"dimensions selected") listDFs = [self.getDimensions(rsid,full=True) for rsid in listRsids] elif element == "metrics": listDFs = [self.getMetrics(rsid,full=True) for rsid in listRsids] if self.loggingEnabled: self.logger.debug(f"metrics selected") for df,rsid in zip(listDFs, listRsids): df['rsid']=rsid df.set_index('id',inplace=True) df.set_index('rsid',append=True,inplace=True) df = pd.concat(listDFs) df = df.unstack() if comparison=='name': df_name = df['name'].copy() ## transforming to a new df with boolean value comparison to col 0 temp_df = df_name.eq(df_name.iloc[:, 0], axis=0) ## now doing a complete comparison of all boolean with all df_name['different'] = ~temp_df.eq(temp_df.iloc[:,0],axis=0).all(1) if save: df_name.to_csv(f'comparison_name_{int(time.time())}.csv') if self.loggingEnabled: self.logger.debug(f'Name only comparison, file : comparison_name_{int(time.time())}.csv') return df_name ## retrieve main indexes from multi level indexes mainIndex = set([val[0] for val in list(df.columns)]) dict_temp = {} for index in mainIndex: temp_df = df[index].copy() temp_df.fillna('',inplace=True) ## transforming to a new df with boolean value comparison to col 0 temp_df.eq(temp_df.iloc[:, 0], axis=0) ## now doing a complete comparison of all boolean with all dict_temp[index] = list(temp_df.eq(temp_df.iloc[:,0],axis=0).all(1)) df_bool = pd.DataFrame(dict_temp) df['different'] = list(~df_bool.eq(df_bool.iloc[:,0],axis=0).all(1)) if save: df.to_csv(f'comparison_full_{element}_{int(time.time())}.csv') if self.loggingEnabled: self.logger.debug(f'Full comparison, file : comparison_full_{element}_{int(time.time())}.csv') return df def shareComponent(self, componentId: str = None, componentType: str = None, shareToId: int = None, 
shareToImsId: int = None, shareToType: str = None, shareToLogin: str = None, accessLevel: str = None, shareFromImsId: str = None) -> dict: """ Shares a component with an individual or a group (product profile ID) a dictionary on the calculated metrics requested. Returns the JSON response from the API. Arguments: componentId : REQUIRED : The component ID to share. componentType : REQUIRED : The component Type ("calculatedMetric", "segment", "project", "dateRange") shareToId: ID of the user or the group to share to shareToImsId: IMS ID of the user to share to (alternative to ID) shareToLogin: Login of the user to share to (alternative to ID) shareToType: "group" => share to a group (product profile), "user" => share to a user, "all" => share to all users (in this case, no shareToId or shareToImsId is needed) """ if self.loggingEnabled: self.logger.debug(f"Starting to share component ID {componentId} with parameters: {locals()}") path = f"/componentmetadata/shares/" data = { "accessLevel": accessLevel, "componentId": componentId, "componentType": componentType, "shareToId": shareToId, "shareToImsId": shareToImsId, "shareToLogin": shareToLogin, "shareToType": shareToType } res = self.connector.postData(self.endpoint_company + path, data=data) return res def _dataDescriptor(self, json_request: dict): """ read the request and returns an object with information about the request. It will be used in order to build the dataclass and the dataframe. 
""" if self.loggingEnabled: self.logger.debug(f"starting _dataDescriptor") obj = {} if json_request.get('dimension',None) is not None: obj['dimension'] = json_request.get('dimension') obj['filters'] = {'globalFilters': [], 'metricsFilters': {}} obj['rsid'] = json_request['rsid'] metrics_info = json_request['metricContainer'] obj['metrics'] = [metric['id'] for metric in metrics_info['metrics']] if 'metricFilters' in metrics_info.keys(): metricsFilter = {metric['id']: metric['filters'] for metric in metrics_info['metrics'] if len(metric.get('filters', [])) > 0} filters = [] for metric in metricsFilter: for item in metricsFilter[metric]: if 'segmentId' in metrics_info['metricFilters'][int(item)].keys(): filters.append( metrics_info['metricFilters'][int(item)]['segmentId']) if 'dimension' in metrics_info['metricFilters'][int(item)].keys(): filters.append( metrics_info['metricFilters'][int(item)]['dimension']) obj['filters']['metricsFilters'][metric] = set(filters) for fil in json_request['globalFilters']: if 'dateRange' in fil.keys(): obj['filters']['globalFilters'].append(fil['dateRange']) if 'dimension' in fil.keys(): obj['filters']['globalFilters'].append(fil['dimension']) if 'segmentId' in fil.keys(): obj['filters']['globalFilters'].append(fil['segmentId']) return obj def _readData( self, data_rows: list, anomaly: bool = False, cols: list = None, item_id: bool = False ) -> pd.DataFrame: """ read the data from the requests and returns a dataframe. Parameters: data_rows : REQUIRED : Rows that have been returned by the request. anomaly : OPTIONAL : Boolean to tell if the anomaly detection has been used. 
            cols : OPTIONAL : list of column names
        """
        if self.loggingEnabled:
            self.logger.debug(f"starting _readData")
        if cols is None:
            raise ValueError("list of columns must be specified")
        data_rows = deepcopy(data_rows)
        dict_data = {row.get('value', 'missing_value'): row['data'] for row in data_rows}
        if cols is not None:
            n_metrics = len(cols) - 1
        if item_id:
            # adding the itemId in the data returned
            cols.append('item_id')
            for row in data_rows:
                dict_data[row.get('value', 'missing_value')].append(row['itemId'])
        if anomaly:
            # set full columns
            cols = cols + [f'{metric}-{suffix}' for metric in cols[1:] for suffix in ['expected', 'UpperBound', 'LowerBound']]
            # add data to the dictionary
            for row in data_rows:
                for item in range(n_metrics):
                    dict_data[row['value']].append(
                        row.get('dataExpected', [0 for i in range(n_metrics)])[item])
                    dict_data[row['value']].append(
                        row.get('dataUpperBound', [0 for i in range(n_metrics)])[item])
                    dict_data[row['value']].append(
                        row.get('dataLowerBound', [0 for i in range(n_metrics)])[item])
        df = pd.DataFrame(dict_data).T  # require to transform the data
        df.reset_index(inplace=True)
        df.columns = cols
        return df

    def getReport(
        self,
        json_request: Union[dict, str, IO, RequestCreator],
        limit: int = 1000,
        n_results: Union[int, str] = 1000,
        save: bool = False,
        item_id: bool = False,
        unsafe: bool = False,
        verbose: bool = False,
        debug=False,
        **kwargs,
    ) -> object:
        """
        Retrieve data from a JSON request. Returns an object containing meta info and a dataframe.
        Arguments:
            json_request : REQUIRED : JSON statement that contains your request for Analytics API 2.0.
                The argument can be:
                - a dictionary : It will be used as it is.
                - a string that is a dictionary : It will be transformed to a dictionary / JSON.
                - a path to a JSON file that contains the statement (must end with ".json").
                - an instance of the RequestCreator class
            limit : OPTIONAL : number of results per request (default 1000)
            n_results : OPTIONAL : Number of results that you would like to retrieve. (default 1000)
                If you want to have all possible data, use "inf".
            item_id : OPTIONAL : Boolean to define if you want to return the item id for sub requests (default False)
            unsafe : OPTIONAL : If set to True, it will not check the "lastPage" parameter and assume the first request is complete.
                This may break the script or return incomplete data. (default False)
            save : OPTIONAL : If you would like to save the data within a CSV file. (default False)
            verbose : OPTIONAL : If you want to have comments displayed (default False)
        """
        if unsafe and verbose:
            print('---- running the getReport in "unsafe" mode ----')
        obj = {}
        if isinstance(json_request, RequestCreator):
            request = json_request.to_dict()
        elif type(json_request) == dict:
            request = json_request
        elif type(json_request) == str and '.json' not in json_request:
            try:
                request = json.loads(json_request)
            except:
                raise TypeError("expected a parsable string")
        elif '.json' in json_request:
            try:
                with open(Path(json_request), 'r') as file:
                    file_string = file.read()
                request = json.loads(file_string)
            except:
                raise TypeError("expected a parsable string")
        request['settings']['limit'] = limit
        # info for creating report
        data_info = self._dataDescriptor(request)
        if verbose:
            print('Request decrypted')
        obj.update(data_info)
        anomaly = request['settings'].get('includeAnomalyDetection', False)
        columns = [data_info['dimension']] + data_info['metrics']
        # preparing for the loop
        # in case "inf" has been used, turn it to a number
        n_results = kwargs.get('n_result', n_results)
        n_results = float(n_results)
        if n_results != float('inf') and n_results < request['settings']['limit']:
            # making sure we don't call more than set in wrapper
            request['settings']['limit'] = n_results
        data_list = []
        last_page = False
        page_nb, count_elements, total_elements = 0, 0, 0
        if verbose:
            print('Starting to fetch the data...')
        while not last_page:
            timestamp = round(time.time())
            request['settings']['page'] = page_nb
            report = self.connector.postData(self.endpoint_company + self._getReport, data=request, headers=self.header)
            if verbose:
                print('Data received.')
            # Recursion to take care of throttling limit
            while report.get('status_code', 200) == 429 or report.get('error_code', None) == "429050":
                if verbose:
                    print('reaching the limit : pause for 50 s and entering recursion.')
                if debug:
                    with open(f'limit_reach_{timestamp}.json', 'w') as f:
                        f.write(json.dumps(report, indent=4))
                time.sleep(50)
                report = self.connector.postData(self.endpoint_company + self._getReport, data=request, headers=self.header)
            if 'lastPage' not in report and unsafe == False:
                # checking error when no lastPage key in report
                if verbose:
                    print(json.dumps(report, indent=2))
                print('Warning : Server Error')
                print(json.dumps(report))
                if debug:
                    with open(f'server_failure_request_{timestamp}.json', 'w') as f:
                        f.write(json.dumps(request, indent=4))
                    with open(f'server_failure_response_{timestamp}.json', 'w') as f:
                        f.write(json.dumps(report, indent=4))
                    print(f'Warning : Save JSON request : server_failure_request_{timestamp}.json')
                    print(f'Warning : Save JSON response : server_failure_response_{timestamp}.json')
                obj['data'] = pd.DataFrame()
                return obj
            # fallback when no lastPage in report
            last_page = report.get('lastPage', True)
            if verbose:
                print(f'last page status : {last_page}')
            if 'errorCode' in report.keys():
                print('Error with your statement \n' + report['errorDescription'])
                return {report['errorCode']: report['errorDescription']}
            count_elements += report.get('numberOfElements', 0)
            total_elements = report.get('totalElements', request['settings']['limit'])
            if total_elements == 0:
                obj['data'] = pd.DataFrame()
                print('Warning : No data returned & lastPage is False.\nExit the loop - no save file & empty dataframe.')
                if debug:
                    with open(f'report_no_element_{timestamp}.json', 'w') as f:
                        f.write(json.dumps(report, indent=4))
                if verbose:
                    print(f'% of total elements retrieved. TotalElements: {report.get("totalElements", "no data")}')
                # in case loop happening with empty data, returns empty data
                return obj
            if verbose and total_elements != 0:
                print(f'% of total elements retrieved: {round((count_elements / total_elements) * 100, 2)} %')
            if last_page == False and n_results != float('inf'):
                if count_elements >= n_results:
                    last_page = True
            data = report['rows']
            data_list += deepcopy(data)  # do a deepcopy
            page_nb += 1
            if verbose:
                print(f'# of requests : {page_nb}')
        # return report
        df = self._readData(data_list, anomaly=anomaly, cols=columns, item_id=item_id)
        if save:
            timestampReport = round(time.time())
            df.to_csv(f'report-{timestampReport}.csv', index=False)
            if verbose:
                print(f'Saving data in file : {os.getcwd()}{os.sep}report-{timestampReport}.csv')
        obj['data'] = df
        if verbose:
            print(f'Report contains {(count_elements / total_elements) * 100} % of the available dimensions')
        return obj

    def _prepareData(
        self,
        dataRows: list = None,
        reportType: str = "normal",
    ) -> dict:
        """
        Read the data returned by getReport and return a dictionary used by the Workspace class.
        Arguments:
            dataRows : REQUIRED : data rows returned by the API getReport
            reportType : REQUIRED : "normal" or "static"
        """
        if dataRows is None:
            raise ValueError("Require dataRows")
        data_rows = deepcopy(dataRows)
        expanded_rows = {}
        if reportType == "normal":
            for row in data_rows:
                expanded_rows[row["itemId"]] = [row["value"]]
                expanded_rows[row["itemId"]] += row["data"]
        elif reportType == "static":
            expanded_rows = data_rows
        return expanded_rows

    def _decrypteStaticData(
        self, dataRequest: dict = None, response: dict = None, resolveColumns: bool = False
    ) -> dict:
        """
        From the request dictionary and the response, decrypt the data to standardise the reading.
        """
        dataRows = []
        ## retrieve StaticRow ID and segmentID
        if len([metric for metric in dataRequest['metricContainer'].get('metricFilters', []) if metric.get('id', '').startswith("STATIC_ROW_COMPONENT")]) > 0:
            if "dateRange" in list(dataRequest['metricContainer'].get('metricFilters', [])[0].keys()):
                tableSegmentsRows = {
                    obj["id"]: obj["dateRange"]
                    for obj in dataRequest["metricContainer"]["metricFilters"]
                    if obj["id"].startswith("STATIC_ROW_COMPONENT")
                }
            elif "segmentId" in list(dataRequest['metricContainer'].get('metricFilters', [])[0].keys()):
                tableSegmentsRows = {
                    obj["id"]: obj["segmentId"]
                    for obj in dataRequest["metricContainer"]["metricFilters"]
                    if obj["id"].startswith("STATIC_ROW_COMPONENT")
                }
        else:
            tableSegmentsRows = {
                obj["id"]: obj["segmentId"]
                for obj in dataRequest["metricContainer"]["metricFilters"]
            }
        ## retrieve place and segmentID
        segmentApplied = {}
        for obj in dataRequest["metricContainer"]["metricFilters"]:
            if obj["id"].startswith("STATIC_ROW") == False:
                if obj["type"] == "breakdown":
                    segmentApplied[obj["id"]] = f"{obj['dimension']}:::{obj['itemId']}"
                elif obj["type"] == "segment":
                    segmentApplied[obj["id"]] = obj["segmentId"]
                elif obj["type"] == "dateRange":
                    segmentApplied[obj["id"]] = obj["dateRange"]
        ### table columnIds and StaticRow IDs
        tableColumnIds = {
            obj["columnId"]: obj["filters"][0]
            for obj in dataRequest["metricContainer"]["metrics"]
        }
        ### create relations for metrics with Filter on top
        filterRelations = {
            obj["filters"][0]: obj["filters"][1:]
            for obj in dataRequest["metricContainer"]["metrics"]
            if len(obj["filters"]) > 1
        }
        staticRows = set(val for val in tableSegmentsRows.values())
        nb_rows = len(staticRows)  ## define how many segments are used as rows
        nb_columns = int(len(dataRequest["metricContainer"]["metrics"]) / nb_rows)  ## used to detect rows
        staticRows = set(val for val in tableSegmentsRows.values())
        staticRowsNames = []
        for row in staticRows:
            if row.startswith("s") and "@AdobeOrg" in row:
                filter = self.getSegment(row)  # fixed: original called the non-existent self.Segment
                staticRowsNames.append(filter["name"])
            else:
                staticRowsNames.append(row)
        if resolveColumns:
            staticRowDict = {
                row: self.getSegment(rowName).get('name', rowName)
                for row, rowName in zip(staticRows, staticRowsNames)
            }
        else:
            staticRowDict = {
                row: rowName for row, rowName in zip(staticRows, staticRowsNames)
            }
        ### metrics
        dataRows = defaultdict(list)
        for row in staticRowDict:  ## iterate on the different static rows
            for column, data in zip(
                response["columns"]["columnIds"], response["summaryData"]["totals"]
            ):
                if tableSegmentsRows[tableColumnIds[column]] == row:
                    ## check translation of metricId with Static Row ID
                    if row not in dataRows[staticRowDict[row]]:
                        dataRows[staticRowDict[row]].append(row)
                    dataRows[staticRowDict[row]].append(data)
        ## should end like : {'segmentName' : ['STATIC', 123, 456]}
        return nb_columns, tableColumnIds, segmentApplied, filterRelations, dataRows

    def getReport2(
        self,
        request: Union[dict, IO, RequestCreator] = None,
        limit: int = 20000,
        n_results: Union[int, str] = "inf",
        allowRemoteLoad: str = "default",
        useCache: bool = True,
        useResultsCache: bool = False,
        includeOberonXml: bool = False,
        includePredictiveObjects: bool = False,
        returnsNone: bool = None,
        countRepeatInstances: bool = None,
        ignoreZeroes: bool = None,
        rsid: str = None,
        resolveColumns: bool = True,
        save: bool = False,
        returnClass: bool = True,
    ) -> Union[Workspace, dict]:
        """
        Return an instance of Workspace that contains the data requested.
        Arguments:
            request : REQUIRED : either a dictionary or a JSON file that contains the request information.
            limit : OPTIONAL : number of results per request (default 20000)
            n_results : OPTIONAL : total number of results returned. Use "inf" to return everything (default "inf")
            allowRemoteLoad : OPTIONAL : Controls if Oberon should remote load data. Default behavior is true, with fallback to false if remote data does not exist
            useCache : OPTIONAL : Use caching for faster requests (Do not do any report caching)
            useResultsCache : OPTIONAL : Use results caching for faster reporting times (This is a pass-through to Oberon, which manages the cache)
            includeOberonXml : OPTIONAL : Controls if Oberon XML should be returned in the response - DEBUG ONLY
            includePredictiveObjects : OPTIONAL : Controls if platform Predictive Objects should be returned in the response. Only available when using Anomaly Detection or Forecasting - DEBUG ONLY
            returnsNone : OPTIONAL : Overwrite the request setting to return None values.
            countRepeatInstances : OPTIONAL : Overwrite the request setting to count repeatInstances values.
            ignoreZeroes : OPTIONAL : Ignore zeros in the results
            rsid : OPTIONAL : Overwrite the ReportSuite ID used for the report. Only works if the same components are present.
            resolveColumns : OPTIONAL : automatically resolve columns from ID to name for calculated metrics & segments. Default True. (works on returnClass only)
            save : OPTIONAL : If you want to save the data (in JSON or CSV, depending on whether the class is used or not)
            returnClass : OPTIONAL : return the class building a dataframe and better comprehension of data.
                (default yes)
        """
        if self.loggingEnabled:
            self.logger.debug(f"Start getReport")
        path = "/reports"
        params = {
            "allowRemoteLoad": allowRemoteLoad,
            "useCache": useCache,
            "useResultsCache": useResultsCache,
            "includeOberonXml": includeOberonXml,
            "includePlatformPredictiveObjects": includePredictiveObjects,
        }
        if type(request) == dict:
            dataRequest = request
        elif isinstance(request, RequestCreator):
            dataRequest = request.to_dict()
        elif ".json" in request:
            with open(request, "r") as f:
                dataRequest = json.load(f)
        else:
            raise ValueError("Require a JSON or Dictionary to request data")
        ### Settings
        dataRequest = deepcopy(dataRequest)
        dataRequest["settings"]["page"] = 0
        dataRequest["settings"]["limit"] = limit
        if returnsNone:
            dataRequest["settings"]["nonesBehavior"] = "return-nones"
        elif dataRequest['settings'].get('nonesBehavior', False) != False:
            pass  ## keeping current settings
        else:
            dataRequest["settings"]["nonesBehavior"] = "exclude-nones"
        if countRepeatInstances:
            dataRequest["settings"]["countRepeatInstances"] = True
        elif dataRequest["settings"].get("countRepeatInstances", False) != False:
            pass  ## keeping current settings
        else:
            dataRequest["settings"]["countRepeatInstances"] = False
        if rsid is not None:
            dataRequest["rsid"] = rsid
        if ignoreZeroes:
            dataRequest.get("statistics", {'ignoreZeroes': True})["ignoreZeroes"] = True
        deepCopyRequest = deepcopy(dataRequest)
        ### Request data
        if self.loggingEnabled:
            self.logger.debug(f"getReport request: {json.dumps(dataRequest, indent=4)}")
        res = self.connector.postData(
            self.endpoint_company + path, data=dataRequest, params=params
        )
        if "rows" in res.keys():
            reportType = "normal"
            if self.loggingEnabled:
                self.logger.debug(f"reportType: {reportType}")
            dataRows = res.get("rows")
            columns = res.get("columns")
            summaryData = res.get("summaryData")
            totalElements = res.get("numberOfElements")
            lastPage = res.get("lastPage", True)
            if float(len(dataRows)) >= float(n_results):
                ## force end of loop when a limit is set on n_results
                lastPage = True
            while lastPage != True:
                dataRequest["settings"]["page"] += 1
                res = self.connector.postData(
                    self.endpoint_company + path, data=dataRequest, params=params
                )
                dataRows += res.get("rows")
                lastPage = res.get("lastPage", True)
                totalElements += res.get("numberOfElements")
                if float(len(dataRows)) >= float(n_results):
                    ## force end of loop when a limit is set on n_results
                    lastPage = True
            if self.loggingEnabled:
                self.logger.debug(f"loop for report over: {len(dataRows)} results")
            if returnClass == False:
                return dataRows
            ### create relation between metrics and filters applied
            columnIdRelations = {
                obj["columnId"]: obj["id"]
                for obj in dataRequest["metricContainer"]["metrics"]
            }
            filterRelations = {
                obj["columnId"]: obj["filters"]
                for obj in dataRequest["metricContainer"]["metrics"]
                if len(obj.get("filters", [])) > 0
            }
            metricFilters = {}
            metricFilterTranslation = {}
            for filter in dataRequest["metricContainer"].get("metricFilters", []):
                filterId = filter["id"]
                if filter["type"] == "breakdown":
                    filterValue = f"{filter['dimension']}:{filter['itemId']}"
                    metricFilters[filter["dimension"]] = filter["itemId"]
                if filter["type"] == "dateRange":
                    filterValue = f"{filter['dateRange']}"
                    metricFilters[filterValue] = filterValue
                if filter["type"] == "segment":
                    filterValue = f"{filter['segmentId']}"
                    if filterValue.startswith("s") and "@AdobeOrg" in filterValue:
                        seg = self.getSegment(filterValue)
                        metricFilters[filterValue] = seg["name"]
                metricFilterTranslation[filterId] = filterValue
            metricColumns = {}
            for colId in columnIdRelations.keys():
                metricColumns[colId] = columnIdRelations[colId]
                for element in filterRelations.get(colId, []):
                    metricColumns[colId] += f":::{metricFilterTranslation[element]}"
        else:
            if returnClass == False:
                return res
            reportType = "static"
            if self.loggingEnabled:
                self.logger.debug(f"reportType: {reportType}")
            columns = None  ## no "columns" key in response
            summaryData = res.get("summaryData")
            (
                nb_columns,
                tableColumnIds,
                segmentApplied,
                filterRelations,
                dataRows,
            ) = self._decrypteStaticData(dataRequest=dataRequest, response=res, resolveColumns=resolveColumns)
            ### Finding metrics
            metricFilters = {}
            metricColumns = []
            for i in range(nb_columns):
                metric: str = res["columns"]["columnIds"][i]
                metricName = metric.split(":::")[0]
                if metricName.startswith("cm"):
                    calcMetric = self.getCalculatedMetric(metricName)
                    metricName = calcMetric["name"]
                correspondingStatic = tableColumnIds[metric]
                ## if the static row has a filter
                if correspondingStatic in list(filterRelations.keys()):
                    ## finding segment applied to metrics
                    for element in filterRelations[correspondingStatic]:
                        segId: str = segmentApplied[element]
                        metricName += f":::{segId}"
                        metricFilters[segId] = segId
                        if segId.startswith("s") and "@AdobeOrg" in segId:
                            seg = self.getSegment(segId)
                            metricFilters[segId] = seg["name"]
                metricColumns.append(metricName)
                ### ending with ['metric1', 'metric2 + segId', ...]
        ### preparing data points
        if self.loggingEnabled:
            self.logger.debug(f"preparing data")
        preparedData = self._prepareData(dataRows, reportType=reportType)
        if returnClass:
            if self.loggingEnabled:
                self.logger.debug(f"returning Workspace class")
            ## Using the class
            data = Workspace(
                responseData=preparedData,
                dataRequest=deepCopyRequest,
                columns=columns,
                summaryData=summaryData,
                analyticsConnector=self,
                reportType=reportType,
                metrics=metricColumns,  ## for normal type
                ## for staticReport
                metricFilters=metricFilters,
                resolveColumns=resolveColumns,
            )
            if save:
                data.to_csv()
            return data
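As an illustration of the payload that `createAnnotation` assembles before POSTing to `/annotations`, here is a minimal, self-contained sketch. The helper name `build_annotation_payload` and all IDs below are hypothetical placeholders for illustration; this does not call the API.

```python
# Sketch (hypothetical helper): mirrors how createAnnotation builds its body.
def build_annotation_payload(name, dateRange, rsid, metricIds=None, filterIds=None, **kwargs):
    data = {
        "name": name,
        "description": kwargs.get("description", "api generated"),
        "dateRange": dateRange,
        "color": kwargs.get("colors", "STANDARD1"),
        "applyToAllReports": kwargs.get("applyToAllReports", False),
        "scope": {"metrics": [], "filters": []},
        "tags": [],
        "approved": kwargs.get("approved", False),
        "favorite": kwargs.get("favorite", False),
        "rsid": rsid,
    }
    # metrics and segments are wrapped with their componentType, as in createAnnotation
    for metric in metricIds or []:
        data["scope"]["metrics"].append({"id": metric, "componentType": "metric"})
    for seg in filterIds or []:
        data["scope"]["filters"].append({"id": seg, "componentType": "segment"})
    return data

payload = build_annotation_payload(
    "Launch day",                                   # hypothetical annotation name
    "2022-04-19T00:00:00/2022-04-19T23:59:59",      # dateRange format from the docstring
    "myrsid",                                       # hypothetical ReportSuite ID
    metricIds=["metrics/visits"],
)
```

In the library itself, this dictionary is what gets sent via `self.connector.postData(self.endpoint_company + "/annotations", data=data)`.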
# Source: AdobeLibManual678-4.3.tar.gz / AdobeLibManual678-4.3/aanalytics2/aanalytics2.py
import json
import time
from copy import deepcopy

# Non standard libraries
import requests

from aanalytics2 import config, token_provider


class AdobeRequest:
    """
    Handle requests to the API, taking care that each request has a valid token set.
    Attributes:
        restTime : Time to rest before sending a new request when reaching a too-many-requests status code.
    """
    loggingEnabled = False

    def __init__(self,
                 config_object: dict = config.config_object,
                 header: dict = config.header,
                 verbose: bool = False,
                 retry: int = 0,
                 loggingEnabled: bool = False,
                 logger: object = None
                 ) -> None:
        """
        Set the connector to be used for handling requests.
        Arguments:
            config_object : OPTIONAL : Require the importConfig file to have been used.
            header : OPTIONAL : header of the config modules
            verbose : OPTIONAL : display comments on the request.
            retry : OPTIONAL : If you wish to retry failed GET requests
            loggingEnabled : OPTIONAL : if logging is enabled for that instance.
            logger : OPTIONAL : instance of the logger created
        """
        if config_object['org_id'] == '':
            raise Exception('You have to upload the configuration file with importConfigFile method.')
        self.config = deepcopy(config_object)
        self.header = deepcopy(header)
        self.loggingEnabled = loggingEnabled
        self.logger = logger
        self.restTime = 30
        self.retry = retry
        if self.config['token'] == '' or time.time() > self.config['date_limit']:
            if 'scopes' in self.config.keys() and self.config.get('scopes', None) is not None:
                self.connectionType = 'oauthV2'
                token_and_expiry = token_provider.get_oauth_token_and_expiry_for_config(config=self.config, verbose=verbose)
            elif self.config.get("private_key", None) is not None or self.config.get("pathToKey", None) is not None:
                self.connectionType = 'jwt'
                token_and_expiry = token_provider.get_jwt_token_and_expiry_for_config(config=self.config, verbose=verbose)
            token = token_and_expiry['token']
            expiry = token_and_expiry['expiry']
            self.token = token
            if self.loggingEnabled:
                self.logger.info(f"token retrieved : {token}")  # fixed: original missed the f prefix
            self.config['token'] = token
            self.config['date_limit'] = time.time() + expiry - 500
            self.header.update({'Authorization': f'Bearer {token}'})

    def _checkingDate(self) -> None:
        """
        Check if the token is still valid.
        """
        now = time.time()
        if now > self.config['date_limit']:
            if self.loggingEnabled:
                self.logger.warning("token expired. Trying to retrieve a new token")
            if self.connectionType == 'oauthV2':
                token_and_expiry = token_provider.get_oauth_token_and_expiry_for_config(config=self.config)
            elif self.connectionType == 'jwt':
                token_and_expiry = token_provider.get_jwt_token_and_expiry_for_config(config=self.config)
            token = token_and_expiry['token']
            if self.loggingEnabled:
                self.logger.info(f"new token retrieved : {token}")
            self.config['token'] = token
            self.config['date_limit'] = time.time() + token_and_expiry['expiry'] - 500
            self.header.update({'Authorization': f'Bearer {token}'})

    def getData(self, endpoint: str, params: dict = None, data: dict = None, headers: dict = None, *args, **kwargs):
        """
        Abstraction for getting data
        """
        internRetry = kwargs.get("retry", self.retry)
        self._checkingDate()
        if self.loggingEnabled:
            self.logger.info(f"endpoint: {endpoint}")
            self.logger.info(f"params: {params}")
        if headers is None:
            headers = self.header
        if params is None and data is None:
            res = requests.get(endpoint, headers=headers)
        elif params is not None and data is None:
            res = requests.get(endpoint, headers=headers, params=params)
        elif params is None and data is not None:
            res = requests.get(endpoint, headers=headers, data=data)
        elif params is not None and data is not None:
            res = requests.get(endpoint, headers=headers, params=params, data=data)
        if kwargs.get("verbose", False):
            print(f"request URL : {res.request.url}")
            print(f"status_code : {res.status_code}")
        try:
            while str(res.status_code) == "429":
                if kwargs.get("verbose", False):
                    print(f'Too many requests: retrying in {self.restTime} seconds')
                if self.loggingEnabled:
                    self.logger.info(f"Too many requests: retrying in {self.restTime} seconds")
                time.sleep(self.restTime)
                res = requests.get(endpoint, headers=headers, params=params, data=data)
            res_json = res.json()
        except:
            ## handling 1.4
            if self.loggingEnabled:
                self.logger.warning(f"handling exception as res.json() cannot be managed")
                self.logger.warning(f"status code: {res.status_code}")
            if kwargs.get('legacy', False):
                try:
                    return json.loads(res.text)
                except:
                    if self.loggingEnabled:
                        self.logger.error(f"GET method failed: {res.status_code}, {res.text}")
                    return res.text
            else:
                if self.loggingEnabled:
                    self.logger.error(f"text: {res.text}")
                res_json = {'error': 'Request Error'}
        while internRetry > 0:
            if self.loggingEnabled:
                self.logger.warning(f"Trying again with internal retry")
            if kwargs.get("verbose", False):
                print('Retry parameter activated')
                print(f'{internRetry} retry left')
            if 'error' in res_json.keys():
                time.sleep(30)
                res_json = self.getData(endpoint, params=params, data=data, headers=headers, retry=internRetry - 1, **kwargs)
                return res_json
            return res_json
        return res_json

    def postData(self, endpoint: str, params: dict = None, data: dict = None, headers: dict = None, *args, **kwargs):
        """
        Abstraction for posting data
        """
        self._checkingDate()
        if headers is None:
            headers = self.header
        if params is None and data is None:
            res = requests.post(endpoint, headers=headers)
        elif params is not None and data is None:
            res = requests.post(endpoint, headers=headers, params=params)
        elif params is None and data is not None:
            res = requests.post(endpoint, headers=headers, data=json.dumps(data))
        elif params is not None and data is not None:
            res = requests.post(endpoint, headers=headers, params=params, data=json.dumps(data))
        try:
            res_json = res.json()
            if res.status_code == 429 or res_json.get('error_code', None) == "429050":
                res_json['status_code'] = 429
        except:
            ## handling 1.4
            if kwargs.get('legacy', False):
                try:
                    return json.loads(res.text)
                except:
                    if self.loggingEnabled:
                        self.logger.error(f"POST method failed: {res.status_code}, {res.text}")
                    return res.text
            res_json
= {'error': res.get('status_code','Request Error')} return res_json def patchData(self, endpoint: str, params: dict = None, data=None, headers: dict = None, *args, **kwargs): """ Abstraction for patching data """ self._checkingDate() if headers is None: headers = self.header if params is not None and data is None: res = requests.patch(endpoint, headers=headers, params=params) elif params is None and data is not None: res = requests.patch(endpoint, headers=headers, data=json.dumps(data)) elif params is not None and data is not None: res = requests.patch(endpoint, headers=headers, params=params, data=json.dumps(data)) try: while str(res.status_code) == "429": if kwargs.get("verbose", False): print(f'Too many requests: retrying in {self.restTime} seconds') time.sleep(self.restTime) res = requests.patch(endpoint, headers=headers, params=params,data=json.dumps(data)) res_json = res.json() except: if self.loggingEnabled: self.logger.error(f"PATCH method failed: {res.status_code}, {res.text}") res_json = {'error': res.get('status_code','Request Error')} return res_json def putData(self, endpoint: str, params: dict = None, data=None, headers: dict = None, *args, **kwargs): """ Abstraction for putting data """ self._checkingDate() if headers is None: headers = self.header if params is not None and data is None: res = requests.put(endpoint, headers=headers, params=params) elif params is None and data is not None: res = requests.put(endpoint, headers=headers, data=json.dumps(data)) elif params is not None and data is not None: res = requests.put(endpoint, headers=headers, params=params, data=json.dumps(data=data)) try: status_code = res.json() except: if self.loggingEnabled: self.logger.error(f"PUT method failed: {res.status_code}, {res.text}") status_code = {'error': res.get('status_code','Request Error')} return status_code def deleteData(self, endpoint: str, params: dict = None, headers: dict = None, *args, **kwargs): """ Abstraction for deleting data """ 
self._checkingDate() if headers is None: headers = self.header if params is None: res = requests.delete(endpoint, headers=headers) elif params is not None: res = requests.delete(endpoint, headers=headers, params=params) try: while str(res.status_code) == "429": if kwargs.get("verbose", False): print(f'Too many requests: retrying in {self.restTime} seconds') time.sleep(self.restTime) res = requests.delete(endpoint, headers=headers, params=params) status_code = res.status_code except: if self.loggingEnabled: self.logger.error(f"DELETE method failed: {res.status_code}, {res.text}") status_code = {'error': 'Request Error'} return status_code
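The GET, PATCH and DELETE methods above share the same retry-on-429 pattern: loop while the response status is 429, sleeping `self.restTime` (30 seconds) between attempts. A minimal self-contained sketch of that pattern, with the network call replaced by a stub (the `fake_get` stub, its canned responses, and the `rest_time=0` default are illustrative assumptions, not part of the library):

```python
import time


def get_with_backoff(fetch, rest_time=0):
    """Retry `fetch` while it reports a 429 status, mirroring the
    while-loop used in AdobeRequest.getData / patchData / deleteData
    (the library sleeps self.restTime = 30s between attempts)."""
    res = fetch()
    while str(res["status_code"]) == "429":
        time.sleep(rest_time)
        res = fetch()
    return res


# Stub returning 429 twice, then success (illustrative only).
_responses = iter([{"status_code": 429},
                   {"status_code": 429},
                   {"status_code": 200}])


def fake_get():
    return next(_responses)


print(get_with_backoff(fake_get)["status_code"])  # 200
```

Note the loop has no attempt cap: a server that keeps answering 429 would retry indefinitely, which is the behavior of the original code as well.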
AdobeLibManual678
/AdobeLibManual678-4.3.tar.gz/AdobeLibManual678-4.3/aanalytics2/connector.py
import os
import time
from typing import Dict, Union
import json

import jwt
import requests

from aanalytics2 import configs


def get_jwt_token_and_expiry_for_config(config: dict, verbose: bool = False, save: bool = False, *args, **kwargs) -> Dict[str, str]:
    """
    Retrieve the token by using the information provided by the user during the importConfigFile function.
    Arguments :
        verbose : OPTIONAL : Default False. If set to True, print information.
        save : OPTIONAL : Default False. If set to True, save the token in a token.txt file.
    """
    private_key = configs.get_private_key_from_config(config)
    header_jwt = {
        'cache-control': 'no-cache',
        'content-type': 'application/x-www-form-urlencoded'
    }
    jwt_expiry = int(time.time()) + 8760 * 60 * 60  # JWT expiry set one year ahead (8760 hours)
    jwt_payload = {
        'exp': jwt_expiry,
        'iss': config['org_id'],
        'sub': config['tech_id'],
        'https://ims-na1.adobelogin.com/s/ent_analytics_bulk_ingest_sdk': True,
        'aud': f'https://ims-na1.adobelogin.com/c/{config["client_id"]}'
    }
    encoded_jwt = _get_jwt(payload=jwt_payload, private_key=private_key)
    payload = {
        'client_id': config['client_id'],
        'client_secret': config['secret'],
        'jwt_token': encoded_jwt
    }
    response = requests.post(config['jwtTokenEndpoint'], headers=header_jwt, data=payload)
    json_response = response.json()
    try:
        token = json_response['access_token']
    except KeyError:
        print('Issue retrieving token')
        print(json_response)
        raise Exception(json.dumps(json_response, indent=2))
    expiry = json_response['expires_in'] / 1000  # 'expires_in' is returned in milliseconds
    if save:
        with open('token.txt', 'w') as f:
            f.write(token)
        print(f'token has been saved here: {os.getcwd()}{os.sep}token.txt')
    if verbose:
        print('token valid till : ' + time.ctime(time.time() + expiry))
    return {'token': token, 'expiry': expiry}


def get_oauth_token_and_expiry_for_config(config: dict, verbose: bool = False, save: bool = False) -> Dict[str, str]:
    """
    Retrieve the access token by using the OAuth information provided by the user during the importConfigFile function.
    Arguments :
        config : REQUIRED : Configuration object.
        verbose : OPTIONAL : Default False. If set to True, print information.
        save : OPTIONAL : Default False. If set to True, save the token in a token.txt file.
    """
    if config is None:
        raise ValueError("config dictionary is required")
    oauth_payload = {
        "grant_type": "client_credentials",
        "client_id": config["client_id"],
        "client_secret": config["secret"],
        "scope": config["scopes"]
    }
    response = requests.post(config["oauthTokenEndpointV2"], data=oauth_payload)
    json_response = response.json()
    if 'access_token' in json_response.keys():
        token = json_response['access_token']
        expiry = json_response["expires_in"]
    else:
        return json.dumps(json_response, indent=2)
    if save:
        with open('token.txt', 'w') as f:
            f.write(token)
    if verbose:
        print('token valid till : ' + time.ctime(time.time() + expiry))
    return {'token': token, 'expiry': expiry}


def _get_jwt(payload: dict, private_key: str) -> str:
    """
    Ensure that jwt encoding returns the same type (str) across versions:
    PyJWT < 2.0.0 returned bytes, >= 2.0.0 returns str.
    """
    token: Union[str, bytes] = jwt.encode(payload, private_key, algorithm='RS256')
    if isinstance(token, bytes):
        return token.decode('utf-8')
    return token
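`_get_jwt` exists only to paper over the PyJWT return-type change the docstring mentions: `jwt.encode` returned `bytes` before 2.0.0 and returns `str` from 2.0.0 on. The same normalization, shown without the `jwt` dependency (the function name here is illustrative):

```python
from typing import Union


def normalize_token(token: Union[str, bytes]) -> str:
    """Return a str whether the encoder produced bytes (PyJWT < 2.0.0)
    or str (PyJWT >= 2.0.0)."""
    if isinstance(token, bytes):
        return token.decode("utf-8")
    return token


print(normalize_token(b"eyJhbGciOi.payload.sig"))  # eyJhbGciOi.payload.sig
print(normalize_token("eyJhbGciOi.payload.sig"))   # eyJhbGciOi.payload.sig
```

Keeping this shim lets the rest of the module treat the encoded JWT uniformly as text regardless of which PyJWT version is installed.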
/AdobeLibManual678-4.3.tar.gz/AdobeLibManual678-4.3/aanalytics2/token_provider.py
from dataclasses import dataclass import json @dataclass class Project: """ This dataclass extract the information retrieved from the getProjet method. It flatten the elements and gives you insights on what your project contains. """ def __init__(self, projectDict: dict = None,rsidSuffix:bool=False): """ Instancialize the class. Arguments: projectDict : REQUIRED : the dictionary of the project (returned by getProject method) rsidSuffix : OPTIONAL : If you want to have the rsid suffix to dimension and metrics. """ if projectDict is None: raise Exception("require a dictionary with project information. Retrievable via getProject") self.id: str = projectDict.get('id', '') self.name: str = projectDict.get('name', '') self.description: str = projectDict.get('description', '') self.rsid: str = projectDict.get('rsid', '') self.ownerName: str = projectDict['owner'].get('name', '') self.ownerId: int = projectDict['owner'].get('id', '') self.ownerEmail: int = projectDict['owner'].get('login', '') self.template: bool = projectDict.get('companyTemplate', False) self.version: str = None if 'definition' in projectDict.keys(): definition: dict = projectDict['definition'] self.version: str = definition.get('version',None) self.curation: bool = definition.get('isCurated', False) if definition.get('device', 'desktop') != 'cell': self.reportType = "desktop" infos = self._findPanelsInfos(definition['workspaces'][0]) self.nbPanels: int = infos["nb_Panels"] self.nbSubPanels: int = 0 self.subPanelsTypes: list = [] for panel in infos["panels"]: self.nbSubPanels += infos["panels"][panel]['nb_subPanels'] self.subPanelsTypes += infos["panels"][panel]['subPanels_types'] self.elementsUsed: dict = self._findElements(definition['workspaces'][0],rsidSuffix=rsidSuffix) self.nbElementsUsed: int = len(self.elementsUsed['dimensions']) + len( self.elementsUsed['metrics']) + len(self.elementsUsed['segments']) + len( self.elementsUsed['calculatedMetrics']) else: self.reportType = "mobile" def 
__str__(self)->str: return json.dumps(self.to_dict(),indent=4) def __repr__(self)->str: return json.dumps(self.to_dict(),indent=4) def _findPanelsInfos(self, workspace: dict = None) -> dict: """ Return a dict of the different information for each Panel. Arguments: workspace : REQUIRED : the workspace dictionary. """ dict_data = {'workspace_id': workspace['id']} dict_data['nb_Panels'] = len(workspace['panels']) dict_data['panels'] = {} for panel in workspace['panels']: dict_data["panels"][panel['id']] = {} dict_data["panels"][panel['id']]['name'] = panel.get('name', 'Default Name') dict_data["panels"][panel['id']]['nb_subPanels'] = len(panel['subPanels']) dict_data["panels"][panel['id']]['subPanels_types'] = [subPanel['reportlet']['type'] for subPanel in panel['subPanels']] return dict_data def _findElements(self, workspace: dict,rsidSuffix:bool=False) -> list: """ Returns the list of dimensions used in the FreeformReportlet. Arguments : workspace : REQUIRED : the workspace dictionary. """ dict_elements: dict = {'dimensions': [], "metrics": [], 'segments': [], "reportSuites": [], 'calculatedMetrics': []} tmp_rsid = "" # default empty value for panel in workspace['panels']: if "reportSuite" in panel.keys(): dict_elements['reportSuites'].append(panel['reportSuite']['id']) if rsidSuffix: tmp_rsid = f"::{panel['reportSuite']['id']}" elif "rsid" in panel.keys(): dict_elements['reportSuites'].append(panel['rsid']) if rsidSuffix: tmp_rsid = f"::{panel['rsid']}" filters: list = panel.get('segmentGroups',[]) if len(filters) > 0: for element in filters: typeElement = element['componentOptions'][0].get('component',{}).get('type','') idElement = element['componentOptions'][0].get('component',{}).get('id','') if typeElement == "Segment": dict_elements['segments'].append(idElement) if typeElement == "DimensionItem": clean_id: str = idElement[:idElement.find( '::')] ## cleaning this type of element : 'variables/evar7.6::3000623228' dict_elements['dimensions'].append(clean_id) for 
subPanel in panel['subPanels']: if subPanel['reportlet']['type'] == "FreeformReportlet": reportlet = subPanel['reportlet'] rows = reportlet['freeformTable'] if 'dimension' in rows.keys(): dict_elements['dimensions'].append(f"{rows['dimension']['id']}{tmp_rsid}") if len(rows["staticRows"]) > 0: for row in rows["staticRows"]: ## I have to get a temp dimension to clean them before loading them in order to avoid counting them multiple time for each rows. temp_list_dim = [] componentType: str = row.get('component',{}).get('type','') if componentType == "DimensionItem": temp_list_dim.append(f"{row['component']['id']}{tmp_rsid}") elif componentType == "Segments" or componentType == "Segment": dict_elements['segments'].append(row['component']['id']) elif componentType == "Metric": dict_elements['metrics'].append(f"{row['component']['id']}{tmp_rsid}") elif componentType == "CalculatedMetric": dict_elements['calculatedMetrics'].append(row['component']['id']) if len(temp_list_dim) > 0: temp_list_dim = list(set([el[:el.find('::')] for el in temp_list_dim])) for dim in temp_list_dim: dict_elements['dimensions'].append(f"{dim}{tmp_rsid}") columns = reportlet['columnTree'] for node in columns['nodes']: temp_data = self._recursiveColumn(node,tmp_rsid=tmp_rsid) dict_elements['calculatedMetrics'] += temp_data['calculatedMetrics'] dict_elements['segments'] += temp_data['segments'] dict_elements['metrics'] += temp_data['metrics'] if len(temp_data['dimensions']) > 0: for dim in set(temp_data['dimensions']): dict_elements['dimensions'].append(dim) dict_elements['metrics'] = list(set(dict_elements['metrics'])) dict_elements['segments'] = list(set(dict_elements['segments'])) dict_elements['dimensions'] = list(set(dict_elements['dimensions'])) dict_elements['calculatedMetrics'] = list(set(dict_elements['calculatedMetrics'])) return dict_elements def _recursiveColumn(self, node: dict = None, temp_data: dict = None,tmp_rsid:str=""): """ recursive function to fetch elements in column stack 
tmp_rsid : OPTIONAL : empty by default, if rsid is pass, it will add the value to dimension and metrics """ if temp_data is None: temp_data: dict = {'dimensions': [], "metrics": [], 'segments': [], "reportSuites": [], 'calculatedMetrics': []} componentType: str = node.get('component',{}).get('type','') if componentType == "Metric": temp_data['metrics'].append(f"{node['component']['id']}{tmp_rsid}") elif componentType == "CalculatedMetric": temp_data['calculatedMetrics'].append(node['component']['id']) elif componentType == "Segment": temp_data['segments'].append(node['component']['id']) elif componentType == "DimensionItem": old_id: str = node['component']['id'] new_id: str = old_id[:old_id.find('::')] temp_data['dimensions'].append(f"{new_id}{tmp_rsid}") if len(node['nodes']) > 0: for new_node in node['nodes']: temp_data = self._recursiveColumn(new_node, temp_data=temp_data,tmp_rsid=tmp_rsid) return temp_data def to_dict(self) -> dict: """ transform the class into a dictionary """ obj = { 'id': self.id, 'name': self.name, 'description': self.description, 'rsid': self.rsid, 'ownerName': self.ownerName, 'ownerId': self.ownerId, 'ownerEmail': self.ownerEmail, 'template': self.template, 'reportType':self.reportType, 'curation': self.curation or False, 'version': self.version or None, } add_object = {} if hasattr(self, 'nbPanels'): add_object = { 'curation': self.curation, 'version': self.version, 'nbPanels': self.nbPanels, 'nbSubPanels': self.nbSubPanels, 'subPanelsTypes': self.subPanelsTypes, 'nbElementsUsed': self.nbElementsUsed, 'dimensions': self.elementsUsed['dimensions'], 'metrics': self.elementsUsed['metrics'], 'segments': self.elementsUsed['segments'], 'calculatedMetrics': self.elementsUsed['calculatedMetrics'], 'rsids': self.elementsUsed['reportSuites'], } full_obj = {**obj, **add_object} return full_obj
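`_recursiveColumn` above walks the nested `nodes` tree of a column stack and buckets each component id by its type. A stripped-down, self-contained version of that recursion (the node shape mirrors the `component`/`nodes` dicts handled above; the sample tree and function name are hypothetical):

```python
def collect_components(node, acc=None):
    """Recursively bucket component ids by type, in the spirit of
    Project._recursiveColumn (reduced to two buckets for clarity)."""
    if acc is None:
        acc = {"metrics": [], "segments": []}
    ctype = node.get("component", {}).get("type", "")
    if ctype == "Metric":
        acc["metrics"].append(node["component"]["id"])
    elif ctype == "Segment":
        acc["segments"].append(node["component"]["id"])
    # descend into child nodes, accumulating into the same dict
    for child in node.get("nodes", []):
        collect_components(child, acc)
    return acc


tree = {
    "component": {"type": "Metric", "id": "metrics/visits"},
    "nodes": [
        {"component": {"type": "Segment", "id": "s1"}, "nodes": []},
    ],
}
print(collect_components(tree))
```

As in the original, the accumulator dict is threaded through every recursive call, so one traversal collects the ids from the entire tree.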
/AdobeLibManual678-4.3.tar.gz/AdobeLibManual678-4.3/aanalytics2/projects.py
import gzip import io from concurrent import futures from pathlib import Path from typing import IO, Union # Non standard libraries import pandas as pd import requests from aanalytics2 import config, connector class DIAPI: """ This class provide an easy way to use the Data Insertion API. You can initialize it with the required information to be present in the request and then select to send POST or GET request. Arguments to instantiate: rsid : REQUIRED : Report Suite ID tracking_server : REQUIRED : tracking server for tracking. example : "xxxx.sc.omtrdc.net" """ def __init__(self, rsid: str = None, tracking_server: str = None): """ Arguments: rsid : REQUIRED : Report Suite ID tracking_server : REQUIRED : tracking server for tracking. """ if rsid is None: raise Exception("Expecting a ReportSuite ID (rsid)") self.rsid = rsid if tracking_server is None: raise Exception("Expecting a tracking server") self.tracking_server = tracking_server try: import importlib.resources as pkg_resources path = pkg_resources.path("aanalytics2", "supported_tags.pickle") except ImportError: # Try backported to PY<37 with pkg_resources. try: import pkg_resources path = pkg_resources.resource_filename( "aanalytics2", "supported_tags.pickle") except: print('no supported_tags file') try: with path as f: self.REFERENCE = pd.read_pickle(f) except: self.REFERENCE = None def getMethod(self, pageName: str = None, g: str = None, pe: str = None, pev1: str = None, pev2: str = None, events: str = None, **kwargs): """ Use the GET method to send information to Adobe Analytics Arguments: pageName : REQUIRED : The Web page name. g : REQUIRED : The Web page URL pe : OPTIONAL : For custom link tracking (Type of link ("d", "e", or "o")) if selected, require "pev1" or "pev2", additionally pageName is set to Null pev1 : OPTIONAL : The link's HREF. For custom links, page values are ignored. pev2 : OPTIONAL : Name of link. 
events : OPTIONAL : If you want to pass some events Possible kwargs: - see the SUPPORTED_TAGS attributes. Tags should be in the supported format. """ if pageName is None and g is None: raise Exception("Expecting a pageName or g arguments") if pe is not None and pe not in ["d", "e", "o"]: raise Exception('Expecting pe argument to be ("d", "e", or "o")') header = {'Content-Type': 'application/json'} endpoint = f"https://{self.tracking_server}/b/ss/{self.rsid}/0" params = {"pageName": pageName, "g": g, "pe": pe, "pev1": pev1, "pev2": pev2, "events": events, **kwargs} res = requests.get(endpoint, params=params, headers=header) return res def postMethod(self, pageName: str = None, pageURL: str = None, linkType: str = None, linkURL: str = None, linkName: str = None, events: str = None, **kwargs): """ Use the POST method to send information to Adobe Analytics Arguments: pageName : REQUIRED : The Web page name. pageURL : REQUIRED : The Web page URL linkType : OPTIONAL : For custom link tracking (Type of link ("d", "e", or "o")) if selected, require "pev1" or "pev2", additionally pageName is set to Null linkURL : OPTIONAL : The link's HREF. For custom links, page values are ignored. linkName : OPTIONAL : Name of link. events : OPTIONAL : If you want to pass some events Possible kwargs: - see the SUPPORTED_TAGS attributes. Tags should be in the supported format. 
""" if pageName is None and pageURL is None: raise Exception("Expecting a pageName or pageURL argument") if linkType is not None and linkType not in ["d", "e", "o"]: raise Exception('Expecting pe argument to be ("d", "e", or "o")') header = {'Content-Type': 'application/xml'} endpoint = f"https://{self.tracking_server}/b/ss//6" dictionary = {"pageName": pageName, "pageURL": pageURL, "linkType": linkType, "linkURL": linkURL, "linkName": linkName, "events": events, "reportSuite": self.rsid, **kwargs} import dicttoxml as dxml myxml = dxml.dicttoxml( dictionary, custom_root='request', attr_type=False) xml_data = myxml.decode() res = requests.post(endpoint, data=xml_data, headers=header) return res class Bulkapi: """ This is the bulk API from Adobe Analytics. By default, the file are sent to the global endpoints for auto-routing. If you wish to select a specific endpoint, you can modify it during instantiation. It requires you to upload some adobeio configuration file through the main aanalytics2 module. Arguments: endpoint : OPTIONAL : by default using https://analytics-collection.adobe.io """ def __init__(self, endpoint: str = "https://analytics-collection.adobe.io", config_object: dict = config.config_object): """ Initialize the Bulk API connection. Returns an object with methods to send data to Analytics. Arguments: endpoint : REQUIRED : Endpoint to send data to. Default to analytics-collection.adobe.io possible values, on top of the default choice are: - https://analytics-collection-va7.adobe.io (US) - https://analytics-collection-nld2.adobe.io (EU) config_object : REQUIRED : config object containing the different information to send data. """ self.endpoint = endpoint try: import importlib.resources as pkg_resources path = pkg_resources.path( "aanalytics2", "CSV_Column_and_Query_String_Reference.pickle") except ImportError: try: # Try backported to PY<37 `importlib_resources`. 
import pkg_resources path = pkg_resources.resource_filename( "aanalytics2", "CSV_Column_and_Query_String_Reference.pickle") except: print('no CSV_Column_and_Query_string_Reference file') try: with path as f: self.REFERENCE = pd.read_pickle(f) except: self.REFERENCE = None # if no token has been generated. self.connector = connector.AdobeRequest() self.header = self.connector.header self.header["x-adobe-vgid"] = "ingestion" del self.header["Content-Type"] self._createdFiles = [] def validation(self, file: IO = None,encoding:str='utf-8', **kwargs): """ Send the file to a validation endpoint. Return the response object from requests. Argument: file : REQUIRED : File in a string of byte format. encoding : OPTIONAL : type of encoding used for the file. Possible kwargs: compress_level : handle the compression level, from 0 (no compression) to 9 (slow but more compressed). default 5. """ compress_level = kwargs.get("compress_level", 5) if file is None: raise Exception("Expecting a file") path = "/aa/collect/v1/events/validate" if file.endswith(".gz") == False: with open(file, "r",encoding=encoding) as f: content = f.read() data = gzip.compress(content.encode('utf-8'), compresslevel=compress_level) filename = f"{file}.gz" elif file.endswith(".gz"): filename = file with open(file, "rb") as f: data = f.read() res = requests.post(self.endpoint + path, files={"file": (None, data)}, headers=self.header) return res def generateTemplate(self, includeAdv: bool = False, returnDF: bool = False, save: bool = True): """ Generate a CSV file with minimum fields. Arguments: includeAdv : OPTIONAL : Include advanced fields in the csv (pe & queryString). Not included by default to avoid confusion for new users. (Default False) returnDF : OPTIONAL : Return a pandas dataFrame if you want to work directly with a data frame.(default False) save : OPTIONAL : Save the file created directly in your working folder. 
""" ## 2 rows being created string = """timestamp,marketingCloudVisitorID,events,pageName,pageURL,reportSuiteID,userAgent,pe,queryString\ntimestampValuePOSIX/Epoch Time (e.g. 1486769029) or ISO-8601 (e.g. 2017-02-10T16:23:49-07:00),marketingCloudVisitorIDValue,eventsValue,pageNameValue,pageURLValue,reportSuiteIDValue,userAgentValue,peValue,queryStringValue """ data = io.StringIO(string) df = pd.read_csv(data, sep=',') if includeAdv == False: df.drop(["pe", "queryString"], axis=1, inplace=True) if save: df.to_csv('template.csv', index=False) if returnDF: return df def _checkFiles(self, file: str = None,encoding:str = "utf-8"): """ Internal method that check content and format of the file """ if file.endswith(".gz"): return file else: # if sending not gzipped file. new_folder = Path('tmp/') new_folder.mkdir(exist_ok=True) with open(file, "r",encoding=encoding) as f: content = f.read() new_path = new_folder / f"{file}.gz" with gzip.open(Path(new_path), 'wb') as f: f.write(content.encode('utf-8')) # save the filename to delete self._createdFiles.append(new_path) return new_path def sendFiles(self, files: Union[list, IO] = None,encoding:str='utf-8',**kwargs): """ Method to send the file(s) through the Bulk API. Returns a list with the different status file sent. Arguments: files : REQUIRED : file to be send to the aalytics collection server. It can be a list or the name of the file to be send. If list is being send, we assume that each file are to be sent in different visitor groups. If file are not gzipped, we will compress the file and saved it as gz in the folder. encoding : OPTIONAL : if encoding is different that default utf-8. possible kwargs: workers : maximum amount of worker for parallele processing. 
(default 4) """ path = "/aa/collect/v1/events" if files is None: raise Exception("Expecting a file") compress_level = kwargs.get("compress_level", 5) files_gz = list() if type(files) == list: for file in files: fileName = self._checkFiles(file,encoding=encoding) files_gz.append(fileName) elif type(files) == str: fileName = self._checkFiles(files,encoding=encoding) files_gz.append(fileName) vgid_headers = [f"ingestion_{x}" for x in range(len(files_gz))] list_headers = [{**self.header, 'x-adobe-vgid': vgid} for vgid in vgid_headers] list_urls = [self.endpoint + path for x in range(len(files_gz))] list_files = ({"file": (None, open(Path(file), "rb").read())} for file in files_gz) # generator for files workers_input = kwargs.get("workers", 4) workers = max(1, workers_input) with futures.ThreadPoolExecutor(workers) as executor: res = executor.map(lambda x, y, z: requests.post( x, headers=y, files=z), list_urls, list_headers, list_files) list_res = [response.json() for response in res] # cleaning temp folder if len(self._createdFiles) > 0: for file in self._createdFiles: file_path = Path(file) file_path.unlink() self._createdFiles = [] tmp = Path('tmp/') tmp.rmdir() return list_res
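Both `validation` and `_checkFiles` above gzip any input that does not already end in `.gz` before uploading it. The compression step in isolation, without the file-system handling (the sample CSV payload and function name are illustrative):

```python
import gzip


def gzip_payload(text: str, compress_level: int = 5) -> bytes:
    """Compress a CSV payload the way Bulkapi does before sending
    (compress_level 0 = none, 9 = slowest/most compressed; default 5)."""
    return gzip.compress(text.encode("utf-8"), compresslevel=compress_level)


payload = gzip_payload("timestamp,pageName\n1486769029,home")
# gzip is lossless: decompressing returns the original text
print(gzip.decompress(payload).decode("utf-8"))
```

The `compress_level` default of 5 matches the `compress_level` kwarg documented in `validation`, trading compression ratio against CPU time.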
/AdobeLibManual678-4.3.tar.gz/AdobeLibManual678-4.3/aanalytics2/ingestion.py
import pandas as pd
from aanalytics2 import config, connector
from typing import Union


class LegacyAnalytics:
    """
    Class that will help you realize basic requests to the old API 1.4 endpoints
    """

    def __init__(self, company_name: str = None, config: dict = config.config_object) -> None:
        """
        Instantiate the Legacy Analytics wrapper.
        """
        if company_name is None:
            raise Exception("Require a company name")
        self.connector = connector.AdobeRequest(config_object=config)
        self.token = self.connector.token
        self.endpoint = "https://api.omniture.com/admin/1.4/rest"
        self.header = {
            'Accept': 'application/json',
            'Authorization': f'Bearer {self.token}',
            'X-ADOBE-DMA-COMPANY': company_name
        }

    def getData(self, path: str = "/", method: str = None, params: dict = None) -> dict:
        """
        Use the GET method with the parameters provided.
        Arguments:
            path : REQUIRED : If you need a specific path (default "/")
            method : OPTIONAL : if you want to pass the method directly there for the parameter.
            params : OPTIONAL : If you need to pass parameters to your url, use a dictionary i.e. : {"param":"value"}
        """
        if params is not None and type(params) != dict:
            raise TypeError("Require a dictionary")
        myParams = {}
        myParams.update(**params or {})
        if method is not None:
            myParams['method'] = method
        res = self.connector.getData(self.endpoint + path, params=myParams, headers=self.header, legacy=True)
        return res

    def postData(self, path: str = "/", method: str = None, params: dict = None, data: Union[dict, list] = None) -> dict:
        """
        Use the POST method with the parameters provided.
        Arguments:
            path : REQUIRED : If you need a specific path (default "/")
            method : OPTIONAL : if you want to pass the method directly there for the parameter.
            params : OPTIONAL : If you need to pass parameters to your url, use a dictionary i.e. : {"param":"value"}
            data : OPTIONAL : Usually required to pass the dictionary or list to the request
        """
        if params is not None and type(params) != dict:
            raise TypeError("Require a dictionary")
        if data is not None and (type(data) != dict and type(data) != list):
            raise TypeError("data should be dictionary or list")
        myParams = {}
        myParams.update(**params or {})
        if method is not None:
            myParams['method'] = method
        res = self.connector.postData(self.endpoint + path, params=myParams, data=data, headers=self.header, legacy=True)
        return res
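Both `getData` and `postData` above build their query parameters the same way: validate the caller's dict, copy it, and fold in the 1.4 API `method` parameter. That merge step in isolation (the standalone function name and the sample `method` string are illustrative assumptions):

```python
def build_params(method=None, params=None):
    """Merge user params with the 1.4 API 'method' parameter,
    as LegacyAnalytics.getData/postData do internally."""
    if params is not None and not isinstance(params, dict):
        raise TypeError("Require a dictionary")
    my_params = {}
    my_params.update(**(params or {}))  # copy so the caller's dict is untouched
    if method is not None:
        my_params["method"] = method
    return my_params


print(build_params(method="Company.GetReportSuites", params={"search": "dev"}))
```

Copying into a fresh dict means the caller's `params` is never mutated, and `method` always wins if the caller also supplied a `method` key.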
/AdobeLibManual678-4.3.tar.gz/AdobeLibManual678-4.3/aanalytics2/aanalytics14.py
# AdonisAI Official Website: [Click Here](https://adonis-ai.herokuapp.com) Official Instagram Page: [Click Here](https://www.instagram.com/_jarvisai_) ![enter image description here](https://source.unsplash.com/1600x900/?robots) 1. What is AdonisAI? 2. Prerequisite 3. Getting Started- How to use it? 4. What it can do (Features it supports) 5. Future / Request Features 6. What's new? 7. Contribute 8. Contact me 9. Donate 10. Thank me on- ## 1. What is AdonisAI? AdonisAI is as advance version of [JarvisAI](https://pypi.org/project/JarvisAI/). AdonisAI is a Python Module which is able to perform task like Chatbot, Assistant etc. It provides base functionality for any assistant application. This library is built using Tensorflow, Pytorch, Transformers and other opensource libraries and frameworks. Well, you can contribute on this project to make it more powerful. This project is crated only for those who is having interest in building Virtual Assistant. Generally it took lots of time to write code from scratch to build Virtual Assistant. So, I have build an Library called "Adonis", which gives you easy functionality to build your own Virtual Assistant. **AdonisAI is more powerful and light weight version of https://pypi.org/project/JarvisAI/** ## 2. Prerequisite - Get your Free API key from https://adonis-ai.herokuapp.com - To use it only Python (> 3.6) is required. - To contribute in project: Python is the only prerequisite for basic scripting, Machine Learning and Deep Learning knowledge will help this model to do task like AI-ML. Read How to contribute section of this page. ## 3. Getting Started- How to use it? - Install the latest version- `pip install AdonisAI` It will install all the required package automatically. 
- You need only this piece of code- ``` # create your own function # RULES (Optional)- # It must contain parameter 'feature_command' (What ever input you provide when AI ask for input will be passed to this function) # Return is optional # If you want to provide return value it should only return text (str) def pprint(feature_command="custom feature (What ever input you provide when AI ask for input will be passed to this function)"): # write your code here to do something with the command # perform some tasks # return is optional return feature_command + ' Executed' obj = AdonisEngine(bot_name='alexa', input_mechanism=InputOutput.speech_to_text_deepspeech_streaming, output_mechanism=[InputOutput.text_output, InputOutput.text_to_speech], backend_tts_api='pyttsx3', wake_word_detection_status=True, wake_word_detection_mechanism=InputOutput.speech_to_text_deepspeech_streaming, shutdown_command='shutdown', secret_key='your_secret_key') # Check existing list of commands, Existing command you can not use while registering your function print(obj.check_registered_command()) # Register your function (Optional) obj.register_feature(feature_obj=pprint, feature_command='custom feature') # Start AI in background. It will always run forever until you don't stop it manually. obj.engine_start() ``` **Whats now?** It will start your AI, it will ask you to give input and accordingly it will produce output. You can configure `input_mechanism` and `output_mechanism` parameter for voice input/output or text input/output. ### Parameters- - ![enter image description here](https://i.imgur.com/rliCjBE.png) # 4. What it can do (Features it supports)- 1. Currently, it supports only english language 2. Supports voice and text input/output. 3. Supports AI based voice input and by using google api voice input. ### 4.1. Supported Commands- ![enter image description here](https://i.postimg.cc/9M66tfwP/raycast-untitled-9.png) ### 4.3. 
Supported Input/Output Methods (Which option do I need to choose?)- ![enter image description here](https://i.ibb.co/sCDWW7K/raycast-untitled-5.png) # 5. Future/Request Features- **WIP** **You tell me** # 6. What's new- 1. AdonisAI==1.0: Initial release. 2. AdonisAI==1.1: Added news and weather features. Added AdonisAI.InputOutput.wake_word_detection_mechanism. 3. AdonisAI==1.2: Added a new input mechanism (AdonisAI.InputOutput.speech_to_text_deepspeech_streaming), fast and free, and new features (jokes, about). 4. AdonisAI==1.3: Added new features (send WhatsApp, open website, play on YouTube, send email). 5. AdonisAI==1.4: Added an AI-based chatbot; from now on you need a secret key for AdonisAI, used for security purposes. Get your free key from https://adonis-ai.herokuapp.com. 6. AdonisAI==1.5: Major bug fix from version 1.4. *[DO NOT USE AdonisAI==1.4]* 7. AdonisAI==1.6: New features added (screenshot, photo click, YouTube video download, play games, covid updates, internet speed check) 8. AdonisAI==1.7: Bug fixes from version 1.6. *[DO NOT USE AdonisAI==1.6]* # 7. Contribute- Instructions Coming Soon # 8. Contact me- - [Instagram](https://www.instagram.com/dipesh_pal17) - [YouTube](https://www.youtube.com/dipeshpal17) # 9. Donate- [Donate and contribute to help run this project and buy a domain](https://www.buymeacoffee.com/dipeshpal) **_Feel free to use my code; don't forget to give credit. All contributors will get credit in this repo._** **_Mention the lines below for credits-_** Credits- - https://jarvis-ai-api.herokuapp.com/ - https://github.com/Dipeshpal/Jarvis_AI/ - https://www.youtube.com/dipeshpal17 - https://www.instagram.com/dipesh_pal17/ # 10. Thank me on- - Follow me on Instagram: https://www.instagram.com/dipesh_pal17/ - Subscribe to me on YouTube: https://www.youtube.com/dipeshpal17
AdonisAI
/AdonisAI-1.7.tar.gz/AdonisAI-1.7/README.md
README.md
![GitHub](https://img.shields.io/github/license/MaximeChallon/AdresseParser?logo=License) ![GitHub contributors](https://img.shields.io/github/contributors/MaximeChallon/AdresseParser) ![Python package](https://github.com/MaximeChallon/AdresseParser/workflows/Python%20package/badge.svg?branch=master) ![PyPI](https://img.shields.io/pypi/v/AdresseParser) ![PyPI - Format](https://img.shields.io/pypi/format/AdresseParser?label=PyPi%20format) [![Build Status](https://travis-ci.org/MaximeChallon/AdresseParser.svg?branch=master)](https://travis-ci.org/MaximeChallon/AdresseParser) ![GitHub Release Date](https://img.shields.io/github/release-date/MaximeChallon/AdresseParser) # AdresseParser A Python package for parsing and comparing French addresses. # Getting started The package is available on [PyPI](https://pypi.org/project/AdresseParser). You can install it with pip: ```bash pip install AdresseParser ``` Example usage in a Python console: ```bash >>> from AdresseParser import AdresseParser >>> adr_parser = AdresseParser() >>> result = adr_parser.parse("88 rue de rivoli 75002 paris") >>> print(result) {'numero': '88', 'indice': None, 'rue': {'type': 'RUE', 'nom': 'RIVOLI'}, 'code_postal': '75002', 'ville': {'arrondissement': 2, 'nom': 'PARIS'}, 'departement': {'numero': 75, 'nom': 'Paris'}, 'region': 'Île-de-France', 'pays': 'France'} >>> print(result['rue']) {'type': 'RUE', 'nom': 'RIVOLI'} >>> print(result['ville']['arrondissement']) 2 ``` # Return ```json { "numero": "str", "indice": "str", "rue":{ "type": "str", "nom": "str" }, "code_postal": "str", "ville": { "arrondissement": "int", "nom": "str" }, "departement": { "numero": "str", "nom": "str" }, "region": "str", "pays": "France" } ```
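The package is described as both parsing and comparing French addresses, but only parsing is shown above. As a sketch of one way to compare two parse results, the `same_street` helper below is hypothetical (it is not part of the AdresseParser API) and simply operates on dicts shaped like the example output above:

```python
# Hypothetical helper for comparing two AdresseParser parse results.
# The dict layout ({'numero': ..., 'rue': {'type': ..., 'nom': ...}, ...})
# mirrors the example output above; same_street is NOT part of the library's API.

def same_street(a: dict, b: dict) -> bool:
    """True when two parsed addresses refer to the same street."""
    return a['rue'] == b['rue'] and a['code_postal'] == b['code_postal']

# With the library installed, this would be fed real parse() results, e.g.:
#   adr_parser = AdresseParser()
#   same_street(adr_parser.parse("88 rue de rivoli 75002 paris"),
#               adr_parser.parse("90 rue de rivoli 75002 paris"))

# Standalone demonstration with dicts shaped like the parse output:
a = {'numero': '88', 'rue': {'type': 'RUE', 'nom': 'RIVOLI'}, 'code_postal': '75002'}
b = {'numero': '90', 'rue': {'type': 'RUE', 'nom': 'RIVOLI'}, 'code_postal': '75002'}
print(same_street(a, b))  # True: same street, different house numbers
```

Comparing the structured fields rather than the raw strings sidesteps differences in case, spacing, and abbreviations that the parser already normalizes.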
AdresseParser
/AdresseParser-1.0.2.tar.gz/AdresseParser-1.0.2/README.md
README.md
example! ```python from AdroitFisherman.DoubleLinkedListWithoutHeadNode.Double import DoubleLinkedListWithoutHeadNode_double if __name__ == '__main__': operate=0 test=DoubleLinkedListWithoutHeadNode_double() while operate<16: print("1:Init linear list 2:Destroy linear list 3:Clear Linear list", end='\n') print("4:Is list empty 5:Get list length 6:Get elem's value", end='\n') print("7:Get elem's index 8:Get elem's prior elem 9:Get elem's next elem", end='\n') print("10:Add elem to the first position 11:Add elem to the last position 12:Insert elem into list", end='\n') print("13:Delete elem 14:View list 15:View list by reverse order", end='\n') operate = int(input("please choose operation options:")) if operate==1: test.init_list() elif operate==2: test.destroy_list() elif operate==3: test.clear_list() elif operate==4: if test.list_empty()==True: print("empty",end='\n') else: print("not empty",end='\n') elif operate==5: print(f"length:{test.list_length()}",end='\n') elif operate==6: index=int(input("please input elem's position:")) print(f"elem value:{test.get_elem(index)}") elif operate==7: elem=float(input("please input elem's value:")) print("elem position:%d"%test.locate_elem(elem)) elif operate==8: elem = float(input("please input elem's value:")) print("prior elem's value:%f" % test.prior_elem(elem)) elif operate==9: elem = float(input("please input elem's value:")) print("next elem's value:%f" % test.next_elem(elem)) elif operate==10: elem = float(input("please input elem's value:")) test.add_first(elem) for i in range(0,test.list_length(),1): print(test.get_elem(i),end='\t') print(end='\n') elif operate==11: elem = float(input("please input elem's value:")) test.add_after(elem) for i in range(0,test.list_length(),1): print(test.get_elem(i),end='\t') print(end='\n') elif operate==12: index = int(input("please input elem's position:")) elem = float(input("please input elem's value:")) test.list_insert(index, elem) for i in range(0,test.list_length(),1): 
print(test.get_elem(i),end='\t') print(end='\n') elif operate==13: index = int(input("please input elem's position:")) test.list_delete(index) for i in range(0,test.list_length(),1): print(test.get_elem(i),end='\t') print(end='\n') elif operate==14: for i in range(0,test.list_length(),1): print(test.get_elem(i),end='\t') print(end='\n') elif operate==15: test.traverse_list_by_reverse_order() ```
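To experiment with the same menu logic without installing the AdroitFisherman extension module, a minimal pure-Python stand-in exposing the method names used above (a sketch backed by a plain list, not the real `DoubleLinkedListWithoutHeadNode_double` implementation) could look like:

```python
# Pure-Python stand-in for the linked-list interface exercised by the menu
# above -- a sketch backed by a plain list, NOT the real extension class.
class SimpleList:
    def __init__(self):
        self._items = []

    def init_list(self):
        self._items = []

    def list_empty(self):
        return len(self._items) == 0

    def list_length(self):
        return len(self._items)

    def get_elem(self, index):
        return self._items[index]

    def locate_elem(self, elem):
        return self._items.index(elem)

    def add_first(self, elem):           # add elem to the first position
        self._items.insert(0, elem)

    def add_after(self, elem):           # add elem to the last position
        self._items.append(elem)

    def list_insert(self, index, elem):  # insert elem at a given position
        self._items.insert(index, elem)

    def list_delete(self, index):
        del self._items[index]

lst = SimpleList()
lst.add_after(1.0)
lst.add_after(3.0)
lst.list_insert(1, 2.0)
print([lst.get_elem(i) for i in range(lst.list_length())])  # [1.0, 2.0, 3.0]
```

Swapping `SimpleList` for the real class in the menu loop should behave the same for these operations, which makes it handy for testing the driver logic.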
AdroitFisherman
/AdroitFisherman-0.0.22.tar.gz/AdroitFisherman-0.0.22/README.md
README.md
![Title_logo](Documentation/source/Images/icon_image/icon_drawing_github.png?raw=true "Logo") # The Adsorber Program [![PyPI - Python Version](https://img.shields.io/pypi/pyversions/Adsorber)](https://docs.python.org/3/) [![GitHub release (latest by date)](https://img.shields.io/github/v/release/GardenGroupUO/Adsorber)](https://github.com/GardenGroupUO/Adsorber) [![PyPI](https://img.shields.io/pypi/v/Adsorber)](https://pypi.org/project/Adsorber/) [![Conda](https://img.shields.io/conda/v/gardengroupuo/adsorber)](https://anaconda.org/GardenGroupUO/adsorber) [![Documentation](https://img.shields.io/badge/Docs-click%20here-brightgreen)](https://adsorber.readthedocs.io/en/latest/) [![Licence](https://img.shields.io/github/license/GardenGroupUO/Adsorber)](https://www.gnu.org/licenses/agpl-3.0.en.html) [![LGTM Grade](https://img.shields.io/lgtm/grade/python/github/GardenGroupUO/Adsorber)](https://lgtm.com/projects/g/GardenGroupUO/Adsorber/context:python) Authors: Dr. Geoffrey R. Weal and Dr. Anna L. Garden (University of Otago, Dunedin, New Zealand) Group page: https://blogs.otago.ac.nz/annagarden/ ## What is Adsorber Adsorber is designed to create a number of models that have adsorbates adsorbed to various top, bridge, three-fold, and four-fold sites on a cluster or surface model. ## Installation It is recommended to read the installation page before using the Adsorber program. [adsorber.readthedocs.io/en/latest/Installation.html](https://adsorber.readthedocs.io/en/latest/Installation.html) Note that you can install Adsorber through ``pip3`` and ``conda``. Jmol is also used for looking at your cluster/surface model with adsorbed atoms and molecules upon it. You can see how to install and use it at [Installing and Using ASE GUI and Jmol](https://adsorber.readthedocs.io/en/latest/External_programs_that_will_be_useful_to_install_for_using_Adsorber.html). 
## Output files that are created by Adsorber Adsorber will adsorb atoms and molecules on various binding sites across your cluster or surface model. These include top, bridge, three-fold, and four-fold sites. An example of a COOH molecule adsorbed to a corner top-site on a Cu<sub>78</sub> cluster is shown below <p align="center"> <img src="https://github.com/GardenGroupUO/Adsorber/blob/main/Documentation/source/Images/COOH_site_1_rotation_0.png"> </p> ## Where can I find the documentation for Adsorber All the information about this program is found online at [adsorber.readthedocs.io/en/latest/](https://adsorber.readthedocs.io/en/latest/). Click the button below to also see the documentation: [![Documentation](https://img.shields.io/badge/Docs-click%20here-brightgreen)](https://adsorber.readthedocs.io/en/latest/) ## The ``Adsorber`` Program is a "work in progress" This program is definitely a "work in progress". I have made it as easy to use as possible, but there are always oversights in program development and some parts of it may not be as easy to use as they could be. If you have any issues with the program or you think there would be better/easier ways to use and implement things in ``Adsorber``, feel free to email Geoffrey about these ([email protected]). Feedback is very much welcome! 
## About <div align="center"> | Python | [![PyPI - Python Version](https://img.shields.io/pypi/pyversions/Adsorber)](https://docs.python.org/3/) | |:----------------------:|:-------------------------------------------------------------:| | Repositories | [![GitHub release (latest by date)](https://img.shields.io/github/v/release/GardenGroupUO/Adsorber)](https://github.com/GardenGroupUO/Adsorber) [![PyPI](https://img.shields.io/pypi/v/Adsorber)](https://pypi.org/project/Adsorber/) [![Conda](https://img.shields.io/conda/v/gardengroupuo/adsorber)](https://anaconda.org/GardenGroupUO/adsorber) | | Documentation | [![Documentation](https://img.shields.io/badge/Docs-click%20here-brightgreen)](https://adsorber.readthedocs.io/en/latest/) | | Tests | [![LGTM Grade](https://img.shields.io/lgtm/grade/python/github/GardenGroupUO/Adsorber)](https://lgtm.com/projects/g/GardenGroupUO/Adsorber/context:python) | License | [![Licence](https://img.shields.io/github/license/GardenGroupUO/Adsorber)](https://www.gnu.org/licenses/agpl-3.0.en.html) | | Authors | Geoffrey R. Weal, Dr. Anna L. Garden | | Group Website | https://blogs.otago.ac.nz/annagarden/ | </div>
Adsorber
/Adsorber-1.10.tar.gz/Adsorber-1.10/README.md
README.md
<!-- <p align="center"> <img src="https://github.com/RCCS-CaptureTeam/Adsorption_Breakthrough_Analysis/raw/main/docs/source/logo.png" height="150"> </p> --> <h1 align="center"> AdsorptionBreakthroughAnalysis </h1> <p align="center"> <a href="https://github.com/RCCS-CaptureTeam/Adsorption_Breakthrough_Analysis/actions/workflows/tests.yml"> <img alt="Tests" src="https://github.com/RCCS-CaptureTeam/Adsorption_Breakthrough_Analysis/workflows/tests.yml/badge.svg" /> </a> <a href="https://pypi.org/project/AdsorptionBreakthroughAnalysis"> <img alt="PyPI" src="https://img.shields.io/pypi/v/AdsorptionBreakthroughAnalysis" /> </a> <a href="https://pypi.org/project/AdsorptionBreakthroughAnalysis"> <img alt="PyPI - Python Version" src="https://img.shields.io/pypi/pyversions/AdsorptionBreakthroughAnalysis" /> </a> <a href="https://github.com/RCCS-CaptureTeam/Adsorption_Breakthrough_Analysis/blob/main/LICENSE"> <img alt="PyPI - License" src="https://img.shields.io/pypi/l/AdsorptionBreakthroughAnalysis" /> </a> <a href='https://AdsorptionBreakthroughAnalysis.readthedocs.io/en/latest/?badge=latest'> <img src='https://readthedocs.org/projects/AdsorptionBreakthroughAnalysis/badge/?version=latest' alt='Documentation Status' /> </a> <a href="https://codecov.io/gh/RCCS-CaptureTeam/Adsorption_Breakthrough_Analysis/branch/main"> <img src="https://codecov.io/gh/RCCS-CaptureTeam/Adsorption_Breakthrough_Analysis/branch/main/graph/badge.svg" alt="Codecov status" /> </a> <a href="https://github.com/cthoyt/cookiecutter-python-package"> <img alt="Cookiecutter template from @cthoyt" src="https://img.shields.io/badge/Cookiecutter-snekpack-blue" /> </a> <a href='https://github.com/psf/black'> <img src='https://img.shields.io/badge/code%20style-black-000000.svg' alt='Code style: black' /> </a> <a href="https://github.com/RCCS-CaptureTeam/Adsorption_Breakthrough_Analysis/blob/main/.github/CODE_OF_CONDUCT.md"> <img src="https://img.shields.io/badge/Contributor%20Covenant-2.1-4baaaa.svg" 
alt="Contributor Covenant"/> </a> </p> # Adsorption Breakthrough Analysis This program is used to analyse the breakthrough curves generated by adsorption in the rig created by Dr E.G (in the lab as of 1st of September 2022). The folders include: * Code * Contains the Python script * Explaining program * Contains a Jupyter notebook explaining how to use the Python script and some extra information if needed * Generating results * Contains a simple Jupyter notebook with the essentials needed to run and produce the required outputs ## 🚀 Package interaction There are two ways to interact with this package. <b> 1. Local installation </b> - Install via pip (`pip install AdsorptionBreakthroughAnalysis`) - Clone the GitHub repo (`git clone https://github.com/dm937/Adsorption_Breakthrough_Analysis/`) - Use the Jupyter notebook locally (check `Explaining program/Explaining_program.ipynb`) <b> 2. Using the online notebook (easy) </b> - Use the [online notebook](https://deepnote.com/workspace/fmcil-1f244322-b560-46a9-bfe3-cb29fad834c7/project/AdsorptionBreakthroughAnalysis-06bd4f69-f127-42b0-bbc2-792ba35155d4/%2FExplaining_program.ipynb) If you are unfamiliar with pip then we recommend using the online notebook. ## Usage The program takes in MS and Coriolis readings and then creates a dataframe containing only the relevant breakthrough data. This is done using classes: each part of the experiment is an object containing the related data. For example, 14%_CO2_UiO66_sample may be one object. To create an object, set up the ExperimentalSetup dictionary with the relevant values; the MS and Coriolis files must be in the same folder and passed to the ExperimentalSetup as well. The object is then created by calling the class with the relevant conditions. Once a blank and a sample object are created, you can call the standard output function to produce the standard set of results. 
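As an illustration of that class-based workflow, the sketch below builds an ExperimentalSetup-style dictionary and a stand-in experiment class. The dictionary keys and the `Experiment` class name are assumptions made for illustration, not the package's actual API (see the "Explaining program" notebook for the real names):

```python
# Illustrative sketch of the class-based workflow described above.
# The dictionary keys and the Experiment class are ASSUMED names,
# not the package's actual API.

experimental_setup = {
    'ms_file': 'blank_run_ms.csv',         # MS readings, same folder as the script
    'coriolis_file': 'blank_run_cor.csv',  # Coriolis readings, same folder
    'co2_fraction': 0.14,                  # e.g. a 14% CO2 feed
    'sorbent': 'UiO66',
}

class Experiment:
    """Stand-in for the package's experiment class: one object per run."""
    def __init__(self, setup: dict):
        self.setup = setup

    def label(self) -> str:
        pct = int(self.setup['co2_fraction'] * 100)
        return f"{pct}%_CO2_{self.setup['sorbent']}_sample"

blank = Experiment({**experimental_setup, 'sorbent': 'blank'})
sample = Experiment(experimental_setup)
print(sample.label())  # 14%_CO2_UiO66_sample
```

With a blank and a sample object in hand, the package's standard output function would then be called on the pair to produce the standard results.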
This is all explained further (with functions/methods called in code cells) in the "Explaining program" notebook. ## Acknowledgements This work is part of the PrISMa Project (299659), funded through the ACT Programme (Accelerating CCS Technologies, Horizon 2020 Project 294766). Financial contributions from the Department for Business, Energy & Industrial Strategy (BEIS) together with extra funding from the NERC and EPSRC Research Councils, United Kingdom, the Research Council of Norway (RCN), the Swiss Federal Office of Energy (SFOE), and the U.S. Department of Energy are gratefully acknowledged. Additional financial support from TOTAL and Equinor is also gratefully acknowledged. This work is also part of the USorb-DAC Project, which is supported by a grant from The Grantham Foundation for the Protection of the Environment to RMI's climate tech accelerator program, Third Derivative. ### ⚖️ License The code in this package is licensed under the MIT License. <!-- ### 📖 Citation Citation goes here! 
--> <!-- ### 🎁 Support This project has been supported by the following organizations (in alphabetical order): - [Harvard Program in Therapeutic Science - Laboratory of Systems Pharmacology](https://hits.harvard.edu/the-program/laboratory-of-systems-pharmacology/) --> <!-- ### 💰 Funding This project has been supported by the following grants: | Funding Body | Program | Grant | |---|---|---| | DARPA | [Automating Scientific Knowledge Extraction (ASKE)](https://www.darpa.mil/program/automating-scientific-knowledge-extraction) | HR00111990009 | --> ### 🍪 Cookiecutter This package was created with [@audreyfeldroy](https://github.com/audreyfeldroy)'s [cookiecutter](https://github.com/cookiecutter/cookiecutter) package using [@cthoyt](https://github.com/cthoyt)'s [cookiecutter-snekpack](https://github.com/cthoyt/cookiecutter-snekpack) template. ## 🛠️ For Developers <details> <summary>See developer instructions</summary> ### Development Installation To install in development mode, use the following: ```bash $ git clone https://github.com/RCCS-CaptureTeam/Adsorption_Breakthrough_Analysis.git $ cd Adsorption_Breakthrough_Analysis $ pip install -e . ``` ### 🥼 Testing After cloning the repository and installing `tox` with `pip install tox`, the unit tests in the `tests/` folder can be run reproducibly with: ```shell $ tox ``` Additionally, these tests are automatically re-run with each commit in a [GitHub Action](https://github.com/RCCS-CaptureTeam/Adsorption_Breakthrough_Analysis/actions?query=workflow%3ATests). 
### 📖 Building the Documentation The documentation can be built locally using the following: ```shell $ git clone https://github.com/RCCS-CaptureTeam/Adsorption_Breakthrough_Analysis.git $ cd Adsorption_Breakthrough_Analysis $ tox -e docs $ open docs/build/html/index.html ``` The documentation automatically installs the package as well as the `docs` extra specified in the [`setup.cfg`](setup.cfg). `sphinx` plugins like `texext` can be added there. Additionally, they need to be added to the `extensions` list in [`docs/source/conf.py`](docs/source/conf.py). ### 📦 Making a Release After installing the package in development mode and installing `tox` with `pip install tox`, the commands for making a new release are contained within the `finish` environment in `tox.ini`. Run the following from the shell: ```shell $ tox -e finish ``` This script does the following: 1. Uses [Bump2Version](https://github.com/c4urself/bump2version) to switch the version number in the `setup.cfg`, `src/AdsorptionBreakthroughAnalysis/version.py`, and [`docs/source/conf.py`](docs/source/conf.py) to not have the `-dev` suffix 2. Packages the code in both a tar archive and a wheel using [`build`](https://github.com/pypa/build) 3. Uploads to PyPI using [`twine`](https://github.com/pypa/twine). Be sure to have a `.pypirc` file configured to avoid the need for manual input at this step 4. Push to GitHub. You'll need to make a release associated with the commit where the version was bumped. 5. Bump the version to the next patch. If you made big changes and want to bump the version by minor, you can use `tox -e bumpversion -- minor` after. </details>
AdsorptionBreakthroughAnalysis
/AdsorptionBreakthroughAnalysis-0.0.2.tar.gz/AdsorptionBreakthroughAnalysis-0.0.2/README.md
README.md
AdsorptionBreakthroughAnalysis |release| Documentation ====================================================== Cookiecutter ------------ This package was created with the `cookiecutter <https://github.com/cookiecutter/cookiecutter>`_ package using `cookiecutter-snekpack <https://github.com/cthoyt/cookiecutter-snekpack>`_ template. It comes with the following: - Standard `src/` layout - Declarative setup with `setup.cfg` and `pyproject.toml` - Reproducible tests with `pytest` and `tox` - A command line interface with `click` - A vanity CLI via python entrypoints - Version management with `bumpversion` - Documentation build with `sphinx` - Testing of code quality with `flake8` in `tox` - Testing of documentation coverage with `docstr-coverage` in `tox` - Testing of documentation format and build in `tox` - Testing of package metadata completeness with `pyroma` in `tox` - Testing of MANIFEST correctness with `check-manifest` in `tox` - Testing of optional static typing with `mypy` in `tox` - A `py.typed` file so other packages can use your type hints - Automated running of tests on each push with GitHub Actions - Configuration for `ReadTheDocs <https://readthedocs.org/>`_ - A good base `.gitignore` generated from `gitignore.io <https://gitignore.io>`_. - A pre-formatted README with badges - A pre-formatted LICENSE file with the MIT License (you can change this to whatever you want, though) - A pre-formatted CONTRIBUTING guide - Automatic tool for releasing to PyPI with ``tox -e finish`` - A copy of the `Contributor Covenant <https://www.contributor-covenant.org>`_ as a basic code of conduct Table of Contents ----------------- .. toctree:: :maxdepth: 2 :caption: Getting Started :name: start installation usage cli Indices and Tables ------------------ * :ref:`genindex` * :ref:`modindex` * :ref:`search`
AdsorptionBreakthroughAnalysis
/AdsorptionBreakthroughAnalysis-0.0.2.tar.gz/AdsorptionBreakthroughAnalysis-0.0.2/docs/source/index.rst
index.rst
Installation ============ The most recent release can be installed from `PyPI <https://pypi.org/project/AdsorptionBreakthroughAnalysis>`_ with: .. code-block:: shell $ pip install AdsorptionBreakthroughAnalysis The most recent code and data can be installed directly from GitHub with: .. code-block:: shell $ pip install git+https://github.com/RCCS-CaptureTeam/Adsorption_Breakthrough_Analysis.git To install in development mode, use the following: .. code-block:: shell $ git clone https://github.com/RCCS-CaptureTeam/Adsorption_Breakthrough_Analysis.git $ cd Adsorption_Breakthrough_Analysis $ pip install -e .
AdsorptionBreakthroughAnalysis
/AdsorptionBreakthroughAnalysis-0.0.2.tar.gz/AdsorptionBreakthroughAnalysis-0.0.2/docs/source/installation.rst
installation.rst
============ tfidfpackage ============ .. image:: https://img.shields.io/pypi/v/term_frequency.svg :target: https://pypi.python.org/pypi/term_frequency .. image:: https://img.shields.io/travis/dsmall/term_frequency.svg :target: https://travis-ci.org/dsmall/term_frequency .. image:: https://readthedocs.org/projects/term-frequency/badge/?version=latest :target: https://term-frequency.readthedocs.io/en/latest/?badge=latest :alt: Documentation Status Python Boilerplate contains all the boilerplate you need to create a Python package. * Free software: MIT license * Documentation: https://term-frequency.readthedocs.io. Features -------- * TODO Credits ------- This package was created with Cookiecutter_ and the `audreyr/cookiecutter-pypackage`_ project template. .. _Cookiecutter: https://github.com/audreyr/cookiecutter .. _`audreyr/cookiecutter-pypackage`: https://github.com/audreyr/cookiecutter-pypackage
Adsys-PDFReaderTool
/Adsys_PDFReaderTool-0.0.1.tar.gz/Adsys_PDFReaderTool-0.0.1/README.rst
README.rst
.. highlight:: shell ============ Contributing ============ Contributions are welcome, and they are greatly appreciated! Every little bit helps, and credit will always be given. You can contribute in many ways: Types of Contributions ---------------------- Report Bugs ~~~~~~~~~~~ Report bugs at https://github.com/dsmall/term_frequency/issues. If you are reporting a bug, please include: * Your operating system name and version. * Any details about your local setup that might be helpful in troubleshooting. * Detailed steps to reproduce the bug. Fix Bugs ~~~~~~~~ Look through the GitHub issues for bugs. Anything tagged with "bug" and "help wanted" is open to whoever wants to implement it. Implement Features ~~~~~~~~~~~~~~~~~~ Look through the GitHub issues for features. Anything tagged with "enhancement" and "help wanted" is open to whoever wants to implement it. Write Documentation ~~~~~~~~~~~~~~~~~~~ tfidfpackage could always use more documentation, whether as part of the official tfidfpackage docs, in docstrings, or even on the web in blog posts, articles, and such. Submit Feedback ~~~~~~~~~~~~~~~ The best way to send feedback is to file an issue at https://github.com/dsmall/term_frequency/issues. If you are proposing a feature: * Explain in detail how it would work. * Keep the scope as narrow as possible, to make it easier to implement. * Remember that this is a volunteer-driven project, and that contributions are welcome :) Get Started! ------------ Ready to contribute? Here's how to set up `term_frequency` for local development. 1. Fork the `term_frequency` repo on GitHub. 2. Clone your fork locally:: $ git clone [email protected]:your_name_here/term_frequency.git 3. Install your local copy into a virtualenv. Assuming you have virtualenvwrapper installed, this is how you set up your fork for local development:: $ mkvirtualenv term_frequency $ cd term_frequency/ $ python setup.py develop 4. 
Create a branch for local development:: $ git checkout -b name-of-your-bugfix-or-feature Now you can make your changes locally. 5. When you're done making changes, check that your changes pass flake8 and the tests, including testing other Python versions with tox:: $ flake8 term_frequency tests $ python setup.py test or py.test $ tox To get flake8 and tox, just pip install them into your virtualenv. 6. Commit your changes and push your branch to GitHub:: $ git add . $ git commit -m "Your detailed description of your changes." $ git push origin name-of-your-bugfix-or-feature 7. Submit a pull request through the GitHub website. Pull Request Guidelines ----------------------- Before you submit a pull request, check that it meets these guidelines: 1. The pull request should include tests. 2. If the pull request adds functionality, the docs should be updated. Put your new functionality into a function with a docstring, and add the feature to the list in README.rst. 3. The pull request should work for Python 2.7, 3.4, 3.5 and 3.6, and for PyPy. Check https://travis-ci.org/dsmall/term_frequency/pull_requests and make sure that the tests pass for all supported Python versions. Tips ---- To run a subset of tests:: $ python -m unittest tests.test_term_frequency Deploying --------- A reminder for the maintainers on how to deploy. Make sure all your changes are committed (including an entry in HISTORY.rst). Then run:: $ bumpversion patch # possible: major / minor / patch $ git push $ git push --tags Travis will then deploy to PyPI if tests pass.
Adsys-PDFReaderTool
/Adsys_PDFReaderTool-0.0.1.tar.gz/Adsys_PDFReaderTool-0.0.1/CONTRIBUTING.rst
CONTRIBUTING.rst
.. highlight:: shell ============ Installation ============ Stable release -------------- To install tfidfpackage, run this command in your terminal: .. code-block:: console $ pip install term_frequency This is the preferred method to install tfidfpackage, as it will always install the most recent stable release. If you don't have `pip`_ installed, this `Python installation guide`_ can guide you through the process. .. _pip: https://pip.pypa.io .. _Python installation guide: http://docs.python-guide.org/en/latest/starting/installation/ From sources ------------ The sources for tfidfpackage can be downloaded from the `Github repo`_. You can either clone the public repository: .. code-block:: console $ git clone git://github.com/dsmall/term_frequency Or download the `tarball`_: .. code-block:: console $ curl -OL https://github.com/dsmall/term_frequency/tarball/master Once you have a copy of the source, you can install it with: .. code-block:: console $ python setup.py install .. _Github repo: https://github.com/dsmall/term_frequency .. _tarball: https://github.com/dsmall/term_frequency/tarball/master
Adsys-PDFReaderTool
/Adsys_PDFReaderTool-0.0.1.tar.gz/Adsys_PDFReaderTool-0.0.1/docs/installation.rst
installation.rst
from os import listdir from os.path import isfile, join import pprint as pp import extract import kivy import config DEFAULT_DIR = config.pathName normalizedTermFrequency = {} dictOFIDFNoDuplicates = {} def run_tfidf(dirname): #Here is where load needs to be called, disable other buttons if filename_ None def create_dirifles(fName = None): if fName is None: return "Select File/Folder to Generate PDF Keywords" docu = extract.extractTexttoarray(fName) docs = [] for indx in docu: docs.append(", ".join(map(str, indx))) return docs #---Calculate term frequency -- documents = create_dirifles(DEFAULT_DIR) #First: tokenize words dictOfWords = {} for index, sentence in enumerate(documents): tokenizedWords = sentence.split(' ') dictOfWords[index] = [(word,tokenizedWords.count(word)) for word in tokenizedWords] #print(dictOfWords) #second: remove duplicates termFrequency = {} for i in range(0, len(documents)): listOfNoDuplicates = [] for wordFreq in dictOfWords[i]: if wordFreq not in listOfNoDuplicates: listOfNoDuplicates.append(wordFreq) termFrequency[i] = listOfNoDuplicates #print(termFrequency) #Third: normalized term frequency #normalizedTermFrequency = {} for i in range(0, len(documents)): sentence = dictOfWords[i] lenOfSentence = len(sentence) listOfNormalized = [] for wordFreq in termFrequency[i]: normalizedFreq = wordFreq[1]/lenOfSentence listOfNormalized.append((wordFreq[0],normalizedFreq)) normalizedTermFrequency[i] = listOfNormalized #print(normalizedTermFrequency) #---Calculate IDF #First: put all sentences together and tokenze words allDocuments = '' for sentence in documents: allDocuments += sentence + ' ' allDocumentsTokenized = allDocuments.split(' ') #print(allDocumentsTokenized) allDocumentsNoDuplicates = [] for word in allDocumentsTokenized: if word not in allDocumentsNoDuplicates: allDocumentsNoDuplicates.append(word) #print(allDocumentsNoDuplicates) #Calculate the number of documents where the term t appears dictOfNumberOfDocumentsWithTermInside = {} for 
index, vocab in enumerate(allDocumentsNoDuplicates): count = 0 for sentence in documents: if vocab in sentence: count += 1 dictOfNumberOfDocumentsWithTermInside[index] = (vocab, count) #print(dictOfNumberOfDocumentsWithTermInside) #calculate IDF #dictOFIDFNoDuplicates = {} import math for i in range(0, len(normalizedTermFrequency)): listOfIDFCalcs = [] for word in normalizedTermFrequency[i]: for x in range(0, len(dictOfNumberOfDocumentsWithTermInside)): if word[0] == dictOfNumberOfDocumentsWithTermInside[x][0]: listOfIDFCalcs.append((word[0],math.log(len(documents)/dictOfNumberOfDocumentsWithTermInside[x][1]))) dictOFIDFNoDuplicates[i] = listOfIDFCalcs #return normalizedTermFrequency dictOFIDFNoDuplicates # for word,b in dictOFIDFNoDuplicates.items(): # print(word, ":",b) def PDF_keywords(): run_tfidf(DEFAULT_DIR) dictOFTF_IDF = {} bad_chars = [';', ':', '!', "*", "'", ")", ".", "-", "...", "(",',','``'] for i in range(0,len(normalizedTermFrequency)): listOFTF_IDF = [] TFIDF_Sort = {} TFsentence = normalizedTermFrequency[i] IDFsentence = dictOFIDFNoDuplicates[i] for doc_Keyidx in range(0, len(TFsentence)): if TFsentence[doc_Keyidx ][0] not in bad_chars and not TFsentence[doc_Keyidx][0].isdigit(): #listOFTF_IDF.append((TFsentence[x][0],TFsentence[x][1]*IDFsentence[x][1])) tf_Generated_Keywords = TFsentence[doc_Keyidx ][0] tf_keywordscores = TFsentence[doc_Keyidx][1]*IDFsentence[doc_Keyidx][1] #Need to format output text of tf_keywords,tf_keywordscores listOFTF_IDF.append((("%s" % (tf_Generated_Keywords)),tf_keywordscores)) TFIDF_Sort[tf_Generated_Keywords] = tf_keywordscores dictOFTF_IDF[i] = listOFTF_IDF #sort Functionality # pairs = [(word, tfidf) for word, tfidf in TFIDF_Sort.items()] # # Why by [1] ? # pairs.sort(key = lambda p: p[1]) # top_10 = pairs[-20:] # print("TOP 10 TFIDF") # pp.pprint(top_10) # print("BOTTOM 10 TFIDF") # pp.pprint(pairs[0:20]) return dictOFTF_IDF if __name__ == '__main__': run_tfidf(DEFAULT_DIR)
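The TF-IDF logic in this module (term frequency normalized by document length, times the log inverse document frequency) can be distilled into a compact, self-contained sketch. Note that, unlike the substring test (`if vocab in sentence`) used above for document frequency, this sketch counts exact token matches:

```python
import math

def tf_idf(documents):
    """Compute per-document TF-IDF scores, mirroring the logic above:
    TF is normalized by document length, IDF is log(N / doc_freq)."""
    tokenized = [doc.split() for doc in documents]
    n_docs = len(documents)
    # Document frequency: number of documents containing each term.
    doc_freq = {}
    for tokens in tokenized:
        for term in set(tokens):
            doc_freq[term] = doc_freq.get(term, 0) + 1
    scores = []
    for tokens in tokenized:
        length = len(tokens)
        scores.append({
            term: (tokens.count(term) / length) * math.log(n_docs / doc_freq[term])
            for term in set(tokens)
        })
    return scores

docs = ["the cat sat", "the dog sat", "the cat ran"]
scores = tf_idf(docs)
# "the" appears in every document, so its IDF -- and its TF-IDF -- is 0.
print(scores[0]["the"])  # 0.0
```

Sorting each document's score dict by value then yields the same keyword ranking that `PDF_keywords` builds from `normalizedTermFrequency` and `dictOFIDFNoDuplicates`.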
Adsys-PDFReaderTool
/Adsys_PDFReaderTool-0.0.1.tar.gz/Adsys_PDFReaderTool-0.0.1/term_frequency/tf_idf.py
tf_idf.py
import tf_idf as terms, extract import kivy,pprint,os,re from kivy.app import App from kivy.uix.gridlayout import GridLayout from kivy.uix.floatlayout import FloatLayout from kivy.uix.widget import Widget from kivy.uix.textinput import TextInput from kivy.uix.button import Button from kivy.uix.label import Label from kivy.uix.popup import Popup from kivy.properties import StringProperty from kivy.properties import ObjectProperty from kivy.factory import Factory import pandas as pd import config kivy.require('1.9.1') class LoadDialog(FloatLayout): load = ObjectProperty(None) cancel = ObjectProperty(None) class PdfReaderMainScreen(Widget): #instance variables textinputtext = StringProperty() dirtext = StringProperty() #Dont need to pass in FILENAME here consider refactoring method call files = extract.opendir(config.pathName) # Why is this returning nothing when it getting output tf_idf_Keywords = terms.PDF_keywords() loadfile = ObjectProperty(None) text_input = ObjectProperty(None) def __init__(self,currentPage=None,**kwargs): super(PdfReaderMainScreen, self).__init__(**kwargs) self.currentPage = 0 self.max = len(PdfReaderMainScreen.tf_idf_Keywords) self.textinputtext = str(PdfReaderMainScreen.tf_idf_Keywords) self.dirtext = str(PdfReaderMainScreen.files) def dismiss_popup(self): self._popup.dismiss() def show_load(self): content = LoadDialog(load=self.load, cancel=self.dismiss_popup) self._popup = Popup(title="Load file", content=content, size_hint=( 0.5,None), size=(400, 400)) self._popup.open() def load(self,path, filename): # filename = path #with open(os.path.join(path, filename[0])) as stream: #self.text_input.text = stream.read() # result = str(os.path.join(path, filename)) terms.run_tfidf(filename[0]) extract.opendir(filename[0]) print("FilePath " + str(filename[0])) self.dismiss_popup() #Really dont need to return anything here ! 
return filename[0] def generate_KeyWord_Btn(self): str_holder = "" # pd.set_option('display.max_columns', 100) # pd.set_option('display.max_rows', 500) # pd.set_option('display.max_columns', 500) # df = pd.DataFrame(PdfReaderMainScreen.tf_idf_Keywords[0],columns=['Term',' TDIDF']) for a_tuple in PdfReaderMainScreen.tf_idf_Keywords[0]: # iterates through each tuple str_holder+='{}, '.format(*a_tuple) self.textinputtext = str(str_holder) self.dirtext = str(PdfReaderMainScreen.files[0]) #self.textinputtext = ' term : {}'.format(str(terms.PDF_keywords()[0])) def generate_doclist(self): self.dirtext = str(PdfReaderMainScreen.files[self.currentPage]) def next_Btn(self): #Utlize instance variable to save the state if(self.currentPage <= self.max): try: print("Current Page %s" % (self.currentPage)) #increment the counter self.currentPage +=1 # pd.set_option('display.max_columns', 100) # pd.set_option('display.max_rows', 500) # pd.set_option('display.max_columns', 500) #df = pd.DataFrame(PdfReaderMainScreen.tf_idf_Keywords[self.currentPage],columns=['Term','TDIDF']) str_holder = "" for a_tuple in PdfReaderMainScreen.tf_idf_Keywords[self.currentPage]: # iterates through each tuple #Unpack tuple and format with comma str_holder+='{}, '.format(*a_tuple) #Unpack tuple and format with fix spaces #str_holder+='{:<20} {}\n'.format(*a_tuple) #Display the keywords in GUI/make it viewable self.textinputtext = str(str_holder) self.dirtext = str(PdfReaderMainScreen.files[self.currentPage]) except KeyError: self.textinputteIxt = "" self.currentPage = self.max print("Set to last Page %s" % (self.max)) return self.currentPage else: print("Page %s" % (self.currentPage)) return True def previous_Btn(self): if(self.currentPage > 0): print("Current Page %s" % (self.currentPage)) try: #decrement the counter str_holder = "" #increment the counter self.currentPage -=1 # pd.set_option('display.max_columns', 100) # pd.set_option('display.max_rows', 500) # pd.set_option('display.max_columns', 500) 
# df = pd.DataFrame(PdfReaderMainScreen.tf_idf_Keywords[self.currentPage],columns=['Term','TDIDF']) for a_tuple in PdfReaderMainScreen.tf_idf_Keywords[self.currentPage]: # iterates through each tuple #Unpack tuple and format with fix spaces str_holder+='{}, '.format(*a_tuple) #Display the keywords in GUI/make it viewable self.textinputtext = str(str_holder) self.dirtext = str(PdfReaderMainScreen.files[self.currentPage]) except KeyError: if self.currentPage == self.max: self.currentPage = self.max print('decrement the counter"') print("Previous Page %s" % (self.currentPage)) def run_model(): self.generate_KeyWord_Btn.disabled=True self.previous_Btn.disabled=True self.next_Btn = True t=Thread(target=run, args=()) t.start() Clock.schedule_interval(partial(disable, t), 8) def disable(t, what): if not t.isAlive(): self.load.disabled=False self.generate_KeyWord_Btn.disabled=False self.previous_Btn.disabled=False self.next_Btn= False return False class PdfReaderUI(Widget): pass class PdfReaderApp(App): def build(self): return PdfReaderUI() Factory.register('LoadDialog', cls=LoadDialog) if __name__ == '__main__': PdfReaderApp().run()
Adsys-PDFReaderTool
/Adsys_PDFReaderTool-0.0.1.tar.gz/Adsys_PDFReaderTool-0.0.1/term_frequency/PdfReader.py
PdfReader.py
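The `next_Btn`/`previous_Btn` handlers in `PdfReader.py` above page through documents by incrementing a counter and relying on a `KeyError` to detect running past the end. The same cursor logic can be expressed with explicit clamping, independent of any GUI toolkit (a sketch with hypothetical names, not the widget's actual API):

```python
class Pager:
    """Minimal page cursor with clamped bounds; the Kivy widget above
    mixes this logic with display code and exception handling."""
    def __init__(self, num_pages):
        self.current = 0
        self.last = max(num_pages - 1, 0)

    def next(self):
        self.current = min(self.current + 1, self.last)
        return self.current

    def previous(self):
        self.current = max(self.current - 1, 0)
        return self.current

pager = Pager(num_pages=3)
pager.next()      # -> 1
pager.next()      # -> 2
pager.next()      # -> 2: clamped at the last page instead of raising KeyError
pager.previous()  # -> 1
```

Clamping up front means the display code never has to repair `currentPage` after an out-of-range lookup.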
from pprint import pprint from collections import defaultdict import PyPDF2 from os import listdir from os.path import isfile, join import pprint as pp import nltk from nltk.tokenize import sent_tokenize, word_tokenize from nltk.corpus import stopwords import extract filename = "/Users/dontesmall/Desktop/pdf_test_folder" CORPUS = extract.extractTexttoarray((filename)) documents = [] for indx in CORPUS: documents.append(", ".join(map(str, indx))) # Format of the corpus is that each newline has a new 'document' # CORPUS = """ # In information retrieval, tfโ€“idf or TFIDF, short for term frequencyโ€“inverse document frequency, is a numerical statistic that is intended to reflect how important a word is to a document in a collection or corpus.[1] It is often used as a weighting factor in searches of information retrieval, text mining, and user modeling. The tfโ€“idf value increases proportionally to the number of times a word appears in the document and is offset by the number of documents in the corpus that contain the word, which helps to adjust for the fact that some words appear more frequently in general. Tfโ€“idf is one of the most popular term-weighting schemes today; 83% of text-based recommender systems in digital libraries use tfโ€“idf. # LeBron Raymone James Sr. (/lษ™หˆbrษ’n/; born December 30, 1984), often referred to mononymously as LeBron, is an American professional basketball player for the Los Angeles Lakers of the National Basketball Association (NBA). He is often considered the best basketball player in the world and regarded by some as the greatest player of all time.[1][2][3][4] His accomplishments include four NBA Most Valuable Player Awards, three NBA Finals MVP Awards, and two Olympic gold medals. James has appeared in fifteen NBA All-Star Games and been named NBA All-Star MVP three times. He won the 2008 NBA scoring title, is the all-time NBA playoffs scoring leader, and is fourth in all-time career points scored. 
He has been voted onto the All-NBA First Team twelve times and the All-Defensive First Team five times. # Marie Skล‚odowska Curie (/หˆkjสŠษ™ri/;[3] French: [kyสi]; Polish: [kสฒiหˆri]; born Maria Salomea Skล‚odowska;[a] 7 November 1867 โ€“ 4 July 1934) was a Polish and naturalized-French physicist and chemist who conducted pioneering research on radioactivity. She was the first woman to win a Nobel Prize, the first person and only woman to win twice, and the only person to win a Nobel Prize in two different sciences. She was part of the Curie family legacy of five Nobel Prizes. She was also the first woman to become a professor at the University of Paris, and in 1995 became the first woman to be entombed on her own merits in the Panthรฉon in Paris. # """.strip().lower() DOC_ID_TO_TF = {} # doc-id -> {tf: term_freq_map where term_freq_map is word -> percentage of words in doc that is this one, CORPUS_CONTINER = str(documents).strip('[]') # tfidf: ...} DOCS = CORPUS_CONTINER.split("\n") # Documents where the index is the doc id WORDS = CORPUS_CONTINER.split() DF = defaultdict(lambda: 0) for word in WORDS: DF[word] += 1 for doc_id, doc in enumerate(DOCS): #print("HERE IS THE DOCS :" + str(DOCS)) #Num of times of the word showed up in doc TF = defaultdict(lambda: 0) TFIDF = {} doc_words = doc.split() word_count = len(doc_words) # percentage of words in doc that is this one = count of this word in this doc / total number of words in this doc for word in doc_words: # Here is the total num of count TF[word] +=1 for word in TF.keys(): TF[word] /= word_count TFIDF[word] = TF[word] / DF[word] # loop over tfidt to sort it as a map pairs = [(word, tfidf) for word, tfidf in TFIDF.items()] # Why by [1] ? pairs.sort(key = lambda p: p[1]) top_10 = pairs[-15:] print("TOP 10 TFIDF") pprint(top_10) print("BOTTOM 10 TFIDF") pprint(pairs[0:15]) DOC_ID_TO_TF[doc_id] = {'tf': TF, 'tfidf': TFIDF} # pprint(DOC_ID_TO_TF)
Adsys-PDFReaderTool
/Adsys_PDFReaderTool-0.0.1.tar.gz/Adsys_PDFReaderTool-0.0.1/term_frequency/tfidf.py
tfidf.py
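Note that `tfidf.py` above builds `DF` by incrementing once per *occurrence* across `WORDS`, so `DF[word]` is a corpus-wide occurrence count rather than a document frequency, and the score is computed as `TF / DF` rather than the conventional `TF * log(N / df)`. A corrected sketch of the same loop structure (toy corpus and names are mine):

```python
import math
from collections import defaultdict
from pprint import pprint

DOCS = ["tf idf weights terms",
        "lebron plays basketball",
        "curie won nobel prizes"]

# Document frequency: count each word at most once per document
DF = defaultdict(int)
for doc in DOCS:
    for word in set(doc.split()):
        DF[word] += 1

DOC_ID_TO_TFIDF = {}
for doc_id, doc in enumerate(DOCS):
    words = doc.split()
    TF = defaultdict(int)
    for word in words:
        TF[word] += 1
    # Conventional weighting: normalized TF times log inverse document frequency
    DOC_ID_TO_TFIDF[doc_id] = {w: (c / len(words)) * math.log(len(DOCS) / DF[w])
                               for w, c in TF.items()}

pprint(DOC_ID_TO_TFIDF[0])
```

With the occurrence-count version, common words get *divided* by a large number too, so the ranking can look plausible on small corpora, which is likely why the discrepancy went unnoticed.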
from .events import AddEvent, SetReadingEvent, SetFinishedEvent, ReadEvent, \
    KindleEvent


class ReadingStatus(object):
    """An enum representing the three possible progress states of a book."""
    NOT_STARTED, CURRENT, COMPLETED = xrange(3)


class BookSnapshot(object):
    """A book's state of progress.

    Args:
        asin: The ASIN of the book
        status: The book's ReadingStatus value
        progress: An integral value representing the current reading
            progress. This value is meaningless unless `status` is CURRENT
            as progress is untracked for books not currently being read.
    """
    def __init__(self, asin, status=ReadingStatus.NOT_STARTED, progress=None):
        self.asin = asin
        self.status = status
        self.progress = progress


class KindleLibrarySnapshot(object):
    """A snapshot of the state of a Kindle library.

    Args:
        events: An iterable of ``KindleEvent``s which are applied in
            sequence to build the snapshot's state.
    """
    def __init__(self, events=()):
        self._data = {}
        for event in events:
            self.process_event(event)

    def process_event(self, event):
        """Apply an event to the snapshot instance"""
        if not isinstance(event, KindleEvent):
            pass
        elif isinstance(event, AddEvent):
            self._data[event.asin] = BookSnapshot(event.asin)
        elif isinstance(event, SetReadingEvent):
            self._data[event.asin].status = ReadingStatus.CURRENT
            self._data[event.asin].progress = event.initial_progress
        elif isinstance(event, ReadEvent):
            self._data[event.asin].progress += event.progress
        elif isinstance(event, SetFinishedEvent):
            self._data[event.asin].status = ReadingStatus.COMPLETED
        else:
            raise TypeError

    def get_book(self, asin):
        """Return the `BookSnapshot` object associated with `asin`

        Raises:
            KeyError: If asin not found in current snapshot
        """
        return self._data[asin]

    def calc_update_events(self, asin_to_progress):
        """Calculate and return an iterable of `KindleEvent`s which, when
        applied to the current snapshot, result in the current snapshot
        reflecting the progress state of the `asin_to_progress` mapping.

        Functionally, this method generates `AddEvent`s and `ReadEvent`s
        from updated Kindle Library state.

        Args:
            asin_to_progress: A map of book asins to the integral
                representation of progress used in the current snapshot.

        Returns:
            A list of Event objects that account for the changes detected
            in the `asin_to_progress`.
        """
        new_events = []
        for asin, new_progress in asin_to_progress.iteritems():
            try:
                book_snapshot = self.get_book(asin)
            except KeyError:
                new_events.append(AddEvent(asin))
            else:
                if book_snapshot.status == ReadingStatus.CURRENT:
                    change = new_progress - book_snapshot.progress
                    if change > 0:
                        new_events.append(ReadEvent(asin, change))
        return new_events
Aduro
/Aduro-0.0.1a0.tar.gz/Aduro-0.0.1a0/aduro/snapshot.py
snapshot.py
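`KindleLibrarySnapshot` above rebuilds library state by replaying a log of events in order, the event-sourcing pattern. A self-contained miniature of the same idea (these class names are illustrative, not the Aduro API):

```python
class Add:
    """Event: a book identified by `asin` enters the library."""
    def __init__(self, asin):
        self.asin = asin

class Read:
    """Event: progress in `asin` advances by `amount`."""
    def __init__(self, asin, amount):
        self.asin = asin
        self.amount = amount

class Snapshot:
    """Fold a sequence of events into a mapping of asin -> progress,
    the way KindleLibrarySnapshot folds KindleEvents into BookSnapshots."""
    def __init__(self, events=()):
        self.progress = {}
        for event in events:
            self.apply(event)

    def apply(self, event):
        if isinstance(event, Add):
            self.progress[event.asin] = 0
        elif isinstance(event, Read):
            self.progress[event.asin] += event.amount
        else:
            raise TypeError(event)

# Replaying the same log always reproduces the same state
log = [Add("B000FC0SIS"), Read("B000FC0SIS", 40), Read("B000FC0SIS", 25)]
print(Snapshot(log).progress)  # {'B000FC0SIS': 65}
```

Because state is a pure function of the log, persisting only the events (as the `EventStore` in the next module does) is enough to reconstruct everything.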
from .events import UpdateEvent
from .snapshot import KindleLibrarySnapshot

from lector.reader import KindleCloudReaderAPI, KindleAPIError

from datetime import datetime


class KindleProgressMgr(object):
    """Manages the Kindle reading progress state held in the
    `EventStore` instance, `store`

    Args:
        store: An `EventStore` instance containing the past events
        kindle_uname: The email associated with the Kindle account
        kindle_pword: The password associated with the Kindle account
    """
    def __init__(self, store, kindle_uname, kindle_pword):
        self.store = store
        self._snapshot = KindleLibrarySnapshot(store.get_events())
        self._event_buf = []
        self.uname = kindle_uname
        self.pword = kindle_pword
        self.books = None
        self.progress = None

    @property
    def uncommited_events(self):
        """A logically sorted list of `Event`s that have been registered
        to be committed to the current object's state but remain
        uncommitted.
        """
        return list(sorted(self._event_buf))

    def detect_events(self, max_attempts=3):
        """Returns a list of `Event`s detected from differences in state
        between the current snapshot and the Kindle Library.

        `books` and `progress` attributes will be set with the latest API
        results upon successful completion of the function.

        Returns:
            If failed to retrieve progress, None
            Else, the list of `Event`s
        """
        # Attempt to retrieve current state from KindleAPI
        for _ in xrange(max_attempts):
            try:
                with KindleCloudReaderAPI\
                        .get_instance(self.uname, self.pword) as kcr:
                    self.books = kcr.get_library_metadata()
                    self.progress = kcr.get_library_progress()
            except KindleAPIError:
                continue
            else:
                break
        else:
            return None

        # Calculate diffs from new progress
        progress_map = {book.asin: self.progress[book.asin].locs[1]
                        for book in self.books}
        new_events = self._snapshot.calc_update_events(progress_map)
        update_event = UpdateEvent(datetime.now().replace(microsecond=0))
        new_events.append(update_event)
        self._event_buf.extend(new_events)
        return new_events

    def register_events(self, events=()):
        """Register `Event` objects in `events` to be committed.

        NOTE: This does not automatically commit the events. A separate
        `commit_events` call must be made to make the commit.
        """
        self._event_buf.extend(events)

    def commit_events(self):
        """Applies all outstanding `Event`s to the internal state"""
        # Events are sorted such that, when applied in order, each event
        # represents a logical change in state. That is, an event never
        # requires future events' data in order to be parsed.
        #   e.g. All ADDs must go before START READINGs
        #        All START READINGs before all READs
        for event in sorted(self._event_buf):
            self.store.record_event(event)
            self._snapshot.process_event(event)
        self._event_buf = []
Aduro
/Aduro-0.0.1a0.tar.gz/Aduro-0.0.1a0/aduro/manager.py
manager.py
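`detect_events` above uses Python's for/else retry idiom: the loop `break`s on the first successful API call, and the `else` clause, which runs only if the loop finished without a `break`, handles the case where every attempt failed. A stripped-down sketch of that control flow (`OSError` stands in for `KindleAPIError`; names are illustrative):

```python
def retry(operation, max_attempts=3):
    """Run operation() up to max_attempts times; return None if all fail.

    Mirrors the for/else loop in detect_events: `continue` on failure,
    `break` on success, and the loop's `else` runs only when no attempt
    ever broke out, i.e. all attempts failed."""
    for _ in range(max_attempts):
        try:
            result = operation()
        except OSError:
            continue
        else:
            break
    else:
        return None
    return result

attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise OSError("transient failure")
    return "ok"

def always_fail():
    raise OSError("permanently down")

print(retry(flaky))        # succeeds on the third attempt: ok
print(retry(always_fail))  # exhausts all attempts: None
```

The `else`-on-`try` inside the loop is also deliberate: `break` lives there so that only a *successful* call ends the retries, never an exception.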
import re

import dateutil.parser

POSITION_MEASURE = 'LOCATION'


class EventParseError(Exception):
    """Indicate an error in parsing an event from a string"""
    pass


class Event(object):
    """A base event."""
    pass


class KindleEvent(Event):
    """A base kindle event.

    Establishes sortability of Events based on the `weight` property
    """
    _WEIGHT = None
    asin = None

    @property
    def weight(self):
        """Define the sorting order of events"""
        return self._WEIGHT

    @staticmethod
    def from_str(string):
        """Generate a `KindleEvent`-type object from a string"""
        raise NotImplementedError

    def __eq__(self, other):
        return self.weight == other.weight and self.asin == other.asin

    def __lt__(self, other):
        # Compare by weight first, breaking ties by asin. Combining the
        # two comparisons with `and` (as originally written) produces an
        # inconsistent partial ordering that breaks sorted().
        return (self.weight, self.asin) < (other.weight, other.asin)

    def __gt__(self, other):
        return (self.weight, self.asin) > (other.weight, other.asin)

    def __ne__(self, other):
        return not self == other


class AddEvent(KindleEvent):
    """Represent the addition of a book to the Kindle Library"""
    _WEIGHT = 0

    def __init__(self, asin):
        super(AddEvent, self).__init__()
        self.asin = asin

    def __str__(self):
        return 'ADD %s' % (self.asin,)

    @staticmethod
    def from_str(string):
        """Generate an `AddEvent` object from a string"""
        match = re.match(r'^ADD (\w+)$', string)
        if match:
            return AddEvent(match.group(1))
        else:
            raise EventParseError


class SetReadingEvent(KindleEvent):
    """Represents the user's desire to record progress of a book"""
    _WEIGHT = 1

    def __init__(self, asin, initial_progress):
        super(SetReadingEvent, self).__init__()
        self.asin = asin
        self.initial_progress = initial_progress

    def __str__(self):
        return 'START READING %s FROM %s %d' % (self.asin, POSITION_MEASURE,
                                                self.initial_progress)

    @staticmethod
    def from_str(string):
        """Generate a `SetReadingEvent` object from a string"""
        match = re.match(r'^START READING (\w+) FROM \w+ (\d+)$', string)
        if match:
            return SetReadingEvent(match.group(1), int(match.group(2)))
        else:
            raise EventParseError


class ReadEvent(KindleEvent):
    """Represents the advance of a user's progress in a book"""
    _WEIGHT = 2

    def __init__(self, asin, progress):
        super(ReadEvent, self).__init__()
        # Validate before assigning (the original assigned first)
        if progress <= 0:
            raise ValueError('Progress field must be positive')
        self.asin = asin
        self.progress = progress

    def __str__(self):
        return 'READ %s FOR %d %sS' % (self.asin, self.progress,
                                       POSITION_MEASURE)

    @staticmethod
    def from_str(string):
        """Generate a `ReadEvent` object from a string"""
        match = re.match(r'^READ (\w+) FOR (\d+) \w+S$', string)
        if match:
            return ReadEvent(match.group(1), int(match.group(2)))
        else:
            raise EventParseError


class SetFinishedEvent(KindleEvent):
    """Represents a user's completion of a book"""
    _WEIGHT = 3

    def __init__(self, asin):
        super(SetFinishedEvent, self).__init__()
        self.asin = asin

    def __str__(self):
        return 'FINISH READING %s' % (self.asin,)

    @staticmethod
    def from_str(string):
        """Generate a `SetFinishedEvent` object from a string"""
        match = re.match(r'^FINISH READING (\w+)$', string)
        if match:
            return SetFinishedEvent(match.group(1))
        else:
            raise EventParseError


class UpdateEvent(Event):
    """Represents a user's update of the Kindle database"""
    def __init__(self, a_datetime):
        super(UpdateEvent, self).__init__()
        self.datetime_ = a_datetime

    def __str__(self):
        return 'UPDATE %s' % self.datetime_.isoformat()

    @staticmethod
    def from_str(string):
        """Generate an `UpdateEvent` object from a string"""
        match = re.match(r'^UPDATE (.+)$', string)
        if match:
            parsed_date = dateutil.parser.parse(match.group(1), ignoretz=True)
            return UpdateEvent(parsed_date)
        else:
            raise EventParseError
Aduro
/Aduro-0.0.1a0.tar.gz/Aduro-0.0.1a0/aduro/events.py
events.py
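Each event class in `events.py` pairs a `__str__` that renders a log line with a `from_str` that parses it back via a regex, so the event store can be a plain text file. A self-contained sketch of that round-trip for one event type (mirroring the pattern, not importing the package):

```python
import re

class ReadEvent:
    """String-serializable event: __str__ renders one log line and
    from_str parses it back, as in the Aduro events module."""
    def __init__(self, asin, progress):
        if progress <= 0:
            raise ValueError("Progress field must be positive")
        self.asin = asin
        self.progress = progress

    def __str__(self):
        return "READ %s FOR %d LOCATIONS" % (self.asin, self.progress)

    @staticmethod
    def from_str(string):
        match = re.match(r"^READ (\w+) FOR (\d+) \w+S$", string)
        if not match:
            raise ValueError("unparseable event: %r" % string)
        return ReadEvent(match.group(1), int(match.group(2)))

line = str(ReadEvent("B000FC0SIS", 17))
event = ReadEvent.from_str(line)
print(line)  # READ B000FC0SIS FOR 17 LOCATIONS
```

Anchoring each regex with `^...$` is what keeps a line that merely *contains* an event string from being misparsed as one.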
Adv2
====

This package provides a 'reader' for .adv (AstroDigitalVideo) Version 2 files.

It is the result of a collaborative effort involving Bob Anderson and Hristo Pavlov.

The specification for Astro Digital Video files can be found at:
<http://www.astrodigitalvideoformat.org/spec.html>

To install this package on your system:

    pip install Adv2

Then, sample usage from within your Python code is:

    from pathlib import Path
    from Adv2.Adv2File import Adv2reader

    try:
        # Create a platform agnostic path to your .adv file (use forward slashes)
        file_path = str(Path('path/to/your/file.adv'))  # Python will make Windows version as needed

        # Create a 'reader' for the given file
        rdr = Adv2reader(file_path)
    except AdvLibException as adverr:
        print(repr(adverr))
        exit()
    except IOError as ioerr:
        print(repr(ioerr))
        exit()

Now that the file has been opened and a 'reader' (rdr) created for it, there are
instance variables available that will be useful. Here is how to print some of
those out (these give the image size and number of images in the file):

    print(f'Width: {rdr.Width}  Height: {rdr.Height}  NumMainFrames: {rdr.CountMainFrames}')

There is also a composite instance variable called `FileInfo` which gives access
to all of the values defined in the structure `AdvFileInfo` (there are 20 of them).

For example:

    print(rdr.FileInfo.UtcTimestampAccuracyInNanoseconds)

To get (and show) the file metadata (returned as a Dict[str, str]):

    print(f'\nADV_FILE_META_DATA:')
    meta_data = rdr.getAdvFileMetaData()
    for key in meta_data:
        print(f'    {key}: {meta_data[key]}')

The main thing that one will want to do is read image data, timestamps, and
frame status information from image frames.

Continuing with the example and assuming that the adv file contains a MAIN
stream (it might also contain a CALIBRATION stream):

    for frame in range(rdr.CountMainFrames):
        # status is a Dict[str, str]
        err, image, frameInfo, status = rdr.getMainImageAndStatusData(frameNumber=frame)
        # To get frames from a CALIBRATION stream, use rdr.getCalibImageAndStatusData()

        if not err:
            # If timestamp info was not present in the file (highly unlikely),
            # the timestamp string returned will be empty (== '')
            if frameInfo.StartOfExposureTimestampString:
                print(frameInfo.DateString, frameInfo.StartOfExposureTimestampString)
            print(f'\nframe: {frame} STATUS:')
            for entry in status:
                print(f'    {entry}: {status[entry]}')
        else:
            print(err)

`err` is a string that will be empty if image bytes and metadata were
successfully extracted. In that case, `image` will contain a numpy array of
uint16 values. If `err` is not empty, it will contain a human-readable
description of the error encountered.

The 'shape' of image will be `image[Height, Width]` for grayscale images.
Color video files are not yet supported.

Finally, the file should be closed as in the example below:

    print(f'closeFile returned: {rdr.closeFile()}')
    rdr = None

The value returned will be the version number (2) of the file closed, or 0,
which indicates an attempt to close a file that was already closed.
Adv2
/Adv2-1.2.0.tar.gz/Adv2-1.2.0/README.md
README.md
from state_machine import StateReader # Some imports import itertools import warnings import console import re # Colour class to make output more pretty. # class Colour: PURPLE = '\033[95m' CYAN = '\033[96m' DARKCYAN = '\033[36m' BLUE = '\033[94m' GREEN = '\033[92m' YELLOW = '\033[93m' RED = '\033[91m' BOLD = '\033[1m' UNDERLINE = '\033[4m' END = '\033[0m' # Some variables to be used. __VALUE__ = '__VALUE__' __PREFIX__ = '__PREFIX__' __FIELD__ = '__FIELD__' # Indicates that a master command can have a 1 to 1 binding # to another parameter afterwards. __BINDING__ = '__bondage__' # Help handle binding __HELPER__ = '__helper__' # Version handle binding __VERSION__ = '__version__' # Helper to indicate that a boolean is being stored __ACTIVE__ = '__kinkysex__' # Binds an outside function to the failsafe handler. # Without a bound function a default error message will be # printed instead. # => failsafe_function # Values to be used in the options tree __ALIASES__ = '__alias__' __FUNCT__ = '__funct__' __DATAFIELD__ = '__defdat__' __TYPE__ = '__type__' # Notes are used to describe commands __NOTE__ = '__note__' # Labels are used to sub-divide master/ sub commands __LABEL__ = '__label__' # Main class for the python advanced options parser. # class AdvOptParse: def __init__(self, masters = None): self.__set_masters(masters) self.failsafe_function = None self.container_name = None self.fields_name = "Fields" self.slave_fields = None self.hidden_subs = False self.debug = False if masters == {}: self.has_commands = False # Set the name of the container application that's used in the help screen # def set_container_name(self, name): self.container_name = name # Set the name of the fields for the help screen # def set_fields_name(self, name): self.fields_name = name # Hash of master level commands. CAN contain a global function to determine actions of # subcommands. # (See docs). # def __set_masters(self, masters): if masters == None: warnings.warn("Warning! 
You shouldn't init a parser without your master commands set!") # self.masters = master self.opt_hash = {} for key, value in masters.iteritems(): self.opt_hash[key] = {} self.opt_hash[key][__FUNCT__] = value[0] self.opt_hash[key][__NOTE__] = value[1] self.set_master_aliases(key, []) self.set_master_fields(key, False) self.has_commands = True # Setup the version and helper handle. By default '-h' and '--version' are set to 'True' self.opt_hash[__HELPER__] = { __ALIASES__: ['-h'], __ACTIVE__: True} self.opt_hash[__VERSION__] = { __ALIASES__: ['--version'], __ACTIVE__: True} # Takes the master level command and a hash of data # The hash of data needs to be formatted in the following sense: # {'X': funct} where X is any variable, option or command INCLUDING DASHES AND DOUBLE DASHES you want # to add to your parser. # Additionally you pass a function from your parent class that gets called when this option is detected in a # string that is being parsed. The function by detault takes three parameters: # # master command (i.e. copy), parent option (i.e. '-v'), data field default (i.e. 'false'). So in an example for # # "clone -L 2" # # it would call the function: func('clone', '-L', '2') in the specified container class/ env. # # 'use' parameters include: 'value' : -v # (WORK IN PROGRESS) 'prefix': --logging true # 'field' : --file=/some/data # def add_suboptions(self, master, data): if master not in self.opt_hash: self.opt_hash[master] = {} for key, value in data.iteritems(): if value[1] == __PREFIX__: warnings.warn("Not implemented yet") ; return if key not in self.opt_hash[master]: self.opt_hash[master][key] = {} self.opt_hash[master][key][__ALIASES__] = [key] self.opt_hash[master][key][__TYPE__] = value[1] self.opt_hash[master][key][__DATAFIELD__] = value[0] self.opt_hash[master][key][__NOTE__] = value[2] # Create aliases for a master command that invoke the same # functions as the actual master command. 
# # This can be used to shorten commands that user need to # input (such as 'rails server' vs 'rails s' does it) # def set_master_aliases(self, master, aliases): if master not in self.opt_hash: warnings.warn("Could not identify master command. Aborting!") ; return if master not in aliases: aliases.append(master) self.opt_hash[master][__ALIASES__] = aliases # Allow a master command to bind to a sub field. def set_master_fields(self, master, fields): self.opt_hash[master][__BINDING__] = fields # Define a failsafe function to handle failed parsing attempts # If no function was registered default logging to STOUT will be used # def register_failsafe(self, funct): self.failsafe_function = funct # Defines the helper handles that are used to print the help screen. # def define_help_handle(self, helpers): self.opt_hash[__HELPER__][__ALIASES__] = helpers # Enable the helper handle (and list it in the helper screen) # def set_help_handle(self, boolean): self.opt_hash[__HELPER__][__ACTIVE__] = boolean # Defines the version handles that are used to print the version number of . # def define_version_handle(self, versions): self.opt_hash[__VERSION__][__ALIASES__] = versions # Enables the subs to be hidden from the help screen and thus only prints master, level commands # def set_hidden_subs(self, boolean): self.hidden_subs = boolean # Enable the version handle (and list it in the helper screen) # def set_version_handle(self, boolean): self.opt_hash[__VERSION__][__ACTIVE__] = boolean # Set a version string to be printed in the help screen and/or version handle def set_container_version(self, version): self.container_version = version # Define fields for a command that gets handled above sub commands. # Slave fields should be a hash with a string key and tuple value attached # to it. A one tuple can also be replaced with the actual information. # So: # 'field' => ('information', "Description of the field") # and # 'field' => 'information' # are both valid field types. 
# def define_fields(self, fields): if self.slave_fields == None: self.slave_fields = {} for key, value in fields.iteritems(): if key not in self.slave_fields: self.slave_fields[key] = {} self.slave_fields[key] = value # Key is the name of a field. # Value is a tuple of information to be passed down to a callback function when the field is # triggered. # A one tuple can also be replaced with the actual information # # So: # 'key' => ('information', "Description of the field") # and # 'key' => 'information' # are both valid field types. # def add_field(self, key, value): if self.slave_fields == None: self.slave_fields = {} self.slave_fields[key] = value # Create aliases for a sub command that invoke the same # functions as the actual sub command. # # This can be used to shorten commands that user need to # input (such as 'poke copy --file' vs 'poke copy -f') # # Can be combined with master alises to make short and nicely # cryptic commands: # poke server cp -f=~/file -t=directory/ # # == USAGE == # Specify the master level command as the first parameter. # Then use a hash with the original subs as the indices and # the aliases in a list as values. This allows for ALL aliases for # a master level command to be set at the same time without having # to call this function multiple times. # def sub_aliases(self, master, aliases): if master not in self.opt_hash: if self.debug: print "[DEBUG]:", "Could not identify master command. Aborting!" ; return for key, value in aliases.iteritems(): if key not in self.opt_hash[master]: warnings.warn("Could not identify sub command. Skipping") ; continue self.opt_hash[master][key][__ALIASES__] = value + list(set(self.opt_hash[master][key][__ALIASES__]) - set(value)) # Enables debug mode on the parser. # Will for example output the parsed and translated/ chopped strings to the console. 
# def enable_debug(self): self.debug = True # Print tree of options hashes and bound slave fields to master commands # def print_tree(self): if self.debug: print "[DEBUG]:", self.opt_hash # Parse a string either from a method parameter or from a commandline # argument. Calls master command functions with apropriate data attached # to it. # def parse(self, c = None): if self.slave_fields == None: self.define_fields({}) for alias in self.opt_hash[__HELPER__][__ALIASES__]: if c == alias: self.help_screen() return for alias in self.opt_hash[__VERSION__][__ALIASES__]: if c == alias: print self.container_version return content = StateReader().make(c) # content = (sys.args if (c == None) else c.split()) counter = 0 master_indices = [] focus = None for item in content: for master in self.opt_hash: if "__" not in master: if item in self.opt_hash[master][__ALIASES__]: master_indices.append(counter) counter += 1 counter = 0 skipper = False wait_for_slave = False master_indices.append(len(content)) # print master_indices # This loop iterates over the master level commands # of the to-be-parsed string for index in master_indices: if (counter + 1) < len(master_indices): # print (counter + 1), len(master_indices) data_transmit = {} subs = [] sub_counter = 0 slave_field = None has_slave = False # This loop iterates over the sub-commands of several master commands. 
# for cmd in itertools.islice(content, index, master_indices[counter + 1] + 1): # print sub_counter if sub_counter == 0: focus = self.__alias_to_master(cmd) # print focus, cmd if focus in self.opt_hash: if self.opt_hash[focus][__BINDING__]: wait_for_slave = True sub_counter += 1 continue else: rgged = cmd.replace('=', '=****').split('****') for sub_command in rgged: # print "Sub command:", sub_command if skipper: skipper = False continue if "=" in sub_command: sub_command = sub_command.replace('=', '') trans_sub_cmd = self.__alias_to_sub(focus, sub_command) if trans_sub_cmd in self.opt_hash[focus]: data_transmit[trans_sub_cmd] = rgged[1] skipper = True if trans_sub_cmd not in subs: subs.append(trans_sub_cmd) else: if wait_for_slave: has_slave = True wait_for_slave = False if sub_command in self.slave_fields: slave_field = (sub_command, self.slave_fields[sub_command]) else: if self.failsafe_function == None: print "An Error occured while parsing arguments." else: self.failsafe_function(cmd, 'Unknown field!') return continue trans_sub_cmd = self.__alias_to_sub(focus, sub_command) if trans_sub_cmd == None: if sub_command in self.opt_hash: if self.opt_hash[sub_command][__BINDING__]: # if self.debug: print "Waiting for slave field..." wait_for_slave = True continue if trans_sub_cmd in self.opt_hash[focus]: data_transmit[trans_sub_cmd] = True if trans_sub_cmd not in subs: subs.append(trans_sub_cmd) sub_counter += 1 self.opt_hash[focus][__FUNCT__](focus, slave_field, subs, data_transmit) return counter += 1 if self.failsafe_function == None: print "Error! No arguments recognised." else: self.failsafe_function(content, 'Invalid Options') # Return false if nothing was handled for container application to be able to raise Warning # Generates a help screen for the container appliction. 
    #
    def help_screen(self):
        if self.slave_fields == None:
            self.define_fields({})
        _s_ = " "
        _ds_ = " "
        _dds_ = " "
        # print "%-5s" % "Usage: Poke [Options]"
        # if self.debug: print "[DEBUG]: Your terminal's width is: %d" % width
        if not self.container_name and self.debug:
            print "[DEBUG]: Container application name unknown!"
            self.container_name = "default"
        if not self.opt_hash:
            print "Usage:", self.container_name
        else:
            print "Usage:", self.container_name, "[options]"
        if self.opt_hash[__VERSION__][__ACTIVE__] or self.opt_hash[__HELPER__][__ACTIVE__]:
            print ""
            print _s_ + "General:"
        if self.opt_hash[__VERSION__][__ACTIVE__]:
            print _ds_ + "%-20s %s" % (self.__clean_aliases(self.opt_hash[__VERSION__][__ALIASES__]), "Print the version of"), "'%s'" % self.container_name
        if self.opt_hash[__HELPER__][__ACTIVE__]:
            print _ds_ + "%-20s %s" % (self.__clean_aliases(self.opt_hash[__HELPER__][__ALIASES__]), "Print this help screen")
        if self.opt_hash and self.has_commands:
            print ""
            print _s_ + "Commands:"
            for key, value in self.opt_hash.iteritems():
                if "__" not in key:
                    print _ds_ + "%-20s %s" % (self.__clean_aliases(value[__ALIASES__]), value[__NOTE__])
                    if not self.hidden_subs:
                        for k, v in self.opt_hash[key].iteritems():
                            if "__" not in k:
                                print _dds_ + "%-22s %s" % (self.__clean_aliases(v[__ALIASES__]), v[__NOTE__])
        print ""
        if self.slave_fields:
            print _s_ + self.fields_name + ":"
            for key, value in self.slave_fields.iteritems():
                description = str(value)[1:-1].replace("\'", "")
                print _ds_ + "%-20s %s" % (key, description)

    def __clean_aliases(self, aliases):
        string = ""
        counter = 0
        for alias in aliases:
            counter += 1
            string += alias
            if counter < len(aliases):
                string += ", "
        return string

    def __alias_to_master(self, alias):
        for key, value in self.opt_hash.iteritems():
            if "__" not in key:
                for map_alias in value[__ALIASES__]:
                    if alias == map_alias:
                        return key
        return None

    def __alias_to_sub(self, master, alias):
        for key, value in self.opt_hash[master].iteritems():
            if "__" not in key:
                if alias in value[__ALIASES__]:
                    return key
        return None
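The two private helpers above do a reverse lookup from a user-typed alias to the canonical command key, skipping internal `__`-prefixed entries. A minimal stand-alone sketch of the same idea in Python 3 (the function name and the `"aliases"` key are hypothetical, not part of AdvOptParse):

```python
def alias_to_master(opt_hash, alias, aliases_key="aliases"):
    """Reverse lookup: which master command owns this alias?"""
    for key, value in opt_hash.items():
        # internal bookkeeping entries are marked with double underscores
        if "__" not in key and alias in value[aliases_key]:
            return key
    return None

opts = {"__meta__": {}, "build": {"aliases": ["build", "-b"]}}
```

Linear scan over a small option table is adequate here; with many aliases one would precompute an alias-to-key dict instead.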
AdvOptParse
/AdvOptParse-0.2.13.tar.gz/AdvOptParse-0.2.13/advoptparse/parser.py
parser.py
# -----------------------------------------------------------
# AdvaS Advanced Search 0.2.5
# advanced search algorithms implemented as a python module
# advas core module
#
# (C) 2002 - 2012 Frank Hofmann, Berlin, Germany
# Released under GNU Public License (GPL)
# email [email protected]
# -----------------------------------------------------------

# other modules required by advas
import string
import re
import math

class Advas:
    def __init__(self):
        "init an Advas object"
        self.initFilename()
        self.initLine()
        self.initWords()
        self.initList()
        #self.init_ngrams()
        return

    def reInit (self):
        "re-initializes an Advas object"
        self.__init__()
        return

    # basic functions ==========================================

    # file name ------------------------------------------------
    def initFilename (self):
        self.filename = ""
        self.useFilename = False

    def setFilename (self, filename):
        self.filename = filename

    def getFilename (self):
        return self.filename

    def setUseFilename (self):
        self.useFilename = True

    def getUseFilename (self):
        return self.useFilename

    def setUseWordlist (self):
        self.useFilename = False

    def getFileContents (self, filename):
        try:
            fileId = open(filename, "r")
        except:
            print "[AdvaS] I/O Error - can't open given file:", filename
            return -1
        # get file contents
        contents = fileId.readlines()
        # close file
        fileId.close()
        return contents

    # line -----------------------------------------------------
    def initLine (self):
        self.line = ""

    def setLine (self, line):
        self.line = line

    def getLine (self):
        return self.line

    def splitLine (self):
        "split a line of text into single words"
        # define regexp tokens and split line
        tokens = re.compile(r"[\w']+")
        self.words = tokens.findall(self.line)

    # words ----------------------------------------------------
    def initWords (self):
        self.words = {}

    def setWords (self, words):
        self.words = words

    def getWords (self):
        return self.words

    def countWords(self):
        "count words given in self.words, return pairs word:frequency"
        list = {}  # start with an empty list
        for item in self.words:
            # assume a new item
            frequency = 0
            # word already in list?
            if list.has_key(item):
                frequency = list[item]
            frequency += 1
            # save frequency, update list
            list[item] = frequency
        # save list of words
        self.setList (list)

    # lists ----------------------------------------------------
    def initList (self):
        self.list = {}

    def setList (self, list):
        self.list = list

    def getList (self):
        return self.list

    def mergeLists(self, *lists):
        "merge lists of words"
        newList = {}  # start with an empty list
        for currentList in lists:
            key = currentList.keys()
            for item in key:
                # assume a new item
                frequency = 0
                # item already in newList?
                if newList.has_key(item):
                    frequency = newList[item]
                frequency += currentList[item]
                newList[item] = frequency
        # set list
        self.setList (newList)

    #def mergeListsIdf(self, *lists):
    #    "merge lists of words for calculating idf"
    #
    #    newlist = {}
    #
    #    for current_list in lists:
    #        key = current_list.keys()
    #        for item in key:
    #            # assume a new item
    #            frequency = 0
    #
    #            # item already in newlist?
    #            if newlist.has_key(item):
    #                frequency = newlist[item]
    #            frequency += 1
    #            newlist[item] = frequency
    #    # set list
    #    self.set_list (newlist)

    #def compact_list(self):
    #    "merges items appearing more than once"
    #
    #    newlist = {}
    #    original = self.list
    #    key = original.keys()
    #
    #    for j in key:
    #        item = string.lower(string.strip(j))
    #
    #        # assume a new item
    #        frequency = 0
    #
    #        # item already in newlist?
    #        if newlist.has_key(item):
    #            frequency = newlist[item]
    #        frequency += original[j]
    #        newlist[item] = frequency
    #
    #    # set new list
    #    self.set_list (newlist)

    #def remove_items(self, remove):
    #    "remove the items from the original list"
    #
    #    newlist = self.list
    #
    #    # get number of items to be removed
    #    key = remove.keys()
    #
    #    for item in key:
    #        # item in original list?
    #        if newlist.has_key(item):
    #            del newlist[item]
    #
    #    # set newlist
    #    self.set_list(newlist)
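The `countWords`/`mergeLists` pair above maintains word:frequency dictionaries by hand. For illustration only (not part of AdvaS), the same bookkeeping can be expressed in Python 3 with `collections.Counter`; the function names here are hypothetical:

```python
from collections import Counter

def count_words(words):
    """Word -> frequency mapping, as in Advas.countWords."""
    return Counter(words)

def merge_lists(*freq_dicts):
    """Sum frequencies across several word:frequency dicts,
    as in Advas.mergeLists."""
    merged = Counter()
    for d in freq_dicts:
        merged.update(d)  # adds counts key-wise
    return dict(merged)
```

`Counter.update` adds counts rather than replacing values, which is exactly the merge semantics implemented manually above.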
AdvaS-Advanced-Search
/advas-advanced-search-0.2.5.tar.gz/advas-0.2.5/advas-20120906.py
advas-20120906.py
# -----------------------------------------------------------
# AdvaS Advanced Search 0.2.5
# advanced search algorithms implemented as a python module
# phonetics module
#
# (C) 2002 - 2014 Frank Hofmann, Berlin, Germany
# Released under GNU Public License (GPL)
# email [email protected]
# -----------------------------------------------------------

import string
import re
from ngram import Ngram

class Phonetics:
    def __init__(self, term):
        self.term = term
        return

    def setText(self, term):
        self.term = term
        return

    def getText(self):
        return self.term

    # covering algorithms
    def phoneticCode (self):
        "returns the term's phonetic code using different methods"
        # build an array to hold the phonetic code for each method
        phoneticCodeList = {
            "soundex": self.soundex(),
            "metaphone": self.metaphone(),
            "nysiis": self.nysiis(),
            "caverphone": self.caverphone()
        }
        return phoneticCodeList

    # phonetic algorithms
    def soundex (self):
        "Return the soundex value to a given string."
        # Create and compare soundex codes of English words.
        #
        # Soundex is an algorithm that hashes English strings into
        # alpha-numerical value that represents what the word sounds
        # like. For more information on soundex and some notes on the
        # differences in implementations visit:
        # http://www.bluepoof.com/Soundex/info.html
        #
        # This version modified by Nathan Heagy at Front Logic Inc., to be
        # compatible with php's soundexing and much faster.
        #
        # eAndroid / Nathan Heagy / Jul 29 2000
        # changes by Frank Hofmann / Jan 02 2005, Sep 9 2012

        # generate translation table only once, used to translate into soundex numbers
        #table = string.maketrans('abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ', '0123012002245501262301020201230120022455012623010202')
        table = string.maketrans('ABCDEFGHIJKLMNOPQRSTUVWXYZ', '01230120022455012623010202')

        # check parameter
        if not self.term:
            return "0000"  # could be Z000 for compatibility with other implementations

        # convert into uppercase letters
        term = string.upper(self.term)
        firstChar = term[0]

        # translate the string into soundex code according to the table above
        term = string.translate(term[1:], table)

        # remove all 0s
        term = string.replace(term, "0", "")

        # remove duplicate numbers in-a-row
        str2 = firstChar
        for x in term:
            if x != str2[-1]:
                str2 = str2 + x

        # pad with zeros
        str2 = str2 + "0" * len(str2)

        # return the first four letters
        return str2[:4]

    def metaphone (self):
        "returns metaphone code for a given string"
        # implementation of the original algorithm from Lawrence Philips
        # extended/rewritten by M. Kuhn
        # improvements with thanks to John Machin <[email protected]>

        # define return value
        code = ""
        i = 0
        termLength = len(self.term)
        if (termLength == 0):
            # empty string?
            return code

        # extension #1 (added 2005-01-28)
        # convert to lowercase
        term = string.lower(self.term)

        # extension #2 (added 2005-01-28)
        # remove all non-english characters, first
        term = re.sub(r'[^a-z]', '', term)
        if len(term) == 0:
            # nothing left
            return code

        # extension #3 (added 2005-01-24)
        # conflate repeated letters
        firstChar = term[0]
        str2 = firstChar
        for x in term:
            if x != str2[-1]:
                str2 = str2 + x

        # extension #4 (added 2005-01-24)
        # remove any vowels unless a vowel is the first letter
        firstChar = str2[0]
        str3 = firstChar
        for x in str2[1:]:
            if (re.search(r'[^aeiou]', x)):
                str3 = str3 + x
        term = str3
        termLength = len(term)
        if termLength == 0:
            # nothing left
            return code

        # check for exceptions
        if (termLength > 1):
            # get first two characters
            firstChars = term[0:2]
            # build translation table
            table = {
                "ae":"e",
                "gn":"n",
                "kn":"n",
                "pn":"n",
                "wr":"n",
                "wh":"w"
            }
            if firstChars in table.keys():
                term = term[2:]
                code = table[firstChars]
                termLength = len(term)
        elif (term[0] == "x"):
            term = ""
            code = "s"
            termLength = 0

        # define standard translation table
        stTrans = {
            "b":"b", "c":"k", "d":"t", "g":"k", "h":"h", "k":"k",
            "p":"p", "q":"k", "s":"s", "t":"t", "v":"f", "w":"w",
            "x":"ks", "y":"y", "z":"s"
        }

        i = 0
        while (i < termLength):
            # init character to add, init basic patterns
            addChar = ""
            part_n_2 = ""
            part_n_3 = ""
            part_n_4 = ""
            part_c_2 = ""
            part_c_3 = ""

            # extract a number of patterns, if possible
            if (i < (termLength - 1)):
                part_n_2 = term[i:i+2]
                if (i > 0):
                    part_c_2 = term[i-1:i+1]
                    part_c_3 = term[i-1:i+2]
            if (i < (termLength - 2)):
                part_n_3 = term[i:i+3]
            if (i < (termLength - 3)):
                part_n_4 = term[i:i+4]

            # use table with conditions for translations
            if (term[i] == "b"):
                addChar = stTrans["b"]
                if (i == (termLength - 1)):
                    if (i > 0):
                        if (term[i-1] == "m"):
                            addChar = ""
            elif (term[i] == "c"):
                addChar = stTrans["c"]
                if (part_n_2 == "ch"):
                    addChar = "x"
                elif (re.search(r'c[iey]', part_n_2)):
                    addChar = "s"
                if (part_n_3 == "cia"):
                    addChar = "x"
                if (re.search(r'sc[iey]', part_c_3)):
                    addChar = ""
            elif (term[i] == "d"):
                addChar = stTrans["d"]
                if (re.search(r'dg[eyi]', part_n_3)):
                    addChar = "j"
            elif (term[i] == "g"):
                addChar = stTrans["g"]
                if (part_n_2 == "gh"):
                    if (i == (termLength - 2)):
                        addChar = ""
                elif (re.search(r'gh[aeiouy]', part_n_3)):
                    addChar = ""
                elif (part_n_2 == "gn"):
                    addChar = ""
                elif (part_n_4 == "gned"):
                    addChar = ""
                elif (re.search(r'dg[eyi]', part_c_3)):
                    addChar = ""
                elif (part_n_2 == "gi"):
                    if (part_c_3 != "ggi"):
                        addChar = "j"
                elif (part_n_2 == "ge"):
                    if (part_c_3 != "gge"):
                        addChar = "j"
                elif (part_n_2 == "gy"):
                    if (part_c_3 != "ggy"):
                        addChar = "j"
                elif (part_n_2 == "gg"):
                    addChar = ""
            elif (term[i] == "h"):
                addChar = stTrans["h"]
                if (re.search(r'[aeiouy]h[^aeiouy]', part_c_3)):
                    addChar = ""
                elif (re.search(r'[csptg]h', part_c_2)):
                    addChar = ""
            elif (term[i] == "k"):
                addChar = stTrans["k"]
                if (part_c_2 == "ck"):
                    addChar = ""
            elif (term[i] == "p"):
                addChar = stTrans["p"]
                if (part_n_2 == "ph"):
                    addChar = "f"
            elif (term[i] == "q"):
                addChar = stTrans["q"]
            elif (term[i] == "s"):
                addChar = stTrans["s"]
                if (part_n_2 == "sh"):
                    addChar = "x"
                if (re.search(r'si[ao]', part_n_3)):
                    addChar = "x"
            elif (term[i] == "t"):
                addChar = stTrans["t"]
                if (part_n_2 == "th"):
                    addChar = "0"
                if (re.search(r'ti[ao]', part_n_3)):
                    addChar = "x"
            elif (term[i] == "v"):
                addChar = stTrans["v"]
            elif (term[i] == "w"):
                addChar = stTrans["w"]
                if (re.search(r'w[^aeiouy]', part_n_2)):
                    addChar = ""
            elif (term[i] == "x"):
                addChar = stTrans["x"]
            elif (term[i] == "y"):
                addChar = stTrans["y"]
            elif (term[i] == "z"):
                addChar = stTrans["z"]
            else:
                # alternative
                addChar = term[i]

            code = code + addChar
            i += 1
        # end while

        return code

    def nysiis (self):
        "returns New York State Identification and Intelligence Algorithm (NYSIIS) code for the given term"

        code = ""
        i = 0
        term = self.term
        termLength = len(term)
        if (termLength == 0):
            # empty string?
            return code

        # build translation table for the first characters
        table = {
            "mac":"mcc",
            "ph":"ff",
            "kn":"nn",
            "pf":"ff",
            "k":"c",
            "sch":"sss"
        }
        for tableEntry in table.keys():
            tableValue = table[tableEntry]   # get table value
            tableValueLen = len(tableValue)  # calculate its length
            firstChars = term[0:tableValueLen]
            if (firstChars == tableEntry):
                term = tableValue + term[tableValueLen:]
                break

        # build translation table for the last characters
        table = {
            "ee":"y",
            "ie":"y",
            "dt":"d",
            "rt":"d",
            "rd":"d",
            "nt":"d",
            "nd":"d",
        }
        for tableEntry in table.keys():
            tableValue = table[tableEntry]   # get table value
            tableEntryLen = len(tableEntry)  # calculate its length
            lastChars = term[(0 - tableEntryLen):]
            #print lastChars, ", ", tableEntry, ", ", tableValue
            if (lastChars == tableEntry):
                term = term[:(0 - tableEntryLen)] + tableValue
                break

        # initialize code
        code = term

        # transform ev->af
        code = re.sub(r'ev', r'af', code)
        # transform a,e,i,o,u->a
        code = re.sub(r'[aeiouy]', r'a', code)
        # transform q->g
        code = re.sub(r'q', r'g', code)
        # transform z->s
        code = re.sub(r'z', r's', code)
        # transform m->n
        code = re.sub(r'm', r'n', code)
        # transform kn->n
        code = re.sub(r'kn', r'n', code)
        # transform k->c
        code = re.sub(r'k', r'c', code)
        # transform sch->sss
        code = re.sub(r'sch', r'sss', code)
        # transform ph->ff
        code = re.sub(r'ph', r'ff', code)

        # transform h-> if previous or next is nonvowel -> previous
        occur = re.findall(r'([a-z]{0,1}?)h([a-z]{0,1}?)', code)
        #print occur
        for occurGroup in occur:
            occurItemPrevious = occurGroup[0]
            occurItemNext = occurGroup[1]
            if ((re.match(r'[^aeiouy]', occurItemPrevious)) or (re.match(r'[^aeiouy]', occurItemNext))):
                if (occurItemPrevious != ""):
                    # make substitution
                    code = re.sub (occurItemPrevious + "h", occurItemPrevious * 2, code, 1)

        # transform w-> if previous is vowel -> previous
        occur = re.findall(r'([aeiouy]{1}?)w', code)
        #print occur
        for occurGroup in occur:
            occurItemPrevious = occurGroup[0]
            # make substitution
            code = re.sub (occurItemPrevious + "w", occurItemPrevious * 2, code, 1)

        # check last character
        # -s, remove
        code = re.sub (r's$', r'', code)
        # -ay, replace by -y
        code = re.sub (r'ay$', r'y', code)
        # -a, remove
        code = re.sub (r'a$', r'', code)

        return code

    def caverphone (self):
        "returns the language key using the caverphone algorithm 2.0"
        # Developed at the University of Otago, New Zealand.
        # Project: Caversham Project (http://caversham.otago.ac.nz)
        # Developer: David Hood, University of Otago, New Zealand
        # Contact: [email protected]
        # Project Technical Paper: http://caversham.otago.ac.nz/files/working/ctp150804.pdf
        # Version 2.0 (2004-08-15)

        code = ""
        i = 0
        term = self.term
        termLength = len(term)
        if (termLength == 0):
            # empty string?
            return code

        # convert to lowercase
        code = string.lower(term)
        # remove anything not in the standard alphabet (a-z)
        code = re.sub(r'[^a-z]', '', code)
        # remove final e
        if code.endswith("e"):
            code = code[:-1]
        # if the name starts with cough, rough, tough, enough or trough -> cou2f (rou2f, tou2f, enou2f, trou2f)
        code = re.sub(r'^([crt]|(en)|(tr))ough', r'\1ou2f', code)
        # if the name starts with gn -> 2n
        code = re.sub(r'^gn', r'2n', code)
        # if the name ends with mb -> m2
        code = re.sub(r'mb$', r'm2', code)
        # replace cq -> 2q
        code = re.sub(r'cq', r'2q', code)
        # replace c[i,e,y] -> s[i,e,y]
        code = re.sub(r'c([iey])', r's\1', code)
        # replace tch -> 2ch
        code = re.sub(r'tch', r'2ch', code)
        # replace c,q,x -> k
        code = re.sub(r'[cqx]', r'k', code)
        # replace v -> f
        code = re.sub(r'v', r'f', code)
        # replace dg -> 2g
        code = re.sub(r'dg', r'2g', code)
        # replace ti[o,a] -> si[o,a]
        code = re.sub(r'ti([oa])', r'si\1', code)
        # replace d -> t
        code = re.sub(r'd', r't', code)
        # replace ph -> fh
        code = re.sub(r'ph', r'fh', code)
        # replace b -> p
        code = re.sub(r'b', r'p', code)
        # replace sh -> s2
        code = re.sub(r'sh', r's2', code)
        # replace z -> s
        code = re.sub(r'z', r's', code)
        # replace initial vowel [aeiou] -> A
        code = re.sub(r'^[aeiou]', r'A', code)
        # replace all other vowels [aeiou] -> 3
        code = re.sub(r'[aeiou]', r'3', code)
        # replace j -> y
        code = re.sub(r'j', r'y', code)
        # replace an initial y3 -> Y3
        code = re.sub(r'^y3', r'Y3', code)
        # replace an initial y -> A
        code = re.sub(r'^y', r'A', code)
        # replace y -> 3
        code = re.sub(r'y', r'3', code)
        # replace 3gh3 -> 3kh3
        code = re.sub(r'3gh3', r'3kh3', code)
        # replace gh -> 22
        code = re.sub(r'gh', r'22', code)
        # replace g -> k
        code = re.sub(r'g', r'k', code)
        # replace groups of s,t,p,k,f,m,n by its single, upper-case equivalent
        for singleLetter in ["s", "t", "p", "k", "f", "m", "n"]:
            otherParts = re.split(singleLetter + "+", code)
            code = string.join(otherParts, string.upper(singleLetter))
        # replace w[3,h3] by W[3,h3]
        code = re.sub(r'w(h?3)', r'W\1', code)
        # replace final w with 3
        code = re.sub(r'w$', r'3', code)
        # replace w -> 2
        code = re.sub(r'w', r'2', code)
        # replace h at the beginning with an A
        code = re.sub(r'^h', r'A', code)
        # replace all other occurrences of h with a 2
        code = re.sub(r'h', r'2', code)
        # replace r3 with R3
        code = re.sub(r'r3', r'R3', code)
        # replace final r -> 3
        code = re.sub(r'r$', r'3', code)
        # replace r with 2
        code = re.sub(r'r', r'2', code)
        # replace l3 with L3
        code = re.sub(r'l3', r'L3', code)
        # replace final l -> 3
        code = re.sub(r'l$', r'3', code)
        # replace l with 2
        code = re.sub(r'l', r'2', code)
        # remove all 2's
        code = re.sub(r'2', r'', code)
        # replace the final 3 -> A
        code = re.sub(r'3$', r'A', code)
        # remove all 3's
        code = re.sub(r'3', r'', code)
        # extend the code by 10 '1' (one)
        code += '1' * 10
        # return the first 10 characters
        return code[:10]

    def calcSuccVariety(self):
        # derive two-letter combinations
        ngramObject = Ngram(self.term, 2)
        ngramObject.deriveNgrams()
        ngramSet = set(ngramObject.getNgrams())

        # count appearances of the second letter
        varietyList = {}
        for entry in ngramSet:
            letter1 = entry[0]
            letter2 = entry[1]
            if varietyList.has_key(letter1):
                items = varietyList[letter1]
                if not letter2 in items:
                    # extend the existing one
                    items.append(letter2)
                    varietyList[letter1] = items
            else:
                # create a new one
                varietyList[letter1] = [letter2]
        return varietyList

    def calcSuccVarietyCount(self, varietyList):
        # save the number of matches, only
        for entry in varietyList:
            items = len(varietyList[entry])
            varietyList[entry] = items
        return varietyList

    def calcSuccVarietyList(self, wordList):
        result = {}
        for item in wordList:
            self.setText(item)
            varietyList = self.calcSuccVariety()
            result[item] = varietyList
        return result

    def calcSuccVarietyMerge(self, varietyList):
        result = {}
        for item in varietyList.values():
            for letter in item.keys():
                if not letter in result.keys():
                    result[letter] = item[letter]
                else:
                    result[letter] = list(set(result[letter]) | set(item[letter]))
        return result
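The module's soundex comments describe the core trick: translate letters to digit classes with a table, drop zeros, collapse runs, and pad. For illustration only (not part of AdvaS, which is Python 2), the same table-translation approach re-expressed as a self-contained Python 3 function:

```python
def soundex(term):
    """Table-driven soundex, mirroring the approach in the module above."""
    # letter -> digit class, same 26-character mapping as the module
    table = str.maketrans('ABCDEFGHIJKLMNOPQRSTUVWXYZ',
                          '01230120022455012623010202')
    if not term:
        return "0000"
    term = term.upper()
    # translate everything after the first letter, then drop vowels/ignored (0)
    digits = term[1:].translate(table).replace("0", "")
    # keep the first letter, collapse runs of identical digits
    code = term[0]
    for d in digits:
        if d != code[-1]:
            code += d
    # pad with zeros and cut to the canonical four characters
    return (code + "0000")[:4]
```

As in the original, zeros are removed before run-collapsing, so equal digits separated only by vowels are merged.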
AdvaS-Advanced-Search
/advas-advanced-search-0.2.5.tar.gz/advas-0.2.5/phonetics.py
phonetics.py
# -----------------------------------------------------------
# AdvaS Advanced Search 0.2.5
# advanced search algorithms implemented as a python module
# module containing stemming algorithms
#
# (C) 2002 - 2014 Frank Hofmann, Berlin, Germany
# Released under GNU Public License (GPL)
# email [email protected]
# -----------------------------------------------------------

from advasio import AdvasIo
import string
from phonetics import Phonetics
from ngram import Ngram
from advas import Advas

class Stemmer:
    def __init__(self, encoding):
        self.stemFile = ""
        self.encoding = encoding
        self.stemTable = {}
        return

    def loadStemFile(self, stemFile):
        if stemFile:
            self.stemFile = stemFile
            fileId = AdvasIo(self.stemFile, self.encoding)
            success = fileId.readFileContents()
            if not success:
                self.stemFile = ""
                return False
            else:
                contents = fileId.getFileContents()
                for line in contents:
                    left, right = line.split(":")
                    self.stemTable[string.strip(left)] = string.strip(right)
                return True
        else:
            self.stemFile = ""
            return False

    def clearStemFile(self):
        self.stemTable = {}
        self.stemFile = ""
        return

    def tableLookup(self, term):
        if term in self.stemTable:
            return self.stemTable[term]
        return

    def successorVariety (self, term, wordList):
        "calculates the term's stem according to the successor variety algorithm"

        # get basic list for the variety
        varObject = Phonetics("")
        sv = varObject.calcSuccVarietyList(wordList)
        svm = varObject.calcSuccVarietyMerge(sv)
        svmList = varObject.calcSuccVarietyCount(svm)

        # examine given term
        # use peak-and-plateau method to find word boundaries
        termLength = len(term)
        termRange = range(1, termLength - 1)

        # start here
        start = 0

        # list of stems
        stemList = []

        for i in termRange:
            # get slice
            wordSlice = term[start:i+1]
            # print wordSlice

            # check for a peak
            A = term[i-1]
            B = term[i]
            C = term[i+1]

            a = 0
            if svmList.has_key(A):
                a = svmList[A]
            b = 0
            if svmList.has_key(B):
                b = svmList[B]
            c = 0
            if svmList.has_key(C):
                c = svmList[C]

            if (b > a) and (b > c):
                # save slice as a stem
                stemList.append(wordSlice)
                # adjust start
                start = i + 1
            # end if
        # end for

        if (i < termLength):
            # still something left in buffer?
            wordSlice = term[start:]
            stemList.append(wordSlice)
        # end if

        # return result
        return stemList

    def ngramStemmer (self, wordList, size, equality):
        "reduces wordList according to the n-gram stemming method"

        # use returnList and stopList for the terms to be removed, later
        returnList = []
        stopList = []
        ngramAdvas = Advas("", "")

        # calculate length and range
        listLength = len(wordList)
        outerListRange = range(0, listLength)

        for i in outerListRange:
            term1 = wordList[i]
            innerListRange = range(0, i)

            # define basic n-gram object
            term1Ngram = Ngram(term1, 2)
            term1Ngram.deriveNgrams()
            term1NgramList = term1Ngram.getNgrams()

            for j in innerListRange:
                term2 = wordList[j]
                term2Ngram = Ngram(term2, 2)
                term2Ngram.deriveNgrams()
                term2NgramList = term2Ngram.getNgrams()

                # calculate n-gram value
                ngramSimilarity = ngramAdvas.compareNgramLists (term1NgramList, term2NgramList)

                # compare
                degree = ngramSimilarity - equality
                if (degree > 0):
                    # ... these terms are so similar that they can be conflated
                    # remove the longer term, keep the shorter one
                    if (len(term2) > len(term1)):
                        stopList.append(term2)
                    else:
                        stopList.append(term1)
                # end if
            # end for
        # end for

        # conflate the matrix
        # remove all the items which appear in stopList
        return list(set(wordList) - set(stopList))
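The successor-variety stemmer splits a term wherever the variety count of the middle letter exceeds both neighbours (a "peak"). A stand-alone Python 3 sketch of just that boundary test (function name hypothetical, and the count table here is illustrative, not derived from a real corpus):

```python
def peak_and_plateau_breaks(term, variety_count):
    """Split `term` at peaks of the successor-variety counts,
    as in Stemmer.successorVariety above."""
    stems = []
    start = 0
    for i in range(1, len(term) - 1):
        a = variety_count.get(term[i - 1], 0)
        b = variety_count.get(term[i], 0)
        c = variety_count.get(term[i + 1], 0)
        if b > a and b > c:  # peak: middle count beats both neighbours
            stems.append(term[start:i + 1])
            start = i + 1
    if start < len(term):  # flush whatever is left in the buffer
        stems.append(term[start:])
    return stems
```

With a real corpus the counts come from merging per-word successor sets, as `calcSuccVarietyList`/`calcSuccVarietyMerge`/`calcSuccVarietyCount` do.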
AdvaS-Advanced-Search
/advas-advanced-search-0.2.5.tar.gz/advas-0.2.5/stemmer.py
stemmer.py
# -----------------------------------------------------------
# AdvaS Advanced Search 0.2.5
# advanced search algorithms implemented as a python module
# search engine module
#
# (C) 2002 - 2014 Frank Hofmann, Berlin, Germany
# Released under GNU Public License (GPL)
# email [email protected]
# -----------------------------------------------------------

import math

class AdvancedSearch:
    def __init__(self):
        "initializes a new AdvancedSearch object"
        self.entryList = []
        self.stopList = []
        self.setSortOrderDescending()
        self.setSearchStrategy(50, 50)
        return

    # sort order
    def setSortOrderAscending(self):
        "changes sort order to ascending"
        self.sortOrderDescending = False
        return

    def setSortOrderDescending(self):
        "changes sort order to descending"
        self.sortOrderDescending = True
        return

    def reverseSortOrder(self):
        "reverses the current sort order"
        if self.getSortOrder() == True:
            self.setSortOrderAscending()
        else:
            self.setSortOrderDescending()
        return

    def getSortOrder(self):
        "get current sort order with True if descending"
        return self.sortOrderDescending

    # search entry
    def addEntry(self, entry):
        "registers the given entry, and returns its document id"
        entryId = self.getEmptyId()
        entry.setEntryId(entryId)
        self.entryList.append(entry)
        return entryId

    def isInEntryList(self, entryId):
        "returns True if document with entryId was registered"
        value = False
        for entry in self.entryList:
            if entry.getEntryId() == entryId:
                value = True
                break
        return value

    def removeEntry(self, entryId):
        "remove document with entryId from list of entries"
        newEntryList = []
        for entry in self.entryList:
            if entry.getEntryId() != entryId:
                newEntryList.append(entry)
        self.entryList = newEntryList
        return

    def clearEntryList(self):
        "unregister all documents -- clear the entry list"
        self.entryList = []
        return

    def countEntryList(self):
        "counts the registered documents, and returns its number"
        return len(self.entryList)

    def getEntryList(self):
        "return full list of registered documents"
        entryList = []
        for entry in self.entryList:
            entryList.append(entry.getEntry())
        return entryList

    def getEmptyId(self):
        "returns a new, still unused document id"
        entryId = 0
        idList = []
        for entry in self.entryList:
            idList.append(entry.getEntryId())
        if (len(idList)):
            entryId = max(idList) + 1
        return entryId

    # sort entry list
    def sortEntryList(self, entryList):
        "sort entry list ascending, or descending"
        if len(entryList) == 0:
            return []
        else:
            return sorted(entryList, key=lambda entry: entry[0], reverse = self.getSortOrder())

    # merge lists
    def mergeLists(self, *lists):
        "merge lists of words"
        newlist = {}  # start with an empty list
        for currentList in lists:
            keyList = currentList.keys()
            for item in keyList:
                # assume a new item
                frequency = 0
                # item already in newlist?
                if newlist.has_key(item):
                    frequency = newlist[item]
                frequency += currentList[item]
                newlist[item] = frequency
        return newlist

    def mergeListsIdf(self, *lists):
        "merge lists of words for calculating idf"
        newlist = {}  # start with an empty list
        for currentList in lists:
            keyList = currentList.keys()
            for item in keyList:
                # assume a new item
                frequency = 0
                # item already in newlist?
                if newlist.has_key(item):
                    frequency = newlist[item]
                frequency += 1
                newlist[item] = frequency
        return newlist

    # stop list
    def setStopList(self, stopList):
        "fill the stop list with the given values"
        self.stopList = stopList
        return

    def getStopList(self):
        "return the current stop list"
        return self.stopList

    def extendStopList(self, itemList):
        "extends the current stop list with the given items"
        for item in itemList:
            if not item in self.stopList:
                self.stopList.append(item)
        return

    def reduceStopList(self, itemList):
        "reduces the current stop list by the given items"
        for item in itemList:
            if item in self.stopList:
                self.stopList.remove(item)
        return

    # phonetic comparisons
    def comparePhoneticCode(self, entry1, entry2):
        "compares two entries of phonetic codes and returns the number of exact matches"
        matches = 0
        for item in entry1.keys():
            if entry2.has_key(item):
                if entry1[item] == entry2[item]:
                    matches += 1
        return matches

    def comparePhoneticCodeLists(self, query, document):
        "compare phonetic codes of a query and a single document"
        total = 0
        for entry in query:
            codes = query[entry]
            #print entry
            #print codes
            for entry2 in document:
                codes2 = document[entry2]
                #print entry2
                #print codes2
                matches = self.comparePhoneticCode(codes, codes2)
                total += matches
        return total

    def searchByPhoneticCode(self, query):
        "find all the documents matching the query in terms of phonetic similarity"
        matchList = {}
        for entry in self.getEntryList():
            entryId = entry.getEntryId()
            matches = self.comparePhoneticCodeLists(query, entry)
            matchList[entryId] = matches
        return matchList

    # term frequency for all registered search entries
    def tf(self):
        "term frequency for the list of registered documents"
        occurency = {}
        for entry in self.entryList:
            tf = entry.data.tf()
            occurency = self.mergeLists(occurency, tf)
        return occurency

    def tfStop(self):
        "term frequency with stop list for the list of registered documents"
        occurency = {}
        for entry in self.entryList:
            tfStop = entry.data.tfStop(self.stopList)
            occurency = self.mergeLists(occurency, tfStop)
        return occurency

    # def tfRelation(self, pattern, document):
    #     keysPattern = pattern.keys()
    #     keysDocument = document.keys()
    #     identicalKeys = list(set(keysPattern) & set(keysDocument))
    #
    #     total = 0
    #     for item in identicalKeys:
    #         total = total + pattern[item] + document[item]
    #     return

    def idf (self, wordList):
        "calculates the inverse document frequency for a given list of terms"
        key = wordList.keys()
        documents = self.countEntryList()
        for item in key:
            frequency = wordList[item]
            # calculate idf = ln(N/n):
            # N = number of documents
            # n = number of documents that contain term
            idf = math.log(float(documents) / frequency)
            wordList[item] = idf
        return wordList

    # evaluate and compare descriptors
    def compareDescriptors (self, request, document):
        "returns the degree of equality between two descriptors (often a request and a document)"
        compareBinary = self.compareDescriptorsBinary(request, document)
        compareFuzzy = self.compareDescriptorsFuzzy(request, document)
        compareKnn = self.compareDescriptorsKNN(request, document)
        result = {
            'binary': compareBinary,
            'fuzzy': compareFuzzy,
            'knn': compareKnn
        }
        return result

    def compareDescriptorsBinary(self, request, document):
        "binary comparison"
        # request, document: document descriptors
        # return value: either True for similarity, or False

        # define return value
        equality = 0

        # calc similar descriptors
        itemsRequest = request.getDescriptorList()
        itemsDocument = document.getDescriptorList()
        if set(itemsRequest) & set(itemsDocument) == set(itemsRequest):
            equality = 1
        return equality

    def compareDescriptorsFuzzy(self, request, document):
        "fuzzy comparison"
        # request, document: lists of descriptors
        # return value: float, between 0 and 1

        # define return value
        equality = 0

        # get number of items
        itemsRequest = request.getDescriptorList()
        itemsDocument = document.getDescriptorList()

        # calc similar descriptors
        similarDescriptors = len(set(itemsRequest) & set(itemsDocument))

        # calc equality
        equality = float(similarDescriptors) / float((math.sqrt(len(itemsRequest)) * math.sqrt(len(itemsDocument))))
        return equality

    def compareDescriptorsKNN(self, request, document):
        "k-Nearest Neighbour algorithm"
        firstList = request
        otherList = document
        globalDistance = float(0)
        for item in firstList.getDescriptorList():
            firstValue = float(firstList.getDescriptorValue(item))
            otherValue = float(otherList.getDescriptorValue(item))
            i = float(firstValue - otherValue)
            localDistance = float(i * i)
            globalDistance = globalDistance + localDistance
        # end for

        for item in otherList.getDescriptorList():
            otherValue = float(otherList.getDescriptorValue(item))
            firstValue = 0
            if item in firstList.getDescriptorList():
                continue  # don't count again
            localDistance = float(otherValue * otherValue)
            globalDistance = globalDistance + localDistance
        # end for

        kNN = math.sqrt(globalDistance)
        return kNN

    def calculateRetrievalStatusValue(self, d, p, q):
        "calculates the document weight for document descriptors"
        # d: list of existence (1) or non-existence (0)
        # p, q: list of probabilities of existence (p) and non-existence (q)
        itemsP = len(p)
        itemsQ = len(q)
        itemsD = len(d)
        if ((itemsP - itemsQ) != 0):
            # different length of lists p and q
            return 0
        if ((itemsD - itemsP) != 0):
            # different length of lists d and p
            return 0
        rsv = 0
        for i in range(itemsP):
            eqUpper = float(p[i]) / float(1 - p[i])
            eqLower = float(q[i]) / float(1 - q[i])
            value = float(d[i] * math.log(eqUpper / eqLower))
            rsv += value
        return rsv

    # search strategy
    def setSearchStrategy(self, fullTextWeight, advancedWeight):
        "adjust the current search strategy"
        self.searchStrategy = {
            "fulltextsearch": fullTextWeight,
            "advancedsearch": advancedWeight
        }
        return

    def getSearchStrategy(self):
        "returns the current search strategy"
        return self.searchStrategy

    # search
    def search(self, pattern):
        "combines both full text, and advanced search"
        result = []
        searchStrategy = self.getSearchStrategy()
        fullTextWeight = searchStrategy["fulltextsearch"]
        advancedWeight = searchStrategy["advancedsearch"]
        if fullTextWeight:
            result = self.fullTextSearch(pattern)
        if advancedWeight:
            resultAdvancedSearch = self.advancedSearch(pattern)
            if not len(result):
                result = resultAdvancedSearch
            else:
                for item in resultAdvancedSearch:
                    weightAVS, hitsAVS, entryIndexAVS = item
                    for i in xrange(len(result)):
                        entry = result[i]
                        weightFTS, hitsFTS, entryIndexFTS = entry
                        if entryIndexAVS == entryIndexFTS:
                            weight = weightAVS + weightFTS
                            hits = hitsAVS + hitsFTS
                            result[i] = (weight, hits, entryIndexAVS)
                            break
        return self.sortEntryList(result)

    def fullTextSearch(self, pattern):
        "full text search for the registered documents"
        searchStrategy = self.getSearchStrategy()
        fullTextWeight = searchStrategy["fulltextsearch"]

        # search for the given search pattern
        # both data and query are multiline objects
        originalQuery = pattern.getText()
        query = ''.join(originalQuery)

        result = []
        for entry in self.entryList:
            originalData = entry.getText()
            data = ''.join(originalData)
            hits = data.count(query)

            # set return value
            entryId = entry.getEntryId()
            value = fullTextWeight * hits
            result.append((value, hits, entryId))

        # sort the result according to the sort value
        result = self.sortEntryList(result)
        return result

    def advancedSearch(self, pattern):
        searchStrategy = self.getSearchStrategy()
        advancedWeight = searchStrategy["advancedsearch"]

        tfPattern = pattern.data.tf()
        digramPattern = pattern.data.getNgramsByParagraph(2)
        trigramPattern = pattern.data.getNgramsByParagraph(3)
        phoneticPattern = pattern.getPhoneticCode()
        descriptorsPattern = pattern.getKeywords()

        result = []
        for entry in self.entryList:
            # calculate tf
            tfEntry = entry.data.tf()

            # calculate digrams
            digramEntry = entry.data.getNgramsByParagraph(2)
            digramValue = entry.data.compareNgramLists(digramEntry, digramPattern)

            # calculate trigrams
            trigramEntry = entry.data.getNgramsByParagraph(3)
            trigramValue = entry.data.compareNgramLists(trigramEntry, trigramPattern)

            # phonetic codes
            phoneticEntry = entry.getPhoneticCode()
            phoneticValue = self.comparePhoneticCodeLists(phoneticPattern, phoneticEntry)

            # descriptor comparison
            descriptorsEntry = entry.getKeywords()
            desc = self.compareDescriptors (descriptorsPattern, descriptorsEntry)
            descValue = desc['binary'] * 0.3 + desc['fuzzy'] * 0.3 + desc['knn'] * 0.4

            hits = 0
            value = digramValue * 0.25
            value += trigramValue * 0.25
            value += phoneticValue * 0.25
            value += descValue * 0.25

            # set return value
            entryId = entry.getEntryId()
            value = advancedWeight * value
            result.append((value, hits, entryId))
        return result
AdvaS-Advanced-Search
/advas-advanced-search-0.2.5.tar.gz/advas-0.2.5/advancedsearch.py
advancedsearch.py
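The `calculateRetrievalStatusValue` routine in `advancedsearch.py` above implements the binary independence model weight: RSV = Σᵢ dᵢ · log[(pᵢ/(1−pᵢ)) / (qᵢ/(1−qᵢ))]. A minimal standalone Python sketch of the same computation follows; the function name and the sample probabilities are illustrative, not part of the package:

```python
import math

def retrieval_status_value(d, p, q):
    """Binary independence model: sum, over terms present in the document
    (d[i] == 1), of the log odds ratio of p[i] against q[i]."""
    if len(p) != len(q) or len(d) != len(p):
        return 0  # mirror the original's behaviour on mismatched list lengths
    rsv = 0.0
    for d_i, p_i, q_i in zip(d, p, q):
        rsv += d_i * math.log((p_i / (1 - p_i)) / (q_i / (1 - q_i)))
    return rsv

# a term present in the document with p > q raises the score;
# terms with p == q contribute log(1) = 0
score = retrieval_status_value([1, 0, 1], [0.8, 0.5, 0.5], [0.2, 0.5, 0.5])
```

Here `score` reduces to log((0.8/0.2)/(0.2/0.8)) = log 16, since only the first term is both present and discriminative.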
# ----------------------------------------------------------- # AdvaS Advanced Search 0.2.5 # advanced search algorithms implemented as a python module # advas core module # # (C) 2002 - 2014 Frank Hofmann, Berlin, Germany # Released under GNU Public License (GPL) # email [email protected] # ----------------------------------------------------------- # other modules required by advas import string import re import math from ngram import Ngram from phonetics import Phonetics from advasio import AdvasIo class Advas: def __init__(self, text, encoding): "init an Advas object" self.setText(text) self.setEncoding(encoding) return def getText(self): "return the saved text value" return self.text def setText(self, text): "set a given text value" self.text = text return def getEncoding(self): "return the saved text encoding" return self.encoding def setEncoding(self, encoding): "set the text encoding" self.encoding = encoding return # basic functions ========================================== # line ----------------------------------------------------- def splitLine (self, line): "split a line of text into single words" # define regexp tokens and split line tokens = re.compile(r"[\w']+") return tokens.findall(line) def splitParagraph (self): "split a paragraph into single lines" lines = self.text return lines def splitText(self): "split the text into single words per paragraph line" paragraphList = [] # split text into single lines lines = self.splitParagraph() for line in lines: # split this line into single words wordList = self.splitLine(line) paragraphList.append(wordList) return paragraphList def isComment(self, line): "verifies a line for being a UNIX style comment" # remove any whitespace at the beginning line = string.lstrip(line) # is comment? 
(UNIX style) if line.startswith("#"): return True else: return False def kmpSearch(self, text, pattern): "search pattern in a text using Knuth-Morris-Pratt algorithm" i = 0 j = -1 next = {0: -1} # initialize next array while 1: if ((j == -1) or (pattern[i] == pattern[j])): i = i + 1 j = j + 1 next[i] = j else: j = next[j] # end if if (i >= len(pattern)): break # end while # search i = 0 j = 0 positions = [] while 1: if ((j == -1) or (text[i] == pattern[j])): i = i + 1 j = j + 1 else: j = next[j] # end if if (i >= len(text)): return positions # end if if (j >= len(pattern)): positions.append(i - len(pattern)) i = i - len(pattern) + 1 j = 0 # end if # end while return # list functions ------------------------------------------- def removeItems(self, originalList, removeList): "remove the items from the original list" for item in removeList: # item in original list? if originalList.has_key(item): del originalList[item] return originalList # advanced functions ======================================= # term frequency (tf) -------------------------------------- def tf (self): "calculates the term frequency for the given text" occurency = {} # split the given text into single lines splittedParagraph = self.splitText() for line in splittedParagraph: for word in line: if occurency.has_key(word): newValue = occurency[word] + 1 else: newValue = 1 occurency[word] = newValue # return list of words and their frequency return occurency def tfStop (self, stopList): "calculates the term frequency and removes the items given in stop list" # get term frequency from self.text wordList = self.tf() # remove items given in stop list ocurrency = self.removeItems(wordList, stopList) # return result return ocurrency def idf(self, numberOfDocuments, frequencyList): "calculates the inverse document frequency for a given list of terms" idfList = {} for item in frequencyList.keys(): # get frequency frequency = frequencyList[item] # calculate idf = ln(numberOfDocuments/n): # n=number of 
documents that contain term idf = math.log(float(numberOfDocuments)/float(frequency)) # save idf idfList[item] = idf return idfList # n-gram functions ---------------------------------------- def getNgramsByWord (self, word, ngramSize): if not ngramSize: return [] term = Ngram(word, ngramSize) if term.deriveNgrams(): return term.getNgrams() else: return [] def getNgramsByLine (self, ngramSize): if not ngramSize: return [] occurency = [] # split the given text into single lines lines = self.splitParagraph() for line in lines: term = Ngram(line, ngramSize) if term.deriveNgrams(): occurency.append(term.getNgrams()) else: occurency.append([]) return occurency def getNgramsByParagraph(self, ngramSize): if not ngramSize: return [] reducedList = [] occurency = self.getNgramsByLine(ngramSize) for line in occurency: reducedList = list(set(reducedList) | set(line)) return reducedList def compareNgramLists (self, list1, list2): "compares two lists of ngrams and returns their degree of equality" # equality of terms : Dice coefficient # # S = 2C/(A+B) # # S = degree of equality # C = n-grams contained in term 2 as well as in term 2 # A = number of n-grams contained in term 1 # B = number of n-grams contained in term 2 # find n-grams contained in both lists A = len(list1) B = len(list2) # extract the items which appear in both list1 and list2 list3 = list(set(list1) & set(list2)) C = len(list3) # calculate similarity of term 1 and 2 S = float(float(2*C)/float(A+B)) return S # phonetic codes --------------------------------------- def soundex(self): soundexCode = {} # split the given text into single lines splittedParagraph = self.splitText() for line in splittedParagraph: for word in line: if not soundexCode.has_key(word): phoneticsObject = Phonetics(word) soundexValue = phoneticsObject.soundex() soundexCode[word] = soundexValue return soundexCode def metaphone(self): metaphoneCode = {} # split the given text into single lines splittedParagraph = self.splitText() for line in 
splittedParagraph: for word in line: if not metaphoneCode.has_key(word): phoneticsObject = Phonetics(word) metaphoneValue = phoneticsObject.metaphone() metaphoneCode[word] = metaphoneValue return metaphoneCode def nysiis(self): nysiisCode = {} # split the given text into single lines splittedParagraph = self.splitText() for line in splittedParagraph: for word in line: if not nysiisCode.has_key(word): phoneticsObject = Phonetics(word) nysiisValue = phoneticsObject.nysiis() nysiisCode[word] = nysiisValue return nysiisCode def caverphone(self): caverphoneCode = {} # split the given text into single lines splittedParagraph = self.splitText() for line in splittedParagraph: for word in line: if not caverphoneCode.has_key(word): phoneticsObject = Phonetics(word) caverphoneValue = phoneticsObject.caverphone() caverphoneCode[word] = caverphoneValue return caverphoneCode def phoneticCode(self): codeList = {} # split the given text into single lines splittedParagraph = self.splitText() for line in splittedParagraph: for word in line: if not codeList.has_key(word): phoneticsObject = Phonetics(word) value = phoneticsObject.phoneticCode() codeList[word] = value return codeList # language detection ----------------------------------- def isLanguage (self, keywordList): "given text is written in a certain language" # old function - substituted by isLanguageByKeywords() return self.isLanguageByKeywords (keywordList) def isLanguageByKeywords (self, keywordList): "determine the language of a given text with the use of keywords" # keywordList: list of items used to determine the language # get term frequency using tf textTf = self.tf() # lower each keyword listLength = len(keywordList) for i in range(listLength): keywordList[i] = string.lower(string.strip(keywordList[i])) # end for # derive intersection intersection = list(set(keywordList) & set(textTf.keys())) lineLanguage = len(intersection) # value value = float(float(lineLanguage)/float(listLength)) return value # synonyms 
--------------------------------------------- def getSynonyms(self, filename, encoding): # works with OpenThesaurus (plain text version) # requires an OpenThesaurus release later than 2003-10-23 # http://www.openthesaurus.de synonymFile = AdvasIo(filename, encoding) success = synonymFile.readFileContents() if not success: return False contents = synonymFile.getFileContents() searchTerm = self.text[0] synonymList = [] for line in contents: if not self.isComment(line): wordList = line.split(";") if searchTerm in wordList: synonymList += wordList # remove extra characters for i in range(len(synonymList)): synonym = synonymList[i] synonymList[i] = synonym.strip() # compact list: remove double entries synonymList = list(set(synonymList)) # maybe the search term is in the list: remove it, too if searchTerm in synonymList: synonymList = list(set(synonymList).difference(set([searchTerm]))) return synonymList def isSynonymOf(self, term, filename, encoding): synonymList = self.getSynonyms(filename, encoding) if term in synonymList: return True return False
AdvaS-Advanced-Search
/advas-advanced-search-0.2.5.tar.gz/advas-0.2.5/advas.py
advas.py
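`compareNgramLists` in `advas.py` above scores two n-gram collections with the Dice coefficient, S = 2C/(A+B). A self-contained sketch of that measure on character n-grams; the `ngrams` helper is illustrative (the package derives its n-grams through the separate `Ngram` class), and it deduplicates via sets much as `getNgramsByParagraph` does:

```python
def ngrams(word, n):
    """All overlapping character n-grams of a word (illustrative helper)."""
    return {word[i:i + n] for i in range(len(word) - n + 1)}

def dice_similarity(a, b):
    """Dice coefficient S = 2C / (A + B): C n-grams shared by both
    collections, A and B the sizes of each collection."""
    set_a, set_b = set(a), set(b)
    common = len(set_a & set_b)
    return 2.0 * common / (len(set_a) + len(set_b))

# classic example: "night" vs "nacht" share only the bigram "ht",
# so S = 2*1 / (4 + 4) = 0.25
s = dice_similarity(ngrams("night", 2), ngrams("nacht", 2))
```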
class LinkedList: def __init__(self,Iterable:object="",initialize_list:bool=False,size:int=0,initial_value:object=0,sorted:bool=False,order:bool=False,dtype:object=None)->None: self.next=None self.__dtype=dtype self.__mode=sorted self.__last=self self.__order=order self.__length=0 self.__data=0 if(initialize_list): for item in range(size): self.append(initial_value) else: for item in Iterable: self.append(item) def append(self,data:object=0)->None: self.__length+=1 tem=LinkedList() if(self.__dtype!=None): data=self.__dtype(data) tem.__data = data t=self if(self.__mode): while(t.next!=None and ((t.next.__data<data and not self.__order) or (t.next.__data>data and self.__order))): t=t.next tem.next=t.next t.next=tem else: tem.__last=self.__last self.__last.next=tem self.__last=tem def __str__(self): a="[ " t=self.next while(t!=None): if(isinstance(t.__data,str)): a+=f"'{t.__data}' ," else: a+=f"{t.__data} ," t=t.next a+="\b]" return a def __len__(self): return self.__length def extend(self,_iterable): for item in _iterable: self.append(item) def copy(self): tem=LinkedList() t=self.next while(t!=None): tem.append(t.__data) t=t.next return tem def insert_index(self,index:int,data:object): self.__length+=1 tem=LinkedList() tem.__data=data i=0 t=self while(t!=None): if(i==index or i==self.__length-1): tem.next=t.next t.next=tem break i+=1 t=t.next def __getitem__(self, item): if(not isinstance(item,int) and not isinstance(item,LinkedList)): a=item.start b=item.stop c=item.step if (c == None): c = 1 if(a==None): if(c>=0): a=0 else: a=len(self) if(b==None): if(c>=0): b=len(self) else: b=-1 tem=LinkedList() t=self if(a<-len(self) or a>len(self) or b>len(self) or b<-len(self) ): raise IndexError("Index out of range") else: var1=0 if(c<0): a,b=b+1,a+1 c=-1*c var1=-1 i=0 while(t!=None): if(i==a): break i+=1 t=t.next t=t.next k=i while(t!=None and i<b): if(k==i): tem.append(t.__data) k+=(c) t=t.next i+=1 if(var1==-1): tem.reversed() return tem elif(isinstance(item,LinkedList)): 
return item.next.__data else: if(item<0): if(item<-len(self)): raise IndexError("Index out of range") else: item=len(self)+item else: if(item>len(self)): raise IndexError("Index out of range") t=self i=0 while(t.next!=None): if(i==item): return t.next.__data t=t.next i+=1 raise StopIteration def __setitem__(self, key, value): if(isinstance(key,int)): t=self i=0 while(t.next!=None): if(i==key): t.next.__data=value break t=t.next i+=1 else: try: key.next.__data=value except: raise SyntaxError("Invalid reffernce pass for Linked list assignment") def insert_reffernce(self,obj:object,data:object)->None: self.__length+=1 tem=LinkedList() tem.__data=data tem.next=obj.next obj.next=tem def swap(self,obj1:object,obj2:object): self[obj1],self[obj2]=self[obj2],self[obj1] def __reversed__(self): t=self.copy() t.reversed() return t def reversed(self): t=self.next if(t!=None): p=self.next.next t.next=None while(p!=None): q=p.next p.next=t t=p p=q self.next=t def __eq__(self, other): self=other def __mul__(self, other:int): if(isinstance(other,int)): if(other==1): return self else: return self+self.__mul__(other-1) else: raise SyntaxError(f"Invalid operator between Linked list object and {type(other).__name__} object") def __add__(self, other): if(isinstance(other,LinkedList)): tem=LinkedList(self) t=other.next while(t!=None): tem.append(t.__data) t=t.next return tem else: raise SyntaxError("Invalid operator between linked list and %s" % (type(other).__name__)) def __abs__(self): t = self while (t.next != None): if (isinstance(t.next.__data, int) or isinstance(t.next.__data, float)): t.next.__data = abs(t.next.__data) t = t.next def Sum(self): return sum(self) def mean(self): return sum(self)/len(self) def meadian(self): # Mode = 3(Median) - 2(Mean) use this relation to compute median in o(n) ans="%.1f"%((self.mode()+(2*self.mean()))/3) return float(ans) def mode(self): d={} for item in self: if(item in d): d[item]+=1 else: d[item]=1 max=list(d.keys()) max=max[0] for item in d: 
if(d[item]>d[max] and d[item]>1): max=item l=0 s=0 for item in d: if(d[item]==d[max]): s+=item l+=1 return (s/l) def __pow__(self, power,modula=None): t = self while (t.next != None): if (isinstance(t.next.__data, int) or isinstance(t.next.__data, float)): t.next.__data = pow(t.next.__data,power,modula) t = t.next @classmethod def create_sized_list(cls,size=0,intial_value=0): tem=LinkedList() tem.__length=size for item in range(size): tem.append(intial_value) return tem def sqrt(self,modula=None): t=self self.__pow__(0.5,modula) def count(self,element,start=0,end=-1): if(end==-1): end=len(self) count=0 t=self i=0 while(t!=None): if(i>=start): if(t.next.__data==element): count+=1 i+=1 if(i==end): break t=t.next return count def index(self,value,start=0,end=-1): if(end==-1): end=len(self) t = self i = 0 while (t != None): if (i >= start): if (t.next.__data == value): return i i += 1 t=t.next if (i == end): break return -1 def pop(self,index=-1): if(index==-1): index=len(self)-1 t=self i=0 while(t.next!=None): if(i==index): t.next=t.next.next break t=t.next i+=1 self.__length-=1 def serarch(self,value:object)->bool: return self.index(value)!=-1 def remove(self,value): j=self.index(value) if(j!=-1): self.pop(j) def clear(self): self.__length=0 self.next=None def __iadd__(self, other): if(isinstance(other,LinkedList)): t=other while(t.next!=None): self.append(t.next.__data) t=t.next return self else: raise SyntaxError("Invalid operator between linked list and %s"%(type(other).__name__)) def __imul__(self, other): return self.__mul__(other) def __call__(self, *args, **kwargs): for item in args: self.append(item) def replace(self,old_value,new_value,times=-1): count=0 t=self while(t.next!=None): if(count==times): break if(t.next.__data==old_value): count+=1 t.next.__data=new_value t=t.next def rindex(self,value): t=self i=0 j=-1 while(t.next!=None): if(t.next.__data==value): j=i i+=1 t=t.next return j def concatenate(self,item=""): t=self ans="" while(t.next!=None): 
ans+=str(t.next.__data) ans+=item t=t.next return ans def partition(self,value:int,starts:object=0,ends:object=-1): if(isinstance(starts,int) and isinstance(ends,int)): startp=self midp=self endp=self.__last.__last else: startp = starts midp = starts endp = ends mid=0 end=len(self)-1 while(end>=mid and midp.next!=endp.next): if(midp.next.__data>value): endp.next.__data,midp.next.__data=midp.next.__data,endp.next.__data end-=1 endp=endp.__last elif(midp.next.__data<value): startp.next.__data,midp.next.__data=midp.next.__data,startp.next.__data startp=startp.next midp=midp.next mid+=1 else: midp = midp.next mid += 1 return (startp,endp.next) def cummulativeSum(self): tem=LinkedList([0]) sum=0 t=self while(t.next!=None): if(isinstance(t.next.__data,int) or isinstance(t.next.__data,float)): sum+=t.next.__data tem.append(sum) t=t.next return tem def Max(self): return max(self) def Min(self): return min(self) def join(self,item): t=self while(t.next!=None): self.insert_reffernce(t.next,item) t=t.next.next def maxSum(self): sum=0 t=self try: ans=t.next.__data except: ans=0 while(t.next!=None): if (isinstance(t.next.__data, int) or isinstance(t.next.__data, float)): sum+=(t.next.__data) if(sum<0): sum=0 ans = max(ans, sum) t=t.next return ans def maxProduct(self): t = self try: max1 = t.next.__data max2 = t.next.__data min1 = t.next.__data min2 = -t.next.__data except: min1 = 0 max1 = 0 max2 = 0 min2 = 0 while (t.next != None): if (isinstance(t.next.__data, int) or isinstance(t.next.__data, float)): if(max1<t.next.__data): max1=t.next.__data elif(max2<t.next.__data and t.next.__data<=max1): max2=t.next.__data else: pass if(min1>=t.next.__data): min1=t.next.__data elif (min2 >= t.next.__data and t.next.__data>=min1): min2 = t.next.__data t = t.next if(min1*min2>max2*max1): return (min1*min2,(min1,min2)) return (max1*max2,(max1,max2)) def shift(self,value,side=True): startp = self midp = self endp = self.__last.__last mid = 0 end = len(self) - 1 while (end >= mid and 
midp.next != None): if (midp.next.__data!=value and side): endp.next.__data, midp.next.__data = midp.next.__data, endp.next.__data end -= 1 endp = endp.__last elif (midp.next.__data!=value and not side): startp.next.__data, midp.next.__data = midp.next.__data, startp.next.__data startp = startp.next midp = midp.next mid += 1 else: midp = midp.next mid += 1 def __fact(self,n): a=1 for i in range(2,n+1): a*=i return a def factorial(self): t = self while (t.next != None): if (isinstance(t.next.__data, int) or isinstance(t.next.__data, float)): t.next.__data=self.__fact(t.next.__data) t = t.next def sort(self,order:bool=False): tem=LinkedList(sorted(self,reverse=order)) t=self while(t.next!=None): t.next.__data=tem.next.__data tem=tem.next t=t.next def power(self,power,modula=None): self.__pow__(power,modula)
Advance-LinkedList
/Advance-LinkedList-1.0.2.tar.gz/Advance-LinkedList-1.0.2/LinkedList/__main__.py
__main__.py
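The `maxSum` method of `LinkedList` above is Kadane's maximum-subarray algorithm: keep a running sum, reset it whenever it drops below zero, and track the best value seen. A standalone sketch over a plain Python list; names are illustrative, and note that, like the original, the reset step means an all-negative input reports 0:

```python
def max_subarray_sum(values):
    """Kadane's algorithm: best sum of any contiguous run, resetting the
    running sum when it goes negative (mirrors LinkedList.maxSum)."""
    best = values[0] if values else 0
    running = 0
    for v in values:
        running += v
        if running < 0:
            running = 0
        best = max(best, running)
    return best

# best contiguous run is [4, -1, 2, 1] with sum 6
result = max_subarray_sum([-2, 1, -3, 4, -1, 2, 1, -5, 4])
```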
__version__="1.0.2" __author__="Nitin Gupta" class LinkedList: def __init__(self,Iterable:object="",initialize_list:bool=False,size:int=0,initial_value:object=0,sorted:bool=False,reverse:bool=False,dtype:object=None)->None: self.next=None self.dtype=dtype self.sorted=sorted self.__last=self self.reverse=reverse self.__length=0 self.data=0 if(initialize_list): for item in range(size): self.append(initial_value) else: for item in Iterable: self.append(item) def append(self,data:object=0)->None: self.__length+=1 tem=LinkedList() if(self.dtype!=None): data=self.dtype(data) tem.data = data t=self if(self.sorted): while(t.next!=None and ((t.next.data<data and not self.reverse) or (t.next.data>data and self.reverse))): t=t.next tem.next=t.next t.next=tem else: tem.__last=self.__last self.__last.next=tem self.__last=tem def __str__(self): a="[ " t=self.next while(t!=None): if(isinstance(t.data,str)): a+=f"'{t.data}' ," else: a+=f"{t.data} ," t=t.next a+="\b]" return a def __len__(self): return self.__length def extend(self,_iterable): for item in _iterable: self.append(item) def copy(self): tem=LinkedList() t=self.next while(t!=None): tem.append(t.data) t=t.next return tem def insert_index(self,index:int,data:object): self.__length+=1 tem=LinkedList() tem.data=data i=0 t=self while(t!=None): if(i==index or i==self.__length-1): tem.next=t.next t.next=tem break i+=1 t=t.next def __getitem__(self, item): if(not isinstance(item,int) and not isinstance(item,LinkedList)): a=item.start b=item.stop c=item.step if (c == None): c = 1 if(a==None): if(c>=0): a=0 else: a=len(self) if(b==None): if(c>=0): b=len(self) else: b=-1 tem=LinkedList() t=self if(a<-len(self) or a>len(self) or b>len(self) or b<-len(self) ): raise IndexError("Index out of range") else: var1=0 if(c<0): a,b=b+1,a+1 c=-1*c var1=-1 i=0 while(t!=None): if(i==a): break i+=1 t=t.next t=t.next k=i while(t!=None and i<b): if(k==i): tem.append(t.data) k+=(c) t=t.next i+=1 if(var1==-1): tem.reversed() return tem 
elif(isinstance(item,LinkedList)): return item.next.data else: if(item<0): if(item<-len(self)): raise IndexError("Index out of range") else: item=len(self)+item else: if(item>len(self)): raise IndexError("Index out of range") t=self i=0 while(t.next!=None): if(i==item): return t.next.data t=t.next i+=1 raise StopIteration def __setitem__(self, key, value): if(isinstance(key,int)): t=self i=0 while(t.next!=None): if(i==key): t.next.data=value break t=t.next i+=1 else: try: key.next.data=value except: raise SyntaxError("Invalid reffernce pass for Linked list assignment") def insert_reffernce(self,obj:object,data:object)->None: self.__length+=1 tem=LinkedList() tem.data=data tem.next=obj.next obj.next=tem def swap(self,obj1:object,obj2:object): self[obj1],self[obj2]=self[obj2],self[obj1] def __reversed__(self): t=self.copy() t.reversed() return t def reversed(self): t=self.next if(t!=None): p=self.next.next t.next=None while(p!=None): q=p.next p.next=t t=p p=q self.next=t def __eq__(self, other): self=other def __mul__(self, other:int): if(isinstance(other,int)): if(other==1): return self else: return self+self.__mul__(other-1) else: raise SyntaxError(f"Invalid operator between Linked list object and {type(other).__name__} object") def __add__(self, other): if(isinstance(other,LinkedList)): tem=LinkedList(self) t=other.next while(t!=None): tem.append(t.data) t=t.next return tem else: raise SyntaxError("Invalid operator between linked list and %s" % (type(other).__name__)) def __abs__(self): t = self while (t.next != None): if (isinstance(t.next.data, int) or isinstance(t.next.data, float)): t.next.data = abs(t.next.data) t = t.next def Sum(self): return sum(self) def mean(self): return sum(self)/len(self) def meadian(self): # Mode = 3(Median) - 2(Mean) use this relation to compute median in o(n) ans="%.1f"%((self.mode()+(2*self.mean()))/3) return float(ans) def mode(self): d={} for item in self: if(item in d): d[item]+=1 else: d[item]=1 max=list(d.keys()) max=max[0] 
for item in d: if(d[item]>d[max] and d[item]>1): max=item l=0 s=0 for item in d: if(d[item]==d[max]): s+=item l+=1 return (s/l) def __pow__(self, power,modula=None): t = self while (t.next != None): if (isinstance(t.next.data, int) or isinstance(t.next.data, float)): t.next.data = pow(t.next.data,power,modula) t = t.next @classmethod def create_sized_list(cls,size=0,intial_value=0): tem=LinkedList() tem.__length=size for item in range(size): tem.append(intial_value) return tem def sqrt(self,modula=None): t=self self.__pow__(0.5,modula) def count(self,element,start=0,end=-1): if(end==-1): end=len(self) count=0 t=self i=0 while(t!=None): if(i>=start): if(t.next.data==element): count+=1 i+=1 if(i==end): break t=t.next return count def index(self,value,start=0,end=-1): if(end==-1): end=len(self) t = self i = 0 while (t != None): if (i >= start): if (t.next.data == value): return i i += 1 t=t.next if (i == end): break return -1 def pop(self,index=-1): if(index==-1): index=len(self)-1 t=self i=0 while(t.next!=None): if(i==index): t.next=t.next.next break t=t.next i+=1 self.__length-=1 def serarch(self,value:object)->bool: return self.index(value)!=-1 def remove(self,value): j=self.index(value) if(j!=-1): self.pop(j) def clear(self): self.__length=0 self.next=None def __iadd__(self, other): if(isinstance(other,LinkedList)): t=other while(t.next!=None): self.append(t.next.data) t=t.next return self else: raise SyntaxError("Invalid operator between linked list and %s"%(type(other).__name__)) def __imul__(self, other): return self.__mul__(other) def __call__(self, *args, **kwargs): for item in args: self.append(item) def replace(self,old_value,new_value,times=-1): count=0 t=self while(t.next!=None): if(count==times): break if(t.next.data==old_value): count+=1 t.next.data=new_value t=t.next def rindex(self,value): t=self i=0 j=-1 while(t.next!=None): if(t.next.data==value): j=i i+=1 t=t.next return j def concatenate(self,item=""): t=self ans="" while(t.next!=None): 
ans+=str(t.next.data) ans+=item t=t.next return ans def partition(self,value:int,starts:object=0,ends:object=-1): if(isinstance(starts,int) and isinstance(ends,int)): startp=self midp=self endp=self.__last.__last else: startp = starts midp = starts endp = ends mid=0 end=len(self)-1 while(end>=mid and midp.next!=endp.next): if(midp.next.data>value): endp.next.data,midp.next.data=midp.next.data,endp.next.data end-=1 endp=endp.__last elif(midp.next.data<value): startp.next.data,midp.next.data=midp.next.data,startp.next.data startp=startp.next midp=midp.next mid+=1 else: midp = midp.next mid += 1 return (startp,endp.next) def cummulativeSum(self): tem=LinkedList([0]) sum=0 t=self while(t.next!=None): if(isinstance(t.next.data,int) or isinstance(t.next.data,float)): sum+=t.next.data tem.append(sum) t=t.next return tem def Max(self): return max(self) def Min(self): return min(self) def join(self,item): t=self while(t.next!=None): self.insert_reffernce(t.next,item) t=t.next.next def maxSum(self): sum=0 t=self try: ans=t.next.data except: ans=0 while(t.next!=None): if (isinstance(t.next.data, int) or isinstance(t.next.data, float)): sum+=(t.next.data) if(sum<0): sum=0 ans = max(ans, sum) t=t.next return ans def maxProduct(self): t = self try: max1 = t.next.data max2 = t.next.data min1 = t.next.data min2 = -t.next.data except: min1 = 0 max1 = 0 max2 = 0 min2 = 0 while (t.next != None): if (isinstance(t.next.data, int) or isinstance(t.next.data, float)): if(max1<t.next.data): max1=t.next.data elif(max2<t.next.data and t.next.data<=max1): max2=t.next.data else: pass if(min1>=t.next.data): min1=t.next.data elif (min2 >= t.next.data and t.next.data>=min1): min2 = t.next.data t = t.next if(min1*min2>max2*max1): return (min1*min2,(min1,min2)) return (max1*max2,(max1,max2)) def shift(self,value,side=True): startp = self midp = self endp = self.__last.__last mid = 0 end = len(self) - 1 while (end >= mid and midp.next != None): if (midp.next.data!=value and side): endp.next.data, 
midp.next.data = midp.next.data, endp.next.data end -= 1 endp = endp.__last elif (midp.next.data!=value and not side): startp.next.data, midp.next.data = midp.next.data, startp.next.data startp = startp.next midp = midp.next mid += 1 else: midp = midp.next mid += 1 def __fact(self,n): a=1 for i in range(2,n+1): a*=i return a def factorial(self): t = self while (t.next != None): if (isinstance(t.next.data, int) or isinstance(t.next.data, float)): t.next.data=self.__fact(t.next.data) t = t.next def sort(self,reverse:bool=False): tem=LinkedList(sorted(self,reverse=reverse)) t=self while(t.next!=None): t.next.data=tem.next.data tem=tem.next t=t.next def power(self,power,modula=None): self.__pow__(power,modula)
Advance-LinkedList
/Advance-LinkedList-1.0.2.tar.gz/Advance-LinkedList-1.0.2/LinkedList/__init__.py
__init__.py
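The `partition` method above rearranges values around a pivot in a single pass: smaller items are swapped toward the front, larger items toward the back, and equal items stay in the middle — the three-way (Dutch national flag) scheme. A standalone array-based sketch of the same idea; the function name is illustrative, and unlike the original, which walks node pointers and returns the boundary nodes, this version works on a plain list:

```python
def three_way_partition(a, pivot):
    """Single-pass three-way partition (Dutch national flag): values below
    the pivot end up left of it, values above end up right of it."""
    low, mid, high = 0, 0, len(a) - 1
    while mid <= high:
        if a[mid] < pivot:
            a[low], a[mid] = a[mid], a[low]
            low += 1
            mid += 1
        elif a[mid] > pivot:
            a[mid], a[high] = a[high], a[mid]
            high -= 1
        else:
            mid += 1
    return a

# everything < 6 lands before the 6s, everything > 6 after them
arr = three_way_partition([9, 4, 6, 2, 6, 8, 1], 6)
```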
AdvanceLinkedList
/AdvanceLinkedList-1.0.3.tar.gz/AdvanceLinkedList-1.0.3/LinkedList/__main__.py
__main__.py
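The `maxSum` method in the flattened `LinkedList` source above is Kadane's maximum-subarray algorithm: keep a running sum, reset it to zero whenever it goes negative, and track the best value seen. A minimal standalone sketch of the same scan over a plain Python list (an illustration, not the package's API):

```python
def max_subarray_sum(values):
    """Kadane's algorithm: largest sum of any contiguous run of numbers."""
    best = values[0] if values else 0
    running = 0
    for v in values:
        running += v
        if running < 0:
            running = 0  # a negative prefix can never help a later run
        best = max(best, running)
    return best

print(max_subarray_sum([2, -3, 4, -1, 2, -5, 3]))  # -> 5
```

As in the original `maxSum`, an all-negative input yields 0, because the running sum is reset before it can ever exceed zero.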
__version__="1.0.3" __author__="Nitin Gupta" class LinkedList: def __init__(self,Iterable:object="",initialize_list:bool=False,size:int=0,initial_value:object=0,sorted:bool=False,reverse:bool=False,dtype:object=None)->None: self.next=None self.dtype=dtype self.sorted=sorted self.__last=self self.reverse=reverse self.__length=0 self.data=0 if(initialize_list): for item in range(size): self.append(initial_value) else: for item in Iterable: self.append(item) def append(self,data:object=0)->None: self.__length+=1 tem=LinkedList() if(self.dtype!=None): data=self.dtype(data) tem.data = data t=self if(self.sorted): while(t.next!=None and ((t.next.data<data and not self.reverse) or (t.next.data>data and self.reverse))): t=t.next tem.next=t.next t.next=tem else: tem.__last=self.__last self.__last.next=tem self.__last=tem def __str__(self): a="[ " t=self.next while(t!=None): if(isinstance(t.data,str)): a+=f"'{t.data}' ," else: a+=f"{t.data} ," t=t.next a+="\b]" return a def __len__(self): return self.__length def extend(self,_iterable): for item in _iterable: self.append(item) def copy(self): tem=LinkedList() t=self.next while(t!=None): tem.append(t.data) t=t.next return tem def insert_index(self,index:int,data:object): self.__length+=1 tem=LinkedList() tem.data=data i=0 t=self while(t!=None): if(i==index or i==self.__length-1): tem.next=t.next t.next=tem break i+=1 t=t.next def __getitem__(self, item): if(not isinstance(item,int) and not isinstance(item,LinkedList)): a=item.start b=item.stop c=item.step if (c == None): c = 1 if(a==None): if(c>=0): a=0 else: a=len(self) if(b==None): if(c>=0): b=len(self) else: b=-1 tem=LinkedList() t=self if(a<-len(self) or a>len(self) or b>len(self) or b<-len(self) ): raise IndexError("Index out of range") else: var1=0 if(c<0): a,b=b+1,a+1 c=-1*c var1=-1 i=0 while(t!=None): if(i==a): break i+=1 t=t.next t=t.next k=i while(t!=None and i<b): if(k==i): tem.append(t.data) k+=(c) t=t.next i+=1 if(var1==-1): tem.reversed() return tem 
elif(isinstance(item,LinkedList)): return item.next.data else: if(item<0): if(item<-len(self)): raise IndexError("Index out of range") else: item=len(self)+item else: if(item>len(self)): raise IndexError("Index out of range") t=self i=0 while(t.next!=None): if(i==item): return t.next.data t=t.next i+=1 raise StopIteration def __setitem__(self, key, value): if(isinstance(key,int)): t=self i=0 while(t.next!=None): if(i==key): t.next.data=value break t=t.next i+=1 else: try: key.next.data=value except: raise SyntaxError("Invalid reffernce pass for Linked list assignment") def insert_reffernce(self,obj:object,data:object)->None: self.__length+=1 tem=LinkedList() tem.data=data tem.next=obj.next obj.next=tem def swap(self,obj1:object,obj2:object): self[obj1],self[obj2]=self[obj2],self[obj1] def __reversed__(self): t=self.copy() t.reversed() return t def reversed(self): t=self.next if(t!=None): p=self.next.next t.next=None while(p!=None): q=p.next p.next=t t=p p=q self.next=t def __eq__(self, other): self=other def __mul__(self, other:int): if(isinstance(other,int)): if(other==1): return self else: return self+self.__mul__(other-1) else: raise SyntaxError(f"Invalid operator between Linked list object and {type(other).__name__} object") def __add__(self, other): if(isinstance(other,LinkedList)): tem=LinkedList(self) t=other.next while(t!=None): tem.append(t.data) t=t.next return tem else: raise SyntaxError("Invalid operator between linked list and %s" % (type(other).__name__)) def __abs__(self): t = self while (t.next != None): if (isinstance(t.next.data, int) or isinstance(t.next.data, float)): t.next.data = abs(t.next.data) t = t.next def Sum(self): return sum(self) def mean(self): return sum(self)/len(self) def meadian(self): # Mode = 3(Median) - 2(Mean) use this relation to compute median in o(n) ans="%.1f"%((self.mode()+(2*self.mean()))/3) return float(ans) def mode(self): d={} for item in self: if(item in d): d[item]+=1 else: d[item]=1 max=list(d.keys()) max=max[0] 
for item in d: if(d[item]>d[max] and d[item]>1): max=item l=0 s=0 for item in d: if(d[item]==d[max]): s+=item l+=1 return (s/l) def __pow__(self, power,modula=None): t = self while (t.next != None): if (isinstance(t.next.data, int) or isinstance(t.next.data, float)): t.next.data = pow(t.next.data,power,modula) t = t.next @classmethod def create_sized_list(cls,size=0,intial_value=0): tem=LinkedList() tem.__length=size for item in range(size): tem.append(intial_value) return tem def sqrt(self,modula=None): t=self self.__pow__(0.5,modula) def count(self,element,start=0,end=-1): if(end==-1): end=len(self) count=0 t=self i=0 while(t!=None): if(i>=start): if(t.next.data==element): count+=1 i+=1 if(i==end): break t=t.next return count def index(self,value,start=0,end=-1): if(end==-1): end=len(self) t = self i = 0 while (t != None): if (i >= start): if (t.next.data == value): return i i += 1 t=t.next if (i == end): break return -1 def pop(self,index=-1): if(index==-1): index=len(self)-1 t=self i=0 while(t.next!=None): if(i==index): t.next=t.next.next break t=t.next i+=1 self.__length-=1 def serarch(self,value:object)->bool: return self.index(value)!=-1 def remove(self,value): j=self.index(value) if(j!=-1): self.pop(j) def clear(self): self.__length=0 self.next=None def __iadd__(self, other): if(isinstance(other,LinkedList)): t=other while(t.next!=None): self.append(t.next.data) t=t.next return self else: raise SyntaxError("Invalid operator between linked list and %s"%(type(other).__name__)) def __imul__(self, other): return self.__mul__(other) def __call__(self, *args, **kwargs): for item in args: self.append(item) def replace(self,old_value,new_value,times=-1): count=0 t=self while(t.next!=None): if(count==times): break if(t.next.data==old_value): count+=1 t.next.data=new_value t=t.next def rindex(self,value): t=self i=0 j=-1 while(t.next!=None): if(t.next.data==value): j=i i+=1 t=t.next return j def concatenate(self,item=""): t=self ans="" while(t.next!=None): 
ans+=str(t.next.data) ans+=item t=t.next return ans def partition(self,value:int,starts:object=0,ends:object=-1): if(isinstance(starts,int) and isinstance(ends,int)): startp=self midp=self endp=self.__last.__last else: startp = starts midp = starts endp = ends mid=0 end=len(self)-1 while(end>=mid and midp.next!=endp.next): if(midp.next.data>value): endp.next.data,midp.next.data=midp.next.data,endp.next.data end-=1 endp=endp.__last elif(midp.next.data<value): startp.next.data,midp.next.data=midp.next.data,startp.next.data startp=startp.next midp=midp.next mid+=1 else: midp = midp.next mid += 1 return (startp,endp.next) def cummulativeSum(self): tem=LinkedList([0]) sum=0 t=self while(t.next!=None): if(isinstance(t.next.data,int) or isinstance(t.next.data,float)): sum+=t.next.data tem.append(sum) t=t.next return tem def Max(self): return max(self) def Min(self): return min(self) def join(self,item): t=self while(t.next!=None): self.insert_reffernce(t.next,item) t=t.next.next def maxSum(self): sum=0 t=self try: ans=t.next.data except: ans=0 while(t.next!=None): if (isinstance(t.next.data, int) or isinstance(t.next.data, float)): sum+=(t.next.data) if(sum<0): sum=0 ans = max(ans, sum) t=t.next return ans def maxProduct(self): t = self try: max1 = t.next.data max2 = t.next.data min1 = t.next.data min2 = -t.next.data except: min1 = 0 max1 = 0 max2 = 0 min2 = 0 while (t.next != None): if (isinstance(t.next.data, int) or isinstance(t.next.data, float)): if(max1<t.next.data): max1=t.next.data elif(max2<t.next.data and t.next.data<=max1): max2=t.next.data else: pass if(min1>=t.next.data): min1=t.next.data elif (min2 >= t.next.data and t.next.data>=min1): min2 = t.next.data t = t.next if(min1*min2>max2*max1): return (min1*min2,(min1,min2)) return (max1*max2,(max1,max2)) def shift(self,value,side=True): startp = self midp = self endp = self.__last.__last mid = 0 end = len(self) - 1 while (end >= mid and midp.next != None): if (midp.next.data!=value and side): endp.next.data, 
midp.next.data = midp.next.data, endp.next.data end -= 1 endp = endp.__last elif (midp.next.data!=value and not side): startp.next.data, midp.next.data = midp.next.data, startp.next.data startp = startp.next midp = midp.next mid += 1 else: midp = midp.next mid += 1 def __fact(self,n): a=1 for i in range(2,n+1): a*=i return a def factorial(self): t = self while (t.next != None): if (isinstance(t.next.data, int) or isinstance(t.next.data, float)): t.next.data=self.__fact(t.next.data) t = t.next def sort(self,reverse:bool=False): tem=LinkedList(sorted(self,reverse=reverse)) t=self while(t.next!=None): t.next.data=tem.next.data tem=tem.next t=t.next def power(self,power,modula=None): self.__pow__(power,modula)
AdvanceLinkedList
/AdvanceLinkedList-1.0.3.tar.gz/AdvanceLinkedList-1.0.3/LinkedList/__init__.py
__init__.py
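When the list is constructed with `sorted=True`, the `append` method above performs an ordered insertion: it walks from the head until it finds the first node whose value is not smaller, then splices the new node in. A self-contained sketch of that splice (the `Node`/`sorted_insert` names are hypothetical, not the package's API):

```python
class Node:
    def __init__(self, data=None):
        self.data = data
        self.next = None

def sorted_insert(head, data):
    """Insert data into an ascending list headed by a sentinel node."""
    t = head
    # walk until the next node's value is >= data, as LinkedList.append does
    while t.next is not None and t.next.data < data:
        t = t.next
    node = Node(data)
    node.next = t.next
    t.next = node

head = Node()  # sentinel node, mirroring the LinkedList object itself
for x in [3, 1, 2]:
    sorted_insert(head, x)
out = []
t = head.next
while t is not None:
    out.append(t.data)
    t = t.next
print(out)  # -> [1, 2, 3]
```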
import math import random #Cell Type: # Input Cell # Backfed Input Cell # Noisy Input Cell # Hidden Cell # Probabilistic Hidden Cell # Spiking Hidden Cell # Output Cell # Match Input Output Cell # Recurrent Cell # Memory Cell # Different Memory Cell # Kernel # Convolution or Pool def function(ftype:str,z:float,prime=False,alpha=1): """ ftype : "Sigmoid,ELU,..." z : Pre-activation prime : True/False alpha : Default(1) Function : # Binary Step (z) # Linear (z, alpha) # Sigmoid (z) # Tanh (z) # ReLU (z) # Leaky-ReLU (z, alpha) # Parameterised-ReLU (z, alpha) # Exponential-Linear-Unit (z, alpha) """ if ftype == "Binary-Step": if prime == False: if z < 0: y = 0 else: y = 1 else: y = 0 # the step function has zero derivative almost everywhere if ftype == "Linear": if prime == False: y = z*alpha else: y = alpha if ftype == "Sigmoid": if prime == False: y = 1/(1+math.exp(-z)) else: y = (1/(1+math.exp(-z))) * (1-(1/(1+math.exp(-z)))) if ftype == "Tanh": if prime == False: y = (math.exp(z)-math.exp(-z))/(math.exp(z)+math.exp(-z)) else: y = 1 - ((math.exp(z)-math.exp(-z))/(math.exp(z)+math.exp(-z)))**2 if ftype == "ReLU": if prime == False: y = max(0,z) else: if z >= 0: y = 1 else: y = 0 if ftype == "Leaky-ReLU": if prime == False: y = max(alpha*z, z) else: if z > 0: y = 1 else: y = alpha if ftype == "Parameterised-ReLU": if prime == False: if z >= 0: y = z else: y = alpha*z else: if z >= 0: y = 1 else: y = alpha if ftype == "Exponential-Linear-Unit": if prime == False: if z >= 0: y = z else: y = alpha*(math.exp(z)-1) else: if z >= 0: y = 1 else: y = alpha*math.exp(z) return y class neural_network: def __init__(self): self.network = [{},{},{}] self.used_neuron_feedforward = {} self.used_neuron_backward = {} def Add_Input_Neuron(self,neuron_name:str,neuron_type:str): """ neuron_type: -Input Cell -Backfed Input Cell #not available -Noisy Input Cell #not available """ self.network[0][neuron_name] = { "type":neuron_type, "output_bridge":{}, "y":0 } self.used_neuron_feedforward[neuron_name] = False 
self.used_neuron_backward[neuron_name] = False def Add_Hidden_Neuron(self,neuron_name:str,neuron_type:str,activation_type:str,alpha:float=None,biais:float=0.0): """ neuron_type : -Hidden Cell -Probablistic Hidden Cell #not available -Spiking Hidden Cell #not available #----------# activation_type : -Binary Step (z) -Linear (z, alpha) -Sigmoid (z) -Tanh (z) -ReLU (z) -Leaky-ReLU (z, alpha) -Parameterised-ReLU (z, alpha) -Exponential-Linear-Unit (z, alpha) """ self.network[1][neuron_name] = { "type":neuron_type, "activation":{ "ftype":activation_type, "alpha":alpha }, "input_bridge":{}, "output_bridge":{}, "biais":biais, "y":0, "delta":0 } self.used_neuron_feedforward[neuron_name] = False self.used_neuron_backward[neuron_name] = False def Add_Output_Neuron(self,neuron_name:str,neuron_type:str,activation_type:str,alpha:float=None,biais:float=0.0): """ neuron_type : -Output Cell -Match Input Output Cell #not available #----------# activation_type : -Binary Step (z) -Linear (z, alpha) -Sigmoid (z) -Tanh (z) -ReLU (z) -Leaky-ReLU (z, alpha) -Parameterised-ReLU (z, alpha) -Exponential-Linear-Unit (z, alpha) """ self.network[2][neuron_name] = { "type":neuron_type, "activation":{ "ftype":activation_type, "alpha":alpha }, "input_bridge":{}, "biais":biais, "y":0, "delta":0 } self.used_neuron_feedforward[neuron_name] = False self.used_neuron_backward[neuron_name] = False def Add_Bridge(self,bridge_list:list): """ bridge_list: [ [from,to], [from,to], ... 
] """ #for every bridge for bridge in bridge_list: #search the whole INPUT_LAYER for input_neuron in self.network[0]: #if one of this layer's neurons is in the selected bridge if input_neuron in bridge: #add the second neuron to the output bridges self.network[0][input_neuron]["output_bridge"][bridge[1]] = random.uniform(-1,1) #search the whole HIDDEN_LAYER for hidden_neuron in self.network[1]: #if one of this layer's neurons is in the selected bridge if hidden_neuron in bridge: #check the direction (out/in)->(0,1) types = bridge.index(hidden_neuron) if types == 0:#if it is an outgoing bridge self.network[1][hidden_neuron]["output_bridge"][bridge[1]] = random.uniform(-1,1) else:#if it is an incoming bridge self.network[1][hidden_neuron]["input_bridge"][bridge[0]] = random.uniform(-1,1) #search the whole OUTPUT_LAYER for output_neuron in self.network[2]: #if one of this layer's neurons is in the selected bridge if output_neuron in bridge: self.network[2][output_neuron]["input_bridge"][bridge[0]] = random.uniform(-1,1) def train(self,inputs,expected,learning_rate,nb_epoch,display=False): #for each epoch for epoch in range(nb_epoch): #reset the epoch error to 0 error = 0 #for every input sample for x,values in enumerate(inputs): #run feed_forward on the values outputs = self.feed_forward(values) #accumulate the squared differences over the outputs error += sum([(expected[x][i]-outputs[i])**2 for i in range(len(expected[x]))]) #compute each neuron's error term self.backward(expected[x]) self.update_weights(values,learning_rate) if display == True: print('>epoch=%d, lrate=%.3f, error=%.3f' % (epoch, learning_rate, error)) def feed_forward(self,inputs_values): ####MARK ALL NEURONS AS "NOT USED"#### self.used_neuron_feedforward = {x: False for x in self.used_neuron_feedforward} ################################################# #for each neuron of the INPUT_LAYER for x_input_neuron,input_neuron in 
enumerate(self.network[0]): #set the input values self.network[0][input_neuron]["y"] = inputs_values[x_input_neuron] self.used_neuron_feedforward[input_neuron] = True #while not every neuron has been computed while all(self.used_neuron_feedforward[x] == True for x in self.used_neuron_feedforward) == False: #for each neuron of the HIDDEN_LAYER for hidden_neuron in self.network[1]: #if all of its input neurons have been computed if all(self.used_neuron_feedforward[in_neuron]==True for in_neuron in self.network[1][hidden_neuron]["input_bridge"]): #neuron pre-activation z = self.pre_activation(self.network[1][hidden_neuron]) #compute the activation y = function(self.network[1][hidden_neuron]["activation"]["ftype"],z,alpha=self.network[1][hidden_neuron]["activation"]["alpha"]) self.network[1][hidden_neuron]["y"] = y self.used_neuron_feedforward[hidden_neuron] = True #for each neuron of the OUTPUT_LAYER for output_neuron in self.network[2]: #if all of its input neurons have been computed if all(self.used_neuron_feedforward[in_neuron]==True for in_neuron in self.network[2][output_neuron]["input_bridge"]): #neuron pre-activation z = self.pre_activation(self.network[2][output_neuron]) #compute the activation y = function(self.network[2][output_neuron]["activation"]["ftype"],z,alpha=self.network[2][output_neuron]["activation"]["alpha"]) self.network[2][output_neuron]["y"] = y self.used_neuron_feedforward[output_neuron] = True outputs = [self.network[2][x]["y"] for x in self.network[2]] return outputs def pre_activation(self,current_neuron): z = current_neuron["biais"] #for every incoming neuron for in_neuron in current_neuron["input_bridge"]: #compute value * weight #search layer by layer for the requested neuron for layer in self.network: #if the layer contains the neuron if in_neuron in layer.keys(): in_neuron_data = layer[in_neuron] break z += in_neuron_data["y"]*current_neuron["input_bridge"][in_neuron] return z def backward(self,expected): 
####MARK ALL NEURONS AS "NOT USED"#### self.used_neuron_backward = {x: False for x in self.used_neuron_backward} #mark every INPUT_LAYER neuron as already done for input_neuron in self.network[0]: self.used_neuron_backward[input_neuron] = True ################################################# #Compute the OUTPUT_LAYER error #for each neuron of the OUTPUT_LAYER for x_output_neuron,output_neuron in enumerate(self.network[2]): #compute the difference between the expected and the obtained value error = expected[x_output_neuron] - self.network[2][output_neuron]["y"] #compute this neuron's error term self.network[2][output_neuron]['delta'] = error* function(self.network[2][output_neuron]["activation"]["ftype"],self.network[2][output_neuron]["y"],prime=True,alpha=self.network[2][output_neuron]["activation"]["alpha"]) self.used_neuron_backward[output_neuron] = True #While not every hidden neuron is done while all(self.used_neuron_backward[x] == True for x in self.used_neuron_backward) == False: #for each neuron of the HIDDEN_LAYER for hidden_neuron in self.network[1]: #if all of its output neurons have been computed if all(self.used_neuron_backward[out_neuron]==True for out_neuron in self.network[1][hidden_neuron]["output_bridge"]) and not all(self.used_neuron_backward[n] == True for n in self.used_neuron_backward): #set the error to 0 error = 0.0 #for each output neuron for out_neuron in self.network[1][hidden_neuron]["output_bridge"]: #if the neuron is in the HIDDEN_LAYER if out_neuron in self.network[1].keys(): #multiply the bridge weight by the output neuron's error term error += (self.network[1][hidden_neuron]['output_bridge'][out_neuron] * self.network[1][out_neuron]['delta']) #if the neuron is in the OUTPUT_LAYER if out_neuron in self.network[2].keys(): #multiply the bridge weight by the output neuron's error term error += (self.network[1][hidden_neuron]['output_bridge'][out_neuron] * self.network[2][out_neuron]['delta']) #store this neuron's error term self.network[1][hidden_neuron]["delta"] = error * function(self.network[1][hidden_neuron]["activation"]["ftype"],self.network[1][hidden_neuron]["y"],prime=True,alpha=self.network[1][hidden_neuron]["activation"]["alpha"]) self.used_neuron_backward[hidden_neuron] = True def update_weights(self,inputs,learning_rate): ####MARK ALL NEURONS AS "NOT USED"#### self.used_neuron_feedforward = {x: False for x in self.used_neuron_feedforward} ################################################# #for each neuron of the INPUT_LAYER for x_input_neuron,input_neuron in enumerate(self.network[0]): #set the input values self.network[0][input_neuron]["y"] = inputs[x_input_neuron] self.used_neuron_feedforward[input_neuron] = True #while not every neuron has been computed while all(self.used_neuron_feedforward[x] == True for x in self.used_neuron_feedforward) == False: #for each neuron of the HIDDEN_LAYER for hidden_neuron in self.network[1]: #if all of its input neurons have been computed if all(self.used_neuron_feedforward[in_neuron]==True for in_neuron in self.network[1][hidden_neuron]["input_bridge"]): #For each input for in_neuron in self.network[1][hidden_neuron]["input_bridge"]: #if the input is in the INPUT_LAYER if in_neuron in self.network[0].keys(): #update the input weight #of the current neuron self.network[1][hidden_neuron]["input_bridge"][in_neuron] += learning_rate * self.network[1][hidden_neuron]["delta"] * self.network[0][in_neuron]["y"] #of the input neuron self.network[0][in_neuron]["output_bridge"][hidden_neuron] = self.network[1][hidden_neuron]["input_bridge"][in_neuron] #if the input is in the HIDDEN_LAYER if in_neuron in self.network[1].keys(): #update the input weight #of the current neuron self.network[1][hidden_neuron]["input_bridge"][in_neuron] += learning_rate * self.network[1][hidden_neuron]["delta"] * self.network[1][in_neuron]["y"] #of the previous neuron 
self.network[1][in_neuron]["output_bridge"][hidden_neuron] = self.network[1][hidden_neuron]["input_bridge"][in_neuron] #update the bias self.network[1][hidden_neuron]["biais"] += learning_rate * self.network[1][hidden_neuron]["delta"] self.used_neuron_feedforward[hidden_neuron] = True #for each neuron of the OUTPUT_LAYER for output_neuron in self.network[2]: #if all of its input neurons have been computed if all(self.used_neuron_feedforward[in_neuron]==True for in_neuron in self.network[2][output_neuron]["input_bridge"]): #For each input for in_neuron in self.network[2][output_neuron]['input_bridge']: #if the input is in the INPUT_LAYER if in_neuron in self.network[0].keys(): #update the input weight #of the current neuron self.network[2][output_neuron]["input_bridge"][in_neuron] += learning_rate * self.network[2][output_neuron]["delta"] * self.network[0][in_neuron]["y"] #of the input neuron self.network[0][in_neuron]["output_bridge"][output_neuron] = self.network[2][output_neuron]["input_bridge"][in_neuron] #if the input is in the HIDDEN_LAYER if in_neuron in self.network[1].keys(): #update the input weight #of the current neuron self.network[2][output_neuron]["input_bridge"][in_neuron] += learning_rate * self.network[2][output_neuron]["delta"] * self.network[1][in_neuron]["y"] #of the previous neuron self.network[1][in_neuron]["output_bridge"][output_neuron] = self.network[2][output_neuron]["input_bridge"][in_neuron] #update the bias self.network[2][output_neuron]["biais"] += learning_rate * self.network[2][output_neuron]["delta"] self.used_neuron_feedforward[output_neuron] = True def predict(self,inputs): outputs = self.feed_forward(inputs) return outputs
Advanced-Neural-Network
/Advanced_Neural_Network-1.0.2-py3-none-any.whl/ANN.py
ANN.py
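The `function` activation helper above computes both an activation and its derivative from the pre-activation `z`; for the sigmoid it uses the identity σ'(z) = σ(z)·(1 − σ(z)). A standalone sketch of just that branch (note, as a caveat, that the class's `backward` passes the stored output `y` back into `function(..., prime=True)`, so the derivative is effectively evaluated at the output rather than at the pre-activation):

```python
import math

def sigmoid(z):
    """Logistic activation: squashes any real z into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def sigmoid_prime(z):
    """Derivative of the sigmoid, expressed through the function itself."""
    s = sigmoid(z)
    return s * (1.0 - s)

print(sigmoid(0.0))        # -> 0.5
print(sigmoid_prime(0.0))  # -> 0.25
```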
# ============================================================================= # Libraries # ============================================================================= import pandas as pd import numpy as np from sklearn.model_selection import train_test_split from sklearn.preprocessing import LabelEncoder from sklearn.metrics import roc_auc_score as auc from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score from sklearn import tree from sklearn.linear_model import LogisticRegression from sklearn.linear_model import LogisticRegressionCV from regressors import stats # ============================================================================= # Constants Errors # ============================================================================= ERR_ABT = 'Your ABT is not a Pandas DataFrame.\n \ (REMEMBER about na_values in Your read method)' ERR_TARGET_NAME = "There is not a target name in DataFrame" ERR_TARGET_INT = "Your target column is not int or you have NaN values. 
\n \ Please use: df = df.dropna(subset=[target_name]) \n \ df[target_name] = df[target_name].astype(np.int64)" ERR_TARGET_NUMBER = "You should have just two elements (0 and 1) in target column" RM_ONE_VALUE = 'one value in column' ERR_SCORECARD = "generate scorecard by fit() method" BINS_ERR = 'Number of bins must be greater then 1' PRESELECTION_ERR = 'NO FEATURES AFTER PRESELECTION' SELECTION_ERR = 'NO FEATURES AFTER SELECTION' ERR_GINI_MODEL = "You don't have model yet" ERR_TEST_SIZE = "Test size should be between 0-1" # ============================================================================= # selection parameters # ============================================================================= NUM_OF_FEATURES = 8 # how many features you get from selection method SELECT_METHOD = LogisticRegression() # model for features selection TEST_SIZE = 0.3 # size of test sample, train = 1-TEST_SIZE RANDOM_STATE = 1234 # Random seed N_BINS = 4 # max number of categories MIN_WEIGHT_FRACTION_LEAF = 0.05 # min percent freq in each category MIN_GINI = 0.05 # min gini value for preselection DELTA_GINI = 0.2 # max AR Diff value # ============================================================================= # ASB free ver 1.0.1 # ============================================================================= class AdvancedScorecardBuilder(object): """ Advanced ScoreCard Builder Free Version Parameters ---------- df: DataFrame, abt dataset. 
With one target column target_name: string, target column name dates: string or list, optional (default=None) A date type columns list Return ------ data: Pandas DataFrame, shape=[n_sample,n_features] data without target target_name: string, target column name target: 1d NumPy array _removed_features: dict with removed features from data - one value if dates is not None _date_names: str, list of date column names _date_columns: Numpy array with date type column Examples -------- >>> import pandas as pd >>> from AmaFree import AdvancedScorecardBuilder as asb >>> from sklearn import datasets >>> X,y = datasets.make_classification(n_samples=10**4, n_features=15, random_state=123) >>> names = [] >>> for el in range(15): >>> names.append("zm"+str(el+1)) >>> df = pd.DataFrame(X, columns=names) >>> df['target'] = y >>> foo = asb(df,'target') >>> foo.fit() >>> foo.get_scorecard() Default parameters for fit() method ----------------------------------- test size = 0.3 , train size = 0.7 Maximal number of bins = 4 Weight Fraction of category = 0.05 (5%) of all data Number of features for model = 8 minimal accepted gini = 0.05 (5%) for one feature maximal accepted delta gini = 0.2 After fit() method You have ---------------------------- train_ - training set with target test_ - test set with target labels_ - dict with all bin labels stats_ - statistics for all features (train) stats_test_ - preselected_features _rejected_low_gini _rejected_AR_gini selected_features model_info_ self._scorecard_ """ def __init__(self, df, target_name, dates=None): """init method load abt data as DataFrame and choose column name for target. All columns with date put as list in dates parameter. 
""" # check is df a dataFrame if not isinstance(df, (pd.DataFrame)): raise Exception(ERR_ABT) self.DESCR = self.__doc__ # check if target is ok if self.__check_target(df, target_name): self.target_name = target_name # string self.target = df[target_name].values # Numpy array # remove one unique value columns self._removed_features = {} self._log = '' df = df.drop(self.__get_one_value_features(df), axis = 1) # date type data if dates: self._date_names = dates self._date_columns = pd.to_datetime(df[dates], format = '%Y%m') df = df.drop(dates, axis=1) # categorical features for analysis obj_df = df.select_dtypes(include = ['object']).copy() c_list = list(obj_df) if len(c_list) > 0: self._category_df = {} for feature in c_list: le = LabelEncoder() try: le.fit(df[feature]) self._category_df[feature] = le # code for future categorical data analysis except: raise Exception("Your categorical data have NaN") # remove target from data self.__data_ = df # data with target column self.data = df.drop(self.target_name, axis = 1) # features without target, dates and one value columns self.feature_names = list(self.data) def __check_target(self, df, target): ''' init HELPER method target checker: verify if target name is in df, verify if target column is int, verify if there are two unique values in column, and if this two values are 1 and 0. 
''' if target not in list(df): # verify target name in df raise Exception(ERR_TARGET_NAME) if not df[target].dtype in ['int64', 'int32']: # verify if target column is int raise Exception(ERR_TARGET_INT) if not len(df[target].unique()) == 2: # and is there 2 unique values in column raise Exception(ERR_TARGET_NUMBER) # if this two values are 1 and 0 if not any(df[target] == 0) & any(df[target] == 1): raise Exception(ERR_TARGET_NUMBER) return True def __get_one_value_features(self, df): ''' init HELPER method Take list with bad (one value) features ''' rm_list = [] for feature in list(df): if not len(df[feature].unique()) > 1: print('feature {} removed - one value in column'.format(feature)) self._log += 'feature '+str(feature)+ ' removed - one value in column \n' rm_list.append(feature) self._removed_features[feature] = RM_ONE_VALUE return rm_list def __str__(self): ''' Just for FUN ''' return '<AMA Institute | Free ASB>' def fit(self, test_size=TEST_SIZE, min_freq_of_category=MIN_WEIGHT_FRACTION_LEAF, n_category=N_BINS, min_gini= MIN_GINI, delta_gini=DELTA_GINI, n_features=NUM_OF_FEATURES): """ Made score card and model for data Parameters ---------- test_size: float, optional (default=0.3) Should be between 0.0 and 1.0 represent the proportion of the dataset to include in the test split. min_freq_of_category: float, optional (default = 0.05) percent of data in each category. n_category: int, maximum of category number in binning. min_gini: float, (default = 0.05) gini minimum value for preselection. delta_gini: float, (default = 0.2) AR value for camparision train and test statistics. n_features: int, maximum number of selected features Return ------ self.train_ - training set with target self.test_ - test set with target self.labels_ - dict with all bin labels self.stats_ - self.stats_test_ self.preselected_features self._rejected_low_gini self._rejected_AR_gini self.selected_features self.model_info_ self._scorecard_ """ # 1. 
NaN < 0.05: remove self._log += 'NaN analysis \n' if self.__data_.isnull().any().any(): self.__data_ = self.__remove_empty_nan(self.__data_, min_freq_nan=min_freq_of_category) # 2. split data self._log += 'Splitting data \n' self.train_, self.test_ = self.__split_frame(test_size=test_size) # 3. binning features self._log += 'binning features\n' trainBin_, testBin_, self.labels_ = self.__sup_bin_features(self.train_, self.test_, self.feature_names, n_category) self._log += 'NaN analysis\n' self.trainBin_, self.testBin_ = self.__fillnan(trainBin_,testBin_,min_freq_nan=min_freq_of_category) # stats self._log += 'Get features stats for training set\n' self.stats_ = self.__stats(self.trainBin_,list(self.trainBin_)) self._log += 'Get features stats for test set\n' self.stats_test_ = self.__stats(self.testBin_,list(self.testBin_)) # 4. preselection self._log += 'Preselection\n' self.preselected_features, self._rejected_low_gini, self._rejected_AR_gini = self.__preselection(list(self.trainBin_),min_gini,delta_gini) self._log += 'low gini: '+" ".join(str(x) for x in self._rejected_low_gini)+'\n' self._log += 'Delta AR gini: '+" ".join(str(x) for x in self._rejected_AR_gini)+'\n' if not len(self.preselected_features): self._log += PRESELECTION_ERR raise Exception(PRESELECTION_ERR) trainLogit = self.__logit_value(self.trainBin_[self.preselected_features],self.stats_) testLogit = self.__logit_value(self.testBin_[self.preselected_features],self.stats_test_) # 5. select features from logit train data self.selected_features = self.__selection(trainLogit, n=n_features) # list if not len(self.selected_features): self._log += SELECTION_ERR raise Exception(SELECTION_ERR) self.leader, self.model_info_ = self.__get_model_info( trainLogit[self.selected_features], self.train_[self.target_name],testLogit[self.selected_features],self.test_[self.target_name]) self._scorecard_ = pd.DataFrame.from_dict(self.__scorecard_dict( self.selected_features, self.stats_, self.model_info_['coef'], 
self.model_info_['intercept'])) def __split_frame(self, test_size): """Sampling data by random method We use the sklearn train_test_split method Parameters ---------- test_size: float, optional (default=0.3) Should be between 0.0 and 1.0 and represents the proportion of the dataset to include in the test split. Return ---------- train_: DataFrame, shape = [(1-test_size)*n_samples, n_features with target] test_: DataFrame, shape = [test_size*n_samples, n_features with target] """ if test_size <= 0 or test_size >= 1: raise Exception(ERR_TEST_SIZE) tr, te = train_test_split(self.__data_, random_state = RANDOM_STATE, test_size=test_size) return tr.reset_index(drop=True), te.reset_index(drop=True) def __remove_empty_nan(self,df,min_freq_nan): df_new = df.copy() nAll = df_new.shape[0] for feature in list(df_new): if df_new[feature].isnull().any(): nNan = df_new[feature].isnull().sum() if nNan/nAll < min_freq_nan: print("You have less than {} empty values in {}. I change them by mean value".format(min_freq_nan,feature)) self._log += 'You have less than '+ str(min_freq_nan)+ ' empty values in '+feature+'. 
I change them by mean value\n' df_new[feature] = df_new[feature].fillna(df_new[feature].mean()) else: self._log += 'in {} you have more then {} NaNs \n'.format(feature,min_freq_nan) return df_new def __is_numeric(self, df, name): """helper method verify is type of column is int or float Parameters ---------- df: DataFrame name: string, column name to check Return ------ True if column is float or int or False if not """ if df[name].dtype in ['int64', 'int32']: return True if df[name].dtype in ['float64', 'float32']: return True return False def __binn_continous_feature(self,df,feature,max_leaf_nodes, min_weight_fraction_leaf=MIN_WEIGHT_FRACTION_LEAF, random_state=RANDOM_STATE): """supervised binning of continue feature by tree Parameters ---------- df: DataFrame, feature: string, analysing feature name max_leaf_nodes: parameter of tree min_weight_fraction_leaf: parameter of tree random_state: parameter of tree Return ------ labs: dict, description """ # new DataFrame with result df_cat = pd.DataFrame() # cut all data to two col DataFrame df_two_col = df[[self.target_name, feature]].copy() # drop nan values (because tree) df_two_col = df_two_col.dropna(axis=0).reset_index(drop=True) # binns list with [min,max] bins = [-np.inf, np.inf] # get Tree classifier - check if we need another parameters !! 
clf = tree.DecisionTreeClassifier( max_leaf_nodes=max_leaf_nodes, min_weight_fraction_leaf=min_weight_fraction_leaf, random_state=random_state) # fit tree clf.fit(df_two_col[feature].values.reshape(-1, 1), df_two_col[self.target_name]) # get tresholds and remove empty thresh = [round(s, 3) for s in clf.tree_.threshold if s != -2] # add tresholds to binns bins = bins + thresh return sorted(bins) @staticmethod def __cut_bin(data,bins): """helper method """ return pd.cut(data,bins=bins, labels=False, retbins=True, include_lowest=True) def __sup_bin_features(self, df,testdf,features,n_bins): """binn method binning of numerical variables by tree algorithm Parameters ---------- df: train DataFrame with data for binning testdf: test DataFrame with data for binning features: features list n_bins: binns number Return ------ Binned train set, test set and labels of bins """ df_c = pd.DataFrame() # category frame with labels int df_test = pd.DataFrame() # df after bin before checking # remove_list = [] labs = {} # binns lists df_copy = df.copy() # copy of data df_test_copy = testdf.copy() # copy test data # run loop for every feature in data - all should be numeric for feature in features: # check is type is numeric if self.__is_numeric(df_copy, feature): labs[feature] = self.__binn_continous_feature(df_copy, feature,n_bins) if len(labs[feature])>2: # cuts with int labels df_c[feature], _ = self.__cut_bin(df_copy[feature],labs[feature]) df_test[feature], _ = self.__cut_bin(df_test_copy[feature],labs[feature]) elif len(df_copy[feature].unique())==2: df_c[feature] = df_copy[feature] df_test[feature] = df_test_copy[feature] else: print('I removed {} - no binns'.format(feature)) self._log += 'I removed '+ str(feature)+' - no binns \n' self._removed_features[feature] = 'no binns' else: # if is category type data or something else df_c[feature] = df_copy[feature] df_test[feature] = df_test_copy[feature] # raise Exception('You still have non numerical data') # remember add target to 
the train and test data df_c[self.target_name] = df[self.target_name] df_test[self.target_name] = testdf[self.target_name] return df_c, df_test, labs def __fillnan(self,df, df_2,min_freq_nan): ''' change all nan as last numerical category ''' if not (df.isnull().any().any() and df_2.isnull().any().any()): return df, df_2 for feature in df.columns[df.isnull().any()].tolist(): self._log += 'change NaN values for binned feature '+str(feature) +'\n' if df[feature].isnull().sum()/df[feature].shape[0] > min_freq_nan: self._log += 'more then '+str(min_freq_nan)+' NaN goes to the NEW category \n' df[feature]=df[feature].fillna(df[feature].max()+1) df_2[feature]=df_2[feature].fillna(df_2[feature].max()+1) else: self._log += 'less then '+str(min_freq_nan)+' NaN goes to the FIRST category \n' df[feature]=df[feature].fillna(0) df_2[feature]=df_2[feature].fillna(0) for feature in df_2.columns[df_2.isnull().any()].tolist(): self._log += 'change NaN values for binned feature '+str(feature) +'\n' if df_2[feature].isnull().sum()/df_2[feature].shape[0] > min_freq_nan: self._log += 'more then '+str(min_freq_nan)+' NaN goes to the NEW category \n' df[feature]=df[feature].fillna(df[feature].max()+1) df_2[feature]=df_2[feature].fillna(df_2[feature].max()+1) else: self._log += 'less then '+str(min_freq_nan)+' NaN goes to the FIRST category \n' df[feature]=df[feature].fillna(0) df_2[feature]=df_2[feature].fillna(0) return df, df_2 def __dictStats(self,df,feature): ''' Generate dict with target values for feature ''' slownik = {} elements = list(df[feature].unique()) for el in elements: slownik[el] = dict(df[df[feature] == el][self.target_name].value_counts()) if 0 not in slownik[el]: slownik[el][0] = 0.00000000000000000001 if 1 not in slownik[el]: slownik[el][1] = 0.00000000000000000001 return slownik def __df_stat(self,fd,td,feature): ''' Generate DataFrame for feature stats ''' result = pd.DataFrame.from_dict(fd, orient='index') #population pop_all = result[0]+result[1] 
result['Population'] = pop_all result['Percent of population [%]'] = round((pop_all)/ td['length'] , 3)*100 result['Good rate [%]'] = round(result[0] / pop_all,3)*100 result['Bad rate [%]'] = round(result[1] / pop_all,3)*100 p_goods_to_all_goods = result[0] / td[0] p_bads_to_all_bads = result[1]/td[1] result['Percent of goods [%]'] = round(p_goods_to_all_goods,3)*100 result['Percent of bad [%]'] = round(p_bads_to_all_bads,3)*100 result['var'] = feature result['logit'] = np.log(result[1] / result[0]) result['WoE'] = np.log((p_goods_to_all_goods)/(p_bads_to_all_bads)) result['IV'] =((p_goods_to_all_goods)-(p_bads_to_all_bads))*result['WoE'] if hasattr(self, '_category_df'): if feature in self._category_df: result['label'] = str(feature)+' = '+result.index else: result['label'] = self.__category_names(result,self.labels_[feature],feature) else: result['label'] = self.__category_names(result,self.labels_[feature],feature) result = result.rename(columns = { 1:'n_bad', 0:'n_good'}) result['bin_label'] = result.index return result def __category_names(self, df, bins, name): '''helper method for getting bins labels as a string''' result = [] string = "" if len(df)==2 and len(bins)==2: return [str(name)+"=0",str(name)+"=1"] if len(df)==2 and len(bins)==3 and bins[1]==0.5: return [str(name)+"=0",str(name)+"=1"] if not len(df) == len(bins): string = "(not missing) and " for ix, el in enumerate(bins): if ix == 0: continue if ix == 1: string += str(name) + ' <= ' + str(el) result.append(string) string = str(el) + ' < ' + str(name) if ix < len(bins)-1 and ix > 1: string += ' <= ' + str(el) result.append(string) string = str(el) + ' < ' + str(name) if ix == len(bins)-1: result.append(str(bins[ix-1]) + ' < ' + str(name)) if len(df) == len(bins): result.append('missing') return result def __stats(self,df,features): '''Generate stats for all features ''' statsDict = {} logit_dict = {} if self.target_name in features: features.remove(self.target_name) target_dict = 
df[self.target_name].value_counts() if 1 not in target_dict.keys(): target_dict[1] = 0.00000000000000000001 if 0 not in target_dict.keys(): target_dict[0] = 0.00000000000000000001 target_dict['length'] = df.shape[0] for feature in features: # take 0 and 1 for each category in feature and then compute more info statsDict[feature] = self.__df_stat(self.__dictStats(df, feature), target_dict,feature) logit_dict = statsDict[feature]['logit'].to_dict() statsDict[feature]['Gini'] = np.absolute(2 * auc(df[self.target_name],self.__change_dict(df,feature,logit_dict)) - 1) return statsDict def __compute_gini(self, logit_c,df,feature): """ method for computing gini index """ ld = logit_c.to_dict() return np.absolute(2 * auc(df[self.target_name],self.__change_dict(df,feature,ld)) - 1) def __preselection(self, features, min_gini=MIN_GINI, delta_gini=DELTA_GINI): """Preselection of features by gini and AR_DIFF value """ # 1. gini na kolumnie - drop < mini_gini results = [] rejected_low_gini = [] rejected_delta_gini = [] if self.target_name in features: features.remove(self.target_name) for feature in features: print(feature) gini = self.__compute_gini(self.stats_[feature]['logit'],self.trainBin_, feature) if gini > min_gini: # 2. 
relative difference between test and train: |g_train - g_test|/g_train > delta_gini gini_test = self.__compute_gini(self.stats_[feature]['logit'],self.testBin_,feature) AR_Diff = self.__AR_value(gini,gini_test) if AR_Diff < delta_gini: results.append(feature) else: rejected_delta_gini.append([feature, AR_Diff]) else: rejected_low_gini.append( [feature, gini]) return results, rejected_low_gini, rejected_delta_gini def __AR_value(self,train,test): '''compute AR diff value ''' return np.absolute((train-test))/train def __logit_value(self,df, stats): '''helper method replace all column values with the corresponding logit value ''' logit = df.copy() if self.target_name in list(logit): logit = logit.drop(self.target_name, axis=1) for el in logit: logit[el] = self.__change_dict(logit,el,stats[el]['logit'].to_dict()) return logit def __change_dict(self,df,feature,dict_): '''helper method map all elements for feature column''' return df[feature].map(dict_) def __selection(self, df, selector=SELECT_METHOD, n=NUM_OF_FEATURES): """Method for selecting best n features with chosen estimator (logistic regression as default)""" return self.__choose_n_best(list(df), self.__ranking_features(df, selector), n) def __ranking_features(self, df, selector): '''feature ranking from RFE selection ''' from sklearn.feature_selection import RFE rfe = RFE(estimator=selector, n_features_to_select=1, step=1) rfe.fit(df, self.train_[self.target_name]) return rfe.ranking_ def __choose_n_best(self, features, ranking, n): '''choose n best features from ranking list''' result = list(ranking <= n) selected_features = [features[i] for i, val in enumerate(result) if val == 1] return selected_features def __get_model_info(self, X, y, X_test,y_test): lre = LogisticRegressionCV() features = list(X) result = {"coef": {}, "p_value":{}, "features": features, 'model': str(lre).split("(")[0], 'gini': 0, 'acc': 0, 'Precision':0, 'Recall':0, 'F1':0} lre.fit(X, y) pred = lre.predict(X_test) p_va = 
stats.coef_pval(lre,X,y) result['acc'] = accuracy_score(y_test,pred) # accuracy classification score result['Precision']=precision_score(y_test,pred) # precision tp/(tp+fp) PPV result['Recall'] = recall_score(y_test,pred) # Recall tp/(tp+fn) NPV result['F1'] = f1_score(y_test,pred) # balanced F-score weighted harmonic mean of the precision and recall for ix, el in enumerate(lre.coef_[0]): result['coef'][features[ix]] = el result['p_value'][features[ix]] = p_va[ix] result['intercept'] = lre.intercept_ partial_score = np.asarray(X) * lre.coef_ for_gini_score = [sum(i) for i in partial_score] + lre.intercept_ result['gini'] = np.absolute(2 * auc(y, for_gini_score) - 1) return lre, result def __scorecard_dict(self,features, stats, coef, inter): '''scorecard dictionary with score points for all categories with beta coefficients > 0. Parameters ---------- features: list, list of all modeled features stats: dict, dictionary with statistics coef: dict, dictionary with all models coefficient inter: list, list with model intercept value. 
Return ------ scorecarf: dict ''' alpha = inter[0] factor = 20/np.log(2) score_dict = {"variable": [], "label": [], "logit": [], 'score': []} stats_copy = {} v = len(features) alp = 0 # del all features with negative beta coefficient for el in features: if coef[el]<0: features.remove(el) self._scored_features_ = features for el in features: stats_copy[el] = stats[el].sort_values( by='logit', ascending=False).reset_index(drop=True) alp += coef[el]*stats_copy[el]["logit"][0]*factor alp = -alp+300 for el in features: f_beta = coef[el]*stats_copy[el]["logit"][0] for ix, ele in enumerate(stats_copy[el]['var']): score_dict["variable"].append(ele) score_dict["label"].append(stats_copy[el]['label'][ix]) score_dict["logit"].append(stats_copy[el]["logit"][ix]) a1 = -(coef[el]*stats_copy[el]["logit"][ix]-f_beta + alpha/v)*factor a2 = alp/v score_dict["score"].append(int(round(a1+a2))) if score_dict["score"][0]<0: score_dict['score']+=np.absolute(score_dict['score'][0])+1 return score_dict def test_gini(self): """Compute gini for test dataset""" df_bin_test = self.testBin_[self._scored_features_] df = pd.DataFrame() # change bin value to score value for feature in list(df_bin_test): # get score values for beans df_a = self.show_stats(feature,bin_lab=True).sort_values(by="bin_label").reset_index(drop=True) score_dict = df_a.to_dict()['score'] df[feature] = self.__change_dict(df_bin_test,feature,score_dict) df['total_score'] = (df.sum(axis=1)).astype('int') X = df['total_score'].values.reshape(-1, 1) y = self.testBin_[self.target_name] gini = np.absolute(2*auc(y,X)-1) return gini def __summary_features(self, features): summary= "" for feature in features: table = self.show_stats(feature).to_html()\ .replace('<table border="1" class="dataframe">','<table class="table table-striped">') element = '<h3>'+str(feature)+'</h3>'+table summary += element return summary def __summary_report(self): all_train = self.train_.shape[0] all_test = self.test_.shape[0] train_good 
=self.train_[self.target_name].value_counts()[0] train_bad =self.train_[self.target_name].value_counts()[1] test_good =self.test_[self.target_name].value_counts()[0] test_bad =self.test_[self.target_name].value_counts()[1] info = {'n observed':[all_train,all_test], 'n good':[train_good,test_good], 'n bad':[train_bad, test_bad], 'Percent of good [%]':[round(train_good/all_train,3)*100, round(test_good/all_test,3)*100], 'Percent of bad [%]':[round(train_bad/all_train,3)*100, round(test_bad/all_test,3)*100] } df = pd.DataFrame(info, index=['Training','Test']) summary = df[['n observed','n good','n bad', 'Percent of good [%]', 'Percent of bad [%]']].to_html()\ .replace('<table border="1" class="dataframe">','<table class="table table-striped">') return summary def __model_report(self): summary= self.get_scorecard().to_html()\ .replace('<table border="1" class="dataframe">','<table class="table table-striped">') return summary def __gini_report(self): info = {'Gini train':self.gini_model(),'Gini test':self.test_gini()} df = pd.DataFrame(info, index = [0]) summary = df.to_html()\ .replace('<table border="1" class="dataframe">','<table class="table table-striped">') return summary def html_report(self, name='report.html', features=None): if not features: features = self._scored_features_ html_page = '''<!DOCTYPE html> <html> <head> <meta charset="utf-8"> <meta http-equiv="X-UA-Compatible" content="IE=edge"> <meta name="viewport" content="width=device-width, initial-scale=1"> <meta name="description" content="Advanced Scorecard Builder Report"> <meta name="author" content="Sebastian Zajฤ…c"> <title>Report</title> <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css"> <style> h2{text-align:center}</style> <!-- HTML5 shim and Respond.js for IE8 support of HTML5 elements and media queries --> <!--[if lt IE 9]> <script src="https://oss.maxcdn.com/html5shiv/3.7.3/html5shiv.min.js"></script> <script 
src="https://oss.maxcdn.com/respond/1.4.2/respond.min.js"></script> <![endif]--> </head> <body> <div class="jumbotron"><div class="container"> <div class="text-center"> <img src="https://amainstitute.pl/wp-content/uploads/2018/06/ama_logo.png" alt='AMA Instistute Logo'> </div> <h2>Advanced Scorecard Builder</h2> <h2>Report</h2> </div></div> <div class="container"> <h2>Data Summary</h2><div class="summary table-responsive">'''+self.__summary_report() +'''</div> </div> <div class="container"> <h2>Features report</h2><div class="features table-responsive">''' + self.__summary_features(features) + '''</div> </div><div class="container"> <h2>Scorecard</h2><div class="score table-responsive">'''+self.__model_report()+'''</div> <h2>Gini</h2><div class="gini table-responsive">'''+self.__gini_report() + '''</div> <footer> <p>&copy; 2018 AMA Institute. Advanced Scorecard Builder Free Version</p> </footer> </div> <!-- Bootstrap core JavaScript ================================================== --> <script src="https://ajax.googleapis.com/ajax/libs/jquery/1.12.4/jquery.min.js"></script> <script>$(".summary thead th:first-child").text("Set"); $(".features th:first-child, .score th:first-child, .gini th:first-child").remove()</script> </body> </html>''' f = open(name,'w') f.write(html_page) f.close() def get_scorecard(self): if hasattr(self, '_scorecard_'): lista = ['label', 'variable', 'score'] return self._scorecard_[lista] raise Exception(ERR_SCORECARD) def gini_model(self): if hasattr(self, 'model_info_'): return self.model_info_['gini'] raise Exception(ERR_GINI_MODEL) def show_stats(self,name,set_name='train', bin_lab=None): """ Interface method for feature statistics view as DataFrame Parameters ---------- name: string, feature names set_name: string, (default=train) choose train/test Return ---------- result: DataFrame """ lista = ['label','Bad rate [%]','Percent of population [%]','n_good','n_bad',"IV"] if bin_lab: lista.append('bin_label') if set_name == 'train': r_part 
= self.stats_[name].sort_values( by='logit', ascending=False).reset_index(drop=True) elif set_name == 'test': r_part = self.stats_test_[name].sort_values( by='logit', ascending=False).reset_index(drop=True) else: raise Exception('Choose train or test') if name in self.model_info_['features'] and self.model_info_['coef'][name]>0: lista = ['score','label','Bad rate [%]','Percent of population [%]','n_good','n_bad',"IV"] if bin_lab: lista.append('bin_label') s_part = self._scorecard_[self._scorecard_['variable']==name][['score']].reset_index(drop=True) result = pd.concat([r_part,s_part],axis=1,join='inner') else: result = r_part return result[lista]
Advanced-scorecard-builder
/advanced_scorecard_builder-1.0.2-py3-none-any.whl/AmaFree/asb.py
asb.py
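The supervised binning step in the module above (`__binn_continous_feature`) derives bin edges for a continuous feature from the internal thresholds of a fitted `DecisionTreeClassifier`. A minimal standalone sketch of that idea — the function name `bin_feature` and the toy data are illustrative, not part of the package:

```python
import numpy as np
from sklearn import tree

def bin_feature(values, target, max_leaf_nodes=5):
    """Return bin edges [-inf, t1, ..., tk, inf] learned by a decision tree."""
    clf = tree.DecisionTreeClassifier(max_leaf_nodes=max_leaf_nodes, random_state=0)
    clf.fit(values.reshape(-1, 1), target)
    # leaves are marked with threshold -2 in sklearn's tree structure; keep real splits
    thresholds = sorted(round(t, 3) for t in clf.tree_.threshold if t != -2)
    return [-np.inf] + thresholds + [np.inf]

values = np.array([0.0, 0.1, 0.2, 0.9, 1.0, 1.1])
target = np.array([0, 0, 0, 1, 1, 1])
bins = bin_feature(values, target, max_leaf_nodes=2)  # one split near 0.55
```

The resulting edges can then be fed to `pd.cut`, as the module's `__cut_bin` helper does.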
AdvancedAnalytics
===================

A collection of python modules, classes and methods for simplifying the use of machine learning solutions. **AdvancedAnalytics** provides easy access to advanced tools in **Sci-Learn**, **NLTK** and other machine learning packages. **AdvancedAnalytics** was developed to simplify learning python from the book *The Art and Science of Data Analytics*.

Description
===========

From a high level view, building machine learning applications typically proceeds through three stages:

1. Data Preprocessing
2. Modeling or Analytics
3. Postprocessing

The classes and methods in **AdvancedAnalytics** primarily support the first and last stages of machine learning applications. Data scientists report they spend 80% of their total effort in these first and last stages.

The first stage, *data preprocessing*, is concerned with preparing the data for analysis. This includes:

1. identifying and correcting outliers,
2. imputing missing values, and
3. encoding data.

The last stage, *solution postprocessing*, involves developing graphic summaries of the solution, and metrics for evaluating the quality of the solution.

Documentation and Examples
============================

The API and documentation for all classes and examples are available at https://github.com/tandonneur/AdvancedAnalytics/.

Usage
=====

Currently the most popular usage is for supporting solutions developed using these advanced machine learning packages:

* Sci-Learn
* StatsModels
* NLTK

The intention is to expand this list to other packages.

This is a simple example for linear regression that uses the data map structure to preprocess data:

.. code-block:: python

    from AdvancedAnalytics.ReplaceImputeEncode import DT
    from AdvancedAnalytics.ReplaceImputeEncode import ReplaceImputeEncode
    from AdvancedAnalytics.Tree import tree_regressor
    from sklearn.tree import DecisionTreeRegressor, export_graphviz

    # Data Map Using DT, Data Types
    data_map = {
        "Salary":         [DT.Interval, (20000.0, 2000000.0)],
        "Department":     [DT.Nominal,  ("HR", "Sales", "Marketing")],
        "Classification": [DT.Nominal,  (1, 2, 3, 4, 5)],
        "Years":          [DT.Interval, (18, 60)]}

    # Preprocess data from data frame df
    rie = ReplaceImputeEncode(data_map=data_map, interval_scaling=None,
                              nominal_encoding="SAS", drop=True)
    encoded_df = rie.fit_transform(df)

    y = encoded_df["Salary"]
    X = encoded_df.drop("Salary", axis=1)
    dt = DecisionTreeRegressor(criterion="squared_error", max_depth=4,
                               min_samples_split=5, min_samples_leaf=5)
    dt = dt.fit(X, y)
    tree_regressor.display_importance(dt, encoded_df.columns)
    tree_regressor.display_metrics(dt, X, y)

Current Modules and Classes
=============================

ReplaceImputeEncode
    Classes for Data Preprocessing

    * DT defines new data types used in the data dictionary
    * ReplaceImputeEncode a class for data preprocessing

Regression
    Classes for Linear and Logistic Regression

    * linreg support for linear regression
    * logreg support for logistic regression
    * stepwise a variable selection class

Tree
    Classes for Decision Tree Solutions

    * tree_regressor support for regressor decision trees
    * tree_classifier support for classification decision trees

Forest
    Classes for Random Forests

    * forest_regressor support for regressor random forests
    * forest_classifier support for classification random forests

NeuralNetwork
    Classes for Neural Networks

    * nn_regressor support for regressor neural networks
    * nn_classifier support for classification neural networks

Text
    Classes for Text Analytics

    * text_analysis support for topic analysis
    * text_plot for word clouds
    * sentiment_analysis support for sentiment analysis

Internet
    Classes for Internet Applications

    * scrape support for web scraping
    * metrics a class for solution metrics

Installation and Dependencies
=============================

**AdvancedAnalytics** is designed to work on any operating system running python 3. It can be installed using **pip** or **conda**.

.. code-block:: python

    pip install AdvancedAnalytics
    # or
    conda install -c dr.jones AdvancedAnalytics

General Dependencies
    Most classes import one or more modules from **Sci-Learn**, referenced as *sklearn* in module imports, and **StatsModels**. These are both installed with the current version of **anaconda**.

Installed with AdvancedAnalytics
    Most packages used by **AdvancedAnalytics** are automatically installed with its installation. These consist of the following packages.

    * statsmodels
    * scikit-learn
    * scikit-image
    * nltk
    * pydotplus

Other Dependencies
    The *Tree* and *Forest* modules plot decision trees and importance metrics using **pydotplus** and the **graphviz** packages. These should also be automatically installed with **AdvancedAnalytics**. However, the **graphviz** install is sometimes not fully complete with the conda install. It may require an additional pip install.

    .. code-block:: python

        pip install graphviz

Text Analytics Dependencies
    The *TextAnalytics* module uses the **NLTK**, **Sci-Learn**, and **wordcloud** packages. Usually these are also installed automatically with **AdvancedAnalytics**. You can verify they are installed using the following commands.

    .. code-block:: python

        conda list nltk
        conda list sci-learn
        conda list wordcloud

    However, when the **NLTK** package is installed, it does not install the data used by the package. In order to load the **NLTK** data, run the following code once before using the *TextAnalytics* module.

    .. code-block:: python

        # The following NLTK commands should be run once
        nltk.download("punkt")
        nltk.download("averaged_perceptron_tagger")
        nltk.download("stopwords")
        nltk.download("wordnet")

    The **wordcloud** package also uses a little-known package, **tinysegmenter** version 0.3. Run the following code to ensure it is installed.

    .. code-block:: python

        conda install -c conda-forge tinysegmenter==0.3
        # or
        pip install tinysegmenter==0.3

Internet Dependencies
    The *Internet* module contains a class *scrape* which has some functions for scraping newsfeeds. Some of these use the **newspaper3k** package. It should be automatically installed with **AdvancedAnalytics**. However, it also uses the package **newsapi-python**, which is not automatically installed. If you intend to use this news scraping tool, it is necessary to install the package using the following code:

    .. code-block:: python

        conda install -c conda-forge newsapi-python
        # or
        pip install newsapi-python

    In addition, the newsapi service is sponsored by a commercial company, www.newsapi.com. You will need to register with them to obtain an *API* key required to access this service. This is free of charge for developers, but there is a fee if *newsapi* is used to broadcast news with an application or at a website.

Code of Conduct
---------------

Everyone interacting in the AdvancedAnalytics project's codebases, issue trackers, chat rooms, and mailing lists is expected to follow the PyPA Code of Conduct: https://www.pypa.io/en/latest/code-of-conduct/ .
AdvancedAnalytics
/AdvancedAnalytics-1.39.tar.gz/AdvancedAnalytics-1.39/README.rst
README.rst
from cryptography.fernet import Fernet
from datetime import datetime
import random
import string


def passwordToken(MinLength=100, MaxLength=120):
    #---- Generates a random token that is stored and will be used to encrypt user data ----
    passwordToken = ''.join(random.choice(string.ascii_lowercase + string.digits + string.ascii_uppercase + string.punctuation) for _ in range(random.randint(MinLength, MaxLength)))
    RandomTextPoint = random.randrange(len(passwordToken))
    RandomInputToken, RandomInputKey = encryption(''.join(random.choice(string.ascii_lowercase + string.digits + string.ascii_uppercase + string.punctuation) for _ in range(random.randint(100, 120))))
    randominputprivateKey = RandomInputKey.decode("UTF-8") + RandomInputToken.decode("UTF-8")
    text = passwordToken[:RandomTextPoint] + randominputprivateKey + passwordToken[RandomTextPoint:]
    privateToken, privateKey = encryption(text)
    return privateKey.decode("UTF-8") + ":" + privateToken.decode("UTF-8")


def generateSessionToken(username, MinLength=100, MaxLength=120):
    #---- Generates a random session token (between MinLength and MaxLength characters long) ----
    sessionToken = ''.join(random.choice(string.ascii_lowercase + string.digits + string.ascii_uppercase) for _ in range(random.randint(MinLength, MaxLength)))
    return encryption(username + ":" + sessionToken)


def dataEncrpytion(text, MinLength=100, MaxLength=120):
    #---- Generate random filler text alongside the real text, to disguise what is shown on servers and in saved files ----
    RandomText = ''.join(random.choice(text + string.ascii_lowercase + string.digits + string.ascii_uppercase + string.punctuation) for _ in range(random.randint(MinLength, MaxLength)))
    #---- Pick random offsets within the bounds of both strings (basically shoves text in a random location) ----
    TextPoint = random.randrange(len(text))
    RandomTextPoint = random.randrange(len(RandomText))
    RandomToken = passwordToken(MinLength, MaxLength)
    #---- Combine all the random points and text together to store this password ----
    text = text[:TextPoint] + RandomText[:RandomTextPoint] + RandomToken + RandomText[RandomTextPoint:] + text[TextPoint:]
    TextToken, TextKey = encryption(text)
    timestamp = datetime.utcfromtimestamp(Fernet(str.encode(RandomToken.split(":")[0])).extract_timestamp(str.encode(RandomToken.split(":")[1]))).strftime(''.join(random.choice(['%d', '%H', '%d', '%M', '%d', '%S']) for _ in range(int(MinLength / 4), int(MaxLength / 2))))
    RandomTextToken, RandomTextKey = encryption(RandomText)
    return (TextKey.decode("utf-8") + ":" + timestamp[0:random.randint(1, len(timestamp))] + "/"
            + TextToken.decode("utf-8") + ":" + timestamp[0:random.randint(1, len(timestamp))] + "/"
            + RandomTextToken.decode("UTF-8") + ":" + timestamp[0:random.randint(1, len(timestamp))] + "/"
            + RandomTextKey.decode("UTF-8") + ":" + timestamp[0:random.randint(1, len(timestamp))] + "/"
            + RandomToken)


def encryption(text):
    #---- Changes string to byte format ----
    bytetext = str.encode(text)
    #---- Generates a special key ----
    key = Fernet.generate_key()
    encryption_type = Fernet(key)
    #---- Encrypts the byte string into a fernet token using the generated key ----
    token = encryption_type.encrypt(bytetext)
    #---- Returns the encrypted token and the key, both in byte format ----
    return token, key


def dataDecryption(EncryptedText):
    ShortenedText = EncryptedText.split(":")[len(EncryptedText.split(":")) - 2]
    RandomKey = ShortenedText[ShortenedText.index("/") + 1:len(ShortenedText)]
    RandomToken = EncryptedText.split(":")[len(EncryptedText.split(":")) - 1]
    timestamp = datetime.utcfromtimestamp(Fernet(str.encode(RandomKey)).extract_timestamp(str.encode(RandomToken))).strftime('%d%H:%d%M:%d%S')
    Textkey = EncryptedText.split(":")[0]
    textToken = CleanToken(EncryptedText.split(":")[1], timestamp)
    RandomtextToken = CleanToken(EncryptedText.split(":")[2], timestamp)
    RandomtextKey = CleanToken(EncryptedText.split(":")[3], timestamp)
    return str(decryption(str.encode(textToken), str.encode(Textkey))).replace(RandomKey + ":" + RandomToken, "").replace(str(decryption(str.encode(RandomtextToken), str.encode(RandomtextKey))), "")


def CleanToken(TokenString, timestamp):
    for x in timestamp + "%:dHMS":
        if x + "/" in TokenString:
            cleanToken = TokenString[TokenString.index(x + "/") + 2:len(TokenString)]
            if TokenString.index(x + "/") < 30:
                break
    return cleanToken


def decryption(token, key):
    #---- Will decrypt the encrypted text with a token and key ----
    encryption_type = Fernet(key)
    return encryption_type.decrypt(token).decode()
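The splicing step above (inserting one string into another at a random offset, via `text[:TextPoint] + ... + text[TextPoint:]`) is the core of how dataEncrpytion hides the real text inside filler before Fernet-encrypting the result. A stdlib-only sketch of that step, for illustration (`splice_at_random` is a hypothetical helper name, not part of the package):

```python
import random

def splice_at_random(text, filler):
    # Insert `filler` at a random offset inside `text`, mirroring the
    # text[:TextPoint] + ... + text[TextPoint:] pattern used above.
    point = random.randrange(len(text))
    return text[:point] + filler + text[point:]

padded = splice_at_random("my secret", "####")
# The original is recoverable once the filler is stripped back out
# (here the filler's characters do not appear in the original text):
assert padded.replace("####", "") == "my secret"
```

The package layers this trick repeatedly (filler inside filler, plus an embedded token), so its dataDecryption must strip the pieces back out in the reverse order.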
AdvancedFernetDataEncryption
/advancedfernetdataencryption-1.1-py3-none-any.whl/AdvancedFernetDataEncryption.py
AdvancedFernetDataEncryption.py
AdvancedHTMLParser
==================

AdvancedHTMLParser is an Advanced HTML Parser, with support for adding, removing, modifying, and formatting HTML.

It aims to provide the same interface as you would find in a compliant browser through javascript ( i.e. all the getElement methods, appendChild, etc), an XPath implementation, as well as many more complex and sophisticated features not available through a browser. And most importantly, it's in python!

There are many potential applications, not limited to:

* Webpage Scraping / Data Extraction
* Testing and Validation
* HTML Modification/Insertion
* Outputting your website
* Debugging
* HTML Document generation
* Web Crawling
* Formatting HTML documents or web pages

It is especially good for servlets/webpages. It is quick to take an expertly crafted page in raw HTML / css, and have your servlets ingest it with AdvancedHTMLParser and create/insert data elements into the existing view using a simple and well-known interface ( javascript-like + HTML DOM ).

Another useful scenario is creating automated testing suites, which can operate much more quickly and reliably (and at a deeper function-level) than in-browser testing suites.

Full API
--------

Can be found at http://htmlpreview.github.io/?https://github.com/kata198/AdvancedHTMLParser/blob/master/doc/AdvancedHTMLParser.html?vers=8.1.8 .

Examples
--------

Various examples can be found in the "tests" directory. A very old, simple example can also be found as "example.py" in the root directory.

Short Doc
---------

**The Package and Modules**

The top-level module in this package is "*AdvancedHTMLParser*."

    import AdvancedHTMLParser

Most everything "public" is available through this top-level module, but some corner-case usages may require importing from a submodule. All of these associations can be found through the pydocs.

For example, to access AdvancedTag, the recommended path is just to import the top-level, and use dot-access:

    import AdvancedHTMLParser

    myTag = AdvancedHTMLParser.AdvancedTag('div')

However, you can also import AdvancedTag through this top-level module:

    import AdvancedHTMLParser

    from AdvancedHTMLParser import AdvancedTag

Or, you can import from the specific sub-module, directly:

    import AdvancedHTMLParser

    from AdvancedHTMLParser.Tags import AdvancedTag

All examples below are written as if "import AdvancedHTMLParser" has already been performed, and all relations in examples are based off usages from the top-level import, only.

**AdvancedHTMLParser**

Think of this like "document" in a browser.

The AdvancedHTMLParser can read in a file (or string) of HTML, and will create a modifiable DOM tree from it. It can also be constructed manually from AdvancedHTMLParser.AdvancedTag objects.

To populate an AdvancedHTMLParser from existing HTML:

    parser = AdvancedHTMLParser.AdvancedHTMLParser()

    # Parse an HTML string into the document
    parser.parseStr(htmlStr)

    # Parse an HTML file into the document
    parser.parseFile(filename)

The parser then exposes many "standard" functions as you'd find on the web for accessing the data, and some others:

    getElementsByTagName - Returns a list of all elements matching a tag name

    getElementsByName - Returns a list of all elements with a given name attribute

    getElementById - Returns a single AdvancedTag (or None) if an element matching the provided ID is found

    getElementsByClassName - Returns a list of all elements containing one or more space-separated class names

    getElementsByAttr - Returns a list of all elements matching a particular attribute/value pair.

    getElementsByXPathExpression - Return a TagCollection (list) of all elements matching a given XPath expression

    getElementsWithAttrValues - Returns a list of all elements with a specific attribute name containing one of a list of values

    getElementsCustomFilter - Provide a function/lambda that takes a tag argument, and returns True to "match" it. Returns all matched objects

    getRootNodes - Get a list of nodes at root level (0)

    getAllNodes - Get all the nodes contained within this document

    getHTML - Returns string of HTML representing this DOM

    getFormattedHTML - Returns a formatted string (using AdvancedHTMLFormatter; see below) of the HTML. Takes as argument an indent (defaults to four spaces)

    getMiniHTML - Returns a "mini" HTML representation which disregards all whitespace and indentation beyond the functional single-space

The results of all of these getElement\* functions are TagCollection objects. This is a special kind of list which contains additional functions. See the "TagCollection" section below for more info. These objects can be modified, and will be reflected in the parent DOM.

The parser also contains some expected properties, like

    head - The "head" tag associated with this document, or None

    body - The "body" tag associated with this document, or None

    forms - All "forms" on this document as a TagCollection

**General Attributes**

In general, attributes can be accessed with dot-syntax, i.e. tagEm.id = "Hello" will set the "id" attribute. If it works in HTML javascript on a tag element, it should work on an AdvancedTag element with python.

setAttribute, getAttribute, and removeAttribute are more explicit and recommended ways of getting/setting/deleting attributes on elements.

The same names are used in python as in the javascript/DOM, such as 'className' corresponding to a space-separated string of the 'class' attribute, 'classList' corresponding to a list of classes, etc.
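For intuition about what the getElementsBy\* helpers save you from writing, here is a minimal, stdlib-only sketch of a class-name search built on python's own html.parser. This is a toy stand-in for illustration, not AdvancedHTMLParser's actual implementation (which builds a full DOM tree and returns AdvancedTag objects in a TagCollection):

```python
from html.parser import HTMLParser

class ClassNameFinder(HTMLParser):
    """Toy stand-in: collect the tag names of elements whose class
    attribute contains `className`."""

    def __init__(self, className):
        super().__init__()
        self.className = className
        self.matches = []

    def handle_starttag(self, tag, attrs):
        # attrs arrives as a list of (name, value) pairs
        classes = (dict(attrs).get('class') or '').split()
        if self.className in classes:
            self.matches.append(tag)

finder = ClassNameFinder('item')
finder.feed('<div class="item sale"><span class="item">x</span><p>y</p></div>')
print(finder.matches)  # ['div', 'span']
```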
**Style Attribute**

Style attributes can be manipulated just like in javascript, so element.style.position = 'relative' for setting, or element.style.position for access.

You can also assign the tag.style as a string, like:

    myTag.style = "display: block; float: right; font-weight: bold"

in addition to individual properties:

    myTag.style.display = 'block'
    myTag.style.float = 'right'
    myTag.style.fontWeight = 'bold'

You can remove a style property by setting its value to an empty string. For example, to clear the "display" property:

    myTag.style.display = ''

A standard method *setProperty* can also be used to set or remove individual properties. For example:

    myTag.style.setProperty("display", "block") # Set display: block
    myTag.style.setProperty("display", '') # Clear display: property

The naming conventions are the same as in javascript, like "element.style.paddingTop" for the "padding-top" attribute.

**TagCollection**

A TagCollection can be used like a list. Every element has a unique uuid associated with it, and a TagCollection will ensure that the same element does not appear twice within its list (so it acts like an ordered set).

It also exposes the various getElement\* functions, which operate on the elements within the list (and their children). For example:

    # Filter off the parser all tags with "item" in class
    tagCollection = document.getElementsByClassName('item')

    # Return all nodes which are nested within any class="item" object
    # and also contain the class name "onsale"
    itemsWithOnSaleClass = tagCollection.getElementsByClassName('onsale')

To operate just on items in the list, you can use the TagCollection method, *filterCollection*, which takes a lambda/function and returns True to retain that tag in the return. For example:

    # Filter off the parser all tags with "item" in class
    tagCollection = document.getElementsByClassName('item')

    # Provide a lambda to filter this collection, returning in tagCollection2
    # those items which have a "value" attribute > 20 and contain at least
    # 1 child element with the "specialPrice" class
    tagCollection2 = tagCollection.filterCollection( lambda node : int(node.getAttribute('value') or 0) > 20 and len(node.getElementsByClassName('specialPrice')) > 1 )

TagCollections also support advanced filtering (find/filter methods), see the "Advanced Filtering" section below.

**AdvancedTag**

The AdvancedTag represents a single tag and its inner text. It exposes many of the functions and properties you would expect to be present if using javascript.

Each AdvancedTag also supports the same getElementsBy\* functions as the parser. It adds several additional functions that are not found in javascript, such as peers and arbitrary attribute searching.

Some of these include:

    appendText - Append text to this element

    appendChild - Append a child to this element

    appendBlock - Append a block (text or AdvancedTag) to this element

    append - alias of appendBlock

    removeChild - Removes a child

    removeText - Removes the first occurrence of some text from any text nodes

    removeTextAll - Removes ALL occurrences of some text from any text nodes

    insertBefore - Inserts a child before an existing child

    insertAfter - Inserts a child after an existing child

    getChildren - Returns the children as a list

    getStartTag - Start Tag, with attributes

    getEndTag - End Tag

    getPeersByName - Gets "peers" (elements with same parent, at same level in tree) with a given name

    getPeersByAttr - Gets peers by an arbitrary attribute/value combination

    getPeersWithAttrValues - Gets peers by an arbitrary attribute/values combination.

    getPeersByClassName - Gets peers that contain a given class name

    getElement\* - Same as above, but act on the children of this element.

    getParentElementCustomFilter - Takes a lambda/function and applies it to all parents of this element, upward until the document root. Returns the first node for which the function returns True, or None if there are no matches on any parent nodes

    getHTML / toHTML / asHTML - Get the HTML representation using this node as a root (so start tag and attributes, innerHTML (text and child nodes), and end tag)

    firstChild - Get the first child of this node, be it text or an element (AdvancedTag)

    firstElementChild - Get the first child of this node that is an element

    lastChild - Get the last child of this node, be it text or an element (AdvancedTag)

    lastElementChild - Get the last child of this node that is an element

    nextSibling - Get next sibling, be it text or an element

    nextElementSibling - Get next sibling, that is an element

    previousSibling - Get previous sibling, be it text or an element

    previousElementSibling - Get previous sibling, that is an element

    {get,set,has,remove}Attribute - get/set/test/remove an attribute

    {add,remove}Class - Add/remove a class from the list of classes

    setStyle - Set a specific style property [like: setStyle("font-weight", "bold") ]

    isTagEqual - Compare if two tags have the same attributes. Using the == operator will compare if they are the same exact tag (by uuid)

    getUid - Get a unique ID for this tag (internal)

    getAllChildNodes - Gets all nodes beneath this node in the document (its children, its children's children, etc)

    getAllNodes - Same as getAllChildNodes, but also includes this node

    contains - Check if a provided node appears anywhere beneath this node (as child, child-of-child, etc)

    remove - Remove this node from its parent element, and disassociate this and all sub-nodes from the associated document

    __str__ - str(tag) will show start tag with attributes, inner text, and end tag

    __repr__ - Shows a reconstructable representation of this tag

    __getitem__ - Can be indexed like tag[2] to access the second child.

And some properties:

    children/childNodes - The children (tags) as a list. NOTE: This returns only AdvancedTag objects, not text.

    childBlocks - All direct child blocks. This includes both AdvancedTag objects and text nodes (str)

    innerHTML - The innerHTML including the html of all children

    innerText - The text nodes, in order, as they appear as direct children to this node, as a string

    textContent - All the text nodes, in order, as they appear within this node or any children (or their children, etc.)

    outerHTML - innerHTML wrapped in this tag

    classNames/classList - a list of the classes

    parentNode/parentElement - The parent tag

    tagName - The tag name

    ownerDocument - The document associated with this node, if any

And many others. See the pydocs for a full list, and associated docstrings.
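The isTagEqual / == distinction above is worth spelling out: == compares node identity (via the per-tag uuid assigned at create time), while isTagEqual compares content. A toy illustration of that design (ToyTag is a hypothetical class for this sketch, not the package's AdvancedTag):

```python
import uuid

class ToyTag:
    """Toy node: identity via a per-instance uuid, content held separately."""

    def __init__(self, tagName, attributes):
        self.uid = uuid.uuid4()          # unique per node, assigned at create time
        self.tagName = tagName
        self.attributes = dict(attributes)

    def __eq__(self, other):
        # == means "the same exact node", not "looks the same"
        return isinstance(other, ToyTag) and self.uid == other.uid

    def isTagEqual(self, other):
        # content comparison: same tag name and same attributes
        return (self.tagName, self.attributes) == (other.tagName, other.attributes)

a = ToyTag('div', {'class': 'item'})
b = ToyTag('div', {'class': 'item'})
assert a.isTagEqual(b)   # identical content...
assert a != b            # ...but two distinct nodes
assert a == a
```

This identity-based equality is also why a TagCollection can behave like an ordered set: membership is decided per node, so two distinct tags with identical markup can coexist in one collection.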
**Appending raw HTML**

You can append raw HTML to a tag by calling:

    tagEm.appendInnerHTML('<div id="Some sample HTML"> <span> Yes </span> </div>')

which acts like, in javascript:

    tagEm.innerHTML += '<div id="Some sample HTML"> <span> Yes </span> </div>';

**Creating Tags from HTML**

Tags can be created from HTML strings outside of AdvancedHTMLParser.parseStr (which parses an entire document) by:

* Parser.AdvancedHTMLParser.createElement - Like document.createElement, creates a tag with a given tag name. Not associated with any document.
* Parser.AdvancedHTMLParser.createElementFromHTML - Creates a single tag from HTML.
* Parser.AdvancedHTMLParser.createElementsFromHTML - Creates and returns a list of one or more tags from HTML.
* Parser.AdvancedHTMLParser.createBlocksFromHTML - Creates and returns a list of blocks. These can be AdvancedTag objects (a tag), or a str object (if raw text outside of tags). This is recommended for parsing arbitrary HTML outside of parsing the entire document.

The createElement{,s}FromHTML functions will discard any text outside of the tags passed in.

Advanced Filtering
------------------

AdvancedHTMLParser contains two kinds of "Advanced Filtering":

**find**

The most basic unified search: AdvancedHTMLParser has a "find" method on it. This will search all nodes with a single, simple query. This is not as robust as the "filter" method (which can also be used on any tag or TagCollection), but does not require any dependency packages.

    find - Perform a search of elements using attributes as keys and potential values as values

(i.e. parser.find(name='blah', tagname='span') will return all elements in this document with the name "blah" of the tag type "span")

Arguments are key = value, or key can equal a tuple/list of values to match ANY of those values.

Append a key with \_\_contains to test if some str (or one of several possible strs) is within an element.

Append a key with \_\_icontains to perform the same \_\_contains op, but ignoring case.

Special keys:

    tagname - The tag name of the element

    text - The text within an element

NOTE: Empty string means both "not set" and "no value" in this implementation.

Example:

    cheddarElements = parser.find(name='items', text__icontains='cheddar')

**filter**

If you have QueryableList installed (a default dependency since 7.0.0 of AdvancedHTMLParser, but one that can be skipped with '--no-deps' passed to setup.py), then you can take advantage of the advanced "filter" methods, on either the parser (entire document), any tag (that tag and nodes beneath), or tag collection (any of those tags, or any tags beneath them).

A full explanation of the various filter modes that QueryableList supports can be found at https://github.com/kata198/QueryableList

Special keys are: "tagname" for the tag name, and "text" for the inner text of a node. An attribute that is unset has a value of None, which is different than a set attribute with an empty value ''.

For example:

    cheddarElements = parser.filter(name='items', text__icontains='cheddar')

The AdvancedHTMLParser has:

    filter / filterAnd - Perform a filter query on all nodes in this document, returning a TagCollection of elements matching ALL criteria

    filterOr - Perform a filter query on all nodes in this document, returning a TagCollection of elements matching ANY criteria

Every AdvancedTag has:

    filter / filterAnd - Perform a filter query on this node and all sub-nodes, returning a TagCollection of elements matching ALL criteria

    filterOr - Perform a filter query on this node and all sub-nodes, returning a TagCollection of elements matching ANY criteria

Every TagCollection has:

    filter / filterAnd - Perform a filter query on JUST the nodes contained within this list (no children), returning a TagCollection of elements matching ALL criteria

    filterOr - Perform a filter query on JUST the nodes contained within this list (no children), returning a TagCollection of elements matching ANY criteria

    filterAll / filterAllAnd - Perform a filter query on the nodes contained within this list, and all of their sub-nodes, returning a TagCollection of elements matching ALL criteria

    filterAllOr - Perform a filter query on the nodes contained within this list, and all of their sub-nodes, returning a TagCollection of elements matching ANY criteria

Validation
----------

Validation can be performed by using ValidatingAdvancedHTMLParser. It will raise an exception if an assumption would have to be made to continue parsing (i.e. something important).

    InvalidCloseException - Tried to close a tag that shouldn't have been closed

    MissedCloseException - Missed a non-optional close of a tag, which would lead to causing an assumption during parsing.

    InvalidAttributeNameException - An attribute name was found that contained an invalid character, or broke a naming rule.

XPath
-----

**XPath support is in Beta phase.**

Basic XPath support has been added, which supports searching, attribute matching, positions, indexes, some functions, and most axes (such as parent::).

Examples of some currently supported expressions:

    //table//tr[last()]/parent::tbody

Find any table, descend to any descendant that is the last tr of its parent, rise to and return the parent tbody of that tr.

    //div[ @name = "Cheese" ]/span[2]

Find any div with attribute name="Cheese", and return the second direct child which is a span.

    //*[ normalize-space() = "Banana" ]

Find and return any tag which contains the inner text, normalized for whitespace, of "Banana"

    //div/*[ contains( concat( ' ', @class, ' ' ), 'purple-cheese' ) ]

Find and return any tag under a div containing a class "purple-cheese"

More will be added. If you have a needed xpath feature not currently supported (you'll know by the parse exception raised), please open an issue and I will make it a priority!

IndexedAdvancedHTMLParser
=========================

IndexedAdvancedHTMLParser provides the ability to use indexing for faster search. If you are just parsing and not modifying, this is your best bet. If you are modifying the DOM tree, make sure you call IndexedAdvancedHTMLParser.reindex() before relying on the indexes.

Each of the get\* functions above takes an additional "useIndex" argument, which can also be set to False to skip the index. See the constructor for more information, and the "Performance and Indexing" section below.

AdvancedHTMLFormatter and formatHTML
------------------------------------

**AdvancedHTMLFormatter**

The AdvancedHTMLFormatter formats HTML into a pretty layout. It can handle elements like pre, code, script, style, etc. so that their contents are preserved, but does not understand CSS rules.
The methods are:

    parseStr - Parse a string of contents

    parseFile - Parse a filename or file object

    getHTML - Get the formatted html

    getRootNodes - Get a list of the "root" nodes (most outer nodes, should be <html> on a valid document)

    getRoot - Gets the "root" node (on a valid document this should be <html>). For arbitrary HTML, you should use getRootNodes, as there may be several nodes at the same outermost level

You can access this same formatting off an AdvancedHTMLParser.AdvancedHTMLParser (or IndexedAdvancedHTMLParser) by calling .getFormattedHTML()

**AdvancedHTMLMiniFormatter**

The AdvancedHTMLMiniFormatter will strip all non-functional whitespace (meaning any whitespace which wouldn't normally add a space to the document or is required for xhtml) and provide no indentation. Use this when pretty-printing doesn't matter and you'd like to save space.

You can access this same formatting off an AdvancedHTMLParser.AdvancedHTMLParser (or IndexedAdvancedHTMLParser) by calling .getMiniHTML()

**AdvancedHTMLSlimTagFormatter and AdvancedHTMLSlimTagMiniFormatter**

In order to support some less-lenient parsers, AdvancedHTMLParser will by default include a space prior to the close-tag '>' character in HTML output. For example:

    <span id="abc" >Blah</span>
    <br />
    <hr class="bigline" />

It is recommended to keep these extra spaces, but if for some reason you feel you need to get rid of them, you can use either *AdvancedHTMLSlimTagFormatter* or *AdvancedHTMLSlimTagMiniFormatter*.

*AdvancedHTMLSlimTagFormatter* will do pretty-printing (like getFormattedHTML / AdvancedHTMLFormatter.getHTML output)

*AdvancedHTMLSlimTagMiniFormatter* will do mini-printing (like getMiniHTML / AdvancedHTMLMiniFormatter.getHTML output)

Feeding in your HTML via formatter.parseStr(htmlStr) [where htmlStr can be parser.getHTML()] will cause it to be output without the start-tag padding. For example:

    <span id="abc">Blah</span>

By default, self-closing tags will retain their padding so that an xhtml-compliant parser doesn't treat "/" as either an attribute or part of the attribute-value of the preceding attribute. For example:

    <hr class="bigline"/>

could be interpreted as a horizontal rule with a class name of "bigline/". Most modern browsers work around this and will not have issue, but some parsers will.

You may pass an optional keyword-argument to the formatter constructor, slimSelfClosing=True, in order to force removal of this padding from self-closing tags. For example:

    myHtml = '<hr class="bigline" />'

    formatter = AdvancedHTMLSlimTagMiniFormatter(slimSelfClosing=True)
    formatter.parseStr(myHtml)
    miniHtml = formatter.getHTML()
    # miniHtml will now contain '<hr class="bigline"/>' .

**formatHTML script**

A script, formatHTML, comes with this package and will perform formatting on an input file, and output to a file or stdout:

    Usage: formatHTML (Optional Arguments) (optional: /path/to/in.html) (optional: [/path/to/output.html])

    Formats HTML on input and writes to output.

    Optional Arguments:
    -------------------

    -e [encoding] - Specify an encoding to use. Default is utf-8

    -m or --mini - Output "mini" HTML (only retain functional whitespace, strip the rest and no indentation)

    -p or --pretty - Output "pretty" HTML [This is the default mode]

    --indent=' ' - Use the provided string [default 4-spaces] to represent each level of nesting. Use --indent=" " for 1 tab instead, for example. Affects pretty printing mode only

If output filename is not specified or is an empty string, output will be to stdout. If input filename is not specified or is an empty string, input will be from stdin. If -e is provided, will use that as the encoding. Defaults to utf-8.

Notes
-----

* Each tag has a generated unique ID which is assigned at create time. The search functions use these to prevent duplicates in search results. There is a global function in the module, AdvancedHTMLParser.uniqueTags, which will filter a list of tags and remove any duplicates. TagCollections will only allow one instance of a tag (no duplicates)

* In general, for tag names and attribute names, you should use lowercase values. During parsing, the parser will lowercase attribute names (like NAME="Abc" becomes name="Abc"). During searching, however, for performance reasons, it is assumed you are passing in already-lowercased strings. If you can't trust the input to be lowercase, then it is your responsibility to call .lower() before calling .getElementsBy\*

* If you are using IndexedAdvancedHTMLParser (instead of AdvancedHTMLParser) to construct HTML and not search, I recommend either setting the index params to False in the constructor, or calling IndexedAdvancedHTMLParser.disableIndexing(). When you are finished and want to go back to searching, you can call IndexedAdvancedHTMLParser.reindex and set to True what you want to reindex.

* There are additional functions and usages not documented here, check the file for more information.

Performance and Indexing
------------------------

Performance is very good using AdvancedHTMLParser, and even better (for scraping) using the IndexedAdvancedHTMLParser class. The performance can be further enhanced on IndexedAdvancedHTMLParser via several indexing tunables:

First, in the constructor of IndexedAdvancedHTMLParser and in the reindex method is a boolean to be set which determines if each field is indexed (e.g. indexIDs will make getElementByID use an index).

If an index is used, parsing time goes up slightly, but searches become O(1) (from the root node; slightly less efficient from other nodes) instead of O(n) [n = num elements].

By default, IDs, Names, Tag Names, and Class Names are indexed.

You can add an index for any arbitrary field (used in getElementByAttr) via IndexedAdvancedHTMLParser.addIndexOnAttribute('src'), for example, to index the 'src' attribute. This index can be removed via removeIndexOnAttribute.

Dependencies
------------

AdvancedHTMLParser can be installed without dependencies (pass '--no-deps' to setup.py), and everything will function EXCEPT the filter\* methods.

By default, https://github.com/kata198/QueryableList will be installed, which will enable support for those additional filter methods.

Unicode
-------

AdvancedHTMLParser generally has very good support for unicode, and defaults to "utf-8" (can be altered by the "encoding" argument to AdvancedHTMLParser.AdvancedHTMLParser when parsing.)

If you are still getting UnicodeDecodeError or UnicodeEncodeError, there are a few things you can try:

* If the error happens when printing/writing to stdout ( default behaviour for apache / mod\_python is to open stdout with the ANSI/ASCII encoding ), ensure your streams are, in fact, set to utf-8. Set the environment variable PYTHONIOENCODING to "utf-8" before python is launched. In Apache, you can add the line "SetEnv PYTHONIOENCODING utf-8" to your httpd.conf in order to achieve this.

* Ensure that the data you are passing to AdvancedHTMLParser has the correct encoding (matching the "encoding" parameter).

* Switch to python3 if at all possible -- python2 does have 'unicode' support and AdvancedHTMLParser uses it to the best of its ability, but python2 does still have some inherent flaws which may come up using standard library / output functions. You should ensure that these are set to use utf-8 (as described above).

AdvancedHTMLParser is tested against unicode ( it even has a unit test ) which works in both python2 and python3 in the general case. If you are having an issue (even on python2) and you've checked the above "common configuration/usage" errors and think there is still an issue, please open a bug report on https://github.com/kata198/AdvancedHTMLParser with a test case, python version, and traceback. The library itself is considered unicode-safe; almost always it's an issue outside of this library, or one with a simple workaround.

Example Usage
-------------

See https://raw.githubusercontent.com/kata198/AdvancedHTMLParser/master/example.py for an example of parsing store data using this class.

Changes
-------

See: https://raw.githubusercontent.com/kata198/AdvancedHTMLParser/master/ChangeLog

Contact Me / Support
--------------------

I am available by email to provide support, answer questions, or otherwise provide assistance in using this software. Use my email kata198 at gmail.com with "AdvancedHTMLParser" in the subject line.

If you are having an issue / found a bug / want to merge in some changes, please open a pull request.

Unit Tests
----------

See the "tests" directory available in github. Use "runTests.py" within that directory. Tests use my `GoodTests <https://github.com/kata198/GoodTests>`_ framework. It will download GoodTests to the current directory if not found in your path, so you don't need to worry that it's a dependency.
AdvancedHTMLParser
/AdvancedHTMLParser-9.0.2.tar.gz/AdvancedHTMLParser-9.0.2/README.rst
README.rst
AdvancedHTMLParser ================== AdvancedHTMLParser is an Advanced HTML Parser, with support for adding, removing, modifying, and formatting HTML. It aims to provide the same interface as you would find in a compliant browser through javascript ( i.e. all the getElement methods, appendChild, etc), an XPath implementation, as well as many more complex and sophisticated features not available through a browser. And most importantly, it's in python! There are many potential applications, not limited to: * Webpage Scraping / Data Extraction * Testing and Validation * HTML Modification/Insertion * Outputting your website * Debugging * HTML Document generation * Web Crawling * Formatting HTML documents or web pages It is especially good for servlets/webpages. It is quick to take an expertly crafted page in raw HTML / css, and have your servlet's ingest with AdvancedHTMLParser and create/insert data elements into the existing view using a simple and well-known interface ( javascript-like + HTML DOM ). Another useful scenario is creating automated testing suites which can operate much more quickly and reliably (and at a deeper function-level), unlike in-browser testing suites. Full API -------- Can be found http://htmlpreview.github.io/?https://github.com/kata198/AdvancedHTMLParser/blob/master/doc/AdvancedHTMLParser.html?vers=8.1.8 . Examples -------- Various examples can be found in the "tests" directory. A very old, simple example can also be found as "example.py" in the root directory. Short Doc --------- **The Package and Modules** The top-level module in this package is "*AdvancedHTMLParser*." import AdvancedHTMLParser Most everything "public" is available through this top-level module, but some corner-case usages may require importing from a submodule. All of these associations can be found through the pydocs. 
For example, to access AdvancedTag, the recommended path is just to import the top-level, and use dot-access: import AdvancedHTMLParser myTag = AdvancedHTMLParser.AdvancedTag('div') However, you can also import AdvancedTag through this top-level module: import AdvancedHTMLParser from AdvancedHTMLParser import AdvancedTag Or, you can import from the specific sub-module, directly: import AdvancedHTMLParser from AdvancedHTMLParser.Tags import AdvancedTag All examples below are written as if "import AdvancedHTMLParser" has already been performed, and all relations in examples are based off usages from the top-level import, only. **AdvancedHTMLParser** Think of this like "document" in a browser. The AdvancedHTMLParser can read in a file (or string) of HTML, and will create a modifiable DOM tree from it. It can also be constructed manually from AdvancedHTMLParser.AdvancedTag objects. To populate an AdvancedHTMLParser from existing HTML: parser = AdvancedHTMLParser.AdvancedHTMLParser() # Parse an HTML string into the document parser.parseStr(htmlStr) # Parse an HTML file into the document parser.parseFile(filename) The parser then exposes many "standard" functions as you'd find on the web for accessing the data, and some others: getElementsByTagName - Returns a list of all elements matching a tag name getElementsByName - Returns a list of all elements with a given name attribute getElementById - Returns a single AdvancedTag (or None) if found an element matching the provided ID getElementsByClassName - Returns a list of all elements containing one or more space-separated class names getElementsByAttr - Returns a list of all elements matching a paticular attribute/value pair. 
getElementsByXPathExpression - Return a TagCollection (list) of all elements matching a given XPath expression getElementsWithAttrValues - Returns a list of all elements with a specific attribute name containing one of a list of values getElementsCustomFilter - Provide a function/lambda that takes a tag argument, and returns True to "match" it. Returns all matched objects getRootNodes - Get a list of nodes at root level (0) getAllNodes - Get all the nodes contained within this document getHTML - Returns string of HTML representing this DOM getFormattedHTML - Returns a formatted string (using AdvancedHTMLFormatter; see below) of the HTML. Takes as argument an indent (defaults to four spaces) getMiniHTML - Returns a "mini" HTML representation which disregards all whitespace and indentation beyond the functional single-space The results of all of these getElement\* functions are TagCollection objects. This is a special kind of list which contains additional functions. See the "TagCollection" section below for more info. These objects can be modified, and will be reflected in the parent DOM. The parser also contains some expected properties, like head - The "head" tag associated with this document, or None body - The "body" tag associated with this document, or None forms - All "forms" on this document as a TagCollection **General Attributes** In general, attributes can be accessed with dot-syntax, i.e. tagEm.id = "Hello" will set the "id" attribute. If it works in HTML javascript on a tag element, it should work on an AdvancedTag element with python. setAttribute, getAttribute, and removeAttribute are more explicit and recommended ways of getting/setting/deleting attributes on elements. The same names are used in python as in the javascript/DOM, such as 'className' corrosponding to a space-separated string of the 'class' attribute, 'classList' corrosponding to a list of classes, etc. 
**Style Attribute**

Style attributes can be manipulated just like in javascript, so element.style.position = 'relative' for setting, or element.style.position for access.

You can also assign tag.style as a string, like:

    myTag.style = "display: block; float: right; font-weight: bold"

in addition to individual properties:

    myTag.style.display = 'block'
    myTag.style.float = 'right'
    myTag.style.fontWeight = 'bold'

You can remove a style property by setting its value to an empty string. For example, to clear the "display" property:

    myTag.style.display = ''

A standard method *setProperty* can also be used to set or remove individual properties. For example:

    myTag.style.setProperty("display", "block") # Set display: block
    myTag.style.setProperty("display", '')      # Clear display: property

The naming conventions are the same as in javascript, like "element.style.paddingTop" for the "padding-top" attribute.

**TagCollection**

A TagCollection can be used like a list. Every element has a unique uuid associated with it, and a TagCollection will ensure that the same element does not appear twice within its list (so it acts like an ordered set).

It also exposes the various getElement\* functions, which operate on the elements within the list (and their children). For example:

    # Filter off the parser all tags with "item" in class
    tagCollection = document.getElementsByClassName('item')

    # Return all nodes which are nested within any class="item" element
    #  and also contain the class name "onsale"
    itemsWithOnSaleClass = tagCollection.getElementsByClassName('onsale')

To operate just on the items in the list, you can use the TagCollection method *filterCollection*, which takes a lambda/function and returns True to retain that tag in the result.
For example:

    # Filter off the parser all tags with "item" in class
    tagCollection = document.getElementsByClassName('item')

    # Provide a lambda to filter this collection, returning in tagCollection2
    #  those items which have a "value" attribute > 20 and contain at least
    #  1 child element with the "specialPrice" class
    tagCollection2 = tagCollection.filterCollection( lambda node : int(node.getAttribute('value') or 0) > 20 and len(node.getElementsByClassName('specialPrice')) > 1 )

TagCollections also support advanced filtering (find/filter methods); see the "Advanced Filtering" section below.

**AdvancedTag**

The AdvancedTag represents a single tag and its inner text. It exposes many of the functions and properties you would expect to be present if using javascript.

Each AdvancedTag also supports the same getElementsBy\* functions as the parser. It adds several additional ones that are not found in javascript, such as peers and arbitrary attribute searching.

Some of these include:

    appendText - Append text to this element

    appendChild - Append a child to this element

    appendBlock - Append a block (text or AdvancedTag) to this element

    append - alias of appendBlock

    removeChild - Removes a child

    removeText - Removes the first occurrence of some text from any text nodes

    removeTextAll - Removes ALL occurrences of some text from any text nodes

    insertBefore - Inserts a child before an existing child

    insertAfter - Inserts a child after an existing child

    getChildren - Returns the children as a list

    getStartTag - Start tag, with attributes

    getEndTag - End tag

    getPeersByName - Gets "peers" (elements with the same parent, at the same level in the tree) with a given name

    getPeersByAttr - Gets peers by an arbitrary attribute/value combination

    getPeersWithAttrValues - Gets peers by an arbitrary attribute/values combination.

    getPeersByClassName - Gets peers that contain a given class name

    getElement\* - Same as above, but act on the children of this element.
    getParentElementCustomFilter - Takes a lambda/function and applies it on all parents of this element, upward until the document root. Returns the first node for which the function returns True, or None if there are no matches on any parent nodes

    getHTML / toHTML / asHTML - Get the HTML representation using this node as a root (so start tag and attributes, innerHTML (text and child nodes), and end tag)

    firstChild - Get the first child of this node, be it text or an element (AdvancedTag)

    firstElementChild - Get the first child of this node that is an element

    lastChild - Get the last child of this node, be it text or an element (AdvancedTag)

    lastElementChild - Get the last child of this node that is an element

    nextSibling - Get the next sibling, be it text or an element

    nextElementSibling - Get the next sibling that is an element

    previousSibling - Get the previous sibling, be it text or an element

    previousElementSibling - Get the previous sibling that is an element

    {get,set,has,remove}Attribute - get/set/test/remove an attribute

    {add,remove}Class - Add/remove a class from the list of classes

    setStyle - Set a specific style property [like: setStyle("font-weight", "bold")]

    isTagEqual - Compare if two tags have the same attributes. Using the == operator will compare if they are the same exact tag (by uuid)

    getUid - Get a unique ID for this tag (internal)

    getAllChildNodes - Gets all nodes beneath this node in the document (its children, its children's children, etc.)

    getAllNodes - Same as getAllChildNodes, but also includes this node

    contains - Check if a provided node appears anywhere beneath this node (as child, child-of-child, etc.)

    remove - Removes this node from its parent element, and disassociates this and all sub-nodes from the associated document

    __str__ - str(tag) will show the start tag with attributes, inner text, and end tag

    __repr__ - Shows a reconstructable representation of this tag

    __getitem__ - Can be indexed like tag[2] to access the second child.
And some properties:

    children/childNodes - The children (tags) as a list. NOTE: This returns only AdvancedTag objects, not text.

    childBlocks - All direct child blocks. This includes both AdvancedTag objects and text nodes (str)

    innerHTML - The innerHTML, including the html of all children

    innerText - The text nodes, in order, as they appear as direct children of this node, as a string

    textContent - All the text nodes, in order, as they appear within this node or any children (or their children, etc.)

    outerHTML - innerHTML wrapped in this tag

    classNames/classList - A list of the classes

    parentNode/parentElement - The parent tag

    tagName - The tag name

    ownerDocument - The document associated with this node, if any

And many others. See the pydocs for a full list and the associated docstrings.

**Appending raw HTML**

You can append raw HTML to a tag by calling:

    tagEm.appendInnerHTML('<div id="Some sample HTML"> <span> Yes </span> </div>')

which acts like, in javascript:

    tagEm.innerHTML += '<div id="Some sample HTML"> <span> Yes </span> </div>';

**Creating Tags from HTML**

Tags can be created from HTML strings outside of AdvancedHTMLParser.parseStr (which parses an entire document) by:

* Parser.AdvancedHTMLParser.createElement - Like document.createElement, creates a tag with a given tag name. Not associated with any document.

* Parser.AdvancedHTMLParser.createElementFromHTML - Creates a single tag from HTML.

* Parser.AdvancedHTMLParser.createElementsFromHTML - Creates and returns a list of one or more tags from HTML.

* Parser.AdvancedHTMLParser.createBlocksFromHTML - Creates and returns a list of blocks. These can be AdvancedTag objects (a tag) or str objects (raw text outside of tags). This is the recommended method for parsing arbitrary HTML outside of parsing an entire document.

The createElement{,s}FromHTML functions will discard any text outside of the tags passed in.
Advanced Filtering
------------------

AdvancedHTMLParser contains two kinds of "Advanced Filtering":

**find**

The most basic unified search: AdvancedHTMLParser has a "find" method. This will search all nodes with a single, simple query. It is not as robust as the "filter" method (which can also be used on any tag or TagCollection), but it does not require any dependency packages.

    find - Perform a search of elements using attributes as keys and potential values as values

       (i.e.  parser.find(name='blah', tagname='span')  will return all elements in this document with the name "blah" of the tag type "span" )

    Arguments are key = value, or key can equal a tuple/list of values to match ANY of those values.

    Append a key with __contains to test if some strs (or several possible strs) are within an element

    Append a key with __icontains to perform the same __contains op, but ignoring case

    Special keys:

       tagname - The tag name of the element

       text - The text within an element

    NOTE: An empty string means both "not set" and "no value" in this implementation.

Example:

    cheddarElements = parser.find(name='items', text__icontains='cheddar')

**filter**

If you have QueryableList installed (a default dependency of AdvancedHTMLParser since 7.0.0, but it can be skipped with '\-\-no\-deps' passed to setup.py), then you can take advantage of the advanced "filter" methods, on either the parser (the entire document), any tag (that tag and the nodes beneath it), or any tag collection (any of those tags, or any tags beneath them).

A full explanation of the various filter modes that QueryableList supports can be found at https://github.com/kata198/QueryableList

Special keys are: "tagname" for the tag name, and "text" for the inner text of a node.

An attribute that is unset has a value of None, which is different than a set attribute with an empty value ''.
For example:

    cheddarElements = parser.filter(name='items', text__icontains='cheddar')

The AdvancedHTMLParser has:

    filter / filterAnd - Perform a filter query on all nodes in this document, returning a TagCollection of elements matching ALL criteria

    filterOr - Perform a filter query on all nodes in this document, returning a TagCollection of elements matching ANY criteria

Every AdvancedTag has:

    filter / filterAnd - Perform a filter query on this node and all sub-nodes, returning a TagCollection of elements matching ALL criteria

    filterOr - Perform a filter query on this node and all sub-nodes, returning a TagCollection of elements matching ANY criteria

Every TagCollection has:

    filter / filterAnd - Perform a filter query on JUST the nodes contained within this list (no children), returning a TagCollection of elements matching ALL criteria

    filterOr - Perform a filter query on JUST the nodes contained within this list (no children), returning a TagCollection of elements matching ANY criteria

    filterAll / filterAllAnd - Perform a filter query on the nodes contained within this list, and all of their sub-nodes, returning a TagCollection of elements matching ALL criteria

    filterAllOr - Perform a filter query on the nodes contained within this list, and all of their sub-nodes, returning a TagCollection of elements matching ANY criteria

Validation
----------

Validation can be performed by using ValidatingAdvancedHTMLParser. It will raise an exception if an assumption would have to be made to continue parsing (i.e. something important is wrong).

    InvalidCloseException - Tried to close a tag that shouldn't have been closed

    MissedCloseException - Missed a non-optional close of a tag, which would cause an assumption to be made during parsing

    InvalidAttributeNameException - An attribute name was found that contained an invalid character, or broke a naming rule.
XPath
-----

**XPath support is in the beta phase.**

Basic XPath support has been added, which supports searching, attribute matching, positions, indexes, some functions, and most axes (such as parent::).

Examples of some currently supported expressions:

    //table//tr[last()]/parent::tbody

Find any table, descend to any descendant that is the last tr of its parent, rise to and return the parent tbody of that tr.

    //div[ @name = "Cheese" ]/span[2]

Find any div with attribute name="Cheese", and return the second direct child which is a span.

    //*[ normalize-space() = "Banana" ]

Find and return any tag whose inner text, normalized for whitespace, is "Banana".

    //div/*[ contains( concat( ' ', @class, ' ' ), 'purple-cheese' ) ]

Find and return any tag under a div containing the class "purple-cheese".

More will be added. If you need an xpath feature that is not currently supported (you'll know by the parse exception raised), please open an issue and I will make it a priority!

IndexedAdvancedHTMLParser
=========================

IndexedAdvancedHTMLParser provides the ability to use indexing for faster searching. If you are just parsing and not modifying, this is your best bet. If you are modifying the DOM tree, make sure you call IndexedAdvancedHTMLParser.reindex() before relying on the indexes.

Each of the get\* functions above takes an additional "useIndex" argument, which can be set to False to skip the index. See the constructor for more information, and the "Performance and Indexing" section below.

AdvancedHTMLFormatter and formatHTML
------------------------------------

**AdvancedHTMLFormatter**

The AdvancedHTMLFormatter formats HTML into a pretty layout. It can handle elements like pre, code, script, style, etc. so as to keep their contents preserved, but it does not understand CSS rules.
The methods are:

    parseStr - Parse a string of contents

    parseFile - Parse a filename or file object

    getHTML - Get the formatted html

    getRootNodes - Get a list of the "root" nodes (the outermost nodes; this should be <html> on a valid document)

    getRoot - Gets the "root" node (on a valid document this should be <html>). For arbitrary HTML, you should use getRootNodes, as there may be several nodes at the same outermost level

You can access this same formatting from an AdvancedHTMLParser.AdvancedHTMLParser (or IndexedAdvancedHTMLParser) by calling .getFormattedHTML()

**AdvancedHTMLMiniFormatter**

The AdvancedHTMLMiniFormatter will strip all non-functional whitespace (meaning any whitespace which wouldn't normally add a space to the document or is required for xhtml) and provide no indentation. Use this when pretty-printing doesn't matter and you'd like to save space.

You can access this same formatting from an AdvancedHTMLParser.AdvancedHTMLParser (or IndexedAdvancedHTMLParser) by calling .getMiniHTML()

**AdvancedHTMLSlimTagFormatter and AdvancedHTMLSlimTagMiniFormatter**

In order to support some less-lenient parsers, AdvancedHTMLParser will by default include a space before the close-tag '>' character in HTML output. For example:

    <span id="abc" >Blah</span>

    <br />

    <hr class="bigline" />

It is recommended to keep these extra spaces, but if for some reason you feel you need to get rid of them, you can use either *AdvancedHTMLSlimTagFormatter* or *AdvancedHTMLSlimTagMiniFormatter*.

*AdvancedHTMLSlimTagFormatter* will do pretty-printing (like getFormattedHTML / AdvancedHTMLFormatter.getHTML output)

*AdvancedHTMLSlimTagMiniFormatter* will do mini-printing (like getMiniHTML / AdvancedHTMLMiniFormatter.getHTML output)

Feeding in your HTML via formatter.parseStr(htmlStr) [where htmlStr can be parser.getHTML()] will cause it to be output without the start-tag padding.
For example:

    <span id="abc">Blah</span>

By default, self-closing tags will retain their padding so that an xhtml-compliant parser doesn't treat "/" as either an attribute or part of the attribute-value of the preceding attribute. For example:

    <hr class="bigline"/>

could be interpreted as a horizontal rule with a class name of "bigline/". Most modern browsers work around this and will not have an issue, but some parsers will.

You may pass an optional keyword argument to the formatter constructor, slimSelfClosing=True, in order to force removal of this padding from self-closing tags. For example:

    myHtml = '<hr class="bigline" />'

    formatter = AdvancedHTMLSlimTagMiniFormatter(slimSelfClosing=True)
    formatter.parseStr(myHtml)

    miniHtml = formatter.getHTML()
    # miniHtml will now contain '<hr class="bigline"/>' .

**formatHTML script**

A script, formatHTML, comes with this package and will perform formatting on an input file, writing the output to a file or stdout:

    Usage: formatHTML (Optional Arguments) (optional: /path/to/in.html) (optional: [/path/to/output.html])
      Formats HTML on input and writes to output.

     Optional Arguments:
     -------------------

       -e [encoding] - Specify an encoding to use. Default is utf-8

       -m or --mini - Output "mini" HTML (only retain functional whitespace, strip the rest, and no indentation)

       -p or --pretty - Output "pretty" HTML [This is the default mode]

       --indent='    ' - Use the provided string [default 4-spaces] to represent each level of nesting. Use --indent=" " with a tab character for 1 tab instead, for example. Affects pretty printing mode only.

    If the output filename is not specified or is an empty string, output will be to stdout.

    If the input filename is not specified or is an empty string, input will be from stdin.

    If -e is provided, will use that as the encoding. Defaults to utf-8.

Notes
-----

* Each tag has a generated unique ID which is assigned at create time. The search functions use these to prevent duplicates in search results.
  There is a global function in the module, AdvancedHTMLParser.uniqueTags, which will filter a list of tags and remove any duplicates. TagCollections will only allow one instance of a given tag (no duplicates).

* In general, for tag names and attribute names you should use lowercase values. During parsing, the parser will lowercase attribute names (so NAME="Abc" becomes name="Abc"). During searching, however, for performance reasons, it is assumed you are passing in already-lowercased strings. If you can't trust the input to be lowercase, then it is your responsibility to call .lower() before calling .getElementsBy\*

* If you are using IndexedAdvancedHTMLParser (instead of AdvancedHTMLParser) to construct HTML and not to search, I recommend either setting the index params to False in the constructor, or calling IndexedAdvancedHTMLParser.disableIndexing(). When you are finished and want to go back to searching, you can call IndexedAdvancedHTMLParser.reindex and set to True what you want to reindex.

* There are additional functions and usages not documented here; check the file for more information.

Performance and Indexing
------------------------

Performance is very good using either class, and even better (for scraping) using the IndexedAdvancedHTMLParser class. Performance can be further enhanced on IndexedAdvancedHTMLParser via several indexing tunables:

First, in the constructor of IndexedAdvancedHTMLParser and in the reindex method, there is a boolean for each field which determines whether it is indexed (e.g. indexIDs will make getElementById use an index). If an index is used, parsing time goes up slightly, but searches become O(1) (from the root node; slightly less efficient from other nodes) instead of O(n) [n = number of elements].

By default, IDs, names, tag names, and class names are indexed.

You can add an index for any arbitrary field (used in getElementsByAttr) via IndexedAdvancedHTMLParser.addIndexOnAttribute('src'), for example, to index the 'src' attribute.
This index can be removed via removeIndexOnAttribute.

Dependencies
------------

AdvancedHTMLParser can be installed without dependencies (pass '\-\-no\-deps' to setup.py), and everything will function EXCEPT the filter\* methods.

By default, https://github.com/kata198/QueryableList will be installed, which enables support for those additional filter methods.

Unicode
-------

AdvancedHTMLParser generally has very good support for unicode, and defaults to "utf\-8" (this can be altered by the "encoding" argument to AdvancedHTMLParser.AdvancedHTMLParser when parsing).

If you are still getting UnicodeDecodeError or UnicodeEncodeError, there are a few things you can try:

* If the error happens when printing/writing to stdout (the default behaviour for apache / mod\_python is to open stdout with the ANSI/ASCII encoding), ensure your streams are, in fact, set to utf\-8.

* Set the environment variable PYTHONIOENCODING to "utf\-8" before python is launched. In Apache, you can add the line "SetEnv PYTHONIOENCODING utf\-8" to your httpd.conf in order to achieve this.

* Ensure that the data you are passing to AdvancedHTMLParser has the correct encoding (matching the "encoding" parameter).

* Switch to python3 if at all possible \-\- python2 does have 'unicode' support and AdvancedHTMLParser uses it to the best of its ability, but python2 does still have some inherent flaws which may come up using standard library / output functions. You should ensure that these are set to use utf\-8 (as described above).

AdvancedHTMLParser is tested against unicode (it even has a unit test) and works in both python2 and python3 in the general case. If you are having an issue (even on python2), you've checked the above common configuration/usage errors, and you think there is still an issue, please open a bug report at https://github.com/kata198/AdvancedHTMLParser with a test case, python version, and traceback.
The library itself is considered unicode-safe; almost always it's an issue outside of this library, or one with a simple workaround.

Example Usage
-------------

See https://raw.githubusercontent.com/kata198/AdvancedHTMLParser/master/example.py for an example of parsing store data using this class.

Changes
-------

See: https://raw.githubusercontent.com/kata198/AdvancedHTMLParser/master/ChangeLog

Contact Me / Support
--------------------

I am available by email to provide support, answer questions, or otherwise provide assistance in using this software. Use my email kata198 at gmail.com with "AdvancedHTMLParser" in the subject line.

If you are having an issue / found a bug / want to merge in some changes, please open a pull request.

Unit Tests
----------

See the "tests" directory available in github. Use "runTests.py" within that directory. Tests use my [GoodTests](https://github.com/kata198/GoodTests) framework. It will be downloaded to the current directory if not found in the path, so you don't need to worry about it as a dependency.
import AdvancedHTMLParser

if __name__ == '__main__':

    parser = AdvancedHTMLParser.AdvancedHTMLParser()

    parser.parseStr('''
<html>
 <head>
  <title>HEllo</title>
 </head>
 <body>
  <div id="container1" class="abc">
   <div name="items">
    <span name="price">1.96</span>
    <span name="itemName">Sponges</span>
   </div>
   <div name="items">
    <span name="price">3.55</span>
    <span name="itemName">Turtles</span>
   </div>
   <div name="items">
    <span name="price" class="something" >6.55</span>
    <img src="/images/cheddar.png" style="width: 64px; height: 64px;" />
    <span name="itemName">Cheese</span>
   </div>
  </div>
  <div id="images">
   <img src="/abc.gif" name="image" />
   <img src="/abc2.gif" name="image" />
  </div>
  <div id="saleSection" style="background-color: blue">
   <div name="items">
    <span name="itemName">Pudding Cups</span>
    <span name="price">1.60</span>
   </div>
   <hr />
   <div name="items" class="limited-supplies" >
    <span name="itemName">Gold Brick</span>
    <span name="price">214.55</span>
    <b style="margin-left: 10px">LIMITED QUANTITIES: <span id="item_5123523_remain">130</span></b>
   </div>
 </body>
</html>
''')

    # Get all items by name
    items = parser.getElementsByName('items')

    # Parse some arbitrary html
    parser2 = AdvancedHTMLParser.AdvancedHTMLParser()
    parser2.parseStr('<div name="items"> <span name="itemName">Coop</span><span name="price">1.44</span></div>')

    # Append a new item to the list
    items[0].parentNode.appendChild(parser2.getRoot())

    items = parser.getElementsByName('items')

    print ( "Items less than $4.00: ")
    print ( "-----------------------\n")

    for item in items:
        priceEm = item.getElementsByName('price')[0]

        priceValue = round(float(priceEm.innerHTML.strip()), 2)
        if priceValue < 4.00:
            name = priceEm.getPeersByName('itemName')[0].innerHTML.strip()

            print ( "%s - $%.2f" % (name, priceValue) )

    # OUTPUT:
    # Items less than $4.00:
    # -----------------------
    #
    # Sponges - $1.96
    # Turtles - $3.55
    # Coop - $1.44
    # Pudding Cups - $1.60
AdvancedHTTPServer
==================

Standalone web server built on Python's BaseHTTPServer

|Build Status| |Documentation Status| |Github Issues| |PyPi Release|

License
-------

AdvancedHTTPServer is released under the BSD 3-clause license. For more details, see the
`LICENSE <https://github.com/zeroSteiner/AdvancedHTTPServer/blob/master/LICENSE>`__ file.

Features
--------

AdvancedHTTPServer builds on top of Python's included BaseHTTPServer and
provides out-of-the-box support for additional commonly needed features such as:

- Threaded request handling
- Binding to multiple interfaces
- SSL and SNI support
- Registering handler functions to HTTP resources
- A default robots.txt file
- Basic authentication
- The HTTP verbs GET, HEAD, POST, and OPTIONS
- Remote Procedure Call (RPC) over HTTP
- WebSockets

Dependencies
------------

AdvancedHTTPServer does not have any additional dependencies outside of the
Python standard library. The following versions of Python are currently
supported:

- Python 2.7
- Python 3.3
- Python 3.4
- Python 3.5
- Python 3.6
- Python 3.7

Code Documentation
------------------

AdvancedHTTPServer uses Sphinx for internal code documentation. This
documentation can be generated from source with the command
``sphinx-build docs/source docs/html``. The latest documentation is kindly
hosted on `ReadTheDocs <https://readthedocs.org/>`__ at
`advancedhttpserver.readthedocs.io <https://advancedhttpserver.readthedocs.io/en/latest/>`__.
Changes In Version 2.0
----------------------

- The ``AdvancedHTTPServer`` module has been renamed ``advancedhttpserver``
- Classes prefixed with ``AdvancedHTTPServer`` have been renamed to have the
  redundant prefix removed
- The ``hmac_key`` option is no longer supported
- A single ``AdvancedHTTPServer`` instance can now be bound to multiple ports
- The ``RequestHandler.install_handlers`` method has been renamed to ``on_init``
- ``SERIALIZER_DRIVERS`` was renamed to ``g_serializer_drivers``
- Support for multiple hostnames with SSL using the SNI extension
- Support for persistent HTTP 1.1 TCP connections

Powered By AdvancedHTTPServer
-----------------------------

- `King Phisher <https://github.com/securestate/king-phisher>`__ Phishing Campaign Toolkit

.. |Build Status| image:: http://img.shields.io/travis/zeroSteiner/AdvancedHTTPServer.svg?style=flat-square
   :target: https://travis-ci.org/zeroSteiner/AdvancedHTTPServer

.. |Documentation Status| image:: https://readthedocs.org/projects/advancedhttpserver/badge/?version=latest&style=flat-square
   :target: http://advancedhttpserver.readthedocs.org/en/latest

.. |Github Issues| image:: http://img.shields.io/github/issues/zerosteiner/AdvancedHTTPServer.svg?style=flat-square
   :target: https://github.com/zerosteiner/AdvancedHTTPServer/issues

.. |PyPi Release| image:: https://img.shields.io/pypi/v/AdvancedHTTPServer.svg?style=flat-square
   :target: https://pypi.python.org/pypi/AdvancedHTTPServer
# Homepage: https://github.com/zeroSteiner/AdvancedHTTPServer
# Author: Spencer McIntyre (zeroSteiner)

# Config file example
FILE_CONFIG = """
[server]
ip = 0.0.0.0
port = 8080
web_root = /var/www/html
list_directories = True
# Set an ssl_cert to enable SSL
# ssl_cert = /path/to/cert.pem
# ssl_key = /path/to/cert.key
# ssl_version = TLSv1
"""

# The AdvancedHTTPServer systemd service unit file
# Quick how to:
#   1. Copy this file to /etc/systemd/system/pyhttpd.service
#   2. Edit the run parameters appropriately in the ExecStart option
#   3. Set configuration settings in /etc/pyhttpd.conf
#   4. Run "systemctl daemon-reload"
FILE_SYSTEMD_SERVICE_UNIT = """
[Unit]
Description=Python Advanced HTTP Server
After=network.target

[Service]
Type=simple
ExecStart=/sbin/runuser -l nobody -c "/usr/bin/python -m advancedhttpserver -c /etc/pyhttpd.conf"
ExecStop=/bin/kill -INT $MAINPID

[Install]
WantedBy=multi-user.target
"""

__version__ = '2.2.0'
__all__ = (
	'AdvancedHTTPServer',
	'RegisterPath',
	'RequestHandler',
	'RPCClient',
	'RPCClientCached',
	'RPCError',
	'RPCConnectionError',
	'ServerTestCase',
	'WebSocketHandler',
	'build_server_from_argparser',
	'build_server_from_config'
)

import base64
import binascii
import collections
import datetime
import hashlib
import io
import json
import logging
import logging.handlers
import mimetypes
import os
import posixpath
import random
import re
import select
import shutil
import socket
import sqlite3
import ssl
import string
import struct
import sys
import threading
import time
import traceback
import unittest
import urllib
import weakref
import zlib

if sys.version_info[0] < 3:
	import BaseHTTPServer
	import cgi as html
	import Cookie
	import httplib
	import Queue as queue
	import SocketServer as socketserver
	import urlparse
	http = type('http', (), {'client': httplib, 'cookies': Cookie, 'server': BaseHTTPServer})
	urllib.parse = urlparse
	urllib.parse.quote = urllib.quote
	urllib.parse.unquote = urllib.unquote
	urllib.parse.urlencode = urllib.urlencode
	from ConfigParser import ConfigParser
else:
	import html
	import http.client
	import http.cookies
	import http.server
	import queue
	import socketserver
	import urllib.parse
	from configparser import ConfigParser

g_handler_map = {}

g_serializer_drivers = {}
"""Dictionary of available drivers for serialization."""

g_ssl_has_server_sni = (getattr(ssl, 'HAS_SNI', False) and sys.version_info >= ((2, 7, 9) if sys.version_info[0] < 3 else (3, 4)))
"""An indication of if the environment offers server side SNI support."""

def _serialize_ext_dump(obj):
	if obj.__class__ == datetime.date:
		return 'datetime.date', obj.isoformat()
	elif obj.__class__ == datetime.datetime:
		return 'datetime.datetime', obj.isoformat()
	elif obj.__class__ == datetime.time:
		return 'datetime.time', obj.isoformat()
	raise TypeError('Unknown type: ' + repr(obj))

def _serialize_ext_load(obj_type, obj_value, default):
	if obj_type == 'datetime.date':
		return datetime.datetime.strptime(obj_value, '%Y-%m-%d').date()
	elif obj_type == 'datetime.datetime':
		return datetime.datetime.strptime(obj_value, '%Y-%m-%dT%H:%M:%S' + ('.%f' if '.' in obj_value else ''))
	elif obj_type == 'datetime.time':
		return datetime.datetime.strptime(obj_value, '%H:%M:%S' + ('.%f' if '.' in obj_value else '')).time()
	return default

def _json_default(obj):
	obj_type, obj_value = _serialize_ext_dump(obj)
	return {'__complex_type__': obj_type, 'value': obj_value}

def _json_object_hook(obj):
	return _serialize_ext_load(obj.get('__complex_type__'), obj.get('value'), obj)

g_serializer_drivers['application/json'] = {
	'dumps': lambda d: json.dumps(d, default=_json_default),
	'loads': lambda d, e: json.loads(d, object_hook=_json_object_hook)
}

try:
	import msgpack
except ImportError:
	has_msgpack = False
else:
	has_msgpack = True
	_MSGPACK_EXT_TYPES = {10: 'datetime.datetime', 11: 'datetime.date', 12: 'datetime.time'}
	def _msgpack_default(obj):
		obj_type, obj_value = _serialize_ext_dump(obj)
		obj_type = next(i[0] for i in _MSGPACK_EXT_TYPES.items() if i[1] == obj_type)
		if sys.version_info[0] == 3:
			obj_value = obj_value.encode('utf-8')
		return msgpack.ExtType(obj_type, obj_value)
	def _msgpack_ext_hook(code, obj_value):
		default = msgpack.ExtType(code, obj_value)
		if sys.version_info[0] == 3:
			obj_value = obj_value.decode('utf-8')
		obj_type = _MSGPACK_EXT_TYPES.get(code)
		return _serialize_ext_load(obj_type, obj_value, default)
	g_serializer_drivers['binary/message-pack'] = {
		'dumps': lambda d: msgpack.dumps(d, default=_msgpack_default),
		'loads': lambda d, e: msgpack.loads(d, encoding=e, ext_hook=_msgpack_ext_hook)
	}

if hasattr(logging, 'NullHandler'):
	logging.getLogger('AdvancedHTTPServer').addHandler(logging.NullHandler())

def random_string(size):
	"""
	Generate a random string of *size* length consisting of both letters
	and numbers. This function is not meant for cryptographic purposes
	and should not be used to generate security tokens.

	:param int size: The length of the string to return.
	:return: A string consisting of random characters.
	:rtype: str
	"""
	return ''.join(random.choice(string.ascii_letters + string.digits) for _ in range(size))

def resolve_ssl_protocol_version(version=None):
	"""
	Look up an SSL protocol version by name. If *version* is not specified,
	then the strongest protocol available will be returned.

	:param str version: The name of the version to look up.
	:return: A protocol constant from the :py:mod:`ssl` module.
	:rtype: int
	"""
	if version is None:
		protocol_preference = ('TLSv1_2', 'TLSv1_1', 'TLSv1', 'SSLv3', 'SSLv23', 'SSLv2')
		for protocol in protocol_preference:
			if hasattr(ssl, 'PROTOCOL_' + protocol):
				return getattr(ssl, 'PROTOCOL_' + protocol)
		raise RuntimeError('could not find a suitable ssl PROTOCOL_ version constant')
	elif isinstance(version, str):
		if not hasattr(ssl, 'PROTOCOL_' + version):
			raise ValueError('invalid ssl protocol version: ' + version)
		return getattr(ssl, 'PROTOCOL_' + version)
	raise TypeError("ssl_version() argument 1 must be str, not {0}".format(type(version).__name__))

def build_server_from_argparser(description=None, server_klass=None, handler_klass=None):
	"""
	Build a server from command line arguments. If a ServerClass or
	HandlerClass is specified, then the object must inherit from the
	corresponding AdvancedHTTPServer base class.

	:param str description: Description string to be passed to the argument parser.
	:param server_klass: Alternative server class to use.
	:type server_klass: :py:class:`.AdvancedHTTPServer`
	:param handler_klass: Alternative handler class to use.
	:type handler_klass: :py:class:`.RequestHandler`
	:return: A configured server instance.
	:rtype: :py:class:`.AdvancedHTTPServer`
	"""
	import argparse

	def _argp_dir_type(arg):
		if not os.path.isdir(arg):
			raise argparse.ArgumentTypeError("{0} is not a valid directory".format(repr(arg)))
		return arg

	def _argp_port_type(arg):
		if not arg.isdigit():
			raise argparse.ArgumentTypeError("{0} is not a valid port".format(repr(arg)))
		arg = int(arg)
		if arg < 0 or arg > 65535:
			raise argparse.ArgumentTypeError("{0} is not a valid port".format(repr(arg)))
		return arg

	description = (description or 'HTTP Server')
	server_klass = (server_klass or AdvancedHTTPServer)
	handler_klass = (handler_klass or RequestHandler)

	parser = argparse.ArgumentParser(conflict_handler='resolve', description=description, fromfile_prefix_chars='@')
	parser.epilog = 'When a config file is specified with --config only the --log, --log-file and --password options will be used.'
	parser.add_argument('-c', '--conf', dest='config', type=argparse.FileType('r'), help='read settings from a config file')
	parser.add_argument('-i', '--ip', dest='ip', default='0.0.0.0', help='the ip address to serve on')
	parser.add_argument('-L', '--log', dest='loglvl', choices=('DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL'), default='INFO', help='set the logging level')
	parser.add_argument('-p', '--port', dest='port', default=8080, type=_argp_port_type, help='port to serve on')
	parser.add_argument('-v', '--version', action='version', version=parser.prog + ' Version: ' + __version__)
	parser.add_argument('-w', '--web-root', dest='web_root', default='.', type=_argp_dir_type, help='path to the web root directory')
	parser.add_argument('--log-file', dest='log_file', help='log information to a file')
	parser.add_argument('--no-threads', dest='use_threads', action='store_false', default=True, help='disable threading')
	parser.add_argument('--password', dest='password', help='password to use for basic authentication')
	ssl_group = parser.add_argument_group('ssl options')
	ssl_group.add_argument('--ssl-cert', dest='ssl_cert', help='the ssl
cert to use') ssl_group.add_argument('--ssl-key', dest='ssl_key', help='the ssl key to use') ssl_group.add_argument('--ssl-version', dest='ssl_version', choices=[p[9:] for p in dir(ssl) if p.startswith('PROTOCOL_')], help='the version of ssl to use') arguments = parser.parse_args() logging.getLogger('').setLevel(logging.DEBUG) console_log_handler = logging.StreamHandler() console_log_handler.setLevel(getattr(logging, arguments.loglvl)) console_log_handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)-8s %(message)s")) logging.getLogger('').addHandler(console_log_handler) if arguments.log_file: main_file_handler = logging.handlers.RotatingFileHandler(arguments.log_file, maxBytes=262144, backupCount=5) main_file_handler.setLevel(logging.DEBUG) main_file_handler.setFormatter(logging.Formatter("%(asctime)s %(name)-30s %(levelname)-10s %(message)s")) logging.getLogger('').setLevel(logging.DEBUG) logging.getLogger('').addHandler(main_file_handler) if arguments.config: config = ConfigParser() config.readfp(arguments.config) server = build_server_from_config( config, 'server', server_klass=server_klass, handler_klass=handler_klass ) else: server = server_klass( handler_klass, address=(arguments.ip, arguments.port), use_threads=arguments.use_threads, ssl_certfile=arguments.ssl_cert, ssl_keyfile=arguments.ssl_key, ssl_version=arguments.ssl_version ) server.serve_files_root = arguments.web_root if arguments.password: server.auth_add_creds('', arguments.password) return server def build_server_from_config(config, section_name, server_klass=None, handler_klass=None): """ Build a server from a provided :py:class:`configparser.ConfigParser` instance. If a ServerClass or HandlerClass is specified, then the object must inherit from the corresponding AdvancedHTTPServer base class. :param config: Configuration to retrieve settings from. :type config: :py:class:`configparser.ConfigParser` :param str section_name: The section name of the configuration to use. 
:param server_klass: Alternative server class to use. :type server_klass: :py:class:`.AdvancedHTTPServer` :param handler_klass: Alternative handler class to use. :type handler_klass: :py:class:`.RequestHandler` :return: A configured server instance. :rtype: :py:class:`.AdvancedHTTPServer` """ server_klass = (server_klass or AdvancedHTTPServer) handler_klass = (handler_klass or RequestHandler) port = config.getint(section_name, 'port') web_root = None if config.has_option(section_name, 'web_root'): web_root = config.get(section_name, 'web_root') if config.has_option(section_name, 'ip'): ip = config.get(section_name, 'ip') else: ip = '0.0.0.0' ssl_certfile = None if config.has_option(section_name, 'ssl_cert'): ssl_certfile = config.get(section_name, 'ssl_cert') ssl_keyfile = None if config.has_option(section_name, 'ssl_key'): ssl_keyfile = config.get(section_name, 'ssl_key') ssl_version = None if config.has_option(section_name, 'ssl_version'): ssl_version = config.get(section_name, 'ssl_version') server = server_klass( handler_klass, address=(ip, port), ssl_certfile=ssl_certfile, ssl_keyfile=ssl_keyfile, ssl_version=ssl_version ) if config.has_option(section_name, 'password_type'): password_type = config.get(section_name, 'password_type') else: password_type = 'md5' if config.has_option(section_name, 'password'): password = config.get(section_name, 'password') if config.has_option(section_name, 'username'): username = config.get(section_name, 'username') else: username = '' server.auth_add_creds(username, password, pwtype=password_type) cred_idx = 0 while config.has_option(section_name, 'password' + str(cred_idx)): password = config.get(section_name, 'password' + str(cred_idx)) if not config.has_option(section_name, 'username' + str(cred_idx)): break username = config.get(section_name, 'username' + str(cred_idx)) server.auth_add_creds(username, password, pwtype=password_type) cred_idx += 1 if web_root is None: server.serve_files = False else: server.serve_files = 
True server.serve_files_root = web_root if config.has_option(section_name, 'list_directories'): server.serve_files_list_directories = config.getboolean(section_name, 'list_directories') return server class _RequestEmbryo(object): __slots__ = ('server', 'socket', 'address', 'created') def __init__(self, server, client_socket, address, created=None): server.request_embryos.append(self) self.server = weakref.ref(server) self.socket = client_socket self.address = address self.created = created or time.time() def fileno(self): return self.socket.fileno() def serve_ready(self): server = self.server() # server is a weakref if not server: return False try: self.socket.do_handshake() except ssl.SSLWantReadError: return False except (socket.error, OSError, ValueError): self.socket.close() server.request_embryos.remove(self) return False self.socket.settimeout(None) server.request_embryos.remove(self) server.request_queue.put((self.socket, self.address)) server.handle_request() return True class RegisterPath(object): """ Register a path and handler with the global handler map. This can be used as a decorator. If no handler is specified then the path and function will be registered with all :py:class:`.RequestHandler` instances. .. code-block:: python @RegisterPath('^test$') def handle_test(handler, query): pass """ def __init__(self, path, handler=None, is_rpc=False): """ :param str path: The path regex to register the function to. :param str handler: A specific :py:class:`.RequestHandler` class to register the handler with. :param bool is_rpc: Whether the handler is an RPC handler or not. 
""" self.path = path self.is_rpc = is_rpc if handler is None or isinstance(handler, str): self.handler = handler elif hasattr(handler, '__name__'): self.handler = handler.__name__ elif hasattr(handler, '__class__'): self.handler = handler.__class__.__name__ else: raise ValueError('unknown handler: ' + repr(handler)) def __call__(self, function): handler_map = g_handler_map.get(self.handler, {}) handler_map[self.path] = (function, self.is_rpc) g_handler_map[self.handler] = handler_map return function class RPCError(Exception): """ This class represents an RPC error either local or remote. Any errors in routines executed on the server will raise this error. """ def __init__(self, message, status=None, remote_exception=None): super(RPCError, self).__init__() self.message = message self.status = status self.remote_exception = remote_exception def __repr__(self): return "{0}(message='{1}', status={2}, remote_exception={3})".format(self.__class__.__name__, self.message, self.status, self.is_remote_exception) def __str__(self): if self.is_remote_exception: return 'a remote exception occurred' return "the server responded with {0} '{1}'".format(self.status, self.message) @property def is_remote_exception(self): """ This is true if the represented error resulted from an exception on the remote server. :type: bool """ return bool(self.remote_exception is not None) class RPCConnectionError(RPCError): """ An exception raised when there is a connection-related error encountered by the RPC client. .. versionadded:: 2.1.0 """ pass class RPCClient(object): """ This object facilitates communication with remote RPC methods as provided by a :py:class:`.RequestHandler` instance. Once created this object can be called directly, doing so is the same as using the call method. This object uses locks internally to be thread safe. Only one thread can execute a function at a time. 
""" def __init__(self, address, use_ssl=False, username=None, password=None, uri_base='/', ssl_context=None): """ :param tuple address: The address of the server to connect to as (host, port). :param bool use_ssl: Whether to connect with SSL or not. :param str username: The username to authenticate with. :param str password: The password to authenticate with. :param str uri_base: An optional prefix for all methods. :param ssl_context: An optional SSL context to use for SSL related options. """ self.host = str(address[0]) self.port = int(address[1]) if not hasattr(self, 'logger'): self.logger = logging.getLogger('AdvancedHTTPServer.RPCClient') self.headers = None """An optional dictionary of headers to include with each RPC request.""" self.use_ssl = bool(use_ssl) self.ssl_context = ssl_context self.uri_base = str(uri_base) self.username = (None if username is None else str(username)) self.password = (None if password is None else str(password)) self.lock = threading.Lock() """A :py:class:`threading.Lock` instance used to synchronize operations.""" self.serializer = None """The :py:class:`.Serializer` instance to use for encoding RPC data to the server.""" self.set_serializer('application/json') self.reconnect() def __del__(self): self.client.close() def __reduce__(self): address = (self.host, self.port) return (self.__class__, (address, self.use_ssl, self.username, self.password, self.uri_base)) def set_serializer(self, serializer_name, compression=None): """ Configure the serializer to use for communication with the server. The serializer specified must be valid and in the :py:data:`.g_serializer_drivers` map. :param str serializer_name: The name of the serializer to use. :param str compression: The name of a compression library to use. 
""" self.serializer = Serializer(serializer_name, charset='UTF-8', compression=compression) self.logger.debug('using serializer: ' + serializer_name) def __call__(self, *args, **kwargs): return self.call(*args, **kwargs) def encode(self, data): """Encode data with the configured serializer.""" return self.serializer.dumps(data) def decode(self, data): """Decode data with the configured serializer.""" return self.serializer.loads(data) def reconnect(self): """Reconnect to the remote server.""" self.lock.acquire() if self.use_ssl: self.client = http.client.HTTPSConnection(self.host, self.port, context=self.ssl_context) else: self.client = http.client.HTTPConnection(self.host, self.port) self.lock.release() def call(self, method, *args, **kwargs): """ Issue a call to the remote end point to execute the specified procedure. :param str method: The name of the remote procedure to execute. :return: The return value from the remote function. """ if kwargs: options = self.encode(dict(args=args, kwargs=kwargs)) else: options = self.encode(args) headers = {} if self.headers: headers.update(self.headers) headers['Content-Type'] = self.serializer.content_type headers['Content-Length'] = str(len(options)) headers['Connection'] = 'close' if self.username is not None and self.password is not None: headers['Authorization'] = 'Basic ' + base64.b64encode((self.username + ':' + self.password).encode('UTF-8')).decode('UTF-8') method = os.path.join(self.uri_base, method) self.logger.debug('calling RPC method: ' + method[1:]) try: with self.lock: self.client.request('RPC', method, options, headers) resp = self.client.getresponse() except http.client.ImproperConnectionState: raise RPCConnectionError('improper connection state') if resp.status != 200: raise RPCError(resp.reason, resp.status) resp_data = resp.read() resp_data = self.decode(resp_data) if not ('exception_occurred' in resp_data and 'result' in resp_data): raise RPCError('missing response information', resp.status) if 
resp_data['exception_occurred']: raise RPCError('remote method incurred an exception', resp.status, remote_exception=resp_data['exception']) return resp_data['result'] class RPCClientCached(RPCClient): """ This object builds upon :py:class:`.RPCClient` and provides additional methods for cacheing results in memory. """ def __init__(self, *args, **kwargs): cache_db = kwargs.pop('cache_db', ':memory:') super(RPCClientCached, self).__init__(*args, **kwargs) self.cache_db = sqlite3.connect(cache_db, check_same_thread=False) cursor = self.cache_db.cursor() cursor.execute('CREATE TABLE IF NOT EXISTS cache (method TEXT NOT NULL, options_hash BLOB NOT NULL, return_value BLOB NOT NULL)') self.cache_db.commit() self.cache_lock = threading.Lock() def cache_call(self, method, *options): """ Call a remote method and store the result locally. Subsequent calls to the same method with the same arguments will return the cached result without invoking the remote procedure. Cached results are kept indefinitely and must be manually refreshed with a call to :py:meth:`.cache_call_refresh`. :param str method: The name of the remote procedure to execute. :return: The return value from the remote function. """ options_hash = self.encode(options) if len(options_hash) > 20: options_hash = hashlib.new('sha1', options_hash).digest() options_hash = sqlite3.Binary(options_hash) with self.cache_lock: cursor = self.cache_db.cursor() cursor.execute('SELECT return_value FROM cache WHERE method = ? 
AND options_hash = ?', (method, options_hash)) return_value = cursor.fetchone() if return_value: return_value = bytes(return_value[0]) return self.decode(return_value) return_value = self.call(method, *options) store_return_value = sqlite3.Binary(self.encode(return_value)) with self.cache_lock: cursor = self.cache_db.cursor() cursor.execute('INSERT INTO cache (method, options_hash, return_value) VALUES (?, ?, ?)', (method, options_hash, store_return_value)) self.cache_db.commit() return return_value def cache_call_refresh(self, method, *options): """ Call a remote method and update the local cache with the result if it already existed. :param str method: The name of the remote procedure to execute. :return: The return value from the remote function. """ options_hash = self.encode(options) if len(options_hash) > 20: options_hash = hashlib.new('sha1', options).digest() options_hash = sqlite3.Binary(options_hash) with self.cache_lock: cursor = self.cache_db.cursor() cursor.execute('DELETE FROM cache WHERE method = ? AND options_hash = ?', (method, options_hash)) return_value = self.call(method, *options) store_return_value = sqlite3.Binary(self.encode(return_value)) with self.cache_lock: cursor = self.cache_db.cursor() cursor.execute('INSERT INTO cache (method, options_hash, return_value) VALUES (?, ?, ?)', (method, options_hash, store_return_value)) self.cache_db.commit() return return_value def cache_clear(self): """Purge the local store of all cached function information.""" with self.cache_lock: cursor = self.cache_db.cursor() cursor.execute('DELETE FROM cache') self.cache_db.commit() self.logger.info('the RPC cache has been purged') return class ServerNonThreaded(http.server.HTTPServer, object): """ This class is used internally by :py:class:`.AdvancedHTTPServer` and is not intended for use by other classes or functions. It is responsible for listening on a single address, TCP port and SSL combination. 
""" def __init__(self, *args, **kwargs): self.__config = kwargs.pop('config') if not hasattr(self, 'logger'): self.logger = logging.getLogger('AdvancedHTTPServer') self.allow_reuse_address = True self.request_queue = queue.Queue() self.request_embryos = [] self.using_ssl = False super(ServerNonThreaded, self).__init__(*args, **kwargs) def __repr__(self): address = self.server_address[0] if self.socket.family == socket.AF_INET: address += ':' + str(self.server_address[1]) elif self.socket.family == socket.AF_INET6: address = '[' + address + ']:' + str(self.server_address[1]) return "<{0} address: {1} ssl: {2!r}>".format(self.__class__.__name__, address, self.using_ssl) @property def read_checkable_fds(self): return [self] + self.request_embryos def get_config(self): return self.__config def get_request(self): return self.request_queue.get(block=True, timeout=None) def handle_request(self): timeout = self.socket.gettimeout() if timeout is None: timeout = self.timeout elif self.timeout is not None: timeout = min(timeout, self.timeout) try: request, client_address = self.request_queue.get(block=True, timeout=timeout) except queue.Empty: return self.handle_timeout() except OSError: return None if self.verify_request(request, client_address): try: self.process_request(request, client_address) except Exception: self.handle_error(request, client_address) self.shutdown_request(request) except: self.shutdown_request(request) raise else: self.shutdown_request(request) return None def finish_request(self, request, client_address): try: super(ServerNonThreaded, self).finish_request(request, client_address) except IOError: self.logger.warning('IOError encountered in finish_request') except KeyboardInterrupt: self.logger.warning('KeyboardInterrupt encountered in finish_request') self.shutdown() def serve_ready(self): client_socket, address = self.socket.accept() if self.using_ssl: client_socket.settimeout(0) embryo = _RequestEmbryo(self, client_socket, address) 
embryo.serve_ready() else: client_socket.settimeout(None) self.request_queue.put((client_socket, address)) self.handle_request() def server_bind(self, *args, **kwargs): self.socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) super(ServerNonThreaded, self).server_bind(*args, **kwargs) def shutdown(self, *args, **kwargs): try: self.socket.shutdown(socket.SHUT_RDWR) except socket.error: pass self.socket.close() class ServerThreaded(socketserver.ThreadingMixIn, ServerNonThreaded): """ This class is used internally by :py:class:`.AdvancedHTTPServer` and is not intended for use by other classes or functions. It is responsible for listening on a single address, TCP port and SSL combination. """ daemon_threads = True class RequestHandler(http.server.BaseHTTPRequestHandler, object): """ This is the primary http request handler class of the AdvancedHTTPServer framework. Custom request handlers must inherit from this object to be compatible. Instances of this class are created automatically. This class will handle standard HTTP GET, HEAD, OPTIONS, and POST requests. Callback functions called handlers can be registered to resource paths using regular expressions in the *handler_map* attribute for GET HEAD and POST requests and *rpc_handler_map* for RPC requests. Non-RPC handler functions that are not class methods of the request handler instance will be passed the instance of the request handler as the first argument. """ if not mimetypes.inited: mimetypes.init() # try to read system mime.types extensions_map = mimetypes.types_map.copy() extensions_map.update({ '': 'application/octet-stream', # Default '.py': 'text/plain', '.rb': 'text/plain', '.c': 'text/plain', '.h': 'text/plain', }) protocol_version = 'HTTP/1.1' wbufsize = 4096 web_socket_handler = None """An optional class to handle Web Sockets. 
This class must be derived from :py:class:`.WebSocketHandler`.""" def __init__(self, *args, **kwargs): self.cookies = None self.path = None self.wfile = None self._wfile = None self.server = args[2] self.headers_active = False """Whether or not the request is in the sending headers phase.""" self.handler_map = {} """The dictionary object which maps regular expressions of resources to the functions which should handle them.""" self.rpc_handler_map = {} """The dictionary object which maps regular expressions of RPC functions to their handlers.""" for map_name in (None, self.__class__.__name__): handler_map = g_handler_map.get(map_name, {}) for path, function_info in handler_map.items(): function, function_is_rpc = function_info if function_is_rpc: self.rpc_handler_map[path] = function else: self.handler_map[path] = function self.basic_auth_user = None """The name of the user if the current request is using basic authentication.""" self.query_data = None """The parameter data that has been passed to the server parsed as a dict.""" self.raw_query_data = None """The raw data that was parsed into the :py:attr:`.query_data` attribute.""" self.__config = self.server.get_config() """A reference to the configuration provided by the server.""" self.on_init() super(RequestHandler, self).__init__(*args, **kwargs) def setup(self, *args, **kwargs): ret = super(RequestHandler, self).setup(*args, **kwargs) self._wfile = self.wfile return ret def on_init(self): """ This method is meant to be over ridden by custom classes. It is called as part of the __init__ method and provides an opportunity for the handler maps to be populated with entries or the config to be customized. 
""" pass # over ride me def __get_handler(self, is_rpc=False): handler = None handler_map = (self.rpc_handler_map if is_rpc else self.handler_map) for (path_regex, handler) in handler_map.items(): if re.match(path_regex, self.path): break else: return (None, None) is_method = False self_handler = None if hasattr(handler, '__name__'): self_handler = getattr(self, handler.__name__, None) if self_handler is not None and (handler == self_handler.__func__ or handler == self_handler): is_method = True return (handler, is_method) def version_string(self): return self.__config['server_version'] def respond_file(self, file_path, attachment=False, query=None): """ Respond to the client by serving a file, either directly or as an attachment. :param str file_path: The path to the file to serve, this does not need to be in the web root. :param bool attachment: Whether to serve the file as a download by setting the Content-Disposition header. """ del query file_path = os.path.abspath(file_path) try: file_obj = open(file_path, 'rb') except IOError: self.respond_not_found() return self.send_response(200) self.send_header('Content-Type', self.guess_mime_type(file_path)) fs = os.fstat(file_obj.fileno()) self.send_header('Content-Length', str(fs[6])) if attachment: file_name = os.path.basename(file_path) self.send_header('Content-Disposition', 'attachment; filename=' + file_name) self.send_header('Last-Modified', self.date_time_string(fs.st_mtime)) self.end_headers() shutil.copyfileobj(file_obj, self.wfile) file_obj.close() return def respond_list_directory(self, dir_path, query=None): """ Respond to the client with an HTML page listing the contents of the specified directory. :param str dir_path: The path of the directory to list the contents of. 
""" del query try: dir_contents = os.listdir(dir_path) except os.error: self.respond_not_found() return if os.path.normpath(dir_path) != self.__config['serve_files_root']: dir_contents.append('..') dir_contents.sort(key=lambda a: a.lower()) displaypath = html.escape(urllib.parse.unquote(self.path), quote=True) f = io.BytesIO() encoding = sys.getfilesystemencoding() f.write(b'<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">\n') f.write(b'<html>\n<title>Directory listing for ' + displaypath.encode(encoding) + b'</title>\n') f.write(b'<body>\n<h2>Directory listing for ' + displaypath.encode(encoding) + b'</h2>\n') f.write(b'<hr>\n<ul>\n') for name in dir_contents: fullname = os.path.join(dir_path, name) displayname = linkname = name # Append / for directories or @ for symbolic links if os.path.isdir(fullname): displayname = name + "/" linkname = name + "/" if os.path.islink(fullname): displayname = name + "@" # Note: a link to a directory displays with @ and links with / f.write(('<li><a href="' + urllib.parse.quote(linkname) + '">' + html.escape(displayname, quote=True) + '</a>\n').encode(encoding)) f.write(b'</ul>\n<hr>\n</body>\n</html>\n') length = f.tell() f.seek(0) self.send_response(200) self.send_header('Content-Type', 'text/html; charset=' + encoding) self.send_header('Content-Length', length) self.end_headers() shutil.copyfileobj(f, self.wfile) f.close() return def respond_not_found(self): """Respond to the client with a default 404 message.""" self.send_response_full(b'Resource Not Found\n', status=404) return def respond_redirect(self, location='/'): """ Respond to the client with a 301 message and redirect them with a Location header. :param str location: The new location to redirect the client to. 
""" self.send_response(301) self.send_header('Content-Length', 0) self.send_header('Location', location) self.end_headers() return def respond_server_error(self, status=None, status_line=None, message=None): """ Handle an internal server error, logging a traceback if executed within an exception handler. :param int status: The status code to respond to the client with. :param str status_line: The status message to respond to the client with. :param str message: The body of the response that is sent to the client. """ (ex_type, ex_value, ex_traceback) = sys.exc_info() if ex_type: (ex_file_name, ex_line, _, _) = traceback.extract_tb(ex_traceback)[-1] line_info = "{0}:{1}".format(ex_file_name, ex_line) log_msg = "encountered {0} in {1}".format(repr(ex_value), line_info) self.server.logger.error(log_msg, exc_info=True) status = (status or 500) status_line = (status_line or http.client.responses.get(status, 'Internal Server Error')).strip() self.send_response(status, status_line) message = (message or status_line) if isinstance(message, (str, bytes)): self.send_header('Content-Length', len(message)) self.end_headers() if isinstance(message, str): self.wfile.write(message.encode(sys.getdefaultencoding())) else: self.wfile.write(message) elif hasattr(message, 'fileno'): fs = os.fstat(message.fileno()) self.send_header('Content-Length', fs[6]) self.end_headers() shutil.copyfileobj(message, self.wfile) else: self.end_headers() return def respond_unauthorized(self, request_authentication=False): """ Respond to the client that the request is unauthorized. :param bool request_authentication: Whether to request basic authentication information by sending a WWW-Authenticate header. 
""" headers = {} if request_authentication: headers['WWW-Authenticate'] = 'Basic realm="' + self.__config['server_version'] + '"' self.send_response_full(b'Unauthorized', status=401, headers=headers) return def dispatch_handler(self, query=None): """ Dispatch functions based on the established handler_map. It is generally not necessary to override this function and doing so will prevent any handlers from being executed. This function is executed automatically when requests of either GET, HEAD, or POST are received. :param dict query: Parsed query parameters from the corresponding request. """ query = (query or {}) # normalize the path # abandon query parameters self.path = self.path.split('?', 1)[0] self.path = self.path.split('#', 1)[0] original_path = urllib.parse.unquote(self.path) self.path = posixpath.normpath(original_path) words = self.path.split('/') words = filter(None, words) tmp_path = '' for word in words: _, word = os.path.splitdrive(word) _, word = os.path.split(word) if word in (os.curdir, os.pardir): continue tmp_path = os.path.join(tmp_path, word) self.path = tmp_path if self.path == 'robots.txt' and self.__config['serve_robots_txt']: self.send_response_full(self.__config['robots_txt']) return self.cookies = http.cookies.SimpleCookie(self.headers.get('cookie', '')) handler, is_method = self.__get_handler(is_rpc=False) if handler is not None: try: handler(*((query,) if is_method else (self, query))) except Exception: self.respond_server_error() return if not self.__config['serve_files']: self.respond_not_found() return file_path = self.__config['serve_files_root'] file_path = os.path.join(file_path, tmp_path) if os.path.isfile(file_path) and os.access(file_path, os.R_OK): self.respond_file(file_path, query=query) return elif os.path.isdir(file_path) and os.access(file_path, os.R_OK): if not original_path.endswith('/'): # redirect browser, doing what apache does destination = self.path + '/' if self.command == 'GET' and self.query_data: destination 
+= '?' + urllib.parse.urlencode(self.query_data, True) self.respond_redirect(destination) return for index in ['index.html', 'index.htm']: index = os.path.join(file_path, index) if os.path.isfile(index) and os.access(index, os.R_OK): self.respond_file(index, query=query) return if self.__config['serve_files_list_directories']: self.respond_list_directory(file_path, query=query) return self.respond_not_found() return def send_response(self, *args, **kwargs): if self.wfile != self._wfile: self.wfile.close() self.wfile = self._wfile super(RequestHandler, self).send_response(*args, **kwargs) self.headers_active = True # in the event that the http request is invalid, all attributes may not be defined headers = getattr(self, 'headers', {}) protocol_version = getattr(self, 'protocol_version', 'HTTP/1.0').upper() if headers.get('Connection', None) == 'keep-alive' and protocol_version == 'HTTP/1.1': connection = 'keep-alive' else: connection = 'close' self.send_header('Connection', connection) def send_response_full(self, message, content_type='text/plain; charset=UTF-8', status=200, headers=None): self.send_response(status) self.send_header('Content-Type', content_type) self.send_header('Content-Length', len(message)) if headers is not None: for header, value in headers.items(): self.send_header(header, value) self.end_headers() self.wfile.write(message) return def end_headers(self): super(RequestHandler, self).end_headers() self.headers_active = False if self.command == 'HEAD': self.wfile.flush() self.wfile = open(os.devnull, 'wb') def guess_mime_type(self, path): """ Guess an appropriate MIME type based on the extension of the provided path. :param str path: The of the file to analyze. :return: The guessed MIME type of the default if non are found. 
:rtype: str """ _, ext = posixpath.splitext(path) if ext in self.extensions_map: return self.extensions_map[ext] ext = ext.lower() return self.extensions_map[ext if ext in self.extensions_map else ''] def stock_handler_respond_unauthorized(self, query): """This method provides a handler suitable to be used in the handler_map.""" del query self.respond_unauthorized() return def stock_handler_respond_not_found(self, query): """This method provides a handler suitable to be used in the handler_map.""" del query self.respond_not_found() return def check_authorization(self): """ Check for the presence of a basic auth Authorization header and if the credentials contained within in are valid. :return: Whether or not the credentials are valid. :rtype: bool """ try: store = self.__config.get('basic_auth') if store is None: return True auth_info = self.headers.get('Authorization') if not auth_info: return False auth_info = auth_info.split() if len(auth_info) != 2 or auth_info[0] != 'Basic': return False auth_info = base64.b64decode(auth_info[1]).decode(sys.getdefaultencoding()) username = auth_info.split(':')[0] password = ':'.join(auth_info.split(':')[1:]) password_bytes = password.encode(sys.getdefaultencoding()) if hasattr(self, 'custom_authentication'): if self.custom_authentication(username, password): self.basic_auth_user = username return True return False if not username in store: self.server.logger.warning('received invalid username: ' + username) return False password_data = store[username] if password_data['type'] == 'plain': if password == password_data['value']: self.basic_auth_user = username return True elif hashlib.new(password_data['type'], password_bytes).digest() == password_data['value']: self.basic_auth_user = username return True self.server.logger.warning('received invalid password from user: ' + username) except Exception: pass return False def cookie_get(self, name): """ Check for a cookie value by name. 
:param str name: Name of the cookie value to retreive. :return: Returns the cookie value if it's set or None if it's not found. """ if not hasattr(self, 'cookies'): return None if self.cookies.get(name): return self.cookies.get(name).value return None def cookie_set(self, name, value): """ Set the value of a client cookie. This can only be called while headers can be sent. :param str name: The name of the cookie value to set. :param str value: The value of the cookie to set. """ if not self.headers_active: raise RuntimeError('headers have already been ended') cookie = "{0}={1}; Path=/; HttpOnly".format(name, value) self.send_header('Set-Cookie', cookie) def do_GET(self): if not self.check_authorization(): self.respond_unauthorized(request_authentication=True) return uri = urllib.parse.urlparse(self.path) self.path = uri.path self.query_data = urllib.parse.parse_qs(uri.query) if self.web_socket_handler is not None and self.headers.get('upgrade', '').lower() == 'websocket': self.web_socket_handler(self) # pylint: disable=not-callable return self.dispatch_handler(self.query_data) return do_HEAD = do_GET def do_POST(self): if not self.check_authorization(): self.respond_unauthorized(request_authentication=True) return content_length = int(self.headers.get('content-length', 0)) data = self.rfile.read(content_length) self.raw_query_data = data content_type = self.headers.get('content-type', '') content_type = content_type.split(';', 1)[0] self.query_data = {} try: if not isinstance(data, str): data = data.decode(self.get_content_type_charset()) if content_type.startswith('application/json'): data = json.loads(data) if isinstance(data, dict): self.query_data = dict([(i[0], [i[1]]) for i in data.items()]) else: self.query_data = urllib.parse.parse_qs(data, keep_blank_values=1) except Exception: self.respond_server_error(400) else: self.dispatch_handler(self.query_data) return def do_OPTIONS(self): available_methods = list(x[3:] for x in dir(self) if x.startswith('do_')) if 
'RPC' in available_methods and not self.rpc_handler_map: available_methods.remove('RPC') self.send_response(200) self.send_header('Content-Length', 0) self.send_header('Allow', ', '.join(available_methods)) self.end_headers() def do_RPC(self): if not self.check_authorization(): self.respond_unauthorized(request_authentication=True) return data_length = self.headers.get('content-length') if data_length is None: self.send_error(411) return content_type = self.headers.get('content-type') if content_type is None: self.send_error(400, 'Missing Header: Content-Type') return try: data_length = int(self.headers.get('content-length')) data = self.rfile.read(data_length) except Exception: self.send_error(400, 'Invalid Data') return try: serializer = Serializer.from_content_type(content_type) except ValueError: self.send_error(400, 'Invalid Content-Type') return try: data = serializer.loads(data) except Exception: self.server.logger.warning('serializer failed to load data') self.send_error(400, 'Invalid Data') return if isinstance(data, (list, tuple)): meth_args = data meth_kwargs = {} elif isinstance(data, dict): meth_args = data.get('args', ()) meth_kwargs = data.get('kwargs', {}) else: self.server.logger.warning('received data does not match the calling convention') self.send_error(400, 'Invalid Data') return rpc_handler, is_method = self.__get_handler(is_rpc=True) if not rpc_handler: self.respond_server_error(501) return if not is_method: meth_args = (self,) + tuple(meth_args) response = {'result': None, 'exception_occurred': False} try: response['result'] = rpc_handler(*meth_args, **meth_kwargs) except Exception as error: response['exception_occurred'] = True exc_name = "{0}.{1}".format(error.__class__.__module__, error.__class__.__name__) response['exception'] = dict(name=exc_name, message=getattr(error, 'message', None)) self.server.logger.error('error: ' + exc_name + ' occurred while calling rpc method: ' + self.path, exc_info=True) try: response = 
serializer.dumps(response) except Exception: self.respond_server_error(message='Failed To Pack Response') return self.send_response(200) self.send_header('Content-Type', serializer.content_type) self.end_headers() self.wfile.write(response) return def log_error(self, msg_format, *args): self.server.logger.warning(self.address_string() + ' ' + msg_format % args) def log_message(self, msg_format, *args): self.server.logger.info(self.address_string() + ' ' + msg_format % args) def get_query(self, name, default=None): """ Get a value from the query data that was sent to the server. :param str name: The name of the query value to retrieve. :param default: The value to return if *name* is not specified. :return: The value if it exists, otherwise *default* will be returned. :rtype: str """ return self.query_data.get(name, [default])[0] def get_content_type_charset(self, default='UTF-8'): """ Inspect the Content-Type header to retrieve the charset that the client has specified. :param str default: The default charset to return if none exists. :return: The charset of the request. :rtype: str """ encoding = default header = self.headers.get('Content-Type', '') idx = header.find('charset=') if idx > 0: encoding = (header[idx + 8:].split(' ', 1)[0] or encoding) return encoding class WakeupFd(object): __slots__ = ('read_fd', 'write_fd') def __init__(self): self.read_fd, self.write_fd = os.pipe() def close(self): os.close(self.read_fd) os.close(self.write_fd) def fileno(self): return self.read_fd class WebSocketHandler(object): """ A handler for web socket connections. 
""" _opcode_continue = 0x00 _opcode_text = 0x01 _opcode_binary = 0x02 _opcode_close = 0x08 _opcode_ping = 0x09 _opcode_pong = 0x0a _opcode_names = { _opcode_continue: 'continue', _opcode_text: 'text', _opcode_binary: 'binary', _opcode_close: 'close', _opcode_ping: 'ping', _opcode_pong: 'pong' } guid = '258EAFA5-E914-47DA-95CA-C5AB0DC85B11' def __init__(self, handler): """ :param handler: The :py:class:`RequestHandler` instance that is handling the request. """ self.handler = handler if not hasattr(self, 'logger'): self.logger = logging.getLogger('AdvancedHTTPServer.WebSocketHandler') headers = self.handler.headers client_extensions = headers.get('Sec-WebSocket-Extensions', '') self.client_extensions = [extension.strip() for extension in client_extensions.split(',')] key = headers.get('Sec-WebSocket-Key', None) digest = hashlib.sha1((key + self.guid).encode('utf-8')).digest() handler.send_response(101, 'Switching Protocols') handler.send_header('Upgrade', 'WebSocket') handler.send_header('Connection', 'Upgrade') handler.send_header('Sec-WebSocket-Accept', base64.b64encode(digest).decode('utf-8')) handler.end_headers() handler.wfile.flush() self.lock = threading.Lock() self.connected = True self.logger.info('web socket has been connected') self.on_connected() self._last_buffer = b'' self._last_opcode = 0 self._last_sent_opcode = 0 while self.connected: try: self._process_message() except socket.error: self.logger.warning('there was a socket error while processing web socket messages') self.close() except Exception: self.logger.error('there was an error while processing web socket messages', exc_info=True) self.close() self.handler.close_connection = 1 def _decode_string(self, data): str = data.decode('utf-8') if sys.version_info[0] == 3: return str # raise an exception on surrogates in python 2.7 to more closely replicate 3.x behaviour for idx, ch in enumerate(str): if 0xD800 <= ord(ch) <= 0xDFFF: raise UnicodeDecodeError('utf-8', '', idx, idx + 1, 'invalid 
continuation byte') return str def _process_message(self): byte_0 = self.handler.rfile.read(1) if not byte_0: self.close() return byte_0 = ord(byte_0) if byte_0 & 0x70: self.close() return fin = bool(byte_0 & 0x80) opcode = byte_0 & 0x0f length = ord(self.handler.rfile.read(1)) & 0x7f if length == 126: length = struct.unpack('>H', self.handler.rfile.read(2))[0] elif length == 127: length = struct.unpack('>Q', self.handler.rfile.read(8))[0] masks = [b for b in self.handler.rfile.read(4)] if sys.version_info[0] < 3: masks = map(ord, masks) payload = bytearray(self.handler.rfile.read(length)) for idx, char in enumerate(payload): payload[idx] = char ^ masks[idx % 4] payload = bytes(payload) self.logger.debug("received message (len: {0:,} opcode: 0x{1:02x} fin: {2})".format(len(payload), opcode, fin)) if fin: if opcode == self._opcode_continue: opcode = self._last_opcode payload = self._last_buffer + payload self._last_buffer = b'' self._last_opcode = 0 elif self._last_buffer and opcode in (self._opcode_binary, self._opcode_text): self.logger.warning('closing connection due to unflushed buffer in new data frame') self.close() return self.on_message(opcode, payload) return if opcode > 0x02: self.logger.warning('closing connection due to fin flag not set on opcode > 0x02') self.close() return if opcode: if self._last_buffer: self.logger.warning('closing connection due to unflushed buffer in new continuation frame') self.close() return self._last_buffer = payload self._last_opcode = opcode else: self._last_buffer += payload def close(self): """ Close the web socket connection and stop processing results. If the connection is still open, a WebSocket close message will be sent to the peer. 
""" if not self.connected: return self.connected = False if self.handler.wfile.closed: return if select.select([], [self.handler.wfile], [], 0)[1]: with self.lock: self.handler.wfile.write(b'\x88\x00') self.handler.wfile.flush() self.on_closed() def send_message(self, opcode, message): """ Send a message to the peer over the socket. :param int opcode: The opcode for the message to send. :param bytes message: The message data to send. """ if not isinstance(message, bytes): message = message.encode('utf-8') length = len(message) if not select.select([], [self.handler.wfile], [], 0)[1]: self.logger.error('the socket is not ready for writing') self.close() return buffer = b'' buffer += struct.pack('B', 0x80 + opcode) if length <= 125: buffer += struct.pack('B', length) elif 126 <= length <= 65535: buffer += struct.pack('>BH', 126, length) else: buffer += struct.pack('>BQ', 127, length) buffer += message self._last_sent_opcode = opcode self.lock.acquire() try: self.handler.wfile.write(buffer) self.handler.wfile.flush() except Exception: self.logger.error('an error occurred while sending a message', exc_info=True) self.close() finally: self.lock.release() def send_message_binary(self, message): return self.send_message(self._opcode_binary, message) def send_message_ping(self, message): return self.send_message(self._opcode_ping, message) def send_message_text(self, message): return self.send_message(self._opcode_text, message) def on_closed(self): """ A method that can be over ridden and is called after the web socket is closed. """ pass def on_connected(self): """ A method that can be over ridden and is called after the web socket is connected. """ pass def on_message(self, opcode, message): """ The primary dispatch function to handle incoming WebSocket messages. :param int opcode: The opcode of the message that was received. :param bytes message: The data contained within the message. 
""" self.logger.debug("processing {0} (opcode: 0x{1:02x}) message".format(self._opcode_names.get(opcode, 'UNKNOWN'), opcode)) if opcode == self._opcode_close: self.close() elif opcode == self._opcode_ping: if len(message) > 125: self.close() return self.send_message(self._opcode_pong, message) elif opcode == self._opcode_pong: pass elif opcode == self._opcode_binary: self.on_message_binary(message) elif opcode == self._opcode_text: try: message = self._decode_string(message) except UnicodeDecodeError: self.logger.warning('closing connection due to invalid unicode within a text message') self.close() else: self.on_message_text(message) elif opcode == self._opcode_continue: self.close() else: self.logger.warning("received unknown opcode: {0} (0x{0:02x})".format(opcode)) self.close() def on_message_binary(self, message): """ A method that can be over ridden and is called when a binary message is received from the peer. :param bytes message: The message data. """ pass def on_message_text(self, message): """ A method that can be over ridden and is called when a text message is received from the peer. :param str message: The message data. """ pass def ping(self): self.send_message_ping(random_string(16)) class Serializer(object): """ This class represents a serilizer object for use with the RPC system. """ def __init__(self, name, charset='UTF-8', compression=None): """ :param str name: The name of the serializer to use. :param str charset: The name of the encoding to use. :param str compression: The compression library to use. """ if not name in g_serializer_drivers: raise ValueError("unknown serializer '{0}'".format(name)) self.name = name self._charset = charset self._compression = compression self.content_type = "{0}; charset={1}".format(self.name, self._charset) if self._compression: self.content_type += '; compression=' + self._compression @classmethod def from_content_type(cls, content_type): """ Build a serializer object from a MIME Content-Type string. 
:param str content_type: The Content-Type string to parse. :return: A new serializer instance. :rtype: :py:class:`.Serializer` """ name = content_type options = {} if ';' in content_type: name, options_str = content_type.split(';', 1) for part in options_str.split(';'): part = part.strip() if '=' in part: key, value = part.split('=') else: key, value = (part, None) options[key] = value # old style compatibility if name.endswith('+zlib'): options['compression'] = 'zlib' name = name[:-5] return cls(name, charset=options.get('charset', 'UTF-8'), compression=options.get('compression')) def dumps(self, data): """ Serialize a python data type for transmission or storage. :param data: The python object to serialize. :return: The serialized representation of the object. :rtype: bytes """ data = g_serializer_drivers[self.name]['dumps'](data) if sys.version_info[0] == 3 and isinstance(data, str): data = data.encode(self._charset) if self._compression == 'zlib': data = zlib.compress(data) assert isinstance(data, bytes) return data def loads(self, data): """ Deserialize the data into it's original python object. :param bytes data: The serialized object to load. :return: The original python object. """ if not isinstance(data, bytes): raise TypeError("loads() argument 1 must be bytes, not {0}".format(type(data).__name__)) if self._compression == 'zlib': data = zlib.decompress(data) if sys.version_info[0] == 3 and self.name.startswith('application/'): data = data.decode(self._charset) data = g_serializer_drivers[self.name]['loads'](data, (self._charset if sys.version_info[0] == 3 else None)) if isinstance(data, list): data = tuple(data) return data SSLSNICertificate = collections.namedtuple('SSLSNICertificate', ('hostname', 'certfile', 'keyfile')) """ The information for a certificate used by SSL's Server Name Indicator (SNI) extension. .. versionadded:: 2.2.0 .. py:attribute:: hostname The hostname string for requests which should use this certificate information. .. 
py:attribute:: certfile The path to the SSL certificate file on disk to use for the hostname. .. py:attribute:: keyfile The path to the SSL key file on disk to use for the hostname. """ SSLSNIEntry = collections.namedtuple('SSLSNIEntry', ('certificate', 'context')) class AdvancedHTTPServer(object): """ This is the primary server class for the AdvancedHTTPServer module. Custom servers must inherit from this object to be compatible. When no *address* parameter is specified the address '0.0.0.0' is used and the port is guessed based on if the server is run as root or not and SSL is used. """ def __init__(self, handler_klass, address=None, addresses=None, use_threads=True, ssl_certfile=None, ssl_keyfile=None, ssl_version=None): """ :param handler_klass: The request handler class to use. :type handler_klass: :py:class:`.RequestHandler` :param tuple address: The address to bind to in the format (host, port). :param tuple addresses: The addresses to bind to in the format (host, port, ssl). :param bool use_threads: Whether to enable the use of a threaded handler. :param str ssl_certfile: An SSL certificate file to use, setting this enables SSL. :param str ssl_keyfile: An SSL certificate file to use. :param ssl_version: The SSL protocol version to use. 
""" if addresses is None: addresses = [] if address is None and not addresses: if ssl_certfile is not None: if os.getuid(): addresses.insert(0, ('0.0.0.0', 8443, True)) else: addresses.insert(0, ('0.0.0.0', 443, True)) else: if os.getuid(): addresses.insert(0, ('0.0.0.0', 8080, False)) else: addresses.insert(0, ('0.0.0.0', 80, False)) elif address: addresses.insert(0, (address[0], address[1], ssl_certfile is not None)) self.ssl_certfile = ssl_certfile self.ssl_keyfile = ssl_keyfile if not hasattr(self, 'logger'): self.logger = logging.getLogger('AdvancedHTTPServer') self.__should_stop = threading.Event() self.__is_shutdown = threading.Event() self.__is_shutdown.set() self.__is_running = threading.Event() self.__is_running.clear() self.__server_thread = None self.__wakeup_fd = None self.__config = { 'basic_auth': None, 'robots_txt': b'User-agent: *\nDisallow: /\n', 'serve_files': False, 'serve_files_list_directories': True, # irrelevant if serve_files == False 'serve_files_root': os.getcwd(), 'serve_robots_txt': True, 'server_version': 'AdvancedHTTPServer/' + __version__ } self.sub_servers = [] """The instances of :py:class:`.ServerNonThreaded` that are responsible for listening on each configured address.""" if use_threads: server_klass = ServerThreaded else: server_klass = ServerNonThreaded for address in addresses: server = server_klass((address[0], address[1]), handler_klass, config=self.__config) use_ssl = (len(address) == 3 and address[2]) server.using_ssl = use_ssl self.sub_servers.append(server) self.logger.info("listening on {0}:{1}".format(address[0], address[1]) + (' with ssl' if use_ssl else '')) self._ssl_sni_entries = None if any([server.using_ssl for server in self.sub_servers]): self._ssl_sni_entries = {} if ssl_version is None or isinstance(ssl_version, str): ssl_version = resolve_ssl_protocol_version(ssl_version) self._ssl_ctx = ssl.SSLContext(ssl_version) self._ssl_ctx.load_cert_chain(ssl_certfile, keyfile=ssl_keyfile) if g_ssl_has_server_sni: 
self._ssl_ctx.set_servername_callback(self._ssl_servername_callback) for server in self.sub_servers: if not server.using_ssl: continue server.socket = self._ssl_ctx.wrap_socket(server.socket, server_side=True, do_handshake_on_connect=False) if hasattr(handler_klass, 'custom_authentication'): self.logger.debug('a custom authentication function is being used') self.auth_set(True) def _ssl_servername_callback(self, sock, hostname, context): sni_entry = self._ssl_sni_entries.get(hostname) if sni_entry: self.logger.debug('setting a new ssl context for sni hostname: %s', hostname) sock.context = sni_entry.context return None def add_sni_cert(self, hostname, ssl_certfile=None, ssl_keyfile=None, ssl_version=None): """ Add an SSL certificate for a specific hostname as supported by SSL's Server Name Indicator (SNI) extension. See :rfc:`3546` for more details on SSL extensions. In order to use this method, the server instance must have been initialized with at least one address configured for SSL. .. warning:: This method will raise a :py:exc:`RuntimeError` if either the SNI extension is not available in the :py:mod:`ssl` module or if SSL was not enabled at initialization time through the use of arguments to :py:meth:`~.__init__`. .. versionadded:: 2.0.0 :param str hostname: The hostname for this configuration. :param str ssl_certfile: An SSL certificate file to use, setting this enables SSL. :param str ssl_keyfile: An SSL certificate file to use. :param ssl_version: The SSL protocol version to use. 
""" if not g_ssl_has_server_sni: raise RuntimeError('the ssl server name indicator extension is unavailable') if self._ssl_sni_entries is None: raise RuntimeError('ssl was not enabled on initialization') if ssl_certfile: ssl_certfile = os.path.abspath(ssl_certfile) if ssl_keyfile: ssl_keyfile = os.path.abspath(ssl_keyfile) cert_info = SSLSNICertificate(hostname, ssl_certfile, ssl_keyfile) if ssl_version is None or isinstance(ssl_version, str): ssl_version = resolve_ssl_protocol_version(ssl_version) ssl_ctx = ssl.SSLContext(ssl_version) ssl_ctx.load_cert_chain(ssl_certfile, keyfile=ssl_keyfile) self._ssl_sni_entries[hostname] = SSLSNIEntry(context=ssl_ctx, certificate=cert_info) def remove_sni_cert(self, hostname): """ Remove the SSL Server Name Indicator (SNI) certificate configuration for the specified *hostname*. .. warning:: This method will raise a :py:exc:`RuntimeError` if either the SNI extension is not available in the :py:mod:`ssl` module or if SSL was not enabled at initialization time through the use of arguments to :py:meth:`~.__init__`. .. versionadded:: 2.2.0 :param str hostname: The hostname to delete the SNI configuration for. """ if not g_ssl_has_server_sni: raise RuntimeError('the ssl server name indicator extension is unavailable') if self._ssl_sni_entries is None: raise RuntimeError('ssl was not enabled on initialization') sni_entry = self._ssl_sni_entries.pop(hostname, None) if sni_entry is None: raise ValueError('the specified hostname does not have an sni certificate configuration') @property def sni_certs(self): """ .. versionadded:: 2.2.0 :return: Return a tuple of :py:class:`~.SSLSNICertificate` instances for each of the certificates that are configured. 
:rtype: tuple """ if not g_ssl_has_server_sni or self._ssl_sni_entries is None: return tuple() return tuple(entry.certificate for entry in self._ssl_sni_entries.values()) @property def server_started(self): return self.__server_thread is not None def _serve_ready(self): read_check = [self.__wakeup_fd] for sub_server in self.sub_servers: read_check.extend(sub_server.read_checkable_fds) all_read_ready, _, _ = select.select(read_check, [], []) for read_ready in all_read_ready: if isinstance(read_ready, (_RequestEmbryo, http.server.HTTPServer)): read_ready.serve_ready() def serve_forever(self, fork=False): """ Start handling requests. This method must be called and does not return unless the :py:meth:`.shutdown` method is called from another thread. :param bool fork: Whether to fork or not before serving content. :return: The child processes PID if *fork* is set to True. :rtype: int """ if fork: if not hasattr(os, 'fork'): raise OSError('os.fork is not available') child_pid = os.fork() if child_pid != 0: self.logger.info('forked child process: ' + str(child_pid)) return child_pid self.__server_thread = threading.current_thread() self.__wakeup_fd = WakeupFd() self.__is_shutdown.clear() self.__should_stop.clear() self.__is_running.set() while not self.__should_stop.is_set(): try: self._serve_ready() except socket.error: self.logger.warning('encountered socket error, stopping server') self.__should_stop.set() self.__is_shutdown.set() self.__is_running.clear() return 0 def shutdown(self): """Shutdown the server and stop responding to requests.""" self.__should_stop.set() if self.__server_thread == threading.current_thread(): self.__is_shutdown.set() self.__is_running.clear() else: if self.__wakeup_fd is not None: os.write(self.__wakeup_fd.write_fd, b'\x00') self.__is_shutdown.wait() if self.__wakeup_fd is not None: self.__wakeup_fd.close() self.__wakeup_fd = None for server in self.sub_servers: server.shutdown() @property def serve_files(self): """ Whether to enable 
serving files or not. :type: bool """ return self.__config['serve_files'] @serve_files.setter def serve_files(self, value): value = bool(value) if self.__config['serve_files'] == value: return self.__config['serve_files'] = value if value: self.logger.info('serving files has been enabled') else: self.logger.info('serving files has been disabled') @property def serve_files_root(self): """ The web root to use when serving files. :type: str """ return self.__config['serve_files_root'] @serve_files_root.setter def serve_files_root(self, value): self.__config['serve_files_root'] = os.path.abspath(value) @property def serve_files_list_directories(self): """ Whether to list the contents of directories. This is only honored when :py:attr:`.serve_files` is True. :type: bool """ return self.__config['serve_files_list_directories'] @serve_files_list_directories.setter def serve_files_list_directories(self, value): self.__config['serve_files_list_directories'] = bool(value) @property def serve_robots_txt(self): """ Whether to serve a default robots.txt file which denies everything. :type: bool """ return self.__config['serve_robots_txt'] @serve_robots_txt.setter def serve_robots_txt(self, value): self.__config['serve_robots_txt'] = bool(value) @property def server_version(self): """ The server version to be sent to clients in headers. :type: str """ return self.__config['server_version'] @server_version.setter def server_version(self, value): self.__config['server_version'] = str(value) def auth_set(self, status): """ Enable or disable requiring authentication on all incoming requests. :param bool status: Whether to enable or disable requiring authentication. 
""" if not bool(status): self.__config['basic_auth'] = None self.logger.info('basic authentication has been disabled') else: self.__config['basic_auth'] = {} self.logger.info('basic authentication has been enabled') def auth_delete_creds(self, username=None): """ Delete the credentials for a specific username if specified or all stored credentials. :param str username: The username of the credentials to delete. """ if not username: self.__config['basic_auth'] = {} self.logger.info('basic authentication database has been cleared of all entries') return del self.__config['basic_auth'][username] def auth_add_creds(self, username, password, pwtype='plain'): """ Add a valid set of credentials to be accepted for authentication. Calling this function will automatically enable requiring authentication. Passwords can be provided in either plaintext or as a hash by specifying the hash type in the *pwtype* argument. :param str username: The username of the credentials to be added. :param password: The password data of the credentials to be added. :type password: bytes, str :param str pwtype: The type of the *password* data, (plain, md5, sha1, etc.). 
""" if not isinstance(password, (bytes, str)): raise TypeError("auth_add_creds() argument 2 must be bytes or str, not {0}".format(type(password).__name__)) pwtype = pwtype.lower() if not pwtype in ('plain', 'md5', 'sha1', 'sha256', 'sha384', 'sha512'): raise ValueError('invalid password type, must be \'plain\', or supported by hashlib') if self.__config.get('basic_auth') is None: self.__config['basic_auth'] = {} self.logger.info('basic authentication has been enabled') if pwtype != 'plain': algorithms_available = getattr(hashlib, 'algorithms_available', ()) or getattr(hashlib, 'algorithms', ()) if pwtype not in algorithms_available: raise ValueError('hashlib does not support the desired algorithm') # only md5 and sha1 hex for backwards compatibility if pwtype == 'md5' and len(password) == 32: password = binascii.unhexlify(password) elif pwtype == 'sha1' and len(password) == 40: password = binascii.unhexlify(password) if not isinstance(password, bytes): password = password.encode('UTF-8') if len(hashlib.new(pwtype, b'foobar').digest()) != len(password): raise ValueError('the length of the password hash does not match the type specified') self.__config['basic_auth'][username] = {'value': password, 'type': pwtype} class ServerTestCase(unittest.TestCase): """ A base class for unit tests with AdvancedHTTPServer derived classes. """ server_class = AdvancedHTTPServer """The :py:class:`.AdvancedHTTPServer` class to use as the server, this can be overridden by subclasses.""" handler_class = RequestHandler """The :py:class:`.RequestHandler` class to use as the request handler, this can be overridden by subclasses.""" def __init__(self, *args, **kwargs): super(ServerTestCase, self).__init__(*args, **kwargs) self.test_resource = "/{0}".format(random_string(40)) """ A resource which has a handler set to it which will respond with a 200 status code and the message 'Hello World!' 
""" self.server_address = ('localhost', random.randint(30000, 50000)) self._server_kwargs = { 'address': self.server_address } if hasattr(self, 'assertRegexpMatches') and not hasattr(self, 'assertRegexMatches'): self.assertRegexMatches = self.assertRegexpMatches if hasattr(self, 'assertRaisesRegexp') and not hasattr(self, 'assertRaisesRegex'): self.assertRaisesRegex = self.assertRaisesRegexp def setUp(self): RegisterPath("^{0}$".format(self.test_resource[1:]), self.handler_class.__name__)(self._test_resource_handler) self.server = self.server_class(self.handler_class, **self._server_kwargs) self.assertTrue(isinstance(self.server, AdvancedHTTPServer)) self.server_thread = threading.Thread(target=self.server.serve_forever) self.server_thread.daemon = True self.server_thread.start() self.assertTrue(self.server_thread.is_alive()) self.shutdown_requested = False if len(self.server_address) == 3 and self.server_address[2]: context = ssl.create_default_context() context.check_hostname = False context.verify_mode = ssl.CERT_NONE self.http_connection = http.client.HTTPSConnection(self.server_address[0], self.server_address[1], context=context) else: self.http_connection = http.client.HTTPConnection(self.server_address[0], self.server_address[1]) self.http_connection.connect() def _test_resource_handler(self, handler, query): del query handler.send_response_full(b'Hello World!\n') return def assertHTTPStatus(self, http_response, status): """ Check an HTTP response object and ensure the status is correct. :param http_response: The response object to check. :type http_response: :py:class:`http.client.HTTPResponse` :param int status: The status code to expect for *http_response*. 
""" self.assertTrue(isinstance(http_response, http.client.HTTPResponse)) error_message = "HTTP Response received status {0} when {1} was expected".format(http_response.status, status) self.assertEqual(http_response.status, status, msg=error_message) def http_request(self, resource, method='GET', headers=None): """ Make an HTTP request to the test server and return the response. :param str resource: The resource to issue the request to. :param str method: The HTTP verb to use (GET, HEAD, POST etc.). :param dict headers: The HTTP headers to provide in the request. :return: The HTTP response object. :rtype: :py:class:`http.client.HTTPResponse` """ headers = (headers or {}) if not 'Connection' in headers: headers['Connection'] = 'keep-alive' self.http_connection.request(method, resource, headers=headers) time.sleep(0.025) response = self.http_connection.getresponse() response.data = response.read() return response def tearDown(self): if not self.shutdown_requested: self.assertTrue(self.server_thread.is_alive()) self.http_connection.close() self.server.shutdown() self.server_thread.join(10.0) self.assertFalse(self.server_thread.is_alive()) del self.server def main(): try: server = build_server_from_argparser() except ImportError: server = AdvancedHTTPServer(RequestHandler, use_threads=False) server.serve_files_root = '.' server.serve_files_root = (server.serve_files_root or '.') server.serve_files = True try: server.serve_forever() except KeyboardInterrupt: pass server.shutdown() logging.shutdown() return 0 if __name__ == '__main__': main()
AdvancedHTTPServer
/AdvancedHTTPServer-2.2.0.tar.gz/AdvancedHTTPServer-2.2.0/advancedhttpserver.py
advancedhttpserver.py
from globalfunc import *
from settings import Settings
import re
try:
    import converter
except ImportError:
    converter = None

class ConverterHandler(object):
    def __init__(self, variant, settings = {}):
        self.settings = Settings(settings)
        ### INITIATE CONVERTERS AND RULEPARSER ###
        self.variant = variant
        self.converters = {}
        self.ruleparser = _RuleParser(variant, self)
        for vvariant in self.settings.VALIDVARIANTS:
            self.converters[vvariant] = _Converter(vvariant, self)
        self.mainconverter = self.converters[variant]

    def convert(self, content, parserules = True):
        return self.mainconverter.convert(content, parserules)

    def convert_to(self, variant, content, parserules = True):
        return self.converters[variant].convert(content, parserules)

    def parse(self, text):
        return self.ruleparser.parse(text)

class _Converter(object):
    def __init__(self, variant, handler):
        ### DEFINITION OF VARIABLES ###
        self.variant = variant  # The variant we want to convert to
        self.handler = handler
        self.convtable = {}     # The conversion table
        self.quicktable = {}    # A quick table
        self.maxlen = 0         # Max length of the words
        self.maxdepth = 10      # Depth for recursive convert rule
        self.hooks = {'depth_exceed_msg': None, 'rule_parser': None}  # Hooks for converter
        ### DEFINITION OF LAMBDA METHOD ###
        self.get_message = lambda name, *args, **kwargs: get_message(variant, name, *args, **kwargs)
        ### INITIATE FUNCTIONS ###
        self.load_table()         # Load default table
        self.set_default_hooks()  # As it says

    """def get_message(self, name, *args, **kwargs):
        return get_message(self.variant, name, *args, **kwargs)"""

    def set_default_hooks(self):
        """As it says."""
        self.hooks['depth_exceed_msg'] = lambda depth: self.get_message('deptherr', depth)
        self.hooks['rule_parser'] = self.handler.ruleparser.parse

    def set_hook(self, name, callfunc):
        self.hooks[name] = callfunc

    def add_quick(self, ori):
        """Add item to quicktable."""
        orilen = len(ori)
        self.maxlen = orilen > self.maxlen and orilen or self.maxlen
        try:
            wordlens = self.quicktable[ori[0]]
        except KeyError, err:
            self.quicktable[ori[0]] = [orilen]
        else:
            wllen = len(wordlens)
            pos = wllen // 2
            while pos > -1 and pos < wllen + 1:
                if pos == 0:
                    left = orilen + 1
                else:
                    left = wordlens[pos - 1]
                if pos == wllen:
                    right = orilen - 1
                else:
                    right = wordlens[pos]
                #print left, orilen, right, pos
                if orilen == left or orilen == right:
                    break
                elif left > orilen and orilen > right:
                    wordlens.insert(pos, orilen)
                    break
                elif orilen > left:
                    pos -= pos // 2 or 1
                else: # right > orilen
                    pos += (wllen - pos) // 2 or 1

    def load_table(self, isgroup = False):
        """Load a conversion table.

        Raise ImportError if an import error happens."""
        newtable = __import__('langconv.defaulttables.%s' % \
            self.variant.replace('-', '_'), fromlist = 'convtable').convtable
        self.convtable.update(newtable)
        # try to load quicktable from cache
        if not isgroup:
            self.quicktable = get_cache(self.handler.settings, '%s-qtable' % self.variant)
            self.maxlen = get_cache(self.handler.settings, '%s-maxlen' % self.variant)
            if self.quicktable is not None and self.maxlen is not None:
                return
            else:
                self.quicktable = {}
                self.maxlen = 0
        for (ori, dst) in newtable.iteritems():
            self.add_quick(ori)
        # try to dump quicktable to cache
        if not isgroup:
            set_cache(self.handler.settings, '%s-qtable' % self.variant, self.quicktable)
            set_cache(self.handler.settings, '%s-maxlen' % self.variant, self.maxlen)

    def update(self, newtable):
        self.convtable.update(newtable)
        for (ori, dst) in newtable.iteritems():
            self.add_quick(ori)

    def add_rule(self, ori, dst):
        """Add a rule to convtable and quicktable."""
        self.convtable[ori] = dst
        self.add_quick(ori)

    def del_rule(self, ori, dst):
        if self.convtable.get(ori) == dst:
            self.convtable.pop(ori)

    if converter: # The C module has been imported correctly
        def convert(self, content, parserules = True):
            content = to_unicode(content)
            return converter.convert(self, content, parserules)
    else:
        def recursive_convert_rule(self, content, pos, contlen, depth = 1):
            oripos = pos
            out = []
            exceedtime = 0
            while pos < contlen:
                token = content[pos:pos + 2]
                if token == '-{':
                    if depth < self.maxdepth:
                        inner, pos = self.recursive_convert_rule(content, pos + 2, contlen, depth + 1)
                        out.append(inner)
                        continue
                    else:
                        if not exceedtime and self.hooks['depth_exceed_msg'] is not None:
                            out.append(self.hooks['depth_exceed_msg'](depth))
                        exceedtime += 1
                elif token == '}-':
                    if depth >= self.maxdepth and exceedtime:
                        exceedtime -= 1
                    else:
                        inner = ''.join(out)
                        if not exceedtime:
                            inner = self.handler.parse(inner)
                        return (inner, pos + 2)
                out.append(content[pos])
                pos += 1
            else: # unclosed rule, won't parse but still auto convert
                return ('-', oripos - 1)

        def convert(self, content, parserules = True):
            """Use the specified variant to convert the content.

            content is the string to convert, set parserules to False
            if you don't want to parse rules."""
            content = to_unicode(content)
            out = []
            contlen = len(content)
            pos = 0
            trytime = 0 # for debug
            while pos < contlen:
                if parserules and content[pos:pos + 2] == '-{':
                    # markup found
                    inner, pos = self.recursive_convert_rule(content, pos + 2, contlen)
                    out.append(inner)
                    continue
                wordlens = self.quicktable.get(content[pos])
                single = content[pos]
                if wordlens is None:
                    trytime += 1 # for debug
                    out.append(single)
                    pos += 1
                else:
                    for wordlen in wordlens:
                        trytime += 1 # for debug
                        oriword = content[pos:pos + wordlen]
                        convword = self.convtable.get(oriword)
                        if convword is not None:
                            out.append(convword)
                            pos += wordlen
                            break
                    else:
                        trytime += 1 # for debug
                        out.append(single)
                        pos += 1
            print trytime # for debug
            return ''.join(out)

class _RuleParser(object):
    def __init__(self, variant, handler):
        self.variant = variant
        self.handler = handler
        self.flagdict = {
            # add a single rule to convtable and return the converted result
            # -{FLAG|rule}-  FLAG: A[[;NF]|[;NA:variant]]
            'A': lambda flag, rule: self.add_rule(flag, rule, display = True),
            # describe the rule
            # -{D|rule}-
            'D': self.describe_rule,
            # add a lot of rules from a group to convtable
            # -{G|groupname}-
            'G': self.add_group,
            # add a single rule to convtable
            # -{FLAG|rule}-  FLAG: H[[;NF]|[;NA:variant]]
            'H': lambda flag, rule: self.add_rule(flag, rule, display = False),
            # raw content
            # -{R|content}-
            'R': self.display_raw,
            # set title
            # -{FLAG|rule}-  FLAG: T[[;NF]|[;NA:variant]]
            'T': self.set_title,
            # remove rules from convtable
            # -{-|rule}-
            '-': self.remove_rule,
        }
        self.variants = self.handler.settings.VALIDVARIANTS
        self.fallback = self.handler.settings.VARIANTFALLBACK
        self.asfallback = {}
        for var in self.variants:
            self.asfallback[var] = []
        for varright in self.variants:
            for varleft in self.fallback[varright]:
                self.asfallback[varleft].append(varright)
        self.myfallback = self.fallback[self.variant]
        varsep_pattern = ';\s*(?='
        for variant in self.variants:
            varsep_pattern += '%s\s*:|' % variant # zh-hans:xxx;zh-hant:yyy
            varsep_pattern += '[^;]*?=>\s*%s\s*:|' % variant # xxx=>zh-hans:yyy; xxx=>zh-hant:zzz
        varsep_pattern += '\s*$)'
        self.varsep = re.compile(varsep_pattern)

    def parse(self, text):
        flagrule = text.split(u'|', 1)
        if len(flagrule) == 1:
            # flag is empty, so just call the default rule parser
            return self.parse_rule(text, withtable = False)
        else:
            flag, rule = flagrule
            flag = flag.strip()
            rule = rule.strip()
            ruleparser = self.flagdict.get(flag[0])
            if ruleparser:
                # we got a valid flag, call the parser now
                return ruleparser(flag, rule)
            else:
                # perhaps it's a "fallback convert"
                return self.fb_convert(text, flag, rule)

    def parse_rule(self, rule, withtable = True, allowfallback = True, notadd = []):
        """Parse rule and get default output."""
        #TODO:
        #add flags:
        # NOFALLBACK
        # NOCONVERT
        table = {}
        for variant in self.variants:
            table[variant] = {}
        bidtable = {}
        unidtable = {}
        all = ''
        out = ''
        overrule = False
        rule = rule.replace(u'=&gt;', u'=>')
        choices = self.varsep.split(rule)
        for choice in choices:
            if choice == '':
                continue
            # first, we split [xxx=>]zh-hans:yyy to ([xxx=>]zh-hans, yyy)
            part = choice.split(u':', 1)
            # only 'yyy'
            if len(part) == 1:
                all = part[0]
                out = all # output
                continue
            variant = part[0].strip() # [xxx=>]zh-hans
            toword = part[1].strip()  # yyy
            # then, we split xxx=>zh-hans to (xxx, zh-hans)
            unid = variant.split(u'=>', 1)
            if toword:
                # only 'zh-hans:xxx'
                if len(unid) == 1 and variant in self.variants:
                    if variant == self.variant:
                        out = toword
                        overrule = True
                    elif allowfallback and \
                            not overrule and \
                            variant in self.myfallback:
                        out = toword
                    if withtable:
                        bidtable[variant] = toword
                # 'xxx=>zh-hans:yyy'
                elif len(unid) == 2:
                    variant = unid[1].strip() # zh-hans
                    if variant == self.variant:
                        out = toword
                        overrule = True
                    elif allowfallback and \
                            not overrule and \
                            variant in self.myfallback:
                        out = toword
                    if withtable:
                        fromword = unid[0].strip()
                        if not unidtable.has_key(variant):
                            unidtable[variant] = {}
                        if toword and variant in self.variants:
                            if variant not in notadd:
                                unidtable[variant][fromword] = toword
                            if allowfallback:
                                for fbv in self.asfallback[variant]:
                                    if fbv not in notadd:
                                        if not unidtable.has_key(fbv):
                                            unidtable[fbv] = {}
                                        if not unidtable[fbv].has_key(fromword):
                                            unidtable[fbv][fromword] = toword
                elif out == '':
                    out = choice
            elif out == '':
                out = choice
        if not withtable:
            return out
        ### ELSE
        # add 'xxx': 'xxx' to every variant
        if all:
            for variant in self.variants:
                table[variant][all] = all
        # parse bidtable, aka tables filled by 'zh-hans:xxx'
        for (variant, toword) in bidtable.iteritems():
            for fromword in bidtable.itervalues():
                if variant not in notadd:
                    table[variant][fromword] = toword
                if allowfallback:
                    for fbv in self.asfallback[variant]:
                        if not table[fbv].has_key(fromword) and \
                                fbv not in notadd:
                            table[fbv][fromword] = toword
        # parse unidtable, aka tables filled by 'xxx=>zh-hans:yyy'
        for variant in unidtable.iterkeys():
            table[variant].update(unidtable[variant])
        ### ENDIF
        return (out, table)

    def _parse_multiflag(self, flag):
        allowfallback = True
        notadd = []
        # a valid multiflag could be:
        # (A|H|T|-)[[;NF]|[;NA:variant]]
        for fpart in flag.split(';'):
            fpart = fpart.strip()
            if fpart == 'NF': # no fallback
                allowfallback = False
            elif fpart.startswith('NA'): # not add
                napart = fpart.split(':', 1)
                if len(napart) == 2 and napart[0].strip() == 'NA' and \
                        napart[1].strip() in self.variants:
                    notadd.append(napart[1])
        return (allowfallback, notadd)

    def add_rule(self, flag, rule, display):
        af, na = self._parse_multiflag(flag)
        out, tables = self.parse_rule(rule, withtable = True, \
                                      allowfallback = af, \
                                      notadd = na)
        for (variant, table) in tables.iteritems():
            self.handler.converters[variant].update(table)
        if display:
            return out
        else:
            return u''

    def describe_rule(self, flag, rule):
        return rule

    def add_group(self, flag, rule):
        return ''

    def display_raw(self, flag, rule):
        return rule

    def set_title(self, flag, rule):
        af, na = self._parse_multiflag(flag)
        out = self.parse_rule(rule, withtable = False, \
                              allowfallback = af, \
                              notadd = na)
        return ''

    def remove_rule(self, flag, rule):
        af, na = self._parse_multiflag(flag)
        out, tables = self.parse_rule(rule, withtable = True, \
                                      allowfallback = af, \
                                      notadd = na)
        for (variant, table) in tables.iteritems():
            for oridst in table.iteritems():
                self.handler.converters[variant].del_rule(*oridst)
        return ''

    def fb_convert(self, text, flag, rule):
        return text
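The pure-Python `convert` path above does greedy longest-match lookup: `quicktable` maps a word's first character to the candidate word lengths, so the converter only slices substrings that could possibly be table keys. A self-contained Python 3 sketch of that idea with a toy two-entry table (the names `build_quicktable` and `convert` are mine, not the module's API, and this omits the rule-markup handling):

```python
def build_quicktable(convtable):
    # For each first character, collect the candidate word lengths,
    # longest first, so lookup tries the longest match first.
    quicktable = {}
    for word in convtable:
        lens = quicktable.setdefault(word[0], [])
        if len(word) not in lens:
            lens.append(len(word))
    for lens in quicktable.values():
        lens.sort(reverse=True)
    return quicktable

def convert(content, convtable, quicktable):
    out = []
    pos = 0
    while pos < len(content):
        # try every plausible word length starting at pos, longest first
        for wordlen in quicktable.get(content[pos], ()):
            word = content[pos:pos + wordlen]
            if word in convtable:
                out.append(convtable[word])
                pos += wordlen
                break
        else:
            # no table entry starts with this character: copy it through
            out.append(content[pos])
            pos += 1
    return ''.join(out)

table = {'万': '萬', '万象': '萬象'}  # toy simplified-to-traditional table
qt = build_quicktable(table)
convert('万象更新', table, qt)  # → '萬象更新'
```

Characters that never begin a table key cost a single dict miss, which is why the module bothers to precompute and cache `quicktable` alongside the full `convtable`.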
AdvancedLangConv
/AdvancedLangConv-0.01.tar.gz/AdvancedLangConv-0.01/langconv/langconv.py
langconv.py
convtable = { u'ใ‘ฉ': u'ๅ„ธ', u'ใ“ฅ': u'ๅŠ', u'ใ”‰': u'ๅŠš', u'ใ–Š': u'ๅ™š', u'ใ–ž': u'ๅ–Ž', u'ใ›Ÿ': u'๐กžต', u'ใ› ': u'๐กขƒ', u'ใ›ฟ': u'๐ก น', u'ใŸ†': u'ใ ', u'ใง‘': u'ๆ’', u'ใงŸ': u'ๆ““', u'ใจซ': u'ใฉœ', u'ใฑฉ': u'ๆฎฐ', u'ใฑฎ': u'ๆฎจ', u'ใฒฟ': u'็€‡', u'ใณ ': u'ๆพพ', u'ใถ‰': u'้ธ‚', u'ใถถ': u'็‡ถ', u'ใถฝ': u'็…ฑ', u'ใบ': u'็ฑ', u'ใป': u'๐คซฉ', u'ใป˜': u'๐คชบ', u'ไ–': u'็žœ', u'ไ…‰': u'็จ', u'ไ‡ฒ': u'็ญด', u'ไŒถ': u'ไŠท', u'ไŒท': u'็ดฌ', u'ไŒธ': u'็ธณ', u'ไŒน': u'็ต…', u'ไŒบ': u'ไ‹™', u'ไŒผ': u'็ถ', u'ไŒฝ': u'็ถต', u'ไŒพ': u'ไ‹ป', u'ไ€': u'็นฟ', u'ไ': u'็นธ', u'ไ‘ฝ': u'๐ฆช™', u'ไ“•': u'่–ณ', u'ไ—–': u'่žฎ', u'ไ˜›': u'๐งž', u'ไ™Š': u'๐งœต', u'ไ™“': u'่ฅฌ', u'ไœฃ': u'่จข', u'ไœฅ': u'๐งฉ™', u'ไœง': u'่ญ…', u'ไ™': u'่ฒ™', u'ไžŒ': u'๐งตณ', u'ไž': u'ไผ', u'ไž': u'่ณฐ', u'ไข‚': u'๐จ‹ข', u'ไฅบ': u'้‡พ', u'ไฅฝ': u'้บ', u'ไฅฟ': u'๐จฏ…', u'ไฆ€': u'๐จฆซ', u'ไฆ': u'๐จงœ', u'ไฆƒ': u'้ฏ', u'ไฆ…': u'้ฅ', u'ไฉ„': u'้ฆ', u'ไญช': u'๐ฉžฏ', u'ไฏƒ': u'๐ฉฃ‘', u'ไฏ„': u'้จง', u'ไฏ…': u'ไฏ€', u'ไฒ': u'ไฑฝ', u'ไฒž': u'๐ฉถ˜', u'ไฒŸ': u'้ฎฃ', u'ไฒ ': u'้ฐ†', u'ไฒก': u'้ฐŒ', u'ไฒข': u'้ฐง', u'ไฒฃ': u'ไฑท', u'ได“': u'้ณพ', u'ได”': u'้ต', u'ได•': u'้ดท', u'ได–': u'้ถ„', u'ได—': u'้ถช', u'ได˜': u'้ทˆ', u'ได™': u'้ทฟ', u'ไธ‡': u'่ฌ', u'ไธŽ': u'่ˆ‡', u'ไธ“': u'ๅฐˆ', u'ไธš': u'ๆฅญ', u'ไธ›': u'ๅข', u'ไธœ': u'ๆฑ', u'ไธ': u'็ตฒ', u'ไธข': u'ไธŸ', u'ไธค': u'ๅ…ฉ', u'ไธฅ': u'ๅšด', u'ไธง': u'ๅ–ช', u'ไธช': u'ๅ€‹', u'ไธฐ': u'่ฑ', u'ไธด': u'่‡จ', u'ไธบ': u'็‚บ', u'ไธฝ': u'้บ—', u'ไธพ': u'่ˆ‰', u'ไน‰': u'็พฉ', u'ไนŒ': u'็ƒ', u'ไน': u'ๆจ‚', u'ไน”': u'ๅ–ฌ', u'ไน ': u'็ฟ’', u'ไนก': u'้„‰', u'ไนฆ': u'ๆ›ธ', u'ไนฐ': u'่ฒท', u'ไนฑ': u'ไบ‚', u'ไบ‰': u'็ˆญ', u'ไบŽ': u'ๆ–ผ', u'ไบ': u'่™ง', u'ไบ‘': u'้›ฒ', u'ไบš': u'ไบž', u'ไบง': u'็”ข', u'ไบฉ': u'็•', u'ไบฒ': u'่ฆช', u'ไบต': u'่คป', u'ไบธ': u'ๅšฒ', u'ไบฟ': u'ๅ„„', u'ไป…': u'ๅƒ…', u'ไปŽ': u'ๅพž', u'ไป‘': u'ไพ–', u'ไป“': u'ๅ€‰', u'ไปช': u'ๅ„€', u'ไปฌ': u'ๅ€‘', u'ไปท': u'ๅƒน', u'ไผ—': u'็œพ', u'ไผ˜': u'ๅ„ช', u'ไผš': u'ๆœƒ', u'ไผ›': u'ๅ‚ด', 
u'ไผž': u'ๅ‚˜', u'ไผŸ': u'ๅ‰', u'ไผ ': u'ๅ‚ณ', u'ไผฃ': u'ไฟ”', u'ไผค': u'ๅ‚ท', u'ไผฅ': u'ๅ€€', u'ไผฆ': u'ๅ€ซ', u'ไผง': u'ๅ‚–', u'ไผช': u'ๅฝ', u'ไผซ': u'ไฝ‡', u'ไฝ“': u'้ซ”', u'ไฝฅ': u'ๅƒ‰', u'ไพ ': u'ไฟ ', u'ไพฃ': u'ไพถ', u'ไพฅ': u'ๅƒฅ', u'ไพฆ': u'ๅต', u'ไพง': u'ๅด', u'ไพจ': u'ๅƒ‘', u'ไพฉ': u'ๅ„ˆ', u'ไพช': u'ๅ„•', u'ไพฌ': u'ๅ„‚', u'ไฟฃ': u'ไฟ', u'ไฟฆ': u'ๅ„”', u'ไฟจ': u'ๅ„ผ', u'ไฟฉ': u'ๅ€†', u'ไฟช': u'ๅ„ท', u'ไฟซ': u'ๅ€ˆ', u'ไฟญ': u'ๅ„‰', u'ๅ€บ': u'ๅ‚ต', u'ๅ€พ': u'ๅ‚พ', u'ๅฌ': u'ๅ‚ฏ', u'ๅป': u'ๅƒ‚', u'ๅพ': u'ๅƒจ', u'ๅฟ': u'ๅ„Ÿ', u'ๅ‚ฅ': u'ๅ„ป', u'ๅ‚ง': u'ๅ„', u'ๅ‚จ': u'ๅ„ฒ', u'ๅ‚ฉ': u'ๅ„บ', u'ๅ„ฟ': u'ๅ…’', u'ๅ…‘': u'ๅ…Œ', u'ๅ…–': u'ๅ…—', u'ๅ…š': u'้ปจ', u'ๅ…ฐ': u'่˜ญ', u'ๅ…ณ': u'้—œ', u'ๅ…ด': u'่ˆˆ', u'ๅ…น': u'่Œฒ', u'ๅ…ป': u'้คŠ', u'ๅ…ฝ': u'็ธ', u'ๅ†': u'ๅ›…', u'ๅ†…': u'ๅ…ง', u'ๅ†ˆ': u'ๅฒก', u'ๅ†Œ': u'ๅ†Š', u'ๅ†™': u'ๅฏซ', u'ๅ†›': u'่ป', u'ๅ†œ': u'่พฒ', u'ๅ†ฏ': u'้ฆฎ', u'ๅ†ฒ': u'ๆฒ–', u'ๅ†ณ': u'ๆฑบ', u'ๅ†ต': u'ๆณ', u'ๅ†ป': u'ๅ‡', u'ๅ‡€': u'ๅ‡ˆ', u'ๅ‡‰': u'ๆถผ', u'ๅ‡': u'ๆธ›', u'ๅ‡‘': u'ๆนŠ', u'ๅ‡›': u'ๅ‡œ', u'ๅ‡ ': u'ๅนพ', u'ๅ‡ค': u'้ณณ', u'ๅ‡ซ': u'้ณง', u'ๅ‡ญ': u'ๆ†‘', u'ๅ‡ฏ': u'ๅ‡ฑ', u'ๅ‡ป': u'ๆ“Š', u'ๅ‡ฟ': u'้‘ฟ', u'ๅˆ': u'่Šป', u'ๅˆ˜': u'ๅЉ', u'ๅˆ™': u'ๅ‰‡', u'ๅˆš': u'ๅ‰›', u'ๅˆ›': u'ๅ‰ต', u'ๅˆ ': u'ๅˆช', u'ๅˆซ': u'ๅˆฅ', u'ๅˆฌ': u'ๅ‰—', u'ๅˆญ': u'ๅ‰„', u'ๅˆน': u'ๅ‰Ž', u'ๅˆฝ': u'ๅŠŠ', u'ๅˆฟ': u'ๅŠŒ', u'ๅ‰€': u'ๅ‰ด', u'ๅ‰‚': u'ๅŠ‘', u'ๅ‰': u'ๅ‰ฎ', u'ๅ‰‘': u'ๅŠ', u'ๅ‰ฅ': u'ๅ‰', u'ๅ‰ง': u'ๅЇ', u'ๅŠ': u'ๅ‹ธ', u'ๅŠž': u'่พฆ', u'ๅŠก': u'ๅ‹™', u'ๅŠข': u'ๅ‹ฑ', u'ๅŠจ': u'ๅ‹•', u'ๅŠฑ': u'ๅ‹ต', u'ๅŠฒ': u'ๅ‹', u'ๅŠณ': u'ๅ‹ž', u'ๅŠฟ': u'ๅ‹ข', u'ๅ‹‹': u'ๅ‹›', u'ๅ‹š': u'ๅ‹ฉ', u'ๅŒ€': u'ๅ‹ป', u'ๅŒฆ': u'ๅŒญ', u'ๅŒฎ': u'ๅŒฑ', u'ๅŒบ': u'ๅ€', u'ๅŒป': u'้†ซ', u'ๅŽ': u'่ฏ', u'ๅ': u'ๅ”', u'ๅ•': u'ๅ–ฎ', u'ๅ–': u'่ณฃ', u'ๅข': u'็›ง', u'ๅค': u'้นต', u'ๅซ': u'่ก›', u'ๅด': u'ๅป', u'ๅบ': u'ๅทน', u'ๅŽ‚': u'ๅป ', u'ๅŽ…': u'ๅปณ', u'ๅކ': u'ๆญท', u'ๅމ': u'ๅŽฒ', u'ๅŽ‹': u'ๅฃ“', u'ๅŽŒ': u'ๅŽญ', u'ๅŽ': u'ๅŽ™', u'ๅސ': u'้พŽ', u'ๅŽ•': u'ๅป', u'ๅŽข': u'ๅป‚', 
u'ๅŽฃ': u'ๅŽด', u'ๅŽฆ': u'ๅปˆ', u'ๅŽจ': u'ๅปš', u'ๅŽฉ': u'ๅป„', u'ๅŽฎ': u'ๅป', u'ๅŽฟ': u'็ธฃ', u'ๅ': u'ๅ„', u'ๅ‚': u'ๅƒ', u'ๅŒ': u'้›™', u'ๅ‘': u'็™ผ', u'ๅ˜': u'่ฎŠ', u'ๅ™': u'ๆ•˜', u'ๅ ': u'็–Š', u'ๅถ': u'่‘‰', u'ๅท': u'่™Ÿ', u'ๅน': u'ๅ˜†', u'ๅฝ': u'ๅ˜ฐ', u'ๅŽ': u'ๅพŒ', u'ๅ“': u'ๅš‡', u'ๅ•': u'ๅ‘‚', u'ๅ—': u'ๅ—Ž', u'ๅฃ': u'ๅ”š', u'ๅจ': u'ๅ™ธ', u'ๅฌ': u'่ฝ', u'ๅฏ': u'ๅ•Ÿ', u'ๅด': u'ๅณ', u'ๅ‘': u'ๅถ', u'ๅ‘’': u'ๅ˜ธ', u'ๅ‘“': u'ๅ›ˆ', u'ๅ‘•': u'ๅ˜”', u'ๅ‘–': u'ๅšฆ', u'ๅ‘—': u'ๅ”„', u'ๅ‘˜': u'ๅ“ก', u'ๅ‘™': u'ๅ’ผ', u'ๅ‘›': u'ๅ—†', u'ๅ‘œ': u'ๅ—š', u'ๅ’': u'่ฉ ', u'ๅ’™': u'ๅšจ', u'ๅ’›': u'ๅš€', u'ๅ’': u'ๅ™', u'ๅ’ค': u'ๅ’', u'ๅ“': u'้Ÿฟ', u'ๅ“‘': u'ๅ•ž', u'ๅ“’': u'ๅ™ ', u'ๅ““': u'ๅ˜ต', u'ๅ“”': u'ๅ—ถ', u'ๅ“•': u'ๅ™ฆ', u'ๅ“—': u'ๅ˜ฉ', u'ๅ“™': u'ๅ™ฒ', u'ๅ“œ': u'ๅšŒ', u'ๅ“': u'ๅ™ฅ', u'ๅ“Ÿ': u'ๅ–ฒ', u'ๅ”›': u'ๅ˜œ', u'ๅ”': u'ๅ—Š', u'ๅ” ': u'ๅ˜ฎ', u'ๅ”ก': u'ๅ•ข', u'ๅ”ข': u'ๅ—ฉ', u'ๅ”ค': u'ๅ–š', u'ๅ•ง': u'ๅ˜–', u'ๅ•ฌ': u'ๅ—‡', u'ๅ•ญ': u'ๅ›€', u'ๅ•ฎ': u'ๅš™', u'ๅ•ด': u'ๅ˜ฝ', u'ๅ•ธ': u'ๅ˜ฏ', u'ๅ–ท': u'ๅ™ด', u'ๅ–ฝ': u'ๅ˜', u'ๅ–พ': u'ๅšณ', u'ๅ—ซ': u'ๅ›', u'ๅ—ณ': u'ๅ™ฏ', u'ๅ˜˜': u'ๅ™“', u'ๅ˜ค': u'ๅšถ', u'ๅ˜ฑ': u'ๅ›‘', u'ๅ™œ': u'ๅš•', u'ๅšฃ': u'ๅ›‚', u'ๅ›ข': u'ๅœ˜', u'ๅ›ญ': u'ๅœ’', u'ๅ›ฑ': u'ๅ›ช', u'ๅ›ด': u'ๅœ', u'ๅ›ต': u'ๅœ‡', u'ๅ›ฝ': u'ๅœ‹', u'ๅ›พ': u'ๅœ–', u'ๅœ†': u'ๅœ“', u'ๅœฃ': u'่–', u'ๅœน': u'ๅฃ™', u'ๅœบ': u'ๅ ด', u'ๅ': u'ๅฃž', u'ๅ—': u'ๅกŠ', u'ๅš': u'ๅ …', u'ๅ›': u'ๅฃ‡', u'ๅœ': u'ๅฃข', u'ๅ': u'ๅฃฉ', u'ๅž': u'ๅกข', u'ๅŸ': u'ๅขณ', u'ๅ ': u'ๅขœ', u'ๅž„': u'ๅฃŸ', u'ๅž…': u'ๅฃ ', u'ๅž†': u'ๅฃš', u'ๅž’': u'ๅฃ˜', u'ๅžฆ': u'ๅขพ', u'ๅžฉ': u'ๅ Š', u'ๅžซ': u'ๅขŠ', u'ๅžญ': u'ๅŸก', u'ๅžฑ': u'ๅฃ‹', u'ๅžฒ': u'ๅก', u'ๅžด': u'ๅ –', u'ๅŸ˜': u'ๅก’', u'ๅŸ™': u'ๅกค', u'ๅŸš': u'ๅ ', u'ๅŸฏ': u'ๅžต', u'ๅ ‘': u'ๅกน', u'ๅ •': u'ๅขฎ', u'ๅข™': u'็‰†', u'ๅฃฎ': u'ๅฃฏ', u'ๅฃฐ': u'่ฒ', u'ๅฃณ': u'ๆฎผ', u'ๅฃถ': u'ๅฃบ', u'ๅฃธ': u'ๅฃผ', u'ๅค„': u'่™•', u'ๅค‡': u'ๅ‚™', u'ๅค': u'ๅพฉ', u'ๅคŸ': u'ๅค ', u'ๅคด': u'้ ญ', u'ๅคธ': u'่ช‡', u'ๅคน': u'ๅคพ', u'ๅคบ': u'ๅฅช', 
u'ๅฅ': u'ๅฅฉ', u'ๅฅ‚': u'ๅฅ', u'ๅฅ‹': u'ๅฅฎ', u'ๅฅ–': u'็Ž', u'ๅฅฅ': u'ๅฅง', u'ๅฆ†': u'ๅฆ', u'ๅฆ‡': u'ๅฉฆ', u'ๅฆˆ': u'ๅชฝ', u'ๅฆฉ': u'ๅซต', u'ๅฆช': u'ๅซ—', u'ๅฆซ': u'ๅชฏ', u'ๅง—': u'ๅง', u'ๅงน': u'ๅฅผ', u'ๅจ„': u'ๅฉ', u'ๅจ…': u'ๅฉญ', u'ๅจ†': u'ๅฌˆ', u'ๅจ‡': u'ๅฌŒ', u'ๅจˆ': u'ๅญŒ', u'ๅจฑ': u'ๅจ›', u'ๅจฒ': u'ๅชง', u'ๅจด': u'ๅซป', u'ๅฉณ': u'ๅซฟ', u'ๅฉด': u'ๅฌฐ', u'ๅฉต': u'ๅฌ‹', u'ๅฉถ': u'ๅฌธ', u'ๅชช': u'ๅชผ', u'ๅซ’': u'ๅฌก', u'ๅซ”': u'ๅฌช', u'ๅซฑ': u'ๅฌ™', u'ๅฌท': u'ๅฌค', u'ๅญ™': u'ๅญซ', u'ๅญฆ': u'ๅญธ', u'ๅญช': u'ๅญฟ', u'ๅฎ': u'ๅฏง', u'ๅฎ': u'ๅฏถ', u'ๅฎž': u'ๅฏฆ', u'ๅฎ ': u'ๅฏต', u'ๅฎก': u'ๅฏฉ', u'ๅฎช': u'ๆ†ฒ', u'ๅฎซ': u'ๅฎฎ', u'ๅฎฝ': u'ๅฏฌ', u'ๅฎพ': u'่ณ“', u'ๅฏ': u'ๅฏข', u'ๅฏน': u'ๅฐ', u'ๅฏป': u'ๅฐ‹', u'ๅฏผ': u'ๅฐŽ', u'ๅฏฟ': u'ๅฃฝ', u'ๅฐ†': u'ๅฐ‡', u'ๅฐ”': u'็ˆพ', u'ๅฐ˜': u'ๅกต', u'ๅฐ': u'ๅ˜—', u'ๅฐง': u'ๅ ฏ', u'ๅฐด': u'ๅฐท', u'ๅฐธ': u'ๅฑ', u'ๅฐฝ': u'็›ก', u'ๅฑ‚': u'ๅฑค', u'ๅฑƒ': u'ๅฑ“', u'ๅฑ‰': u'ๅฑœ', u'ๅฑŠ': u'ๅฑ†', u'ๅฑž': u'ๅฑฌ', u'ๅฑก': u'ๅฑข', u'ๅฑฆ': u'ๅฑจ', u'ๅฑฟ': u'ๅถผ', u'ๅฒ': u'ๆญฒ', u'ๅฒ‚': u'่ฑˆ', u'ๅฒ–': u'ๅถ‡', u'ๅฒ—': u'ๅด—', u'ๅฒ˜': u'ๅณด', u'ๅฒ™': u'ๅถด', u'ๅฒš': u'ๅต', u'ๅฒ›': u'ๅณถ', u'ๅฒญ': u'ๅถบ', u'ๅฒฝ': u'ๅดฌ', u'ๅฒฟ': u'ๅท‹', u'ๅณ„': u'ๅถง', u'ๅณก': u'ๅณฝ', u'ๅณฃ': u'ๅถข', u'ๅณค': u'ๅถ ', u'ๅณฅ': u'ๅดข', u'ๅณฆ': u'ๅท’', u'ๅด‚': u'ๅถ—', u'ๅดƒ': u'ๅด', u'ๅด„': u'ๅถฎ', u'ๅดญ': u'ๅถ„', u'ๅต˜': u'ๅถธ', u'ๅตš': u'ๅถ”', u'ๅต': u'ๅถ', u'ๅท…': u'ๅท”', u'ๅทฉ': u'้ž', u'ๅทฏ': u'ๅทฐ', u'ๅธ': u'ๅนฃ', u'ๅธ…': u'ๅธฅ', u'ๅธˆ': u'ๅธซ', u'ๅธ': u'ๅนƒ', u'ๅธ': u'ๅธณ', u'ๅธ˜': u'็ฐพ', u'ๅธœ': u'ๅนŸ', u'ๅธฆ': u'ๅธถ', u'ๅธง': u'ๅน€', u'ๅธฎ': u'ๅนซ', u'ๅธฑ': u'ๅนฌ', u'ๅธป': u'ๅน˜', u'ๅธผ': u'ๅน—', u'ๅน‚': u'ๅ†ช', u'ๅนž': u'่ฅ†', u'ๅนถ': u'ไธฆ', u'ๅนฟ': u'ๅปฃ', u'ๅบ†': u'ๆ…ถ', u'ๅบ': u'ๅปฌ', u'ๅบ‘': u'ๅปก', u'ๅบ“': u'ๅบซ', u'ๅบ”': u'ๆ‡‰', u'ๅบ™': u'ๅปŸ', u'ๅบž': u'้พ', u'ๅบŸ': u'ๅปข', u'ๅปช': u'ๅปฉ', u'ๅผ€': u'้–‹', u'ๅผ‚': u'็•ฐ', u'ๅผƒ': u'ๆฃ„', u'ๅผ‘': u'ๅผ’', u'ๅผ ': u'ๅผต', u'ๅผฅ': u'ๅฝŒ', u'ๅผช': u'ๅผณ', u'ๅผฏ': u'ๅฝŽ', u'ๅผน': u'ๅฝˆ', 
u'ๅผบ': u'ๅผท', u'ๅฝ’': u'ๆญธ', u'ๅฝ“': u'็•ถ', u'ๅฝ•': u'้Œ„', u'ๅฝฆ': u'ๅฝฅ', u'ๅฝป': u'ๅพน', u'ๅพ„': u'ๅพ‘', u'ๅพ•': u'ๅพ ', u'ๅฟ†': u'ๆ†ถ', u'ๅฟ': u'ๆ‡บ', u'ๅฟง': u'ๆ†‚', u'ๅฟพ': u'ๆ„พ', u'ๆ€€': u'ๆ‡ท', u'ๆ€': u'ๆ…‹', u'ๆ€‚': u'ๆ…ซ', u'ๆ€ƒ': u'ๆ†ฎ', u'ๆ€„': u'ๆ…ช', u'ๆ€…': u'ๆ‚ต', u'ๆ€†': u'ๆ„ด', u'ๆ€œ': u'ๆ†', u'ๆ€ป': u'็ธฝ', u'ๆ€ผ': u'ๆ‡Ÿ', u'ๆ€ฟ': u'ๆ‡Œ', u'ๆ‹': u'ๆˆ€', u'ๆ’': u'ๆ†', u'ๆณ': u'ๆ‡‡', u'ๆถ': u'ๆƒก', u'ๆธ': u'ๆ…Ÿ', u'ๆน': u'ๆ‡จ', u'ๆบ': u'ๆ„ท', u'ๆป': u'ๆƒป', u'ๆผ': u'ๆƒฑ', u'ๆฝ': u'ๆƒฒ', u'ๆ‚ฆ': u'ๆ‚…', u'ๆ‚ซ': u'ๆ„จ', u'ๆ‚ฌ': u'ๆ‡ธ', u'ๆ‚ญ': u'ๆ…ณ', u'ๆ‚ฎ': u'ๆ‚ž', u'ๆ‚ฏ': u'ๆ†ซ', u'ๆƒŠ': u'้ฉš', u'ๆƒง': u'ๆ‡ผ', u'ๆƒจ': u'ๆ…˜', u'ๆƒฉ': u'ๆ‡ฒ', u'ๆƒซ': u'ๆ†Š', u'ๆƒฌ': u'ๆ„œ', u'ๆƒญ': u'ๆ…š', u'ๆƒฎ': u'ๆ†š', u'ๆƒฏ': u'ๆ…ฃ', u'ๆ„ ': u'ๆ…', u'ๆ„ค': u'ๆ†ค', u'ๆ„ฆ': u'ๆ†’', u'ๆ„ฟ': u'้ก˜', u'ๆ…‘': u'ๆ‡พ', u'ๆ‡‘': u'ๆ‡ฃ', u'ๆ‡’': u'ๆ‡ถ', u'ๆ‡”': u'ๆ‡', u'ๆˆ†': u'ๆˆ‡', u'ๆˆ‹': u'ๆˆ”', u'ๆˆ': u'ๆˆฒ', u'ๆˆ—': u'ๆˆง', u'ๆˆ˜': u'ๆˆฐ', u'ๆˆฌ': u'ๆˆฉ', u'ๆˆฏ': u'ๆˆฑ', u'ๆˆท': u'ๆˆถ', u'ๆ‰‘': u'ๆ’ฒ', u'ๆ‰ง': u'ๅŸท', u'ๆ‰ฉ': u'ๆ“ด', u'ๆ‰ช': u'ๆซ', u'ๆ‰ซ': u'ๆŽƒ', u'ๆ‰ฌ': u'ๆš', u'ๆ‰ฐ': u'ๆ“พ', u'ๆŠš': u'ๆ’ซ', u'ๆŠ›': u'ๆ‹‹', u'ๆŠŸ': u'ๆ‘ถ', u'ๆŠ ': u'ๆ‘ณ', u'ๆŠก': u'ๆŽ„', u'ๆŠข': u'ๆถ', u'ๆŠค': u'่ญท', u'ๆŠฅ': u'ๅ ฑ', u'ๆ‹…': u'ๆ“”', u'ๆ‹Ÿ': u'ๆ“ฌ', u'ๆ‹ข': u'ๆ”', u'ๆ‹ฃ': u'ๆ€', u'ๆ‹ฅ': u'ๆ“', u'ๆ‹ฆ': u'ๆ””', u'ๆ‹ง': u'ๆ“ฐ', u'ๆ‹จ': u'ๆ’ฅ', u'ๆ‹ฉ': u'ๆ“‡', u'ๆŒ‚': u'ๆŽ›', u'ๆŒš': u'ๆ‘ฏ', u'ๆŒ›': u'ๆ”ฃ', u'ๆŒœ': u'ๆŽ—', u'ๆŒ': u'ๆ’พ', u'ๆŒž': u'ๆ’ป', u'ๆŒŸ': u'ๆŒพ', u'ๆŒ ': u'ๆ’“', u'ๆŒก': u'ๆ“‹', u'ๆŒข': u'ๆ’Ÿ', u'ๆŒฃ': u'ๆŽ™', u'ๆŒค': u'ๆ“ ', u'ๆŒฅ': u'ๆฎ', u'ๆŒฆ': u'ๆ’', u'ๆ': u'ๆŒฉ', u'ๆž': u'ๆ’ˆ', u'ๆŸ': u'ๆ', u'ๆก': u'ๆ’ฟ', u'ๆข': u'ๆ›', u'ๆฃ': u'ๆ—', u'ๆฎ': u'ๆ“š', u'ๆŽณ': u'ๆ“„', u'ๆŽด': u'ๆ‘‘', u'ๆŽท': u'ๆ“ฒ', u'ๆŽธ': u'ๆ’ฃ', u'ๆŽบ': u'ๆ‘ป', u'ๆŽผ': u'ๆ‘œ', u'ๆฝ': u'ๆ”ฌ', u'ๆพ': u'ๆต', u'ๆฟ': u'ๆ’ณ', u'ๆ€': u'ๆ”™', u'ๆ': u'ๆ“ฑ', u'ๆ‚': u'ๆ‘Ÿ', u'ๆ…': u'ๆ”ช', u'ๆบ': u'ๆ”œ', u'ๆ‘„': u'ๆ”', u'ๆ‘…': u'ๆ”„', 
u'ๆ‘†': u'ๆ“บ', u'ๆ‘‡': u'ๆ–', u'ๆ‘ˆ': u'ๆ“ฏ', u'ๆ‘Š': u'ๆ”ค', u'ๆ’„': u'ๆ”–', u'ๆ’‘': u'ๆ’', u'ๆ’ต': u'ๆ”†', u'ๆ’ท': u'ๆ“ท', u'ๆ’ธ': u'ๆ“ผ', u'ๆ’บ': u'ๆ”›', u'ๆ“ž': u'ๆ“ป', u'ๆ”’': u'ๆ”ข', u'ๆ•Œ': u'ๆ•ต', u'ๆ•›': u'ๆ–‚', u'ๆ•ฐ': u'ๆ•ธ', u'ๆ–‹': u'้ฝ‹', u'ๆ–“': u'ๆ–•', u'ๆ–ฉ': u'ๆ–ฌ', u'ๆ–ญ': u'ๆ–ท', u'ๆ— ': u'็„ก', u'ๆ—ง': u'่ˆŠ', u'ๆ—ถ': u'ๆ™‚', u'ๆ—ท': u'ๆ› ', u'ๆ—ธ': u'ๆš˜', u'ๆ˜™': u'ๆ›‡', u'ๆ˜ผ': u'ๆ™', u'ๆ˜ฝ': u'ๆ›จ', u'ๆ˜พ': u'้กฏ', u'ๆ™‹': u'ๆ™‰', u'ๆ™’': u'ๆ›ฌ', u'ๆ™“': u'ๆ›‰', u'ๆ™”': u'ๆ›„', u'ๆ™•': u'ๆšˆ', u'ๆ™–': u'ๆš‰', u'ๆš‚': u'ๆšซ', u'ๆšง': u'ๆ›–', u'ๆœฏ': u'่ก“', u'ๆœบ': u'ๆฉŸ', u'ๆ€': u'ๆฎบ', u'ๆ‚': u'้›œ', u'ๆƒ': u'ๆฌŠ', u'ๆ†': u'ๆกฟ', u'ๆก': u'ๆข', u'ๆฅ': u'ไพ†', u'ๆจ': u'ๆฅŠ', u'ๆฉ': u'ๆฆช', u'ๆฐ': u'ๅ‚‘', u'ๆž': u'ๆฅต', u'ๆž„': u'ๆง‹', u'ๆžž': u'ๆจ…', u'ๆžข': u'ๆจž', u'ๆžฃ': u'ๆฃ—', u'ๆžฅ': u'ๆซช', u'ๆžง': u'ๆข˜', u'ๆžจ': u'ๆฃ–', u'ๆžช': u'ๆง', u'ๆžซ': u'ๆฅ“', u'ๆžญ': u'ๆขŸ', u'ๆŸœ': u'ๆซƒ', u'ๆŸ ': u'ๆชธ', u'ๆŸฝ': u'ๆช‰', u'ๆ €': u'ๆข”', u'ๆ …': u'ๆŸต', u'ๆ ‡': u'ๆจ™', u'ๆ ˆ': u'ๆฃง', u'ๆ ‰': u'ๆซ›', u'ๆ Š': u'ๆซณ', u'ๆ ‹': u'ๆฃŸ', u'ๆ Œ': u'ๆซจ', u'ๆ Ž': u'ๆซŸ', u'ๆ ': u'ๆฌ„', u'ๆ ‘': u'ๆจน', u'ๆ –': u'ๆฃฒ', u'ๆ ท': u'ๆจฃ', u'ๆ พ': u'ๆฌ’', u'ๆก ': u'ๆค', u'ๆกก': u'ๆฉˆ', u'ๆกข': u'ๆฅจ', u'ๆกฃ': u'ๆช”', u'ๆกค': u'ๆฆฟ', u'ๆกฅ': u'ๆฉ‹', u'ๆกฆ': u'ๆจบ', u'ๆกง': u'ๆชœ', u'ๆกจ': u'ๆงณ', u'ๆกฉ': u'ๆจ', u'ๆขฆ': u'ๅคข', u'ๆขผ': u'ๆชฎ', u'ๆขพ': u'ๆฃถ', u'ๆขฟ': u'ๆงค', u'ๆฃ€': u'ๆชข', u'ๆฃ': u'ๆขฒ', u'ๆฃ‚': u'ๆฌž', u'ๆค': u'ๆงจ', u'ๆคŸ': u'ๆซ', u'ๆค ': u'ๆงง', u'ๆคค': u'ๆฌ', u'ๆคญ': u'ๆฉข', u'ๆฅผ': u'ๆจ“', u'ๆฆ„': u'ๆฌ–', u'ๆฆ…': u'ๆฆฒ', u'ๆฆ‡': u'ๆซฌ', u'ๆฆˆ': u'ๆซš', u'ๆฆ‰': u'ๆซธ', u'ๆงš': u'ๆชŸ', u'ๆง›': u'ๆชป', u'ๆงŸ': u'ๆชณ', u'ๆง ': u'ๆซง', u'ๆจช': u'ๆฉซ', u'ๆจฏ': u'ๆชฃ', u'ๆจฑ': u'ๆซป', u'ๆฉฅ': u'ๆซซ', u'ๆฉฑ': u'ๆซฅ', u'ๆฉน': u'ๆซ“', u'ๆฉผ': u'ๆซž', u'ๆชฉ': u'ๆช', u'ๆฌข': u'ๆญก', u'ๆฌค': u'ๆญŸ', u'ๆฌง': u'ๆญ', u'ๆญผ': u'ๆฎฒ', u'ๆฎ': u'ๆญฟ', u'ๆฎ‡': u'ๆฎค', u'ๆฎ‹': u'ๆฎ˜', u'ๆฎ’': u'ๆฎž', u'ๆฎ“': u'ๆฎฎ', u'ๆฎš': u'ๆฎซ', 
u'ๆฎก': u'ๆฎฏ', u'ๆฎด': u'ๆฏ†', u'ๆฏ': u'ๆฏ€', u'ๆฏ‚': u'่ฝ‚', u'ๆฏ•': u'็•ข', u'ๆฏ™': u'ๆ–ƒ', u'ๆฏก': u'ๆฐˆ', u'ๆฏต': u'ๆฏฟ', u'ๆฐ‡': u'ๆฐŒ', u'ๆฐ”': u'ๆฐฃ', u'ๆฐข': u'ๆฐซ', u'ๆฐฉ': u'ๆฐฌ', u'ๆฐฒ': u'ๆฐณ', u'ๆฑ‡': u'ๅŒฏ', u'ๆฑ‰': u'ๆผข', u'ๆฑค': u'ๆนฏ', u'ๆฑน': u'ๆดถ', u'ๆฒŸ': u'ๆบ', u'ๆฒก': u'ๆฒ’', u'ๆฒฃ': u'็ƒ', u'ๆฒค': u'ๆผš', u'ๆฒฅ': u'็€', u'ๆฒฆ': u'ๆทช', u'ๆฒง': u'ๆป„', u'ๆฒฉ': u'ๆบˆ', u'ๆฒช': u'ๆปฌ', u'ๆณž': u'ๆฟ˜', u'ๆณช': u'ๆทš', u'ๆณถ': u'ๆพฉ', u'ๆณท': u'็€ง', u'ๆณธ': u'็€˜', u'ๆณบ': u'ๆฟผ', u'ๆณป': u'็€‰', u'ๆณผ': u'ๆฝ‘', u'ๆณฝ': u'ๆพค', u'ๆณพ': u'ๆถ‡', u'ๆด': u'ๆฝ”', u'ๆด’': u'็‘', u'ๆดผ': u'็ชช', u'ๆตƒ': u'ๆตน', u'ๆต…': u'ๆทบ', u'ๆต†': u'ๆผฟ', u'ๆต‡': u'ๆพ†', u'ๆตˆ': u'ๆนž', u'ๆตŠ': u'ๆฟ', u'ๆต‹': u'ๆธฌ', u'ๆต': u'ๆพฎ', u'ๆตŽ': u'ๆฟŸ', u'ๆต': u'็€', u'ๆต': u'ๆปป', u'ๆต‘': u'ๆธพ', u'ๆต’': u'ๆปธ', u'ๆต“': u'ๆฟƒ', u'ๆต”': u'ๆฝฏ', u'ๆถ‚': u'ๅก—', u'ๆถ›': u'ๆฟค', u'ๆถ': u'ๆพ‡', u'ๆถž': u'ๆทถ', u'ๆถŸ': u'ๆผฃ', u'ๆถ ': u'ๆฝฟ', u'ๆถก': u'ๆธฆ', u'ๆถฃ': u'ๆธ™', u'ๆถค': u'ๆปŒ', u'ๆถฆ': u'ๆฝค', u'ๆถง': u'ๆพ—', u'ๆถจ': u'ๆผฒ', u'ๆถฉ': u'ๆพ€', u'ๆธŠ': u'ๆทต', u'ๆธŒ': u'ๆทฅ', u'ๆธ': u'ๆผฌ', u'ๆธŽ': u'็€†', u'ๆธ': u'ๆผธ', u'ๆธ‘': u'ๆพ ', u'ๆธ”': u'ๆผ', u'ๆธ–': u'็€‹', u'ๆธ—': u'ๆปฒ', u'ๆธฉ': u'ๆบซ', u'ๆนพ': u'็ฃ', u'ๆนฟ': u'ๆฟ•', u'ๆบƒ': u'ๆฝฐ', u'ๆบ…': u'ๆฟบ', u'ๆบ†': u'ๆผต', u'ๆป—': u'ๆฝท', u'ๆปš': u'ๆปพ', u'ๆปž': u'ๆปฏ', u'ๆปŸ': u'็ง', u'ๆป ': u'็„', u'ๆปก': u'ๆปฟ', u'ๆปข': u'็€…', u'ๆปค': u'ๆฟพ', u'ๆปฅ': u'ๆฟซ', u'ๆปฆ': u'็ค', u'ๆปจ': u'ๆฟฑ', u'ๆปฉ': u'็˜', u'ๆปช': u'ๆพฆ', u'ๆผค': u'็ ', u'ๆฝ†': u'็€ ', u'ๆฝ‡': u'็€Ÿ', u'ๆฝ‹': u'็€ฒ', u'ๆฝ': u'ๆฟฐ', u'ๆฝœ': u'ๆฝ›', u'ๆฝด': u'็€ฆ', u'ๆพœ': u'็€พ', u'ๆฟ‘': u'็€จ', u'ๆฟ’': u'็€•', u'็': u'็', u'็ญ': u'ๆป…', u'็ฏ': u'็‡ˆ', u'็ต': u'้ˆ', u'็พ': u'็ฝ', u'็ฟ': u'็‡ฆ', u'็‚€': u'็…ฌ', u'็‚‰': u'็ˆ', u'็‚–': u'็‡‰', u'็‚œ': u'็…’', u'็‚': u'็†—', u'็‚น': u'้ปž', u'็‚ผ': u'็…‰', u'็‚ฝ': u'็†พ', u'็ƒ': u'็ˆ', u'็ƒ‚': u'็ˆ›', u'็ƒƒ': u'็ƒด', u'็ƒ›': u'็‡ญ', u'็ƒŸ': u'็…™', u'็ƒฆ': u'็…ฉ', 
u'็ƒง': u'็‡’', u'็ƒจ': u'็‡', u'็ƒฉ': u'็‡ด', u'็ƒซ': u'็‡™', u'็ƒฌ': u'็‡ผ', u'็ƒญ': u'็†ฑ', u'็„•': u'็…ฅ', u'็„–': u'็‡œ', u'็„˜': u'็‡พ', u'็…ด': u'็†…', u'็ˆฑ': u'ๆ„›', u'็ˆท': u'็ˆบ', u'็‰': u'็‰˜', u'็‰ฆ': u'ๆฐ‚', u'็‰ต': u'็‰ฝ', u'็‰บ': u'็Šง', u'็ŠŠ': u'็Šข', u'็Šถ': u'็‹€', u'็Šท': u'็ท', u'็Šธ': u'็', u'็Šน': u'็Œถ', u'็‹ˆ': u'็‹ฝ', u'็‹': u'็ฎ', u'็‹ž': u'็ฐ', u'็‹ฌ': u'็จ', u'็‹ญ': u'็‹น', u'็‹ฎ': u'็…', u'็‹ฏ': u'็ช', u'็‹ฐ': u'็Œ™', u'็‹ฑ': u'็„', u'็‹ฒ': u'็Œป', u'็Œƒ': u'็ซ', u'็ŒŽ': u'็ต', u'็Œ•': u'็ผ', u'็Œก': u'็Ž€', u'็Œช': u'่ฑฌ', u'็Œซ': u'่ฒ“', u'็Œฌ': u'่Ÿ', u'็Œฎ': u'็ป', u'็ญ': u'็บ', u'็Ž‘': u'็’ฃ', u'็Žš': u'็‘’', u'็Ž›': u'็‘ช', u'็Žฎ': u'็‘‹', u'็Žฏ': u'็’ฐ', u'็Žฐ': u'็พ', u'็Žฑ': u'็‘ฒ', u'็Žบ': u'็’ฝ', u'็': u'็บ', u'็‘': u'็“', u'็ฐ': u'็’ซ', u'็ฒ': u'็ฟ', u'็': u'็’‰', u'็': u'็‘ฃ', u'็ผ': u'็“Š', u'็‘ถ': u'็‘ค', u'็‘ท': u'็’ฆ', u'็’Ž': u'็“”', u'็“’': u'็“š', u'็“ฏ': u'็”Œ', u'็”ต': u'้›ป', u'็”ป': u'็•ซ', u'็•…': u'ๆšข', u'็•ด': u'็–‡', u'็––': u'็™ค', u'็–—': u'็™‚', u'็–Ÿ': u'็˜ง', u'็– ': u'็™˜', u'็–ก': u'็˜', u'็–ฌ': u'็™ง', u'็–ญ': u'็˜ฒ', u'็–ฎ': u'็˜ก', u'็–ฏ': u'็˜‹', u'็–ฑ': u'็šฐ', u'็–ด': u'็—พ', u'็—ˆ': u'็™ฐ', u'็—‰': u'็—™', u'็—’': u'็™ข', u'็—–': u'็˜‚', u'็—จ': u'็™†', u'็—ช': u'็˜“', u'็—ซ': u'็™‡', u'็˜…': u'็™‰', u'็˜†': u'็˜ฎ', u'็˜—': u'็˜ž', u'็˜˜': u'็˜บ', u'็˜ช': u'็™Ÿ', u'็˜ซ': u'็™ฑ', u'็˜พ': u'็™ฎ', u'็˜ฟ': u'็™ญ', u'็™ž': u'็™ฉ', u'็™ฃ': u'็™ฌ', u'็™ซ': u'็™ฒ', u'็š‘': u'็šš', u'็šฑ': u'็šบ', u'็šฒ': u'็šธ', u'็›': u'็›ž', u'็›': u'้นฝ', u'็›‘': u'็›ฃ', u'็›–': u'่“‹', u'็›—': u'็›œ', u'็›˜': u'็›ค', u'็œ': u'็ž˜', u'็œฆ': u'็œฅ', u'็œฌ': u'็Ÿ“', u'็': u'็œ', u'็': u'็ž', u'็‘': u'็žผ', u'็ž†': u'็žถ', u'็ž’': u'็žž', u'็žฉ': u'็Ÿš', u'็Ÿซ': u'็Ÿฏ', u'็Ÿถ': u'็ฃฏ', u'็Ÿพ': u'็คฌ', u'็Ÿฟ': u'็คฆ', u'็ €': u'็ขญ', u'็ ': u'็ขผ', u'็ –': u'็ฃš', u'็ —': u'็กจ', u'็ š': u'็กฏ', u'็ œ': u'็ขธ', u'็ บ': u'็คช', u'็ ป': u'็คฑ', u'็ พ': u'็คซ', u'็ก€': u'็คŽ', 
u'็ก': u'็กœ', u'็ก•': u'็ขฉ', u'็ก–': u'็กค', u'็ก—': u'็ฃฝ', u'็ก™': u'็ฃ‘', u'็กฎ': u'็ขบ', u'็กท': u'็ค†', u'็ข': u'็ค™', u'็ข›': u'็ฃง', u'็ขœ': u'็ฃฃ', u'็ขฑ': u'้นผ', u'็คผ': u'็ฆฎ', u'็ฅƒ': u'็ฆก', u'็ฅŽ': u'็ฆ•', u'็ฅข': u'็ฆฐ', u'็ฅฏ': u'็ฆŽ', u'็ฅท': u'็ฆฑ', u'็ฅธ': u'็ฆ', u'็ฆ€': u'็จŸ', u'็ฆ„': u'็ฅฟ', u'็ฆ…': u'็ฆช', u'็ฆป': u'้›ข', u'็งƒ': u'็ฆฟ', u'็ง†': u'็จˆ', u'็ง': u'็จฎ', u'็งฏ': u'็ฉ', u'็งฐ': u'็จฑ', u'็งฝ': u'็ฉข', u'็งพ': u'็ฉ ', u'็จ†': u'็ฉญ', u'็จŽ': u'็จ…', u'็จฃ': u'็ฉŒ', u'็จณ': u'็ฉฉ', u'็ฉ‘': u'็ฉก', u'็ฉท': u'็ชฎ', u'็ชƒ': u'็ซŠ', u'็ช': u'็ซ…', u'็ชŽ': u'็ชต', u'็ช‘': u'็ชฏ', u'็ชœ': u'็ซ„', u'็ช': u'็ชฉ', u'็ชฅ': u'็ชบ', u'็ชฆ': u'็ซ‡', u'็ชญ': u'็ชถ', u'็ซ–': u'่ฑŽ', u'็ซž': u'็ซถ', u'็ฌƒ': u'็ฏค', u'็ฌ‹': u'็ญ', u'็ฌ”': u'็ญ†', u'็ฌ•': u'็ญง', u'็ฌบ': u'็ฎ‹', u'็ฌผ': u'็ฑ ', u'็ฌพ': u'็ฑฉ', u'็ญ‘': u'็ฏ‰', u'็ญš': u'็ฏณ', u'็ญ›': u'็ฏฉ', u'็ญœ': u'็ฐน', u'็ญ': u'็ฎ', u'็ญน': u'็ฑŒ', u'็ญผ': u'็ฏ”', u'็ญพ': u'็ฐฝ', u'็ฎ€': u'็ฐก', u'็ฎ“': u'็ฑ™', u'็ฎฆ': u'็ฐ€', u'็ฎง': u'็ฏ‹', u'็ฎจ': u'็ฑœ', u'็ฎฉ': u'็ฑฎ', u'็ฎช': u'็ฐž', u'็ฎซ': u'็ฐซ', u'็ฏ‘': u'็ฐฃ', u'็ฏ“': u'็ฐ', u'็ฏฎ': u'็ฑƒ', u'็ฏฑ': u'็ฑฌ', u'็ฐ–': u'็ฑช', u'็ฑ': u'็ฑŸ', u'็ฑด': u'็ณด', u'็ฑป': u'้กž', u'็ฑผ': u'็งˆ', u'็ฒœ': u'็ณถ', u'็ฒ': u'็ณฒ', u'็ฒค': u'็ฒต', u'็ฒช': u'็ณž', u'็ฒฎ': u'็ณง', u'็ณ': u'็ณ', u'็ณ‡': u'้คฑ', u'็ดง': u'็ทŠ', u'็ตท': u'็ธถ', u'็บŸ': u'็ณน', u'็บ ': u'็ณพ', u'็บก': u'็ด†', u'็บข': u'็ด…', u'็บฃ': u'็ด‚', u'็บค': u'็บ–', u'็บฅ': u'็ด‡', u'็บฆ': u'็ด„', u'็บง': u'็ดš', u'็บจ': u'็ดˆ', u'็บฉ': u'็บŠ', u'็บช': u'็ด€', u'็บซ': u'็ด‰', u'็บฌ': u'็ทฏ', u'็บญ': u'็ดœ', u'็บฎ': u'็ด˜', u'็บฏ': u'็ด”', u'็บฐ': u'็ด•', u'็บฑ': u'็ด—', u'็บฒ': u'็ถฑ', u'็บณ': u'็ด', u'็บด': u'็ด', u'็บต': u'็ธฑ', u'็บถ': u'็ถธ', u'็บท': u'็ด›', u'็บธ': u'็ด™', u'็บน': u'็ด‹', u'็บบ': u'็ดก', u'็บป': u'็ดต', u'็บผ': u'็ด–', u'็บฝ': u'็ด', u'็บพ': u'็ด“', u'็บฟ': u'็ทš', u'็ป€': u'็ดบ', u'็ป': u'็ดฒ', u'็ป‚': u'็ดฑ', u'็ปƒ': u'็ทด', u'็ป„': u'็ต„', 
u'็ป…': u'็ดณ', u'็ป†': u'็ดฐ', u'็ป‡': u'็น”', u'็ปˆ': u'็ต‚', u'็ป‰': u'็ธ', u'็ปŠ': u'็ต†', u'็ป‹': u'็ดผ', u'็ปŒ': u'็ต€', u'็ป': u'็ดน', u'็ปŽ': u'็นน', u'็ป': u'็ถ“', u'็ป': u'็ดฟ', u'็ป‘': u'็ถ', u'็ป’': u'็ตจ', u'็ป“': u'็ต', u'็ป”': u'็ต', u'็ป•': u'็นž', u'็ป–': u'็ตฐ', u'็ป—': u'็ตŽ', u'็ป˜': u'็นช', u'็ป™': u'็ตฆ', u'็ปš': u'็ตข', u'็ป›': u'็ตณ', u'็ปœ': u'็ตก', u'็ป': u'็ต•', u'็ปž': u'็ตž', u'็ปŸ': u'็ตฑ', u'็ป ': u'็ถ†', u'็ปก': u'็ถƒ', u'็ปข': u'็ตน', u'็ปฃ': u'็ถ‰', u'็ปค': u'็ถŒ', u'็ปฅ': u'็ถ', u'็ปฆ': u'็ต›', u'็ปง': u'็นผ', u'็ปจ': u'็ถˆ', u'็ปฉ': u'็ธพ', u'็ปช': u'็ท’', u'็ปซ': u'็ถพ', u'็ปฌ': u'็ท“', u'็ปญ': u'็บŒ', u'็ปฎ': u'็ถบ', u'็ปฏ': u'็ท‹', u'็ปฐ': u'็ถฝ', u'็ปฑ': u'็ท”', u'็ปฒ': u'็ท„', u'็ปณ': u'็นฉ', u'็ปด': u'็ถญ', u'็ปต': u'็ถฟ', u'็ปถ': u'็ถฌ', u'็ปท': u'็ถณ', u'็ปธ': u'็ถข', u'็ปน': u'็ถฏ', u'็ปบ': u'็ถน', u'็ปป': u'็ถฃ', u'็ปผ': u'็ถœ', u'็ปฝ': u'็ถป', u'็ปพ': u'็ถฐ', u'็ปฟ': u'็ถ ', u'็ผ€': u'็ถด', u'็ผ': u'็ท‡', u'็ผ‚': u'็ท™', u'็ผƒ': u'็ท—', u'็ผ„': u'็ท˜', u'็ผ…': u'็ทฌ', u'็ผ†': u'็บœ', u'็ผ‡': u'็ทน', u'็ผˆ': u'็ทฒ', u'็ผ‰': u'็ท', u'็ผŠ': u'็ธ•', u'็ผ‹': u'็นข', u'็ผŒ': u'็ทฆ', u'็ผ': u'็ถž', u'็ผŽ': u'็ทž', u'็ผ': u'็ทถ', u'็ผ‘': u'็ทฑ', u'็ผ’': u'็ธ‹', u'็ผ“': u'็ทฉ', u'็ผ”': u'็ท ', u'็ผ•': u'็ธท', u'็ผ–': u'็ทจ', u'็ผ—': u'็ทก', u'็ผ˜': u'็ทฃ', u'็ผ™': u'็ธ‰', u'็ผš': u'็ธ›', u'็ผ›': u'็ธŸ', u'็ผœ': u'็ธ', u'็ผ': u'็ธซ', u'็ผž': u'็ธ—', u'็ผŸ': u'็ธž', u'็ผ ': u'็บ', u'็ผก': u'็ธญ', u'็ผข': u'็ธŠ', u'็ผฃ': u'็ธ‘', u'็ผค': u'็นฝ', u'็ผฅ': u'็ธน', u'็ผฆ': u'็ธต', u'็ผง': u'็ธฒ', u'็ผจ': u'็บ“', u'็ผฉ': u'็ธฎ', u'็ผช': u'็น†', u'็ผซ': u'็น…', u'็ผฌ': u'็บˆ', u'็ผญ': u'็นš', u'็ผฎ': u'็น•', u'็ผฏ': u'็น’', u'็ผฐ': u'้Ÿ', u'็ผฑ': u'็นพ', u'็ผฒ': u'็นฐ', u'็ผณ': u'็นฏ', u'็ผด': u'็นณ', u'็ผต': u'็บ˜', u'็ฝ‚': u'็ฝŒ', u'็ฝ‘': u'็ถฒ', u'็ฝ—': u'็พ…', u'็ฝš': u'็ฝฐ', u'็ฝข': u'็ฝท', u'็ฝด': u'็พ†', u'็พ': u'็พˆ', u'็พŸ': u'็พฅ', u'็พก': u'็พจ', u'็ฟ˜': u'็ฟน', u'่€ข': u'่€ฎ', u'่€ง': u'่€ฌ', u'่€ธ': u'่ณ', 
u'่€ป': u'ๆฅ', u'่‚': u'่ถ', u'่‹': u'่พ', u'่Œ': u'่ท', u'่': u'่น', u'่”': u'่ฏ', u'่ฉ': u'่ต', u'่ช': u'่ฐ', u'่‚ƒ': u'่‚…', u'่‚ ': u'่…ธ', u'่‚ค': u'่†š', u'่‚ฎ': u'้ชฏ', u'่‚ด': u'้คš', u'่‚พ': u'่…Ž', u'่‚ฟ': u'่…ซ', u'่ƒ€': u'่„น', u'่ƒ': u'่„…', u'่ƒ†': u'่†ฝ', u'่ƒœ': u'ๅ‹', u'่ƒง': u'ๆœง', u'่ƒจ': u'่…–', u'่ƒช': u'่‡š', u'่ƒซ': u'่„›', u'่ƒถ': u'่† ', u'่„‰': u'่„ˆ', u'่„': u'่†พ', u'่„': u'่‡Ÿ', u'่„': u'่‡', u'่„‘': u'่…ฆ', u'่„“': u'่†ฟ', u'่„”': u'่‡ ', u'่„š': u'่…ณ', u'่„ฑ': u'่„ซ', u'่„ถ': u'่…ก', u'่„ธ': u'่‡‰', u'่…Š': u'่‡˜', u'่…ญ': u'้ฝถ', u'่…ป': u'่†ฉ', u'่…ผ': u'้ฆ', u'่…ฝ': u'่†ƒ', u'่…พ': u'้จฐ', u'่†‘': u'่‡', u'่‡œ': u'่‡ข', u'่ˆ†': u'่ผฟ', u'่ˆฃ': u'่‰ค', u'่ˆฐ': u'่‰ฆ', u'่ˆฑ': u'่‰™', u'่ˆป': u'่‰ซ', u'่‰ฐ': u'่‰ฑ', u'่‰ณ': u'่‰ท', u'่‰บ': u'่—', u'่Š‚': u'็ฏ€', u'่Šˆ': u'็พ‹', u'่Š—': u'่–Œ', u'่Šœ': u'่•ช', u'่Šฆ': u'่˜†', u'่‹': u'่“ฏ', u'่‹‡': u'่‘ฆ', u'่‹ˆ': u'่—ถ', u'่‹‹': u'่Žง', u'่‹Œ': u'่‡', u'่‹': u'่’ผ', u'่‹Ž': u'่‹ง', u'่‹': u'่˜‡', u'่‹ง': u'่–ด', u'่‹น': u'่˜‹', u'่ŒŽ': u'่Ž–', u'่Œ': u'่˜ข', u'่Œ‘': u'่”ฆ', u'่Œ”': u'ๅก‹', u'่Œ•': u'็…ข', u'่Œง': u'็นญ', u'่†': u'่Š', u'่': u'่–ฆ', u'่™': u'่–˜', u'่š': u'่Žข', u'่›': u'่•˜', u'่œ': u'่“ฝ', u'่ž': u'่•Ž', u'่Ÿ': u'่–ˆ', u'่ ': u'่–บ', u'่ก': u'็›ช', u'่ฃ': u'ๆฆฎ', u'่ค': u'่‘ท', u'่ฅ': u'ๆปŽ', u'่ฆ': u'็Š–', u'่ง': u'็†’', u'่จ': u'่•', u'่ฉ': u'่—Ž', u'่ช': u'่“€', u'่ซ': u'่”ญ', u'่ฌ': u'่•’', u'่ญ': u'่‘’', u'่ฎ': u'่‘ค', u'่ฏ': u'่‘ฏ', u'่Ž…': u'่’ž', u'่Žฑ': u'่Š', u'่Žฒ': u'่“ฎ', u'่Žณ': u'่’”', u'่Žด': u'่ต', u'่Žถ': u'่–Ÿ', u'่Žท': u'็ฒ', u'่Žธ': u'่••', u'่Žน': u'็‘ฉ', u'่Žบ': u'้ถฏ', u'่Žผ': u'่’“', u'่': u'่˜ฟ', u'่ค': u'่žข', u'่ฅ': u'็‡Ÿ', u'่ฆ': u'็ธˆ', u'่ง': u'่•ญ', u'่จ': u'่–ฉ', u'่‘ฑ': u'่”ฅ', u'่’‡': u'่•†', u'่’‰': u'่•ข', u'่’‹': u'่”ฃ', u'่’Œ': u'่”ž', u'่“': u'่—', u'่“Ÿ': u'่–Š', u'่“ ': u'่˜บ', u'่“ฃ': u'่•ท', u'่“ฅ': u'้Žฃ', u'่“ฆ': u'้ฉ€', u'่”‚': u'่™†', u'่”ท': u'่–”', 
u'่”น': u'่˜ž', u'่”บ': u'่—บ', u'่”ผ': u'่—น', u'่•ฐ': u'่–€', u'่•ฒ': u'่˜„', u'่•ด': u'่˜Š', u'่–ฎ': u'่—ช', u'่—“': u'่˜š', u'่˜–': u'ๆซฑ', u'่™': u'่™œ', u'่™‘': u'ๆ…ฎ', u'่™š': u'่™›', u'่™ซ': u'่Ÿฒ', u'่™ฌ': u'่™ฏ', u'่™ฎ': u'่Ÿฃ', u'่™ฝ': u'้›–', u'่™พ': u'่ฆ', u'่™ฟ': u'่ †', u'่š€': u'่•', u'่š': u'่Ÿป', u'่š‚': u'่žž', u'่š•': u'่ ถ', u'่šฌ': u'่œ†', u'่›Š': u'่ ฑ', u'่›Ž': u'่ ฃ', u'่›': u'่Ÿถ', u'่›ฎ': u'่ ป', u'่›ฐ': u'่Ÿ„', u'่›ฑ': u'่›บ', u'่›ฒ': u'่Ÿฏ', u'่›ณ': u'่ž„', u'่›ด': u'่ ', u'่œ•': u'่›ป', u'่œ—': u'่ธ', u'่œก': u'่ Ÿ', u'่‡': u'่ …', u'่ˆ': u'่Ÿˆ', u'่‰': u'่Ÿฌ', u'่Ž': u'่ ', u'่ผ': u'่žป', u'่พ': u'่ ‘', u'่ž€': u'่žฟ', u'่žจ': u'่ŸŽ', u'่Ÿ': u'่ จ', u'่ก…': u'้‡', u'่ก”': u'้Šœ', u'่กฅ': u'่ฃœ', u'่กฌ': u'่ฅฏ', u'่กฎ': u'่ขž', u'่ข„': u'่ฅ–', u'่ข…': u'่ฃŠ', u'่ข†': u'่ค˜', u'่ขœ': u'่ฅช', u'่ขญ': u'่ฅฒ', u'่ขฏ': u'่ฅ', u'่ฃ…': u'่ฃ', u'่ฃ†': u'่ฅ ', u'่ฃˆ': u'่คŒ', u'่ฃข': u'่คณ', u'่ฃฃ': u'่ฅ', u'่ฃค': u'่คฒ', u'่ฃฅ': u'่ฅ‡', u'่ค›': u'่คธ', u'่คด': u'่ฅค', u'่ง': u'่ฆ‹', u'่ง‚': u'่ง€', u'่งƒ': u'่ฆŽ', u'่ง„': u'่ฆ', u'่ง…': u'่ฆ“', u'่ง†': u'่ฆ–', u'่ง‡': u'่ฆ˜', u'่งˆ': u'่ฆฝ', u'่ง‰': u'่ฆบ', u'่งŠ': u'่ฆฌ', u'่ง‹': u'่ฆก', u'่งŒ': u'่ฆฟ', u'่ง': u'่ฆฅ', u'่งŽ': u'่ฆฆ', u'่ง': u'่ฆฏ', u'่ง': u'่ฆฒ', u'่ง‘': u'่ฆท', u'่งž': u'่งด', u'่งฆ': u'่งธ', u'่งฏ': u'่งถ', u'่จš': u'่ชพ', u'่ช‰': u'่ญฝ', u'่ชŠ': u'่ฌ„', u'่ฎ ': u'่จ', u'่ฎก': u'่จˆ', u'่ฎข': u'่จ‚', u'่ฎฃ': u'่จƒ', u'่ฎค': u'่ช', u'่ฎฅ': u'่ญ', u'่ฎฆ': u'่จ', u'่ฎง': u'่จŒ', u'่ฎจ': u'่จŽ', u'่ฎฉ': u'่ฎ“', u'่ฎช': u'่จ•', u'่ฎซ': u'่จ–', u'่ฎฌ': u'่จ—', u'่ฎญ': u'่จ“', u'่ฎฎ': u'่ญฐ', u'่ฎฏ': u'่จŠ', u'่ฎฐ': u'่จ˜', u'่ฎฑ': u'่จ’', u'่ฎฒ': u'่ฌ›', u'่ฎณ': u'่ซฑ', u'่ฎด': u'่ฌณ', u'่ฎต': u'่ฉŽ', u'่ฎถ': u'่จ', u'่ฎท': u'่จฅ', u'่ฎธ': u'่จฑ', u'่ฎน': u'่จ›', u'่ฎบ': u'่ซ–', u'่ฎป': u'่จฉ', u'่ฎผ': u'่จŸ', u'่ฎฝ': u'่ซท', u'่ฎพ': u'่จญ', u'่ฎฟ': u'่จช', u'่ฏ€': u'่จฃ', u'่ฏ': u'่ญ‰', u'่ฏ‚': u'่ฉ', u'่ฏƒ': u'่จถ', u'่ฏ„': u'่ฉ•', u'่ฏ…': u'่ฉ›', 
u'่ฏ†': u'่ญ˜', u'่ฏ‡': u'่ฉ—', u'่ฏˆ': u'่ฉ', u'่ฏ‰': u'่จด', u'่ฏŠ': u'่จบ', u'่ฏ‹': u'่ฉ†', u'่ฏŒ': u'่ฌ…', u'่ฏ': u'่ฉž', u'่ฏŽ': u'่ฉ˜', u'่ฏ': u'่ฉ”', u'่ฏ': u'่ฉ–', u'่ฏ‘': u'่ญฏ', u'่ฏ’': u'่ฉ’', u'่ฏ“': u'่ช†', u'่ฏ”': u'่ช„', u'่ฏ•': u'่ฉฆ', u'่ฏ–': u'่ฉฟ', u'่ฏ—': u'่ฉฉ', u'่ฏ˜': u'่ฉฐ', u'่ฏ™': u'่ฉผ', u'่ฏš': u'่ช ', u'่ฏ›': u'่ช…', u'่ฏœ': u'่ฉต', u'่ฏ': u'่ฉฑ', u'่ฏž': u'่ช•', u'่ฏŸ': u'่ฉฌ', u'่ฏ ': u'่ฉฎ', u'่ฏก': u'่ฉญ', u'่ฏข': u'่ฉข', u'่ฏฃ': u'่ฉฃ', u'่ฏค': u'่ซ', u'่ฏฅ': u'่ฉฒ', u'่ฏฆ': u'่ฉณ', u'่ฏง': u'่ฉซ', u'่ฏจ': u'่ซข', u'่ฏฉ': u'่ฉก', u'่ฏช': u'่ญธ', u'่ฏซ': u'่ชก', u'่ฏฌ': u'่ชฃ', u'่ฏญ': u'่ชž', u'่ฏฎ': u'่ชš', u'่ฏฏ': u'่ชค', u'่ฏฐ': u'่ชฅ', u'่ฏฑ': u'่ช˜', u'่ฏฒ': u'่ชจ', u'่ฏณ': u'่ช‘', u'่ฏด': u'่ชช', u'่ฏต': u'่ชฆ', u'่ฏถ': u'่ช’', u'่ฏท': u'่ซ‹', u'่ฏธ': u'่ซธ', u'่ฏน': u'่ซ', u'่ฏบ': u'่ซพ', u'่ฏป': u'่ฎ€', u'่ฏผ': u'่ซ‘', u'่ฏฝ': u'่ชน', u'่ฏพ': u'่ชฒ', u'่ฏฟ': u'่ซ‰', u'่ฐ€': u'่ซ›', u'่ฐ': u'่ชฐ', u'่ฐ‚': u'่ซ—', u'่ฐƒ': u'่ชฟ', u'่ฐ„': u'่ซ‚', u'่ฐ…': u'่ซ’', u'่ฐ†': u'่ซ„', u'่ฐ‡': u'่ชถ', u'่ฐˆ': u'่ซ‡', u'่ฐŠ': u'่ชผ', u'่ฐ‹': u'่ฌ€', u'่ฐŒ': u'่ซถ', u'่ฐ': u'่ซœ', u'่ฐŽ': u'่ฌŠ', u'่ฐ': u'่ซซ', u'่ฐ': u'่ซง', u'่ฐ‘': u'่ฌ”', u'่ฐ’': u'่ฌ', u'่ฐ“': u'่ฌ‚', u'่ฐ”': u'่ซค', u'่ฐ•': u'่ซญ', u'่ฐ–': u'่ซผ', u'่ฐ—': u'่ฎ’', u'่ฐ˜': u'่ซฎ', u'่ฐ™': u'่ซณ', u'่ฐš': u'่ซบ', u'่ฐ›': u'่ซฆ', u'่ฐœ': u'่ฌŽ', u'่ฐ': u'่ซž', u'่ฐž': u'่ซ', u'่ฐŸ': u'่ฌจ', u'่ฐ ': u'่ฎœ', u'่ฐก': u'่ฌ–', u'่ฐข': u'่ฌ', u'่ฐฃ': u'่ฌ ', u'่ฐค': u'่ฌ—', u'่ฐฅ': u'่ฌš', u'่ฐฆ': u'่ฌ™', u'่ฐง': u'่ฌ', u'่ฐจ': u'่ฌน', u'่ฐฉ': u'่ฌพ', u'่ฐช': u'่ฌซ', u'่ฐซ': u'่ญพ', u'่ฐฌ': u'่ฌฌ', u'่ฐญ': u'่ญš', u'่ฐฎ': u'่ญ–', u'่ฐฏ': u'่ญ™', u'่ฐฐ': u'่ฎ•', u'่ฐฑ': u'่ญœ', u'่ฐฒ': u'่ญŽ', u'่ฐณ': u'่ฎž', u'่ฐด': u'่ญด', u'่ฐต': u'่ญซ', u'่ฐถ': u'่ฎ–', u'่ฑฎ': u'่ฑถ', u'่ด': u'่ฒ', u'่ดž': u'่ฒž', u'่ดŸ': u'่ฒ ', u'่ด ': u'่ฒŸ', u'่ดก': u'่ฒข', u'่ดข': u'่ฒก', u'่ดฃ': u'่ฒฌ', u'่ดค': u'่ณข', u'่ดฅ': u'ๆ•—', u'่ดฆ': u'่ณฌ', u'่ดง': u'่ฒจ', u'่ดจ': u'่ณช', 
u'่ดฉ': u'่ฒฉ', u'่ดช': u'่ฒช', u'่ดซ': u'่ฒง', u'่ดฌ': u'่ฒถ', u'่ดญ': u'่ณผ', u'่ดฎ': u'่ฒฏ', u'่ดฏ': u'่ฒซ', u'่ดฐ': u'่ฒณ', u'่ดฑ': u'่ณค', u'่ดฒ': u'่ณ', u'่ดณ': u'่ฒฐ', u'่ดด': u'่ฒผ', u'่ดต': u'่ฒด', u'่ดถ': u'่ฒบ', u'่ดท': u'่ฒธ', u'่ดธ': u'่ฒฟ', u'่ดน': u'่ฒป', u'่ดบ': u'่ณ€', u'่ดป': u'่ฒฝ', u'่ดผ': u'่ณŠ', u'่ดฝ': u'่ด„', u'่ดพ': u'่ณˆ', u'่ดฟ': u'่ณ„', u'่ต€': u'่ฒฒ', u'่ต': u'่ณƒ', u'่ต‚': u'่ณ‚', u'่ตƒ': u'่ด“', u'่ต„': u'่ณ‡', u'่ต…': u'่ณ…', u'่ต†': u'่ด', u'่ต‡': u'่ณ•', u'่ตˆ': u'่ณ‘', u'่ต‰': u'่ณš', u'่ตŠ': u'่ณ’', u'่ต‹': u'่ณฆ', u'่ตŒ': u'่ณญ', u'่ต': u'้ฝŽ', u'่ตŽ': u'่ด–', u'่ต': u'่ณž', u'่ต': u'่ณœ', u'่ต‘': u'่ด”', u'่ต’': u'่ณ™', u'่ต“': u'่ณก', u'่ต”': u'่ณ ', u'่ต•': u'่ณง', u'่ต–': u'่ณด', u'่ต—': u'่ณต', u'่ต˜': u'่ด…', u'่ต™': u'่ณป', u'่ตš': u'่ณบ', u'่ต›': u'่ณฝ', u'่ตœ': u'่ณพ', u'่ต': u'่ด—', u'่ตž': u'่ดŠ', u'่ตŸ': u'่ด‡', u'่ต ': u'่ดˆ', u'่ตก': u'่ด', u'่ตข': u'่ด', u'่ตฃ': u'่ด›', u'่ตช': u'่ตฌ', u'่ตต': u'่ถ™', u'่ตถ': u'่ถ•', u'่ถ‹': u'่ถจ', u'่ถฑ': u'่ถฒ', u'่ถธ': u'่บ‰', u'่ทƒ': u'่บ', u'่ท„': u'่นŒ', u'่ทž': u'่บ’', u'่ทต': u'่ธ', u'่ทถ': u'่บ‚', u'่ทท': u'่นบ', u'่ทธ': u'่น•', u'่ทน': u'่บš', u'่ทป': u'่บ‹', u'่ธŠ': u'่ธด', u'่ธŒ': u'่บŠ', u'่ธช': u'่นค', u'่ธฌ': u'่บ“', u'่ธฏ': u'่บ‘', u'่น‘': u'่บก', u'่น’': u'่นฃ', u'่นฐ': u'่บ•', u'่นฟ': u'่บฅ', u'่บ': u'่บช', u'่บœ': u'่บฆ', u'่บฏ': u'่ป€', u'่ปฟ': u'๐ซš’', u'่ฝฆ': u'่ปŠ', u'่ฝง': u'่ป‹', u'่ฝจ': u'่ปŒ', u'่ฝฉ': u'่ป’', u'่ฝช': u'่ป‘', u'่ฝซ': u'่ป”', u'่ฝฌ': u'่ฝ‰', u'่ฝญ': u'่ป›', u'่ฝฎ': u'่ผช', u'่ฝฏ': u'่ปŸ', u'่ฝฐ': u'่ฝŸ', u'่ฝฑ': u'่ปฒ', u'่ฝฒ': u'่ปป', u'่ฝณ': u'่ฝค', u'่ฝด': u'่ปธ', u'่ฝต': u'่ปน', u'่ฝถ': u'่ปผ', u'่ฝท': u'่ปค', u'่ฝธ': u'่ปซ', u'่ฝน': u'่ฝข', u'่ฝบ': u'่ปบ', u'่ฝป': u'่ผ•', u'่ฝผ': u'่ปพ', u'่ฝฝ': u'่ผ‰', u'่ฝพ': u'่ผŠ', u'่ฝฟ': u'่ฝŽ', u'่พ€': u'่ผˆ', u'่พ': u'่ผ‡', u'่พ‚': u'่ผ…', u'่พƒ': u'่ผƒ', u'่พ„': u'่ผ’', u'่พ…': u'่ผ”', u'่พ†': u'่ผ›', u'่พ‡': u'่ผฆ', u'่พˆ': u'่ผฉ', u'่พ‰': u'่ผ', u'่พŠ': u'่ผฅ', u'่พ‹': 
u'่ผž', u'่พŒ': u'่ผฌ', u'่พ': u'่ผŸ', u'่พŽ': u'่ผœ', u'่พ': u'่ผณ', u'่พ': u'่ผป', u'่พ‘': u'่ผฏ', u'่พ’': u'่ฝ€', u'่พ“': u'่ผธ', u'่พ”': u'่ฝก', u'่พ•': u'่ฝ…', u'่พ–': u'่ฝ„', u'่พ—': u'่ผพ', u'่พ˜': u'่ฝ†', u'่พ™': u'่ฝ', u'่พš': u'่ฝ”', u'่พž': u'่พญ', u'่พฉ': u'่พฏ', u'่พซ': u'่พฎ', u'่พน': u'้‚Š', u'่พฝ': u'้ผ', u'่พพ': u'้”', u'่ฟ': u'้ท', u'่ฟ‡': u'้Ž', u'่ฟˆ': u'้‚', u'่ฟ': u'้‹', u'่ฟ˜': u'้‚„', u'่ฟ™': u'้€™', u'่ฟ›': u'้€ฒ', u'่ฟœ': u'้ ', u'่ฟ': u'้•', u'่ฟž': u'้€ฃ', u'่ฟŸ': u'้ฒ', u'่ฟฉ': u'้‚‡', u'่ฟณ': u'้€•', u'่ฟน': u'่ทก', u'้€‚': u'้ฉ', u'้€‰': u'้ธ', u'้€Š': u'้œ', u'้€’': u'้ž', u'้€ฆ': u'้‚', u'้€ป': u'้‚', u'้—': u'้บ', u'้ฅ': u'้™', u'้‚“': u'้„ง', u'้‚': u'้„บ', u'้‚ฌ': u'้„”', u'้‚ฎ': u'้ƒต', u'้‚น': u'้„’', u'้‚บ': u'้„ด', u'้‚ป': u'้„ฐ', u'้ƒ': u'้ƒŸ', u'้ƒ': u'้„ถ', u'้ƒ‘': u'้„ญ', u'้ƒ“': u'้„†', u'้ƒฆ': u'้…ˆ', u'้ƒง': u'้„–', u'้ƒธ': u'้„ฒ', u'้…‚': u'้…‡', u'้…': u'้†ž', u'้…ฆ': u'้†ฑ', u'้…ฑ': u'้†ฌ', u'้…ฝ': u'้‡…', u'้…พ': u'้‡ƒ', u'้…ฟ': u'้‡€', u'้‡Š': u'้‡‹', u'้‰ด': u'้‘’', u'้Šฎ': u'้‘พ', u'้Œพ': u'้จ', u'้Žญ': u'้Žฎ', u'้’…': u'้‡’', u'้’†': u'้‡“', u'้’‡': u'้‡”', u'้’ˆ': u'้‡', u'้’‰': u'้‡˜', u'้’Š': u'้‡—', u'้’‹': u'้‡™', u'้’Œ': u'้‡•', u'้’': u'้‡ท', u'้’Ž': u'้‡บ', u'้’': u'้‡ง', u'้’': u'้‡ค', u'้’‘': u'้ˆ’', u'้’’': u'้‡ฉ', u'้’“': u'้‡ฃ', u'้’”': u'้†', u'้’•': u'้‡น', u'้’–': u'้š', u'้’—': u'้‡ต', u'้’˜': u'้ˆƒ', u'้’™': u'้ˆฃ', u'้’š': u'้ˆˆ', u'้’›': u'้ˆฆ', u'้’œ': u'้‰…', u'้’': u'้ˆ', u'้’ž': u'้ˆ”', u'้’Ÿ': u'้พ', u'้’ ': u'้ˆ‰', u'้’ก': u'้‹‡', u'้’ข': u'้‹ผ', u'้’ฃ': u'้ˆ‘', u'้’ค': u'้ˆ', u'้’ฅ': u'้‘ฐ', u'้’ฆ': u'ๆฌฝ', u'้’ง': u'้ˆž', u'้’จ': u'้Žข', u'้’ฉ': u'้‰ค', u'้’ช': u'้ˆง', u'้’ซ': u'้ˆ', u'้’ฌ': u'้ˆฅ', u'้’ญ': u'้ˆ„', u'้’ฎ': u'้ˆ•', u'้’ฏ': u'้ˆ€', u'้’ฐ': u'้ˆบ', u'้’ฑ': u'้Œข', u'้’ฒ': u'้‰ฆ', u'้’ณ': u'้‰—', u'้’ด': u'้ˆท', u'้’ต': u'็ผฝ', u'้’ถ': u'้ˆณ', u'้’ท': u'้‰•', u'้’ธ': u'้ˆฝ', u'้’น': u'้ˆธ', u'้’บ': u'้‰ž', u'้’ป': u'้‘ฝ', u'้’ผ': 
u'้‰ฌ', u'้’ฝ': u'้‰ญ', u'้’พ': u'้‰€', u'้’ฟ': u'้ˆฟ', u'้“€': u'้ˆพ', u'้“': u'้ต', u'้“‚': u'้‰‘', u'้“ƒ': u'้ˆด', u'้“„': u'้‘ ', u'้“…': u'้‰›', u'้“†': u'้‰š', u'้“‡': u'้‰‹', u'้“ˆ': u'้ˆฐ', u'้“‰': u'้‰‰', u'้“Š': u'้‰ˆ', u'้“‹': u'้‰', u'้“Œ': u'้ˆฎ', u'้“': u'้ˆน', u'้“Ž': u'้ธ', u'้“': u'้‰ถ', u'้“': u'้Šฌ', u'้“‘': u'้Š ', u'้“’': u'้‰บ', u'้““': u'้‹ฉ', u'้“”': u'้Œ', u'้“•': u'้Šช', u'้“–': u'้‹ฎ', u'้“—': u'้‹', u'้“˜': u'้‹ฃ', u'้“™': u'้ƒ', u'้“š': u'้Š', u'้“›': u'้บ', u'้“œ': u'้Š…', u'้“': u'้‹', u'้“ž': u'้Šฑ', u'้“Ÿ': u'้Šฆ', u'้“ ': u'้Žง', u'้“ก': u'้˜', u'้“ข': u'้Š–', u'้“ฃ': u'้Š‘', u'้“ค': u'้‹Œ', u'้“ฅ': u'้Šฉ', u'้“ฆ': u'้Š›', u'้“ง': u'้ต', u'้“จ': u'้Š“', u'้“ฉ': u'้Žฉ', u'้“ช': u'้‰ฟ', u'้“ซ': u'้Šš', u'้“ฌ': u'้‰ป', u'้“ญ': u'้Š˜', u'้“ฎ': u'้Œš', u'้“ฏ': u'้Šซ', u'้“ฐ': u'้‰ธ', u'้“ฑ': u'้Šฅ', u'้“ฒ': u'้Ÿ', u'้“ณ': u'้Šƒ', u'้“ด': u'้‹', u'้“ต': u'้Šจ', u'้“ถ': u'้Š€', u'้“ท': u'้Šฃ', u'้“ธ': u'้‘„', u'้“น': u'้’', u'้“บ': u'้‹ช', u'้“ป': u'้‹™', u'้“ผ': u'้Œธ', u'้“ฝ': u'้‹ฑ', u'้“พ': u'้ˆ', u'้“ฟ': u'้—', u'้”€': u'้Šท', u'้”': u'้Ž–', u'้”‚': u'้‹ฐ', u'้”ƒ': u'้‹ฅ', u'้”„': u'้‹ค', u'้”…': u'้‹', u'้”†': u'้‹ฏ', u'้”‡': u'้‹จ', u'้”ˆ': u'้Šน', u'้”‰': u'้Šผ', u'้”Š': u'้‹', u'้”‹': u'้‹’', u'้”Œ': u'้‹…', u'้”': u'้‹ถ', u'้”Ž': u'้ฆ', u'้”': u'้ง', u'้”': u'้Šณ', u'้”‘': u'้Šป', u'้”’': u'้‹ƒ', u'้”“': u'้‹Ÿ', u'้””': u'้‹ฆ', u'้”•': u'้Œ’', u'้”–': u'้Œ†', u'้”—': u'้บ', u'้”˜': u'้ฉ', u'้”™': u'้Œฏ', u'้”š': u'้Œจ', u'้”›': u'้Œ›', u'้”œ': u'้Œก', u'้”': u'้€', u'้”ž': u'้Œ', u'้”Ÿ': u'้Œ•', u'้” ': u'้Œฉ', u'้”ก': u'้Œซ', u'้”ข': u'้Œฎ', u'้”ฃ': u'้‘ผ', u'้”ค': u'้Œ˜', u'้”ฅ': u'้Œ', u'้”ฆ': u'้Œฆ', u'้”ง': u'้‘•', u'้”จ': u'ๆด', u'้”ฉ': u'้Œˆ', u'้”ช': u'้ƒ', u'้”ซ': u'้Œ‡', u'้”ฌ': u'้ŒŸ', u'้”ญ': u'้Œ ', u'้”ฎ': u'้ต', u'้”ฏ': u'้‹ธ', u'้”ฐ': u'้Œณ', u'้”ฑ': u'้Œ™', u'้”ฒ': u'้ฅ', u'้”ณ': u'้ˆ', u'้”ด': u'้‡', u'้”ต': u'้˜', u'้”ถ': u'้ถ', u'้”ท': u'้”', u'้”ธ': u'้ค', u'้”น': 
u'้ฌ', u'้”บ': u'้พ', u'้”ป': u'้›', u'้”ผ': u'้Žช', u'้”ฝ': u'้ ', u'้”พ': u'้ฐ', u'้”ฟ': u'้Ž„', u'้•€': u'้', u'้•': u'้Ž‚', u'้•‚': u'้ค', u'้•ƒ': u'้Žก', u'้•„': u'้จ', u'้•…': u'้އ', u'้•†': u'้Œ', u'้•‡': u'้Žฎ', u'้•ˆ': u'้Ž›', u'้•‰': u'้Ž˜', u'้•Š': u'้‘ท', u'้•‹': u'้Žฒ', u'้•Œ': u'้ซ', u'้•': u'้Žณ', u'้•Ž': u'้Žฟ', u'้•': u'้Žฆ', u'้•': u'้Žฌ', u'้•‘': u'้ŽŠ', u'้•’': u'้Žฐ', u'้•“': u'้Žต', u'้•”': u'้‘Œ', u'้••': u'้Ž”', u'้•–': u'้ข', u'้•—': u'้œ', u'้•˜': u'้', u'้•™': u'้', u'้•š': u'้ฐ', u'้•›': u'้ž', u'้•œ': u'้ก', u'้•': u'้‘', u'้•ž': u'้ƒ', u'้•Ÿ': u'้‡', u'้• ': u'้', u'้•ก': u'้”', u'้•ข': u'้’', u'้•ฃ': u'้', u'้•ค': u'้ท', u'้•ฅ': u'้‘ฅ', u'้•ฆ': u'้“', u'้•ง': u'้‘ญ', u'้•จ': u'้ ', u'้•ฉ': u'้‘น', u'้•ช': u'้น', u'้•ซ': u'้™', u'้•ฌ': u'้‘Š', u'้•ญ': u'้ณ', u'้•ฎ': u'้ถ', u'้•ฏ': u'้ฒ', u'้•ฐ': u'้ฎ', u'้•ฑ': u'้ฟ', u'้•ฒ': u'้‘”', u'้•ณ': u'้‘ฃ', u'้•ด': u'้‘ž', u'้•ต': u'้‘ฑ', u'้•ถ': u'้‘ฒ', u'้•ฟ': u'้•ท', u'้—จ': u'้–€', u'้—ฉ': u'้–‚', u'้—ช': u'้–ƒ', u'้—ซ': u'้–†', u'้—ฌ': u'้–ˆ', u'้—ญ': u'้–‰', u'้—ฎ': u'ๅ•', u'้—ฏ': u'้—–', u'้—ฐ': u'้–', u'้—ฑ': u'้—ˆ', u'้—ฒ': u'้–‘', u'้—ณ': u'้–Ž', u'้—ด': u'้–“', u'้—ต': u'้–”', u'้—ถ': u'้–Œ', u'้—ท': u'ๆ‚ถ', u'้—ธ': u'้–˜', u'้—น': u'้ฌง', u'้—บ': u'้–จ', u'้—ป': u'่ž', u'้—ผ': u'้—ฅ', u'้—ฝ': u'้–ฉ', u'้—พ': u'้–ญ', u'้—ฟ': u'้—“', u'้˜€': u'้–ฅ', u'้˜': u'้–ฃ', u'้˜‚': u'้–ก', u'้˜ƒ': u'้–ซ', u'้˜„': u'้ฌฎ', u'้˜…': u'้–ฑ', u'้˜†': u'้–ฌ', u'้˜‡': u'้—', u'้˜ˆ': u'้–พ', u'้˜‰': u'้–น', u'้˜Š': u'้–ถ', u'้˜‹': u'้ฌฉ', u'้˜Œ': u'้–ฟ', u'้˜': u'้–ฝ', u'้˜Ž': u'้–ป', u'้˜': u'้–ผ', u'้˜': u'้—ก', u'้˜‘': u'้—Œ', u'้˜’': u'้—ƒ', u'้˜“': u'้— ', u'้˜”': u'้—Š', u'้˜•': u'้—‹', u'้˜–': u'้—”', u'้˜—': u'้—', u'้˜˜': u'้—’', u'้˜™': u'้—•', u'้˜š': u'้—ž', u'้˜›': u'้—ค', u'้˜Ÿ': u'้šŠ', u'้˜ณ': u'้™ฝ', u'้˜ด': u'้™ฐ', u'้˜ต': u'้™ฃ', u'้˜ถ': u'้šŽ', u'้™…': u'้š›', u'้™†': u'้™ธ', u'้™‡': u'้šด', u'้™ˆ': u'้™ณ', u'้™‰': u'้™˜', u'้™•': 
u'้™', u'้™ง': u'้š‰', u'้™จ': u'้š•', u'้™ฉ': u'้šช', u'้š': u'้šจ', u'้š': u'้šฑ', u'้šถ': u'้šธ', u'้šฝ': u'้›‹', u'้šพ': u'้›ฃ', u'้›': u'้››', u'้› ': u'่ฎŽ', u'้›ณ': u'้‚', u'้›พ': u'้œง', u'้œ': u'้œฝ', u'้œก': u'้œข', u'้œญ': u'้„', u'้“': u'้š', u'้™': u'้œ', u'้ฅ': u'้จ', u'้ž‘': u'้Ÿƒ', u'้ž’': u'้žฝ', u'้žฏ': u'้Ÿ‰', u'้žฒ': u'้Ÿ', u'้Ÿฆ': u'้Ÿ‹', u'้Ÿง': u'้ŸŒ', u'้Ÿจ': u'้Ÿ', u'้Ÿฉ': u'้Ÿ“', u'้Ÿช': u'้Ÿ™', u'้Ÿซ': u'้Ÿž', u'้Ÿฌ': u'้Ÿœ', u'้Ÿต': u'้Ÿป', u'้กต': u'้ ', u'้กถ': u'้ ‚', u'้กท': u'้ ƒ', u'้กธ': u'้ ‡', u'้กน': u'้ …', u'้กบ': u'้ †', u'้กป': u'้ ˆ', u'้กผ': u'้ Š', u'้กฝ': u'้ ‘', u'้กพ': u'้กง', u'้กฟ': u'้ “', u'้ข€': u'้ Ž', u'้ข': u'้ ’', u'้ข‚': u'้ Œ', u'้ขƒ': u'้ ', u'้ข„': u'้ ', u'้ข…': u'้กฑ', u'้ข†': u'้ ˜', u'้ข‡': u'้ —', u'้ขˆ': u'้ ธ', u'้ข‰': u'้ ก', u'้ขŠ': u'้ ฐ', u'้ข‹': u'้ ฒ', u'้ขŒ': u'้ œ', u'้ข': u'ๆฝ', u'้ขŽ': u'็†ฒ', u'้ข': u'้ ฆ', u'้ข': u'้ ค', u'้ข‘': u'้ ป', u'้ข’': u'้ ฎ', u'้ข“': u'้ น', u'้ข”': u'้ ท', u'้ข•': u'้ ด', u'้ข–': u'็ฉŽ', u'้ข—': u'้ก†', u'้ข˜': u'้กŒ', u'้ข™': u'้ก’', u'้ขš': u'้กŽ', u'้ข›': u'้ก“', u'้ขœ': u'้ก', u'้ข': u'้ก', u'้ขž': u'้กณ', u'้ขŸ': u'้กข', u'้ข ': u'้ก›', u'้ขก': u'้ก™', u'้ขข': u'้กฅ', u'้ขค': u'้กซ', u'้ขฅ': u'้กฌ', u'้ขฆ': u'้กฐ', u'้ขง': u'้กด', u'้ฃŽ': u'้ขจ', u'้ฃ': u'้ขบ', u'้ฃ': u'้ขญ', u'้ฃ‘': u'้ขฎ', u'้ฃ’': u'้ขฏ', u'้ฃ“': u'้ขถ', u'้ฃ”': u'้ขธ', u'้ฃ•': u'้ขผ', u'้ฃ–': u'้ขป', u'้ฃ—': u'้ฃ€', u'้ฃ˜': u'้ฃ„', u'้ฃ™': u'้ฃ†', u'้ฃš': u'้ฃˆ', u'้ฃž': u'้ฃ›', u'้ฃจ': u'้ฅ—', u'้ค': u'้ฅœ', u'้ฅฃ': u'้ฃ ', u'้ฅค': u'้ฃฃ', u'้ฅฅ': u'้ฃข', u'้ฅฆ': u'้ฃฅ', u'้ฅง': u'้คณ', u'้ฅจ': u'้ฃฉ', u'้ฅฉ': u'้คผ', u'้ฅช': u'้ฃช', u'้ฅซ': u'้ฃซ', u'้ฅฌ': u'้ฃญ', u'้ฅญ': u'้ฃฏ', u'้ฅฎ': u'้ฃฒ', u'้ฅฏ': u'้คž', u'้ฅฐ': u'้ฃพ', u'้ฅฑ': u'้ฃฝ', u'้ฅฒ': u'้ฃผ', u'้ฅณ': u'้ฃฟ', u'้ฅด': u'้ฃด', u'้ฅต': u'้คŒ', u'้ฅถ': u'้ฅ’', u'้ฅท': u'้ค‰', u'้ฅธ': u'้ค„', u'้ฅน': u'้คŽ', u'้ฅบ': u'้คƒ', u'้ฅป': u'้ค', u'้ฅผ': u'้ค…', u'้ฅฝ': u'้ค‘', u'้ฅพ': u'้ค–', u'้ฅฟ': 
u'้ค“', u'้ฆ€': u'้ค˜', u'้ฆ': u'้ค’', u'้ฆ‚': u'้ค•', u'้ฆƒ': u'้คœ', u'้ฆ„': u'้ค›', u'้ฆ…': u'้คก', u'้ฆ†': u'้คจ', u'้ฆ‡': u'้คท', u'้ฆˆ': u'้ฅ‹', u'้ฆ‰': u'้คถ', u'้ฆŠ': u'้คฟ', u'้ฆ‹': u'้ฅž', u'้ฆŒ': u'้ฅ', u'้ฆ': u'้ฅƒ', u'้ฆŽ': u'้คบ', u'้ฆ': u'้คพ', u'้ฆ': u'้ฅˆ', u'้ฆ‘': u'้ฅ‰', u'้ฆ’': u'้ฅ…', u'้ฆ“': u'้ฅŠ', u'้ฆ”': u'้ฅŒ', u'้ฆ•': u'้ฅข', u'้ฉฌ': u'้ฆฌ', u'้ฉญ': u'้ฆญ', u'้ฉฎ': u'้ฆฑ', u'้ฉฏ': u'้ฆด', u'้ฉฐ': u'้ฆณ', u'้ฉฑ': u'้ฉ…', u'้ฉฒ': u'้ฆน', u'้ฉณ': u'้ง', u'้ฉด': u'้ฉข', u'้ฉต': u'้ง”', u'้ฉถ': u'้ง›', u'้ฉท': u'้งŸ', u'้ฉธ': u'้ง™', u'้ฉน': u'้ง’', u'้ฉบ': u'้จถ', u'้ฉป': u'้ง', u'้ฉผ': u'้ง', u'้ฉฝ': u'้ง‘', u'้ฉพ': u'้ง•', u'้ฉฟ': u'้ฉ›', u'้ช€': u'้ง˜', u'้ช': u'้ฉ', u'้ช‚': u'็ฝต', u'้ชƒ': u'้งฐ', u'้ช„': u'้ฉ•', u'้ช…': u'้ฉŠ', u'้ช†': u'้งฑ', u'้ช‡': u'้งญ', u'้ชˆ': u'้งข', u'้ช‰': u'้ฉซ', u'้ชŠ': u'้ฉช', u'้ช‹': u'้จ', u'้ชŒ': u'้ฉ—', u'้ช': u'้จ‚', u'้ชŽ': u'้งธ', u'้ช': u'้งฟ', u'้ช': u'้จ', u'้ช‘': u'้จŽ', u'้ช’': u'้จ', u'้ช“': u'้จ…', u'้ช”': u'้จŒ', u'้ช•': u'้ฉŒ', u'้ช–': u'้ฉ‚', u'้ช—': u'้จ™', u'้ช˜': u'้จญ', u'้ช™': u'้จค', u'้ชš': u'้จท', u'้ช›': u'้จ–', u'้ชœ': u'้ฉ', u'้ช': u'้จฎ', u'้ชž': u'้จซ', u'้ชŸ': u'้จธ', u'้ช ': u'้ฉƒ', u'้ชก': u'้จพ', u'้ชข': u'้ฉ„', u'้ชฃ': u'้ฉ', u'้ชค': u'้ฉŸ', u'้ชฅ': u'้ฉฅ', u'้ชฆ': u'้ฉฆ', u'้ชง': u'้ฉค', u'้ซ…': u'้ซ', u'้ซ‹': u'้ซ–', u'้ซŒ': u'้ซ•', u'้ฌ“': u'้ฌข', u'้ญ‡': u'้ญ˜', u'้ญ‰': u'้ญŽ', u'้ฑผ': u'้ญš', u'้ฑฝ': u'้ญ›', u'้ฑพ': u'้ญข', u'้ฑฟ': u'้ญท', u'้ฒ€': u'้ญจ', u'้ฒ': u'้ญฏ', u'้ฒ‚': u'้ญด', u'้ฒƒ': u'ไฐพ', u'้ฒ„': u'้ญบ', u'้ฒ…': u'้ฎ', u'้ฒ†': u'้ฎƒ', u'้ฒ‡': u'้ฏฐ', u'้ฒˆ': u'้ฑธ', u'้ฒ‰': u'้ฎ‹', u'้ฒŠ': u'้ฎ“', u'้ฒ‹': u'้ฎ’', u'้ฒŒ': u'้ฎŠ', u'้ฒ': u'้ฎ‘', u'้ฒŽ': u'้ฑŸ', u'้ฒ': u'้ฎ', u'้ฒ': u'้ฎ', u'้ฒ‘': u'้ฎญ', u'้ฒ’': u'้ฎš', u'้ฒ“': u'้ฎณ', u'้ฒ”': u'้ฎช', u'้ฒ•': u'้ฎž', u'้ฒ–': u'้ฎฆ', u'้ฒ—': u'้ฐ‚', u'้ฒ˜': u'้ฎœ', u'้ฒ™': u'้ฑ ', u'้ฒš': u'้ฑญ', u'้ฒ›': u'้ฎซ', u'้ฒœ': u'้ฎฎ', u'้ฒ': u'้ฎบ', u'้ฒž': u'้ฏ—', u'้ฒŸ': u'้ฑ˜', u'้ฒ ': 
u'้ฏ', u'้ฒก': u'้ฑบ', u'้ฒข': u'้ฐฑ', u'้ฒฃ': u'้ฐน', u'้ฒค': u'้ฏ‰', u'้ฒฅ': u'้ฐฃ', u'้ฒฆ': u'้ฐท', u'้ฒง': u'้ฏ€', u'้ฒจ': u'้ฏŠ', u'้ฒฉ': u'้ฏ‡', u'้ฒช': u'้ฎถ', u'้ฒซ': u'้ฏฝ', u'้ฒฌ': u'้ฏ’', u'้ฒญ': u'้ฏ–', u'้ฒฎ': u'้ฏช', u'้ฒฏ': u'้ฏ•', u'้ฒฐ': u'้ฏซ', u'้ฒฑ': u'้ฏก', u'้ฒฒ': u'้ฏค', u'้ฒณ': u'้ฏง', u'้ฒด': u'้ฏ', u'้ฒต': u'้ฏข', u'้ฒถ': u'้ฏฐ', u'้ฒท': u'้ฏ›', u'้ฒธ': u'้ฏจ', u'้ฒน': u'้ฐบ', u'้ฒบ': u'้ฏด', u'้ฒป': u'้ฏ”', u'้ฒผ': u'้ฑ', u'้ฒฝ': u'้ฐˆ', u'้ฒพ': u'้ฐ', u'้ฒฟ': u'้ฑจ', u'้ณ€': u'้ฏท', u'้ณ': u'้ฐฎ', u'้ณ‚': u'้ฐƒ', u'้ณƒ': u'้ฐ“', u'้ณ„': u'้ฑท', u'้ณ…': u'้ฐ', u'้ณ†': u'้ฐ’', u'้ณ‡': u'้ฐ‰', u'้ณˆ': u'้ฐ', u'้ณ‰': u'้ฑ‚', u'้ณŠ': u'้ฏฟ', u'้ณ‹': u'้ฐ ', u'้ณŒ': u'้ฐฒ', u'้ณ': u'้ฐญ', u'้ณŽ': u'้ฐจ', u'้ณ': u'้ฐฅ', u'้ณ': u'้ฐฉ', u'้ณ‘': u'้ฐŸ', u'้ณ’': u'้ฐœ', u'้ณ“': u'้ฐณ', u'้ณ”': u'้ฐพ', u'้ณ•': u'้ฑˆ', u'้ณ–': u'้ฑ‰', u'้ณ—': u'้ฐป', u'้ณ˜': u'้ฐต', u'้ณ™': u'้ฑ…', u'้ณš': u'ไฒ', u'้ณ›': u'้ฐผ', u'้ณœ': u'้ฑ–', u'้ณ': u'้ฑ”', u'้ณž': u'้ฑ—', u'้ณŸ': u'้ฑ’', u'้ณ ': u'้ฑฏ', u'้ณก': u'้ฑค', u'้ณข': u'้ฑง', u'้ณฃ': u'้ฑฃ', u'้ธŸ': u'้ณฅ', u'้ธ ': u'้ณฉ', u'้ธก': u'้›ž', u'้ธข': u'้ณถ', u'้ธฃ': u'้ณด', u'้ธค': u'้ณฒ', u'้ธฅ': u'้ท—', u'้ธฆ': u'้ด‰', u'้ธง': u'้ถฌ', u'้ธจ': u'้ด‡', u'้ธฉ': u'้ด†', u'้ธช': u'้ดฃ', u'้ธซ': u'้ถ‡', u'้ธฌ': u'้ธ•', u'้ธญ': u'้ดจ', u'้ธฎ': u'้ดž', u'้ธฏ': u'้ดฆ', u'้ธฐ': u'้ด’', u'้ธฑ': u'้ดŸ', u'้ธฒ': u'้ด', u'้ธณ': u'้ด›', u'้ธด': u'้ทฝ', u'้ธต': u'้ด•', u'้ธถ': u'้ทฅ', u'้ธท': u'้ท™', u'้ธธ': u'้ดฏ', u'้ธน': u'้ดฐ', u'้ธบ': u'้ต‚', u'้ธป': u'้ดด', u'้ธผ': u'้ตƒ', u'้ธฝ': u'้ดฟ', u'้ธพ': u'้ธž', u'้ธฟ': u'้ดป', u'้น€': u'้ต', u'้น': u'้ต“', u'้น‚': u'้ธ', u'้นƒ': u'้ต‘', u'้น„': u'้ต ', u'้น…': u'้ต', u'้น†': u'้ต’', u'้น‡': u'้ทณ', u'้นˆ': u'้ตœ', u'้น‰': u'้ตก', u'้นŠ': u'้ตฒ', u'้น‹': u'้ถ“', u'้นŒ': u'้ตช', u'้น': u'้ตพ', u'้นŽ': u'้ตฏ', u'้น': u'้ตฌ', u'้น': u'้ตฎ', u'้น‘': u'้ถ‰', u'้น’': u'้ถŠ', u'้น“': u'้ตท', u'้น”': u'้ทซ', u'้น•': u'้ถ˜', u'้น–': u'้ถก', u'้น—': u'้ถš', u'้น˜': 
u'้ถป', u'้น™': u'้ถ–', u'้นš': u'้ถฟ', u'้น›': u'้ถฅ', u'้นœ': u'้ถฉ', u'้น': u'้ทŠ', u'้นž': u'้ท‚', u'้นŸ': u'้ถฒ', u'้น ': u'้ถน', u'้นก': u'้ถบ', u'้นข': u'้ท', u'้นฃ': u'้ถผ', u'้นค': u'้ถด', u'้นฅ': u'้ท–', u'้นฆ': u'้ธš', u'้นง': u'้ท“', u'้นจ': u'้ทš', u'้นฉ': u'้ทฏ', u'้นช': u'้ทฆ', u'้นซ': u'้ทฒ', u'้นฌ': u'้ทธ', u'้นญ': u'้ทบ', u'้นฎ': u'ได‰', u'้นฏ': u'้ธ‡', u'้นฐ': u'้ทน', u'้นฑ': u'้ธŒ', u'้นฒ': u'้ธ', u'้นณ': u'้ธ›', u'้นด': u'้ธ˜', u'้นพ': u'้นบ', u'้บฆ': u'้บฅ', u'้บธ': u'้บฉ', u'้ป„': u'้ปƒ', u'้ป‰': u'้ปŒ', u'้ปก': u'้ปถ', u'้ปฉ': u'้ปท', u'้ปช': u'้ปฒ', u'้ปพ': u'้ปฝ', u'้ผ‹': u'้ปฟ', u'้ผ': u'้ผ‰', u'้ผ—': u'้ž€', u'้ผน': u'้ผด', u'้ฝ„': u'้ฝ‡', u'้ฝ': u'้ฝŠ', u'้ฝ‘': u'้ฝ', u'้ฝฟ': u'้ฝ’', u'้พ€': u'้ฝ”', u'้พ': u'้ฝ•', u'้พ‚': u'้ฝ—', u'้พƒ': u'้ฝŸ', u'้พ„': u'้ฝก', u'้พ…': u'้ฝ™', u'้พ†': u'้ฝ ', u'้พ‡': u'้ฝœ', u'้พˆ': u'้ฝฆ', u'้พ‰': u'้ฝฌ', u'้พŠ': u'้ฝช', u'้พ‹': u'้ฝฒ', u'้พŒ': u'้ฝท', u'้พ™': u'้พ', u'้พš': u'้พ”', u'้พ›': u'้พ•', u'้พŸ': u'้พœ', u'๐ ฎถ': u'ๅ—ฐ', u'๐ก’„': u'ๅฃˆ', u'๐ฆˆ–': u'ไŒˆ', u'๐จฐพ': u'้Žท', u'๐จฐฟ': u'้‡ณ', u'๐จฑ€': u'๐จฅ›', u'๐จฑ': u'้ˆ ', u'๐จฑ‚': u'้ˆ‹', u'๐จฑƒ': u'้ˆฒ', u'๐จฑ„': u'้ˆฏ', u'๐จฑ…': u'้‰', u'๐จฑ‡': u'้Šถ', u'๐จฑˆ': u'้‹‰', u'๐จฑ‰': u'้„', u'๐จฑŠ': u'๐จงฑ', u'๐จฑ‹': u'้Œ‚', u'๐จฑŒ': u'้†', u'๐จฑ': u'้Žฏ', u'๐จฑŽ': u'้ฎ', u'๐จฑ': u'้Ž', u'๐จฑ': u'๐จซ’', u'๐จฑ’': u'้‰', u'๐จฑ“': u'้Ž', u'๐จฑ”': u'้', u'๐จฑ•': u'๐จฎ‚', u'๐จธ‚': u'้–', u'๐จธƒ': u'้–', u'๐ฉผ': u'ไช', u'๐ฉฝ': u'๐ฉช', u'๐ฉพ': u'๐ฉŽข', u'๐ฉฟ': u'ไช˜', u'๐ฉ€': u'ไช—', u'๐ฉ–•': u'๐ฉ“ฃ', u'๐ฉ––': u'้กƒ', u'๐ฉ–—': u'ไซด', u'๐ฉ™ฅ': u'้ขฐ', u'๐ฉ™ฆ': u'๐ฉ—€', u'๐ฉ™ง': u'๐ฉ—ก', u'๐ฉ™จ': u'๐ฉ˜น', u'๐ฉ™ฉ': u'๐ฉ˜€', u'๐ฉ™ช': u'้ขท', u'๐ฉ™ซ': u'้ขพ', u'๐ฉ™ฌ': u'๐ฉ˜บ', u'๐ฉ™ญ': u'๐ฉ˜', u'๐ฉ™ฎ': u'ไฌ˜', u'๐ฉ™ฏ': u'ไฌ', u'๐ฉ™ฐ': u'๐ฉ™ˆ', u'๐ฉ …': u'๐ฉŸ', u'๐ฉ †': u'๐ฉœฆ', u'๐ฉ ‡': u'ไญ€', u'๐ฉ ˆ': u'ไญƒ', u'๐ฉ ‹': u'๐ฉ”', u'๐ฉ Œ': u'้คธ', u'๐ฉงฆ': u'๐ฉกบ', u'๐ฉงจ': u'้งŽ', u'๐ฉงฉ': u'๐ฉคŠ', u'๐ฉงช': u'ไฎพ', u'๐ฉงซ': 
u'้งš', u'๐ฉงฌ': u'๐ฉขก', u'๐ฉงญ': u'ไญฟ', u'๐ฉงฎ': u'๐ฉขพ', u'๐ฉงฏ': u'้ฉ‹', u'๐ฉงฐ': u'ไฎ', u'๐ฉงฑ': u'๐ฉฅ‰', u'๐ฉงฒ': u'้งง', u'๐ฉงณ': u'๐ฉขธ', u'๐ฉงด': u'้งฉ', u'๐ฉงต': u'๐ฉขด', u'๐ฉงถ': u'๐ฉฃ', u'๐ฉงบ': u'้งถ', u'๐ฉงป': u'๐ฉฃต', u'๐ฉงผ': u'๐ฉฃบ', u'๐ฉงฟ': u'ไฎ ', u'๐ฉจ€': u'้จ”', u'๐ฉจ': u'ไฎž', u'๐ฉจƒ': u'้จ', u'๐ฉจ„': u'้จช', u'๐ฉจ…': u'๐ฉคธ', u'๐ฉจ†': u'๐ฉค™', u'๐ฉจˆ': u'้จŸ', u'๐ฉจ‰': u'๐ฉคฒ', u'๐ฉจŠ': u'้จš', u'๐ฉจ‹': u'๐ฉฅ„', u'๐ฉจŒ': u'๐ฉฅ‘', u'๐ฉจ': u'๐ฉฅ‡', u'๐ฉจ': u'ไฎณ', u'๐ฉจ': u'๐ฉง†', u'๐ฉฝน': u'้ญฅ', u'๐ฉฝบ': u'๐ฉตฉ', u'๐ฉฝป': u'๐ฉตน', u'๐ฉฝผ': u'้ฏถ', u'๐ฉฝฝ': u'๐ฉถฑ', u'๐ฉฝพ': u'้ฎŸ', u'๐ฉฝฟ': u'๐ฉถฐ', u'๐ฉพ€': u'้ฎ•', u'๐ฉพ': u'้ฏ„', u'๐ฉพƒ': u'้ฎธ', u'๐ฉพ„': u'๐ฉทฐ', u'๐ฉพ…': u'๐ฉธƒ', u'๐ฉพ†': u'๐ฉธฆ', u'๐ฉพ‡': u'้ฏฑ', u'๐ฉพˆ': u'ไฑ™', u'๐ฉพŠ': u'ไฑฌ', u'๐ฉพ‹': u'ไฑฐ', u'๐ฉพŒ': u'้ฑ‡', u'๐ฉพŽ': u'๐ฉฝ‡', u'๐ช‰‚': u'ไฒฐ', u'๐ช‰ƒ': u'้ณผ', u'๐ช‰„': u'๐ฉฟช', u'๐ช‰…': u'๐ช€ฆ', u'๐ช‰†': u'้ดฒ', u'๐ช‰ˆ': u'้ดœ', u'๐ช‰‰': u'๐ชˆ', u'๐ช‰Š': u'้ทจ', u'๐ช‰‹': u'๐ช€พ', u'๐ช‰Œ': u'๐ช–', u'๐ช‰': u'้ตš', u'๐ช‰Ž': u'๐ช‚†', u'๐ช‰': u'๐ชƒ', u'๐ช‰': u'๐ชƒ', u'๐ช‰‘': u'้ท”', u'๐ช‰’': u'๐ช„•', u'๐ช‰”': u'๐ช„†', u'๐ช‰•': u'๐ช‡ณ', u'๐ชŽˆ': u'ไดฌ', u'๐ชމ': u'้บฒ', u'๐ชŽŠ': u'้บจ', u'๐ชŽ‹': u'ไดด', u'๐ชŽŒ': u'้บณ', u'๐ชš': u'๐ช˜€', u'๐ชš': u'๐ช˜ฏ', u'๐ชž': u'ๅ‡™', u'๐ชก': u'ๅ—น', u'๐ชขฎ': u'ๅœž', u'๐ชจŠ': u'ใžž', u'๐ชจ—': u'ๅฑฉ', u'๐ชป': u'็‘ฝ', u'๐ชพข': u'็', u'๐ซก': u'้ด—', u'๐ซ‚ˆ': u'ไ‰ฌ', u'๐ซ„จ': u'็ตบ', u'๐ซ„ธ': u'็บ', u'๐ซŒ€': u'่ฅ€', u'๐ซŒจ': u'่ฆผ', u'๐ซ™': u'่จ‘', u'๐ซŸ': u'๐งฆง', u'๐ซข': u'่ญŠ', u'๐ซฐ': u'่ซฐ', u'๐ซฒ': u'่ฌ', u'๐ซ‹': u'่นป', u'๐ซ„': u'่ป', u'๐ซ†': u'่ฝฃ', u'๐ซ‰': u'่ปจ', u'๐ซ': u'่ผ—', u'๐ซ“': u'่ผฎ', u'๐ซ“ง': u'้ˆ‡', u'๐ซ“ฉ': u'้ฆ', u'๐ซ”Ž': u'้', u'๐ซ— ': u'้คฆ', u'๐ซ—ฆ': u'้ค”', u'๐ซ—ง': u'้ค—', u'๐ซ—ฎ': u'้คญ', u'๐ซ—ด': u'้ฅ˜', u'๐ซ˜': u'้งƒ', u'๐ซ˜ฃ': u'้งป', u'๐ซ˜ค': u'้จƒ', u'๐ซ˜จ': u'้จ ', u'๐ซšˆ': u'้ฑฎ', u'๐ซš‰': u'้ญŸ', u'๐ซš’': u'้ฎ„', u'๐ซš”': u'้ฎฐ', u'๐ซš•': u'้ฐค', u'๐ซš™': u'้ฏ†', 
u'๐ซ››': u'้ณท', u'๐ซ›ž': u'้ดƒ', u'๐ซ›ข': u'้ธ‹', u'๐ซ›ถ': u'้ถ’', u'๐ซ›ธ': u'้ถ—', u'๎ ญ': u'ๆฃก', u'0ๅคšๅช': u'0ๅคš้šป', u'0ๅคฉๅŽ': u'0ๅคฉๅพŒ', u'0ๅช': u'0้šป', u'0ไฝ™': u'0้ค˜', u'1ๅคฉๅŽ': u'1ๅคฉๅพŒ', u'1ๅช': u'1้šป', u'1ไฝ™': u'1้ค˜', u'2ๅคฉๅŽ': u'2ๅคฉๅพŒ', u'2ๅช': u'2้šป', u'2ไฝ™': u'2้ค˜', u'3ๅคฉๅŽ': u'3ๅคฉๅพŒ', u'3ๅช': u'3้šป', u'3ไฝ™': u'3้ค˜', u'4ๅคฉๅŽ': u'4ๅคฉๅพŒ', u'4ๅช': u'4้šป', u'4ไฝ™': u'4้ค˜', u'5ๅคฉๅŽ': u'5ๅคฉๅพŒ', u'5ๅช': u'5้šป', u'5ไฝ™': u'5้ค˜', u'6ๅคฉๅŽ': u'6ๅคฉๅพŒ', u'6ๅช': u'6้šป', u'6ไฝ™': u'6้ค˜', u'7ๅคฉๅŽ': u'7ๅคฉๅพŒ', u'7ๅช': u'7้šป', u'7ไฝ™': u'7้ค˜', u'8ๅคฉๅŽ': u'8ๅคฉๅพŒ', u'8ๅช': u'8้šป', u'8ไฝ™': u'8้ค˜', u'9ๅคฉๅŽ': u'9ๅคฉๅพŒ', u'9ๅช': u'9้šป', u'9ไฝ™': u'9้ค˜', u'ยท่Œƒ': u'ยท่Œƒ', u'ใ€ๅ…‹ๅˆถ': u'ใ€ๅ‰‹ๅˆถ', u'ใ€‚ๅ…‹ๅˆถ': u'ใ€‚ๅ‰‹ๅˆถ', u'ใ€‡ๅช': u'ใ€‡้šป', u'ใ€‡ไฝ™': u'ใ€‡้ค˜', u'ไธ€ๅนฒไบŒๅ‡€': u'ไธ€ไนพไบŒๆทจ', u'ไธ€ไผ™ไบบ': u'ไธ€ไผ™ไบบ', u'ไธ€ไผ™ๅคด': u'ไธ€ไผ™้ ญ', u'ไธ€ไผ™้ฃŸ': u'ไธ€ไผ™้ฃŸ', u'ไธ€ๅนถ': u'ไธ€ไฝต', u'ไธ€ไธช': u'ไธ€ๅ€‹', u'ไธ€ไธชๅ‡†': u'ไธ€ๅ€‹ๆบ–', u'ไธ€ๅ‡บๅˆŠ': u'ไธ€ๅ‡บๅˆŠ', u'ไธ€ๅ‡บๅฃ': u'ไธ€ๅ‡บๅฃ', u'ไธ€ๅ‡บ็‰ˆ': u'ไธ€ๅ‡บ็‰ˆ', u'ไธ€ๅ‡บ็”Ÿ': u'ไธ€ๅ‡บ็”Ÿ', u'ไธ€ๅ‡บ็ฅๅฑฑ': u'ไธ€ๅ‡บ็ฅๅฑฑ', u'ไธ€ๅ‡บ้€ƒ': u'ไธ€ๅ‡บ้€ƒ', u'ไธ€ๅˆ’': u'ไธ€ๅŠƒ', u'ไธ€ๅŠๅช': u'ไธ€ๅŠๅช', u'ไธ€ๅŠ้’ฑ': u'ไธ€ๅŠ้Œข', u'ไธ€ๅœฐ้‡Œ': u'ไธ€ๅœฐ่ฃก', u'ไธ€ไผ™': u'ไธ€ๅคฅ', u'ไธ€ๅคฉๅŽ': u'ไธ€ๅคฉๅพŒ', u'ไธ€ๅคฉ้’Ÿ': u'ไธ€ๅคฉ้˜', u'ไธ€ๅนฒไบบ': u'ไธ€ๅนฒไบบ', u'ไธ€ๅนฒๅฎถไธญ': u'ไธ€ๅนฒๅฎถไธญ', u'ไธ€ๅนฒๅผŸๅ…„': u'ไธ€ๅนฒๅผŸๅ…„', u'ไธ€ๅนฒๅผŸๅญ': u'ไธ€ๅนฒๅผŸๅญ', u'ไธ€ๅนฒ้ƒจไธ‹': u'ไธ€ๅนฒ้ƒจไธ‹', u'ไธ€ๅŠ': u'ไธ€ๅผ”', u'ไธ€ๅˆซๅคด': u'ไธ€ๅฝ†้ ญ', u'ไธ€ๆ–—ๆ–—': u'ไธ€ๆ–—ๆ–—', u'ไธ€ๆ ‘็™พ่Žท': u'ไธ€ๆจน็™พ็ฉซ', u'ไธ€ๅ‡†': u'ไธ€ๆบ–', u'ไธ€ไบ‰ไธคไธ‘': u'ไธ€็ˆญๅ…ฉ้†œ', u'ไธ€็‰ฉๅ…‹ไธ€็‰ฉ': u'ไธ€็‰ฉๅ‰‹ไธ€็‰ฉ', u'ไธ€็›ฎไบ†็„ถ': u'ไธ€็›ฎไบ†็„ถ', u'ไธ€ๆ‰Ž': u'ไธ€็ดฎ', u'ไธ€ๅ†ฒ': u'ไธ€่ก', u'ไธ€้”…้ข': u'ไธ€้‹้บต', u'ไธ€ๅช': u'ไธ€้šป', u'ไธ€้ข้ฃŸ': u'ไธ€้ข้ฃŸ', u'ไธ€ไฝ™': u'ไธ€้ค˜', u'ไธ€ๅ‘ๅƒ้’ง': u'ไธ€้ซฎๅƒ้ˆž', u'ไธ€ๅ“„่€Œๆ•ฃ': 
u'ไธ€้ฌจ่€Œๆ•ฃ', u'ไธ€ๅ‡บๅญ': u'ไธ€้ฝฃๅญ', u'ไธไธๅฝ“ๅฝ“': u'ไธไธ็•ถ็•ถ', u'ไธไธ‘': u'ไธไธ‘', u'ไธƒไธช': u'ไธƒๅ€‹', u'ไธƒๅ‡บๅˆŠ': u'ไธƒๅ‡บๅˆŠ', u'ไธƒๅ‡บๅฃ': u'ไธƒๅ‡บๅฃ', u'ไธƒๅ‡บ็‰ˆ': u'ไธƒๅ‡บ็‰ˆ', u'ไธƒๅ‡บ็”Ÿ': u'ไธƒๅ‡บ็”Ÿ', u'ไธƒๅ‡บ็ฅๅฑฑ': u'ไธƒๅ‡บ็ฅๅฑฑ', u'ไธƒๅ‡บ้€ƒ': u'ไธƒๅ‡บ้€ƒ', u'ไธƒๅˆ’': u'ไธƒๅŠƒ', u'ไธƒๅคฉๅŽ': u'ไธƒๅคฉๅพŒ', u'ไธƒๆƒ…ๅ…ญๆฌฒ': u'ไธƒๆƒ…ๅ…ญๆ…พ', u'ไธƒๆ‰Ž': u'ไธƒ็ดฎ', u'ไธƒๅช': u'ไธƒ้šป', u'ไธƒไฝ™': u'ไธƒ้ค˜', u'ไธ‡ไฟŸ': u'ไธ‡ไฟŸ', u'ไธ‡ๆ——': u'ไธ‡ๆ——', u'ไธ‰ไธช': u'ไธ‰ๅ€‹', u'ไธ‰ๅ‡บๅˆŠ': u'ไธ‰ๅ‡บๅˆŠ', u'ไธ‰ๅ‡บๅฃ': u'ไธ‰ๅ‡บๅฃ', u'ไธ‰ๅ‡บ็‰ˆ': u'ไธ‰ๅ‡บ็‰ˆ', u'ไธ‰ๅ‡บ็”Ÿ': u'ไธ‰ๅ‡บ็”Ÿ', u'ไธ‰ๅ‡บ็ฅๅฑฑ': u'ไธ‰ๅ‡บ็ฅๅฑฑ', u'ไธ‰ๅ‡บ้€ƒ': u'ไธ‰ๅ‡บ้€ƒ', u'ไธ‰ๅคฉๅŽ': u'ไธ‰ๅคฉๅพŒ', u'ไธ‰ๅพไธƒ่พŸ': u'ไธ‰ๅพตไธƒ่พŸ', u'ไธ‰ๅ‡†': u'ไธ‰ๆบ–', u'ไธ‰ๆ‰Ž': u'ไธ‰็ดฎ', u'ไธ‰็ปŸๅކ': u'ไธ‰็ตฑๆ›†', u'ไธ‰็ปŸๅކๅฒ': u'ไธ‰็ตฑๆญทๅฒ', u'ไธ‰ๅค': u'ไธ‰่ค‡', u'ไธ‰ๅช': u'ไธ‰้šป', u'ไธ‰ไฝ™': u'ไธ‰้ค˜', u'ไธŠๆขๅฑฑ': u'ไธŠๆขๅฑฑ', u'ไธŠๆข': u'ไธŠๆจ‘', u'ไธŠ็ญพๅ': u'ไธŠ็ฐฝๅ', u'ไธŠ็ญพๅญ—': u'ไธŠ็ฐฝๅญ—', u'ไธŠ็ญพๅ†™': u'ไธŠ็ฐฝๅฏซ', u'ไธŠ็ญพๆ”ถ': u'ไธŠ็ฐฝๆ”ถ', u'ไธŠ็ญพ': u'ไธŠ็ฑค', u'ไธŠ่ฏ': u'ไธŠ่—ฅ', u'ไธŠ่ฏพ้’Ÿ': u'ไธŠ่ชฒ้˜', u'ไธŠ้ข็ณŠ': u'ไธŠ้ข็ณŠ', u'ไธ‹ไป‘่ทฏ': u'ไธ‹ๅด™่ทฏ', u'ไธ‹ไบŽ': u'ไธ‹ๆ–ผ', u'ไธ‹ๆข': u'ไธ‹ๆจ‘', u'ไธ‹ๆณจ่งฃ': u'ไธ‹ๆณจ่งฃ', u'ไธ‹็ญพๅ': u'ไธ‹็ฐฝๅ', u'ไธ‹็ญพๅญ—': u'ไธ‹็ฐฝๅญ—', u'ไธ‹็ญพๅ†™': u'ไธ‹็ฐฝๅฏซ', u'ไธ‹็ญพๆ”ถ': u'ไธ‹็ฐฝๆ”ถ', u'ไธ‹็ญพ': u'ไธ‹็ฑค', u'ไธ‹่ฏ': u'ไธ‹่—ฅ', u'ไธ‹่ฏพ้’Ÿ': u'ไธ‹่ชฒ้˜', u'ไธๅนฒไธๅ‡€': u'ไธไนพไธๆทจ', u'ไธๅ ': u'ไธไฝ”', u'ไธๅ…‹่‡ชๅˆถ': u'ไธๅ…‹่‡ชๅˆถ', u'ไธๅ‡†ไป–': u'ไธๅ‡†ไป–', u'ไธๅ‡†ไฝ ': u'ไธๅ‡†ไฝ ', u'ไธๅ‡†ๅฅน': u'ไธๅ‡†ๅฅน', u'ไธๅ‡†ๅฎƒ': u'ไธๅ‡†ๅฎƒ', u'ไธๅ‡†ๆˆ‘': u'ไธๅ‡†ๆˆ‘', u'ไธๅ‡†ๆฒก': u'ไธๅ‡†ๆฒ’', u'ไธๅ‡†็ฟปๅฐ': u'ไธๅ‡†็ฟปๅฐ', u'ไธๅ‡†่ฎธ': u'ไธๅ‡†่จฑ', u'ไธๅ‡†่ฐ': u'ไธๅ‡†่ชฐ', u'ไธๅ…‹ๅˆถ': u'ไธๅ‰‹ๅˆถ', u'ไธๅŠ ่‡ชๅˆถ': u'ไธๅŠ ่‡ชๅˆถ', u'ไธๅ ๅ‡ถๅ‰': u'ไธๅ ๅ‡ถๅ‰', u'ไธๅ ๅœ': u'ไธๅ ๅœ', u'ไธๅ ๅ‰ๅ‡ถ': u'ไธๅ ๅ‰ๅ‡ถ', u'ไธๅ ็ฎ—': u'ไธๅ ็ฎ—', 
u'ไธๅฅฝๅนฒๆถ‰': u'ไธๅฅฝๅนฒๆถ‰', u'ไธๅฅฝๅนฒ้ข„': u'ไธๅฅฝๅนฒ้ ', u'ไธๅฅฝๅนฒ้ ': u'ไธๅฅฝๅนฒ้ ', u'ไธๅซŒๆฏไธ‘': u'ไธๅซŒๆฏ้†œ', u'ไธๅฏ’่€Œๆ —': u'ไธๅฏ’่€Œๆ…„', u'ไธๅนฒไบ‹': u'ไธๅนฒไบ‹', u'ไธๅนฒไป–': u'ไธๅนฒไป–', u'ไธๅนฒไผ‘': u'ไธๅนฒไผ‘', u'ไธๅนฒไฝ ': u'ไธๅนฒไฝ ', u'ไธๅนฒๅฅน': u'ไธๅนฒๅฅน', u'ไธๅนฒๅฎƒ': u'ไธๅนฒๅฎƒ', u'ไธๅนฒๆˆ‘': u'ไธๅนฒๆˆ‘', u'ไธๅนฒๆ“พ': u'ไธๅนฒๆ“พ', u'ไธๅนฒๆ‰ฐ': u'ไธๅนฒๆ“พ', u'ไธๅนฒๆถ‰': u'ไธๅนฒๆถ‰', u'ไธๅนฒ็‰ ': u'ไธๅนฒ็‰ ', u'ไธๅนฒ็Šฏ': u'ไธๅนฒ็Šฏ', u'ไธๅนฒ้ข„': u'ไธๅนฒ้ ', u'ไธๅนฒ้ ': u'ไธๅนฒ้ ', u'ไธๅนฒ': u'ไธๅนน', u'ไธๅŠ': u'ไธๅผ”', u'ไธ้‡‡': u'ไธๆŽก', u'ไธๆ–—่ƒ†': u'ไธๆ–—่†ฝ', u'ไธๆ–ญๅ‘': u'ไธๆ–ท็™ผ', u'ไธๆฏๅช': u'ไธๆฏๅช', u'ไธๅ‡†': u'ไธๆบ–', u'ไธๅ‡†็กฎ': u'ไธๆบ–็ขบ', u'ไธ่ฐท': u'ไธ็ฉ€', u'ไธ่ฏ่€Œๆ„ˆ': u'ไธ่—ฅ่€Œ็™’', u'ไธๆ‰˜': u'ไธ่จ—', u'ไธ่ดŸๆ‰€ๆ‰˜': u'ไธ่ฒ ๆ‰€ๆ‰˜', u'ไธ้€šๅŠๅบ†': u'ไธ้€šๅผ”ๆ…ถ', u'ไธไธ‘': u'ไธ้†œ', u'ไธ้‡‡ๅฃฐ': u'ไธ้‡‡่ฒ', u'ไธ้”ˆ้’ข': u'ไธ้ฝ้‹ผ', u'ไธ้ฃŸๅนฒ่…Š': u'ไธ้ฃŸไนพ่…Š', u'ไธๆ–—': u'ไธ้ฌฅ', u'ไธ‘ไธ‰': u'ไธ‘ไธ‰', u'ไธ‘ๅฉ†ๅญ': u'ไธ‘ๅฉ†ๅญ', u'ไธ‘ๅนด': u'ไธ‘ๅนด', u'ไธ‘ๆ—ฅ': u'ไธ‘ๆ—ฅ', u'ไธ‘ๆ—ฆ': u'ไธ‘ๆ—ฆ', u'ไธ‘ๆ—ถ': u'ไธ‘ๆ™‚', u'ไธ‘ๆœˆ': u'ไธ‘ๆœˆ', u'ไธ‘่กจๅŠŸ': u'ไธ‘่กจๅŠŸ', u'ไธ‘่ง’': u'ไธ‘่ง’', u'ไธ”ไบŽ': u'ไธ”ๆ–ผ', u'ไธ–็”ฐ่ฐท': u'ไธ–็”ฐ่ฐท', u'ไธ–็•Œๆฏ': u'ไธ–็•Œ็›ƒ', u'ไธ–็•Œ้‡Œ': u'ไธ–็•Œ่ฃก', u'ไธ–็บช้’Ÿ': u'ไธ–็ด€้˜', u'ไธ–็บช้’Ÿ่กจ': u'ไธ–็ด€้˜้Œถ', u'ไธขไธ‘': u'ไธŸ้†œ', u'ๅนถไธๅ‡†': u'ไธฆไธๅ‡†', u'ๅนถๅญ˜็€': u'ไธฆๅญ˜่‘—', u'ๅนถๆ›ฐๅ…ฅๆท€': u'ไธฆๆ›ฐๅ…ฅๆพฑ', u'ๅนถๅ‘ๅŠจ': u'ไธฆ็™ผๅ‹•', u'ๅนถๅ‘ๅฑ•': u'ไธฆ็™ผๅฑ•', u'ๅนถๅ‘็Žฐ': u'ไธฆ็™ผ็พ', u'ๅนถๅ‘่กจ': u'ไธฆ็™ผ่กจ', u'ไธญๅ›ฝๅ›ฝ้™…ไฟกๆ‰˜ๆŠ•่ต„ๅ…ฌๅธ': u'ไธญๅœ‹ๅœ‹้š›ไฟกๆ‰˜ๆŠ•่ณ‡ๅ…ฌๅธ', u'ไธญๅž‹้’Ÿ': u'ไธญๅž‹้˜', u'ไธญๅž‹้’Ÿ่กจ้ข': u'ไธญๅž‹้˜่กจ้ข', u'ไธญๅž‹้’Ÿ่กจ': u'ไธญๅž‹้˜้Œถ', u'ไธญๅž‹้’Ÿ้ข': u'ไธญๅž‹้˜้ข', u'ไธญไป‘': u'ไธญๅด™', u'ไธญๅฒณ': u'ไธญๅถฝ', u'ไธญๅบ„ๅญ': u'ไธญๅบ„ๅญ', u'ไธญๆ–‡้‡Œ': u'ไธญๆ–‡่ฃก', u'ไธญไบŽ': u'ไธญๆ–ผ', u'ไธญ็ญพ': u'ไธญ็ฑค', 
u'ไธญ็พŽๅ‘่กจ': u'ไธญ็พŽ็™ผ่กจ', u'ไธญ่ฏ': u'ไธญ่—ฅ', u'ไธญ้ฃŽๅŽ': u'ไธญ้ขจๅพŒ', u'ไธฐๅ„€': u'ไธฐๅ„€', u'ไธฐไปช': u'ไธฐๅ„€', u'ไธฐๅ—': u'ไธฐๅ—', u'ไธฐๅฐ': u'ไธฐๅฐ', u'ไธฐๅงฟ': u'ไธฐๅงฟ', u'ไธฐๅฎน': u'ไธฐๅฎน', u'ไธฐๅบฆ': u'ไธฐๅบฆ', u'ไธฐๆƒ…': u'ไธฐๆƒ…', u'ไธฐๆ ‡': u'ไธฐๆจ™', u'ไธฐๆจ™ไธๅ‡ก': u'ไธฐๆจ™ไธๅ‡ก', u'ไธฐๆ ‡ไธๅ‡ก': u'ไธฐๆจ™ไธๅ‡ก', u'ไธฐ็ฅž': u'ไธฐ็ฅž', u'ไธฐ่Œธ': u'ไธฐ่Œธ', u'ไธฐ้‡‡': u'ไธฐ้‡‡', u'ไธฐ้Ÿต': u'ไธฐ้Ÿป', u'ไธฐ้Ÿป': u'ไธฐ้Ÿป', u'ไธธ่ฏ': u'ไธธ่—ฅ', u'ไธน่ฏ': u'ไธน่—ฅ', u'ไธปไป†': u'ไธปๅƒ•', u'ไธปๅนฒ': u'ไธปๅนน', u'ไธป้’Ÿๅทฎ': u'ไธป้˜ๅทฎ', u'ไธป้’Ÿๆ›ฒ็บฟ': u'ไธป้˜ๆ›ฒ็ทš', u'ไนˆไนˆๅฐไธ‘': u'ไนˆ้บผๅฐไธ‘', u'ไน‹ไธ€ๅช': u'ไน‹ไธ€ๅช', u'ไน‹ไบŒๅช': u'ไน‹ไบŒๅช', u'ไน‹ๅ…ซไนๅช': u'ไน‹ๅ…ซไนๅช', u'ไน‹ๅพ': u'ไน‹ๅพต', u'ไน‹ๆ‰˜': u'ไน‹่จ—', u'ไน‹้’Ÿ': u'ไน‹้˜', u'ไน‹ไฝ™': u'ไน‹้ค˜', u'ไน™ไธ‘': u'ไน™ไธ‘', u'ไนไธ–ไน‹ไป‡': u'ไนไธ–ไน‹่ฎŽ', u'ไนไธช': u'ไนๅ€‹', u'ไนๅ‡บๅˆŠ': u'ไนๅ‡บๅˆŠ', u'ไนๅ‡บๅฃ': u'ไนๅ‡บๅฃ', u'ไนๅ‡บ็‰ˆ': u'ไนๅ‡บ็‰ˆ', u'ไนๅ‡บ็”Ÿ': u'ไนๅ‡บ็”Ÿ', u'ไนๅ‡บ็ฅๅฑฑ': u'ไนๅ‡บ็ฅๅฑฑ', u'ไนๅ‡บ้€ƒ': u'ไนๅ‡บ้€ƒ', u'ไนๅˆ’': u'ไนๅŠƒ', u'ไนๅคฉๅŽ': u'ไนๅคฉๅพŒ', u'ไน่ฐท': u'ไน็ฉ€', u'ไนๆ‰Ž': u'ไน็ดฎ', u'ไนๅช': u'ไน้šป', u'ไนไฝ™': u'ไน้ค˜', u'ไน้พ™่กจ่กŒ': u'ไน้พ่กจ่กŒ', u'ไนŸๅ…‹ๅˆถ': u'ไนŸๅ‰‹ๅˆถ', u'ไนŸๆ–—ไบ†่ƒ†': u'ไนŸๆ–—ไบ†่†ฝ', u'ๅนฒๅนฒ': u'ไนพไนพ', u'ๅนฒๅนฒๅ„ฟ็š„': u'ไนพไนพๅ…’็š„', u'ๅนฒๅนฒๅ‡€ๅ‡€': u'ไนพไนพๆทจๆทจ', u'ๅนฒไบ•': u'ไนพไบ•', u'ๅนฒไธชๅคŸ': u'ไนพๅ€‹ๅค ', u'ๅนฒๅ„ฟ': u'ไนพๅ…’', u'ๅนฒๅ†ฐ': u'ไนพๅ†ฐ', u'ๅนฒๅ†ท': u'ไนพๅ†ท', u'ๅนฒๅˆป็‰ˆ': u'ไนพๅˆป็‰ˆ', u'ๅนฒๅ‰ฅๅ‰ฅ': u'ไนพๅ‰ๅ‰', u'ๅนฒๅฆ': u'ไนพๅฆ', u'ๅนฒๅŠ็€ไธ‹ๅทด': u'ไนพๅŠ่‘—ไธ‹ๅทด', u'ๅนฒๅ’Œ': u'ไนพๅ’Œ', u'ๅนฒๅ’ณ': u'ไนพๅ’ณ', u'ๅนฒๅ’ฝ': u'ไนพๅ’ฝ', u'ๅนฒๅ“ฅ': u'ไนพๅ“ฅ', u'ๅนฒๅ“ญ': u'ไนพๅ“ญ', u'ๅนฒๅ”ฑ': u'ไนพๅ”ฑ', u'ๅนฒๅ•ผ': u'ไนพๅ•ผ', u'ๅนฒไน”': u'ไนพๅ–ฌ', u'ๅนฒๅ‘•': u'ไนพๅ˜”', u'ๅนฒๅ“•': u'ไนพๅ™ฆ', u'ๅนฒๅšŽ': u'ไนพๅšŽ', u'ๅนฒๅ›žไป˜': u'ไนพๅ›žไป˜', u'ๅนฒๅœ†ๆดๅ‡€': u'ไนพๅœ“ๆฝ”ๆทจ', u'ๅนฒๅœฐ': u'ไนพๅœฐ', u'ๅนฒๅค': u'ไนพๅค', u'ๅนฒๅž': 
u'ไนพๅกข', u'ๅนฒๅฅณ': u'ไนพๅฅณ', u'ๅนฒๅฅดๆ‰': u'ไนพๅฅดๆ‰', u'ๅนฒๅฆน': u'ไนพๅฆน', u'ๅนฒๅงŠ': u'ไนพๅงŠ', u'ๅนฒๅจ˜': u'ไนพๅจ˜', u'ๅนฒๅฆˆ': u'ไนพๅชฝ', u'ๅนฒๅญ': u'ไนพๅญ', u'ๅนฒๅญฃ': u'ไนพๅญฃ', u'ๅนฒๅฐธ': u'ไนพๅฑ', u'ๅนฒๅฑŽๆฉ›': u'ไนพๅฑŽๆฉ›', u'ๅนฒๅทด': u'ไนพๅทด', u'ๅนฒๅผ': u'ไนพๅผ', u'ๅนฒๅผŸ': u'ไนพๅผŸ', u'ๅนฒๆ€ฅ': u'ไนพๆ€ฅ', u'ๅนฒๆ€ง': u'ไนพๆ€ง', u'ๅนฒๆ‰“้›ท': u'ไนพๆ‰“้›ท', u'ๅนฒๆŠ˜': u'ไนพๆŠ˜', u'ๅนฒๆ’‚ๅฐ': u'ไนพๆ’‚ๅฐ', u'ๅนฒๆ’‡ไธ‹': u'ไนพๆ’‡ไธ‹', u'ๅนฒๆ“ฆ': u'ไนพๆ“ฆ', u'ๅนฒๆ”ฏๅ‰Œ': u'ไนพๆ”ฏๅ‰Œ', u'ๅนฒๆ”ฏๆ”ฏ': u'ไนพๆ”ฏๆ”ฏ', u'ๅนฒๆ•ฒๆข†ๅญไธๅ–ๆฒน': u'ไนพๆ•ฒๆข†ๅญไธ่ณฃๆฒน', u'ๅนฒๆ–™': u'ไนพๆ–™', u'ๅนฒๆ—ฑ': u'ไนพๆ—ฑ', u'ๅนฒๆš–': u'ไนพๆš–', u'ๅนฒๆ': u'ไนพๆ', u'ๅนฒๆ‘ๆฒ™': u'ไนพๆ‘ๆฒ™', u'ๅนฒๆฏ': u'ไนพๆฏ', u'ๅนฒๆžœ': u'ไนพๆžœ', u'ๅนฒๆžฏ': u'ไนพๆžฏ', u'ๅนฒๆŸด': u'ไนพๆŸด', u'ๅนฒๆŸด็ƒˆ็ซ': u'ไนพๆŸด็ƒˆ็ซ', u'ๅนฒๆข…': u'ไนพๆข…', u'ๅนฒๆญป': u'ไนพๆญป', u'ๅนฒๆฑ ': u'ไนพๆฑ ', u'ๅนฒๆฒก': u'ไนพๆฒ’', u'ๅนฒๆด—': u'ไนพๆด—', u'ๅนฒๆถธ': u'ไนพๆถธ', u'ๅนฒๅ‡‰': u'ไนพๆถผ', u'ๅนฒๅ‡€': u'ไนพๆทจ', u'ๅนฒๆธ ': u'ไนพๆธ ', u'ๅนฒๆธด': u'ไนพๆธด', u'ๅนฒๆฒŸ': u'ไนพๆบ', u'ๅนฒๆผ†': u'ไนพๆผ†', u'ๅนฒๆถฉ': u'ไนพๆพ€', u'ๅนฒๆนฟ': u'ไนพๆฟ•', u'ๅนฒ็†ฌ': u'ไนพ็†ฌ', u'ๅนฒ็ƒญ': u'ไนพ็†ฑ', u'ๅนฒ็†ฑ': u'ไนพ็†ฑ', u'ๅนฒ็ฏ็›': u'ไนพ็‡ˆ็›ž', u'ๅนฒ็‡ฅ': u'ไนพ็‡ฅ', u'ๅนฒ็ˆธ': u'ไนพ็ˆธ', u'ๅนฒ็ˆน': u'ไนพ็ˆน', u'ๅนฒ็ˆฝ': u'ไนพ็ˆฝ', u'ๅนฒ็‰‡': u'ไนพ็‰‡', u'ๅนฒ็”Ÿๅ—': u'ไนพ็”Ÿๅ—', u'ๅนฒ็”Ÿๅญ': u'ไนพ็”Ÿๅญ', u'ๅนฒไบง': u'ไนพ็”ข', u'ๅนฒ็”ฐ': u'ไนพ็”ฐ', u'ๅนฒ็–ฅ': u'ไนพ็–ฅ', u'ๅนฒ็˜ฆ': u'ไนพ็˜ฆ', u'ๅนฒ็˜ช': u'ไนพ็™Ÿ', u'ๅนฒ็™ฃ': u'ไนพ็™ฌ', u'ๅนฒ็˜พ': u'ไนพ็™ฎ', u'ๅนฒ็™ฝๅ„ฟ': u'ไนพ็™ฝๅ…’', u'ๅนฒ็š„': u'ไนพ็š„', u'ๅนฒ็œผ': u'ไนพ็œผ', u'ๅนฒ็žช็œผ': u'ไนพ็žช็œผ', u'ๅนฒ็คผ': u'ไนพ็ฆฎ', u'ๅนฒ็จฟ': u'ไนพ็จฟ', u'ๅนฒ็ฌ‘': u'ไนพ็ฌ‘', u'ๅนฒ็ญ‰': u'ไนพ็ญ‰', u'ๅนฒ็ฏพ็‰‡': u'ไนพ็ฏพ็‰‡', u'ๅนฒ็ฒ‰': u'ไนพ็ฒ‰', u'ๅนฒ็ฒฎ': u'ไนพ็ณง', u'ๅนฒ็ป“': u'ไนพ็ต', u'ๅนฒไธ': u'ไนพ็ตฒ', u'ๅนฒ็บฒ': u'ไนพ็ถฑ', u'ๅนฒ็ปท': u'ไนพ็นƒ', u'ๅนฒ่€—': u'ไนพ่€—', u'ๅนฒ่‚‰็‰‡': u'ไนพ่‚‰็‰‡', u'ๅนฒ่‚ก': u'ไนพ่‚ก', u'ๅนฒ่‚ฅ': u'ไนพ่‚ฅ', 
u'ๅนฒ่„†': u'ไนพ่„†', u'ๅนฒ่Šฑ': u'ไนพ่Šฑ', u'ๅนฒๅˆ': u'ไนพ่Šป', u'ๅนฒ่‹”': u'ไนพ่‹”', u'ๅนฒ่Œจ่…Š': u'ไนพ่Œจ่‡˜', u'ๅนฒ่Œถ้’ฑ': u'ไนพ่Œถ้Œข', u'ๅนฒ่‰': u'ไนพ่‰', u'ๅนฒ่œ': u'ไนพ่œ', u'ๅนฒ่ฝ': u'ไนพ่ฝ', u'ๅนฒ็€': u'ไนพ่‘—', u'ๅนฒๅงœ': u'ไนพ่–‘', u'ๅนฒ่–ช': u'ไนพ่–ช', u'ๅนฒ่™”': u'ไนพ่™”', u'ๅนฒๅท': u'ไนพ่™Ÿ', u'ๅนฒ่ก€ๆต†': u'ไนพ่ก€ๆผฟ', u'ๅนฒ่กฃ': u'ไนพ่กฃ', u'ๅนฒ่ฃ‚': u'ไนพ่ฃ‚', u'ๅนฒไบฒ': u'ไนพ่ฆช', u'ไนพ่ฑกๅކ': u'ไนพ่ฑกๆ›†', u'ไนพ่ฑกๆ›†': u'ไนพ่ฑกๆ›†', u'ๅนฒ่ด': u'ไนพ่ฒ', u'ๅนฒ่ดง': u'ไนพ่ฒจ', u'ๅนฒ่บ': u'ไนพ่บ', u'ๅนฒ้€ผ': u'ไนพ้€ผ', u'ๅนฒ้…ช': u'ไนพ้…ช', u'ๅนฒ้…ตๆฏ': u'ไนพ้…ตๆฏ', u'ๅนฒ้†‹': u'ไนพ้†‹', u'ๅนฒ้‡': u'ไนพ้‡', u'ๅนฒ้‡': u'ไนพ้‡', u'ๅนฒ้˜ฟๅฅถ': u'ไนพ้˜ฟๅฅถ', u'ๅนฒ้š†': u'ไนพ้š†', u'ๅนฒ้›ท': u'ไนพ้›ท', u'ๅนฒ็”ต': u'ไนพ้›ป', u'ๅนฒ้œไนฑ': u'ไนพ้œไบ‚', u'ๅนฒ้ขก': u'ไนพ้ก™', u'ๅนฒๅฐ': u'ไนพ้ขฑ', u'ๅนฒ้ฅญ': u'ไนพ้ฃฏ', u'ๅนฒ้ฆ†': u'ไนพ้คจ', u'ๅนฒ็ณ‡': u'ไนพ้คฑ', u'ๅนฒ้ฆ': u'ไนพ้คพ', u'ๅนฒ้ฑผ': u'ไนพ้ญš', u'ๅนฒ้ฒœ': u'ไนพ้ฎฎ', u'ๅนฒ้ข': u'ไนพ้บต', u'ไนฑๅ‘': u'ไบ‚้ซฎ', u'ไนฑๅ“„': u'ไบ‚้ฌจ', u'ไนฑๅ“„ไธ่ฟ‡ๆฅ': u'ไบ‚้ฌจไธ้Žไพ†', u'ไบ†ๅ…‹ๅˆถ': u'ไบ†ๅ‰‹ๅˆถ', u'ไบ‹ๆƒ…ๅนฒ่„†': u'ไบ‹ๆƒ…ๅนฒ่„†', u'ไบ‹ๆœ‰ๆ–—ๅทง': u'ไบ‹ๆœ‰้ฌฅๅทง', u'ไบ‹่ฟน': u'ไบ‹่ฟน', u'ไบ‹้ƒฝๅนฒ่„†': u'ไบ‹้ƒฝๅนฒ่„†', u'ไบŒไธๆฃฑ็™ป': u'ไบŒไธ็จœ็™ป', u'ไบŒไธช': u'ไบŒๅ€‹', u'ไบŒๅ‡บๅˆŠ': u'ไบŒๅ‡บๅˆŠ', u'ไบŒๅ‡บๅฃ': u'ไบŒๅ‡บๅฃ', u'ไบŒๅ‡บ็‰ˆ': u'ไบŒๅ‡บ็‰ˆ', u'ไบŒๅ‡บ็”Ÿ': u'ไบŒๅ‡บ็”Ÿ', u'ไบŒๅ‡บ็ฅๅฑฑ': u'ไบŒๅ‡บ็ฅๅฑฑ', u'ไบŒๅ‡บ้€ƒ': u'ไบŒๅ‡บ้€ƒ', u'ไบŒๅˆ’': u'ไบŒๅŠƒ', u'ไบŒๅชๅพ—': u'ไบŒๅชๅพ—', u'ไบŒๅคฉๅŽ': u'ไบŒๅคฉๅพŒ', u'ไบŒไป‘': u'ไบŒๅด™', u'ไบŒ็ผถ้’Ÿๆƒ‘': u'ไบŒ็ผถ้˜ๆƒ‘', u'ไบŒ่€ๆฟ': u'ไบŒ่€ๆฟ', u'ไบŒ่™Ž็›ธๆ–—': u'ไบŒ่™Ž็›ธ้ฌฅ', u'ไบŒ้‡Œๅคด': u'ไบŒ้‡Œ้ ญ', u'ไบŒ้‡Œ้ ญ': u'ไบŒ้‡Œ้ ญ', u'ไบŒๅช': u'ไบŒ้šป', u'ไบŒไฝ™': u'ไบŒ้ค˜', u'ไบŽไธน': u'ไบŽไธน', u'ไบŽไบŽ': u'ไบŽไบŽ', u'ไบŽไปๆณฐ': u'ไบŽไปๆณฐ', u'ไบŽไฝณๅ‰': u'ไบŽไฝณๅ‰', u'ไบŽไผŸๅ›ฝ': u'ไบŽๅ‰ๅœ‹', u'ไบŽๅ‰ๅœ‹': u'ไบŽๅ‰ๅœ‹', u'ไบŽๅ…‰้ ': u'ไบŽๅ…‰้ ', u'ไบŽๅ…‰่ฟœ': u'ไบŽๅ…‰้ ', u'ไบŽๅ…‹-่˜ญๅคš็ธฃ': 
u'ไบŽๅ…‹-่˜ญๅคš็ธฃ', u'ไบŽๅ…‹-ๅ…ฐๅคšๅŽฟ': u'ไบŽๅ…‹-่˜ญๅคš็ธฃ', u'ไบŽๅ…‹ๅ‹’': u'ไบŽๅ…‹ๅ‹’', u'ไบŽๅ†•': u'ไบŽๅ†•', u'ไบŽๅ‡ŒๅฅŽ': u'ไบŽๅ‡ŒๅฅŽ', u'ไบŽๅ‹’': u'ไบŽๅ‹’', u'ไบŽๅŒ–่™Ž': u'ไบŽๅŒ–่™Ž', u'ไบŽๅ ๅ…ƒ': u'ไบŽๅ ๅ…ƒ', u'ไบŽๅฐ็…™': u'ไบŽๅฐ็…™', u'ไบŽๅฐ็ƒŸ': u'ไบŽๅฐ็…™', u'ไบŽๅณไปป': u'ไบŽๅณไปป', u'ไบŽๅ‰': u'ไบŽๅ‰', u'ไบŽๅ“ๆตท': u'ไบŽๅ“ๆตท', u'ไบŽๅ›ฝๆกข': u'ไบŽๅœ‹ๆฅจ', u'ไบŽๅœ‹ๆฅจ': u'ไบŽๅœ‹ๆฅจ', u'ไบŽๅš': u'ไบŽๅ …', u'ไบŽๅ …': u'ไบŽๅ …', u'ไบŽๅคงๅฏถ': u'ไบŽๅคงๅฏถ', u'ไบŽๅคงๅฎ': u'ไบŽๅคงๅฏถ', u'ไบŽๅคฉไป': u'ไบŽๅคฉไป', u'ไบŽๅฅ‡ๅบ“ๆœๅ…‹': u'ไบŽๅฅ‡ๅบซๆœๅ…‹', u'ไบŽๅฅ‡ๅบซๆœๅ…‹': u'ไบŽๅฅ‡ๅบซๆœๅ…‹', u'ไบŽๅง“': u'ไบŽๅง“', u'ไบŽๅจœ': u'ไบŽๅจœ', u'ไบŽๅจŸ': u'ไบŽๅจŸ', u'ไบŽๅญๅƒ': u'ไบŽๅญๅƒ', u'ไบŽๅญ”ๅ…ผ': u'ไบŽๅญ”ๅ…ผ', u'ไบŽๅญธๅฟ ': u'ไบŽๅญธๅฟ ', u'ไบŽๅญฆๅฟ ': u'ไบŽๅญธๅฟ ', u'ไบŽๅฎถๅ ก': u'ไบŽๅฎถๅ ก', u'ไบŽๅฏ˜': u'ไบŽๅฏ˜', u'ไบŽๅฐไผŸ': u'ไบŽๅฐๅ‰', u'ไบŽๅฐๅ‰': u'ไบŽๅฐๅ‰', u'ไบŽๅฐๅฝค': u'ไบŽๅฐๅฝค', u'ไบŽๅฑฑ': u'ไบŽๅฑฑ', u'ไบŽๅฑฑๅ›ฝ': u'ไบŽๅฑฑๅœ‹', u'ไบŽๅฑฑๅœ‹': u'ไบŽๅฑฑๅœ‹', u'ไบŽๅธฅ': u'ไบŽๅธฅ', u'ไบŽๅธ…': u'ไบŽๅธฅ', u'ไบŽๅนผ่ป': u'ไบŽๅนผ่ป', u'ไบŽๅนผๅ†›': u'ไบŽๅนผ่ป', u'ไบŽๅบท้œ‡': u'ไบŽๅบท้œ‡', u'ไบŽๅปฃๆดฒ': u'ไบŽๅปฃๆดฒ', u'ไบŽๅนฟๆดฒ': u'ไบŽๅปฃๆดฒ', u'ไบŽๅผๆžš': u'ไบŽๅผๆžš', u'ไบŽๅพžๆฟ‚': u'ไบŽๅพžๆฟ‚', u'ไบŽไปŽๆฟ‚': u'ไบŽๅพžๆฟ‚', u'ไบŽๅพทๆตท': u'ไบŽๅพทๆตท', u'ไบŽๅฟ—ๅฎ': u'ไบŽๅฟ—ๅฏง', u'ไบŽๅฟ—ๅฏง': u'ไบŽๅฟ—ๅฏง', u'ไบŽๆ€': u'ไบŽๆ€', u'ไบŽๆ…Ž่กŒ': u'ไบŽๆ…Ž่กŒ', u'ไบŽๆ…ง': u'ไบŽๆ…ง', u'ไบŽๆˆ้พ™': u'ไบŽๆˆ้พ', u'ไบŽๆˆ้พ': u'ไบŽๆˆ้พ', u'ไบŽๆŒฏ': u'ไบŽๆŒฏ', u'ไบŽๆŒฏๆญฆ': u'ไบŽๆŒฏๆญฆ', u'ไบŽๆ•': u'ไบŽๆ•', u'ไบŽๆ•ไธญ': u'ไบŽๆ•ไธญ', u'ไบŽๆ–Œ': u'ไบŽๆ–Œ', u'ไบŽๆ–ฏๅก”ๅพท': u'ไบŽๆ–ฏๅก”ๅพท', u'ไบŽๆ–ฏ็บณๅฐ”ๆ–ฏ่ด้‡Œ': u'ไบŽๆ–ฏ็ด็ˆพๆ–ฏ่ฒ้‡Œ', u'ไบŽๆ–ฏ็ด็ˆพๆ–ฏ่ฒ้‡Œ': u'ไบŽๆ–ฏ็ด็ˆพๆ–ฏ่ฒ้‡Œ', u'ไบŽๆ–ฏ่พพๅฐ”': u'ไบŽๆ–ฏ้”็ˆพ', u'ไบŽๆ–ฏ้”็ˆพ': u'ไบŽๆ–ฏ้”็ˆพ', u'ไบŽๆ˜Žๆถ›': u'ไบŽๆ˜Žๆฟค', u'ไบŽๆ˜Žๆฟค': u'ไบŽๆ˜Žๆฟค', u'ไบŽๆ˜ฏไน‹': u'ไบŽๆ˜ฏไน‹', u'ไบŽๆ™จๆฅ ': u'ไบŽๆ™จๆฅ ', u'ไบŽๆ™ด': u'ไบŽๆ™ด', u'ไบŽๆœƒๆณณ': u'ไบŽๆœƒๆณณ', u'ไบŽไผšๆณณ': 
u'ไบŽๆœƒๆณณ', u'ไบŽๆ นไผŸ': u'ไบŽๆ นๅ‰', u'ไบŽๆ นๅ‰': u'ไบŽๆ นๅ‰', u'ไบŽๆ ผ': u'ไบŽๆ ผ', u'ไบŽๆจ‚': u'ไบŽๆจ‚', u'ไบŽๆ ‘ๆด': u'ไบŽๆจนๆฝ”', u'ไบŽๆจนๆฝ”': u'ไบŽๆจนๆฝ”', u'ไบŽๆฌฃๆบ': u'ไบŽๆฌฃๆบ', u'ไบŽๆญฃๅ‡': u'ไบŽๆญฃๆ˜‡', u'ไบŽๆญฃๆ˜‡': u'ไบŽๆญฃๆ˜‡', u'ไบŽๆญฃๆ˜Œ': u'ไบŽๆญฃๆ˜Œ', u'ไบŽๅฝ’': u'ไบŽๆญธ', u'ไบŽๆฐธๆณข': u'ไบŽๆฐธๆณข', u'ไบŽๆฑŸ้œ‡': u'ไบŽๆฑŸ้œ‡', u'ไบŽๆณข': u'ไบŽๆณข', u'ไบŽๆดชๅŒบ': u'ไบŽๆดชๅ€', u'ไบŽๆดชๅ€': u'ไบŽๆดชๅ€', u'ไบŽๆตฉๅจ': u'ไบŽๆตฉๅจ', u'ไบŽๆตทๆด‹': u'ไบŽๆตทๆด‹', u'ไบŽๆน˜ๅ…ฐ': u'ไบŽๆน˜่˜ญ', u'ไบŽๆน˜่˜ญ': u'ไบŽๆน˜่˜ญ', u'ไบŽๆผข่ถ…': u'ไบŽๆผข่ถ…', u'ไบŽๆฑ‰่ถ…': u'ไบŽๆผข่ถ…', u'ไบŽๆณฝๅฐ”': u'ไบŽๆพค็ˆพ', u'ไบŽๆพค็ˆพ': u'ไบŽๆพค็ˆพ', u'ไบŽๆถ›': u'ไบŽๆฟค', u'ไบŽๆฟค': u'ไบŽๆฟค', u'ไบŽ็ˆพๅฒ‘': u'ไบŽ็ˆพๅฒ‘', u'ไบŽๅฐ”ๅฒ‘': u'ไบŽ็ˆพๅฒ‘', u'ไบŽๅฐ”ๆ น': u'ไบŽ็ˆพๆ น', u'ไบŽ็ˆพๆ น': u'ไบŽ็ˆพๆ น', u'ไบŽๅฐ”้‡Œๅ…‹': u'ไบŽ็ˆพ้‡Œๅ…‹', u'ไบŽ็ˆพ้‡Œๅ…‹': u'ไบŽ็ˆพ้‡Œๅ…‹', u'ไบŽ็‰นๆฃฎ': u'ไบŽ็‰นๆฃฎ', u'ไบŽ็މ็ซ‹': u'ไบŽ็މ็ซ‹', u'ไบŽ็”ฐ': u'ไบŽ็”ฐ', u'ไบŽ็ฆ': u'ไบŽ็ฆ', u'ไบŽ็ง€ๆ•': u'ไบŽ็ง€ๆ•', u'ไบŽ็ด ็ง‹': u'ไบŽ็ด ็ง‹', u'ไบŽ็พŽไบบ': u'ไบŽ็พŽไบบ', u'ไบŽ่‹ฅๆœจ': u'ไบŽ่‹ฅๆœจ', u'ไบŽ่”ญ้œ–': u'ไบŽ่”ญ้œ–', u'ไบŽ่ซ้œ–': u'ไบŽ่”ญ้œ–', u'ไบŽ่กก': u'ไบŽ่กก', u'ไบŽ่ฅฟ็ฟฐ': u'ไบŽ่ฅฟ็ฟฐ', u'ไบŽ่ฌ™': u'ไบŽ่ฌ™', u'ไบŽ่ฐฆ': u'ไบŽ่ฌ™', u'ไบŽ่ฒ็ˆพ': u'ไบŽ่ฒ็ˆพ', u'ไบŽ่ดๅฐ”': u'ไบŽ่ฒ็ˆพ', u'ไบŽ่ต ': u'ไบŽ่ดˆ', u'ไบŽ่ดˆ': u'ไบŽ่ดˆ', u'ไบŽ่ถŠ': u'ไบŽ่ถŠ', u'ไบŽๅ†›': u'ไบŽ่ป', u'ไบŽ่ป': u'ไบŽ่ป', u'ไบŽ้“ๆณ‰': u'ไบŽ้“ๆณ‰', u'ไบŽ่ฟœไผŸ': u'ไบŽ้ ๅ‰', u'ไบŽ้ ๅ‰': u'ไบŽ้ ๅ‰', u'ไบŽ้ƒฝ็ธฃ': u'ไบŽ้ƒฝ็ธฃ', u'ไบŽ้ƒฝๅŽฟ': u'ไบŽ้ƒฝ็ธฃ', u'ไบŽ้‡ŒๅฏŸ': u'ไบŽ้‡ŒๅฏŸ', u'ไบŽ้˜—': u'ไบŽ้—', u'ไบŽ้›™ๆˆˆ': u'ไบŽ้›™ๆˆˆ', u'ไบŽๅŒๆˆˆ': u'ไบŽ้›™ๆˆˆ', u'ไบŽ้œ‡ๅฏฐ': u'ไบŽ้œ‡ๅฏฐ', u'ไบŽ้œ‡็Žฏ': u'ไบŽ้œ‡็’ฐ', u'ไบŽ้œ‡็’ฐ': u'ไบŽ้œ‡็’ฐ', u'ไบŽ้–': u'ไบŽ้–', u'ไบŽ้ž้—‡': u'ไบŽ้ž้—‡', u'ไบŽ้Ÿ‹ๆ–ฏๅฑˆ่Š': u'ไบŽ้Ÿ‹ๆ–ฏๅฑˆ่Š', u'ไบŽ้Ÿฆๆ–ฏๅฑˆ่Žฑ': u'ไบŽ้Ÿ‹ๆ–ฏๅฑˆ่Š', u'ไบŽ้ฃŽๆ”ฟ': u'ไบŽ้ขจๆ”ฟ', u'ไบŽ้ขจๆ”ฟ': u'ไบŽ้ขจๆ”ฟ', u'ไบŽ้ฃž': u'ไบŽ้ฃ›', u'ไบŽไฝ™ๆ›ฒๆŠ˜': u'ไบŽ้ค˜ๆ›ฒๆŠ˜', u'ไบŽๅ‡คๆก': 
u'ไบŽ้ณณๆก', u'ไบŽ้ณณๆก': u'ไบŽ้ณณๆก', u'ไบŽๅ‡ค่‡ณ': u'ไบŽ้ณณ่‡ณ', u'ไบŽ้ณณ่‡ณ': u'ไบŽ้ณณ่‡ณ', u'ไบŽ้ป˜ๅฅฅ': u'ไบŽ้ป˜ๅฅง', u'ไบŽ้ป˜ๅฅง': u'ไบŽ้ป˜ๅฅง', u'ไบ‘ไนŽ': u'ไบ‘ไนŽ', u'ไบ‘ไบ‘': u'ไบ‘ไบ‘', u'ไบ‘ไฝ•': u'ไบ‘ไฝ•', u'ไบ‘ไธบ': u'ไบ‘็‚บ', u'ไบ‘็‚บ': u'ไบ‘็‚บ', u'ไบ‘็„ถ': u'ไบ‘็„ถ', u'ไบ‘ๅฐ”': u'ไบ‘็ˆพ', u'ไบ‘๏ผš': u'ไบ‘๏ผš', u'ไบ”ไธช': u'ไบ”ๅ€‹', u'ไบ”ๅ‡บๅˆŠ': u'ไบ”ๅ‡บๅˆŠ', u'ไบ”ๅ‡บๅฃ': u'ไบ”ๅ‡บๅฃ', u'ไบ”ๅ‡บ็‰ˆ': u'ไบ”ๅ‡บ็‰ˆ', u'ไบ”ๅ‡บ็”Ÿ': u'ไบ”ๅ‡บ็”Ÿ', u'ไบ”ๅ‡บ็ฅๅฑฑ': u'ไบ”ๅ‡บ็ฅๅฑฑ', u'ไบ”ๅ‡บ้€ƒ': u'ไบ”ๅ‡บ้€ƒ', u'ไบ”ๅˆ’': u'ไบ”ๅŠƒ', u'ไบ”ๅคฉๅŽ': u'ไบ”ๅคฉๅพŒ', u'ไบ”ๅฒณ': u'ไบ”ๅถฝ', u'ไบ”่ฐท': u'ไบ”็ฉ€', u'ไบ”ๆ‰Ž': u'ไบ”็ดฎ', u'ไบ”่กŒ็”Ÿๅ…‹': u'ไบ”่กŒ็”Ÿๅ‰‹', u'ไบ”่ฐท็Ž‹ๅŒ—่ก—': u'ไบ”่ฐท็Ž‹ๅŒ—่ก—', u'ไบ”่ฐท็Ž‹ๅ—่ก—': u'ไบ”่ฐท็Ž‹ๅ—่ก—', u'ไบ”ๅช': u'ไบ”้šป', u'ไบ”ไฝ™': u'ไบ”้ค˜', u'ไบ”ๅ‡บ': u'ไบ”้ฝฃ', u'ไบ•ๅนฒๆ‘ง่ดฅ': u'ไบ•ๆฆฆๆ‘งๆ•—', u'ไบ•้‡Œ': u'ไบ•่ฃก', u'ไบšไบŽ': u'ไบžๆ–ผ', u'ไบš็พŽๅฐผไบšๅކ': u'ไบž็พŽๅฐผไบžๆ›†', u'ไบคๆ‰˜': u'ไบค่จ—', u'ไบคๆธธ': u'ไบค้Š', u'ไบคๅ“„': u'ไบค้ฌจ', u'ไบฆไบ‘': u'ไบฆไบ‘', u'ไบฆๅบ„ไบฆ่ฐ': u'ไบฆ่ŽŠไบฆ่ซง', u'ไบฎไธ‘': u'ไบฎ้†œ', u'ไบฎ้’Ÿ': u'ไบฎ้˜', u'ไบบไบ‘': u'ไบบไบ‘', u'ไบบๅ‚ๅŠ ': u'ไบบๅƒๅŠ ', u'ไบบๅ‚ๅฑ•': u'ไบบๅƒๅฑ•', u'ไบบๅ‚ๆˆ˜': u'ไบบๅƒๆˆฐ', u'ไบบๅ‚ๆ‹œ': u'ไบบๅƒๆ‹œ', u'ไบบๅ‚ๆ”ฟ': u'ไบบๅƒๆ”ฟ', u'ไบบๅ‚็…ง': u'ไบบๅƒ็…ง', u'ไบบๅ‚็œ‹': u'ไบบๅƒ็œ‹', u'ไบบๅ‚็ฆ…': u'ไบบๅƒ็ฆช', u'ไบบๅ‚่€ƒ': u'ไบบๅƒ่€ƒ', u'ไบบๅ‚ไธŽ': u'ไบบๅƒ่ˆ‡', u'ไบบๅ‚่ง': u'ไบบๅƒ่ฆ‹', u'ไบบๅ‚่ง‚': u'ไบบๅƒ่ง€', u'ไบบๅ‚่ฐ‹': u'ไบบๅƒ่ฌ€', u'ไบบๅ‚่ฎฎ': u'ไบบๅƒ่ญฐ', u'ไบบๅ‚่ตž': u'ไบบๅƒ่ดŠ', u'ไบบๅ‚้€': u'ไบบๅƒ้€', u'ไบบๅ‚้€‰': u'ไบบๅƒ้ธ', u'ไบบๅ‚้…Œ': u'ไบบๅƒ้…Œ', u'ไบบๅ‚้˜…': u'ไบบๅƒ้–ฑ', u'ไบบๅฆ‚้ฃŽๅŽๅ…ฅๆฑŸไบ‘': u'ไบบๅฆ‚้ขจๅพŒๅ…ฅๆฑŸ้›ฒ', u'ไบบๆฌฒ': u'ไบบๆ…พ', u'ไบบ็‰ฉๅฟ—': u'ไบบ็‰ฉ่ชŒ', u'ไบบๅ‚': u'ไบบ่”˜', u'ไป€้”ฆ้ข': u'ไป€้Œฆ้บต', u'ไป€ไนˆ': u'ไป€้บผ', u'ไป‡ไป‡': u'ไป‡่ฎŽ', u'ไป–ๅ…‹ๅˆถ': u'ไป–ๅ‰‹ๅˆถ', u'ไป–้’Ÿ': u'ไป–้˜', u'ไป˜ๆ‰˜': u'ไป˜่จ—', u'ไป™ๅŽ': u'ไป™ๅŽ', u'ไป™่ฏ': u'ไป™่—ฅ', u'ไปฃ็ ่กจ': u'ไปฃ็ขผ่กจ', 
u'ไปฃ่กจ': u'ไปฃ่กจ', u'ไปคไบบๅ‘ๆŒ‡': u'ไปคไบบ้ซฎๆŒ‡', u'ไปฅ่‡ชๅˆถ': u'ไปฅ่‡ชๅˆถ', u'ไปฐ่ฏ': u'ไปฐ่—ฅ', u'ไปถ้’Ÿ': u'ไปถ้˜', u'ไปปไฝ•่กจๆผ”': u'ไปปไฝ•่กจๆผ”', u'ไปปไฝ•่กจ็คบ': u'ไปปไฝ•่กจ็คบ', u'ไปปไฝ•่กจ้”': u'ไปปไฝ•่กจ้”', u'ไปปไฝ•่กจ่พพ': u'ไปปไฝ•่กจ้”', u'ไปปไฝ•่กจ': u'ไปปไฝ•้Œถ', u'ไปปไฝ•้’Ÿ': u'ไปปไฝ•้˜', u'ไปปไฝ•้’Ÿ่กจ': u'ไปปไฝ•้˜้Œถ', u'ไปปๆ•™ไบŽ': u'ไปปๆ•™ๆ–ผ', u'ไปปไบŽ': u'ไปปๆ–ผ', u'ไปฟๅˆถ': u'ไปฟ่ฃฝ', u'ไผๅˆ’': u'ไผๅŠƒ', u'ไผŠไบŽๆน–ๅบ•': u'ไผŠไบŽๆน–ๅบ•', u'ไผŠๅบœ้ข': u'ไผŠๅบœ้บต', u'ไผŠๆ–ฏๅ…ฐๆ•™ๅކ': u'ไผŠๆ–ฏ่˜ญๆ•™ๆ›†', u'ไผŠๆ–ฏๅ…ฐๆ•™ๅކๅฒ': u'ไผŠๆ–ฏ่˜ญๆ•™ๆญทๅฒ', u'ไผŠๆ–ฏๅ…ฐๅކ': u'ไผŠๆ–ฏ่˜ญๆ›†', u'ไผŠๆ–ฏๅ…ฐๅކๅฒ': u'ไผŠๆ–ฏ่˜ญๆญทๅฒ', u'ไผŠ้ƒ': u'ไผŠ้ฌฑ', u'ไผๅ‡ ': u'ไผๅ‡ ', u'ไผ็ฝชๅŠๆฐ‘': u'ไผ็ฝชๅผ”ๆฐ‘', u'ไผ‘ๅพ': u'ไผ‘ๅพต', u'ไผ™ๅคด': u'ไผ™้ ญ', u'ไผดๆธธ': u'ไผด้Š', u'ไผผไบŽ': u'ไผผๆ–ผ', u'ไฝ†ไบ‘': u'ไฝ†ไบ‘', u'ๅธƒไบŽ': u'ไฝˆๆ–ผ', u'ๅธƒ้“': u'ไฝˆ้“', u'ๅธƒ้›ทใ€': u'ไฝˆ้›ทใ€', u'ๅธƒ้›ทใ€‚': u'ไฝˆ้›ทใ€‚', u'ๅธƒ้›ทๅฐ้”': u'ไฝˆ้›ทๅฐ้Ž–', u'ๅธƒ้›ท็š„': u'ไฝˆ้›ท็š„', u'ๅธƒ้›ท่‰‡': u'ไฝˆ้›ท่‰‡', u'ๅธƒ้›ท่ˆฐ': u'ไฝˆ้›ท่‰ฆ', u'ๅธƒ้›ท้€Ÿๅบฆ': u'ไฝˆ้›ท้€Ÿๅบฆ', u'ๅธƒ้›ท๏ผŒ': u'ไฝˆ้›ท๏ผŒ', u'ๅธƒ้›ท๏ผ›': u'ไฝˆ้›ท๏ผ›', u'ไฝไบŽ': u'ไฝๆ–ผ', u'ไฝๅ‡†': u'ไฝๆบ–', u'ไฝŽๆดผ': u'ไฝŽๆดผ', u'ไฝๆ‰Ž': u'ไฝ็ดฎ', u'ๅ 0': u'ไฝ”0', u'ๅ 1': u'ไฝ”1', u'ๅ 2': u'ไฝ”2', u'ๅ 3': u'ไฝ”3', u'ๅ 4': u'ไฝ”4', u'ๅ 5': u'ไฝ”5', u'ๅ 6': u'ไฝ”6', u'ๅ 7': u'ไฝ”7', u'ๅ 8': u'ไฝ”8', u'ๅ 9': u'ไฝ”9', u'ๅ A': u'ไฝ”A', u'ๅ B': u'ไฝ”B', u'ๅ C': u'ไฝ”C', u'ๅ D': u'ไฝ”D', u'ๅ E': u'ไฝ”E', u'ๅ F': u'ไฝ”F', u'ๅ G': u'ไฝ”G', u'ๅ H': u'ไฝ”H', u'ๅ I': u'ไฝ”I', u'ๅ J': u'ไฝ”J', u'ๅ K': u'ไฝ”K', u'ๅ L': u'ไฝ”L', u'ๅ M': u'ไฝ”M', u'ๅ N': u'ไฝ”N', u'ๅ O': u'ไฝ”O', u'ๅ P': u'ไฝ”P', u'ๅ Q': u'ไฝ”Q', u'ๅ R': u'ไฝ”R', u'ๅ S': u'ไฝ”S', u'ๅ T': u'ไฝ”T', u'ๅ U': u'ไฝ”U', u'ๅ V': u'ไฝ”V', u'ๅ W': u'ไฝ”W', u'ๅ X': u'ไฝ”X', u'ๅ Y': u'ไฝ”Y', u'ๅ Z': u'ไฝ”Z', u'ๅ a': u'ไฝ”a', u'ๅ b': u'ไฝ”b', u'ๅ c': u'ไฝ”c', u'ๅ d': u'ไฝ”d', u'ๅ e': u'ไฝ”e', u'ๅ f': 
u'ไฝ”f', u'ๅ g': u'ไฝ”g', u'ๅ h': u'ไฝ”h', u'ๅ i': u'ไฝ”i', u'ๅ j': u'ไฝ”j', u'ๅ k': u'ไฝ”k', u'ๅ l': u'ไฝ”l', u'ๅ m': u'ไฝ”m', u'ๅ n': u'ไฝ”n', u'ๅ o': u'ไฝ”o', u'ๅ p': u'ไฝ”p', u'ๅ q': u'ไฝ”q', u'ๅ r': u'ไฝ”r', u'ๅ s': u'ไฝ”s', u'ๅ t': u'ไฝ”t', u'ๅ u': u'ไฝ”u', u'ๅ v': u'ไฝ”v', u'ๅ w': u'ไฝ”w', u'ๅ x': u'ไฝ”x', u'ๅ y': u'ไฝ”y', u'ๅ z': u'ไฝ”z', u'ๅ ใ€‡': u'ไฝ”ใ€‡', u'ๅ ไธ€': u'ไฝ”ไธ€', u'ๅ ไธƒ': u'ไฝ”ไธƒ', u'ๅ ไธ‡': u'ไฝ”ไธ‡', u'ๅ ไธ‰': u'ไฝ”ไธ‰', u'ๅ ไธŠ้ฃŽ': u'ไฝ”ไธŠ้ขจ', u'ๅ ไธ‹': u'ไฝ”ไธ‹', u'ๅ ไธ‹้ฃŽ': u'ไฝ”ไธ‹้ขจ', u'ๅ ไธๅ ': u'ไฝ”ไธไฝ”', u'ๅ ไธ่ถณ': u'ไฝ”ไธ่ถณ', u'ๅ ไธ–็•Œ': u'ไฝ”ไธ–็•Œ', u'ๅ ไธญ': u'ไฝ”ไธญ', u'ๅ ไธป': u'ไฝ”ไธป', u'ๅ ไน': u'ไฝ”ไน', u'ๅ ไบ†': u'ไฝ”ไบ†', u'ๅ ไบŒ': u'ไฝ”ไบŒ', u'ๅ ไบ”': u'ไฝ”ไบ”', u'ๅ ไบบไพฟๅฎœ': u'ไฝ”ไบบไพฟๅฎœ', u'ๅ ไฝ': u'ไฝ”ไฝ', u'ๅ ไฝ': u'ไฝ”ไฝ', u'ๅ ๅ ': u'ไฝ”ไฝ”', u'ๅ ไพฟๅฎœ': u'ไฝ”ไพฟๅฎœ', u'ๅ ไฟ„': u'ไฝ”ไฟ„', u'ๅ ไธช': u'ไฝ”ๅ€‹', u'ๅ ไธชไฝ': u'ไฝ”ๅ€‹ไฝ', u'ๅ ๅœ่ฝฆ': u'ไฝ”ๅœ่ปŠ', u'ๅ ไบฟ': u'ไฝ”ๅ„„', u'ๅ ไผ˜': u'ไฝ”ๅ„ช', u'ๅ ๅ…ˆ': u'ไฝ”ๅ…ˆ', u'ๅ ๅ…‰': u'ไฝ”ๅ…‰', u'ๅ ๅ…จ': u'ไฝ”ๅ…จ', u'ๅ ไธค': u'ไฝ”ๅ…ฉ', u'ๅ ๅ…ซ': u'ไฝ”ๅ…ซ', u'ๅ ๅ…ญ': u'ไฝ”ๅ…ญ', u'ๅ ๅˆ†': u'ไฝ”ๅˆ†', u'ๅ ๅˆฐ': u'ไฝ”ๅˆฐ', u'ๅ ๅŠ ': u'ไฝ”ๅŠ ', u'ๅ ๅŠฃ': u'ไฝ”ๅŠฃ', u'ๅ ๅŒ—': u'ไฝ”ๅŒ—', u'ๅ ๅ': u'ไฝ”ๅ', u'ๅ ๅƒ': u'ไฝ”ๅƒ', u'ๅ ๅŠ': u'ไฝ”ๅŠ', u'ๅ ๅ—': u'ไฝ”ๅ—', u'ๅ ๅฐ': u'ไฝ”ๅฐ', u'ๅ ๅŽป': u'ไฝ”ๅŽป', u'ๅ ๅ–': u'ไฝ”ๅ–', u'ๅ ๅฐ': u'ไฝ”ๅฐ', u'ๅ ๅ“บไนณ': u'ไฝ”ๅ“บไนณ', u'ๅ ๅ—ซ': u'ไฝ”ๅ›', u'ๅ ๅ››': u'ไฝ”ๅ››', u'ๅ ๅ›ฝๅ†…': u'ไฝ”ๅœ‹ๅ…ง', u'ๅ ๅœจ': u'ไฝ”ๅœจ', u'ๅ ๅœฐ': u'ไฝ”ๅœฐ', u'ๅ ๅœบ': u'ไฝ”ๅ ด', u'ๅ ๅŽ‹': u'ไฝ”ๅฃ“', u'ๅ ๅคš': u'ไฝ”ๅคš', u'ๅ ๅคง': u'ไฝ”ๅคง', u'ๅ ๅฅฝ': u'ไฝ”ๅฅฝ', u'ๅ ๅฐ': u'ไฝ”ๅฐ', u'ๅ ๅฐ‘': u'ไฝ”ๅฐ‘', u'ๅ ๅฑ€้ƒจ': u'ไฝ”ๅฑ€้ƒจ', u'ๅ ๅฑ‹': u'ไฝ”ๅฑ‹', u'ๅ ๅฑฑ': u'ไฝ”ๅฑฑ', u'ๅ ๅธ‚ๅœบ': u'ไฝ”ๅธ‚ๅ ด', u'ๅ ๅนณๅ‡': u'ไฝ”ๅนณๅ‡', u'ๅ ๅบŠ': u'ไฝ”ๅบŠ', u'ๅ ๅบง': u'ไฝ”ๅบง', u'ๅ ๅŽ': u'ไฝ”ๅพŒ', u'ๅ ๅพ—': u'ไฝ”ๅพ—', u'ๅ ๅพท': u'ไฝ”ๅพท', 
u'ๅ ๆމ': u'ไฝ”ๆމ', u'ๅ ๆฎ': u'ไฝ”ๆ“š', u'ๅ ๆ•ดไฝ“': u'ไฝ”ๆ•ด้ซ”', u'ๅ ๆ–ฐ': u'ไฝ”ๆ–ฐ', u'ๅ ๆœ‰': u'ไฝ”ๆœ‰', u'ๅ ๆœ‰ๆฌฒ': u'ไฝ”ๆœ‰ๆ…พ', u'ๅ ไธœ': u'ไฝ”ๆฑ', u'ๅ ๆŸฅ': u'ไฝ”ๆŸฅ', u'ๅ ๆฌก': u'ไฝ”ๆฌก', u'ๅ ๆฏ”': u'ไฝ”ๆฏ”', u'ๅ ๆณ•': u'ไฝ”ๆณ•', u'ๅ ๆปก': u'ไฝ”ๆปฟ', u'ๅ ๆพณ': u'ไฝ”ๆพณ', u'ๅ ไธบ': u'ไฝ”็‚บ', u'ๅ ็އ': u'ไฝ”็އ', u'ๅ ็”จ': u'ไฝ”็”จ', u'ๅ ๆฏ•': u'ไฝ”็•ข', u'ๅ ็™พ': u'ไฝ”็™พ', u'ๅ ๅฐฝ': u'ไฝ”็›ก', u'ๅ ็จณ': u'ไฝ”็ฉฉ', u'ๅ ็ฝ‘': u'ไฝ”็ถฒ', u'ๅ ็บฟ': u'ไฝ”็ทš', u'ๅ ๆ€ป': u'ไฝ”็ธฝ', u'ๅ ็ผบ': u'ไฝ”็ผบ', u'ๅ ็พŽ': u'ไฝ”็พŽ', u'ๅ ่€•': u'ไฝ”่€•', u'ๅ ่‡ณๅคš': u'ไฝ”่‡ณๅคš', u'ๅ ่‡ณๅฐ‘': u'ไฝ”่‡ณๅฐ‘', u'ๅ ่‹ฑ': u'ไฝ”่‹ฑ', u'ๅ ็€': u'ไฝ”่‘—', u'ๅ ่‘ก': u'ไฝ”่‘ก', u'ๅ ่‹': u'ไฝ”่˜‡', u'ๅ ่ฅฟ': u'ไฝ”่ฅฟ', u'ๅ ่ต„ๆบ': u'ไฝ”่ณ‡ๆบ', u'ๅ ่ตท': u'ไฝ”่ตท', u'ๅ ่ถ…่ฟ‡': u'ไฝ”่ถ…้Ž', u'ๅ ่ฟ‡': u'ไฝ”้Ž', u'ๅ ้“': u'ไฝ”้“', u'ๅ ้›ถ': u'ไฝ”้›ถ', u'ๅ ้ ˜': u'ไฝ”้ ˜', u'ๅ ้ข†': u'ไฝ”้ ˜', u'ๅ ๅคด': u'ไฝ”้ ญ', u'ๅ ๅคด็ญน': u'ไฝ”้ ญ็ฑŒ', u'ๅ ้ฅญ': u'ไฝ”้ฃฏ', u'ๅ ้ฆ™': u'ไฝ”้ฆ™', u'ๅ ้ฉฌ': u'ไฝ”้ฆฌ', u'ๅ ้ซ˜ๆžๅ„ฟ': u'ไฝ”้ซ˜ๆžๅ…’', u'ๅ ๏ผ': u'ไฝ”๏ผ', u'ๅ ๏ผ‘': u'ไฝ”๏ผ‘', u'ๅ ๏ผ’': u'ไฝ”๏ผ’', u'ๅ ๏ผ“': u'ไฝ”๏ผ“', u'ๅ ๏ผ”': u'ไฝ”๏ผ”', u'ๅ ๏ผ•': u'ไฝ”๏ผ•', u'ๅ ๏ผ–': u'ไฝ”๏ผ–', u'ๅ ๏ผ—': u'ไฝ”๏ผ—', u'ๅ ๏ผ˜': u'ไฝ”๏ผ˜', u'ๅ ๏ผ™': u'ไฝ”๏ผ™', u'ๅ ๏ผก': u'ไฝ”๏ผก', u'ๅ ๏ผข': u'ไฝ”๏ผข', u'ๅ ๏ผฃ': u'ไฝ”๏ผฃ', u'ๅ ๏ผค': u'ไฝ”๏ผค', u'ๅ ๏ผฅ': u'ไฝ”๏ผฅ', u'ๅ ๏ผฆ': u'ไฝ”๏ผฆ', u'ๅ ๏ผง': u'ไฝ”๏ผง', u'ๅ ๏ผจ': u'ไฝ”๏ผจ', u'ๅ ๏ผฉ': u'ไฝ”๏ผฉ', u'ๅ ๏ผช': u'ไฝ”๏ผช', u'ๅ ๏ผซ': u'ไฝ”๏ผซ', u'ๅ ๏ผฌ': u'ไฝ”๏ผฌ', u'ๅ ๏ผญ': u'ไฝ”๏ผญ', u'ๅ ๏ผฎ': u'ไฝ”๏ผฎ', u'ๅ ๏ผฏ': u'ไฝ”๏ผฏ', u'ๅ ๏ผฐ': u'ไฝ”๏ผฐ', u'ๅ ๏ผฑ': u'ไฝ”๏ผฑ', u'ๅ ๏ผฒ': u'ไฝ”๏ผฒ', u'ๅ ๏ผณ': u'ไฝ”๏ผณ', u'ๅ ๏ผด': u'ไฝ”๏ผด', u'ๅ ๏ผต': u'ไฝ”๏ผต', u'ๅ ๏ผถ': u'ไฝ”๏ผถ', u'ๅ ๏ผท': u'ไฝ”๏ผท', u'ๅ ๏ผธ': u'ไฝ”๏ผธ', u'ๅ ๏ผน': u'ไฝ”๏ผน', u'ๅ ๏ผบ': u'ไฝ”๏ผบ', u'ๅ ๏ฝ': u'ไฝ”๏ฝ', u'ๅ ๏ฝ‚': u'ไฝ”๏ฝ‚', u'ๅ ๏ฝƒ': u'ไฝ”๏ฝƒ', u'ๅ ๏ฝ„': u'ไฝ”๏ฝ„', u'ๅ ๏ฝ…': u'ไฝ”๏ฝ…', u'ๅ ๏ฝ†': 
u'ไฝ”๏ฝ†', u'ๅ ๏ฝ‡': u'ไฝ”๏ฝ‡', u'ๅ ๏ฝˆ': u'ไฝ”๏ฝˆ', u'ๅ ๏ฝ‰': u'ไฝ”๏ฝ‰', u'ๅ ๏ฝŠ': u'ไฝ”๏ฝŠ', u'ๅ ๏ฝ‹': u'ไฝ”๏ฝ‹', u'ๅ ๏ฝŒ': u'ไฝ”๏ฝŒ', u'ๅ ๏ฝ': u'ไฝ”๏ฝ', u'ๅ ๏ฝŽ': u'ไฝ”๏ฝŽ', u'ๅ ๏ฝ': u'ไฝ”๏ฝ', u'ๅ ๏ฝ': u'ไฝ”๏ฝ', u'ๅ ๏ฝ‘': u'ไฝ”๏ฝ‘', u'ๅ ๏ฝ’': u'ไฝ”๏ฝ’', u'ๅ ๏ฝ“': u'ไฝ”๏ฝ“', u'ๅ ๏ฝ”': u'ไฝ”๏ฝ”', u'ๅ ๏ฝ•': u'ไฝ”๏ฝ•', u'ๅ ๏ฝ–': u'ไฝ”๏ฝ–', u'ๅ ๏ฝ—': u'ไฝ”๏ฝ—', u'ๅ ๏ฝ˜': u'ไฝ”๏ฝ˜', u'ๅ ๏ฝ™': u'ไฝ”๏ฝ™', u'ๅ ๏ฝš': u'ไฝ”๏ฝš', u'ไฝ™ไธ‰่ƒœ': u'ไฝ™ไธ‰ๅ‹', u'ไฝ™ไธ‰ๅ‹': u'ไฝ™ไธ‰ๅ‹', u'ไฝ™ๅ…‰ไธญ': u'ไฝ™ๅ…‰ไธญ', u'ไฝ™ๅ…‰็”Ÿ': u'ไฝ™ๅ…‰็”Ÿ', u'ไฝ™ๅง“': u'ไฝ™ๅง“', u'ไฝ™ๅจๅพท': u'ไฝ™ๅจๅพท', u'ไฝ™ๅญๆ˜Ž': u'ไฝ™ๅญๆ˜Ž', u'ไฝ™ๆ€ๆ•': u'ไฝ™ๆ€ๆ•', u'ไฝ›็ฝ—ๆฃฑ่จ': u'ไฝ›็พ…็จœ่–ฉ', u'ไฝ›้’Ÿ': u'ไฝ›้˜', u'ไฝœๅ“้‡Œ': u'ไฝœๅ“่ฃก', u'ไฝœๅฅธ็Šฏ็ง‘': u'ไฝœๅงฆ็Šฏ็ง‘', u'ไฝœๅ‡†': u'ไฝœๆบ–', u'ไฝœๅบ„': u'ไฝœ่ŽŠ', u'ไฝ ๅ…‹ๅˆถ': u'ไฝ ๅ‰‹ๅˆถ', u'ไฝ ๆ–—ไบ†่ƒ†': u'ไฝ ๆ–—ไบ†่†ฝ', u'ไฝ ๆ‰ๅญๅ‘ๆ˜': u'ไฝ ็บ”ๅญ็™ผๆ˜', u'ไฝฃ้‡‘ๆ”ถ็›Š': u'ไฝฃ้‡‘ๆ”ถ็›Š', u'ไฝฃ้‡‘่ดน็”จ': u'ไฝฃ้‡‘่ฒป็”จ', u'ไฝณ่‚ด': u'ไฝณ่‚ด', u'ไฝณ้‡Œ้Žฎ': u'ไฝณ้‡Œ้Žฎ', u'ๅนถไธ€ไธไบŒ': u'ไฝตไธ€ไธไบŒ', u'ๅนถๅ…ฅ': u'ไฝตๅ…ฅ', u'ๅนถๅ…ผ': u'ไฝตๅ…ผ', u'ๅนถๅˆฐ': u'ไฝตๅˆฐ', u'ๅนถๅˆ': u'ไฝตๅˆ', u'ๅนถๅ': u'ไฝตๅ', u'ๅนถๅžไธ‹': u'ไฝตๅžไธ‹', u'ๅนถๆ‹ข': u'ไฝตๆ”', u'ๅนถๆกˆ': u'ไฝตๆกˆ', u'ๅนถๆต': u'ไฝตๆต', u'ๅนถ็ซ': u'ไฝต็ซ', u'ๅนถไธบไธ€ๅฎถ': u'ไฝต็‚บไธ€ๅฎถ', u'ๅนถไธบไธ€ไฝ“': u'ไฝต็‚บไธ€้ซ”', u'ๅนถไบง': u'ไฝต็”ข', u'ๅนถๅฝ“': u'ไฝต็•ถ', u'ๅนถๅ ': u'ไฝต็–Š', u'ๅนถๅ‘': u'ไฝต็™ผ', u'ๅนถ็ง‘': u'ไฝต็ง‘', u'ๅนถ็ฝ‘': u'ไฝต็ถฒ', u'ๅนถ็บฟ': u'ไฝต็ทš', u'ๅนถ่‚ฉๅญ': u'ไฝต่‚ฉๅญ', u'ๅนถ่ดญ': u'ไฝต่ณผ', u'ๅนถ้™ค': u'ไฝต้™ค', u'ๅนถ้ชจ': u'ไฝต้ชจ', u'ไฝฟๅ…ถๆ–—': u'ไฝฟๅ…ถ้ฌฅ', u'ๆฅไบŽ': u'ไพ†ๆ–ผ', u'ๆฅๅค': u'ไพ†่ค‡', u'ไพไป†': u'ไพๅƒ•', u'ไพ›ๅˆถ': u'ไพ›่ฃฝ', u'ไพไพไธ่ˆ': u'ไพไพไธๆจ', u'ไพๆ‰˜': u'ไพ่จ—', u'ไพตๅ ': u'ไพตไฝ”', u'ไพตๅนถ': u'ไพตไฝต', u'ไพตๅ ๅˆฐ': u'ไพตๅ ๅˆฐ', u'ไพตๅ ็ฝช': u'ไพตๅ ็ฝช', u'ไพฟ่ฏ': u'ไพฟ่—ฅ', u'็ณปๆ•ฐ': u'ไฟ‚ๆ•ธ', u'็ณปไธบ': u'ไฟ‚็‚บ', u'ไฟ„ๅ ': u'ไฟ„ไฝ”', 
u'ไฟ้™ฉๆŸœ': u'ไฟ้šชๆŸœ', u'ไฟกๆ‰˜่ดธๆ˜“': u'ไฟกๆ‰˜่ฒฟๆ˜“', u'ไฟกๆ‰˜': u'ไฟก่จ—', u'ไฟฎๆฐๆฅท': u'ไฟฎๆฐๆฅท', u'ไฟฎ็‚ผ': u'ไฟฎ้Š', u'ไฟฎ่ƒกๅˆ€': u'ไฟฎ้ฌๅˆ€', u'ไฟฏๅ†ฒ': u'ไฟฏ่ก', u'ไธชไบบ': u'ๅ€‹ไบบ', u'ไธช้‡Œ': u'ๅ€‹่ฃก', u'ไธช้’Ÿ': u'ๅ€‹้˜', u'ไธช้’Ÿ่กจ': u'ๅ€‹้˜้Œถ', u'ไปฌๅ…‹ๅˆถ': u'ๅ€‘ๅ‰‹ๅˆถ', u'ไปฌๆ–—ไบ†่ƒ†': u'ๅ€‘ๆ–—ไบ†่†ฝ', u'ๅ€’็ปทๅญฉๅ„ฟ': u'ๅ€’็นƒๅญฉๅ…’', u'ๅนธๅ…': u'ๅ€–ๅ…', u'ๅนธๅญ˜': u'ๅ€–ๅญ˜', u'ๅนธๅนธ': u'ๅ€–ๅนธ', u'ๅ€›ไธ‘': u'ๅ€›้†œ', u'ๅ€ŸๅฌไบŽ่‹': u'ๅ€Ÿ่ฝๆ–ผ่พ', u'ๅ€ฆๆธธ': u'ๅ€ฆ้Š', u'ๅ‡่ฏ': u'ๅ‡่—ฅ', u'ๅ‡ๆ‰˜': u'ๅ‡่จ—', u'ๅ‡ๅ‘': u'ๅ‡้ซฎ', u'ๅŽๅนฒ': u'ๅŽไนพ', u'ๅšๅบ„': u'ๅš่ŽŠ', u'ๅœๅœๅฝ“ๅฝ“': u'ๅœๅœ็•ถ็•ถ', u'ๅœๅพ': u'ๅœๅพต', u'ๅœๅˆถ': u'ๅœ่ฃฝ', u'ๅท้ธกไธ็€': u'ๅท้›žไธ่‘—', u'ไผช่ฏ': u'ๅฝ่—ฅ', u'ๅค‡ๆณจ': u'ๅ‚™่จป', u'ๅฎถไผ™': u'ๅ‚ขไผ™', u'ๅฎถไฟฑ': u'ๅ‚ขไฟฑ', u'ๅฎถๅ…ท': u'ๅ‚ขๅ…ท', u'ๅ‚ฌๅนถ': u'ๅ‚ฌไฝต', u'ไฝฃไธญไฝผไฝผ': u'ๅ‚ญไธญไฝผไฝผ', u'ไฝฃไบบ': u'ๅ‚ญไบบ', u'ไฝฃไป†': u'ๅ‚ญๅƒ•', u'ไฝฃๅ…ต': u'ๅ‚ญๅ…ต', u'ไฝฃๅทฅ': u'ๅ‚ญๅทฅ', u'ไฝฃๆ‡’': u'ๅ‚ญๆ‡ถ', u'ไฝฃไนฆ': u'ๅ‚ญๆ›ธ', u'ไฝฃ้‡‘': u'ๅ‚ญ้‡‘', u'ๅ‚ฒ้œœๆ–—้›ช': u'ๅ‚ฒ้œœ้ฌฅ้›ช', u'ไผ ไฝไบŽๅ››ๅคชๅญ': u'ๅ‚ณไฝไบŽๅ››ๅคชๅญ', u'ไผ ไบŽ': u'ๅ‚ณๆ–ผ', u'ไผค็—•็ดฏ็ดฏ': u'ๅ‚ท็—•็บ็บ', u'ๅ‚ป้‡Œๅ‚ปๆฐ”': u'ๅ‚ป่ฃกๅ‚ปๆฐฃ', u'ๅ€พๅค': u'ๅ‚พ่ค‡', u'ไป†ไบบ': u'ๅƒ•ไบบ', u'ไป†ไฝฟ': u'ๅƒ•ไฝฟ', u'ไป†ไป†': u'ๅƒ•ๅƒ•', u'ไป†ๅƒฎ': u'ๅƒ•ๅƒฎ', u'ไป†ๅ': u'ๅƒ•ๅ', u'ไป†ๅ›บๆ€€ๆฉ': u'ๅƒ•ๅ›บๆ‡ทๆฉ', u'ไป†ๅคซ': u'ๅƒ•ๅคซ', u'ไป†ๅง‘': u'ๅƒ•ๅง‘', u'ไป†ๅฉข': u'ๅƒ•ๅฉข', u'ไป†ๅฆ‡': u'ๅƒ•ๅฉฆ', u'ไป†ๅฐ„': u'ๅƒ•ๅฐ„', u'ไป†ๅฐ‘': u'ๅƒ•ๅฐ‘', u'ไป†ๅฝน': u'ๅƒ•ๅฝน', u'ไป†ไปŽ': u'ๅƒ•ๅพž', u'ไป†ๆ†Ž': u'ๅƒ•ๆ†Ž', u'ไป†ๆฌง': u'ๅƒ•ๆญ', u'ไป†็จ‹': u'ๅƒ•็จ‹', u'ไป†่™ฝ็ฝข้ฉฝ': u'ๅƒ•้›–็ฝท้ง‘', u'ไพฅๅนธ': u'ๅƒฅๅ€–', u'ๅƒฎไป†': u'ๅƒฎๅƒ•', u'้›‡ไธป': u'ๅƒฑไธป', u'้›‡ไบบ': u'ๅƒฑไบบ', u'้›‡ไฝฃ': u'ๅƒฑๅ‚ญ', u'้›‡ๅˆฐ': u'ๅƒฑๅˆฐ', u'้›‡ๅ‘˜': u'ๅƒฑๅ“ก', u'้›‡ๅทฅ': u'ๅƒฑๅทฅ', u'้›‡็”จ': u'ๅƒฑ็”จ', u'้›‡ๅ†œ': u'ๅƒฑ่พฒ', u'ไปช่Œƒ': u'ๅ„€็ฏ„', u'ไปช่กจ': u'ๅ„€้Œถ', u'ไบฟไธช': u'ๅ„„ๅ€‹', u'ไบฟๅคšๅช': u'ๅ„„ๅคš้šป', u'ไบฟๅคฉๅŽ': 
u'ๅ„„ๅคฉๅพŒ', u'ไบฟๅช': u'ๅ„„้šป', u'ไบฟไฝ™': u'ๅ„„้ค˜', u'ไฟญไป†': u'ๅ„‰ๅƒ•', u'ไฟญๆœด': u'ๅ„‰ๆจธ', u'ไฟญ็กฎไน‹ๆ•™': u'ๅ„‰็กฎไน‹ๆ•™', u'ๅ„’็•ฅๆ”น้ฉๅކ': u'ๅ„’็•ฅๆ”น้ฉๆ›†', u'ๅ„’็•ฅๆ”น้ฉๅކๅฒ': u'ๅ„’็•ฅๆ”น้ฉๆญทๅฒ', u'ๅ„’็•ฅๅކ': u'ๅ„’็•ฅๆ›†', u'ๅ„’็•ฅๅކๅฒ': u'ๅ„’็•ฅๆญทๅฒ', u'ๅฐฝๅฐฝ': u'ๅ„˜ๅ„˜', u'ๅฐฝๅ…ˆ': u'ๅ„˜ๅ…ˆ', u'ๅฐฝๅ…ถๆ‰€ๆœ‰': u'ๅ„˜ๅ…ถๆ‰€ๆœ‰', u'ๅฐฝๅŠ›': u'ๅ„˜ๅŠ›', u'ๅฐฝๅฏ่ƒฝ': u'ๅ„˜ๅฏ่ƒฝ', u'ๅฐฝๅฟซ': u'ๅ„˜ๅฟซ', u'ๅฐฝๆ—ฉ': u'ๅ„˜ๆ—ฉ', u'ๅฐฝๆ˜ฏ': u'ๅ„˜ๆ˜ฏ', u'ๅฐฝ็ฎก': u'ๅ„˜็ฎก', u'ๅฐฝ้€Ÿ': u'ๅ„˜้€Ÿ', u'ไผ˜ไบŽ': u'ๅ„ชๆ–ผ', u'ไผ˜ๆธธ': u'ๅ„ช้Š', u'ๅ…€ๆœฏ': u'ๅ…€ๆœฎ', u'ๅ…ƒๅ‡ถ': u'ๅ…ƒๅ…‡', u'ๅ……้ฅฅ': u'ๅ……้ฅ‘', u'ๅ…†ไธช': u'ๅ…†ๅ€‹', u'ๅ…†ไฝ™': u'ๅ…†้ค˜', u'ๅ‡ถๅˆ€': u'ๅ…‡ๅˆ€', u'ๅ‡ถๅ™จ': u'ๅ…‡ๅ™จ', u'ๅ‡ถๅซŒ': u'ๅ…‡ๅซŒ', u'ๅ‡ถๅทดๅทด': u'ๅ…‡ๅทดๅทด', u'ๅ‡ถๅพ’': u'ๅ…‡ๅพ’', u'ๅ‡ถๆ‚': u'ๅ…‡ๆ‚', u'ๅ‡ถๆถ': u'ๅ…‡ๆƒก', u'ๅ‡ถๆ‰‹': u'ๅ…‡ๆ‰‹', u'ๅ‡ถๆกˆ': u'ๅ…‡ๆกˆ', u'ๅ‡ถๆžช': u'ๅ…‡ๆง', u'ๅ‡ถๆจช': u'ๅ…‡ๆฉซ', u'ๅ‡ถๆฎ˜': u'ๅ…‡ๆฎ˜', u'ๅ‡ถๆฎ‹': u'ๅ…‡ๆฎ˜', u'ๅ‡ถๆฎบ': u'ๅ…‡ๆฎบ', u'ๅ‡ถๆ€': u'ๅ…‡ๆฎบ', u'ๅ‡ถ็Šฏ': u'ๅ…‡็Šฏ', u'ๅ‡ถ็‹ ': u'ๅ…‡็‹ ', u'ๅ‡ถ็Œ›': u'ๅ…‡็Œ›', u'ๅ‡ถ็–‘': u'ๅ…‡็–‘', u'ๅ‡ถ็›ธ': u'ๅ…‡็›ธ', u'ๅ‡ถ้™ฉ': u'ๅ…‡้šช', u'ๅ…ˆๅ ': u'ๅ…ˆไฝ”', u'ๅ…ˆ้‡‡': u'ๅ…ˆๆŽก', u'ๅ…‰่‡ด่‡ด': u'ๅ…‰็ทป็ทป', u'ๅ…‹่ฏ': u'ๅ…‹่—ฅ', u'ๅ…‹ๅค': u'ๅ…‹่ค‡', u'ๅ…ๅพ': u'ๅ…ๅพต', u'ๅ…šๅ‚': u'ๅ…šๅƒ', u'ๅ…šๅคชๅฐ‰': u'ๅ…šๅคชๅฐ‰', u'ๅ…šๆ€€่‹ฑ': u'ๅ…šๆ‡ท่‹ฑ', u'ๅ…š่ฟ›': u'ๅ…š้€ฒ', u'ๅ…š้ …': u'ๅ…š้ …', u'ๅ…š้กน': u'ๅ…š้ …', u'ๅ†…ๅˆถ': u'ๅ…ง่ฃฝ', u'ๅ†…้ขๅŒ…': u'ๅ…ง้ขๅŒ…', u'ๅ†…้ขๅŒ…็š„': u'ๅ…ง้ขๅŒ…็š„', u'ๅ†…ๆ–—': u'ๅ…ง้ฌฅ', u'ๅ†…ๅ“„': u'ๅ…ง้ฌจ', u'ๅ…จๅนฒ': u'ๅ…จไนพ', u'ๅ…จ้ขๅŒ…ๅ›ด': u'ๅ…จ้ขๅŒ…ๅœ', u'ๅ…จ้ขๅŒ…่ฃน': u'ๅ…จ้ขๅŒ…่ฃน', u'ไธคไธช': u'ๅ…ฉๅ€‹', u'ไธคๅคฉๅŽ': u'ๅ…ฉๅคฉๅพŒ', u'ไธคๅคฉๆ™’็ฝ‘': u'ๅ…ฉๅคฉๆ™’็ถฒ', u'ไธคๆ‰Ž': u'ๅ…ฉ็ดฎ', u'ไธค่™Žๅ…ฑๆ–—': u'ๅ…ฉ่™Žๅ…ฑ้ฌฅ', u'ไธคๅช': u'ๅ…ฉ้šป', u'ไธคไฝ™': u'ๅ…ฉ้ค˜', u'ไธค้ผ ๆ–—็ฉด': u'ๅ…ฉ้ผ ้ฌฅ็ฉด', u'ๅ…ซไธช': u'ๅ…ซๅ€‹', u'ๅ…ซๅ‡บๅˆŠ': u'ๅ…ซๅ‡บๅˆŠ', u'ๅ…ซๅ‡บๅฃ': u'ๅ…ซๅ‡บๅฃ', u'ๅ…ซๅ‡บ็‰ˆ': u'ๅ…ซๅ‡บ็‰ˆ', u'ๅ…ซๅ‡บ็”Ÿ': u'ๅ…ซๅ‡บ็”Ÿ', 
u'ๅ…ซๅ‡บ็ฅๅฑฑ': u'ๅ…ซๅ‡บ็ฅๅฑฑ', u'ๅ…ซๅ‡บ้€ƒ': u'ๅ…ซๅ‡บ้€ƒ', u'ๅ…ซๅคง่ƒกๅŒ': u'ๅ…ซๅคง่ƒกๅŒ', u'ๅ…ซๅคฉๅŽ': u'ๅ…ซๅคฉๅพŒ', u'ๅ…ซๅญ—่ƒก': u'ๅ…ซๅญ—้ฌ', u'ๅ…ซๆ‰Ž': u'ๅ…ซ็ดฎ', u'ๅ…ซ่œก': u'ๅ…ซ่œก', u'ๅ…ซๅช': u'ๅ…ซ้šป', u'ๅ…ซไฝ™': u'ๅ…ซ้ค˜', u'ๅ…ฌไป”้ข': u'ๅ…ฌไป”้บต', u'ๅ…ฌไป†': u'ๅ…ฌๅƒ•', u'ๅ…ฌๅญ™ไธ‘': u'ๅ…ฌๅญซไธ‘', u'ๅ…ฌๅนฒ': u'ๅ…ฌๅนน', u'ๅ…ฌๅކ': u'ๅ…ฌๆ›†', u'ๅ…ฌๅކๅฒ': u'ๅ…ฌๆญทๅฒ', u'ๅ…ฌๅŽ˜': u'ๅ…ฌ้‡', u'ๅ…ฌไฝ™': u'ๅ…ฌ้ค˜', u'ๅ…ญไธช': u'ๅ…ญๅ€‹', u'ๅ…ญๅ‡บๅˆŠ': u'ๅ…ญๅ‡บๅˆŠ', u'ๅ…ญๅ‡บๅฃ': u'ๅ…ญๅ‡บๅฃ', u'ๅ…ญๅ‡บ็‰ˆ': u'ๅ…ญๅ‡บ็‰ˆ', u'ๅ…ญๅ‡บ็”Ÿ': u'ๅ…ญๅ‡บ็”Ÿ', u'ๅ…ญๅ‡บ็ฅๅฑฑ': u'ๅ…ญๅ‡บ็ฅๅฑฑ', u'ๅ…ญๅ‡บ้€ƒ': u'ๅ…ญๅ‡บ้€ƒ', u'ๅ…ญๅˆ’': u'ๅ…ญๅŠƒ', u'ๅ…ญๅคฉๅŽ': u'ๅ…ญๅคฉๅพŒ', u'ๅ…ญ่ฐท': u'ๅ…ญ็ฉ€', u'ๅ…ญๆ‰Ž': u'ๅ…ญ็ดฎ', u'ๅ…ญๅ†ฒ': u'ๅ…ญ่ก', u'ๅ…ญๅช': u'ๅ…ญ้šป', u'ๅ…ญไฝ™': u'ๅ…ญ้ค˜', u'ๅ…ญๅ‡บ': u'ๅ…ญ้ฝฃ', u'ๅ…ฑๅ’Œๅކ': u'ๅ…ฑๅ’Œๆ›†', u'ๅ…ฑๅ’Œๅކๅฒ': u'ๅ…ฑๅ’Œๆญทๅฒ', u'ๅ…ถไธ€ๅช': u'ๅ…ถไธ€ๅช', u'ๅ…ถไบŒๅช': u'ๅ…ถไบŒๅช', u'ๅ…ถๅ…ซไนๅช': u'ๅ…ถๅ…ซไนๅช', u'ๅ…ถๆฌก่พŸๅœฐ': u'ๅ…ถๆฌก่พŸๅœฐ', u'ๅ…ถไฝ™': u'ๅ…ถ้ค˜', u'ๅ…ธ่Œƒ': u'ๅ…ธ็ฏ„', u'ๅ…ผๅนถ': u'ๅ…ผๅนถ', u'ๅ†‰ๆœ‰ไป†': u'ๅ†‰ๆœ‰ๅƒ•', u'ๅ†—ไฝ™': u'ๅ†—้ค˜', u'ๅ†คไป‡': u'ๅ†ค่ฎŽ', u'ๅ†ฅ่’™': u'ๅ†ฅๆฟ›', u'ๅ†ฌๅคฉ้‡Œ': u'ๅ†ฌๅคฉ่ฃก', u'ๅ†ฌๅฑฑๅบ„': u'ๅ†ฌๅฑฑๅบ„', u'ๅ†ฌๆ—ฅ้‡Œ': u'ๅ†ฌๆ—ฅ่ฃก', u'ๅ†ฌๆธธ': u'ๅ†ฌ้Š', u'ๅ†ถๆธธ': u'ๅ†ถ้Š', u'ๅ†ทๅบ„ๅญ': u'ๅ†ท่ŽŠๅญ', u'ๅ†ท้ข็›ธ': u'ๅ†ท้ข็›ธ', u'ๅ†ท้ข': u'ๅ†ท้บต', u'ๅ‡†ไธ‰ๅŽ': u'ๅ‡†ไธ‰ๅŽ', u'ๅ‡†ไธๅ‡†ไป–': u'ๅ‡†ไธๅ‡†ไป–', u'ๅ‡†ไธๅ‡†ไฝ ': u'ๅ‡†ไธๅ‡†ไฝ ', u'ๅ‡†ไธๅ‡†ๅฅน': u'ๅ‡†ไธๅ‡†ๅฅน', u'ๅ‡†ไธๅ‡†ๅฎƒ': u'ๅ‡†ไธๅ‡†ๅฎƒ', u'ๅ‡†ไธๅ‡†ๆˆ‘': u'ๅ‡†ไธๅ‡†ๆˆ‘', u'ๅ‡†ไธๅ‡†่ฎธ': u'ๅ‡†ไธๅ‡†่จฑ', u'ๅ‡†ไธๅ‡†่ฐ': u'ๅ‡†ไธๅ‡†่ชฐ', u'ๅ‡†ไฟๆŠค': u'ๅ‡†ไฟ่ญท', u'ๅ‡†ไฟ้‡Š': u'ๅ‡†ไฟ้‡‹', u'ๅ‡Œ่’™ๅˆ': u'ๅ‡Œๆฟ›ๅˆ', u'ๅ‡็‚ผ': u'ๅ‡้Š', u'ๅ‡ ไธŠ': u'ๅ‡ ไธŠ', u'ๅ‡ ๅ‡ ': u'ๅ‡ ๅ‡ ', u'ๅ‡ ๅ‡ณ': u'ๅ‡ ๅ‡ณ', u'ๅ‡ ๅญ': u'ๅ‡ ๅญ', u'ๅ‡ ๆ—': u'ๅ‡ ๆ—', u'ๅ‡ ๆ–': u'ๅ‡ ๆ–', u'ๅ‡ ๆกˆ': u'ๅ‡ ๆกˆ', u'ๅ‡ ๆค…': u'ๅ‡ ๆค…', u'ๅ‡ ๆฆป': u'ๅ‡ ๆฆป', u'ๅ‡ ๅ‡€็ช—ๆ˜Ž': u'ๅ‡ ๆทจ็ช—ๆ˜Ž', u'ๅ‡ ็ญต': u'ๅ‡ ็ญต', u'ๅ‡ ไธ': 
u'ๅ‡ ็ตฒ', u'ๅ‡ ้ขไธŠ': u'ๅ‡ ้ขไธŠ', u'ๅ‡ถๆ€ๆกˆ': u'ๅ‡ถๆฎบๆกˆ', u'ๅ‡ถ็›ธๆฏ•้œฒ': u'ๅ‡ถ็›ธ็•ข้œฒ', u'ๅ‡นๆดž้‡Œ': u'ๅ‡นๆดž่ฃก', u'ๅ‡บไน–ๅผ„ไธ‘': u'ๅ‡บไน–ๅผ„้†œ', u'ๅ‡บไน–้œฒไธ‘': u'ๅ‡บไน–้œฒ้†œ', u'ๅ‡บๅพๆ”ถ': u'ๅ‡บๅพๆ”ถ', u'ๅ‡บไบŽ': u'ๅ‡บๆ–ผ', u'ๅ‡บ่ฐ‹ๅˆ’็ญ–': u'ๅ‡บ่ฌ€ๅŠƒ็ญ–', u'ๅ‡บๆธธ': u'ๅ‡บ้Š', u'ๅ‡บไธ‘': u'ๅ‡บ้†œ', u'ๅ‡บ้”ค': u'ๅ‡บ้Žš', u'ๅˆ†ๅ ': u'ๅˆ†ไฝ”', u'ๅˆ†ๅˆซ่‡ด': u'ๅˆ†ๅˆซ่‡ด', u'ๅˆ†ๅŠ้’Ÿ': u'ๅˆ†ๅŠ้˜', u'ๅˆ†ๅคš้’Ÿ': u'ๅˆ†ๅคš้˜', u'ๅˆ†ๅญ้’Ÿ': u'ๅˆ†ๅญ้˜', u'ๅˆ†ๅธƒๅœ–': u'ๅˆ†ๅธƒๅœ–', u'ๅˆ†ๅธƒๅ›พ': u'ๅˆ†ๅธƒๅœ–', u'ๅˆ†ๅธƒไบŽ': u'ๅˆ†ๅธƒๆ–ผ', u'ๅˆ†ๆ•ฃไบŽ': u'ๅˆ†ๆ•ฃๆ–ผ', u'ๅˆ†้’Ÿ': u'ๅˆ†้˜', u'ๅˆ‘ไฝ™': u'ๅˆ‘้ค˜', u'ๅˆ’ไธ€ๆกจ': u'ๅˆ’ไธ€ๆงณ', u'ๅˆ’ไบ†ไธ€ไผš': u'ๅˆ’ไบ†ไธ€ๆœƒ', u'ๅˆ’ๆฅๅˆ’ๅŽป': u'ๅˆ’ไพ†ๅˆ’ๅŽป', u'ๅˆ’ๅˆฐๅฒธ': u'ๅˆ’ๅˆฐๅฒธ', u'ๅˆ’ๅˆฐๆฑŸๅฟƒ': u'ๅˆ’ๅˆฐๆฑŸๅฟƒ', u'ๅˆ’ๅพ—ๆฅ': u'ๅˆ’ๅพ—ไพ†', u'ๅˆ’็€': u'ๅˆ’่‘—', u'ๅˆ’็€่ตฐ': u'ๅˆ’่‘—่ตฐ', u'ๅˆ’้พ™่ˆŸ': u'ๅˆ’้พ่ˆŸ', u'ๅˆคๆ–ญๅ‘': u'ๅˆคๆ–ท็™ผ', u'ๅˆซๆ—ฅๅ—้ธฟๆ‰ๅŒ—ๅŽป': u'ๅˆฅๆ—ฅๅ—้ดป็บ”ๅŒ—ๅŽป', u'ๅˆซ่‡ด': u'ๅˆฅ็ทป', u'ๅˆซๅบ„': u'ๅˆฅ่ŽŠ', u'ๅˆซ็€': u'ๅˆฅ่‘—', u'ๅˆซ่พŸ': u'ๅˆฅ้—ข', u'ๅˆฉๆฌฒ': u'ๅˆฉๆ…พ', u'ๅˆฉไบŽ': u'ๅˆฉๆ–ผ', u'ๅˆฉๆฌฒ็†ๅฟƒ': u'ๅˆฉๆฌฒ็†ๅฟƒ', u'ๅˆฎๆฅๅˆฎๅŽป': u'ๅˆฎไพ†ๅˆฎๅŽป', u'ๅˆฎ็€': u'ๅˆฎ่‘—', u'ๅˆฎ่ตทๆฅ': u'ๅˆฎ่ตทไพ†', u'ๅˆฎ้ฃŽไธ‹้›ชๅ€’ไพฟๅฎœ': u'ๅˆฎ้ขจไธ‹้›ชๅ€’ไพฟๅฎœ', u'ๅˆฎ่ƒก': u'ๅˆฎ้ฌ', u'ๅˆถๅ†ทๆœบ': u'ๅˆถๅ†ทๆฉŸ', u'ๅˆถ็ญพ': u'ๅˆถ็ฑค', u'ๅˆถ้’Ÿ': u'ๅˆถ้˜', u'ๅˆบ็ปฃ': u'ๅˆบ็นก', u'ๅˆปๅˆ’': u'ๅˆปๅŠƒ', u'ๅˆปๅŠ้’Ÿ': u'ๅˆปๅŠ้˜', u'ๅˆปๅคš้’Ÿ': u'ๅˆปๅคš้˜', u'ๅˆป้’Ÿ': u'ๅˆป้˜', u'ๅ‰ƒๅ‘': u'ๅ‰ƒ้ซฎ', u'ๅ‰ƒ่ƒก': u'ๅ‰ƒ้ฌ', u'ๅ‰ƒ้กป': u'ๅ‰ƒ้ฌš', u'ๅ‰Šๅ‘': u'ๅ‰Š้ซฎ', u'ๅ‰Š้ข': u'ๅ‰Š้บต', u'ๅ…‹ๅˆถไธไบ†': u'ๅ‰‹ๅˆถไธไบ†', u'ๅ…‹ๅˆถไธไฝ': u'ๅ‰‹ๅˆถไธไฝ', u'ๅ…‹ๆ‰ฃ': u'ๅ‰‹ๆ‰ฃ', u'ๅ…‹ๆ˜Ÿ': u'ๅ‰‹ๆ˜Ÿ', u'ๅ…‹ๆœŸ': u'ๅ‰‹ๆœŸ', u'ๅ…‹ๆญป': u'ๅ‰‹ๆญป', u'ๅ…‹่–„': u'ๅ‰‹่–„', u'ๅ‰่จ€ไธ็ญ”ๅŽ่ฏญ': u'ๅ‰่จ€ไธ็ญ”ๅพŒ่ชž', u'ๅ‰้ขๅบ—': u'ๅ‰้ขๅบ—', u'ๅ‰”ๅบ„่ดง': u'ๅ‰”่ŽŠ่ฒจ', u'ๅˆšๅนฒ': u'ๅ‰›ไนพ', u'ๅˆš้›‡': u'ๅ‰›ๅƒฑ', u'ๅˆšๆ‰ไธ€่ฝฝ': u'ๅ‰›็บ”ไธ€่ผ‰', u'ๅ‰ฅๅˆถ': u'ๅ‰่ฃฝ', u'ๅ‰ฉไฝ™': 
u'ๅ‰ฉ้ค˜', u'ๅ‰ชๅ…ถๅ‘': u'ๅ‰ชๅ…ถ้ซฎ', u'ๅ‰ช็‰กไธนๅ–‚็‰›': u'ๅ‰ช็‰กไธนๅ–‚็‰›', u'ๅ‰ชๅฝฉ': u'ๅ‰ช็ถต', u'ๅ‰ชๅ‘': u'ๅ‰ช้ซฎ', u'ๅ‰ฒ่ˆ': u'ๅ‰ฒๆจ', u'ๅˆ›่Žท': u'ๅ‰ต็ฉซ', u'ๅˆ›ๅˆถ': u'ๅ‰ต่ฃฝ', u'้“ฒๅ‡บ': u'ๅ‰ทๅ‡บ', u'้“ฒๅˆˆ': u'ๅ‰ทๅˆˆ', u'้“ฒๅนณ': u'ๅ‰ทๅนณ', u'้“ฒ้™ค': u'ๅ‰ท้™ค', u'้“ฒๅคด': u'ๅ‰ท้ ญ', u'ๅˆ’ไธ€': u'ๅŠƒไธ€', u'ๅˆ’ไธŠ': u'ๅŠƒไธŠ', u'ๅˆ’ไธ‹': u'ๅŠƒไธ‹', u'ๅˆ’ไบ†': u'ๅŠƒไบ†', u'ๅˆ’ๅ…ฅ': u'ๅŠƒๅ…ฅ', u'ๅˆ’ๅ‡บ': u'ๅŠƒๅ‡บ', u'ๅˆ’ๅˆ†': u'ๅŠƒๅˆ†', u'ๅˆ’ๅˆฐ': u'ๅŠƒๅˆฐ', u'ๅˆ’ๅˆ’': u'ๅŠƒๅŠƒ', u'ๅˆ’ๅŽป': u'ๅŠƒๅŽป', u'ๅˆ’ๅœจ': u'ๅŠƒๅœจ', u'ๅˆ’ๅœฐ': u'ๅŠƒๅœฐ', u'ๅˆ’ๅฎš': u'ๅŠƒๅฎš', u'ๅˆ’ๅพ—': u'ๅŠƒๅพ—', u'ๅˆ’ๆˆ': u'ๅŠƒๆˆ', u'ๅˆ’ๆމ': u'ๅŠƒๆމ', u'ๅˆ’ๆ‹จ': u'ๅŠƒๆ’ฅ', u'ๅˆ’ๆ—ถไปฃ': u'ๅŠƒๆ™‚ไปฃ', u'ๅˆ’ๆฌพ': u'ๅŠƒๆฌพ', u'ๅˆ’ๅฝ’': u'ๅŠƒๆญธ', u'ๅˆ’ๆณ•': u'ๅŠƒๆณ•', u'ๅˆ’ๆธ…': u'ๅŠƒๆธ…', u'ๅˆ’ไธบ': u'ๅŠƒ็‚บ', u'ๅˆ’็•Œ': u'ๅŠƒ็•Œ', u'ๅˆ’็ ด': u'ๅŠƒ็ ด', u'ๅˆ’็บฟ': u'ๅŠƒ็ทš', u'ๅˆ’่ถณ': u'ๅŠƒ่ถณ', u'ๅˆ’่ฟ‡': u'ๅŠƒ้Ž', u'ๅˆ’ๅผ€': u'ๅŠƒ้–‹', u'ๅ‰ง่ฏ': u'ๅЇ่—ฅ', u'ๅˆ˜ๅ…‹ๅบ„': u'ๅЉๅ…‹่ŽŠ', u'ๅŠ›ๅ…‹ๅˆถ': u'ๅŠ›ๅ‰‹ๅˆถ', u'ๅŠ›ๆ‹ผ': u'ๅŠ›ๆ‹š', u'ๅŠ›ๆ‹ผไผ—ๆ•Œ': u'ๅŠ›ๆ‹ผ็œพๆ•ต', u'ๅŠ›ๆฑ‚ๅ…‹ๅˆถ': u'ๅŠ›ๆฑ‚ๅ‰‹ๅˆถ', u'ๅŠ›ไบ‰ไธŠๆธธ': u'ๅŠ›็ˆญไธŠ้Š', u'ๅŠŸ่‡ด': u'ๅŠŸ็ทป', u'ๅŠ ๆฐข็ฒพๅˆถ': u'ๅŠ ๆฐซ็ฒพๅˆถ', u'ๅŠ ่ฏ': u'ๅŠ ่—ฅ', u'ๅŠ ๆณจ': u'ๅŠ ่จป', u'ๅŠฃไบŽ': u'ๅŠฃๆ–ผ', u'ๅŠฉไบŽ': u'ๅŠฉๆ–ผ', u'ๅŠซไฝ™': u'ๅŠซ้ค˜', u'ๅ‹ƒ้ƒ': u'ๅ‹ƒ้ฌฑ', u'ๅŠจ่ก': u'ๅ‹•่•ฉ', u'่ƒœไบŽ': u'ๅ‹ๆ–ผ', u'ๅŠณๅŠ›ๅฃซ่กจ': u'ๅ‹žๅŠ›ๅฃซ้Œถ', u'ๅ‹คไป†': u'ๅ‹คๅƒ•', u'ๅ‹คๆœด': u'ๅ‹คๆจธ', u'ๅ‹‹็ซ ': u'ๅ‹ณ็ซ ', u'ๅ‹บ่ฏ': u'ๅ‹บ่—ฅ', u'ๅ‹พๅนฒ': u'ๅ‹พๅนน', u'ๅ‹พๅฟƒๆ–—่ง’': u'ๅ‹พๅฟƒ้ฌฅ่ง’', u'ๅ‹พ้ญ‚่ก้ญ„': u'ๅ‹พ้ญ‚่•ฉ้ญ„', u'ๅŒ…ๆ‹ฌ': u'ๅŒ…ๆ‹ฌ', u'ๅŒ…ๅ‡†': u'ๅŒ…ๆบ–', u'ๅŒ…่ฐท': u'ๅŒ…็ฉ€', u'ๅŒ…ๆ‰Ž': u'ๅŒ…็ดฎ', u'ๅŒ…ๅบ„': u'ๅŒ…่ŽŠ', u'ๅŒ็ณป': u'ๅŒ็นซ', u'ๅŒ—ๅฒณ': u'ๅŒ—ๅถฝ', u'ๅŒ—ๅ›ž็บฟ': u'ๅŒ—่ฟด็ทš', u'ๅŒ—ๅ›ž้“่ทฏ': u'ๅŒ—่ฟด้ต่ทฏ', u'ๅŒกๅค': u'ๅŒก่ค‡', u'ๅŒชๅนฒ': u'ๅŒชๅนน', u'ๅŒฟไบŽ': u'ๅŒฟๆ–ผ', u'ๅŒบๅˆ’': u'ๅ€ๅŠƒ', u'ๅไธช': u'ๅๅ€‹', u'ๅๅ‡บๅˆŠ': u'ๅๅ‡บๅˆŠ', u'ๅๅ‡บๅฃ': u'ๅๅ‡บๅฃ', u'ๅๅ‡บ็‰ˆ': u'ๅๅ‡บ็‰ˆ', 
u'ๅๅ‡บ็”Ÿ': u'ๅๅ‡บ็”Ÿ', u'ๅๅ‡บ็ฅๅฑฑ': u'ๅๅ‡บ็ฅๅฑฑ', u'ๅๅ‡บ้€ƒ': u'ๅๅ‡บ้€ƒ', u'ๅๅˆ’': u'ๅๅŠƒ', u'ๅๅคšๅช': u'ๅๅคš้šป', u'ๅๅคฉๅŽ': u'ๅๅคฉๅพŒ', u'ๅๆ‰Ž': u'ๅ็ดฎ', u'ๅๅช': u'ๅ้šป', u'ๅไฝ™': u'ๅ้ค˜', u'ๅๅ‡บ': u'ๅ้ฝฃ', u'ๅƒไธช': u'ๅƒๅ€‹', u'ๅƒๅชๅฏ': u'ๅƒๅชๅฏ', u'ๅƒๅชๅคŸ': u'ๅƒๅชๅค ', u'ๅƒๅชๆ€•': u'ๅƒๅชๆ€•', u'ๅƒๅช่ƒฝ': u'ๅƒๅช่ƒฝ', u'ๅƒๅช่ถณๅคŸ': u'ๅƒๅช่ถณๅค ', u'ๅƒๅคšๅช': u'ๅƒๅคš้šป', u'ๅƒๅคฉๅŽ': u'ๅƒๅคฉๅพŒ', u'ๅƒๆ‰Ž': u'ๅƒ็ดฎ', u'ๅƒไธไธ‡็ผ•': u'ๅƒ็ตฒ่ฌ็ธท', u'ๅƒๅ›ž็™พๆŠ˜': u'ๅƒ่ฟด็™พๆŠ˜', u'ๅƒๅ›ž็™พ่ฝฌ': u'ๅƒ่ฟด็™พ่ฝ‰', u'ๅƒ้’งไธ€ๅ‘': u'ๅƒ้ˆžไธ€้ซฎ', u'ๅƒๅช': u'ๅƒ้šป', u'ๅƒไฝ™': u'ๅƒ้ค˜', u'ๅ‡ๅฎ˜ๅ‘่ดข': u'ๅ‡ๅฎ˜็™ผ่ฒก', u'ๅŠๅˆถๅ“': u'ๅŠๅˆถๅ“', u'ๅŠๅชๅฏ': u'ๅŠๅชๅฏ', u'ๅŠๅชๅคŸ': u'ๅŠๅชๅค ', u'ๅŠไบŽ': u'ๅŠๆ–ผ', u'ๅŠๅช': u'ๅŠ้šป', u'ๅ—ไบฌ้’Ÿ': u'ๅ—ไบฌ้˜', u'ๅ—ไบฌ้’Ÿ่กจ': u'ๅ—ไบฌ้˜้Œถ', u'ๅ—ๅฎซ้€‚': u'ๅ—ๅฎฎ้€‚', u'ๅ—ๅฑๆ™š้’Ÿ': u'ๅ—ๅฑๆ™š้˜', u'ๅ—ๅฒณ': u'ๅ—ๅถฝ', u'ๅ—็ญ‘': u'ๅ—็ญ‘', u'ๅ—ๅ›ž็บฟ': u'ๅ—่ฟด็ทš', u'ๅ—ๅ›ž้“่ทฏ': u'ๅ—่ฟด้ต่ทฏ', u'ๅ—ๆธธ': u'ๅ—้Š', u'ๅšๆฑ‡': u'ๅšๅฝ™', u'ๅš้‡‡': u'ๅšๆŽก', u'ๅžๅบ„': u'ๅž่ŽŠ', u'ๅžๅบ„ๅญ': u'ๅž่ŽŠๅญ', u'ๅ ไบ†ๅœ': u'ๅ ไบ†ๅœ', u'ๅ ไพฟๅฎœ็š„ๆ˜ฏๅ‘†': u'ๅ ไพฟๅฎœ็š„ๆ˜ฏ็ƒ', u'ๅ ๅœ': u'ๅ ๅœ', u'ๅ ๅคšๆ•ฐ': u'ๅ ๅคšๆ•ธ', u'ๅ ๆœ‰ไบ”ไธ้ชŒ': u'ๅ ๆœ‰ไบ”ไธ้ฉ—', u'ๅ ๆœ‰ๆƒ': u'ๅ ๆœ‰ๆฌŠ', u'ๅฐ็ดฏ็ปถ่‹ฅ': u'ๅฐ็บ็ถฌ่‹ฅ', u'ๅฐๅˆถ': u'ๅฐ่ฃฝ', u'ๅฑไบŽ': u'ๅฑๆ–ผ', u'ๅตไธŽ็Ÿณๆ–—': u'ๅต่ˆ‡็Ÿณ้ฌฅ', u'ๅท้กป': u'ๅท้ฌš', u'ๅŽ‚้ƒจ': u'ๅŽ‚้ƒจ', u'ๅŽ่–ชไบŽ็ซ': u'ๅŽ่–ชๆ–ผ็ซ', u'ๅŽŸๅญ้’Ÿ': u'ๅŽŸๅญ้˜', u'ๅŽŸ้’Ÿ': u'ๅŽŸ้˜', u'ๅކ็‰ฉไน‹ๆ„': u'ๅŽค็‰ฉไน‹ๆ„', u'ๅ‚ๅˆ': u'ๅƒๅˆ', u'ๅ‚่€ƒไปทๅ€ผ': u'ๅƒ่€ƒๅƒนๅ€ผ', u'ๅ‚ไธŽ': u'ๅƒ่ˆ‡', u'ๅ‚ไธŽไบบๅ‘˜': u'ๅƒ่ˆ‡ไบบๅ“ก', u'ๅ‚ไธŽๅˆถ': u'ๅƒ่ˆ‡ๅˆถ', u'ๅ‚ไธŽๆ„Ÿ': u'ๅƒ่ˆ‡ๆ„Ÿ', u'ๅ‚ไธŽ่€…': u'ๅƒ่ˆ‡่€…', u'ๅ‚่ง‚ๅ›ข': u'ๅƒ่ง€ๅœ˜', u'ๅ‚่ง‚ๅ›ขไฝ“': u'ๅƒ่ง€ๅœ˜้ซ”', u'ๅ‚้˜…': u'ๅƒ้–ฑ', u'ๅๆœด': u'ๅๆจธ', u'ๅๅ†ฒ': u'ๅ่ก', u'ๅๅคๅˆถ': 
u'ๅ่ค‡่ฃฝ', u'ๅๅค': u'ๅ่ฆ†', u'ๅ่ฆ†': u'ๅ่ฆ†', u'ๅ–่ˆ': u'ๅ–ๆจ', u'ๅ—ๆ‰˜': u'ๅ—่จ—', u'ๅฃๅนฒ': u'ๅฃไนพ', u'ๅฃๅนฒๅ†’': u'ๅฃๅนฒๅ†’', u'ๅฃๅนฒๆ”ฟ': u'ๅฃๅนฒๆ”ฟ', u'ๅฃๅนฒๆถ‰': u'ๅฃๅนฒๆถ‰', u'ๅฃๅนฒ็Šฏ': u'ๅฃๅนฒ็Šฏ', u'ๅฃๅนฒ้ข„': u'ๅฃๅนฒ้ ', u'ๅฃ็‡ฅๅ”‡ๅนฒ': u'ๅฃ็‡ฅๅ”‡ไนพ', u'ๅฃ่…นไน‹ๆฌฒ': u'ๅฃ่…นไน‹ๆ…พ', u'ๅฃ้‡Œ': u'ๅฃ่ฃก', u'ๅฃ้’Ÿ': u'ๅฃ้˜', u'ๅคไนฆไบ‘': u'ๅคๆ›ธไบ‘', u'ๅคๆ›ธไบ‘': u'ๅคๆ›ธไบ‘', u'ๅคๆŸฏๅ’ธ': u'ๅคๆŸฏ้นน', u'ๅคๆœด': u'ๅคๆจธ', u'ๅค่ฏญไบ‘': u'ๅค่ชžไบ‘', u'ๅค่ชžไบ‘': u'ๅค่ชžไบ‘', u'ๅค่ฟน': u'ๅค่ฟน', u'ๅค้’Ÿ': u'ๅค้˜', u'ๅค้’Ÿ่กจ': u'ๅค้˜้Œถ', u'ๅฆ่พŸ': u'ๅฆ้—ข', u'ๅฉ้’Ÿ': u'ๅฉ้˜', u'ๅชๅ ': u'ๅชไฝ”', u'ๅชๅ ๅœ': u'ๅชๅ ๅœ', u'ๅชๅ ๅ‰': u'ๅชๅ ๅ‰', u'ๅชๅ ็ฅž้—ฎๅœ': u'ๅชๅ ็ฅžๅ•ๅœ', u'ๅชๅ ็ฎ—': u'ๅชๅ ็ฎ—', u'ๅช้‡‡': u'ๅชๆŽก', u'ๅชๅ†ฒ': u'ๅช่ก', u'ๅช่บซไธŠๅทฒ': u'ๅช่บซไธŠๅทฒ', u'ๅช่บซไธŠๆœ‰': u'ๅช่บซไธŠๆœ‰', u'ๅช่บซไธŠๆฒก': u'ๅช่บซไธŠๆฒ’', u'ๅช่บซไธŠๆ— ': u'ๅช่บซไธŠ็„ก', u'ๅช่บซไธŠ็š„': u'ๅช่บซไธŠ็š„', u'ๅช่บซไธ–': u'ๅช่บซไธ–', u'ๅช่บซไปฝ': u'ๅช่บซไปฝ', u'ๅช่บซๅ‰': u'ๅช่บซๅ‰', u'ๅช่บซๅ—': u'ๅช่บซๅ—', u'ๅช่บซๅญ': u'ๅช่บซๅญ', u'ๅช่บซๅฝข': u'ๅช่บซๅฝข', u'ๅช่บซๅฝฑ': u'ๅช่บซๅฝฑ', u'ๅช่บซๅŽ': u'ๅช่บซๅพŒ', u'ๅช่บซๅฟƒ': u'ๅช่บซๅฟƒ', u'ๅช่บซๆ—': u'ๅช่บซๆ—', u'ๅช่บซๆ': u'ๅช่บซๆ', u'ๅช่บซๆฎต': u'ๅช่บซๆฎต', u'ๅช่บซไธบ': u'ๅช่บซ็‚บ', u'ๅช่บซ่พน': u'ๅช่บซ้‚Š', u'ๅช่บซ้ฆ–': u'ๅช่บซ้ฆ–', u'ๅช่บซไฝ“': u'ๅช่บซ้ซ”', u'ๅช่บซ้ซ˜': u'ๅช่บซ้ซ˜', u'ๅช้‡‡ๅฃฐ': u'ๅช้‡‡่ฒ', u'ๅฎๅฎๅฝ“ๅฝ“': u'ๅฎๅฎๅ™นๅ™น', u'ๅฎๅฝ“': u'ๅฎๅ™น', u'ๅฏไปฅๅ…‹ๅˆถ': u'ๅฏไปฅๅ‰‹ๅˆถ', u'ๅฏ็ดงๅฏๆพ': u'ๅฏ็ทŠๅฏ้ฌ†', u'ๅฏ่‡ชๅˆถ': u'ๅฏ่‡ชๅˆถ', u'ๅฐๅญๅฅณ': u'ๅฐๅญๅฅณ', u'ๅฐๅญๅญ™': u'ๅฐๅญๅญซ', u'ๅฐๅธƒๆ™ฏ': u'ๅฐๅธƒๆ™ฏ', u'ๅฐๅކๅฒ': u'ๅฐๆญทๅฒ', u'ๅฐ้’Ÿ': u'ๅฐ้˜', u'ๅฐ้ขๅ‰': u'ๅฐ้ขๅ‰', u'ๅฑๅ’ค903': u'ๅฑๅ’ค903', u'ๅฑๅ’คMY903': u'ๅฑๅ’คMY903', u'ๅฑๅ’คMy903': u'ๅฑๅ’คMy903', u'ๅฑๅ’คๅฑๅฑๅ’ค': u'ๅฑๅ’คๅฑๅฑๅ’ค', u'ๅฑๅ’คๅฑๅ’คๅฑๅ’คๅ’ค': 
u'ๅฑๅ’คๅฑๅ’คๅฑๅ’คๅ’ค', u'ๅฑๅ’คๅ’ค': u'ๅฑๅ’คๅ’ค', u'ๅฑๅ’คไนๅ›': u'ๅฑๅ’คๆจ‚ๅฃ‡', u'ๅฑๅ’คๆจ‚ๅฃ‡': u'ๅฑๅ’คๆจ‚ๅฃ‡', u'ๅถ ๆญๅผ˜': u'ๅถ ๆญๅผ˜', u'ๅถใ€€ๆญๅผ˜': u'ๅถใ€€ๆญๅผ˜', u'ๅถๆญๅผ˜': u'ๅถๆญๅผ˜', u'ๅถ้Ÿณ': u'ๅถ้Ÿณ', u'ๅถ้Ÿต': u'ๅถ้Ÿป', u'ๅƒๆฟๅˆ€้ข': u'ๅƒๆฟๅˆ€้บต', u'ๅƒ็€ไธๅฐฝ': u'ๅƒ่‘—ไธ็›ก', u'ๅƒๅงœ': u'ๅƒ่–‘', u'ๅƒ่ฏ': u'ๅƒ่—ฅ', u'ๅƒ้‡Œๆ‰’ๅค–': u'ๅƒ่ฃกๆ‰’ๅค–', u'ๅƒ้‡Œ็ˆฌๅค–': u'ๅƒ่ฃก็ˆฌๅค–', u'ๅƒ่พฃ้ข': u'ๅƒ่พฃ้บต', u'ๅƒ้”™่ฏ': u'ๅƒ้Œฏ่—ฅ', u'ๅ„่พŸ': u'ๅ„้—ข', u'ๅ„็ฑป้’Ÿ': u'ๅ„้กž้˜', u'ๅˆไผ™ไบบ': u'ๅˆไผ™ไบบ', u'ๅˆๅนถ': u'ๅˆไฝต', u'ๅˆไผ™': u'ๅˆๅคฅ', u'ๅˆๅบœไธŠ': u'ๅˆๅบœไธŠ', u'ๅˆ้‡‡': u'ๅˆๆŽก', u'ๅˆๅކ': u'ๅˆๆ›†', u'ๅˆๅކๅฒ': u'ๅˆๆญทๅฒ', u'ๅˆๅ‡†': u'ๅˆๆบ–', u'ๅˆ็€': u'ๅˆ่‘—', u'ๅˆ่‘—่€…': u'ๅˆ่‘—่€…', u'ๅ‰ๅ‡ถๅบ†ๅŠ': u'ๅ‰ๅ‡ถๆ…ถๅผ”', u'ๅŠๅธฆ่ฃค': u'ๅŠๅธถ่คฒ', u'ๅŠๆŒ‚็€': u'ๅŠๆŽ›่‘—', u'ๅŠๆ†': u'ๅŠๆ†', u'ๅŠ็€': u'ๅŠ่‘—', u'ๅŠ่ฃค': u'ๅŠ่คฒ', u'ๅŠ่ฃคๅธฆ': u'ๅŠ่คฒๅธถ', u'ๅŠ้’Ÿ': u'ๅŠ้˜', u'ๅŒไผ™': u'ๅŒๅคฅ', u'ๅŒไบŽ': u'ๅŒๆ–ผ', u'ๅŒไฝ™': u'ๅŒ้ค˜', u'ๅŽๅ† ': u'ๅŽๅ† ', u'ๅŽๅŒ—่ก—': u'ๅŽๅŒ—่ก—', u'ๅŽๅœŸ': u'ๅŽๅœŸ', u'ๅŽๅฆƒ': u'ๅŽๅฆƒ', u'ๅŽๅฎ‰่ทฏ': u'ๅŽๅฎ‰่ทฏ', u'ๅŽๅนณ่ทฏ': u'ๅŽๅนณ่ทฏ', u'ๅŽๅบง': u'ๅŽๅบง', u'ๅŽๆตทๆนพ': u'ๅŽๆตท็ฃ', u'ๅŽๆตท็ฃ': u'ๅŽๆตท็ฃ', u'ๅŽ็จท': u'ๅŽ็จท', u'ๅŽ็พฟ': u'ๅŽ็พฟ', u'ๅŽ่ก—': u'ๅŽ่ก—', u'ๅŽ่ง’': u'ๅŽ่ง’', u'ๅŽไธฐ': u'ๅŽ่ฑ', u'ๅŽ่ฑ': u'ๅŽ่ฑ', u'ๅŽ้‡Œ': u'ๅŽ้‡Œ', u'ๅŽ้ซฎๅบง': u'ๅŽ้ซฎๅบง', u'ๅŽๅ‘ๅบง': u'ๅŽ้ซฎๅบง', u'ๅๅ“บๆ‰ๅ‘': u'ๅๅ“บๆ‰้ซฎ', u'ๅๅ“บๆกๅ‘': u'ๅๅ“บๆก้ซฎ', u'ๅ‘ๅพ€ๆฅ': u'ๅ‘ๅพ€ไพ†', u'ๅ‘ๅพ€ๅธธ': u'ๅ‘ๅพ€ๅธธ', u'ๅ‘ๅพ€ๆ—ฅ': u'ๅ‘ๅพ€ๆ—ฅ', u'ๅ‘ๅพ€ๆ—ถ': u'ๅ‘ๅพ€ๆ™‚', u'ๅ‘็€': u'ๅ‘่‘—', u'ๅžๅนถ': u'ๅžไฝต', u'ๅŸๆธธ': u'ๅŸ้Š', u'ๅซ้ฝฟๆˆดๅ‘': u'ๅซ้ฝ’ๆˆด้ซฎ', u'ๅนๅนฒ': u'ๅนไนพ', u'ๅนๅ‘': u'ๅน้ซฎ', u'ๅน่ƒก': u'ๅน้ฌ', u'ๅพไธบไน‹่Œƒๆˆ‘้ฉฐ้ฉฑ': u'ๅพ็ˆฒไน‹็ฏ„ๆˆ‘้ฆณ้ฉ…', u'ๅ•ๅŽ': u'ๅ‘‚ๅŽ', u'ๅ‘‚ๅŽ': u'ๅ‘‚ๅŽ', u'ๅ‘†ๅ‘†ๅ‚ปๅ‚ป': u'ๅ‘†ๅ‘†ๅ‚ปๅ‚ป', u'ๅ‘†ๅ‘†ๆŒฃๆŒฃ': 
u'ๅ‘†ๅ‘†ๆŽ™ๆŽ™', u'ๅ‘†ๅ‘†ๅ…ฝ': u'ๅ‘†ๅ‘†็ธ', u'ๅ‘†ๅ‘†็ฌจ็ฌจ': u'ๅ‘†ๅ‘†็ฌจ็ฌจ', u'ๅ‘†่‡ด่‡ด': u'ๅ‘†็ทป็ทป', u'ๅ‘†้‡Œๅ‘†ๆฐ”': u'ๅ‘†่ฃกๅ‘†ๆฐฃ', u'ๅ‘จไธ€': u'ๅ‘จไธ€', u'ๅ‘จไธ‰': u'ๅ‘จไธ‰', u'ๅ‘จไบŒ': u'ๅ‘จไบŒ', u'ๅ‘จไบ”': u'ๅ‘จไบ”', u'ๅ‘จๅ…ญ': u'ๅ‘จๅ…ญ', u'ๅ‘จๅ››': u'ๅ‘จๅ››', u'ๅ‘จๅކ': u'ๅ‘จๆ›†', u'ๅ‘จๆฐไผฆ': u'ๅ‘จๆฐๅ€ซ', u'ๅ‘จๆฐๅ€ซ': u'ๅ‘จๆฐๅ€ซ', u'ๅ‘จๅކๅฒ': u'ๅ‘จๆญทๅฒ', u'ๅ‘จๅบ„็Ž‹': u'ๅ‘จ่ŽŠ็Ž‹', u'ๅ‘จๆธธ': u'ๅ‘จ้Š', u'ๅ‘ผๅ': u'ๅ‘ผ็ฑฒ', u'ๅ‘ฝไธญๆณจๅฎš': u'ๅ‘ฝไธญๆณจๅฎš', u'ๅ’Œๅ…‹ๅˆถ': u'ๅ’Œๅ‰‹ๅˆถ', u'ๅ’Œๅฅธ': u'ๅ’Œๅงฆ', u'ๅ’Žๅพ': u'ๅ’Žๅพต', u'ๅ’•ๅ’•้’Ÿ': u'ๅ’•ๅ’•้˜', u'ๅ’ฌๅงœๅ‘ท้†‹': u'ๅ’ฌ่–‘ๅ‘ท้†‹', u'ๅ’ฏๅฝ“': u'ๅ’ฏๅ™น', u'ๅ’ณๅ—ฝ่ฏ': u'ๅ’ณๅ—ฝ่—ฅ', u'ๅ“€ๅŠ': u'ๅ“€ๅผ”', u'ๅ“€ๆŒฝ': u'ๅ“€่ผ“', u'ๅ“ๆฑ‡': u'ๅ“ๅฝ™', u'ๅ“„ๅ ‚ๅคง็ฌ‘': u'ๅ“„ๅ ‚ๅคง็ฌ‘', u'ๅ‘˜ๅฑฑๅบ„': u'ๅ“กๅฑฑๅบ„', u'ๅ“ช้‡Œ': u'ๅ“ช่ฃก', u'ๅ“ญ่„': u'ๅ“ญ้ซ’', u'ๅ”ๅŠ': u'ๅ”ๅผ”', u'ๅ‘—่ตž': u'ๅ”„่ฎš', u'ๅ”‡ๅนฒ': u'ๅ”‡ไนพ', u'ๅ”ฏไธ€ๅช': u'ๅ”ฏไธ€ๅช', u'ๅ”ฑๆธธ': u'ๅ”ฑ้Š', u'ๅ”พ้ข่‡ชๅนฒ': u'ๅ”พ้ข่‡ชไนพ', u'ๅ”พไฝ™': u'ๅ”พ้ค˜', u'ๅ•†ๅކ': u'ๅ•†ๆ›†', u'ๅ•†ๅކๅฒ': u'ๅ•†ๆญทๅฒ', u'ๅ•ทๅฝ“': u'ๅ•ทๅ™น', u'ๅ–‚ไบ†ไธ€ๅฃฐ': u'ๅ–‚ไบ†ไธ€่ฒ', u'ๅ–„ไบŽ': u'ๅ–„ๆ–ผ', u'ๅ–œๅ‘ๅพ€': u'ๅ–œๅ‘ๅพ€', u'ๅ–œๆฌข่กจ': u'ๅ–œๆญก้Œถ', u'ๅ–œๆฌข้’Ÿ': u'ๅ–œๆญก้˜', u'ๅ–œๆฌข้’Ÿ่กจ': u'ๅ–œๆญก้˜้Œถ', u'ๅ–ๅนฒ': u'ๅ–ไนพ', u'ๅ–งๅ“„': u'ๅ–ง้ฌจ', u'ไธง้’Ÿ': u'ๅ–ช้˜', u'ไน”ๅฒณ': u'ๅ–ฌๅถฝ', u'ๅ•ไบŽ': u'ๅ–ฎไบŽ', u'ๅ•ๅ•ไบŽ': u'ๅ–ฎๅ–ฎๆ–ผ', u'ๅ•ๅนฒ': u'ๅ–ฎๅนน', u'ๅ•ๆ‰“็‹ฌๆ–—': u'ๅ–ฎๆ‰“็จ้ฌฅ', u'ๅ•ๅช': u'ๅ–ฎ้šป', u'ๅ—‘่ฏ': u'ๅ—‘่—ฅ', u'ๅ˜€ๅ—’็š„่กจ': u'ๅ˜€ๅ—’็š„้Œถ', u'ๅ˜‰่ฐท': u'ๅ˜‰็ฉ€', u'ๅ˜‰่‚ด': u'ๅ˜‰่‚ด', u'ๅ˜ด้‡Œ': u'ๅ˜ด่ฃก', u'ๆถๅฟƒ': u'ๅ™ๅฟƒ', u'ๅ™™้ฝฟๆˆดๅ‘': u'ๅ™™้ฝ’ๆˆด้ซฎ', u'ๅ–ทๆด’': u'ๅ™ดๆด’', u'ๅฝ“ๅ•ท': u'ๅ™นๅ•ท', u'ๅฝ“ๅฝ“': u'ๅ™นๅ™น', u'ๅ™œ่‹': u'ๅš•ๅ›Œ', u'ๅ‘ๅฏผ': u'ๅšฎๅฐŽ', u'ๅ‘ๅพ€': u'ๅšฎๅพ€', u'ๅ‘ๅบ”': u'ๅšฎๆ‡‰', u'ๅ‘่ฟฉ': u'ๅšฎ้‚‡', u'ไธฅไบŽ': u'ๅšดๆ–ผ', u'ไธฅไธๅˆ็ผ': u'ๅšด็ตฒๅˆ็ธซ', u'ๅšผ่ฐท': u'ๅšผ็ฉ€', u'ๅ›‰ๅ›‰่‹่‹': u'ๅ›‰ๅ›‰ๅ›Œๅ›Œ', u'ๅ›‰่‹': u'ๅ›‰ๅ›Œ', u'ๅ˜ฑๆ‰˜': u'ๅ›‘่จ—', u'ๅ››ไธช': 
u'ๅ››ๅ€‹', u'ๅ››ๅ‡บๅˆŠ': u'ๅ››ๅ‡บๅˆŠ', u'ๅ››ๅ‡บๅฃ': u'ๅ››ๅ‡บๅฃ', u'ๅ››ๅ‡บๅพๆ”ถ': u'ๅ››ๅ‡บๅพตๆ”ถ', u'ๅ››ๅ‡บ็‰ˆ': u'ๅ››ๅ‡บ็‰ˆ', u'ๅ››ๅ‡บ็”Ÿ': u'ๅ››ๅ‡บ็”Ÿ', u'ๅ››ๅ‡บ็ฅๅฑฑ': u'ๅ››ๅ‡บ็ฅๅฑฑ', u'ๅ››ๅ‡บ้€ƒ': u'ๅ››ๅ‡บ้€ƒ', u'ๅ››ๅˆ†ๅކ': u'ๅ››ๅˆ†ๆ›†', u'ๅ››ๅˆ†ๅކๅฒ': u'ๅ››ๅˆ†ๆญทๅฒ', u'ๅ››ๅคฉๅŽ': u'ๅ››ๅคฉๅพŒ', u'ๅ››่ˆไบ”ๅ…ฅ': u'ๅ››ๆจไบ”ๅ…ฅ', u'ๅ››่ˆๅ…ญๅ…ฅ': u'ๅ››ๆจๅ…ญๅ…ฅ', u'ๅ››ๆ‰Ž': u'ๅ››็ดฎ', u'ๅ››ๅช': u'ๅ››้šป', u'ๅ››้ขๅŒ…': u'ๅ››้ขๅŒ…', u'ๅ››้ข้’Ÿ': u'ๅ››้ข้˜', u'ๅ››ไฝ™': u'ๅ››้ค˜', u'ๅ››ๅ‡บ': u'ๅ››้ฝฃ', u'ๅ›ž้‡‡': u'ๅ›žๆŽก', u'ๅ›žๆ—‹ๅŠ ้€Ÿ': u'ๅ›žๆ—‹ๅŠ ้€Ÿ', u'ๅ›žๅކ': u'ๅ›žๆ›†', u'ๅ›žๅކๅฒ': u'ๅ›žๆญทๅฒ', u'ๅ›žไธ': u'ๅ›ž็ตฒ', u'ๅ›ž็€': u'ๅ›ž่‘—', u'ๅ›ž่ก': u'ๅ›ž่•ฉ', u'ๅ›žๆธธ': u'ๅ›ž้Š', u'ๅ›ž้˜ณ่กๆฐ”': u'ๅ›ž้™ฝ่•ฉๆฐฃ', u'ๅ› ไบŽ': u'ๅ› ๆ–ผ', u'ๅ›ฐๅ€ฆ่ตทๆฅ': u'ๅ›ฐๅ€ฆ่ตทไพ†', u'ๅ›ฐๅ…ฝไน‹ๆ–—': u'ๅ›ฐ็ธไน‹้ฌฅ', u'ๅ›ฐๅ…ฝ็Šนๆ–—': u'ๅ›ฐ็ธ็Œถ้ฌฅ', u'ๅ›ฐๆ–—': u'ๅ›ฐ้ฌฅ', u'ๅ›บๅพ': u'ๅ›บๅพต', u'ๅ›ฟไบŽ': u'ๅ›ฟๆ–ผ', u'ๅœˆๅ ': u'ๅœˆไฝ”', u'ๅœˆๅญ้‡Œ': u'ๅœˆๅญ่ฃก', u'ๅœˆๆข': u'ๅœˆๆจ‘', u'ๅœˆ้‡Œ': u'ๅœˆ่ฃก', u'ๅ›ฝไน‹ๆกขๅนฒ': u'ๅœ‹ไน‹ๆฅจๆฆฆ', u'ๅ›ฝไบŽ': u'ๅœ‹ๆ–ผ', u'ๅ›ฝๅކ': u'ๅœ‹ๆ›†', u'ๅ›ฝๅކไปฃ': u'ๅœ‹ๆญทไปฃ', u'ๅ›ฝๅކไปป': u'ๅœ‹ๆญทไปป', u'ๅ›ฝๅކๅฒ': u'ๅœ‹ๆญทๅฒ', u'ๅ›ฝๅކๅฑŠ': u'ๅœ‹ๆญทๅฑ†', u'ๅ›ฝไป‡': u'ๅœ‹่ฎŽ', u'ๅ›ญ้‡Œ': u'ๅœ’่ฃก', u'ๅ›ญๆธธไผš': u'ๅœ’้Šๆœƒ', u'ๅ›พ้‡Œ': u'ๅœ–่ฃก', u'ๅ›พ้‰ด': u'ๅœ–้‘‘', u'ๅœŸ้‡Œ': u'ๅœŸ่ฃก', u'ๅœŸๅˆถ': u'ๅœŸ่ฃฝ', u'ๅœŸ้œ‰็ด ': u'ๅœŸ้œ‰็ด ', u'ๅœจๅˆถๅ“': u'ๅœจๅˆถๅ“', u'ๅœจๅ…‹ๅˆถ': u'ๅœจๅ‰‹ๅˆถ', u'ๅœจไบŽ': u'ๅœจๆ–ผ', u'ๅœฐๅ ': u'ๅœฐไฝ”', u'ๅœฐๅ…‹ๅˆถ': u'ๅœฐๅ‰‹ๅˆถ', u'ๅœฐๆ–นๅฟ—': u'ๅœฐๆ–นๅฟ—', u'ๅœฐๅฟ—': u'ๅœฐ่ชŒ', u'ๅœฐไธ‘ๅพท้ฝ': u'ๅœฐ้†œๅพท้ฝŠ', u'ๅไบŽ': u'ๅๆ–ผ', u'ๅๅฆ‚้’Ÿ': u'ๅๅฆ‚้˜', u'ๅๅบ„': u'ๅ่ŽŠ', u'ๅ้’Ÿ': u'ๅ้˜', u'ๅ‘้‡Œ': u'ๅ‘่ฃก', u'ๅค่Œƒ': u'ๅค็ฏ„', u'ๅฆ่ก': u'ๅฆ่•ฉ', u'ๅฆ่ก่ก': u'ๅฆ่•ฉ่•ฉ', u'ๅฑ้ƒ': u'ๅฑ้ฌฑ', u'ๅž‚ไบŽ': u'ๅž‚ๆ–ผ', u'ๅž‚่Œƒ': u'ๅž‚็ฏ„', u'ๅž‚ๅ‘': u'ๅž‚้ซฎ', u'ๅž‹่Œƒ': u'ๅž‹็ฏ„', u'ๅŸƒๅŠๅކ': u'ๅŸƒๅŠๆ›†', u'ๅŸƒๅŠๅކๅฒ': u'ๅŸƒๅŠๆญทๅฒ', u'ๅŸƒๅŠ่‰ณๅŽ': 
u'ๅŸƒๅŠ่ฑ”ๅŽ', u'ๅŸƒ่ฃๅ†ฒ': u'ๅŸƒๆฆฎ่ก', u'ๅŸ‹ๅคดๅฏป่กจ': u'ๅŸ‹้ ญๅฐ‹้Œถ', u'ๅŸ‹ๅคดๅฏป้’Ÿ': u'ๅŸ‹้ ญๅฐ‹้˜', u'ๅŸ‹ๅคดๅฏป้’Ÿ่กจ': u'ๅŸ‹้ ญๅฐ‹้˜้Œถ', u'ๅŸŽ้‡Œ': u'ๅŸŽ่ฃก', u'ๅŸ”่ฃก็คพๆ’ซๅขพๅฑ€': u'ๅŸ”่ฃ็คพๆ’ซๅขพๅฑ€', u'ๅŸ”้‡Œ็คพๆŠšๅžฆๅฑ€': u'ๅŸ”่ฃ็คพๆ’ซๅขพๅฑ€', u'ๅŸ”่ฃ็คพๆ’ซๅขพๅฑ€': u'ๅŸ”่ฃ็คพๆ’ซๅขพๅฑ€', u'ๅŸบๅนฒ': u'ๅŸบๅนน', u'ๅŸบไบŽ': u'ๅŸบๆ–ผ', u'ๅŸบๅ‡†': u'ๅŸบๆบ–', u'ๅš่‡ด': u'ๅ …็ทป', u'ๅ ™ๆท€': u'ๅ ™ๆพฑ', u'ๆถ‚็€': u'ๅก—่‘—', u'ๆถ‚่ฏ': u'ๅก—่—ฅ', u'ๅกž่€ณ็›—้’Ÿ': u'ๅกž่€ณ็›œ้˜', u'ๅกž่ฏ': u'ๅกž่—ฅ', u'ๅข“ๅฟ—้“ญ': u'ๅข“ๅฟ—้Š˜', u'ๅข“ๅฟ—': u'ๅข“่ชŒ', u'ๅขž่พŸ': u'ๅขž้—ข', u'ๅขจๆฒˆ': u'ๅขจๆฒˆ', u'ๅขจๆฒˆๆœชๅนฒ': u'ๅขจ็€‹ๆœชไนพ', u'ๅ •่ƒŽ่ฏ': u'ๅขฎ่ƒŽ่—ฅ', u'ๅžฆๅค': u'ๅขพ่ค‡', u'ๅžฆ่พŸ': u'ๅขพ้—ข', u'ๅž„ๆ–ญไปทๆ ผ': u'ๅฃŸๆ–ทๅƒนๆ ผ', u'ๅž„ๆ–ญ่ต„ไบง': u'ๅฃŸๆ–ท่ณ‡็”ข', u'ๅž„ๆ–ญ้›†ๅ›ข': u'ๅฃŸๆ–ท้›†ๅœ˜', u'ๅฃฎๆธธ': u'ๅฃฏ้Š', u'ๅฃฎ้ข': u'ๅฃฏ้บต', u'ๅฃน้ƒ': u'ๅฃน้ฌฑ', u'ๅฃถ้‡Œ': u'ๅฃบ่ฃก', u'ๅฃธ่Œƒ': u'ๅฃผ็ฏ„', u'ๅฏฟ้ข': u'ๅฃฝ้บต', u'ๅคไบŽไน”': u'ๅคไบŽๅ–ฌ', u'ๅคไบŽๅ–ฌ': u'ๅคไบŽๅ–ฌ', u'ๅคๅคฉ้‡Œ': u'ๅคๅคฉ่ฃก', u'ๅคๆ—ฅ้‡Œ': u'ๅคๆ—ฅ่ฃก', u'ๅคๅކ': u'ๅคๆ›†', u'ๅคๅކๅฒ': u'ๅคๆญทๅฒ', u'ๅคๆธธ': u'ๅค้Š', u'ๅค–ๅผบไธญๅนฒ': u'ๅค–ๅผทไธญไนพ', u'ๅค–ๅˆถ': u'ๅค–่ฃฝ', u'ๅคšๅ ': u'ๅคšไฝ”', u'ๅคšๅˆ’': u'ๅคšๅŠƒ', u'ๅคšๅŠๅช': u'ๅคšๅŠๅช', u'ๅคšๅชๅฏ': u'ๅคšๅชๅฏ', u'ๅคšๅชๅœจ': u'ๅคšๅชๅœจ', u'ๅคšๅชๆ˜ฏ': u'ๅคšๅชๆ˜ฏ', u'ๅคšๅชไผš': u'ๅคšๅชๆœƒ', u'ๅคšๅชๆœ‰': u'ๅคšๅชๆœ‰', u'ๅคšๅช่ƒฝ': u'ๅคšๅช่ƒฝ', u'ๅคšๅช้œ€': u'ๅคšๅช้œ€', u'ๅคšๅคฉๅŽ': u'ๅคšๅคฉๅพŒ', u'ๅคšไบŽ': u'ๅคšๆ–ผ', u'ๅคšๅ†ฒ': u'ๅคš่ก', u'ๅคšไธ‘': u'ๅคš้†œ', u'ๅคšๅช': u'ๅคš้šป', u'ๅคšไฝ™': u'ๅคš้ค˜', u'ๅคšไนˆ': u'ๅคš้บผ', u'ๅคœๅ…‰่กจ': u'ๅคœๅ…‰้Œถ', u'ๅคœ้‡Œ': u'ๅคœ่ฃก', u'ๅคœๆธธ': u'ๅคœ้Š', u'ๅคŸๅ…‹ๅˆถ': u'ๅค ๅ‰‹ๅˆถ', u'ๆขฆๆœ‰ไบ”ไธๅ ': u'ๅคขๆœ‰ไบ”ไธๅ ', u'ๆขฆ้‡Œ': u'ๅคข่ฃก', u'ๆขฆๆธธ': u'ๅคข้Š', u'ไผ™ไผด': u'ๅคฅไผด', u'ไผ™ๅ‹': u'ๅคฅๅ‹', u'ไผ™ๅŒ': u'ๅคฅๅŒ', u'ไผ™ไผ—': u'ๅคฅ็œพ', u'ไผ™่ฎก': u'ๅคฅ่จˆ', u'ๅคงไธ‘': u'ๅคงไธ‘', u'ๅคงไผ™ๅ„ฟ': u'ๅคงไผ™ๅ…’', u'ๅคงๅชๅฏ': u'ๅคงๅชๅฏ', 
u'ๅคงๅชๅœจ': u'ๅคงๅชๅœจ', u'ๅคงๅชๆ˜ฏ': u'ๅคงๅชๆ˜ฏ', u'ๅคงๅชไผš': u'ๅคงๅชๆœƒ', u'ๅคงๅชๆœ‰': u'ๅคงๅชๆœ‰', u'ๅคงๅช่ƒฝ': u'ๅคงๅช่ƒฝ', u'ๅคงๅช้œ€': u'ๅคงๅช้œ€', u'ๅคงๅ‘จๅŽ': u'ๅคงๅ‘จๅŽ', u'ๅคงๅž‹้’Ÿ': u'ๅคงๅž‹้˜', u'ๅคงๅž‹้’Ÿ่กจ้ข': u'ๅคงๅž‹้˜่กจ้ข', u'ๅคงๅž‹้’Ÿ่กจ': u'ๅคงๅž‹้˜้Œถ', u'ๅคงๅž‹้’Ÿ้ข': u'ๅคงๅž‹้˜้ข', u'ๅคงไผ™': u'ๅคงๅคฅ', u'ๅคงๅนฒ': u'ๅคงๅนน', u'ๅคงๆ‰นๆถŒๅˆฐ': u'ๅคงๆ‰นๆนงๅˆฐ', u'ๅคงๆŠ˜ๅ„ฟ': u'ๅคงๆ‘บๅ…’', u'ๅคงๆ˜Žๅކ': u'ๅคงๆ˜Žๆ›†', u'ๅคงๆ˜Žๅކๅฒ': u'ๅคงๆ˜Žๆญทๅฒ', u'ๅคงๅކ': u'ๅคงๆ›†', u'ๅคงๆœฌ้’Ÿ': u'ๅคงๆœฌ้˜', u'ๅคงๆœฌ้’Ÿๆ•ฒ': u'ๅคงๆœฌ้˜ๆ•ฒ', u'ๅคงๅކๅฒ': u'ๅคงๆญทๅฒ', u'ๅคงๅ‘†': u'ๅคง็ƒ', u'ๅคง็—…ๅˆๆ„ˆ': u'ๅคง็—…ๅˆ็™’', u'ๅคง็›ฎๅนฒ่ฟž': u'ๅคง็›ฎไนพ้€ฃ', u'ๅคง็ฌจ้’Ÿ': u'ๅคง็ฌจ้˜', u'ๅคง็ฌจ้’Ÿๆ•ฒ': u'ๅคง็ฌจ้˜ๆ•ฒ', u'ๅคง่œก': u'ๅคง่œก', u'ๅคง่กๅކ': u'ๅคง่กๆ›†', u'ๅคง่กๅކๅฒ': u'ๅคง่กๆญทๅฒ', u'ๅคง่จ€้žๅคธ': u'ๅคง่จ€้žๅคธ', u'ๅคง่ตž': u'ๅคง่ฎš', u'ๅคงๅ‘จๆŠ˜': u'ๅคง้€ฑๆ‘บ', u'ๅคง้‡‘ๅ‘่‹”': u'ๅคง้‡‘้ซฎ่‹”', u'ๅคง้”ค': u'ๅคง้Žš', u'ๅคง้’Ÿ': u'ๅคง้˜', u'ๅคงๅช': u'ๅคง้šป', u'ๅคง้ฃŽๅŽ': u'ๅคง้ขจๅพŒ', u'ๅคงๆ›ฒ': u'ๅคง้บด', u'ๅคฉๅนฒ็‰ฉ็‡ฅ': u'ๅคฉไนพ็‰ฉ็‡ฅ', u'ๅคฉๅ…‹ๅœฐๅ†ฒ': u'ๅคฉๅ…‹ๅœฐ่ก', u'ๅคฉๅŽ': u'ๅคฉๅŽ', u'ๅคฉๅŽๅฎซ': u'ๅคฉๅŽๅฎฎ', u'ๅคฉๅœฐๅฟ—็‹ผ': u'ๅคฉๅœฐๅฟ—็‹ผ', u'ๅคฉๅœฐไธบ่Œƒ': u'ๅคฉๅœฐ็‚บ็ฏ„', u'ๅคฉๅนฒๅœฐๆ”ฏ': u'ๅคฉๅนฒๅœฐๆ”ฏ', u'ๅคฉๆ–‡ๅญฆ้’Ÿ': u'ๅคฉๆ–‡ๅญธ้˜', u'ๅคฉๆ–‡้’Ÿ': u'ๅคฉๆ–‡้˜', u'ๅคฉๅކ': u'ๅคฉๆ›†', u'ๅคฉๅކๅฒ': u'ๅคฉๆญทๅฒ', u'ๅคฉ็ฟปๅœฐ่ฆ†': u'ๅคฉ็ฟปๅœฐ่ฆ†', u'ๅคฉ่ฆ†ๅœฐ่ฝฝ': u'ๅคฉ่ฆ†ๅœฐ่ผ‰', u'ๅคชไป†': u'ๅคชๅƒ•', u'ๅคชๅˆๅކ': u'ๅคชๅˆๆ›†', u'ๅคชๅˆๅކๅฒ': u'ๅคชๅˆๆญทๅฒ', u'ๅคชๅŽ': u'ๅคชๅŽ', u'ๅคฏๅนฒ': u'ๅคฏๅนน', u'ๅคธไบบ': u'ๅคธไบบ', u'ๅคธๅ…‹': u'ๅคธๅ…‹', u'ๅคธๅคธๅ…ถ่ฐˆ': u'ๅคธๅคธๅ…ถ่ซ‡', u'ๅคธๅงฃ': u'ๅคธๅงฃ', u'ๅคธๅฎน': u'ๅคธๅฎน', u'ๅคธๆฏ—': u'ๅคธๆฏ—', u'ๅคธ็ˆถ': u'ๅคธ็ˆถ', u'ๅคธ็‰น': u'ๅคธ็‰น', u'ๅคธ่„ฑ': u'ๅคธ่„ซ', u'ๅคธ่ฏž': u'ๅคธ่ช•', u'ๅคธ่ฏžไธ็ป': u'ๅคธ่ช•ไธ็ถ“', u'ๅคธไธฝ': u'ๅคธ้บ—', u'ๅฅ‡่ฟน': u'ๅฅ‡่ฟน', u'ๅฅ‡ไธ‘': u'ๅฅ‡้†œ', u'ๅฅๆŠ˜': u'ๅฅๆ‘บ', u'ๅฅฅๅ ': u'ๅฅงไฝ”', u'ๅคบๆ–—': u'ๅฅช้ฌฅ', 
u'ๅฅ‹ๆ–—': u'ๅฅฎ้ฌฅ', u'ๅฅณไธ‘': u'ๅฅณไธ‘', u'ๅฅณไฝฃไบบ': u'ๅฅณไฝฃไบบ', u'ๅฅณไฝฃ': u'ๅฅณๅ‚ญ', u'ๅฅณไป†': u'ๅฅณๅƒ•', u'ๅฅดไป†': u'ๅฅดๅƒ•', u'ๅฅธๆทซๆŽณๆŽ ': u'ๅฅธๆทซๆ“„ๆŽ ', u'ๅฅนๅ…‹ๅˆถ': u'ๅฅนๅ‰‹ๅˆถ', u'ๅฅฝๅนฒ': u'ๅฅฝไนพ', u'ๅฅฝๅฎถไผ™': u'ๅฅฝๅ‚ขๅคฅ', u'ๅฅฝๅ‹‡ๆ–—็‹ ': u'ๅฅฝๅ‹‡้ฌฅ็‹ ', u'ๅฅฝๆ–—ๅคง': u'ๅฅฝๆ–—ๅคง', u'ๅฅฝๆ–—ๅฎค': u'ๅฅฝๆ–—ๅฎค', u'ๅฅฝๆ–—็ฌ ': u'ๅฅฝๆ–—็ฌ ', u'ๅฅฝๆ–—็ฏท': u'ๅฅฝๆ–—็ฏท', u'ๅฅฝๆ–—่ƒ†': u'ๅฅฝๆ–—่†ฝ', u'ๅฅฝๆ–—่“ฌ': u'ๅฅฝๆ–—่“ฌ', u'ๅฅฝไบŽ': u'ๅฅฝๆ–ผ', u'ๅฅฝๅ‘†': u'ๅฅฝ็ƒ', u'ๅฅฝๅ›ฐ': u'ๅฅฝ็', u'ๅฅฝ็ญพ': u'ๅฅฝ็ฑค', u'ๅฅฝไธ‘': u'ๅฅฝ้†œ', u'ๅฅฝๆ–—': u'ๅฅฝ้ฌฅ', u'ๅฆ‚ๆžœๅนฒ': u'ๅฆ‚ๆžœๅนน', u'ๅฆ‚้ฅฅไผผๆธด': u'ๅฆ‚้ฅ‘ไผผๆธด', u'ๅฆ–ๅŽ': u'ๅฆ–ๅŽ', u'ๅฆ™่ฏ': u'ๅฆ™่—ฅ', u'ๅง‹ไบŽ': u'ๅง‹ๆ–ผ', u'ๅง”ๆ‰˜': u'ๅง”่จ—', u'ๅง”ๆ‰˜ไนฆ': u'ๅง”่จ—ๆ›ธ', u'ๅงœๆ–‡ๆฐ': u'ๅงœๆ–‡ๆฐ', u'ๅฅธๅคซ': u'ๅงฆๅคซ', u'ๅฅธๅฆ‡': u'ๅงฆๅฉฆ', u'ๅฅธๅฎ„': u'ๅงฆๅฎ„', u'ๅฅธๆƒ…': u'ๅงฆๆƒ…', u'ๅฅธๆ€': u'ๅงฆๆฎบ', u'ๅฅธๆฑก': u'ๅงฆๆฑ™', u'ๅฅธๆทซ': u'ๅงฆๆทซ', u'ๅฅธ็Œพ': u'ๅงฆ็Œพ', u'ๅฅธ็ป†': u'ๅงฆ็ดฐ', u'ๅฅธ้‚ช': u'ๅงฆ้‚ช', u'ๅจๆฃฑ': u'ๅจ็จœ', u'ๅฉขไป†': u'ๅฉขๅƒ•', u'ๅจฒๆ†': u'ๅชงๆ†', u'ๅซ็ฅธไบŽ': u'ๅซ็ฆๆ–ผ', u'ๅซŒๅ‡ถ': u'ๅซŒๅ…‡', u'ๅซŒๅฅฝ้“ไธ‘': u'ๅซŒๅฅฝ้“้†œ', u'ๅฌ‰ๆธธ': u'ๅฌ‰้Š', u'ๅฌ–ๅนธ': u'ๅฌ–ๅ€–', u'ๅฌดไฝ™': u'ๅฌด้ค˜', u'ๅญไน‹ไธฐๅ…ฎ': u'ๅญไน‹ไธฐๅ…ฎ', u'ๅญไบ‘': u'ๅญไบ‘', u'ๅญ—ๆฑ‡': u'ๅญ—ๅฝ™', u'ๅญ—็ ่กจ': u'ๅญ—็ขผ่กจ', u'ๅญ—้‡Œ่กŒ้—ด': u'ๅญ—่ฃก่กŒ้–“', u'ๅญ˜ๅไธ€ไบŽๅƒ็™พ': u'ๅญ˜ๅไธ€ๆ–ผๅƒ็™พ', u'ๅญ˜ๆŠ˜': u'ๅญ˜ๆ‘บ', u'ๅญ˜ไบŽ': u'ๅญ˜ๆ–ผ', u'ๅญคๅฏกไธ่ฐท': u'ๅญคๅฏกไธ็ฉ€', u'ๅญฆ้‡Œ': u'ๅญธ่ฃก', u'ๅฎ‡ๅฎ™ๅฟ—': u'ๅฎ‡ๅฎ™่ชŒ', u'ๅฎ‰ไบŽ': u'ๅฎ‰ๆ–ผ', u'ๅฎ‰ๆฒˆ้“่ทฏ': u'ๅฎ‰็€‹้ต่ทฏ', u'ๅฎ‰็œ ่ฏ': u'ๅฎ‰็œ ่—ฅ', u'ๅฎ‰่ƒŽ่ฏ': u'ๅฎ‰่ƒŽ่—ฅ', u'ๅฎ—ๅ‘จ้’Ÿ': u'ๅฎ—ๅ‘จ้˜', u'ๅฎ˜ไธๆ€•ๅคงๅชๆ€•็ฎก': u'ๅฎ˜ไธๆ€•ๅคงๅชๆ€•็ฎก', u'ๅฎ˜ๅœฐไธบ้‡‡': u'ๅฎ˜ๅœฐ็‚บๅฏ€', u'ๅฎ˜ๅކ': u'ๅฎ˜ๆ›†', u'ๅฎ˜ๅކๅฒ': u'ๅฎ˜ๆญทๅฒ', u'ๅฎ˜ๅบ„': u'ๅฎ˜่ŽŠ', u'ๅฎšไบŽ': u'ๅฎšๆ–ผ', u'ๅฎšๅ‡†': u'ๅฎšๆบ–', u'ๅฎšๅˆถ': u'ๅฎš่ฃฝ', u'ๅฎœไบ‘': u'ๅฎœไบ‘', u'ๅฎฃๆณ„': u'ๅฎฃๆดฉ', u'ๅฎฆๆธธ': u'ๅฎฆ้Š', u'ๅฎซ้‡Œ': u'ๅฎฎ่ฃก', 
u'ๅฎณไบŽ': u'ๅฎณๆ–ผ', u'ๅฎดๆธธ': u'ๅฎด้Š', u'ๅฎถไป†': u'ๅฎถๅƒ•', u'ๅฎถๅ…ทๅค‡': u'ๅฎถๅ…ทๅ‚™', u'ๅฎถๅ…ทๆœ‰': u'ๅฎถๅ…ทๆœ‰', u'ๅฎถๅ…ทๆœจๅทฅ็ง‘': u'ๅฎถๅ…ทๆœจๅทฅ็ง‘', u'ๅฎถๅ…ท่กŒ': u'ๅฎถๅ…ท่กŒ', u'ๅฎถๅ…ทไฝ“': u'ๅฎถๅ…ท้ซ”', u'ๅฎถๅบ„': u'ๅฎถ่ŽŠ', u'ๅฎถ้‡Œ': u'ๅฎถ่ฃก', u'ๅฎถไธ‘': u'ๅฎถ้†œ', u'ๅฎนไบŽ': u'ๅฎนๆ–ผ', u'ๅฎน่Œƒ': u'ๅฎน็ฏ„', u'ๅฎฟ่ˆ': u'ๅฎฟ่ˆ', u'ๅฏ„ๆ‰˜ๅœจ': u'ๅฏ„ๆ‰˜ๅœจ', u'ๅฏ„ๆ‰˜': u'ๅฏ„่จ—', u'ๅฏ†่‡ด': u'ๅฏ†็ทป', u'ๅฏ‡ๅ‡†': u'ๅฏ‡ๆบ–', u'ๅฏ‡ไป‡': u'ๅฏ‡่ฎŽ', u'ๅฏŒไฝ™': u'ๅฏŒ้ค˜', u'ๅฏ’ๅ‡้‡Œ': u'ๅฏ’ๅ‡่ฃก', u'ๅฏ’ๆ —': u'ๅฏ’ๆ…„', u'ๅฏ’ไบŽ': u'ๅฏ’ๆ–ผ', u'ๅฏ“ไบŽ': u'ๅฏ“ๆ–ผ', u'ๅฏกๅ ': u'ๅฏกไฝ”', u'ๅฏกๆฌฒ': u'ๅฏกๆ…พ', u'ๅฎžๅนฒ': u'ๅฏฆๅนน', u'ๅ†™ๅญ—ๅฐ': u'ๅฏซๅญ—ๆชฏ', u'ๅฎฝๅฎฝๆพๆพ': u'ๅฏฌๅฏฌ้ฌ†้ฌ†', u'ๅฎฝไบŽ': u'ๅฏฌๆ–ผ', u'ๅฎฝไฝ™': u'ๅฏฌ้ค˜', u'ๅฎฝๆพ': u'ๅฏฌ้ฌ†', u'ๅฏฎ้‡‡': u'ๅฏฎๅฏ€', u'ๅฎๅฑฑๅบ„': u'ๅฏถๅฑฑๅบ„', u'ๅฏถๆ›†': u'ๅฏถๆ›†', u'ๅฎๅކ': u'ๅฏถๆ›†', u'ๅฎๅކๅฒ': u'ๅฏถๆญทๅฒ', u'ๅฎๅบ„': u'ๅฏถ่ŽŠ', u'ๅฎ้‡Œๅฎๆฐ”': u'ๅฏถ่ฃกๅฏถๆฐฃ', u'ๅฏธๅ‘ๅƒ้‡‘': u'ๅฏธ้ซฎๅƒ้‡‘', u'ๅฏบ้’Ÿ': u'ๅฏบ้˜', u'ๅฐๅŽ': u'ๅฐๅŽ', u'ๅฐ้ข้‡Œ': u'ๅฐ้ข่ฃก', u'ๅฐ„้›•': u'ๅฐ„้ตฐ', u'ๅฐ†ๅ ': u'ๅฐ‡ไฝ”', u'ๅฐ†ๅ ๅœ': u'ๅฐ‡ๅ ๅœ', u'ไธ“ๅ‘ๅพ€': u'ๅฐˆๅ‘ๅพ€', u'ไธ“ๆณจ': u'ๅฐˆ่จป', u'ไธ“่พ‘้‡Œ': u'ๅฐˆ่ผฏ่ฃก', u'ๅฏนๆŠ˜': u'ๅฐๆ‘บ', u'ๅฏนไบŽ': u'ๅฐๆ–ผ', u'ๅฏนๅ‡†': u'ๅฐๆบ–', u'ๅฏนๅ‡†่กจ': u'ๅฐๆบ–้Œถ', u'ๅฏนๅ‡†้’Ÿ': u'ๅฐๆบ–้˜', u'ๅฏนๅ‡†้’Ÿ่กจ': u'ๅฐๆบ–้˜้Œถ', u'ๅฏนๅŽๅ‘ๅŠจ': u'ๅฐ่ฏ็™ผๅ‹•', u'ๅฏน่กจไธญ': u'ๅฐ่กจไธญ', u'ๅฏน่กจๆ‰ฌ': u'ๅฐ่กจๆš', u'ๅฏน่กจๆ˜Ž': u'ๅฐ่กจๆ˜Ž', u'ๅฏน่กจๆผ”': u'ๅฐ่กจๆผ”', u'ๅฏน่กจ็Žฐ': u'ๅฐ่กจ็พ', u'ๅฏน่กจ่พพ': u'ๅฐ่กจ้”', u'ๅฏน่กจ': u'ๅฐ้Œถ', u'ๅฏผๆธธ': u'ๅฐŽ้Š', u'ๅฐไธ‘': u'ๅฐไธ‘', u'ๅฐไปท': u'ๅฐไปท', u'ๅฐไป†': u'ๅฐๅƒ•', u'ๅฐๅ‡ ': u'ๅฐๅ‡ ', u'ๅฐๅชๅฏ': u'ๅฐๅชๅฏ', u'ๅฐๅชๅœจ': u'ๅฐๅชๅœจ', u'ๅฐๅชๆ˜ฏ': u'ๅฐๅชๆ˜ฏ', u'ๅฐๅชไผš': u'ๅฐๅชๆœƒ', u'ๅฐๅชๆœ‰': u'ๅฐๅชๆœ‰', u'ๅฐๅช่ƒฝ': u'ๅฐๅช่ƒฝ', u'ๅฐๅช้œ€': u'ๅฐๅช้œ€', u'ๅฐๅ‘จๅŽ': u'ๅฐๅ‘จๅŽ', u'ๅฐๅž‹้’Ÿ': u'ๅฐๅž‹้˜', u'ๅฐๅž‹้’Ÿ่กจ้ข': u'ๅฐๅž‹้˜่กจ้ข', 
u'ๅฐๅž‹้’Ÿ่กจ': u'ๅฐๅž‹้˜้Œถ', u'ๅฐๅž‹้’Ÿ้ข': u'ๅฐๅž‹้˜้ข', u'ๅฐไผ™ๅญ': u'ๅฐๅคฅๅญ', u'ๅฐ็ฑณ้ข': u'ๅฐ็ฑณ้บต', u'ๅฐๅช': u'ๅฐ้šป', u'ๅฐ‘ๅ ': u'ๅฐ‘ไฝ”', u'ๅฐ‘้‡‡': u'ๅฐ‘ๆŽก', u'ๅฐฑๅ…‹ๅˆถ': u'ๅฐฑๅ‰‹ๅˆถ', u'ๅฐฑ่Œƒ': u'ๅฐฑ็ฏ„', u'ๅฐฑ้‡Œ': u'ๅฐฑ่ฃก', u'ๅฐธไฝ็ด ้ค': u'ๅฐธไฝ็ด ้ค', u'ๅฐธๅˆฉ': u'ๅฐธๅˆฉ', u'ๅฐธๅฑ…ไฝ™ๆฐ”': u'ๅฐธๅฑ…้ค˜ๆฐฃ', u'ๅฐธ็ฅ': u'ๅฐธ็ฅ', u'ๅฐธ็ฆ„': u'ๅฐธ็ฅฟ', u'ๅฐธ่‡ฃ': u'ๅฐธ่‡ฃ', u'ๅฐธ่ฐ': u'ๅฐธ่ซซ', u'ๅฐธ้ญ‚็•Œ': u'ๅฐธ้ญ‚็•Œ', u'ๅฐธ้ธ ': u'ๅฐธ้ณฉ', u'ๅฑ€้‡Œ': u'ๅฑ€่ฃก', u'ๅฑ่‚กๅคงๅŠไบ†ๅฟƒ': u'ๅฑ่‚กๅคงๅผ”ไบ†ๅฟƒ', u'ๅฑ‹ๅญ้‡Œ': u'ๅฑ‹ๅญ่ฃก', u'ๅฑ‹ๆข': u'ๅฑ‹ๆจ‘', u'ๅฑ‹้‡Œ': u'ๅฑ‹่ฃก', u'ๅฑ้ฃŽๅŽ': u'ๅฑ้ขจๅพŒ', u'ๅฑ‘ไบŽ': u'ๅฑ‘ๆ–ผ', u'ๅฑก้กพๅฐ”ไป†': u'ๅฑข้กง็ˆพๅƒ•', u'ๅฑžไบŽ': u'ๅฑฌๆ–ผ', u'ๅฑžๆ‰˜': u'ๅฑฌ่จ—', u'ๅฑฏๆ‰Ž': u'ๅฑฏ็ดฎ', u'ๅฑฏ้‡Œ': u'ๅฑฏ่ฃก', u'ๅฑฑๅดฉ้’Ÿๅบ”': u'ๅฑฑๅดฉ้˜ๆ‡‰', u'ๅฑฑๅฒณ': u'ๅฑฑๅถฝ', u'ๅฑฑๆข': u'ๅฑฑๆจ‘', u'ๅฑฑๆดž้‡Œ': u'ๅฑฑๆดž่ฃก', u'ๅฑฑๆฃฑ': u'ๅฑฑ็จœ', u'ๅฑฑ็พŠ่ƒก': u'ๅฑฑ็พŠ้ฌ', u'ๅฑฑๅบ„': u'ๅฑฑ่ŽŠ', u'ๅฑฑ่ฏ': u'ๅฑฑ่—ฅ', u'ๅฑฑ้‡Œ': u'ๅฑฑ่ฃก', u'ๅฑฑ้‡ๆฐดๅค': u'ๅฑฑ้‡ๆฐด่ค‡', u'ๅฒฑๅฒณ': u'ๅฒฑๅถฝ', u'ๅณฐๅ›ž': u'ๅณฐ่ฟด', u'ๅณปๅฒญ': u'ๅณปๅฒญ', u'ๆ˜†ๅ‰ง': u'ๅด‘ๅЇ', u'ๆ˜†ๅฑฑ': u'ๅด‘ๅฑฑ', u'ๆ˜†ไป‘': u'ๅด‘ๅด™', u'ๆ˜†ไป‘ๅฑฑ่„‰': u'ๅด‘ๅด™ๅฑฑ่„ˆ', u'ๆ˜†ๆ›ฒ': u'ๅด‘ๆ›ฒ', u'ๆ˜†่…”': u'ๅด‘่…”', u'ๆ˜†่‹': u'ๅด‘่˜‡', u'ๆ˜†่ฐƒ': u'ๅด‘่ชฟ', u'ๅด–ๅนฟ': u'ๅด–ๅนฟ', u'ไป‘่ƒŒ': u'ๅด™่ƒŒ', u'ๅถ’ๆฃฑ': u'ๅถ’็จœ', u'ๅฒณๅฒณ': u'ๅถฝๅถฝ', u'ๅฒณ้บ“': u'ๅถฝ้บ“', u'ๅท่ฐท': u'ๅท็ฉ€', u'ๅทกๅ›žๅŒป็–—': u'ๅทกๅ›ž้†ซ็™‚', u'ๅทกๅ›ž': u'ๅทก่ฟด', u'ๅทกๆธธ': u'ๅทก้Š', u'ๅทฅ่‡ด': u'ๅทฅ็ทป', u'ๅทฆๅ†ฒๅณ็ช': u'ๅทฆ่กๅณ็ช', u'ๅทงๅฆ‡ๅšไธๅพ—ๆ— ้ข้ฆŽ้ฅฆ': u'ๅทงๅฉฆๅšไธๅพ—็„ก้บต้คบ้ฃฅ', u'ๅทงๅนฒ': u'ๅทงๅนน', u'ๅทงๅކ': u'ๅทงๆ›†', u'ๅทงๅކๅฒ': u'ๅทงๆญทๅฒ', u'ๅทฎไน‹ๆฏซๅŽ˜': u'ๅทฎไน‹ๆฏซๅŽ˜', u'ๅทฎไน‹ๆฏซๅŽ˜๏ผŒ่ฐฌไปฅๅƒ้‡Œ': u'ๅทฎไน‹ๆฏซ้‡๏ผŒ่ฌฌไปฅๅƒ้‡Œ', u'ๅทฎไบŽ': u'ๅทฎๆ–ผ', u'ๅทฑไธ‘': u'ๅทฑไธ‘', u'ๅทฒๅ ': u'ๅทฒไฝ”', u'ๅทฒๅ ๅœ': u'ๅทฒๅ ๅœ', u'ๅทฒๅ ็ฎ—': u'ๅทฒๅ ็ฎ—', u'ๅทดๅฐ”ๅนฒ': u'ๅทด็ˆพๅนน', u'ๅทท้‡Œ': u'ๅทท่ฃก', u'ๅธ‚ๅ ': 
u'ๅธ‚ไฝ”', u'ๅธ‚ๅ ็އ': u'ๅธ‚ไฝ”็އ', u'ๅธ‚้‡Œ': u'ๅธ‚่ฃก', u'ๅธƒ่ฐท': u'ๅธƒ็ฉ€', u'ๅธƒ่ฐท้ธŸ้’Ÿ': u'ๅธƒ็ฉ€้ณฅ้˜', u'ๅธƒๅบ„': u'ๅธƒ่ŽŠ', u'ๅธƒ่ฐท้ธŸ': u'ๅธƒ่ฐท้ณฅ', u'ๅธŒไผฏๆฅๅކ': u'ๅธŒไผฏไพ†ๆ›†', u'ๅธŒไผฏๆฅๅކๅฒ': u'ๅธŒไผฏไพ†ๆญทๅฒ', u'ๅธ˜ๅญ': u'ๅธ˜ๅญ', u'ๅธ˜ๅธƒ': u'ๅธ˜ๅธƒ', u'ๅธˆ่Œƒ': u'ๅธซ็ฏ„', u'ๅธญๅท': u'ๅธญๆฒ', u'ๅธฆๅ›ขๅ‚ๅŠ ': u'ๅธถๅœ˜ๅƒๅŠ ', u'ๅธฆๅพ': u'ๅธถๅพต', u'ๅธฆๅ‘ไฟฎ่กŒ': u'ๅธถ้ซฎไฟฎ่กŒ', u'ๅธฎไฝฃ': u'ๅนซๅ‚ญ', u'ๅนฒ็ณป': u'ๅนฒไฟ‚', u'ๅนฒ็€ๆ€ฅ': u'ๅนฒ่‘—ๆ€ฅ', u'ๅนณๅนณๅฝ“ๅฝ“': u'ๅนณๅนณ็•ถ็•ถ', u'ๅนณๆณ‰ๅบ„': u'ๅนณๆณ‰่ŽŠ', u'ๅนณๅ‡†': u'ๅนณๆบ–', u'ๅนดไปฃ้‡Œ': u'ๅนดไปฃ่ฃก', u'ๅนดๅކ': u'ๅนดๆ›†', u'ๅนดๅކๅฒ': u'ๅนดๆญทๅฒ', u'ๅนด่ฐท': u'ๅนด็ฉ€', u'ๅนด้‡Œ': u'ๅนด่ฃก', u'ๅนถๅŠ›': u'ๅนถๅŠ›', u'ๅนถๅž': u'ๅนถๅž', u'ๅนถๅทž': u'ๅนถๅทž', u'ๅนถๆ—ฅ่€Œ้ฃŸ': u'ๅนถๆ—ฅ่€Œ้ฃŸ', u'ๅนถ่กŒ': u'ๅนถ่กŒ', u'ๅนถ่ฟญ': u'ๅนถ่ฟญ', u'ๅนธๅ…ไบŽ้šพ': u'ๅนธๅ…ๆ–ผ้›ฃ', u'ๅนธไบŽ': u'ๅนธๆ–ผ', u'ๅนธ่ฟ่ƒก': u'ๅนธ้‹้ฌ', u'ๅนฒไธŠ': u'ๅนนไธŠ', u'ๅนฒไธ‹ๅŽป': u'ๅนนไธ‹ๅŽป', u'ๅนฒไธไบ†': u'ๅนนไธไบ†', u'ๅนฒไธๆˆ': u'ๅนนไธๆˆ', u'ๅนฒไบ†': u'ๅนนไบ†', u'ๅนฒไบ‹': u'ๅนนไบ‹', u'ๅนฒไบ›': u'ๅนนไบ›', u'ๅนฒไบบ': u'ๅนนไบบ', u'ๅนฒไป€ไนˆ': u'ๅนนไป€้บผ', u'ๅนฒไธช': u'ๅนนๅ€‹', u'ๅนฒๅŠฒ': u'ๅนนๅ‹', u'ๅนฒๅŠฒๅ†ฒๅคฉ': u'ๅนนๅ‹ๆฒ–ๅคฉ', u'ๅนฒๅ': u'ๅนนๅ', u'ๅนฒๅ‘˜': u'ๅนนๅ“ก', u'ๅนฒๅ•ฅ': u'ๅนนๅ•ฅ', u'ๅนฒๅ—': u'ๅนนๅ—Ž', u'ๅนฒๅ˜›': u'ๅนนๅ˜›', u'ๅนฒๅไบ‹': u'ๅนนๅฃžไบ‹', u'ๅนฒๅฎŒ': u'ๅนนๅฎŒ', u'ๅนฒๅฎถ': u'ๅนนๅฎถ', u'ๅนฒๅฐ†': u'ๅนนๅฐ‡', u'ๅนฒๅพ—': u'ๅนนๅพ—', u'ๅนฒๆ€งๆฒน': u'ๅนนๆ€งๆฒน', u'ๅนฒๆ‰': u'ๅนนๆ‰', u'ๅนฒๆމ': u'ๅนนๆމ', u'ๅนฒๆŽข': u'ๅนนๆŽข', u'ๅนฒๆ ก': u'ๅนนๆ ก', u'ๅนฒๆดป': u'ๅนนๆดป', u'ๅนฒๆต': u'ๅนนๆต', u'ๅนฒๆตŽ': u'ๅนนๆฟŸ', u'ๅนฒ่ฅ็”Ÿ': u'ๅนน็‡Ÿ็”Ÿ', u'ๅนฒ็ˆถไน‹่›Š': u'ๅนน็ˆถไน‹่ ฑ', u'ๅนฒ็ƒๆธฉๅบฆ': u'ๅนน็ƒๆบซๅบฆ', u'ๅนฒ็”šไนˆ': u'ๅนน็”š้บผ', u'ๅนฒ็•ฅ': u'ๅนน็•ฅ', u'ๅนฒๅฝ“': u'ๅนน็•ถ', u'ๅนฒ็š„ๅœๅฝ“': u'ๅนน็š„ๅœ็•ถ', u'ๅนฒ็ป†่ƒž': u'ๅนน็ดฐ่ƒž', u'ๅนฒ็ดฐ่ƒž': u'ๅนน็ดฐ่ƒž', u'ๅนฒ็บฟ': u'ๅนน็ทš', u'ๅนฒ็ปƒ': u'ๅนน็ทด', u'ๅนฒ็ผบ': u'ๅนน็ผบ', u'ๅนฒ็พคๅ…ณ็ณป': u'ๅนน็พค้—œไฟ‚', u'ๅนฒ่›Š': 
u'ๅนน่ ฑ', u'ๅนฒ่ญฆ': u'ๅนน่ญฆ', u'ๅนฒ่ตทๆฅ': u'ๅนน่ตทไพ†', u'ๅนฒ่ทฏ': u'ๅนน่ทฏ', u'ๅนฒๅŠž': u'ๅนน่พฆ', u'ๅนฒ่ฟ™ไธ€่กŒ': u'ๅนน้€™ไธ€่กŒ', u'ๅนฒ่ฟ™็งไบ‹': u'ๅนน้€™็จฎไบ‹', u'ๅนฒ้“': u'ๅนน้“', u'ๅนฒ้ƒจ': u'ๅนน้ƒจ', u'ๅนฒ้ฉๅ‘ฝ': u'ๅนน้ฉๅ‘ฝ', u'ๅนฒๅคด': u'ๅนน้ ญ', u'ๅนฒไนˆ': u'ๅนน้บผ', u'ๅ‡ ๅˆ’': u'ๅนพๅŠƒ', u'ๅ‡ ๅคฉๅŽ': u'ๅนพๅคฉๅพŒ', u'ๅ‡ ๅช': u'ๅนพ้šป', u'ๅ‡ ๅ‡บ': u'ๅนพ้ฝฃ', u'ๅนฟ้ƒจ': u'ๅนฟ้ƒจ', u'ๅบ„็จผไบบ': u'ๅบ„็จผไบบ', u'ๅบ„็จผ้™ข': u'ๅบ„็จผ้™ข', u'ๅบ—้‡Œ': u'ๅบ—่ฃก', u'ๅบœๅนฒๅฟ': u'ๅบœๅนฒๅฟ', u'ๅบœๅนฒๆ“พ': u'ๅบœๅนฒๆ“พ', u'ๅบœๅนฒๆ‰ฐ': u'ๅบœๅนฒๆ“พ', u'ๅบœๅนฒๆ”ฟ': u'ๅบœๅนฒๆ”ฟ', u'ๅบœๅนฒๆถ‰': u'ๅบœๅนฒๆถ‰', u'ๅบœๅนฒ็Šฏ': u'ๅบœๅนฒ็Šฏ', u'ๅบœๅนฒ้ ': u'ๅบœๅนฒ้ ', u'ๅบœๅนฒ้ข„': u'ๅบœๅนฒ้ ', u'ๅบœๅนฒ': u'ๅบœๅนน', u'ๅบง้’Ÿ': u'ๅบง้˜', u'ๅบทๅบ„ๅคง้“': u'ๅบทๅบ„ๅคง้“', u'ๅบท้‡‡ๆฉ': u'ๅบทๆŽกๆฉ', u'ๅบทๅบ„': u'ๅบท่ŽŠ', u'ๅŽจไฝ™': u'ๅปš้ค˜', u'ๅŽฎๆ–—': u'ๅป้ฌฅ', u'ๅบ™้‡Œ': u'ๅปŸ่ฃก', u'ๅบŸๅŽ': u'ๅปขๅŽ', u'ๅปขๅŽ': u'ๅปขๅŽ', u'ๅนฟๅพ': u'ๅปฃๅพต', u'ๅนฟ่ˆ': u'ๅปฃๆจ', u'ๅปบไบŽ': u'ๅปบๆ–ผ', u'ๅผ„ๅนฒ': u'ๅผ„ไนพ', u'ๅผ„ไธ‘': u'ๅผ„้†œ', u'ๅผ„่„': u'ๅผ„้ซ’', u'ๅผ„ๆพ': u'ๅผ„้ฌ†', u'ๅผ„้ฌผๅŠ็Œด': u'ๅผ„้ฌผๅผ”็Œด', u'ๅŠๅ„ฟ้ƒŽๅฝ“': u'ๅผ”ๅ…’้ƒŽ็•ถ', u'ๅŠๅท': u'ๅผ”ๅท', u'ๅŠๅ–': u'ๅผ”ๅ–', u'ๅŠๅค': u'ๅผ”ๅค', u'ๅŠๅคๅฏปๅนฝ': u'ๅผ”ๅคๅฐ‹ๅนฝ', u'ๅŠๅ”': u'ๅผ”ๅ”', u'ๅŠ้—ฎ': u'ๅผ”ๅ•', u'ๅŠๅ–‰': u'ๅผ”ๅ–‰', u'ๅŠไธง': u'ๅผ”ๅ–ช', u'ๅŠไธง้—ฎ็–พ': u'ๅผ”ๅ–ชๅ•็–พ', u'ๅŠๅ–ญ': u'ๅผ”ๅ–ญ', u'ๅŠๅœบ': u'ๅผ”ๅ ด', u'ๅŠๅฅ ': u'ๅผ”ๅฅ ', u'ๅŠๅญ': u'ๅผ”ๅญ', u'ๅŠๅฎข': u'ๅผ”ๅฎข', u'ๅŠๅฎด': u'ๅผ”ๅฎด', u'ๅŠๅธฆ': u'ๅผ”ๅธถ', u'ๅŠๅฝฑ': u'ๅผ”ๅฝฑ', u'ๅŠๆ…ฐ': u'ๅผ”ๆ…ฐ', u'ๅŠๆ‰ฃ': u'ๅผ”ๆ‰ฃ', u'ๅŠๆ‹ท': u'ๅผ”ๆ‹ท', u'ๅŠๆ‹ท็ปทๆ‰’': u'ๅผ”ๆ‹ท็นƒๆ‰’', u'ๅŠๆŒ‚': u'ๅผ”ๆŽ›', u'ๅŠๆ’’': u'ๅผ”ๆ’’', u'ๅŠๆ–‡': u'ๅผ”ๆ–‡', u'ๅŠๆ——': u'ๅผ”ๆ——', u'ๅŠไนฆ': u'ๅผ”ๆ›ธ', u'ๅŠๆกฅ': u'ๅผ”ๆฉ‹', u'ๅŠๆญป': u'ๅผ”ๆญป', u'ๅŠๆญป้—ฎๅญค': u'ๅผ”ๆญปๅ•ๅญค', u'ๅŠๆญป้—ฎ็–พ': u'ๅผ”ๆญปๅ•็–พ', u'ๅŠๆฐ‘': u'ๅผ”ๆฐ‘', u'ๅŠๆฐ‘ไผ็ฝช': u'ๅผ”ๆฐ‘ไผ็ฝช', u'ๅŠ็ฅญ': u'ๅผ”็ฅญ', u'ๅŠ็บธ': u'ๅผ”็ด™', 
u'ๅŠ่€…ๅคงๆ‚ฆ': u'ๅผ”่€…ๅคงๆ‚…', u'ๅŠ่…ฐๆ’’่ทจ': u'ๅผ”่…ฐๆ’’่ทจ', u'ๅŠ่„šๅ„ฟไบ‹': u'ๅผ”่…ณๅ…’ไบ‹', u'ๅŠ่†€ๅญ': u'ๅผ”่†€ๅญ', u'ๅŠ่ฏ': u'ๅผ”่ฉž', u'ๅŠ่ฏก': u'ๅผ”่ฉญ', u'ๅŠ่ฏก็Ÿœๅฅ‡': u'ๅผ”่ฉญ็Ÿœๅฅ‡', u'ๅŠ่ฐŽ': u'ๅผ”่ฌŠ', u'ๅŠ่ดบ่ฟŽ้€': u'ๅผ”่ณ€่ฟŽ้€', u'ๅŠๅคด': u'ๅผ”้ ญ', u'ๅŠ้ขˆ': u'ๅผ”้ ธ', u'ๅŠ้นค': u'ๅผ”้ถด', u'ๅผ•ๆ–—': u'ๅผ•้ฌฅ', u'ๅผ˜ๅކ': u'ๅผ˜ๆ›†', u'ๅผ˜ๅކๅฒ': u'ๅผ˜ๆญทๅฒ', u'ๅผฑไบŽ': u'ๅผฑๆ–ผ', u'ๅผฑๆฐดไธ‰ๅƒๅชๅ–ไธ€็“ข': u'ๅผฑๆฐดไธ‰ๅƒๅชๅ–ไธ€็“ข', u'ๅผ ไธ‰ไธฐ': u'ๅผตไธ‰ไธฐ', u'ๅผตไธ‰ไธฐ': u'ๅผตไธ‰ไธฐ', u'ๅผ ๅ‹‹': u'ๅผตๅ‹ณ', u'ๅผบๅ ': u'ๅผทไฝ”', u'ๅผบๅˆถไฝœ็”จ': u'ๅผทๅˆถไฝœ็”จ', u'ๅผบๅฅธ': u'ๅผทๅงฆ', u'ๅผบๅนฒ': u'ๅผทๅนน', u'ๅผบไบŽ': u'ๅผทๆ–ผ', u'ๅˆซๅฃๆฐ”': u'ๅฝ†ๅฃๆฐฃ', u'ๅˆซๅผบ': u'ๅฝ†ๅผท', u'ๅˆซๆ‰ญ': u'ๅฝ†ๆ‰ญ', u'ๅˆซๆ‹—': u'ๅฝ†ๆ‹—', u'ๅˆซๆฐ”': u'ๅฝ†ๆฐฃ', u'ๅผนๅญๅฐ': u'ๅฝˆๅญๆชฏ', u'ๅผน็ ๅฐ': u'ๅฝˆ็ ๆชฏ', u'ๅผน่ฏ': u'ๅฝˆ่—ฅ', u'ๆฑ‡ๅˆŠ': u'ๅฝ™ๅˆŠ', u'ๆฑ‡ๆŠฅ': u'ๅฝ™ๅ ฑ', u'ๆฑ‡ๆ•ด': u'ๅฝ™ๆ•ด', u'ๆฑ‡็ฎ—': u'ๅฝ™็ฎ—', u'ๆฑ‡็ผ–': u'ๅฝ™็ทจ', u'ๆฑ‡็บ‚': u'ๅฝ™็บ‚', u'ๆฑ‡่พ‘': u'ๅฝ™่ผฏ', u'ๆฑ‡้›†': u'ๅฝ™้›†', u'ๅฝขๅ•ๅฝฑๅช': u'ๅฝขๅ–ฎๅฝฑ้šป', u'ๅฝขๅฝฑ็›ธๅŠ': u'ๅฝขๅฝฑ็›ธๅผ”', u'ๅฝขไบŽ': u'ๅฝขๆ–ผ', u'ๅฝฑๅŽ': u'ๅฝฑๅŽ', u'ไปฟไฝ›': u'ๅฝทๅฝฟ', u'ๅฝนไบŽ': u'ๅฝนๆ–ผ', u'ๅฝผๆญคๅ…‹ๅˆถ': u'ๅฝผๆญคๅ‰‹ๅˆถ', u'ๅพ€ๆ—ฅ็„กไป‡': u'ๅพ€ๆ—ฅ็„ก่ฎŽ', u'ๅพ€้‡Œ': u'ๅพ€่ฃก', u'ๅพ€ๅค': u'ๅพ€่ค‡', u'ๅพˆๅนฒ': u'ๅพˆไนพ', u'ๅพˆๅ‡ถ': u'ๅพˆๅ…‡', u'ๅพˆไธ‘': u'ๅพˆ้†œ', u'ๅพ‹ๅކๅฟ—': u'ๅพ‹ๆ›†ๅฟ—', u'ๅŽๅฐ': u'ๅพŒๅฐ', u'ๅŽๅฐ่€ๆฟ': u'ๅพŒๅฐ่€ๆฟ', u'ๅŽๅบ„': u'ๅพŒๅบ„', u'ๅŽ้ขๅบ—': u'ๅพŒ้ขๅบ—', u'ๅพๅนฒ': u'ๅพๅนน', u'ๅพ’ๆ‰˜็ฉบ่จ€': u'ๅพ’่จ—็ฉบ่จ€', u'ๅพ—ๅ…‹ๅˆถ': u'ๅพ—ๅ‰‹ๅˆถ', u'ไปŽไบŽ': u'ๅพžๆ–ผ', u'ไปŽ้‡Œๅˆฐๅค–': u'ๅพž่ฃกๅˆฐๅค–', u'ไปŽ้‡Œๅ‘ๅค–': u'ๅพž่ฃกๅ‘ๅค–', u'ๅคๅง‹': u'ๅพฉๅง‹', u'ๅพไบบ': u'ๅพตไบบ', u'ๅพไปค': u'ๅพตไปค', u'ๅพๅ ': u'ๅพตไฝ”', u'ๅพไฟก': u'ๅพตไฟก', u'ๅพๅ€™': u'ๅพตๅ€™', u'ๅพๅ…†': u'ๅพตๅ…†', u'ๅพๅ…ต': u'ๅพตๅ…ต', u'ๅพๅˆฐ': u'ๅพตๅˆฐ', u'ๅพๅ‹Ÿ': u'ๅพตๅ‹Ÿ', u'ๅพๅ‹': u'ๅพตๅ‹', u'ๅพๅฌ': u'ๅพตๅฌ', u'ๅพๅ่ดฃๅฎž': u'ๅพตๅ่ฒฌๅฏฆ', u'ๅพๅ': 
u'ๅพตๅ', u'ๅพๅ’Ž': u'ๅพตๅ’Ž', u'ๅพๅฏ': u'ๅพตๅ•Ÿ', u'ๅพๅฃซ': u'ๅพตๅฃซ', u'ๅพๅฉš': u'ๅพตๅฉš', u'ๅพๅฎž': u'ๅพตๅฏฆ', u'ๅพๅบธ': u'ๅพตๅบธ', u'ๅพๅผ•': u'ๅพตๅผ•', u'ๅพๅพ—': u'ๅพตๅพ—', u'ๅพๆ€ช': u'ๅพตๆ€ช', u'ๅพๆ‰': u'ๅพตๆ‰', u'ๅพๆ‹›': u'ๅพตๆ‹›', u'ๅพๆ”ถ': u'ๅพตๆ”ถ', u'ๅพๆ•ˆ': u'ๅพตๆ•ˆ', u'ๅพๆ–‡': u'ๅพตๆ–‡', u'ๅพๆฑ‚': u'ๅพตๆฑ‚', u'ๅพ็Šถ': u'ๅพต็‹€', u'ๅพ็”จ': u'ๅพต็”จ', u'ๅพๅ‘': u'ๅพต็™ผ', u'ๅพ็จŽ': u'ๅพต็จ…', u'ๅพ็จฟ': u'ๅพต็จฟ', u'ๅพ็ญ”': u'ๅพต็ญ”', u'ๅพ็ป“': u'ๅพต็ต', u'ๅพๅœฃ': u'ๅพต่–', u'ๅพ่˜': u'ๅพต่˜', u'ๅพ่ฎญ': u'ๅพต่จ“', u'ๅพ่ฏข': u'ๅพต่ฉข', u'ๅพ่ฐƒ': u'ๅพต่ชฟ', u'ๅพ่ฑก': u'ๅพต่ฑก', u'ๅพ่ดญ': u'ๅพต่ณผ', u'ๅพ่ฟน': u'ๅพต่ทก', u'ๅพ่ฝฆ': u'ๅพต่ปŠ', u'ๅพ่พŸ': u'ๅพต่พŸ', u'ๅพ้€': u'ๅพต้€', u'ๅพ้€‰': u'ๅพต้ธ', u'ๅพ้›†': u'ๅพต้›†', u'ๅพ้ฃŽๅฌ้›จ': u'ๅพต้ขจๅฌ้›จ', u'ๅพ้ชŒ': u'ๅพต้ฉ—', u'ๅพทๅ ': u'ๅพทไฝ”', u'ๅฟƒๆ„ฟ': u'ๅฟƒๆ„ฟ', u'ๅฟƒไบŽ': u'ๅฟƒๆ–ผ', u'ๅฟƒ็†': u'ๅฟƒ็†', u'ๅฟƒ็ป†ๅฆ‚ๅ‘': u'ๅฟƒ็ดฐๅฆ‚้ซฎ', u'ๅฟƒ็ณปไธ€': u'ๅฟƒ็นซไธ€', u'ๅฟƒ็ณปไธ–': u'ๅฟƒ็นซไธ–', u'ๅฟƒ็ณปไธญ': u'ๅฟƒ็นซไธญ', u'ๅฟƒ็ณปไน”': u'ๅฟƒ็นซไน”', u'ๅฟƒ็ณปไบ”': u'ๅฟƒ็นซไบ”', u'ๅฟƒ็ณปไบฌ': u'ๅฟƒ็นซไบฌ', u'ๅฟƒ็ณปไบบ': u'ๅฟƒ็นซไบบ', u'ๅฟƒ็ณปไป–': u'ๅฟƒ็นซไป–', u'ๅฟƒ็ณปไผŠ': u'ๅฟƒ็นซไผŠ', u'ๅฟƒ็ณปไฝ•': u'ๅฟƒ็นซไฝ•', u'ๅฟƒ็ณปไฝ ': u'ๅฟƒ็นซไฝ ', u'ๅฟƒ็ณปๅฅ': u'ๅฟƒ็นซๅฅ', u'ๅฟƒ็ณปไผ ': u'ๅฟƒ็นซๅ‚ณ', u'ๅฟƒ็ณปๅ…จ': u'ๅฟƒ็นซๅ…จ', u'ๅฟƒ็ณปไธค': u'ๅฟƒ็นซๅ…ฉ', u'ๅฟƒ็ณปๅ†œ': u'ๅฟƒ็นซๅ†œ', u'ๅฟƒ็ณปๅŠŸ': u'ๅฟƒ็นซๅŠŸ', u'ๅฟƒ็ณปๅŠจ': u'ๅฟƒ็นซๅ‹•', u'ๅฟƒ็ณปๅ‹Ÿ': u'ๅฟƒ็นซๅ‹Ÿ', u'ๅฟƒ็ณปๅŒ—': u'ๅฟƒ็นซๅŒ—', u'ๅฟƒ็ณปๅ': u'ๅฟƒ็นซๅ', u'ๅฟƒ็ณปๅƒ': u'ๅฟƒ็นซๅƒ', u'ๅฟƒ็ณปๅ—': u'ๅฟƒ็นซๅ—', u'ๅฟƒ็ณปๅฐ': u'ๅฟƒ็นซๅฐ', u'ๅฟƒ็ณปๅ’Œ': u'ๅฟƒ็นซๅ’Œ', u'ๅฟƒ็ณปๅ“ช': u'ๅฟƒ็นซๅ“ช', u'ๅฟƒ็ณปๅ”': u'ๅฟƒ็นซๅ”', u'ๅฟƒ็ณปๅ˜ฑ': u'ๅฟƒ็นซๅ›‘', u'ๅฟƒ็ณปๅ››': u'ๅฟƒ็นซๅ››', u'ๅฟƒ็ณปๅ›ฐ': u'ๅฟƒ็นซๅ›ฐ', u'ๅฟƒ็ณปๅ›ฝ': u'ๅฟƒ็นซๅœ‹', u'ๅฟƒ็ณปๅœจ': u'ๅฟƒ็นซๅœจ', u'ๅฟƒ็ณปๅœฐ': u'ๅฟƒ็นซๅœฐ', u'ๅฟƒ็ณปๅคง': u'ๅฟƒ็นซๅคง', u'ๅฟƒ็ณปๅคฉ': u'ๅฟƒ็นซๅคฉ', u'ๅฟƒ็ณปๅคซ': u'ๅฟƒ็นซๅคซ', u'ๅฟƒ็ณปๅฅฅ': u'ๅฟƒ็นซๅฅง', 
u'ๅฟƒ็ณปๅฅณ': u'ๅฟƒ็นซๅฅณ', u'ๅฟƒ็ณปๅฅน': u'ๅฟƒ็นซๅฅน', u'ๅฟƒ็ณปๅฆป': u'ๅฟƒ็นซๅฆป', u'ๅฟƒ็ณปๅฆ‡': u'ๅฟƒ็นซๅฉฆ', u'ๅฟƒ็ณปๅญ': u'ๅฟƒ็นซๅญ', u'ๅฟƒ็ณปๅฎƒ': u'ๅฟƒ็นซๅฎƒ', u'ๅฟƒ็ณปๅฎฃ': u'ๅฟƒ็นซๅฎฃ', u'ๅฟƒ็ณปๅฎถ': u'ๅฟƒ็นซๅฎถ', u'ๅฟƒ็ณปๅฏŒ': u'ๅฟƒ็นซๅฏŒ', u'ๅฟƒ็ณปๅฐ': u'ๅฟƒ็นซๅฐ', u'ๅฟƒ็ณปๅฑฑ': u'ๅฟƒ็นซๅฑฑ', u'ๅฟƒ็ณปๅท': u'ๅฟƒ็นซๅท', u'ๅฟƒ็ณปๅนผ': u'ๅฟƒ็นซๅนผ', u'ๅฟƒ็ณปๅนฟ': u'ๅฟƒ็นซๅปฃ', u'ๅฟƒ็ณปๅฝผ': u'ๅฟƒ็นซๅฝผ', u'ๅฟƒ็ณปๅพท': u'ๅฟƒ็นซๅพท', u'ๅฟƒ็ณปๆ‚จ': u'ๅฟƒ็นซๆ‚จ', u'ๅฟƒ็ณปๆ…ˆ': u'ๅฟƒ็นซๆ…ˆ', u'ๅฟƒ็ณปๆˆ‘': u'ๅฟƒ็นซๆˆ‘', u'ๅฟƒ็ณปๆ‘ฉ': u'ๅฟƒ็นซๆ‘ฉ', u'ๅฟƒ็ณปๆ•…': u'ๅฟƒ็นซๆ•…', u'ๅฟƒ็ณปๆ–ฐ': u'ๅฟƒ็นซๆ–ฐ', u'ๅฟƒ็ณปๆ—ฅ': u'ๅฟƒ็นซๆ—ฅ', u'ๅฟƒ็ณปๆ˜Œ': u'ๅฟƒ็นซๆ˜Œ', u'ๅฟƒ็ณปๆ™“': u'ๅฟƒ็นซๆ›‰', u'ๅฟƒ็ณปๆ›ผ': u'ๅฟƒ็นซๆ›ผ', u'ๅฟƒ็ณปไธœ': u'ๅฟƒ็นซๆฑ', u'ๅฟƒ็ณปๆž—': u'ๅฟƒ็นซๆž—', u'ๅฟƒ็ณปๆฏ': u'ๅฟƒ็นซๆฏ', u'ๅฟƒ็ณปๆฐ‘': u'ๅฟƒ็นซๆฐ‘', u'ๅฟƒ็ณปๆฑŸ': u'ๅฟƒ็นซๆฑŸ', u'ๅฟƒ็ณปๆฑถ': u'ๅฟƒ็นซๆฑถ', u'ๅฟƒ็ณปๆฒˆ': u'ๅฟƒ็นซๆฒˆ', u'ๅฟƒ็ณปๆฒ™': u'ๅฟƒ็นซๆฒ™', u'ๅฟƒ็ณปๆณฐ': u'ๅฟƒ็นซๆณฐ', u'ๅฟƒ็ณปๆต™': u'ๅฟƒ็นซๆต™', u'ๅฟƒ็ณปๆธฏ': u'ๅฟƒ็นซๆธฏ', u'ๅฟƒ็ณปๆน–': u'ๅฟƒ็นซๆน–', u'ๅฟƒ็ณปๆพณ': u'ๅฟƒ็นซๆพณ', u'ๅฟƒ็ณป็พ': u'ๅฟƒ็นซ็ฝ', u'ๅฟƒ็ณป็ˆถ': u'ๅฟƒ็นซ็ˆถ', u'ๅฟƒ็ณป็”Ÿ': u'ๅฟƒ็นซ็”Ÿ', u'ๅฟƒ็ณป็—…': u'ๅฟƒ็นซ็—…', u'ๅฟƒ็ณป็™พ': u'ๅฟƒ็นซ็™พ', u'ๅฟƒ็ณป็š„': u'ๅฟƒ็นซ็š„', u'ๅฟƒ็ณปไผ—': u'ๅฟƒ็นซ็œพ', u'ๅฟƒ็ณป็คพ': u'ๅฟƒ็นซ็คพ', u'ๅฟƒ็ณป็ฅ–': u'ๅฟƒ็นซ็ฅ–', u'ๅฟƒ็ณป็ฅž': u'ๅฟƒ็นซ็ฅž', u'ๅฟƒ็ณป็บข': u'ๅฟƒ็นซ็ด…', u'ๅฟƒ็ณป็พŽ': u'ๅฟƒ็นซ็พŽ', u'ๅฟƒ็ณป็พค': u'ๅฟƒ็นซ็พค', u'ๅฟƒ็ณป่€': u'ๅฟƒ็นซ่€', u'ๅฟƒ็ณป่ˆž': u'ๅฟƒ็นซ่ˆž', u'ๅฟƒ็ณป่‹ฑ': u'ๅฟƒ็นซ่‹ฑ', u'ๅฟƒ็ณป่Œถ': u'ๅฟƒ็นซ่Œถ', u'ๅฟƒ็ณปไธ‡': u'ๅฟƒ็นซ่ฌ', u'ๅฟƒ็ณป็€': u'ๅฟƒ็นซ่‘—', u'ๅฟƒ็ณปๅ…ฐ': u'ๅฟƒ็นซ่˜ญ', u'ๅฟƒ็ณป่ฅฟ': u'ๅฟƒ็นซ่ฅฟ', u'ๅฟƒ็ณป่ดซ': u'ๅฟƒ็นซ่ฒง', u'ๅฟƒ็ณป่พ“': u'ๅฟƒ็นซ่ผธ', u'ๅฟƒ็ณป่ฟ‘': u'ๅฟƒ็นซ่ฟ‘', u'ๅฟƒ็ณป่ฟœ': u'ๅฟƒ็นซ้ ', u'ๅฟƒ็ณป้€‰': u'ๅฟƒ็นซ้ธ', u'ๅฟƒ็ณป้‡': u'ๅฟƒ็นซ้‡', u'ๅฟƒ็ณป้•ฟ': u'ๅฟƒ็นซ้•ท', u'ๅฟƒ็ณป้˜ฎ': u'ๅฟƒ็นซ้˜ฎ', u'ๅฟƒ็ณป้œ‡': u'ๅฟƒ็นซ้œ‡', u'ๅฟƒ็ณป้ž': u'ๅฟƒ็นซ้ž', u'ๅฟƒ็ณป้ฃŽ': u'ๅฟƒ็นซ้ขจ', 
u'ๅฟƒ็ณป้ฆ™': u'ๅฟƒ็นซ้ฆ™', u'ๅฟƒ็ณป้ซ˜': u'ๅฟƒ็นซ้ซ˜', u'ๅฟƒ็ณป้บฆ': u'ๅฟƒ็นซ้บฅ', u'ๅฟƒ็ณป้ป„': u'ๅฟƒ็นซ้ปƒ', u'ๅฟƒ่„': u'ๅฟƒ่‡Ÿ', u'ๅฟƒ่ก': u'ๅฟƒ่•ฉ', u'ๅฟƒ่ฏ': u'ๅฟƒ่—ฅ', u'ๅฟƒ้‡Œ้ข': u'ๅฟƒ่ฃ้ข', u'ๅฟƒ้‡Œ': u'ๅฟƒ่ฃก', u'ๅฟƒ้•ฟๅ‘็Ÿญ': u'ๅฟƒ้•ท้ซฎ็Ÿญ', u'ๅฟƒไฝ™': u'ๅฟƒ้ค˜', u'ๅฟ…้กป': u'ๅฟ…้ ˆ', u'ๅฟ™ๅนถ': u'ๅฟ™ไฝต', u'ๅฟ™้‡Œ': u'ๅฟ™่ฃก', u'ๅฟ™้‡Œๅท้—ฒ': u'ๅฟ™่ฃกๅท้–’', u'ๅฟ ไบบไน‹ๆ‰˜': u'ๅฟ ไบบไน‹ๆ‰˜', u'ๅฟ ไป†': u'ๅฟ ๅƒ•', u'ๅฟ ไบŽ': u'ๅฟ ๆ–ผ', u'ๅฟซๅนฒ': u'ๅฟซไนพ', u'ๅฟซๅ…‹ๅˆถ': u'ๅฟซๅ‰‹ๅˆถ', u'ๅฟซๅฟซๅฝ“ๅฝ“': u'ๅฟซๅฟซ็•ถ็•ถ', u'ๅฟซๅ†ฒ': u'ๅฟซ่ก', u'ๆ€Žไนˆ': u'ๆ€Ž้บผ', u'ๆ€Žไนˆ็€': u'ๆ€Ž้บผ่‘—', u'ๆ€’ไบŽ': u'ๆ€’ๆ–ผ', u'ๆ€’ๅ‘ๅ†ฒๅ† ': u'ๆ€’้ซฎ่กๅ† ', u'ๆ€ๅฆ‚ๆณ‰ๆถŒ': u'ๆ€ๅฆ‚ๆณ‰ๆนง', u'ๆ€ ไบŽ': u'ๆ€ ๆ–ผ', u'ๆ€ฅไบŽ': u'ๆ€ฅๆ–ผ', u'ๆ€ฅๅ†ฒ่€Œไธ‹': u'ๆ€ฅ่ก่€Œไธ‹', u'ๆ€งๅพ': u'ๆ€งๅพต', u'ๆ€งๆฌฒ': u'ๆ€งๆ…พ', u'ๆ€ช้‡Œๆ€ชๆฐ”': u'ๆ€ช่ฃกๆ€ชๆฐฃ', u'ๆ€ซ้ƒ': u'ๆ€ซ้ฌฑ', u'ๆ‚ๆ —': u'ๆ‚ๆ…„', u'ๆ’็”ŸๆŒ‡ๆ•ฐ': u'ๆ’็”ŸๆŒ‡ๆ•ธ', u'ๆ’็”Ÿ่‚กไปทๆŒ‡ๆ•ฐ': u'ๆ’็”Ÿ่‚กๅƒนๆŒ‡ๆ•ธ', u'ๆ’็”Ÿ้“ถ่กŒ': u'ๆ’็”Ÿ้Š€่กŒ', u'ๆ•ไนไปทๅ‚ฌ': u'ๆ•ไนไปทๅ‚ฌ', u'ๆฏไบค็ปๆธธ': u'ๆฏไบค็ต•้Š', u'ๆฏ่ฐท': u'ๆฏ็ฉ€', u'ๆฐๆ‰': u'ๆฐ็บ”', u'ๆ‚่ฏ': u'ๆ‚่—ฅ', u'ๆ‚’้ƒ': u'ๆ‚’้ฌฑ', u'ๆ‚ ๆ‚ ่ก่ก': u'ๆ‚ ๆ‚ ่•ฉ่•ฉ', u'ๆ‚ ่ก': u'ๆ‚ ่•ฉ', u'ๆ‚ ๆธธ': u'ๆ‚ ้Š', u'ๆ‚จๅ…‹ๅˆถ': u'ๆ‚จๅ‰‹ๅˆถ', u'ๆ‚ฒ็ญ‘': u'ๆ‚ฒ็ญ‘', u'ๆ‚ฒ้ƒ': u'ๆ‚ฒ้ฌฑ', u'้—ท็€ๅคดๅ„ฟๅนฒ': u'ๆ‚ถ่‘—้ ญๅ…’ๅนน', u'ๆ‚ธๆ —': u'ๆ‚ธๆ…„', u'ๆƒ…ๆฌฒ': u'ๆƒ…ๆ…พ', u'ๆƒ‡ๆœด': u'ๆƒ‡ๆจธ', u'ๆถ็›ดไธ‘ๆญฃ': u'ๆƒก็›ด้†œๆญฃ', u'ๆถๆ–—': u'ๆƒก้ฌฅ', u'ๆƒณๅ…‹ๅˆถ': u'ๆƒณๅ‰‹ๅˆถ', u'ๆƒดๆ —': u'ๆƒดๆ…„', u'ๆ„ๅ ': u'ๆ„ไฝ”', u'ๆ„ๅ…‹ๅˆถ': u'ๆ„ๅ‰‹ๅˆถ', u'ๆ„ๅคงๅˆฉ้ข': u'ๆ„ๅคงๅˆฉ้บต', u'ๆ„้ข': u'ๆ„้บต', u'็ˆฑๅ›ฐ': u'ๆ„›็', u'ๆ„Ÿๅ†’่ฏ': u'ๆ„Ÿๅ†’่—ฅ', u'ๆ„ŸไบŽ': u'ๆ„Ÿๆ–ผ', u'ๆ„ฟๆœด': u'ๆ„ฟๆจธ', u'ๆ„ฟ่€Œๆญ': u'ๆ„ฟ่€Œๆญ', u'ๆ —ๅ†ฝ': u'ๆ…„ๅ†ฝ', u'ๆ —ๆ —': u'ๆ…„ๆ…„', u'ๆ…Œ้‡Œๆ…Œๅผ ': u'ๆ…Œ่ฃกๆ…Œๅผต', u'ๅบ†ๅŠ': u'ๆ…ถๅผ”', u'ๅบ†ๅކ': u'ๆ…ถๆ›†', u'ๅบ†ๅކๅฒ': u'ๆ…ถๆญทๅฒ', u'ๆฌฒไปคๆ™บๆ˜': u'ๆ…พไปคๆ™บๆ˜', u'ๆฌฒๅฃ‘้šพๅกซ': u'ๆ…พๅฃ‘้›ฃๅกซ', u'ๆฌฒๅฟต': 
u'ๆ…พๅฟต', u'ๆฌฒๆœ›': u'ๆ…พๆœ›', u'ๆฌฒๆตท': u'ๆ…พๆตท', u'ๆฌฒ็ซ': u'ๆ…พ็ซ', u'ๆฌฒ้šœ': u'ๆ…พ้šœ', u'ๅฟง้ƒ': u'ๆ†‚้ฌฑ', u'ๅ‡ญๅ‡ ': u'ๆ†‘ๅ‡ ', u'ๅ‡ญๅŠ': u'ๆ†‘ๅผ”', u'ๅ‡ญๆŠ˜': u'ๆ†‘ๆ‘บ', u'ๅ‡ญๅ‡†': u'ๆ†‘ๆบ–', u'ๅ‡ญๅ€Ÿ': u'ๆ†‘่—‰', u'ๅ‡ญๅ€Ÿ็€': u'ๆ†‘่—‰่‘—', u'ๆณๆ‰˜': u'ๆ‡‡่จ—', u'ๆ‡ˆๆพ': u'ๆ‡ˆ้ฌ†', u'ๅบ”ๅ…‹ๅˆถ': u'ๆ‡‰ๅ‰‹ๅˆถ', u'ๅบ”ๅพ': u'ๆ‡‰ๅพต', u'ๅบ”้’Ÿ': u'ๆ‡‰้˜', u'ๆ‡”ๆ —': u'ๆ‡ๆ…„', u'่’™ๆ‡‚': u'ๆ‡žๆ‡‚', u'่’™่’™ๆ‡‚ๆ‡‚': u'ๆ‡žๆ‡žๆ‡‚ๆ‡‚', u'่’™็›ด': u'ๆ‡ž็›ด', u'ๆƒฉๅฟฟ็ช’ๆฌฒ': u'ๆ‡ฒๅฟฟ็ช’ๆฌฒ', u'ๆ€€้‡Œ': u'ๆ‡ท่ฃก', u'ๆ€€่กจ': u'ๆ‡ท้Œถ', u'ๆ€€้’Ÿ': u'ๆ‡ท้˜', u'ๆ‚ฌๆŒ‚': u'ๆ‡ธๆŽ›', u'ๆ‚ฌๆข': u'ๆ‡ธๆจ‘', u'ๆ‚ฌ่‡‚ๆข': u'ๆ‡ธ่‡‚ๆจ‘', u'ๆ‚ฌ้’Ÿ': u'ๆ‡ธ้˜', u'ๆ‡ฟ่Œƒ': u'ๆ‡ฟ็ฏ„', u'ๆ‹ๆ‹ไธ่ˆ': u'ๆˆ€ๆˆ€ไธๆจ', u'ๆˆไบŽ': u'ๆˆๆ–ผ', u'ๆˆไบŽๆ€': u'ๆˆๆ–ผๆ€', u'ๆˆ่ฏ': u'ๆˆ่—ฅ', u'ๆˆ‘ๅ…‹ๅˆถ': u'ๆˆ‘ๅ‰‹ๅˆถ', u'ๆˆฌ่ฐท': u'ๆˆฉ็ฉ€', u'ๆˆชๅ‘': u'ๆˆช้ซฎ', u'ๆˆ˜ๅคฉๆ–—ๅœฐ': u'ๆˆฐๅคฉ้ฌฅๅœฐ', u'ๆˆ˜ๆ —': u'ๆˆฐๆ…„', u'ๆˆ˜ๆ–—': u'ๆˆฐ้ฌฅ', u'ๆˆๅฝฉๅจฑไบฒ': u'ๆˆฒ็ถตๅจ›่ฆช', u'ๆˆ้‡Œ': u'ๆˆฒ่ฃก', u'ๆˆด่กจ': u'ๆˆด้Œถ', u'ๆˆดๅ‘ๅซ้ฝฟ': u'ๆˆด้ซฎๅซ้ฝ’', u'ๆˆฟ้‡Œ': u'ๆˆฟ่ฃก', u'ๆ‰€ไบ‘': u'ๆ‰€ไบ‘', u'ๆ‰€ไบ‘ไบ‘': u'ๆ‰€ไบ‘ไบ‘', u'ๆ‰€ๅ ': u'ๆ‰€ไฝ”', u'ๆ‰€ๅ ๅœ': u'ๆ‰€ๅ ๅœ', u'ๆ‰€ๅ ๆ˜Ÿ': u'ๆ‰€ๅ ๆ˜Ÿ', u'ๆ‰€ๅ ็ฎ—': u'ๆ‰€ๅ ็ฎ—', u'ๆ‰€ๆ‰˜': u'ๆ‰€่จ—', u'ๆ‰ๆ‹Ÿ่ฐท็›—่™ซ': u'ๆ‰ๆ“ฌ็ฉ€็›œ่Ÿฒ', u'ๆ‰‹ๅกšๆฒป่™ซ': u'ๆ‰‹ๅกšๆฒป่™ซ', u'ๆ‰‹ๅ†ขๆฒป่™ซ': u'ๆ‰‹ๅกšๆฒป่™ซ', u'ๆ‰‹ๆŠ˜': u'ๆ‰‹ๆ‘บ', u'ๆ‰‹่กจๆ€': u'ๆ‰‹่กจๆ…‹', u'ๆ‰‹่กจๆ˜Ž': u'ๆ‰‹่กจๆ˜Ž', u'ๆ‰‹่กจๅ†ณ': u'ๆ‰‹่กจๆฑบ', u'ๆ‰‹่กจๆผ”': u'ๆ‰‹่กจๆผ”', u'ๆ‰‹่กจ็Žฐ': u'ๆ‰‹่กจ็พ', u'ๆ‰‹่กจ็คบ': u'ๆ‰‹่กจ็คบ', u'ๆ‰‹่กจ่พพ': u'ๆ‰‹่กจ้”', u'ๆ‰‹่กจ้œฒ': u'ๆ‰‹่กจ้œฒ', u'ๆ‰‹่กจ้ข': u'ๆ‰‹่กจ้ข', u'ๆ‰‹้‡Œ': u'ๆ‰‹่ฃก', u'ๆ‰‹่กจ': u'ๆ‰‹้Œถ', u'ๆ‰‹ๆพ': u'ๆ‰‹้ฌ†', u'ๆ‰ๅ…‹ๅˆถ': u'ๆ‰ๅ‰‹ๅˆถ', u'ๆ‰ๅนฒไผ‘': u'ๆ‰ๅนฒไผ‘', u'ๆ‰ๅนฒๆˆˆ': u'ๆ‰ๅนฒๆˆˆ', u'ๆ‰ๅนฒๆ‰ฐ': u'ๆ‰ๅนฒๆ“พ', u'ๆ‰ๅนฒๆ”ฟ': u'ๆ‰ๅนฒๆ”ฟ', u'ๆ‰ๅนฒๆถ‰': u'ๆ‰ๅนฒๆถ‰', u'ๆ‰ๅนฒ้ข„': u'ๆ‰ๅนฒ้ ', u'ๆ‰ๅนฒ': u'ๆ‰ๅนน', u'ๆ‰Žๅฅฝๅบ•ๅญ': u'ๆ‰Žๅฅฝๅบ•ๅญ', u'ๆ‰Žๅฅฝๆ น': u'ๆ‰Žๅฅฝๆ น', 
u'ๆ‰‘ไฝœๆ•™ๅˆ‘': u'ๆ‰‘ไฝœๆ•™ๅˆ‘', u'ๆ‰‘ๆ‰“': u'ๆ‰‘ๆ‰“', u'ๆ‰‘ๆŒž': u'ๆ‰‘ๆ’ป', u'ๆ‰“ๅนฒๅ“•': u'ๆ‰“ไนพๅ™ฆ', u'ๆ‰“ๅนถ': u'ๆ‰“ไฝต', u'ๆ‰“ๅ‡บๅŠๅ…ฅ': u'ๆ‰“ๅ‡บๅผ”ๅ…ฅ', u'ๆ‰“ๅก้’Ÿ': u'ๆ‰“ๅก้˜', u'ๆ‰“ๅจ': u'ๆ‰“ๅจ', u'ๆ‰“ๅนฒ': u'ๆ‰“ๅนน', u'ๆ‰“ๆ‹ผ': u'ๆ‰“ๆ‹š', u'ๆ‰“ๆ–ญๅ‘': u'ๆ‰“ๆ–ท็™ผ', u'ๆ‰“่ฐท': u'ๆ‰“็ฉ€', u'ๆ‰“็€้’Ÿ': u'ๆ‰“่‘—้˜', u'ๆ‰“่ทฏๅบ„ๆฟ': u'ๆ‰“่ทฏ่ŽŠๆฟ', u'ๆ‰“้’Ÿ': u'ๆ‰“้˜', u'ๆ‰“้ฃŽๅŽ': u'ๆ‰“้ขจๅพŒ', u'ๆ‰“ๆ–—': u'ๆ‰“้ฌฅ', u'ๆ‰˜็ฎกๅ›ฝ': u'ๆ‰˜็ฎกๅœ‹', u'ๆ‰›ๅคงๆข': u'ๆ‰›ๅคงๆจ‘', u'ๆ‰žๅพก': u'ๆ‰ž็ฆฆ', u'ๆ‰ฏ้ข': u'ๆ‰ฏ้บต', u'ๆ‰ถไฝ™ๅ›ฝ': u'ๆ‰ถ้ค˜ๅœ‹', u'ๆ‰นๅ‡†็š„': u'ๆ‰นๅ‡†็š„', u'ๆ‰นๅค': u'ๆ‰น่ค‡', u'ๆ‰นๆณจ': u'ๆ‰น่จป', u'ๆ‰นๆ–—': u'ๆ‰น้ฌฅ', u'ๆ‰ฟๅˆถ': u'ๆ‰ฟ่ฃฝ', u'ๆŠ‘ๅˆถไฝœ็”จ': u'ๆŠ‘ๅˆถไฝœ็”จ', u'ๆŠ‘้ƒ': u'ๆŠ‘้ฌฑ', u'ๆŠ“ๅฅธ': u'ๆŠ“ๅงฆ', u'ๆŠ“่ฏ': u'ๆŠ“่—ฅ', u'ๆŠ“ๆ–—': u'ๆŠ“้ฌฅ', u'ๆŠ•่ฏ': u'ๆŠ•่—ฅ', u'ๆŠ—็™Œ่ฏ': u'ๆŠ—็™Œ่—ฅ', u'ๆŠ—ๅพก': u'ๆŠ—็ฆฆ', u'ๆŠ—่ฏ': u'ๆŠ—่—ฅ', u'ๆŠ˜ๅ‘ๅพ€': u'ๆŠ˜ๅ‘ๅพ€', u'ๆŠ˜ๅญๆˆ': u'ๆŠ˜ๅญๆˆฒ', u'ๆŠ˜ๆˆŸๆฒˆๆฒณ': u'ๆŠ˜ๆˆŸๆฒˆๆฒณ', u'ๆŠ˜ๅ†ฒ': u'ๆŠ˜่ก', u'ๆŠซๆฆ›้‡‡ๅ…ฐ': u'ๆŠซๆฆ›ๆŽก่˜ญ', u'ๆŠซๅคดๆ•ฃๅ‘': u'ๆŠซ้ ญๆ•ฃ้ซฎ', u'ๆŠซๅ‘': u'ๆŠซ้ซฎ', u'ๆŠฑๆœด่€Œ้•ฟๅŸๅ…ฎ': u'ๆŠฑๆœด่€Œ้•ทๅŸๅ…ฎ', u'ๆŠฑ็ด ๆ€€ๆœด': u'ๆŠฑ็ด ๆ‡ทๆจธ', u'ๆŠตๅพก': u'ๆŠต็ฆฆ', u'ๆŠนๅนฒ': u'ๆŠนไนพ', u'ๆŠฝๅ…ฌ็ญพ': u'ๆŠฝๅ…ฌ็ฑค', u'ๆŠฝ็ญพ': u'ๆŠฝ็ฑค', u'ๆŠฟๅ‘': u'ๆŠฟ้ซฎ', u'ๆ‹‚้’Ÿๆ— ๅฃฐ': u'ๆ‹‚้˜็„ก่ฒ', u'ๆ‹†ไผ™': u'ๆ‹†ๅคฅ', u'ๆ‹ˆ้กป': u'ๆ‹ˆ้ฌš', u'ๆ‹‰ๅ…‹ๆ–ฝๅฐ”ๅพท้’Ÿ': u'ๆ‹‰ๅ…‹ๆ–ฝ็ˆพๅพท้˜', u'ๆ‹‰ๆ†': u'ๆ‹‰ๆ†', u'ๆ‹‰็บค': u'ๆ‹‰็ธด', u'ๆ‹‰้ขไธŠ': u'ๆ‹‰้ขไธŠ', u'ๆ‹‰้ขๅ…ท': u'ๆ‹‰้ขๅ…ท', u'ๆ‹‰้ขๅ‰': u'ๆ‹‰้ขๅ‰', u'ๆ‹‰้ขๅทพ': u'ๆ‹‰้ขๅทพ', u'ๆ‹‰้ขๆ— ': u'ๆ‹‰้ข็„ก', u'ๆ‹‰้ข็šฎ': u'ๆ‹‰้ข็šฎ', u'ๆ‹‰้ข็ฝฉ': u'ๆ‹‰้ข็ฝฉ', u'ๆ‹‰้ข่‰ฒ': u'ๆ‹‰้ข่‰ฒ', u'ๆ‹‰้ข้ƒจ': u'ๆ‹‰้ข้ƒจ', u'ๆ‹‰้ข': u'ๆ‹‰้บต', u'ๆ‹’ไบบไบŽ': u'ๆ‹’ไบบๆ–ผ', u'ๆ‹’ไบŽ': u'ๆ‹’ๆ–ผ', u'ๆ‹“ๆœด': u'ๆ‹“ๆจธ', u'ๆ‹”ๅ‘': u'ๆ‹”้ซฎ', u'ๆ‹”้กป': u'ๆ‹”้ฌš', u'ๆ‹—ๅˆซ': u'ๆ‹—ๅฝ†', u'ๆ‹˜ไบŽ': u'ๆ‹˜ๆ–ผ', u'ๆ‹™ไบŽ': u'ๆ‹™ๆ–ผ', u'ๆ‹™ๆœด': u'ๆ‹™ๆจธ', u'ๆ‹ผๅด': u'ๆ‹šๅป', u'ๆ‹ผๅ‘ฝ': u'ๆ‹šๅ‘ฝ', u'ๆ‹ผ่ˆ': 
u'ๆ‹šๆจ', u'ๆ‹ผๆญป': u'ๆ‹šๆญป', u'ๆ‹ผ็”Ÿๅฐฝๆญป': u'ๆ‹š็”Ÿ็›กๆญป', u'ๆ‹ผ็ป': u'ๆ‹š็ต•', u'ๆ‹ผ่€ๅ‘ฝ': u'ๆ‹š่€ๅ‘ฝ', u'ๆ‹ผๆ–—': u'ๆ‹š้ฌฅ', u'ๆ‹œๆ‰˜': u'ๆ‹œ่จ—', u'ๆ‹ฌๅ‘': u'ๆ‹ฌ้ซฎ', u'ๆ‹ญๅนฒ': u'ๆ‹ญไนพ', u'ๆ‹ฎๆฎ': u'ๆ‹ฎๆฎ', u'ๆ‹ผๆญปๆ‹ผๆดป': u'ๆ‹ผๆญปๆ‹ผๆดป', u'ๆ‹พๆฒˆ': u'ๆ‹พ็€‹', u'ๆ‹ฟไธ‹่กจ': u'ๆ‹ฟไธ‹้Œถ', u'ๆ‹ฟไธ‹้’Ÿ': u'ๆ‹ฟไธ‹้˜', u'ๆ‹ฟๅ‡†': u'ๆ‹ฟๆบ–', u'ๆ‹ฟ็ ดไป‘': u'ๆ‹ฟ็ ดๅด™', u'ๆŒ‚ๅ': u'ๆŒ‚ๅ', u'ๆŒ‚ๅ›พ': u'ๆŒ‚ๅœ–', u'ๆŒ‚ๅธ…': u'ๆŒ‚ๅธฅ', u'ๆŒ‚ๅฝฉ': u'ๆŒ‚ๅฝฉ', u'ๆŒ‚ๅฟต': u'ๆŒ‚ๅฟต', u'ๆŒ‚ๅท': u'ๆŒ‚่™Ÿ', u'ๆŒ‚่ฝฆ': u'ๆŒ‚่ปŠ', u'ๆŒ‚้ข': u'ๆŒ‚้ข', u'ๆŒ‡ๆ‰‹ๅˆ’่„š': u'ๆŒ‡ๆ‰‹ๅŠƒ่…ณ', u'ๆŒŒๆ–—': u'ๆŒŒ้ฌฅ', u'ๆŒ‘ๅคงๆข': u'ๆŒ‘ๅคงๆจ‘', u'ๆŒ‘ๆ–—': u'ๆŒ‘้ฌฅ', u'ๆŒฏ่ก': u'ๆŒฏ่•ฉ', u'ๆ†ๆ‰Ž': u'ๆ†็ดฎ', u'ๆ‰ๅฅธๅพ’': u'ๆ‰ๅฅธๅพ’', u'ๆ‰ๅฅธ็ป†': u'ๆ‰ๅฅธ็ดฐ', u'ๆ‰ๅฅธ่ดผ': u'ๆ‰ๅฅธ่ณŠ', u'ๆ‰ๅฅธๅ…š': u'ๆ‰ๅฅธ้ปจ', u'ๆ‰ๅฅธ': u'ๆ‰ๅงฆ', u'ๆ‰ๅ‘': u'ๆ‰้ซฎ', u'ๆๅพก': u'ๆ็ฆฆ', u'ๆ้ขไบบ': u'ๆ้บตไบบ', u'่ˆไธๅพ—': u'ๆจไธๅพ—', u'่ˆๅ‡บ': u'ๆจๅ‡บ', u'่ˆๅŽป': u'ๆจๅŽป', u'่ˆๅ‘ฝ': u'ๆจๅ‘ฝ', u'่ˆๅ •': u'ๆจๅขฎ', u'่ˆๅฎ‰ๅฐฑๅฑ': u'ๆจๅฎ‰ๅฐฑๅฑ', u'่ˆๅฎž': u'ๆจๅฏฆ', u'่ˆๅทฑไปŽไบบ': u'ๆจๅทฑๅพžไบบ', u'่ˆๅทฑๆ•‘ไบบ': u'ๆจๅทฑๆ•‘ไบบ', u'่ˆๅทฑไธบไบบ': u'ๆจๅทฑ็‚บไบบ', u'่ˆๅทฑไธบๅ…ฌ': u'ๆจๅทฑ็‚บๅ…ฌ', u'่ˆๅทฑไธบๅ›ฝ': u'ๆจๅทฑ็‚บๅœ‹', u'่ˆๅพ—': u'ๆจๅพ—', u'่ˆๆˆ‘ๅ…ถ่ฐ': u'ๆจๆˆ‘ๅ…ถ่ชฐ', u'่ˆๆœฌ้€ๆœซ': u'ๆจๆœฌ้€ๆœซ', u'่ˆๅผƒ': u'ๆจๆฃ„', u'่ˆๆญปๅฟ˜็”Ÿ': u'ๆจๆญปๅฟ˜็”Ÿ', u'่ˆ็”Ÿ': u'ๆจ็”Ÿ', u'่ˆ็Ÿญๅ–้•ฟ': u'ๆจ็Ÿญๅ–้•ท', u'่ˆ่บซ': u'ๆจ่บซ', u'่ˆ่ฝฆไฟๅธ…': u'ๆจ่ปŠไฟๅธฅ', u'่ˆ่ฟ‘ๆฑ‚่ฟœ': u'ๆจ่ฟ‘ๆฑ‚้ ', u'ๅทไฝ': u'ๆฒไฝ', u'ๅทๆฅ': u'ๆฒไพ†', u'ๅทๅ„ฟ': u'ๆฒๅ…’', u'ๅทๅ…ฅ': u'ๆฒๅ…ฅ', u'ๅทๅŠจ': u'ๆฒๅ‹•', u'ๅทๅŽป': u'ๆฒๅŽป', u'ๅทๅ›พ': u'ๆฒๅœ–', u'ๅทๅœŸ้‡ๆฅ': u'ๆฒๅœŸ้‡ไพ†', u'ๅทๅฐบ': u'ๆฒๅฐบ', u'ๅทๅฟƒ่œ': u'ๆฒๅฟƒ่œ', u'ๅทๆˆ': u'ๆฒๆˆ', u'ๅทๆ›ฒ': u'ๆฒๆ›ฒ', u'ๅทๆฌพ': u'ๆฒๆฌพ', u'ๅทๆฏ›': u'ๆฒๆฏ›', u'ๅท็ƒŸ': u'ๆฒ็…™', u'ๅท็ญ’': u'ๆฒ็ญ’', u'ๅทๅธ˜': u'ๆฒ็ฐพ', u'ๅท็บธ': u'ๆฒ็ด™', u'ๅท็ผฉ': u'ๆฒ็ธฎ', 
u'ๅท่ˆŒ': u'ๆฒ่ˆŒ', u'ๅท่ˆ–็›–': u'ๆฒ่ˆ–่“‹', u'ๅท่ธ': u'ๆฒ่ธ', u'ๅท่ข–': u'ๆฒ่ข–', u'ๅท่ตฐ': u'ๆฒ่ตฐ', u'ๅท่ตท': u'ๆฒ่ตท', u'ๅท่ฝด': u'ๆฒ่ปธ', u'ๅท้€ƒ': u'ๆฒ้€ƒ', u'ๅท้“บ็›–': u'ๆฒ้‹ช่“‹', u'ๅทไบ‘': u'ๆฒ้›ฒ', u'ๅท้ฃŽ': u'ๆฒ้ขจ', u'ๅทๅ‘': u'ๆฒ้ซฎ', u'ๆต้ข': u'ๆต้บต', u'ๆถ็‚ผ': u'ๆถ้Š', u'ๆ‰ซ่ก': u'ๆŽƒ่•ฉ', u'ๆŽŒๆŸœ': u'ๆŽŒๆŸœ', u'ๆŽ’้ชจ้ข': u'ๆŽ’้ชจ้บต', u'ๆŒ‚ๅธ˜': u'ๆŽ›ๅธ˜', u'ๆŒ‚ๅކ': u'ๆŽ›ๆ›†', u'ๆŒ‚้’ฉ': u'ๆŽ›้ˆŽ', u'ๆŒ‚้’Ÿ': u'ๆŽ›้˜', u'้‡‡ไธ‹': u'ๆŽกไธ‹', u'้‡‡ไผ': u'ๆŽกไผ', u'้‡‡ไฝ': u'ๆŽกไฝ', u'้‡‡ไฟก': u'ๆŽกไฟก', u'้‡‡ๅ…‰': u'ๆŽกๅ…‰', u'้‡‡ๅˆฐ': u'ๆŽกๅˆฐ', u'้‡‡ๅˆถ': u'ๆŽกๅˆถ', u'้‡‡ๅŒบ': u'ๆŽกๅ€', u'้‡‡ๅŽป': u'ๆŽกๅŽป', u'้‡‡ๅ–': u'ๆŽกๅ–', u'้‡‡ๅ›ž': u'ๆŽกๅ›ž', u'้‡‡ๅœจ': u'ๆŽกๅœจ', u'้‡‡ๅฅฝ': u'ๆŽกๅฅฝ', u'้‡‡ๅพ—': u'ๆŽกๅพ—', u'้‡‡ๆ‹พ': u'ๆŽกๆ‹พ', u'้‡‡ๆŒ–': u'ๆŽกๆŒ–', u'้‡‡ๆŽ˜': u'ๆŽกๆŽ˜', u'้‡‡ๆ‘˜': u'ๆŽกๆ‘˜', u'้‡‡ๆ‘ญ': u'ๆŽกๆ‘ญ', u'้‡‡ๆ‹ฉ': u'ๆŽกๆ“‡', u'้‡‡ๆ’ท': u'ๆŽกๆ“ท', u'้‡‡ๆ”ถ': u'ๆŽกๆ”ถ', u'้‡‡ๆ–™': u'ๆŽกๆ–™', u'้‡‡ๆš–': u'ๆŽกๆš–', u'้‡‡ๆก‘': u'ๆŽกๆก‘', u'้‡‡ๆ ท': u'ๆŽกๆจฃ', u'้‡‡ๆจตไบบ': u'ๆŽกๆจตไบบ', u'้‡‡ๆ ‘็ง': u'ๆŽกๆจน็จฎ', u'้‡‡ๆฐ”': u'ๆŽกๆฐฃ', u'้‡‡ๆฒน': u'ๆŽกๆฒน', u'้‡‡ไธบ': u'ๆŽก็‚บ', u'้‡‡็…ค': u'ๆŽก็…ค', u'้‡‡่Žท': u'ๆŽก็ฒ', u'้‡‡็ŒŽ': u'ๆŽก็ต', u'้‡‡็ ': u'ๆŽก็ ', u'้‡‡็”ŸๆŠ˜ๅ‰ฒ': u'ๆŽก็”ŸๆŠ˜ๅ‰ฒ', u'้‡‡็”จ': u'ๆŽก็”จ', u'้‡‡็š„': u'ๆŽก็š„', u'้‡‡็Ÿณ': u'ๆŽก็Ÿณ', u'้‡‡็ ‚ๅœบ': u'ๆŽก็ ‚ๅ ด', u'้‡‡็Ÿฟ': u'ๆŽก็คฆ', u'้‡‡็ง': u'ๆŽก็จฎ', u'้‡‡็ฉบๅŒบ': u'ๆŽก็ฉบๅ€', u'้‡‡็ฉบ้‡‡็ฉ—': u'ๆŽก็ฉบๆŽก็ฉ—', u'้‡‡็ด': u'ๆŽก็ด', u'้‡‡็บณ': u'ๆŽก็ด', u'้‡‡็ป™': u'ๆŽก็ตฆ', u'้‡‡่Šฑ': u'ๆŽก่Šฑ', u'้‡‡่Šนไบบ': u'ๆŽก่Šนไบบ', u'้‡‡่Œถ': u'ๆŽก่Œถ', u'้‡‡่Š': u'ๆŽก่Š', u'้‡‡่Žฒ': u'ๆŽก่“ฎ', u'้‡‡่–‡': u'ๆŽก่–‡', u'้‡‡่–ช': u'ๆŽก่–ช', u'้‡‡่ฏ': u'ๆŽก่—ฅ', u'้‡‡่กŒ': u'ๆŽก่กŒ', u'้‡‡่กฅ': u'ๆŽก่ฃœ', u'้‡‡่ฎฟ': u'ๆŽก่จช', u'้‡‡่ฏ': u'ๆŽก่ญ‰', u'้‡‡ไนฐ': u'ๆŽก่ฒท', u'้‡‡่ดญ': u'ๆŽก่ณผ', u'้‡‡ๅŠž': u'ๆŽก่พฆ', u'้‡‡่ฟ': u'ๆŽก้‹', u'้‡‡่ฟ‡': u'ๆŽก้Ž', u'้‡‡้€‰': u'ๆŽก้ธ', u'้‡‡้‡‘': u'ๆŽก้‡‘', u'้‡‡ๅฝ•': 
u'ๆŽก้Œ„', u'้‡‡้“': u'ๆŽก้ต', u'้‡‡้›†': u'ๆŽก้›†', u'้‡‡้ฃŽ': u'ๆŽก้ขจ', u'้‡‡้ฃŽ้—ฎไฟ—': u'ๆŽก้ขจๅ•ไฟ—', u'้‡‡้ฃŸ': u'ๆŽก้ฃŸ', u'้‡‡็›': u'ๆŽก้นฝ', u'ๆŽฃ็ญพ': u'ๆŽฃ็ฑค', u'ๆŽฅ็€่ฏด': u'ๆŽฅ่‘—่ชช', u'ๆŽงๅˆถ': u'ๆŽงๅˆถ', u'ๆŽจๆƒ…ๅ‡†็†': u'ๆŽจๆƒ…ๆบ–็†', u'ๆŽจๆ‰˜ไน‹่ฏ': u'ๆŽจๆ‰˜ไน‹่ฉž', u'ๆŽจ่ˆŸไบŽ้™†': u'ๆŽจ่ˆŸๆ–ผ้™ธ', u'ๆŽจๆ‰˜': u'ๆŽจ่จ—', u'ๆๅญๅนฒ': u'ๆๅญไนพ', u'ๆๅฟƒๅŠ่ƒ†': u'ๆๅฟƒๅผ”่†ฝ', u'ๆๆ‘ฉๅคชๅŽไนฆ': u'ๆๆ‘ฉๅคชๅพŒๆ›ธ', u'ๆ’ไบŽ': u'ๆ’ๆ–ผ', u'ๆข็ญพ': u'ๆ›็ฑค', u'ๆข่ฏ': u'ๆ›่—ฅ', u'ๆขๅช': u'ๆ›้šป', u'ๆขๅ‘': u'ๆ›้ซฎ', u'ๆกๅ‘': u'ๆก้ซฎ', u'ๆฉๅนฒ': u'ๆฉไนพ', u'ๆช้‡‡': u'ๆชๆŽก', u'ๆชๅ‘': u'ๆช้ซฎ', u'ๆช้กป': u'ๆช้ฌš', u'ๆญไธ‘': u'ๆญ้†œ', u'ๆŒฅๆ‰‹่กจ': u'ๆฎๆ‰‹่กจ', u'ๆŒฅๆ†': u'ๆฎๆ†', u'ๆ‹้ข': u'ๆ‹้บต', u'ๆŸไบŽ': u'ๆๆ–ผ', u'ๆๆ–—': u'ๆ้ฌฅ', u'ๆ‘‡ๆ‘‡่ก่ก': u'ๆ–ๆ–่•ฉ่•ฉ', u'ๆ‘‡่ก': u'ๆ–่•ฉ', u'ๆฃ้ฌผๅŠ็™ฝ': u'ๆ—้ฌผๅผ”็™ฝ', u'ๆค่‚ฎๆ‹Š่ƒŒ': u'ๆค่‚ฎๆ‹Š่ƒŒ', u'ๆฌๆ–—': u'ๆฌ้ฌฅ', u'ๆญๅนฒ้“บ': u'ๆญไนพ้‹ช', u'ๆญไผ™': u'ๆญๅคฅ', u'ๆŠขๅ ': u'ๆถไฝ”', u'ๆฝ่ฏ': u'ๆฝ่—ฅ', u'ๆ‘งๅš่Žทไธ‘': u'ๆ‘งๅ …็ฒ้†œ', u'ๆ‘ญ้‡‡': u'ๆ‘ญๆŽก', u'ๆ‘ธๆฃฑ': u'ๆ‘ธ็จœ', u'ๆ‘ธ้’Ÿ': u'ๆ‘ธ้˜', u'ๆŠ˜ๅˆ': u'ๆ‘บๅˆ', u'ๆŠ˜ๅฅ': u'ๆ‘บๅฅ', u'ๆŠ˜ๅญ': u'ๆ‘บๅญ', u'ๆŠ˜ๅฐบ': u'ๆ‘บๅฐบ', u'ๆŠ˜ๆ‰‡': u'ๆ‘บๆ‰‡', u'ๆŠ˜ๆขฏ': u'ๆ‘บๆขฏ', u'ๆŠ˜ๆค…': u'ๆ‘บๆค…', u'ๆŠ˜ๅ ': u'ๆ‘บ็–Š', u'ๆŠ˜็—•': u'ๆ‘บ็—•', u'ๆŠ˜็ฏท': u'ๆ‘บ็ฏท', u'ๆŠ˜็บธ': u'ๆ‘บ็ด™', u'ๆŠ˜่ฃ™': u'ๆ‘บ่ฃ™', u'ๆ’‡ๅŠ': u'ๆ’‡ๅผ”', u'ๆžๅนฒ': u'ๆ’ˆไนพ', u'ๆž้ข': u'ๆ’ˆ้บต', u'ๆ’š้กป': u'ๆ’š้ฌš', u'ๆ’ž็ƒๅฐ': u'ๆ’ž็ƒๆชฏ', u'ๆ’ž้’Ÿ': u'ๆ’ž้˜', u'ๆ’ž้˜ตๅ†ฒๅ†›': u'ๆ’ž้™ฃ่ก่ป', u'ๆ’คๅนถ': u'ๆ’คไฝต', u'ๆ‹จ่ฐท': u'ๆ’ฅ็ฉ€', u'ๆ’ฉๆ–—': u'ๆ’ฉ้ฌฅ', u'ๆ’ญไบŽ': u'ๆ’ญๆ–ผ', u'ๆ‰‘ๅ†ฌ': u'ๆ’ฒ้ผ•', u'ๆ‰‘ๅ†ฌๅ†ฌ': u'ๆ’ฒ้ผ•้ผ•', u'ๆ“€้ข': u'ๆ“€้บต', u'ๅ‡ปๆ‰‘': u'ๆ“Šๆ‰‘', u'ๅ‡ป้’Ÿ': u'ๆ“Š้˜', u'ๆ“ไฝœ้’Ÿ': u'ๆ“ไฝœ้˜', u'ๆ‹…ไป”้ข': u'ๆ“”ไป”้บต', u'ๆ‹…ๆ‹…้ข': u'ๆ“”ๆ“”้บต', u'ๆ‹…็€': u'ๆ“”่‘—', u'ๆ‹…่ดŸ็€': u'ๆ“”่ฒ ่‘—', u'ๆ“˜ๅˆ’': u'ๆ“˜ๅŠƒ', u'ๆฎไบ‘': u'ๆ“šไบ‘', u'ๆฎๅนฒ่€Œ็ชฅไบ•ๅบ•': 
u'ๆ“šๆฆฆ่€Œ็ชบไบ•ๅบ•', u'ๆ“ขๅ‘': u'ๆ“ข้ซฎ', u'ๆ“ฆๅนฒ': u'ๆ“ฆไนพ', u'ๆ“ฆๅนฒๅ‡€': u'ๆ“ฆไนพๆทจ', u'ๆ“ฆ่ฏ': u'ๆ“ฆ่—ฅ', u'ๆ‹งๅนฒ': u'ๆ“ฐไนพ', u'ๆ‘†้’Ÿ': u'ๆ“บ้˜', u'ๆ‘„ๅˆถ': u'ๆ”่ฃฝ', u'ๆ”ฏๅนฒ': u'ๆ”ฏๅนน', u'ๆ”ฏๆ†': u'ๆ”ฏๆ†', u'ๆ”ถ่Žท': u'ๆ”ถ็ฉซ', u'ๆ”นๅพ': u'ๆ”นๅพต', u'ๆ”ปๅ ': u'ๆ”ปไฝ”', u'ๆ”พ่’™ๆŒฃ': u'ๆ”พๆ‡žๆŽ™', u'ๆ”พ่ก': u'ๆ”พ่•ฉ', u'ๆ”พๆพ': u'ๆ”พ้ฌ†', u'ๆ•…ไบ‹้‡Œ': u'ๆ•…ไบ‹่ฃก', u'ๆ•…ไบ‘': u'ๆ•…ไบ‘', u'ๆ•ไบŽ': u'ๆ•ๆ–ผ', u'ๆ•‘่ฏ': u'ๆ•‘่—ฅ', u'่ดฅไบŽ': u'ๆ•—ๆ–ผ', u'ๅ™่ฏด็€': u'ๆ•˜่ชช่‘—', u'ๆ•™ๅญฆ้’Ÿ': u'ๆ•™ๅญธ้˜', u'ๆ•™ไบŽ': u'ๆ•™ๆ–ผ', u'ๆ•™่Œƒ': u'ๆ•™็ฏ„', u'ๆ•ขๅนฒ': u'ๆ•ขๅนน', u'ๆ•ขๆƒ…ๆฌฒ': u'ๆ•ขๆƒ…ๆฌฒ', u'ๆ•ขๆ–—ไบ†่ƒ†': u'ๆ•ขๆ–—ไบ†่†ฝ', u'ๆ•ฃไผ™': u'ๆ•ฃๅคฅ', u'ๆ•ฃไบŽ': u'ๆ•ฃๆ–ผ', u'ๆ•ฃ่ก': u'ๆ•ฃ่•ฉ', u'ๆ•ฆๆœด': u'ๆ•ฆๆจธ', u'ๆ•ฌๆŒฝ': u'ๆ•ฌ่ผ“', u'ๆ•ฒๆ‰‘': u'ๆ•ฒๆ‰‘', u'ๆ•ฒ้’Ÿ': u'ๆ•ฒ้˜', u'ๆ•ดๅบ„': u'ๆ•ด่ŽŠ', u'ๆ•ดๅช': u'ๆ•ด้šป', u'ๆ•ด้ฃŽๅŽ': u'ๆ•ด้ขจๅพŒ', u'ๆ•ดๅ‘็”จๅ“': u'ๆ•ด้ซฎ็”จๅ“', u'ๆ•ŒๅฟพๅŒไป‡': u'ๆ•ตๆ„พๅŒ่ฎŽ', u'ๆ•ท่ฏ': u'ๆ•ท่—ฅ', u'ๆ•ฐๅคฉๅŽ': u'ๆ•ธๅคฉๅพŒ', u'ๆ•ฐๅญ—้’Ÿ': u'ๆ•ธๅญ—้˜', u'ๆ•ฐๅญ—้’Ÿ่กจ': u'ๆ•ธๅญ—้˜้Œถ', u'ๆ•ฐ็ฝชๅนถ็ฝš': u'ๆ•ธ็ฝชไฝต็ฝฐ', u'ๆ•ฐไธŽ่™็กฎ': u'ๆ•ธ่ˆ‡่™œ็กฎ', u'ๆ–‡ไธ‘': u'ๆ–‡ไธ‘', u'ๆ–‡ๆฑ‡ๆŠฅ': u'ๆ–‡ๅŒฏๅ ฑ', u'ๆ–‡ๅพๆ˜Ž': u'ๆ–‡ๅพตๆ˜Ž', u'ๆ–‡ๆ€ๆณ‰ๆถŒ': u'ๆ–‡ๆ€ๆณ‰ๆนง', u'ๆ–‡้‡‡้ƒ้ƒ': u'ๆ–‡้‡‡้ƒ้ƒ', u'ๆ–—่ฝฌๅ‚ๆจช': u'ๆ–—่ฝ‰ๅƒๆฉซ', u'ๆ–ซ้›•ไธบๆœด': u'ๆ–ซ้›•็‚บๆจธ', u'ๆ–ฐๅކ': u'ๆ–ฐๆ›†', u'ๆ–ฐๅކๅฒ': u'ๆ–ฐๆญทๅฒ', u'ๆ–ฐๆ‰Ž': u'ๆ–ฐ็ดฎ', u'ๆ–ฐๅบ„': u'ๆ–ฐ่ŽŠ', u'ๆ–ฐๅบ„ๅธ‚': u'ๆ–ฐ่ŽŠๅธ‚', u'ๆ–ฒ้›•ไธบๆœด': u'ๆ–ฒ้›•็‚บๆจธ', u'ๆ–ญๅ‘': u'ๆ–ท้ซฎ', u'ๆ–ญๅ‘ๆ–‡่บซ': u'ๆ–ท้ซฎๆ–‡่บซ', u'ๆ–นไพฟ้ข': u'ๆ–นไพฟ้บต', u'ๆ–นๅ‡ ': u'ๆ–นๅ‡ ', u'ๆ–นๅ‘ๅพ€': u'ๆ–นๅ‘ๅพ€', u'ๆ–นๅฟ—': u'ๆ–น่ชŒ', u'ๆ–น้ข': u'ๆ–น้ข', u'ไบŽ0': u'ๆ–ผ0', u'ไบŽ1': u'ๆ–ผ1', u'ไบŽ2': u'ๆ–ผ2', u'ไบŽ3': u'ๆ–ผ3', u'ไบŽ4': u'ๆ–ผ4', u'ไบŽ5': u'ๆ–ผ5', u'ไบŽ6': u'ๆ–ผ6', u'ไบŽ7': u'ๆ–ผ7', u'ไบŽ8': u'ๆ–ผ8', u'ไบŽ9': u'ๆ–ผ9', u'ไบŽไธ€': u'ๆ–ผไธ€', u'ไบŽไธ€ๅฝน': u'ๆ–ผไธ€ๅฝน', u'ไบŽไธƒ': u'ๆ–ผไธƒ', u'ไบŽไธ‰': u'ๆ–ผไธ‰', u'ไบŽไธ–': u'ๆ–ผไธ–', u'ไบŽไน‹': 
u'ๆ–ผไน‹', u'ไบŽไนŽ': u'ๆ–ผไนŽ', u'ไบŽไน': u'ๆ–ผไน', u'ไบŽไบ‹': u'ๆ–ผไบ‹', u'ไบŽไบŒ': u'ๆ–ผไบŒ', u'ไบŽไบ”': u'ๆ–ผไบ”', u'ไบŽไบบ': u'ๆ–ผไบบ', u'ไบŽไปŠ': u'ๆ–ผไปŠ', u'ไบŽไป–': u'ๆ–ผไป–', u'ไบŽไผ': u'ๆ–ผไผ', u'ไบŽไฝ•': u'ๆ–ผไฝ•', u'ไบŽไฝ ': u'ๆ–ผไฝ ', u'ไบŽๅ…ซ': u'ๆ–ผๅ…ซ', u'ไบŽๅ…ญ': u'ๆ–ผๅ…ญ', u'ไบŽๅ…‹ๅˆถ': u'ๆ–ผๅ‰‹ๅˆถ', u'ไบŽๅ‰': u'ๆ–ผๅ‰', u'ไบŽๅŠฃ': u'ๆ–ผๅŠฃ', u'ไบŽๅ‹ค': u'ๆ–ผๅ‹ค', u'ไบŽๅ': u'ๆ–ผๅ', u'ไบŽๅŠ': u'ๆ–ผๅŠ', u'ไบŽๅ‘ผๅ“€ๅ“‰': u'ๆ–ผๅ‘ผๅ“€ๅ“‰', u'ไบŽๅ››': u'ๆ–ผๅ››', u'ไบŽๅ›ฝ': u'ๆ–ผๅœ‹', u'ไบŽๅ': u'ๆ–ผๅ', u'ไบŽๅž‚': u'ๆ–ผๅž‚', u'ไบŽๅคซ็ฝ—': u'ๆ–ผๅคซ็พ…', u'ๆ–ผๅคซ็ฝ—': u'ๆ–ผๅคซ็พ…', u'ๆ–ผๅคซ็พ…': u'ๆ–ผๅคซ็พ…', u'ไบŽๅฅน': u'ๆ–ผๅฅน', u'ไบŽๅฅฝ': u'ๆ–ผๅฅฝ', u'ไบŽๅง‹': u'ๆ–ผๅง‹', u'ๆ–ผๅง“': u'ๆ–ผๅง“', u'ไบŽๅฎƒ': u'ๆ–ผๅฎƒ', u'ไบŽๅฎถ': u'ๆ–ผๅฎถ', u'ไบŽๅฏ†': u'ๆ–ผๅฏ†', u'ไบŽๅทฎ': u'ๆ–ผๅทฎ', u'ไบŽๅทฑ': u'ๆ–ผๅทฑ', u'ไบŽๅธ‚': u'ๆ–ผๅธ‚', u'ไบŽๅน•': u'ๆ–ผๅน•', u'ไบŽๅผฑ': u'ๆ–ผๅผฑ', u'ไบŽๅผบ': u'ๆ–ผๅผท', u'ไบŽๅŽ': u'ๆ–ผๅพŒ', u'ไบŽๅพ': u'ๆ–ผๅพต', u'ไบŽๅฟƒ': u'ๆ–ผๅฟƒ', u'ไบŽๆ€€': u'ๆ–ผๆ‡ท', u'ไบŽๆˆ‘': u'ๆ–ผๆˆ‘', u'ไบŽๆˆ': u'ๆ–ผๆˆฒ', u'ไบŽๆ•': u'ๆ–ผๆ•', u'ไบŽๆ–ฏ': u'ๆ–ผๆ–ฏ', u'ไบŽๆ˜ฏ': u'ๆ–ผๆ˜ฏ', u'ไบŽๆ˜ฏไนŽ': u'ๆ–ผๆ˜ฏไนŽ', u'ไบŽๆ—ถ': u'ๆ–ผๆ™‚', u'ไบŽๆขจๅŽ': u'ๆ–ผๆขจ่ฏ', u'ๆ–ผๆขจ่ฏ': u'ๆ–ผๆขจ่ฏ', u'ไบŽไน': u'ๆ–ผๆจ‚', u'ไบŽๆญค': u'ๆ–ผๆญค', u'ๆ–ผๆฐ': u'ๆ–ผๆฐ', u'ไบŽๆฐ‘': u'ๆ–ผๆฐ‘', u'ไบŽๆฐด': u'ๆ–ผๆฐด', u'ไบŽๆณ•': u'ๆ–ผๆณ•', u'ไบŽๆฝœๅŽฟ': u'ๆ–ผๆฝ›็ธฃ', u'ไบŽ็ซ': u'ๆ–ผ็ซ', u'ไบŽ็„‰': u'ๆ–ผ็„‰', u'ไบŽๅข™': u'ๆ–ผ็‰†', u'ไบŽ็‰ฉ': u'ๆ–ผ็‰ฉ', u'ไบŽๆฏ•': u'ๆ–ผ็•ข', u'ไบŽๅฐฝ': u'ๆ–ผ็›ก', u'ไบŽ็›ฒ': u'ๆ–ผ็›ฒ', u'ไบŽ็ฅ‚': u'ๆ–ผ็ฅ‚', u'ไบŽ็ฉ†': u'ๆ–ผ็ฉ†', u'ไบŽ็ปˆ': u'ๆ–ผ็ต‚', u'ไบŽ็พŽ': u'ๆ–ผ็พŽ', u'ไบŽ่‰ฒ': u'ๆ–ผ่‰ฒ', u'ไบŽ่Ÿ': u'ๆ–ผ่Ÿ', u'ไบŽ่“': u'ๆ–ผ่—', u'ไบŽ่กŒ': u'ๆ–ผ่กŒ', u'ไบŽ่กท': u'ๆ–ผ่กท', u'ไบŽ่ฏฅ': u'ๆ–ผ่ฉฒ', u'ไบŽๅ†œ': u'ๆ–ผ่พฒ', u'ไบŽ้€”': u'ๆ–ผ้€”', u'ไบŽ่ฟ‡': u'ๆ–ผ้Ž', u'ไบŽ้‚‘': u'ๆ–ผ้‚‘', u'ไบŽไธ‘': u'ๆ–ผ้†œ', u'ไบŽ้‡Ž': u'ๆ–ผ้‡Ž', u'ไบŽ้™†': u'ๆ–ผ้™ธ', u'ไบŽ๏ผ': u'ๆ–ผ๏ผ', u'ไบŽ๏ผ‘': u'ๆ–ผ๏ผ‘', u'ไบŽ๏ผ’': u'ๆ–ผ๏ผ’', u'ไบŽ๏ผ“': 
u'ๆ–ผ๏ผ“', u'ไบŽ๏ผ”': u'ๆ–ผ๏ผ”', u'ไบŽ๏ผ•': u'ๆ–ผ๏ผ•', u'ไบŽ๏ผ–': u'ๆ–ผ๏ผ–', u'ไบŽ๏ผ—': u'ๆ–ผ๏ผ—', u'ไบŽ๏ผ˜': u'ๆ–ผ๏ผ˜', u'ไบŽ๏ผ™': u'ๆ–ผ๏ผ™', u'ๆ–ฝ่ˆ': u'ๆ–ฝๆจ', u'ๆ–ฝไบŽ': u'ๆ–ฝๆ–ผ', u'ๆ–ฝ่ˆไน‹้“': u'ๆ–ฝ่ˆไน‹้“', u'ๆ–ฝ่ฏ': u'ๆ–ฝ่—ฅ', u'ๆ—ๅพๅšๅผ•': u'ๆ—ๅพตๅšๅผ•', u'ๆ—ๆณจ': u'ๆ—่จป', u'ๆ—…ๆธธ': u'ๆ—…้Š', u'ๆ—‹ๅนฒ่ฝฌๅค': u'ๆ—‹ไนพ่ฝ‰ๅค', u'ๆ—‹็ป•็€': u'ๆ—‹็นž่‘—', u'ๆ—‹ๅ›ž': u'ๆ—‹่ฟด', u'ๆ—้‡Œ': u'ๆ—่ฃก', u'ๆ——ๆ†': u'ๆ——ๆ†', u'ๆ—ฅๅ ': u'ๆ—ฅไฝ”', u'ๆ—ฅๅญ้‡Œ': u'ๆ—ฅๅญ่ฃก', u'ๆ—ฅๆ™’': u'ๆ—ฅๆ™’', u'ๆ—ฅๅކ': u'ๆ—ฅๆ›†', u'ๆ—ฅๅކๅฒ': u'ๆ—ฅๆญทๅฒ', u'ๆ—ฅๅฟ—': u'ๆ—ฅ่ชŒ', u'ๆ—ฉไบŽ': u'ๆ—ฉๆ–ผ', u'ๆ—ฑๅนฒ': u'ๆ—ฑไนพ', u'ๆ˜†ไป‘ๅฑฑ': u'ๆ˜†ๅด™ๅฑฑ', u'ๅ‡ๅนณ': u'ๆ˜‡ๅนณ', u'ๅ‡้˜ณ': u'ๆ˜‡้™ฝ', u'ๆ˜ŠๅคฉไธๅŠ': u'ๆ˜Šๅคฉไธๅผ”', u'ๆ˜Žๅพ': u'ๆ˜Žๅพต', u'ๆ˜Ž็›ฎๅผ ่ƒ†': u'ๆ˜Ž็›ฎๅผต่ƒ†', u'ๆ˜Ž็ช—ๅ‡€ๅ‡ ': u'ๆ˜Ž็ช—ๆทจๅ‡ ', u'ๆ˜Ž่Œƒ': u'ๆ˜Ž็ฏ„', u'ๆ˜Ž้‡Œ': u'ๆ˜Ž่ฃก', u'ๆ˜“ๅ…‹ๅˆถ': u'ๆ˜“ๅ‰‹ๅˆถ', u'ๆ˜“ไบŽ': u'ๆ˜“ๆ–ผ', u'ๆ˜Ÿๅทดๅ…‹': u'ๆ˜Ÿๅทดๅ…‹', u'ๆ˜Ÿๅކ': u'ๆ˜Ÿๆ›†', u'ๆ˜ŸๆœŸๅŽ': u'ๆ˜ŸๆœŸๅพŒ', u'ๆ˜Ÿๅކๅฒ': u'ๆ˜Ÿๆญทๅฒ', u'ๆ˜Ÿ่พฐ่กจ': u'ๆ˜Ÿ่พฐ้Œถ', u'ๆ˜ฅๅ‡้‡Œ': u'ๆ˜ฅๅ‡่ฃก', u'ๆ˜ฅๅคฉ้‡Œ': u'ๆ˜ฅๅคฉ่ฃก', u'ๆ˜ฅๆ—ฅ้‡Œ': u'ๆ˜ฅๆ—ฅ่ฃก', u'ๆ˜ฅ่ฏ': u'ๆ˜ฅ่—ฅ', u'ๆ˜ฅๆธธ': u'ๆ˜ฅ้Š', u'ๆ˜ฅ้ฆ™ๆ–—ๅญฆ': u'ๆ˜ฅ้ฆ™้ฌฅๅญธ', u'ๆ—ถ้’Ÿ': u'ๆ™‚้˜', u'ๆ—ถ้—ด้‡Œ': u'ๆ™‚้–“่ฃก', u'ๆ™ƒ่ก': u'ๆ™ƒ่•ฉ', u'ๆ™‹ๅ‡': u'ๆ™‰้™ž', u'ๆ™’ๅนฒ': u'ๆ™’ไนพ', u'ๆ™’ไผค': u'ๆ™’ๅ‚ท', u'ๆ™’ๅ›พ': u'ๆ™’ๅœ–', u'ๆ™’ๅ›พ็บธ': u'ๆ™’ๅœ–็ด™', u'ๆ™’ๆˆ': u'ๆ™’ๆˆ', u'ๆ™’ๆ™’': u'ๆ™’ๆ™’', u'ๆ™’็ƒŸ': u'ๆ™’็…™', u'ๆ™’็ง': u'ๆ™’็จฎ', u'ๆ™’่กฃ': u'ๆ™’่กฃ', u'ๆ™’้ป‘': u'ๆ™’้ป‘', u'ๆ™šไบŽ': u'ๆ™šๆ–ผ', u'ๆ™š้’Ÿ': u'ๆ™š้˜', u'ๆ™žๅ‘': u'ๆ™ž้ซฎ', u'ๆ™จ้’Ÿ': u'ๆ™จ้˜', u'ๆ™ฎๅ†ฌๅ†ฌ': u'ๆ™ฎ้ผ•้ผ•', u'ๆ™ฏ่‡ด': u'ๆ™ฏ็ทป', u'ๆ™พๅนฒ': u'ๆ™พไนพ', u'ๆ™•่ˆน่ฏ': u'ๆšˆ่ˆน่—ฅ', u'ๆ™•่ฝฆ่ฏ': u'ๆšˆ่ปŠ่—ฅ', u'ๆš‘ๅ‡้‡Œ': u'ๆš‘ๅ‡่ฃก', u'ๆš—ๅœฐ้‡Œ': u'ๆš—ๅœฐ่ฃก', u'ๆš—ๆฒŸ้‡Œ': u'ๆš—ๆบ่ฃก', u'ๆš—้‡Œ': u'ๆš—่ฃก', u'ๆš—ๆ–—': u'ๆš—้ฌฅ', u'็•…ๆธธ': u'ๆšข้Š', u'ๆšดๆ•›ๆจชๅพ': u'ๆšดๆ–‚ๆฉซๅพต', u'ๆšดๆ™’': u'ๆšดๆ™’', u'ๅކๅ…ƒ': u'ๆ›†ๅ…ƒ', u'ๅކๅ‘ฝ': 
u'ๆ›†ๅ‘ฝ', u'ๅކๅง‹': u'ๆ›†ๅง‹', u'ๅކๅฎค': u'ๆ›†ๅฎค', u'ๅކๅฐพ': u'ๆ›†ๅฐพ', u'ๅކๆ•ฐ': u'ๆ›†ๆ•ธ', u'ๅކๆ—ฅ': u'ๆ›†ๆ—ฅ', u'ๅކไนฆ': u'ๆ›†ๆ›ธ', u'ๅކๆœฌ': u'ๆ›†ๆœฌ', u'ๅކๆณ•': u'ๆ›†ๆณ•', u'ๅކ็‹ฑ': u'ๆ›†็„', u'ๅކ็บช': u'ๆ›†็ด€', u'ๅކ่ฑก': u'ๆ›†่ฑก', u'ๆ›ๆ™’': u'ๆ›ๆ™’', u'ๆ™’่ฐท': u'ๆ›ฌ็ฉ€', u'ๆ›ฐไบ‘': u'ๆ›ฐไบ‘', u'ๆ›ดไป†้šพๆ•ฐ': u'ๆ›ดๅƒ•้›ฃๆ•ธ', u'ๆ›ด็ญพ': u'ๆ›ด็ฑค', u'ๆ›ด้’Ÿ': u'ๆ›ด้˜', u'ไนฆๅ‘†ๅญ': u'ๆ›ธ็ƒๅญ', u'ไนฆ็ญพ': u'ๆ›ธ็ฑค', u'ๆ›ผ่ฐทไบบ': u'ๆ›ผ่ฐทไบบ', u'ๆ›พๆœด': u'ๆ›พๆจธ', u'ๆœ€ๅคš': u'ๆœ€ๅคš', u'ไผšไธŠ็ญพ็ฝฒ': u'ๆœƒไธŠ็ฐฝ็ฝฒ', u'ไผšไธŠ็ญพ่ฎข': u'ๆœƒไธŠ็ฐฝ่จ‚', u'ไผšๅ ': u'ๆœƒไฝ”', u'ไผšๅ ๅœ': u'ๆœƒๅ ๅœ', u'ไผšๅนฒๆ‰ฐ': u'ๆœƒๅนฒๆ“พ', u'ๆœƒๅนฒๆ“พ': u'ๆœƒๅนฒๆ“พ', u'ไผšๅนฒ': u'ๆœƒๅนน', u'ไผšๅŠ': u'ๆœƒๅผ”', u'ไผš้‡Œ': u'ๆœƒ่ฃก', u'ๆœˆๅކ': u'ๆœˆๆ›†', u'ๆœˆๅކๅฒ': u'ๆœˆๆญทๅฒ', u'ๆœˆ็ฆปไบŽๆฏ•': u'ๆœˆ้›ขๆ–ผ็•ข', u'ๆœˆ้ข': u'ๆœˆ้ข', u'ๆœˆไธฝไบŽ็ฎ•': u'ๆœˆ้บ—ๆ–ผ็ฎ•', u'ๆœ‰ไบ‹ไน‹ๆ— ่Œƒ': u'ๆœ‰ไบ‹ไน‹็„ก็ฏ„', u'ๆœ‰ไป†': u'ๆœ‰ๅƒ•', u'ๆœ‰ๅชไธ': u'ๆœ‰ๅชไธ', u'ๆœ‰ๅชๅ…': u'ๆœ‰ๅชๅ…', u'ๆœ‰ๅชๅฎน': u'ๆœ‰ๅชๅฎน', u'ๆœ‰ๅชๆŽก': u'ๆœ‰ๅชๆŽก', u'ๆœ‰ๅช้‡‡': u'ๆœ‰ๅชๆŽก', u'ๆœ‰ๅชๆ˜ฏ': u'ๆœ‰ๅชๆ˜ฏ', u'ๆœ‰ๅช็”จ': u'ๆœ‰ๅช็”จ', u'ๆœ‰ๅคŸ่ตž': u'ๆœ‰ๅค ่ฎš', u'ๆœ‰ๅพไผ': u'ๆœ‰ๅพไผ', u'ๆœ‰ๅพๆˆ˜': u'ๆœ‰ๅพๆˆฐ', u'ๆœ‰ๅพๆœ': u'ๆœ‰ๅพๆœ', u'ๆœ‰ๅพ่ฎจ': u'ๆœ‰ๅพ่จŽ', u'ๆœ‰ๅพ': u'ๆœ‰ๅพต', u'ๆœ‰ๆ’่ก—': u'ๆœ‰ๆ’่ก—', u'ๆœ‰ๆ –ๅท': u'ๆœ‰ๆ –ๅท', u'ๆœ‰ๅ‡†': u'ๆœ‰ๆบ–', u'ๆœ‰ๆฃฑๆœ‰่ง’': u'ๆœ‰็จœๆœ‰่ง’', u'ๆœ‰ๅช': u'ๆœ‰้šป', u'ๆœ‰ไฝ™': u'ๆœ‰้ค˜', u'ๆœ‰ๅ‘ๅคด้™€ๅฏบ': u'ๆœ‰้ซฎ้ ญ้™€ๅฏบ', u'ๆœไบŽ': u'ๆœๆ–ผ', u'ๆœ่ฏ': u'ๆœ่—ฅ', u'ๆœ›ไบ†ๆœ›': u'ๆœ›ไบ†ๆœ›', u'ๆœ›ๅŽ็Ÿณ': u'ๆœ›ๅŽ็Ÿณ', u'ๆœ›็€่กจ': u'ๆœ›่‘—้Œถ', u'ๆœ›็€้’Ÿ': u'ๆœ›่‘—้˜', u'ๆœ›็€้’Ÿ่กจ': u'ๆœ›่‘—้˜้Œถ', u'ๆœไนพๅค•ๆƒ•': u'ๆœไนพๅค•ๆƒ•', u'ๆœ้’Ÿ': u'ๆœ้˜', u'ๆœฆ่ƒง': u'ๆœฆๆœง', u'่’™่ƒง': u'ๆœฆๆœง', u'ๆœจๅถๆˆๆ‰Ž': u'ๆœจๅถๆˆฒ็ดฎ', u'ๆœจๆ†': u'ๆœจๆ†', u'ๆœจๆๅนฒ้ฆ': u'ๆœจๆไนพ้คพ', u'ๆœจๆข': u'ๆœจๆจ‘', u'ๆœจๅˆถ': u'ๆœจ่ฃฝ', u'ๆœจ้’Ÿ': u'ๆœจ้˜', u'ๆœชๅนฒ': u'ๆœชไนพ', u'ๆœซ่ฏ': u'ๆœซ่—ฅ', 
u'ๆœฌๅพ': u'ๆœฌๅพต', u'ๆœฏ่ตค': u'ๆœฎ่ตค', u'ๆœฑไป‘่ก—': u'ๆœฑๅด™่ก—', u'ๆœฑๅบ†ไฝ™': u'ๆœฑๆ…ถ้ค˜', u'ๆœฑ็†ๅฎ‰ๅކ': u'ๆœฑ็†ๅฎ‰ๆ›†', u'ๆœฑ็†ๅฎ‰ๅކๅฒ': u'ๆœฑ็†ๅฎ‰ๆญทๅฒ', u'ๆ†ๅญ': u'ๆ†ๅญ', u'ๆŽ้€ฃๆฐ': u'ๆŽ้€ฃๆฐ', u'ๆŽ่ฟžๆฐ': u'ๆŽ้€ฃๆฐ', u'ๆๅนฒ': u'ๆๅนน', u'ๆ‘ๅญ้‡Œ': u'ๆ‘ๅญ่ฃก', u'ๆ‘ๅบ„': u'ๆ‘่ŽŠ', u'ๆ‘่ฝๅ‘': u'ๆ‘่ฝ็™ผ', u'ๆ‘้‡Œ': u'ๆ‘่ฃก', u'ๆœ่€ๅฟ—้“': u'ๆœ่€่ชŒ้“', u'ๆžๅฎ‹ๆ— ๅพ': u'ๆžๅฎ‹็„กๅพต', u'ๆŸๅ‘': u'ๆŸ้ซฎ', u'ๆฏๅนฒ': u'ๆฏไนพ', u'ๆฏ้ข': u'ๆฏ้บต', u'ๆฐไผฆ': u'ๆฐๅ€ซ', u'ๆฐ็‰น': u'ๆฐ็‰น', u'ไธœๅ‘จ้’Ÿ': u'ๆฑๅ‘จ้˜', u'ไธœๅฒณ': u'ๆฑๅถฝ', u'ไธœๅ†ฒ่ฅฟ็ช': u'ๆฑ่ก่ฅฟ็ช', u'ไธœๆธธ': u'ๆฑ้Š', u'ๆพๅฑฑๅบ„': u'ๆพๅฑฑๅบ„', u'ๆฟ็€่„ธ': u'ๆฟ่‘—่‡‰', u'ๆฟ่ก': u'ๆฟ่•ฉ', u'ๆž—ๅฎๅฒณ': u'ๆž—ๅฎๅถฝ', u'ๆž—้ƒๆ–น': u'ๆž—้ƒๆ–น', u'ๆž—้’Ÿ': u'ๆž—้˜', u'ๆžœๅนฒ': u'ๆžœไนพ', u'ๆžœๅญๅนฒ': u'ๆžœๅญไนพ', u'ๆžไธๅพ—ๅคงไบŽๅนฒ': u'ๆžไธๅพ—ๅคงๆ–ผๆฆฆ', u'ๆžๅนฒ': u'ๆžๅนน', u'ๆžฏๅนฒ': u'ๆžฏไนพ', u'ๅฐๅކ': u'ๆžฑๆ›†', u'ๆžถ้’Ÿ': u'ๆžถ้˜', u'ๆŸๅช': u'ๆŸ้šป', u'ๆŸ“ๆŒ‡ไบŽ': u'ๆŸ“ๆŒ‡ๆ–ผ', u'ๆŸ“ๆฎฟๅŽ': u'ๆŸ“ๆฎฟๅŽ', u'ๆŸ“ๅ‘': u'ๆŸ“้ซฎ', u'ๆŸœไธŠ': u'ๆŸœไธŠ', u'ๆŸœๅญ': u'ๆŸœๅญ', u'ๆŸœๆŸณ': u'ๆŸœๆŸณ', u'ๆŸฑๆข': u'ๆŸฑๆจ‘', u'ๆŸณ่ฏ’ๅพ': u'ๆŸณ่ฉ’ๅพต', u'ๆ –ๆ –็š‡็š‡': u'ๆ –ๆ –็š‡็š‡', u'ๆ กๅ‡†': u'ๆ กๆบ–', u'ๆ กไป‡': u'ๆ ก่ฎŽ', u'ๆ ธๅ‡†็š„': u'ๆ ธๅ‡†็š„', u'ๆ ผไบŽ': u'ๆ ผๆ–ผ', u'ๆ ผ่Œƒ': u'ๆ ผ็ฏ„', u'ๆ ผ้‡Œๅކ': u'ๆ ผ้‡Œๆ›†', u'ๆ ผ้‡Œ้ซ˜ๅˆฉๅކ': u'ๆ ผ้‡Œ้ซ˜ๅˆฉๆ›†', u'ๆ ผๆ–—': u'ๆ ผ้ฌฅ', u'ๆก‚ๅœ†ๅนฒ': u'ๆก‚ๅœ“ไนพ', u'ๆก…ๆ†': u'ๆก…ๆ†', u'ๆกŒๅ‡ ': u'ๆกŒๅ‡ ', u'ๆกŒๅކ': u'ๆกŒๆ›†', u'ๆกŒๅކๅฒ': u'ๆกŒๆญทๅฒ', u'ๆก‘ๅนฒ': u'ๆก‘ไนพ', u'ๆขไธŠๅ›ๅญ': u'ๆขไธŠๅ›ๅญ', u'ๆกๅนฒ': u'ๆขๅนน', u'ๆขจๅนฒ': u'ๆขจไนพ', u'ๆขฏๅ†ฒ': u'ๆขฏ่ก', u'ๆขฐ็ณป': u'ๆขฐ็นซ', u'ๆขฐๆ–—': u'ๆขฐ้ฌฅ', u'ๅผƒ่ˆ': u'ๆฃ„ๆจ', u'ๆฃ‰ๅˆถ': u'ๆฃ‰่ฃฝ', u'ๆฃ’ๅญ้ข': u'ๆฃ’ๅญ้บต', u'ๆžฃๅบ„': u'ๆฃ—่ŽŠ', u'ๆ ‹ๆข': u'ๆฃŸๆจ‘', u'ๆฃซๆœด': u'ๆฃซๆจธ', u'ๆฃฎๆž—้‡Œ': u'ๆฃฎๆž—่ฃก', u'ๆฃบๆ้‡Œ': u'ๆฃบๆ่ฃก', u'ๆคๅ‘': u'ๆค้ซฎ', u'ๆคฐๆžฃๅนฒ': u'ๆคฐๆฃ—ไนพ', u'ๆฅšๅบ„้—ฎ้ผŽ': 
u'ๆฅš่ŽŠๅ•้ผŽ', u'ๆฅšๅบ„็Ž‹': u'ๆฅš่ŽŠ็Ž‹', u'ๆฅšๅบ„็ป็ผจ': u'ๆฅš่ŽŠ็ต•็บ“', u'ๆกขๅนฒ': u'ๆฅจๅนน', u'ไธšไฝ™': u'ๆฅญ้ค˜', u'ๆฆจๅนฒ': u'ๆฆจไนพ', u'ๆ ๆ†': u'ๆง“ๆกฟ', u'ไนๅ™จ้’Ÿ': u'ๆจ‚ๅ™จ้˜', u'ๆจŠไบŽๆœŸ': u'ๆจŠๆ–ผๆœŸ', u'ๆขไธŠ': u'ๆจ‘ไธŠ', u'ๆขๆŸฑ': u'ๆจ‘ๆŸฑ', u'ๆ ‡ๆ†': u'ๆจ™ๆ†', u'ๆ ‡ๆ ‡่‡ด่‡ด': u'ๆจ™ๆจ™่‡ด่‡ด', u'ๆ ‡ๅ‡†': u'ๆจ™ๆบ–', u'ๆ ‡็ญพ': u'ๆจ™็ฑค', u'ๆ ‡่‡ด': u'ๆจ™็ทป', u'ๆ ‡ๆณจ': u'ๆจ™่จป', u'ๆ ‡ๅฟ—': u'ๆจ™่ชŒ', u'ๆจกๆฃฑ': u'ๆจก็จœ', u'ๆจก่Œƒ': u'ๆจก็ฏ„', u'ๆจก่Œƒๆฃ’ๆฃ’ๅ ‚': u'ๆจก่Œƒๆฃ’ๆฃ’ๅ ‚', u'ๆจกๅˆถ': u'ๆจก่ฃฝ', u'ๆ ท่Œƒ': u'ๆจฃ็ฏ„', u'ๆจต้‡‡': u'ๆจตๆŽก', u'ๆœดไฟฎๆ–ฏ': u'ๆจธไฟฎๆ–ฏ', u'ๆœดๅŽš': u'ๆจธๅŽš', u'ๆœดๅญฆ': u'ๆจธๅญธ', u'ๆœดๅฎž': u'ๆจธๅฏฆ', u'ๆœดๅฟตไป': u'ๆจธๅฟตไป', u'ๆœดๆ‹™': u'ๆจธๆ‹™', u'ๆœดๆจ•': u'ๆจธๆจ•', u'ๆœด็ˆถ': u'ๆจธ็ˆถ', u'ๆœด็›ด': u'ๆจธ็›ด', u'ๆœด็ด ': u'ๆจธ็ด ', u'ๆœด่ฎท': u'ๆจธ่จฅ', u'ๆœด่ดจ': u'ๆจธ่ณช', u'ๆœด้„™': u'ๆจธ้„™', u'ๆœด้‡': u'ๆจธ้‡', u'ๆœด้‡Ž': u'ๆจธ้‡Ž', u'ๆœด้’': u'ๆจธ้ˆ', u'ๆœด้™‹': u'ๆจธ้™‹', u'ๆœด้ฉฌ': u'ๆจธ้ฆฌ', u'ๆœด้ฒ': u'ๆจธ้ญฏ', u'ๆ ‘ๅนฒ': u'ๆจนๆฆฆ', u'ๆ ‘ๆข': u'ๆจนๆจ‘', u'ๆกฅๆข': u'ๆฉ‹ๆจ‘', u'ๆฉŸๆขฐ็ณป': u'ๆฉŸๆขฐ็ณป', u'ๆœบๆขฐ็ณป': u'ๆฉŸๆขฐ็ณป', u'ๆœบๆขฐ่กจ': u'ๆฉŸๆขฐ้Œถ', u'ๆœบๆขฐ้’Ÿ': u'ๆฉŸๆขฐ้˜', u'ๆœบๆขฐ้’Ÿ่กจ': u'ๆฉŸๆขฐ้˜้Œถ', u'ๆœบ็ปฃ': u'ๆฉŸ็นก', u'ๆจชๅพๆšดๆ•›': u'ๆฉซๅพตๆšดๆ–‚', u'ๆจชๆ†': u'ๆฉซๆ†', u'ๆจชๆข': u'ๆฉซๆจ‘', u'ๆจชๅ†ฒ': u'ๆฉซ่ก', u'ๅฐๅญ': u'ๆชฏๅญ', u'ๅฐๅธƒ': u'ๆชฏๅธƒ', u'ๅฐ็ฏ': u'ๆชฏ็‡ˆ', u'ๅฐ็ƒ': u'ๆชฏ็ƒ', u'ๅฐ้ข': u'ๆชฏ้ข', u'ๆŸœๅฐ': u'ๆซƒๆชฏ', u'ๆ ‰ๅ‘ๅทฅ': u'ๆซ›้ซฎๅทฅ', u'ๆ ๆ†': u'ๆฌ„ๆ†', u'ๆฌฒๆตท้šพๅกซ': u'ๆฌฒๆตท้›ฃๅกซ', u'ๆฌบ่’™': u'ๆฌบ็Ÿ‡', u'ๆญŒๅŽ': u'ๆญŒๅŽ', u'ๆญŒ้’Ÿ': u'ๆญŒ้˜', u'ๆฌงๆธธ': u'ๆญ้Š', u'ๆญขๅ’ณ่ฏ': u'ๆญขๅ’ณ่—ฅ', u'ๆญขไบŽ': u'ๆญขๆ–ผ', u'ๆญข็—›่ฏ': u'ๆญข็—›่—ฅ', u'ๆญข่ก€่ฏ': u'ๆญข่ก€่—ฅ', u'ๆญฃๅœจๅฑๅ’ค': u'ๆญฃๅœจๅฑๅ’ค', u'ๆญฃๅฎ˜ๅบ„': u'ๆญฃๅฎ˜ๅบ„', u'ๆญฃๅฝ“็€': u'ๆญฃ็•ถ่‘—', u'ๆญฆไธ‘': u'ๆญฆไธ‘', u'ๆญฆๅŽ': u'ๆญฆๅŽ', u'ๆญฆๆ–—': u'ๆญฆ้ฌฅ', u'ๅฒ่ฟไบ‘ๆšฎ': u'ๆญฒ่ฟไบ‘ๆšฎ', u'ๅކๅฒ้‡Œ': u'ๆญทๅฒ่ฃก', u'ๅฝ’ๅนถ': u'ๆญธไฝต', 
u'ๅฝ’ไบŽ': u'ๆญธๆ–ผ', u'ๅฝ’ไฝ™': u'ๆญธ้ค˜', u'ๆญนๆ–—': u'ๆญน้ฌฅ', u'ๆญปไบŽ': u'ๆญปๆ–ผ', u'ๆญป่ƒกๅŒ': u'ๆญป่ƒกๅŒ', u'ๆญป้‡Œๆฑ‚็”Ÿ': u'ๆญป่ฃกๆฑ‚็”Ÿ', u'ๆญป้‡Œ้€ƒ็”Ÿ': u'ๆญป่ฃก้€ƒ็”Ÿ', u'ๆฎ–่ฐท': u'ๆฎ–็ฉ€', u'ๆฎ‹่‚ด': u'ๆฎ˜่‚ด', u'ๆฎ‹ไฝ™': u'ๆฎ˜้ค˜', u'ๅƒตๅฐธ': u'ๆฎญๅฑ', u'ๆฎทๅธˆ็‰›ๆ–—': u'ๆฎทๅธซ็‰›้ฌฅ', u'ๆ€่™ซ่ฏ': u'ๆฎบ่Ÿฒ่—ฅ', u'ๅฃณ้‡Œ': u'ๆฎผ่ฃก', u'ๆฎฟ้’Ÿ่‡ช้ธฃ': u'ๆฎฟ้˜่‡ช้ณด', u'ๆฏไบŽ': u'ๆฏ€ๆ–ผ', u'ๆฏ้’Ÿไธบ้“Ž': u'ๆฏ€้˜็‚บ้ธ', u'ๆฎดๆ–—': u'ๆฏ†้ฌฅ', u'ๆฏๅŽ': u'ๆฏๅŽ', u'ๆฏ่Œƒ': u'ๆฏ็ฏ„', u'ๆฏไธ‘': u'ๆฏ้†œ', u'ๆฏๆฏๅช': u'ๆฏๆฏๅช', u'ๆฏๅช': u'ๆฏ้šป', u'ๆฏ’่ฏ': u'ๆฏ’่—ฅ', u'ๆฏ”ๅˆ’': u'ๆฏ”ๅŠƒ', u'ๆฏ›ๅ': u'ๆฏ›ๅ', u'ๆฏ›ๅงœ': u'ๆฏ›่–‘', u'ๆฏ›ๅ‘': u'ๆฏ›้ซฎ', u'ๆฏซๅŽ˜': u'ๆฏซ้‡', u'ๆฏซๅ‘': u'ๆฏซ้ซฎ', u'ๆฐ”ๅ†ฒๆ–—็‰›': u'ๆฐฃๆฒ–ๆ–—็‰›', u'ๆฐ”้ƒ': u'ๆฐฃ้ฌฑ', u'ๆฐค้ƒ': u'ๆฐค้ฌฑ', u'ๆฐดๆฅๆฑค้‡ŒๅŽป': u'ๆฐดไพ†ๆนฏ่ฃกๅŽป', u'ๆฐดๅ‡†': u'ๆฐดๆบ–', u'ๆฐด้‡Œ': u'ๆฐด่ฃก', u'ๆฐด้‡Œ้„‰': u'ๆฐด้‡Œ้„‰', u'ๆฐด้‡Œไนก': u'ๆฐด้‡Œ้„‰', u'ๆฐธๅކ': u'ๆฐธๆ›†', u'ๆฐธๅކๅฒ': u'ๆฐธๆญทๅฒ', u'ๆฐธๅฟ—ไธๅฟ˜': u'ๆฐธ่ชŒไธๅฟ˜', u'ๆฑ‚็Ÿฅๆฌฒ': u'ๆฑ‚็Ÿฅๆ…พ', u'ๆฑ‚็ญพ': u'ๆฑ‚็ฑค', u'ๆฑ‚้“ไบŽ็›ฒ': u'ๆฑ‚้“ๆ–ผ็›ฒ', u'ๆฑ ้‡Œ': u'ๆฑ ่ฃก', u'ๆฑก่”‘': u'ๆฑก่กŠ', u'ๆฑฒไบŽ': u'ๆฑฒๆ–ผ', u'ๅ†ณๆ–—': u'ๆฑบ้ฌฅ', u'ๆฒˆๆท€': u'ๆฒˆๆพฑ', u'ๆฒˆ็€': u'ๆฒˆ่‘—', u'ๆฒˆ้ƒ': u'ๆฒˆ้ฌฑ', u'ๆฒ‰ๆท€': u'ๆฒ‰ๆพฑ', u'ๆฒ‰้ƒ': u'ๆฒ‰้ฌฑ', u'ๆฒกๅนฒๆฒกๅ‡€': u'ๆฒ’ไนพๆฒ’ๆทจ', u'ๆฒกไบ‹ๅนฒ': u'ๆฒ’ไบ‹ๅนน', u'ๆฒกๅนฒ': u'ๆฒ’ๅนน', u'ๆฒกๆŠ˜่‡ณ': u'ๆฒ’ๆ‘บ่‡ณ', u'ๆฒกๆขขๅนฒ': u'ๆฒ’ๆขขๅนน', u'ๆฒกๆ ท่Œƒ': u'ๆฒ’ๆจฃ็ฏ„', u'ๆฒกๅ‡†': u'ๆฒ’ๆบ–', u'ๆฒก่ฏ': u'ๆฒ’่—ฅ', u'ๅ†ฒๅ† ๅ‘ๆ€’': u'ๆฒ–ๅ† ้ซฎๆ€’', u'ๆฒ™้‡Œๆท˜้‡‘': u'ๆฒ™่ฃกๆท˜้‡‘', u'ๆฒณๅฒณ': u'ๆฒณๅถฝ', u'ๆฒณๆตๆฑ‡้›†': u'ๆฒณๆตๅŒฏ้›†', u'ๆฒณ้‡Œ': u'ๆฒณ่ฃก', u'ๆฒนๆ–—': u'ๆฒน้ฌฅ', u'ๆฒน้ข': u'ๆฒน้บต', u'ๆฒปๆ„ˆ': u'ๆฒป็™’', u'ๆฒฟๆบฏ': u'ๆฒฟๆณ', u'ๆณ•ๅ ': u'ๆณ•ไฝ”', u'ๆณ•่‡ชๅˆถ': u'ๆณ•่‡ชๅˆถ', u'ๆณ›ๆธธ': u'ๆณ›้Š', u'ๆณกๅˆถ': u'ๆณก่ฃฝ', u'ๆณก้ข': u'ๆณก้บต', u'ๆณขๆฃฑ่œ': u'ๆณข็จœ่œ', u'ๆณขๅ‘่—ป': u'ๆณข้ซฎ่—ป', u'ๆณฅไบŽ': u'ๆณฅๆ–ผ', u'ๆณจไบ‘': u'ๆณจไบ‘', u'ๆณจ้‡Š': 
u'ๆณจ้‡‹', u'ๆณฐๅฑฑๆขๆœจ': u'ๆณฐๅฑฑๆขๆœจ', u'ๆณฑ้ƒ': u'ๆณฑ้ฌฑ', u'ๆณณๆฐ”้’Ÿ': u'ๆณณๆฐฃ้˜', u'ๆด„ๆธธ': u'ๆด„้Š', u'ๆด’ๅฎถ': u'ๆด’ๅฎถ', u'ๆด’ๆ‰ซ': u'ๆด’ๆŽƒ', u'ๆด’ๆฐด': u'ๆด’ๆฐด', u'ๆด’ๆด’': u'ๆด’ๆด’', u'ๆด’ๆท…': u'ๆด’ๆท…', u'ๆด’ๆถค': u'ๆด’ๆปŒ', u'ๆด’ๆฟฏ': u'ๆด’ๆฟฏ', u'ๆด’็„ถ': u'ๆด’็„ถ', u'ๆด’่„ฑ': u'ๆด’่„ซ', u'ๆด—็‚ผ': u'ๆด—้Š', u'ๆด—็ปƒ': u'ๆด—้Š', u'ๆด—ๅ‘': u'ๆด—้ซฎ', u'ๆด›้’Ÿไธœๅบ”': u'ๆด›้˜ๆฑๆ‡‰', u'ๆณ„ๆฌฒ': u'ๆดฉๆ…พ', u'ๆดช่Œƒ': u'ๆดช็ฏ„', u'ๆดช้€‚': u'ๆดช้€‚', u'ๆดช้’Ÿ': u'ๆดช้˜', u'ๆฑนๆถŒ': u'ๆดถๆนง', u'ๆดพๅ›ขๅ‚ๅŠ ': u'ๆดพๅœ˜ๅƒๅŠ ', u'ๆตๅพ': u'ๆตๅพต', u'ๆตไบŽ': u'ๆตๆ–ผ', u'ๆต่ก': u'ๆต่•ฉ', u'ๆต้ฃŽไฝ™ไฟ—': u'ๆต้ขจ้ค˜ไฟ—', u'ๆต้ฃŽไฝ™้Ÿต': u'ๆต้ขจ้ค˜้Ÿป', u'ๆตฉๆตฉ่ก่ก': u'ๆตฉๆตฉ่•ฉ่•ฉ', u'ๆตฉ่ก': u'ๆตฉ่•ฉ', u'ๆตช็ด่กจ': u'ๆตช็ด้Œถ', u'ๆตช่ก': u'ๆตช่•ฉ', u'ๆตชๆธธ': u'ๆตช้Š', u'ๆตฎไบŽ': u'ๆตฎๆ–ผ', u'ๆตฎ่ก': u'ๆตฎ่•ฉ', u'ๆตฎๅคธ': u'ๆตฎ่ช‡', u'ๆตฎๆพ': u'ๆตฎ้ฌ†', u'ๆตทไธŠๅธƒ้›ท': u'ๆตทไธŠไฝˆ้›ท', u'ๆตทๅนฒ': u'ๆตทไนพ', u'ๆตทๆนพๅธƒ้›ท': u'ๆตท็ฃไฝˆ้›ท', u'ๆถ‚ๅ–„ๅฆฎ': u'ๆถ‚ๅ–„ๅฆฎ', u'ๆถ‚ๅค': u'ๆถ‚ๅค', u'ๆถ‚ๅฃฏๅ‹ณ': u'ๆถ‚ๅฃฏๅ‹ณ', u'ๆถ‚ๅฃฎๅ‹‹': u'ๆถ‚ๅฃฏๅ‹ณ', u'ๆถ‚ๅคฉ็›ธ': u'ๆถ‚ๅคฉ็›ธ', u'ๆถ‚ๅง“': u'ๆถ‚ๅง“', u'ๆถ‚ๅบ็‘„': u'ๆถ‚ๅบ็‘„', u'ๆถ‚ๆ•ๆ’': u'ๆถ‚ๆ•ๆ†', u'ๆถ‚ๆ•ๆ†': u'ๆถ‚ๆ•ๆ†', u'ๆถ‚ๆพคๆฐ‘': u'ๆถ‚ๆพคๆฐ‘', u'ๆถ‚ๆณฝๆฐ‘': u'ๆถ‚ๆพคๆฐ‘', u'ๆถ‚็ป็…ƒ': u'ๆถ‚็ดน็…ƒ', u'ๆถ‚็พฝๅฟ': u'ๆถ‚็พฝๅฟ', u'ๆถ‚่ฌน็”ณ': u'ๆถ‚่ฌน็”ณ', u'ๆถ‚่ฐจ็”ณ': u'ๆถ‚่ฌน็”ณ', u'ๆถ‚้€ขๅนด': u'ๆถ‚้€ขๅนด', u'ๆถ‚้†’ๅ“ฒ': u'ๆถ‚้†’ๅ“ฒ', u'ๆถ‚้•ทๆœ›': u'ๆถ‚้•ทๆœ›', u'ๆถ‚้•ฟๆœ›': u'ๆถ‚้•ทๆœ›', u'ๆถ‚้ธฟ้’ฆ': u'ๆถ‚้ดปๆฌฝ', u'ๆถ‚้ดปๆฌฝ': u'ๆถ‚้ดปๆฌฝ', u'ๆถˆ็‚Ž่ฏ': u'ๆถˆ็‚Ž่—ฅ', u'ๆถˆ่‚ฟ่ฏ': u'ๆถˆ่…ซ่—ฅ', u'ๆถฒๆ™ถ่กจ': u'ๆถฒๆ™ถ้Œถ', u'ๆถณ่’™': u'ๆถณๆฟ›', u'ๆถธๅนฒ': u'ๆถธไนพ', u'ๅ‡‰้ข': u'ๆถผ้บต', u'ๆท‹ไฝ™ๅœŸ': u'ๆท‹้ค˜ๅœŸ', u'ๆท‘่Œƒ': u'ๆท‘็ฏ„', u'ๆณชๅนฒ': u'ๆทšไนพ', u'ๆณชๅฆ‚ๆณ‰ๆถŒ': u'ๆทšๅฆ‚ๆณ‰ๆนง', u'ๆทกไบŽ': u'ๆทกๆ–ผ', u'ๆทก่’™่’™': u'ๆทกๆฟ›ๆฟ›', u'ๆทกๆœฑ': u'ๆทก็กƒ', u'ๅ‡€ไฝ™': u'ๆทจ้ค˜', u'ๅ‡€ๅ‘': u'ๆทจ้ซฎ', u'ๆทซๆฌฒ': u'ๆทซๆ…พ', u'ๆทซ่ก': u'ๆทซ่•ฉ', 
u'ๆทฌ็‚ผ': u'ๆทฌ้Š', u'ๆทฑๅฑฑไฝ•ๅค„้’Ÿ': u'ๆทฑๅฑฑไฝ•่™•้˜', u'ๆทฑๆธŠ้‡Œ': u'ๆทฑๆทต่ฃก', u'ๆทณไบŽ': u'ๆทณไบŽ', u'ๆทณๆœด': u'ๆทณๆจธ', u'ๆธŠๆทณๅฒณๅณ™': u'ๆทตๆทณๅถฝๅณ™', u'ๆต…ๆท€': u'ๆทบๆพฑ', u'ๆธ…ๅฟƒๅฏกๆฌฒ': u'ๆธ…ๅฟƒๅฏกๆฌฒ', u'ๆธ…ๆฑคๆŒ‚้ข': u'ๆธ…ๆนฏๆŽ›้บต', u'ๅ‡่‚ฅ่ฏ': u'ๆธ›่‚ฅ่—ฅ', u'ๆธ ๅ†ฒ': u'ๆธ ่ก', u'ๆธฏๅˆถ': u'ๆธฏ่ฃฝ', u'ๆต‘ๆœด': u'ๆธพๆจธ', u'ๆต‘ไธช': u'ๆธพ็ฎ‡', u'ๅ‡‘ๅˆ็€': u'ๆนŠๅˆ่‘—', u'ๆน–้‡Œ': u'ๆน–่ฃก', u'ๆน˜็ปฃ': u'ๆน˜็นก', u'ๆน˜็ดฏ': u'ๆน˜็บ', u'ๆนŸๆฝฆ็”Ÿ่‹น': u'ๆนŸๆฝฆ็”Ÿ่‹น', u'ๆถŒไธŠ': u'ๆนงไธŠ', u'ๆถŒๆฅ': u'ๆนงไพ†', u'ๆถŒๅ…ฅ': u'ๆนงๅ…ฅ', u'ๆถŒๅ‡บ': u'ๆนงๅ‡บ', u'ๆถŒๅ‘': u'ๆนงๅ‘', u'ๆถŒๆณ‰': u'ๆนงๆณ‰', u'ๆถŒ็Žฐ': u'ๆนง็พ', u'ๆถŒ่ตท': u'ๆนง่ตท', u'ๆถŒ่ฟ›': u'ๆนง้€ฒ', u'ๆนฎ้ƒ': u'ๆนฎ้ฌฑ', u'ๆฑคไธ‹้ข': u'ๆนฏไธ‹้บต', u'ๆฑคๅ›ข': u'ๆนฏ็ณฐ', u'ๆฑค่ฏ': u'ๆนฏ่—ฅ', u'ๆฑค้ข': u'ๆนฏ้บต', u'ๆบไบŽ': u'ๆบๆ–ผ', u'ๅ‡†ไธๅ‡†': u'ๆบ–ไธๆบ–', u'ๅ‡†ไพ‹': u'ๆบ–ไพ‹', u'ๅ‡†ไฟ': u'ๆบ–ไฟ', u'ๅ‡†ๅค‡': u'ๆบ–ๅ‚™', u'ๅ‡†ๅ„ฟ': u'ๆบ–ๅ…’', u'ๅ‡†ๅˆ†ๅญ': u'ๆบ–ๅˆ†ๅญ', u'ๅ‡†ๅˆ™': u'ๆบ–ๅ‰‡', u'ๅ‡†ๅ™ถๅฐ”': u'ๆบ–ๅ™ถ็ˆพ', u'ๅ‡†ๅฎš': u'ๆบ–ๅฎš', u'ๅ‡†ๅนณๅŽŸ': u'ๆบ–ๅนณๅŽŸ', u'ๅ‡†ๅบฆ': u'ๆบ–ๅบฆ', u'ๅ‡†ๅผ': u'ๆบ–ๅผ', u'ๅ‡†ๆ‹ฟ็ฃ': u'ๆบ–ๆ‹ฟ็ฃ', u'ๅ‡†ๆฎ': u'ๆบ–ๆ“š', u'ๅ‡†ๆ‹Ÿ': u'ๆบ–ๆ“ฌ', u'ๅ‡†ๆ–ฐๅจ˜': u'ๆบ–ๆ–ฐๅจ˜', u'ๅ‡†ๆ–ฐ้ƒŽ': u'ๆบ–ๆ–ฐ้ƒŽ', u'ๅ‡†ๆ˜Ÿ': u'ๆบ–ๆ˜Ÿ', u'ๅ‡†ๆ˜ฏ': u'ๆบ–ๆ˜ฏ', u'ๅ‡†ๆ—ถ': u'ๆบ–ๆ™‚', u'ๅ‡†ไผš': u'ๆบ–ๆœƒ', u'ๅ‡†ๅ†ณ่ต›': u'ๆบ–ๆฑบ่ณฝ', u'ๅ‡†็š„': u'ๆบ–็š„', u'ๅ‡†็กฎ': u'ๆบ–็ขบ', u'ๅ‡†็บฟ': u'ๆบ–็ทš', u'ๅ‡†็ปณ': u'ๆบ–็นฉ', u'ๅ‡†่ฏ': u'ๆบ–่ฉฑ', u'ๅ‡†่ฐฑ': u'ๆบ–่ญœ', u'ๅ‡†่ดงๅธ': u'ๆบ–่ฒจๅนฃ', u'ๅ‡†ๅคด': u'ๆบ–้ ญ', u'ๅ‡†็‚น': u'ๆบ–้ปž', u'ๆบŸ่’™': u'ๆบŸๆฟ›', u'ๆบขไบŽ': u'ๆบขๆ–ผ', u'ๆบฒ้ข': u'ๆบฒ้บต', u'ๆบบไบŽ': u'ๆบบๆ–ผ', u'ๆปƒ้ƒ': u'ๆปƒ้ฌฑ', u'ๆป‘ๅ€Ÿ': u'ๆป‘่—‰', u'ๆฑ‡ไธฐ': u'ๆป™่ฑ', u'ๅคๅ‘ณ': u'ๆปทๅ‘ณ', u'ๅคๆฐด': u'ๆปทๆฐด', u'ๅคๆฑ': u'ๆปทๆฑ', u'ๅคๆน–': u'ๆปทๆน–', u'ๅค่‚‰': u'ๆปท่‚‰', u'ๅค่œ': u'ๆปท่œ', u'ๅค่›‹': u'ๆปท่›‹', u'ๅคๅˆถ': u'ๆปท่ฃฝ', u'ๅค้ธก': u'ๆปท้›ž', u'ๅค้ข': u'ๆปท้บต', u'ๆปกๆ‹ผ่‡ชๅฐฝ': u'ๆปฟๆ‹š่‡ช็›ก', u'ๆปกๆปกๅฝ“ๅฝ“': 
u'ๆปฟๆปฟ็•ถ็•ถ', u'ๆปกๅคดๆด‹ๅ‘': u'ๆปฟ้ ญๆด‹้ซฎ', u'ๆผ‚่ก': u'ๆผ‚่•ฉ', u'ๆผ•ๆŒฝ': u'ๆผ•่ผ“', u'ๆฒค้ƒ': u'ๆผš้ฌฑ', u'ๆฑ‰ๅผฅ็™ป้’Ÿ': u'ๆผขๅฝŒ็™ป้˜', u'ๆฑ‰ๅผฅ็™ป้’Ÿ่กจๅ…ฌๅธ': u'ๆผขๅฝŒ็™ป้˜้Œถๅ…ฌๅธ', u'ๆผซๆธธ': u'ๆผซ้Š', u'ๆฝœๆ„่ฏ†้‡Œ': u'ๆฝ›ๆ„่ญ˜่ฃก', u'ๆฝœๆฐด่กจ': u'ๆฝ›ๆฐด้Œถ', u'ๆฝœๆฐด้’Ÿ': u'ๆฝ›ๆฐด้˜', u'ๆฝœๆฐด้’Ÿ่กจ': u'ๆฝ›ๆฐด้˜้Œถ', u'ๆฝญ้‡Œ': u'ๆฝญ่ฃก', u'ๆฝฎๆถŒ': u'ๆฝฎๆนง', u'ๆบƒไบŽ': u'ๆฝฐๆ–ผ', u'ๆพ„ๆพน็ฒพ่‡ด': u'ๆพ„ๆพน็ฒพ่‡ด', u'ๆพ’่’™': u'ๆพ’ๆฟ›', u'ๆณฝๆธ—ๆผ“่€Œไธ‹้™': u'ๆพคๆปฒ็•่€Œไธ‹้™', u'ๆท€ไนƒไธ่€•ไน‹ๅœฐ': u'ๆพฑไนƒไธ่€•ไน‹ๅœฐ', u'ๆท€ๅŒ—็‰‡': u'ๆพฑๅŒ—็‰‡', u'ๆท€ๅฑฑ': u'ๆพฑๅฑฑ', u'ๆท€ๆท€': u'ๆพฑๆพฑ', u'ๆท€็งฏ': u'ๆพฑ็ฉ', u'ๆท€็ฒ‰': u'ๆพฑ็ฒ‰', u'ๆท€่งฃ็‰ฉ': u'ๆพฑ่งฃ็‰ฉ', u'ๆท€่ฐ“ไน‹ๆป“': u'ๆพฑ่ฌ‚ไน‹ๆป“', u'ๆพนๅฐ': u'ๆพน่‡บ', u'ๆพน่ก': u'ๆพน่•ฉ', u'ๆฟ€่ก': u'ๆฟ€่•ฉ', u'ๆต“ๅ‘': u'ๆฟƒ้ซฎ', u'่’™ๆฑœ': u'ๆฟ›ๆฑœ', u'่’™่’™็ป†้›จ': u'ๆฟ›ๆฟ›็ดฐ้›จ', u'่’™้›พ': u'ๆฟ›้œง', u'่’™ๆพ้›จ': u'ๆฟ›้ฌ†้›จ', u'่’™้ธฟ': u'ๆฟ›้ดป', u'ๆณป่ฏ': u'็€‰่—ฅ', u'ๆฒˆๅ‰็บฟ': u'็€‹ๅ‰็ทš', u'ๆฒˆๅฑฑ็บฟ': u'็€‹ๅฑฑ็ทš', u'ๆฒˆๅทž': u'็€‹ๅทž', u'ๆฒˆๆฐด': u'็€‹ๆฐด', u'ๆฒˆๆฒณ': u'็€‹ๆฒณ', u'ๆฒˆๆตท': u'็€‹ๆตท', u'ๆฒˆๆตท้“่ทฏ': u'็€‹ๆตท้ต่ทฏ', u'ๆฒˆ้˜ณ': u'็€‹้™ฝ', u'ๆฝ‡ๆด’': u'็€Ÿๆด’', u'ๅผฅๅฑฑ้้‡Ž': u'็€ฐๅฑฑ้้‡Ž', u'ๅผฅๆผซ': u'็€ฐๆผซ', u'ๅผฅๆผซ็€': u'็€ฐๆผซ่‘—', u'ๅผฅๅผฅ': u'็€ฐ็€ฐ', u'็Œ่ฏ': u'็Œ่—ฅ', u'ๆผ“ๆฐด': u'็•ๆฐด', u'ๆผ“ๆฑŸ': u'็•ๆฑŸ', u'ๆผ“ๆน˜': u'็•ๆน˜', u'ๆผ“็„ถ': u'็•็„ถ', u'ๆปฉๆถ‚': u'็˜ๆถ‚', u'็ซๅนถ้ž': u'็ซไธฆ้ž', u'็ซๅนถ': u'็ซไฝต', u'็ซๆ‹ผ': u'็ซๆ‹š', u'็ซๆŠ˜ๅญ': u'็ซๆ‘บๅญ', u'็ซ็ฎญๅธƒ้›ท': u'็ซ็ฎญไฝˆ้›ท', u'็ซ็ญพ': u'็ซ็ฑค', u'็ซ่ฏ': u'็ซ่—ฅ', u'็ฐ่’™': u'็ฐๆฟ›', u'็ฐ่’™่’™': u'็ฐๆฟ›ๆฟ›', u'็‚†้ข': u'็‚†้บต', u'็‚’้ข': u'็‚’้บต', u'็‚ฎๅˆถ': u'็‚ฎ่ฃฝ', u'็‚ธ่ฏ': u'็‚ธ่—ฅ', u'็‚ธ้…ฑ้ข': u'็‚ธ้†ฌ้บต', u'ไธบๅ‡†': u'็‚บๆบ–', u'ไธบ็€': u'็‚บ่‘—', u'ไนŒๅ‘': u'็ƒ้ซฎ', u'ไนŒ้พ™้ข': u'็ƒ้พ้บต', u'็ƒ˜ๅนฒ': u'็ƒ˜ไนพ', u'็ƒ˜ๅˆถ': u'็ƒ˜่ฃฝ', u'็ƒคๅนฒ': u'็ƒคไนพ', u'็ƒคๆ™’': u'็ƒคๆ™’', u'็„™ๅนฒ': u'็„™ไนพ', u'ๆ— 
ๅพไธไฟก': u'็„กๅพตไธไฟก', u'ๆ— ไธšๆธธๆฐ‘': u'็„กๆฅญๆธธๆฐ‘', u'ๆ— ๆขๆฅผ็›–': u'็„กๆจ‘ๆจ“่“‹', u'ๆ— ๆณ•ๅ…‹ๅˆถ': u'็„กๆณ•ๅ‰‹ๅˆถ', u'ๆ— ่ฏๅฏๆ•‘': u'็„ก่—ฅๅฏๆ•‘', u'็„ก่จ€ไธไป‡': u'็„ก่จ€ไธ่ฎŽ', u'ๆ— ไฝ™': u'็„ก้ค˜', u'็„ถ่บซๆญปๆ‰ๆ•ฐๆœˆ่€ณ': u'็„ถ่บซๆญป็บ”ๆ•ธๆœˆ่€ณ', u'็‚ผ่ฏ': u'็…‰่—ฅ', u'็‚ผๅˆถ': u'็…‰่ฃฝ', u'็…Ž่ฏ': u'็…Ž่—ฅ', u'็…Ž้ข': u'็…Ž้บต', u'็ƒŸๅท': u'็…™ๆฒ', u'็ƒŸๆ–—ไธ': u'็…™ๆ–—็ตฒ', u'็…งๅ ': u'็…งไฝ”', u'็…งๅ…ฅ็ญพ': u'็…งๅ…ฅ็ฑค', u'็…งๅ‡†': u'็…งๆบ–', u'็…ง็›ธๅนฒ็‰‡': u'็…ง็›ธไนพ็‰‡', u'็…จๅนฒ': u'็…จไนพ', u'็…ฎ้ข': u'็…ฎ้บต', u'่ง้ƒ': u'็†’้ฌฑ', u'็†ฌ่ฏ': u'็†ฌ่—ฅ', u'็‚–่ฏ': u'็‡‰่—ฅ', u'็‡Žๅ‘': u'็‡Ž้ซฎ', u'็ƒงๅนฒ': u'็‡’ไนพ', u'็‡•ๅ‡ ': u'็‡•ๅ‡ ', u'็‡•ๅทขไบŽๅน•': u'็‡•ๅทขๆ–ผๅน•', u'็‡•็‡•ไบŽ้ฃž': u'็‡•็‡•ไบŽ้ฃ›', u'็‡•ๆธธ': u'็‡•้Š', u'็ƒซไธ€ไธชๅ‘': u'็‡™ไธ€ๅ€‹้ซฎ', u'็ƒซไธ€ๆฌกๅ‘': u'็‡™ไธ€ๆฌก้ซฎ', u'็ƒซไธชๅ‘': u'็‡™ๅ€‹้ซฎ', u'็ƒซๅฎŒๅ‘': u'็‡™ๅฎŒ้ซฎ', u'็ƒซๆฌกๅ‘': u'็‡™ๆฌก้ซฎ', u'็ƒซๅ‘': u'็‡™้ซฎ', u'็ƒซ้ข': u'็‡™้บต', u'่ฅๅนฒ': u'็‡Ÿๅนน', u'็ƒฌไฝ™': u'็‡ผ้ค˜', u'ไบ‰ๅฅ‡ๆ–—ๅฆ': u'็ˆญๅฅ‡้ฌฅๅฆ', u'ไบ‰ๅฅ‡ๆ–—ๅผ‚': u'็ˆญๅฅ‡้ฌฅ็•ฐ', u'ไบ‰ๅฅ‡ๆ–—่‰ณ': u'็ˆญๅฅ‡้ฌฅ่ฑ”', u'ไบ‰ๅฆๆ–—ๅฅ‡': u'็ˆญๅฆ้ฌฅๅฅ‡', u'ไบ‰ๅฆๆ–—่‰ณ': u'็ˆญๅฆ้ฌฅ่ฑ”', u'ไบ‰็บขๆ–—็ดซ': u'็ˆญ็ด…้ฌฅ็ดซ', u'ไบ‰ๆ–—': u'็ˆญ้ฌฅ', u'็ˆฐๅฎš็ฅฅๅކ': u'็ˆฐๅฎš็ฅฅๅŽค', u'็ˆฝ่ก': u'็ˆฝ่•ฉ', u'ๅฐ”ๅ†ฌๅ‡': u'็ˆพๅ†ฌ้™ž', u'ๅข™้‡Œ': u'็‰†่ฃก', u'็‰‡่จ€ๅช่ฏญ': u'็‰‡่จ€้šป่ชž', u'็‰™็ญพ': u'็‰™็ฑค', u'็‰›่‚‰้ข': u'็‰›่‚‰้บต', u'็‰›ๅช': u'็‰›้šป', u'็‰ฉๆฌฒ': u'็‰ฉๆ…พ', u'็‰นๅˆซ่‡ด': u'็‰นๅˆซ่‡ด', u'็‰นๅˆถไฝ': u'็‰นๅˆถไฝ', u'็‰นๅˆถๅฎš': u'็‰นๅˆถๅฎš', u'็‰นๅˆถๆญข': u'็‰นๅˆถๆญข', u'็‰นๅˆถ่ฎข': u'็‰นๅˆถ่จ‚', u'็‰นๅพ': u'็‰นๅพต', u'็‰นๆ•ˆ่ฏ': u'็‰นๆ•ˆ่—ฅ', u'็‰นๅˆถ': u'็‰น่ฃฝ', u'็‰ตไธ€ๅ‘': u'็‰ฝไธ€้ซฎ', u'็‰ตๆŒ‚': u'็‰ฝๆŒ‚', u'็‰ต็ณป': u'็‰ฝ็นซ', u'่ฆ็กฎ': u'็Š–็กฎ', u'็‹‚ๅ ': u'็‹‚ไฝ”', u'็‹‚ๅนถๆฝฎ': u'็‹‚ไฝตๆฝฎ', u'็‹ƒไบŽ': u'็‹ƒๆ–ผ', u'็‹ๅ€Ÿ่™Žๅจ': u'็‹่—‰่™Žๅจ', u'็Œ›ไบŽ': u'็Œ›ๆ–ผ', u'็Œ›ๅ†ฒ': u'็Œ›่ก', u'็Œœไธ‰ๅˆ’ไบ”': u'็Œœไธ‰ๅˆ’ไบ”', u'็Šนๅฆ‚่กจ': u'็Œถๅฆ‚้Œถ', 
u'็Šนๅฆ‚้’Ÿ': u'็Œถๅฆ‚้˜', u'็Šนๅฆ‚้’Ÿ่กจ': u'็Œถๅฆ‚้˜้Œถ', u'ๅ‘†ไธฒไบ†็šฎ': u'็ƒไธฒไบ†็šฎ', u'ๅ‘†ไบ‹': u'็ƒไบ‹', u'ๅ‘†ไบบ': u'็ƒไบบ', u'ๅ‘†ๅญ': u'็ƒๅญ', u'ๅ‘†ๆ€ง': u'็ƒๆ€ง', u'ๅ‘†ๆƒณ': u'็ƒๆƒณ', u'ๅ‘†ๆ†จๅ‘†': u'็ƒๆ†จ็ƒ', u'ๅ‘†ๆ น': u'็ƒๆ น', u'ๅ‘†ๆฐ”': u'็ƒๆฐฃ', u'ๅ‘†ๆปž': u'็ƒๆปฏ', u'ๅ‘†ๅ‘†': u'็ƒ็ƒ', u'ๅ‘†็—ด': u'็ƒ็—ด', u'ๅ‘†็ฃ•': u'็ƒ็ฃ•', u'ๅ‘†็ญ‰': u'็ƒ็ญ‰', u'ๅ‘†่„‘': u'็ƒ่…ฆ', u'ๅ‘†็€': u'็ƒ่‘—', u'ๅ‘†่ฏ': u'็ƒ่ฉฑ', u'ๅ‘†ๅคด': u'็ƒ้ ญ', u'็‹ฑ้‡Œ': u'็„่ฃก', u'ๅฅ–ๆฏ': u'็Ž็›ƒ', u'็‹ฌๅ ': u'็จไฝ”', u'็‹ฌๅ ้ณŒๅคด': u'็จไฝ”้ฐฒ้ ญ', u'็‹ฌ่พŸ่นŠๅพ„': u'็จ้—ข่นŠๅพ‘', u'่ŽทๅŒชๅ…ถไธ‘': u'็ฒๅŒชๅ…ถ้†œ', u'ๅ…ฝๆฌฒ': u'็ธๆ…พ', u'็Œฎไธ‘': u'็ป้†œ', u'็އๅ›ขๅ‚ๅŠ ': u'็އๅœ˜ๅƒๅŠ ', u'็މๅކ': u'็މๆ›†', u'็މๅކๅฒ': u'็މๆญทๅฒ', u'็Ž‹ไพฏๅŽ': u'็Ž‹ไพฏๅŽ', u'็Ž‹ๅŽ': u'็Ž‹ๅŽ', u'็Ž‹ๅบ„': u'็Ž‹่ŽŠ', u'็Ž‹ไฝ™้ฑผ': u'็Ž‹้ค˜้ญš', u'็่‚ดๅผ‚้ฆ”': u'็่‚ด็•ฐ้ฅŒ', u'็ญ้‡Œ': u'็ญ่ฃก', u'็ŽฐไบŽ': u'็พๆ–ผ', u'็ƒๆ†': u'็ƒๆ†', u'็†ไธ€ไธชๅ‘': u'็†ไธ€ๅ€‹้ซฎ', u'็†ไธ€ๆฌกๅ‘': u'็†ไธ€ๆฌก้ซฎ', u'็†ไธชๅ‘': u'็†ๅ€‹้ซฎ', u'็†ๅฎŒๅ‘': u'็†ๅฎŒ้ซฎ', u'็†ๆฌกๅ‘': u'็†ๆฌก้ซฎ', u'็†ๅ‘': u'็†้ซฎ', u'็ด้’Ÿ': u'็ด้˜', u'็‘žๅพ': u'็‘žๅพต', u'็‘ถ็ญพ': u'็‘ค็ฑค', u'็Žฏๆธธ': u'็’ฐ้Š', u'็“ฎๅฎ‰': u'็”•ๅฎ‰', u'็”šไบŽ': u'็”šๆ–ผ', u'็”šไนˆ': u'็”š้บผ', u'็”œๆฐด้ข': u'็”œๆฐด้บต', u'็”œ้ข้…ฑ': u'็”œ้บต้†ฌ', u'็”ŸๅŠ›้ข': u'็”ŸๅŠ›้บต', u'็”ŸไบŽ': u'็”Ÿๆ–ผ', u'็”Ÿๆฎ–ๆด„ๆธธ': u'็”Ÿๆฎ–ๆด„ๆธธ', u'็”Ÿ็‰ฉ้’Ÿ': u'็”Ÿ็‰ฉ้˜', u'็”Ÿๅ‘็”Ÿ': u'็”Ÿ็™ผ็”Ÿ', u'็”ŸๅŽๅ‘': u'็”Ÿ่ฏ้ซฎ', u'็”Ÿๅงœ': u'็”Ÿ่–‘', u'็”Ÿ้”ˆ': u'็”Ÿ้ฝ', u'็”Ÿๅ‘': u'็”Ÿ้ซฎ', u'ไบงๅตๆด„ๆธธ': u'็”ขๅตๆด„ๆธธ', u'็”จ่ฏ': u'็”จ่—ฅ', u'็”ฉๅ‘': u'็”ฉ้ซฎ', u'็”ฐ่ฐท': u'็”ฐ็ฉ€', u'็”ฐๅบ„': u'็”ฐ่ŽŠ', u'็”ฐ้‡Œ': u'็”ฐ่ฃก', u'็”ฑไฝ™': u'็”ฑไฝ™', u'็”ฑไบŽ': u'็”ฑๆ–ผ', u'็”ฑ่กจๅŠ้‡Œ': u'็”ฑ่กจๅŠ่ฃก', u'็”ทไฝฃไบบ': u'็”ทไฝฃไบบ', u'็”ทไป†': u'็”ทๅƒ•', u'็”ท็”จ่กจ': u'็”ท็”จ้Œถ', u'็•ไบŽ': u'็•ๆ–ผ', u'็•™ๅ‘': u'็•™้ซฎ', u'ๆฏ•ไบŽ': u'็•ขๆ–ผ', u'ๆฏ•ไธšไบŽ': u'็•ขๆฅญๆ–ผ', u'ๆฏ•็”Ÿๅ‘ๅฑ•': 
u'็•ข็”Ÿ็™ผๅฑ•', u'็”ป็€': u'็•ซ่‘—', u'ๅฝ“ๅฎถๆ‰็ŸฅๆŸด็ฑณไปท': u'็•ถๅฎถ็บ”็ŸฅๆŸด็ฑณๅƒน', u'ๅฝ“ๅ‡†': u'็•ถๆบ–', u'ๅฝ“ๅฝ“ไธไธ': u'็•ถ็•ถไธไธ', u'ๅฝ“็€': u'็•ถ่‘—', u'็–ๆพ': u'็–้ฌ†', u'็–‘็ณป': u'็–‘ไฟ‚', u'็–‘ๅ‡ถ': u'็–‘ๅ…‡', u'็–ฒไบŽ': u'็–ฒๆ–ผ', u'็–ฒๅ›ฐ': u'็–ฒ็', u'็—…ๅพ': u'็—…ๅพต', u'็—…ๆ„ˆ': u'็—…็™’', u'็—…ไฝ™': u'็—…้ค˜', u'็—‡ๅ€™็พค': u'็—‡ๅ€™็พค', u'็—Šๆ„ˆ': u'็—Š็™’', u'็—’็–น': u'็—’็–น', u'็—’็—’': u'็—’็—’', u'็—•่ฟน': u'็—•่ฟน', u'ๆ„ˆๅˆ': u'็™’ๅˆ', u'็—‡ๅ€™': u'็™ฅๅ€™', u'็—‡็Šถ': u'็™ฅ็‹€', u'็—‡็ป“': u'็™ฅ็ต', u'็™ธไธ‘': u'็™ธไธ‘', u'ๅ‘ๅนฒ': u'็™ผไนพ', u'ๅ‘ๆฑ—่ฏ': u'็™ผๆฑ—่—ฅ', u'ๅ‘ๅ‘†': u'็™ผ็ƒ', u'ๅ‘่’™': u'็™ผ็Ÿ‡', u'ๅ‘็ญพ': u'็™ผ็ฑค', u'ๅ‘ๅบ„': u'็™ผ่ŽŠ', u'ๅ‘็€': u'็™ผ่‘—', u'ๅ‘่กจ': u'็™ผ่กจ', u'็™ผ่กจ': u'็™ผ่กจ', u'ๅ‘ๆพ': u'็™ผ้ฌ†', u'ๅ‘้ข': u'็™ผ้บต', u'็™ฝๅนฒ': u'็™ฝไนพ', u'็™ฝๅ…”ๆ“ฃ่ฏ': u'็™ฝๅ…”ๆ“ฃ่—ฅ', u'็™ฝๅนฒๅ„ฟ': u'็™ฝๅนฒๅ…’', u'็™ฝๆœฏ': u'็™ฝๆœฎ', u'็™ฝๆœด': u'็™ฝๆจธ', u'็™ฝๅ‡€้ข็šฎ': u'็™ฝๆทจ้ข็šฎ', u'็™ฝๅ‘ๅ…ถไบ‹': u'็™ฝ็™ผๅ…ถไบ‹', u'็™ฝ็šฎๆพ': u'็™ฝ็šฎๆพ', u'็™ฝ็ฒ‰้ข': u'็™ฝ็ฒ‰้บต', u'็™ฝ้‡Œ้€็บข': u'็™ฝ่ฃก้€็ด…', u'็™ฝๅ‘': u'็™ฝ้ซฎ', u'็™ฝ่ƒก': u'็™ฝ้ฌ', u'็™ฝ้œ‰': u'็™ฝ้ปด', u'็™พไธช': u'็™พๅ€‹', u'็™พๅชๅฏ': u'็™พๅชๅฏ', u'็™พๅชๅคŸ': u'็™พๅชๅค ', u'็™พๅชๆ€•': u'็™พๅชๆ€•', u'็™พๅช่ถณๅคŸ': u'็™พๅช่ถณๅค ', u'็™พๅคšๅช': u'็™พๅคš้šป', u'็™พๅคฉๅŽ': u'็™พๅคฉๅพŒ', u'็™พๆ‹™ๅƒไธ‘': u'็™พๆ‹™ๅƒ้†œ', u'็™พ็ง‘้‡Œ': u'็™พ็ง‘่ฃก', u'็™พ่ฐท': u'็™พ็ฉ€', u'็™พๆ‰Ž': u'็™พ็ดฎ', u'็™พ่Šฑๅކ': u'็™พ่Šฑๆ›†', u'็™พ่Šฑๅކๅฒ': u'็™พ่Šฑๆญทๅฒ', u'็™พ่ฏไน‹้•ฟ': u'็™พ่—ฅไน‹้•ท', u'็™พ็‚ผ': u'็™พ้Š', u'็™พๅช': u'็™พ้šป', u'็™พไฝ™': u'็™พ้ค˜', u'็š„ๅ…‹ๅˆถ': u'็š„ๅ‰‹ๅˆถ', u'็š„้’Ÿ': u'็š„้˜', u'็š„้’Ÿ่กจ': u'็š„้˜้Œถ', u'็š†ๅฏไฝœๆท€': u'็š†ๅฏไฝœๆพฑ', u'็š†ๅ‡†': u'็š†ๆบ–', u'็š‡ๅŽ': u'็š‡ๅŽ', u'็š‡ๅކ': u'็š‡ๆ›†', u'็š‡ๆžๅކ': u'็š‡ๆฅตๆ›†', u'็š‡ๆžๅކๅฒ': u'็š‡ๆฅตๆญทๅฒ', u'็š‡ๅކๅฒ': u'็š‡ๆญทๅฒ', u'็š‡ๅบ„': u'็š‡่ŽŠ', u'็š“ๅ‘': u'็š“้ซฎ', u'็šฎๅˆถๆœ': u'็šฎๅˆถๆœ', u'็šฎ้‡Œๆ˜ฅ็ง‹': 
u'็šฎ่ฃกๆ˜ฅ็ง‹', u'็šฎ้‡Œ้˜ณ็ง‹': u'็šฎ่ฃก้™ฝ็ง‹', u'็šฎๅˆถ': u'็šฎ่ฃฝ', u'็šฎๆพ': u'็šฎ้ฌ†', u'็šฑๅˆซ': u'็šบๅฝ†', u'็šฑๆŠ˜': u'็šบๆ‘บ', u'็›†ๅŠ': u'็›†ๅผ”', u'็›ˆไฝ™': u'็›ˆ้ค˜', u'็›ŠไบŽ': u'็›Šๆ–ผ', u'็›’้‡Œ': u'็›’่ฃก', u'็››่ตž': u'็››่ฎš', u'็›—้‡‡': u'็›œๆŽก', u'็›—้’Ÿ': u'็›œ้˜', u'ๅฐฝ้‡ๅ…‹ๅˆถ': u'็›ก้‡ๅ‰‹ๅˆถ', u'็›‘ๅˆถ': u'็›ฃ่ฃฝ', u'็›˜้‡Œ': u'็›ค่ฃก', u'็›˜ๅ›ž': u'็›ค่ฟด', u'ๅขๆฃฑไผฝ': u'็›ง็จœไผฝ', u'็›ฒๅนฒ': u'็›ฒๅนน', u'็›ดๆŽฅๅ‚ไธŽ': u'็›ดๆŽฅๅƒไธŽ', u'็›ดไบŽ': u'็›ดๆ–ผ', u'็›ดๅ†ฒ': u'็›ด่ก', u'็›ธๅนถ': u'็›ธไฝต', u'็›ธๅ…‹ๅˆถ': u'็›ธๅ…‹ๅˆถ', u'็›ธๅ…‹ๆœ': u'็›ธๅ…‹ๆœ', u'็›ธๅ…‹': u'็›ธๅ‰‹', u'็›ธๅนฒ': u'็›ธๅนฒ', u'็›ธไบŽ': u'็›ธๆ–ผ', u'็›ธๅ†ฒ': u'็›ธ่ก', u'็›ธๆ–—': u'็›ธ้ฌฅ', u'็œ‹ไธ‹่กจ': u'็œ‹ไธ‹้Œถ', u'็œ‹ไธ‹้’Ÿ': u'็œ‹ไธ‹้˜', u'็œ‹ๅ‡†': u'็œ‹ๆบ–', u'็œ‹็€่กจ': u'็œ‹่‘—้Œถ', u'็œ‹็€้’Ÿ': u'็œ‹่‘—้˜', u'็œ‹็€้’Ÿ่กจ': u'็œ‹่‘—้˜้Œถ', u'็œ‹่กจ้ข': u'็œ‹่กจ้ข', u'็œ‹่กจ': u'็œ‹้Œถ', u'็œ‹้’Ÿ': u'็œ‹้˜', u'็œŸๅ‡ถ': u'็œŸๅ…‡', u'็œŸไธช': u'็œŸ็ฎ‡', u'็œผๅนฒ': u'็œผไนพ', u'็œผๅธ˜': u'็œผๅธ˜', u'็œผ็œถ้‡Œ': u'็œผ็œถ่ฃก', u'็œผ็›้‡Œ': u'็œผ็›่ฃก', u'็œผ่ฏ': u'็œผ่—ฅ', u'็œผ้‡Œ': u'็œผ่ฃก', u'ๅ›ฐไน': u'็ไน', u'ๅ›ฐๅ€ฆ': u'็ๅ€ฆ', u'ๅ›ฐ่ง‰': u'็่ฆบ', u'็ก็€ไบ†': u'็ก่‘—ไบ†', u'็กๆธธ็—…': u'็ก้Š็—…', u'็ž„ๅ‡†': u'็ž„ๆบ–', u'็ž…ไธ‹่กจ': u'็ž…ไธ‹้Œถ', u'็ž…ไธ‹้’Ÿ': u'็ž…ไธ‹้˜', u'็žง็€่กจ': u'็žง่‘—้Œถ', u'็žง็€้’Ÿ': u'็žง่‘—้˜', u'็žง็€้’Ÿ่กจ': u'็žง่‘—้˜้Œถ', u'ไบ†ๆœ›': u'็žญๆœ›', u'ไบ†็„ถ': u'็žญ็„ถ', u'ไบ†่‹ฅๆŒ‡ๆŽŒ': u'็žญ่‹ฅๆŒ‡ๆŽŒ', u'็žณ่’™': u'็žณ็Ÿ‡', u'่’™ไบ‹': u'็Ÿ‡ไบ‹', u'่’™ๆ˜งๆ— ็Ÿฅ': u'็Ÿ‡ๆ˜ง็„ก็Ÿฅ', u'่’™ๆทท': u'็Ÿ‡ๆทท', u'่’™็ž': u'็Ÿ‡็ž', u'่’™็œฌ': u'็Ÿ‡็Ÿ“', u'่’™่ฉ': u'็Ÿ‡่ต', u'่’™็€': u'็Ÿ‡่‘—', u'่’™็€้”…ๅ„ฟ': u'็Ÿ‡่‘—้‹ๅ…’', u'่’™ๅคด่ฝฌ': u'็Ÿ‡้ ญ่ฝ‰', u'่’™้ช—': u'็Ÿ‡้จ™', u'็žฉๆ‰˜': u'็Ÿš่จ—', u'็Ÿœๅบ„': u'็Ÿœ่ŽŠ', u'็Ÿญๅ‡ ': u'็Ÿญๅ‡ ', u'็ŸญไบŽ': u'็Ÿญๆ–ผ', u'็Ÿญๅ‘': u'็Ÿญ้ซฎ', u'็Ÿฎๅ‡ ': u'็Ÿฎๅ‡ ', u'็Ÿณๅ‡ ': u'็Ÿณๅ‡ ', u'็Ÿณๅฎถๅบ„': u'็Ÿณๅฎถ่ŽŠ', u'็Ÿณๆข': u'็Ÿณๆจ‘', u'็Ÿณ่‹ฑ่กจ': 
u'็Ÿณ่‹ฑ้Œถ', u'็Ÿณ่‹ฑ้’Ÿ': u'็Ÿณ่‹ฑ้˜', u'็Ÿณ่‹ฑ้’Ÿ่กจ': u'็Ÿณ่‹ฑ้˜้Œถ', u'็Ÿณ่Žผ': u'็Ÿณ่“ด', u'็Ÿณ้’Ÿไนณ': u'็Ÿณ้˜ไนณ', u'็Ÿฝ่ฐท': u'็Ÿฝ่ฐท', u'็ ”ๅˆถ': u'็ ”่ฃฝ', u'็ ฐๅฝ“': u'็ ฐๅ™น', u'ๆœฑๅ”‡็š“้ฝฟ': u'็กƒๅ”‡็š“้ฝ’', u'ๆœฑๆ‰น': u'็กƒๆ‰น', u'ๆœฑ็ ‚': u'็กƒ็ ‚', u'ๆœฑ็ฌ”': u'็กƒ็ญ†', u'ๆœฑ็บข่‰ฒ': u'็กƒ็ด…่‰ฒ', u'ๆœฑ่‰ฒ': u'็กƒ่‰ฒ', u'ๆœฑ่ฐ•': u'็กƒ่ซญ', u'็กฌๅนฒ': u'็กฌๅนน', u'็กฎ็˜ ': u'็กฎ็˜ ', u'็ข‘ๅฟ—': u'็ข‘่ชŒ', u'็ขฐ้’Ÿ': u'็ขฐ้˜', u'็ ่กจ': u'็ขผ้Œถ', u'็ฃๅˆถ': u'็ฃ่ฃฝ', u'็ฃจๅˆถ': u'็ฃจ่ฃฝ', u'็ฃจ็‚ผ': u'็ฃจ้Š', u'็ฃฌ้’Ÿ': u'็ฃฌ้˜', u'็ก—็กฎ': u'็ฃฝ็กฎ', u'็ข้šพ็…งๅ‡†': u'็ค™้›ฃ็…งๅ‡†', u'็ ป่ฐทๆœบ': u'็คฑ็ฉ€ๆฉŸ', u'็คบ่Œƒ': u'็คบ็ฏ„', u'็คพ้‡Œ': u'็คพ่ฃก', u'็ฅ่ตž': u'็ฅ่ฎš', u'็ฅๅ‘': u'็ฅ้ซฎ', u'็ฅž่ผ้ƒๅž’': u'็ฅž่ผ้ฌฑๅฃ˜', u'็ฅžๆธธ': u'็ฅž้Š', u'็ฅž้›•ๅƒ': u'็ฅž้›•ๅƒ', u'็ฅž้›•': u'็ฅž้ตฐ', u'็ฅจๅบ„': u'็ฅจ่ŽŠ', u'็ฅญๅŠ': u'็ฅญๅผ”', u'็ฅญๅŠๆ–‡': u'็ฅญๅผ”ๆ–‡', u'็ฆๆฌฒ': u'็ฆๆ…พ', u'็ฆๆฌฒไธปไน‰': u'็ฆๆฌฒไธป็พฉ', u'็ฆ่ฏ': u'็ฆ่—ฅ', u'็ฅธไบŽ': u'็ฆๆ–ผ', u'ๅพกไพฎ': u'็ฆฆไพฎ', u'ๅพกๅฏ‡': u'็ฆฆๅฏ‡', u'ๅพกๅฏ’': u'็ฆฆๅฏ’', u'ๅพกๆ•Œ': u'็ฆฆๆ•ต', u'็คผ่ตž': u'็ฆฎ่ฎš', u'็ฆนไฝ™็ฒฎ': u'็ฆน้ค˜็ณง', u'็ฆพ่ฐท': u'็ฆพ็ฉ€', u'็งƒๅฆƒไน‹ๅ‘': u'็ฆฟๅฆƒไน‹้ซฎ', u'็งƒๅ‘': u'็ฆฟ้ซฎ', u'็ง€ๅ‘': u'็ง€้ซฎ', u'็งไธ‹้‡Œ': u'็งไธ‹่ฃก', u'็งๆฌฒ': u'็งๆ…พ', u'็งๆ–—': u'็ง้ฌฅ', u'็ง‹ๅ‡้‡Œ': u'็ง‹ๅ‡่ฃก', u'็ง‹ๅคฉ้‡Œ': u'็ง‹ๅคฉ่ฃก', u'็ง‹ๆ—ฅ้‡Œ': u'็ง‹ๆ—ฅ่ฃก', u'็ง‹่ฃค': u'็ง‹่คฒ', u'็ง‹ๆธธ': u'็ง‹้Š', u'็ง‹้˜ดๅ…ฅไบ•ๅนฒ': u'็ง‹้™ฐๅ…ฅไบ•ๅนน', u'็ง‹ๅ‘': u'็ง‹้ซฎ', u'็งๅธˆไธญ': u'็งๅธซไธญ', u'็งๅธˆ้“': u'็งๅธซ้“', u'็งๆ”พ': u'็งๆ”พ', u'็ง‘ๆ–—': u'็ง‘ๆ–—', u'็ง‘่Œƒ': u'็ง‘็ฏ„', u'็ง’่กจๆ˜Ž': u'็ง’่กจๆ˜Ž', u'็ง’่กจ็คบ': u'็ง’่กจ็คบ', u'็ง’่กจ': u'็ง’้Œถ', u'็ง’้’Ÿ': u'็ง’้˜', u'็งป็ฅธไบŽ': u'็งป็ฆๆ–ผ', u'็จ€ๆพ': u'็จ€้ฌ†', u'ๆฃฑๅฐ': u'็จœๅฐ', u'ๆฃฑๅญ': u'็จœๅญ', u'ๆฃฑๅฑ‚': u'็จœๅฑค', u'ๆฃฑๆŸฑ': u'็จœๆŸฑ', u'ๆฃฑ็™ป': u'็จœ็™ป', u'ๆฃฑๆฃฑ': u'็จœ็จœ', u'ๆฃฑ็ญ‰็™ป': u'็จœ็ญ‰็™ป', u'ๆฃฑ็บฟ': u'็จœ็ทš', u'ๆฃฑ็ผ': u'็จœ็ธซ', u'ๆฃฑ่ง’': 
u'็จœ่ง’', u'ๆฃฑ้”ฅ': u'็จœ้Œ', u'ๆฃฑ้•œ': u'็จœ้ก', u'ๆฃฑไฝ“': u'็จœ้ซ”', u'็ง่ฐท': u'็จฎ็ฉ€', u'็งฐ่ตž': u'็จฑ่ฎš', u'็จป่ฐท': u'็จป็ฉ€', u'็จฝๅพ': u'็จฝๅพต', u'่ฐทไบบ': u'็ฉ€ไบบ', u'่ฐทไฟๅฎถๅ•†': u'็ฉ€ไฟๅฎถๅ•†', u'่ฐทไป“': u'็ฉ€ๅ€‰', u'่ฐทๅœญ': u'็ฉ€ๅœญ', u'่ฐทๅœบ': u'็ฉ€ๅ ด', u'่ฐทๅญ': u'็ฉ€ๅญ', u'่ฐทๆ—ฅ': u'็ฉ€ๆ—ฅ', u'่ฐทๆ—ฆ': u'็ฉ€ๆ—ฆ', u'่ฐทๆข': u'็ฉ€ๆข', u'่ฐทๅฃณ': u'็ฉ€ๆฎผ', u'่ฐท็‰ฉ': u'็ฉ€็‰ฉ', u'่ฐท็šฎ': u'็ฉ€็šฎ', u'่ฐท็ฅž': u'็ฉ€็ฅž', u'่ฐท่ฐท': u'็ฉ€็ฉ€', u'่ฐท็ฑณ': u'็ฉ€็ฑณ', u'่ฐท็ฒ’': u'็ฉ€็ฒ’', u'่ฐท่ˆฑ': u'็ฉ€่‰™', u'่ฐท่‹—': u'็ฉ€่‹—', u'่ฐท่‰': u'็ฉ€่‰', u'่ฐท่ดต้ฅฟๅ†œ': u'็ฉ€่ฒด้ค“่พฒ', u'่ฐท่ดฑไผคๅ†œ': u'็ฉ€่ณคๅ‚ท่พฒ', u'่ฐท้“': u'็ฉ€้“', u'่ฐท้›จ': u'็ฉ€้›จ', u'่ฐท็ฑป': u'็ฉ€้กž', u'่ฐท้ฃŸ': u'็ฉ€้ฃŸ', u'็ฉ†็ฝ•้ป˜ๅพทๅކ': u'็ฉ†็ฝ•้ป˜ๅพทๆ›†', u'็ฉ†็ฝ•้ป˜ๅพทๅކๅฒ': u'็ฉ†็ฝ•้ป˜ๅพทๆญทๅฒ', u'็งฏๆžๅ‚ไธŽ': u'็ฉๆžๅƒไธŽ', u'็งฏๆžๅ‚ๅŠ ': u'็ฉๆžๅƒๅŠ ', u'็งฏๆท€': u'็ฉๆพฑ', u'็งฏ่ฐท': u'็ฉ็ฉ€', u'็งฏ่ฐท้˜ฒ้ฅฅ': u'็ฉ็ฉ€้˜ฒ้ฅ‘', u'็งฏ้ƒ': u'็ฉ้ฌฑ', u'็จณๅ ': u'็ฉฉไฝ”', u'็จณๆ‰Ž': u'็ฉฉ็ดฎ', u'็ฉบไธญๅธƒ้›ท': u'็ฉบไธญไฝˆ้›ท', u'็ฉบๆŠ•ๅธƒ้›ท': u'็ฉบๆŠ•ไฝˆ้›ท', u'็ฉบ่’™': u'็ฉบๆฟ›', u'็ฉบ่ก': u'็ฉบ่•ฉ', u'็ฉบ่ก่ก': u'็ฉบ่•ฉ่•ฉ', u'็ฉบ่ฐทๅ›ž้Ÿณ': u'็ฉบ่ฐทๅ›ž้Ÿณ', u'็ฉบ้’Ÿ': u'็ฉบ้˜', u'็ฉบไฝ™': u'็ฉบ้ค˜', u'็ช’ๆฌฒ': u'็ช’ๆ…พ', u'็ช—ๅฐไธŠ': u'็ช—ๅฐไธŠ', u'็ช—ๅธ˜': u'็ช—ๅธ˜', u'็ช—ๆ˜Žๅ‡ ไบฎ': u'็ช—ๆ˜Žๅ‡ ไบฎ', u'็ช—ๆ˜Žๅ‡ ๅ‡€': u'็ช—ๆ˜Žๅ‡ ๆทจ', u'็ช—ๅฐ': u'็ช—ๆชฏ', u'็ช้‡Œ': u'็ชฉ่ฃก', u'็ฉทไบŽ': u'็ชฎๆ–ผ', u'็ฉท่ฟฝไธ่ˆ': u'็ชฎ่ฟฝไธๆจ', u'็ฉทๅ‘': u'็ชฎ้ซฎ', u'็ชƒ้’ŸๆŽฉ่€ณ': u'็ซŠ้˜ๆŽฉ่€ณ', u'็ซ‹ไบŽ': u'็ซ‹ๆ–ผ', u'็ซ‹่Œƒ': u'็ซ‹็ฏ„', u'็ซ™ๅนฒๅฒธๅ„ฟ': u'็ซ™ไนพๅฒธๅ…’', u'็ซฅไป†': u'็ซฅๅƒ•', u'็ซฏๅบ„': u'็ซฏ่ŽŠ', u'็ซžๆ–—': u'็ซถ้ฌฅ', u'็ซนๅ‡ ': u'็ซนๅ‡ ', u'็ซนๆž—ไน‹ๆธธ': u'็ซนๆž—ไน‹้Š', u'็ซน็ญพ': u'็ซน็ฑค', u'็ฌ‘้‡Œ่—ๅˆ€': u'็ฌ‘่ฃก่—ๅˆ€', u'็ฌจ็ฌจๅ‘†ๅ‘†': u'็ฌจ็ฌจๅ‘†ๅ‘†', u'็ฌฌๅ››ๅ‡บๅฑ€': u'็ฌฌๅ››ๅ‡บๅฑ€', u'็ฌ”ๅˆ’': u'็ญ†ๅŠƒ', u'็ฌ”็งƒๅขจๅนฒ': u'็ญ†็ฆฟๅขจไนพ', u'็ญ‰ไบŽ': u'็ญ‰ๆ–ผ', u'็ฌ‹ๅนฒ': u'็ญไนพ', u'็ญ‘ๅ‰': 
u'็ญ‘ๅ‰', u'็ญ‘ๅŒ—': u'็ญ‘ๅŒ—', u'็ญ‘ๅทž': u'็ญ‘ๅทž', u'็ญ‘ๅพŒ': u'็ญ‘ๅพŒ', u'็ญ‘ๅŽ': u'็ญ‘ๅพŒ', u'็ญ‘ๆณข': u'็ญ‘ๆณข', u'็ญ‘็ดซ': u'็ญ‘็ดซ', u'็ญ‘่‚ฅ': u'็ญ‘่‚ฅ', u'็ญ‘่ฅฟ': u'็ญ‘่ฅฟ', u'็ญ‘้‚ฆ': u'็ญ‘้‚ฆ', u'็ญ‘้™ฝ': u'็ญ‘้™ฝ', u'็ญ‘้˜ณ': u'็ญ‘้™ฝ', u'็ญ”ๅค': u'็ญ”่ฆ†', u'็ญ”่ฆ†': u'็ญ”่ฆ†', u'็ญ–ๅˆ’': u'็ญ–ๅŠƒ', u'็ญตๅ‡ ': u'็ญตๅ‡ ', u'ไธชไธญๅŽŸๅ› ': u'็ฎ‡ไธญๅŽŸๅ› ', u'ไธชไธญๅฅฅๅฆ™': u'็ฎ‡ไธญๅฅงๅฆ™', u'ไธชไธญๅฅฅ็ง˜': u'็ฎ‡ไธญๅฅง็ง˜', u'ไธชไธญๅฅฝๆ‰‹': u'็ฎ‡ไธญๅฅฝๆ‰‹', u'ไธชไธญๅผบๆ‰‹': u'็ฎ‡ไธญๅผทๆ‰‹', u'ไธชไธญๆถˆๆฏ': u'็ฎ‡ไธญๆถˆๆฏ', u'ไธชไธญๆป‹ๅ‘ณ': u'็ฎ‡ไธญๆป‹ๅ‘ณ', u'ไธชไธญ็Ž„ๆœบ': u'็ฎ‡ไธญ็Ž„ๆฉŸ', u'ไธชไธญ็†็”ฑ': u'็ฎ‡ไธญ็†็”ฑ', u'ไธชไธญ่ฎฏๆฏ': u'็ฎ‡ไธญ่จŠๆฏ', u'ไธชไธญ่ต„่ฎฏ': u'็ฎ‡ไธญ่ณ‡่จŠ', u'ไธชไธญ้ซ˜ๆ‰‹': u'็ฎ‡ไธญ้ซ˜ๆ‰‹', u'ไธชๆ—ง': u'็ฎ‡่ˆŠ', u'็ฎ—ๅކ': u'็ฎ—ๆ›†', u'็ฎ—ๅކๅฒ': u'็ฎ—ๆญทๅฒ', u'็ฎ—ๅ‡†': u'็ฎ—ๆบ–', u'็ฎ—ๅ‘': u'็ฎ—้ซฎ', u'็ฎกไบบๅŠ่„šๅ„ฟไบ‹': u'็ฎกไบบๅผ”่…ณๅ…’ไบ‹', u'็ฎกๅˆถๆณ•': u'็ฎกๅˆถๆณ•', u'็ฎกๅนฒ': u'็ฎกๅนน', u'่Š‚ๆฌฒ': u'็ฏ€ๆ…พ', u'่Š‚ไฝ™': u'็ฏ€้ค˜', u'่Œƒไพ‹': u'็ฏ„ไพ‹', u'่Œƒๅ›ด': u'็ฏ„ๅœ', u'่Œƒๅญ—': u'็ฏ„ๅญ—', u'่Œƒๅผ': u'็ฏ„ๅผ', u'่Œƒๆ€งๅฝขๅ˜': u'็ฏ„ๆ€งๅฝข่ฎŠ', u'่Œƒๆ–‡': u'็ฏ„ๆ–‡', u'่Œƒๆœฌ': u'็ฏ„ๆœฌ', u'่Œƒ็•ด': u'็ฏ„็–‡', u'่Œƒ้‡‘': u'็ฏ„้‡‘', u'็ฎ€ๅนถ': u'็ฐกไฝต', u'็ฎ€ๆœด': u'็ฐกๆจธ', u'็ฐธ่ก': u'็ฐธ่•ฉ', u'็ญพ็€': u'็ฐฝ่‘—', u'็ญนๅˆ’': u'็ฑŒๅŠƒ', u'็ญพๅน': u'็ฑคๅน', u'็ญพๆŠผ': u'็ฑคๆŠผ', u'็ญพๆก': u'็ฑคๆข', u'็ญพ่ฏ—': u'็ฑค่ฉฉ', u'ๅๅคฉ': u'็ฑฒๅคฉ', u'ๅๆฑ‚': u'็ฑฒๆฑ‚', u'ๅ่ฏท': u'็ฑฒ่ซ‹', u'็ฑณ่ฐท': u'็ฑณ็ฉ€', u'็ฒ‰ๆ‹ณ็ปฃ่…ฟ': u'็ฒ‰ๆ‹ณ็นก่…ฟ', u'็ฒ‰็ญพๅญ': u'็ฒ‰็ฑคๅญ', u'็ฒ—ๅˆถ': u'็ฒ—่ฃฝ', u'็ฒพๅˆถไผ': u'็ฒพๅˆถไผ', u'็ฒพๅˆถไฝ': u'็ฒพๅˆถไฝ', u'็ฒพๅˆถๆœ': u'็ฒพๅˆถๆœ', u'็ฒพๅนฒ': u'็ฒพๅนน', u'็ฒพไบŽ': u'็ฒพๆ–ผ', u'็ฒพๅ‡†': u'็ฒพๆบ–', u'็ฒพ่‡ด': u'็ฒพ็ทป', u'็ฒพๅˆถ': u'็ฒพ่ฃฝ', u'็ฒพ็‚ผ': u'็ฒพ้Š', u'็ฒพ่พŸ': u'็ฒพ้—ข', u'็ฒพๆพ': u'็ฒพ้ฌ†', u'็ณŠ้‡Œ็ณŠๆถ‚': u'็ณŠ่ฃก็ณŠๅก—', u'็ณ•ๅนฒ': u'็ณ•ไนพ', u'็ฒช็งฝ่”‘้ข': u'็ณž็ฉข่กŠ้ข', u'ๅ›ขๅญ': u'็ณฐๅญ', u'็ณปๅˆ—้‡Œ': u'็ณปๅˆ—่ฃก', u'็ณป็€': 
u'็ณป่‘—', u'็ณป้‡Œ': u'็ณป่ฃก', u'็บชๅކ': u'็ด€ๆ›†', u'็บชๅކๅฒ': u'็ด€ๆญทๅฒ', u'็บฆๅ ': u'็ด„ไฝ”', u'็บข็ปณ็ณป่ถณ': u'็ด…็นฉ็นซ่ถณ', u'็บข้’Ÿ': u'็ด…้˜', u'็บข้œ‰็ด ': u'็ด…้œ‰็ด ', u'็บขๅ‘': u'็ด…้ซฎ', u'็บกๅ›ž': u'็ด†่ฟด', u'็บกไฝ™': u'็ด†้ค˜', u'็บก้ƒ': u'็ด†้ฌฑ', u'็บณๅพ': u'็ดๅพต', u'็บฏๆœด': u'็ด”ๆจธ', u'็บธๆ‰Ž': u'็ด™็ดฎ', u'็ด ๆœด': u'็ด ๆจธ', u'็ด ๅ‘': u'็ด ้ซฎ', u'็ด ้ข': u'็ด ้บต', u'็ดข้ฉฌ้‡Œ': u'็ดข้ฆฌ้‡Œ', u'็ดข้ฆฌ้‡Œ': u'็ดข้ฆฌ้‡Œ', u'็ดข้ข': u'็ดข้บต', u'็ดซๅงœ': u'็ดซ่–‘', u'ๆ‰ŽไธŠ': u'็ดฎไธŠ', u'ๆ‰Žไธ‹': u'็ดฎไธ‹', u'ๆ‰Žๅ›ฎ': u'็ดฎๅ›ฎ', u'ๆ‰Žๅฅฝ': u'็ดฎๅฅฝ', u'ๆ‰Žๅฎž': u'็ดฎๅฏฆ', u'ๆ‰Žๅฏจ': u'็ดฎๅฏจ', u'ๆ‰Žๅธฆๅญ': u'็ดฎๅธถๅญ', u'ๆ‰Žๆˆ': u'็ดฎๆˆ', u'ๆ‰Žๆ น': u'็ดฎๆ น', u'ๆ‰Ž่ฅ': u'็ดฎ็‡Ÿ', u'ๆ‰Ž็ดง': u'็ดฎ็ทŠ', u'ๆ‰Ž่„š': u'็ดฎ่…ณ', u'ๆ‰Ž่ฃน': u'็ดฎ่ฃน', u'ๆ‰Ž่ฏˆ': u'็ดฎ่ฉ', u'ๆ‰Ž่ตท': u'็ดฎ่ตท', u'ๆ‰Ž้“': u'็ดฎ้ต', u'็ป†ไธๅฎนๅ‘': u'็ดฐไธๅฎน้ซฎ', u'็ป†ๅฆ‚ๅ‘': u'็ดฐๅฆ‚้ซฎ', u'็ป†่‡ด': u'็ดฐ็ทป', u'็ป†็‚ผ': u'็ดฐ้Š', u'็ปˆไบŽ': u'็ต‚ๆ–ผ', u'็ป„้‡Œ': u'็ต„่ฃก', u'็ป“ไผดๅŒๆธธ': u'็ตไผดๅŒ้Š', u'็ป“ไผ™': u'็ตๅคฅ', u'็ป“ๆ‰Ž': u'็ต็ดฎ', u'็ป“ๅฝฉ': u'็ต็ถต', u'็ป“ไฝ™': u'็ต้ค˜', u'็ป“ๅ‘': u'็ต้ซฎ', u'็ปๅฏนๅ‚็…ง': u'็ต•ๅฐๅƒ็…ง', u'็ปไบŽ': u'็ต•ๆ–ผ', u'็ปžๅนฒ': u'็ตžไนพ', u'็ปœ่…ฎ่ƒก': u'็ตก่…ฎ้ฌ', u'็ป™ๆˆ‘ๅนฒ่„†': u'็ตฆๆˆ‘ๅนฒ่„†', u'็ป™ไบŽ': u'็ตฆๆ–ผ', u'ไธๆฅ็บฟๅŽป': u'็ตฒไพ†็ทšๅŽป', u'ไธๅธƒ': u'็ตฒๅธƒ', u'ไธๆฉๅ‘ๆ€จ': u'็ตฒๆฉ้ซฎๆ€จ', u'ไธๆฟ': u'็ตฒๆฟ', u'ไธ็“œๅธƒ': u'็ตฒ็“œๅธƒ', u'ไธ็ป’ๅธƒ': u'็ตฒ็ตจๅธƒ', u'ไธ็บฟ': u'็ตฒ็ทš', u'ไธ็ป‡ๅŽ‚': u'็ตฒ็น”ๅป ', u'ไธ่™ซ': u'็ตฒ่Ÿฒ', u'ไธๅ‘': u'็ตฒ้ซฎ', u'็ป‘ๆ‰Ž': u'็ถ็ดฎ', u'็ถ‘ๆ‰Ž': u'็ถ‘็ดฎ', u'็ปๆœ‰ไบ‘': u'็ถ“ๆœ‰ไบ‘', u'็ถ“ๆœ‰ไบ‘': u'็ถ“ๆœ‰ไบ‘', u'็ปฟๅ‘': u'็ถ ้ซฎ', u'็ปธ็ผŽๅบ„': u'็ถข็ทž่ŽŠ', u'็ปด็ณป': u'็ถญ็นซ', u'็ปพๅ‘': u'็ถฐ้ซฎ', u'็ฝ‘้‡Œ': u'็ถฒ่ฃก', u'็ฝ‘ๅฟ—': u'็ถฒ่ชŒ', u'ๅฝฉๅธฆ': u'็ถตๅธถ', u'ๅฝฉๆŽ’': u'็ถตๆŽ’', u'ๅฝฉๆฅผ': u'็ถตๆจ“', u'ๅฝฉ็‰Œๆฅผ': u'็ถต็‰Œๆจ“', u'ๅฝฉ็ƒ': u'็ถต็ƒ', u'ๅฝฉ็ปธ': u'็ถต็ถข', u'ๅฝฉ็บฟ': u'็ถต็ทš', u'ๅฝฉ่ˆน': 
u'็ถต่ˆน', u'ๅฝฉ่กฃ': u'็ถต่กฃ', u'็ดง่‡ด': u'็ทŠ็ทป', u'็ดง็ปท': u'็ทŠ็นƒ', u'็ดง็ปท็ปท': u'็ทŠ็นƒ็นƒ', u'็ดง็ปท็€': u'็ทŠ็นƒ่‘—', u'็ดง่ฟฝไธ่ˆ': u'็ทŠ่ฟฝไธๆจ', u'็ปชไฝ™': u'็ท’้ค˜', u'็ทๅ‡ถ': u'็ทๅ…‡', u'็ผ‰ๅ‡ถ': u'็ทๅ…‡', u'็ผ–ไฝ™': u'็ทจไฝ™', u'็ผ–ๅˆถๆณ•': u'็ทจๅˆถๆณ•', u'็ผ–้‡‡': u'็ทจๆŽก', u'็ผ–็ ่กจ': u'็ทจ็ขผ่กจ', u'็ผ–ๅˆถ': u'็ทจ่ฃฝ', u'็ผ–้’Ÿ': u'็ทจ้˜', u'็ผ–ๅ‘': u'็ทจ้ซฎ', u'็ผ“ๅพ': u'็ทฉๅพต', u'็ผ“ๅ†ฒ': u'็ทฉ่ก', u'่‡ดๅฏ†': u'็ทปๅฏ†', u'่ฆๅ›ž': u'็ธˆ่ฟด', u'็ผœ่‡ด': u'็ธ็ทป', u'ๅŽฟ้‡Œ': u'็ธฃ่ฃก', u'ๅŽฟๅฟ—': u'็ธฃ่ชŒ', u'็ผ้‡Œ': u'็ธซ่ฃก', u'็ผๅˆถ': u'็ธซ่ฃฝ', u'็ผฉๆ —': u'็ธฎๆ…„', u'็บตๆฌฒ': u'็ธฑๆ…พ', u'็บคๅคซ': u'็ธดๅคซ', u'็บคๆ‰‹': u'็ธดๆ‰‹', u'ๆ€ป่ฃๅˆถ': u'็ธฝ่ฃๅˆถ', u'็นๅค': u'็น่ค‡', u'็น้’Ÿ': u'็น้˜', u'็ปทไฝ': u'็นƒไฝ', u'็ปทๅญ': u'็นƒๅญ', u'็ปทๅธฆ': u'็นƒๅธถ', u'็ปทๆ‰’ๅŠๆ‹ท': u'็นƒๆ‰’ๅผ”ๆ‹ท', u'็ปท็ดง': u'็นƒ็ทŠ', u'็ปท่„ธ': u'็นƒ่‡‰', u'็ปท็€': u'็นƒ่‘—', u'็ปท็€่„ธ': u'็นƒ่‘—่‡‰', u'็ปท็€่„ธๅ„ฟ': u'็นƒ่‘—่‡‰ๅ…’', u'็ปทๅผ€': u'็นƒ้–‹', u'็ฉ—ๅธ้ฃ˜ไบ•ๅนฒ': u'็นๅนƒ้ฃ„ไบ•ๅนน', u'็ป•ๆข': u'็นžๆจ‘', u'็ปฃๅƒ': u'็นกๅƒ', u'็ปฃๅฃ': u'็นกๅฃ', u'็ปฃๅพ—': u'็นกๅพ—', u'็ปฃๆˆท': u'็นกๆˆถ', u'็ปฃๆˆฟ': u'็นกๆˆฟ', u'็ปฃๆฏฏ': u'็นกๆฏฏ', u'็ปฃ็ƒ': u'็นก็ƒ', u'็ปฃ็š„': u'็นก็š„', u'็ปฃ่Šฑ': u'็นก่Šฑ', u'็ปฃ่กฃ': u'็นก่กฃ', u'็ปฃ่ตท': u'็นก่ตท', u'็ปฃ้˜': u'็นก้–ฃ', u'็ปฃ้ž‹': u'็นก้ž‹', u'็ป˜ๅˆถ': u'็นช่ฃฝ', u'็ณปไธŠ': u'็นซไธŠ', u'็ณปไธ–': u'็นซไธ–', u'็ณปๅˆฐ': u'็นซๅˆฐ', u'็ณปๅ›š': u'็นซๅ›š', u'็ณปๅฟƒ': u'็นซๅฟƒ', u'็ณปๅฟต': u'็นซๅฟต', u'็ณปๆ€€': u'็นซๆ‡ท', u'็ณปๆ‹': u'็นซๆˆ€', u'็ณปไบŽ': u'็นซๆ–ผ', u'็ณปไบŽไธ€ๅ‘': u'็นซๆ–ผไธ€้ซฎ', u'็ณป็ป“': u'็นซ็ต', u'็ณป็ดง': u'็นซ็ทŠ', u'็ณป็ปณ': u'็นซ็นฉ', u'็ณป็ดฏ': u'็นซ็บ', u'็ณป่พž': u'็นซ่พญ', u'็ณป้ฃŽๆ•ๅฝฑ': u'็นซ้ขจๆ•ๅฝฑ', u'็ดฏๅ›š': u'็บๅ›š', u'็ดฏๅ †': u'็บๅ †', u'็ดฏ็“ฆ็ป“็ปณ': u'็บ็“ฆ็ต็นฉ', u'็ดฏ็ป': u'็บ็ดฒ', u'็ดฏ่‡ฃ': u'็บ่‡ฃ', u'็ผ ๆ–—': u'็บ้ฌฅ', u'ๆ‰ๅˆ™': u'็บ”ๅ‰‡', u'ๆ‰ๅฏๅฎน้ขœๅไบ”ไฝ™': u'็บ”ๅฏๅฎน้กๅไบ”้ค˜', u'ๆ‰ๅพ—ไธคๅนด': 
u'็บ”ๅพ—ๅ…ฉๅนด', u'ๆ‰ๆญค': u'็บ”ๆญค', u'ๅ›ๅญ': u'็ฝˆๅญ', u'ๅ›ๅ›็ฝ็ฝ': u'็ฝˆ็ฝˆ็ฝ็ฝ', u'ๅ›้จž': u'็ฝˆ้จž', u'็ฝฎไบŽ': u'็ฝฎๆ–ผ', u'็ฝฎ่จ€ๆˆ่Œƒ': u'็ฝฎ่จ€ๆˆ็ฏ„', u'้ช‚็€': u'็ฝต่‘—', u'็ฝขไบŽ': u'็ฝทๆ–ผ', u'็พ็ณป': u'็พˆ็นซ', u'็พŽๅ ': u'็พŽไฝ”', u'็พŽไป‘': u'็พŽๅด™', u'็พŽไบŽ': u'็พŽๆ–ผ', u'็พŽๅˆถ': u'็พŽ่ฃฝ', u'็พŽไธ‘': u'็พŽ้†œ', u'็พŽๅ‘': u'็พŽ้ซฎ', u'็พคไธ‘': u'็พค้†œ', u'็พกไฝ™': u'็พจ้ค˜', u'ไน‰ๅ ': u'็พฉไฝ”', u'ไน‰ไป†': u'็พฉๅƒ•', u'ไน‰ๅบ„': u'็พฉ่ŽŠ', u'็ฟ•่พŸ': u'็ฟ•้—ข', u'็ฟฑๆธธ': u'็ฟฑ้Š', u'็ฟปๆถŒ': u'็ฟปๆนง', u'็ฟปไบ‘่ฆ†้›จ': u'็ฟป้›ฒ่ฆ†้›จ', u'็ฟปๆพ': u'็ฟป้ฌ†', u'่€ๅนฒ': u'่€ไนพ', u'่€ไป†': u'่€ๅƒ•', u'่€ๅนฒ้ƒจ': u'่€ๅนน้ƒจ', u'่€่’™': u'่€ๆ‡ž', u'่€ไบŽ': u'่€ๆ–ผ', u'่€็ˆท้’Ÿ': u'่€็ˆบ้˜', u'่€ๅบ„': u'่€่ŽŠ', u'่€ๅงœ': u'่€่–‘', u'่€ๆฟ': u'่€้—†', u'่€้ข็šฎ': u'่€้ข็šฎ', u'่€ƒๅพ': u'่€ƒๅพต', u'่€Œๅ…‹ๅˆถ': u'่€Œๅ‰‹ๅˆถ', u'่€ๆ–—': u'่€้ฌฅ', u'่€•ไฝฃ': u'่€•ๅ‚ญ', u'่€•่Žท': u'่€•็ฉซ', u'่€ณไฝ™': u'่€ณ้ค˜', u'่€ฟไบŽ': u'่€ฟๆ–ผ', u'่Šๆ–‹ๅฟ—ๅผ‚': u'่Š้ฝ‹ๅฟ—็•ฐ', u'่˜้›‡': u'่˜ๅƒฑ', u'้—ป้ฃŽๅŽ': u'่ž้ขจๅพŒ', u'่”็ณป': u'่ฏ็นซ', u'ๅฌไบŽ': u'่ฝๆ–ผ', u'่‚‰ๅนฒ': u'่‚‰ไนพ', u'่‚‰ๆฌฒ': u'่‚‰ๆ…พ', u'่‚‰ไธ้ข': u'่‚‰็ตฒ้บต', u'่‚‰็พน้ข': u'่‚‰็พน้บต', u'่‚‰ๆพ': u'่‚‰้ฌ†', u'่‚š้‡Œ': u'่‚š่ฃก', u'่‚่„': u'่‚่‡Ÿ', u'่‚้ƒ': u'่‚้ฌฑ', u'่‚กๆ —': u'่‚กๆ…„', u'่‚ฅ็ญ‘ๆ–น่จ€': u'่‚ฅ็ญ‘ๆ–น่จ€', u'่‚ด้ฆ”': u'่‚ด้ฅŒ', u'่‚บ่„': u'่‚บ่‡Ÿ', u'่ƒƒ่ฏ': u'่ƒƒ่—ฅ', u'่ƒƒ้‡Œ': u'่ƒƒ่ฃก', u'่ƒŒๅ‘็€': u'่ƒŒๅ‘่‘—', u'่ƒŒๅœฐ้‡Œ': u'่ƒŒๅœฐ่ฃก', u'่ƒŽๅ‘': u'่ƒŽ้ซฎ', u'่ƒœ่‚ฝ': u'่ƒœ่‚ฝ', u'่ƒœ้”ฎ': u'่ƒœ้ต', u'่ƒกไบ‘': u'่ƒกไบ‘', u'่ƒกๅญๆ˜‚': u'่ƒกๅญๆ˜‚', u'่ƒกๆœดๅฎ‰': u'่ƒกๆจธๅฎ‰', u'่ƒก้‡Œ่ƒกๆถ‚': u'่ƒก่ฃก่ƒกๅก—', u'่ƒฝๅ…‹ๅˆถ': u'่ƒฝๅ‰‹ๅˆถ', u'่ƒฝๅนฒไผ‘': u'่ƒฝๅนฒไผ‘', u'่ƒฝๅนฒๆˆˆ': u'่ƒฝๅนฒๆˆˆ', u'่ƒฝๅนฒๆ‰ฐ': u'่ƒฝๅนฒๆ“พ', u'่ƒฝๅนฒๆ”ฟ': u'่ƒฝๅนฒๆ”ฟ', u'่ƒฝๅนฒๆถ‰': u'่ƒฝๅนฒๆถ‰', u'่ƒฝๅนฒ้ข„': u'่ƒฝๅนฒ้ ', u'่ƒฝๅนฒ': u'่ƒฝๅนน', u'่ƒฝ่‡ชๅˆถ': u'่ƒฝ่‡ชๅˆถ', u'่„‰ๅ†ฒ': u'่„ˆ่ก', u'่„Šๆข่ƒŒ': u'่„Šๆข่ƒŒ', 
u'่„Šๆข้ชจ': u'่„Šๆข้ชจ', u'่„Šๆข': u'่„Šๆจ‘', u'่„ฑ่ฐทๆœบ': u'่„ซ็ฉ€ๆฉŸ', u'่„ฑๅ‘': u'่„ซ้ซฎ', u'่„พ่„': u'่„พ่‡Ÿ', u'่…Šไน‹ไปฅไธบ้ฅต': u'่…Šไน‹ไปฅ็‚บ้คŒ', u'่…Šๅ‘ณ': u'่…Šๅ‘ณ', u'่…Šๆฏ’': u'่…Šๆฏ’', u'่…Š็ฌ”': u'่…Š็ญ†', u'่‚พ่„': u'่…Ž่‡Ÿ', u'่…ๅนฒ': u'่…ไนพ', u'่…ไฝ™': u'่…้ค˜', u'่…•่กจ': u'่…•้Œถ', u'่„‘ๅญ้‡Œ': u'่…ฆๅญ่ฃก', u'่„‘ๅนฒ': u'่…ฆๅนน', u'่…ฐ้‡Œ': u'่…ฐ่ฃก', u'่„šๆณจ': u'่…ณ่จป', u'่„š็‚ผ': u'่…ณ้Š', u'่†่ฏ': u'่†่—ฅ', u'่‚คๅ‘': u'่†š้ซฎ', u'่ƒถๅท': u'่† ๆฒ', u'่†จๆพ': u'่†จ้ฌ†', u'่‡ฃไป†': u'่‡ฃๅƒ•', u'ๅงๆธธ': u'่‡ฅ้Š', u'่‡ง่ฐทไบก็พŠ': u'่‡ง็ฉ€ไบก็พŠ', u'ไธดๆฝผๆ–—ๅฎ': u'่‡จๆฝผ้ฌฅๅฏถ', u'่‡ชๅˆถไธ€ไธ‹': u'่‡ชๅˆถไธ€ไธ‹', u'่‡ชๅˆถไธ‹ๆฅ': u'่‡ชๅˆถไธ‹ไพ†', u'่‡ชๅˆถไธ': u'่‡ชๅˆถไธ', u'่‡ชๅˆถไน‹ๅŠ›': u'่‡ชๅˆถไน‹ๅŠ›', u'่‡ชๅˆถไน‹่ƒฝ': u'่‡ชๅˆถไน‹่ƒฝ', u'่‡ชๅˆถไป–': u'่‡ชๅˆถไป–', u'่‡ชๅˆถไผ': u'่‡ชๅˆถไผ', u'่‡ชๅˆถไฝ ': u'่‡ชๅˆถไฝ ', u'่‡ชๅˆถๅŠ›': u'่‡ชๅˆถๅŠ›', u'่‡ชๅˆถๅœฐ': u'่‡ชๅˆถๅœฐ', u'่‡ชๅˆถๅฅน': u'่‡ชๅˆถๅฅน', u'่‡ชๅˆถๆƒ…': u'่‡ชๅˆถๆƒ…', u'่‡ชๅˆถๆˆ‘': u'่‡ชๅˆถๆˆ‘', u'่‡ชๅˆถๆœ': u'่‡ชๅˆถๆœ', u'่‡ชๅˆถ็š„่ƒฝ': u'่‡ชๅˆถ็š„่ƒฝ', u'่‡ชๅˆถ่ƒฝๅŠ›': u'่‡ชๅˆถ่ƒฝๅŠ›', u'่‡ชไบŽ': u'่‡ชๆ–ผ', u'่‡ชๅˆถ': u'่‡ช่ฃฝ', u'่‡ช่ง‰่‡ชๆ„ฟ': u'่‡ช่ฆบ่‡ชๆ„ฟ', u'่‡ณๅคš': u'่‡ณๅคš', u'่‡ณไบŽ': u'่‡ณๆ–ผ', u'่‡ดไบŽ': u'่‡ดๆ–ผ', u'่‡ปไบŽ': u'่‡ปๆ–ผ', u'่ˆ‚่ฐท': u'่ˆ‚็ฉ€', u'ไธŽๅ…‹ๅˆถ': u'่ˆ‡ๅ‰‹ๅˆถ', u'ๅ…ด่‡ด': u'่ˆˆ็ทป', u'ไธพๆ‰‹่กจ': u'่ˆ‰ๆ‰‹่กจ', u'ไธพๆ‰‹่กจๅ†ณ': u'่ˆ‰ๆ‰‹่กจๆฑบ', u'ๆ—งๅบ„': u'่ˆŠๅบ„', u'ๆ—งๅކ': u'่ˆŠๆ›†', u'ๆ—งๅކๅฒ': u'่ˆŠๆญทๅฒ', u'ๆ—ง่ฏ': u'่ˆŠ่—ฅ', u'ๆ—งๆธธ': u'่ˆŠ้Š', u'ๆ—ง่กจ': u'่ˆŠ้Œถ', u'ๆ—ง้’Ÿ': u'่ˆŠ้˜', u'ๆ—ง้’Ÿ่กจ': u'่ˆŠ้˜้Œถ', u'่ˆŒๅนฒๅ”‡็„ฆ': u'่ˆŒไนพๅ”‡็„ฆ', u'่ˆ’ๅท': u'่ˆ’ๆฒ', u'่ˆชๆตทๅކ': u'่ˆชๆตทๆ›†', u'่ˆชๆตทๅކๅฒ': u'่ˆชๆตทๆญทๅฒ', u'่ˆนๅชๅพ—': u'่ˆนๅชๅพ—', u'่ˆนๅชๆœ‰': u'่ˆนๅชๆœ‰', u'่ˆนๅช่ƒฝ': u'่ˆนๅช่ƒฝ', u'่ˆน้’Ÿ': u'่ˆน้˜', u'่ˆนๅช': u'่ˆน้šป', u'่ˆฐๅช': u'่‰ฆ้šป', u'่‰ฏ่ฏ': u'่‰ฏ่—ฅ', u'่‰ฒๆฌฒ': u'่‰ฒๆ…พ', u'่‰ทๅŽ': u'่‰ทๅŽ', u'่‰ณๅŽ': u'่‰ทๅŽ', u'่‰ธๆœจไธฐไธฐ': u'่‰ธๆœจไธฐไธฐ', 
u'่Š่ฏ': u'่Š่—ฅ', u'่Š’ๆžœๅนฒ': u'่Š’ๆžœไนพ', u'่Šฑๆ‹ณ็ปฃ่…ฟ': u'่Šฑๆ‹ณ็นก่…ฟ', u'่Šฑๅท': u'่Šฑๆฒ', u'่Šฑ็›†้‡Œ': u'่Šฑ็›†่ฃก', u'่Šฑๅบต่ฏ้€‰': u'่Šฑ่ด่ฉž้ธ', u'่Šฑ่ฏ': u'่Šฑ่—ฅ', u'่Šฑ้’Ÿ': u'่Šฑ้˜', u'่Šฑ้ฉฌๅŠๅ˜ด': u'่Šฑ้ฆฌๅผ”ๅ˜ด', u'่Šฑๅ“„': u'่Šฑ้ฌจ', u'่‹‘้‡Œ': u'่‹‘่ฃก', u'่‹ฅๅนฒ': u'่‹ฅๅนฒ', u'่‹ฆๅนฒ': u'่‹ฆๅนน', u'่‹ฆ่ฏ': u'่‹ฆ่—ฅ', u'่‹ฆ้‡Œ': u'่‹ฆ่ฃก', u'่‹ฆๆ–—': u'่‹ฆ้ฌฅ', u'่‹Ž้บป': u'่‹ง้บป', u'่‹ฑๅ ': u'่‹ฑไฝ”', u'่‹น่ฆ': u'่‹น็ธˆ', u'่Œ‚้ƒฝๆท€': u'่Œ‚้ƒฝๆพฑ', u'่Œƒๆ–‡ๅŒ': u'่Œƒๆ–‡ๅŒ', u'่Œƒๆ–‡ๆญฃๅ…ฌ': u'่Œƒๆ–‡ๆญฃๅ…ฌ', u'่Œƒๆ–‡็€พ': u'่Œƒๆ–‡็€พ', u'่Œƒๆ–‡ๆพœ': u'่Œƒๆ–‡็€พ', u'่Œƒๆ–‡็…ง': u'่Œƒๆ–‡็…ง', u'่Œƒๆ–‡็จ‹': u'่Œƒๆ–‡็จ‹', u'่Œƒๆ–‡่Šณ': u'่Œƒๆ–‡่Šณ', u'่Œƒๆ–‡่—ค': u'่Œƒๆ–‡่—ค', u'่Œƒๆ–‡่™Ž': u'่Œƒๆ–‡่™Ž', u'่Œƒ็™ปๅ ก': u'่Œƒ็™ปๅ ก', u'่Œถๅ‡ ': u'่Œถๅ‡ ', u'่Œถๅบ„': u'่Œถ่ŽŠ', u'่Œถไฝ™': u'่Œถ้ค˜', u'่Œถ้ข': u'่Œถ้บต', u'่‰ไธ›้‡Œ': u'่‰ๅข่ฃก', u'่‰ๅนฟ': u'่‰ๅนฟ', u'่‰่': u'่‰่', u'่‰่ฏ': u'่‰่—ฅ', u'่ๅฑ…': u'่ๅฑ…', u'่่‡ป': u'่่‡ป', u'่้ฅฅ': u'่้ฅ‘', u'่ท่Šฑๆท€': u'่ท่Šฑๆพฑ', u'ๅบ„ไธŠ': u'่ŽŠไธŠ', u'ๅบ„ไธป': u'่ŽŠไธป', u'ๅบ„ๅ‘จ': u'่ŽŠๅ‘จ', u'ๅบ„ๅ‘˜': u'่ŽŠๅ“ก', u'ๅบ„ไธฅ': u'่ŽŠๅšด', u'ๅบ„ๅ›ญ': u'่ŽŠๅœ’', u'ๅบ„ๅฃซ้กฟ้“': u'่ŽŠๅฃซ้ “้“', u'ๅบ„ๅญ': u'่ŽŠๅญ', u'ๅบ„ๅฎข': u'่ŽŠๅฎข', u'ๅบ„ๅฎถ': u'่ŽŠๅฎถ', u'ๅบ„ๆˆท': u'่ŽŠๆˆถ', u'ๅบ„ๆˆฟ': u'่ŽŠๆˆฟ', u'ๅบ„ๆ•ฌ': u'่ŽŠๆ•ฌ', u'ๅบ„็”ฐ': u'่ŽŠ็”ฐ', u'ๅบ„็จผ': u'่ŽŠ็จผ', u'ๅบ„่ˆ„่ถŠๅŸ': u'่ŽŠ่ˆ„่ถŠๅŸ', u'ๅบ„้‡Œ': u'่ŽŠ่ฃก', u'ๅบ„่ฏญ': u'่ŽŠ่ชž', u'ๅบ„ๅ†œ': u'่ŽŠ่พฒ', u'ๅบ„้‡': u'่ŽŠ้‡', u'ๅบ„้™ข': u'่ŽŠ้™ข', u'ๅบ„้ชš': u'่ŽŠ้จท', u'่ŒŽๅนฒ': u'่Ž–ๅนน', u'่Žฝ่ก': u'่Žฝ่•ฉ', u'่Œไธไฝ“': u'่Œ็ตฒ้ซ”', u'่œๅนฒ': u'่œไนพ', u'่œ่‚ด': u'่œ่‚ด', u'่ ๆฃฑ่œ': u'่ ็จœ่œ', u'่ ่ๅนฒ': u'่ ่˜ฟไนพ', u'ๅŽไธฅ้’Ÿ': u'่ฏๅšด้˜', u'ๅŽๅ‘': u'่ฏ้ซฎ', u'ไธ‡ไธ€ๅช': u'่ฌไธ€ๅช', u'ไธ‡ไธช': u'่ฌๅ€‹', u'ไธ‡ๅคšๅช': u'่ฌๅคš้šป', u'ไธ‡ๅคฉๅŽ': u'่ฌๅคฉๅพŒ', u'ไธ‡ๅนดๅކ่กจ': u'่ฌๅนดๆ›†้Œถ', u'ไธ‡ๅކ': u'่ฌๆ›†', u'ไธ‡ๅކๅฒ': u'่ฌๆญทๅฒ', u'ไธ‡็ญพๆ’ๆžถ': 
u'่ฌ็ฑคๆ’ๆžถ', u'ไธ‡ๆ‰Ž': u'่ฌ็ดฎ', u'ไธ‡่ฑก': u'่ฌ่ฑก', u'ไธ‡ๅช': u'่ฌ้šป', u'ไธ‡ไฝ™': u'่ฌ้ค˜', u'่ฝ่…ฎ่ƒก': u'่ฝ่…ฎ้ฌ', u'่ฝๅ‘': u'่ฝ้ซฎ', u'ๅถๅถ็น': u'่‘‰ๅถ็น', u'็€ๅ„ฟ': u'่‘—ๅ…’', u'็€ๅ…‹ๅˆถ': u'่‘—ๅ‰‹ๅˆถ', u'็€ไนฆ็ซ‹่ฏด': u'่‘—ๆ›ธ็ซ‹่ชช', u'็€่‰ฒ่ฝฏไฝ“': u'่‘—่‰ฒ่ปŸ้ซ”', u'็€้‡ๆŒ‡ๅ‡บ': u'่‘—้‡ๆŒ‡ๅ‡บ', u'็€ๅฝ•': u'่‘—้Œ„', u'็€ๅฝ•่ง„ๅˆ™': u'่‘—้Œ„่ฆๅ‰‡', u'่‘กๅ ': u'่‘กไฝ”', u'่‘ก่„ๅนฒ': u'่‘ก่„ไนพ', u'่‘ฃๆฐๅฐๅ‘': u'่‘ฃๆฐๅฐ้ซฎ', u'่‘ซ่Šฆ้‡Œๅ–็”šไนˆ่ฏ': u'่‘ซ่˜†่ฃก่ณฃ็”š้บผ่—ฅ', u'่’™ๆฑ—่ฏ': u'่’™ๆฑ—่—ฅ', u'่’™ๅบ„': u'่’™่ŽŠ', u'่’™้›พ้œฒ': u'่’™้œง้œฒ', u'่’œๅ‘': u'่’œ้ซฎ', u'่‹ๆœฏ': u'่’ผๆœฎ', u'่‹ๅ‘': u'่’ผ้ซฎ', u'่‹้ƒ': u'่’ผ้ฌฑ', u'่“„ๅ‘': u'่“„้ซฎ', u'่“„่ƒก': u'่“„้ฌ', u'่“„้กป': u'่“„้ฌš', u'่“Š้ƒ': u'่“Š้ฌฑ', u'่“ฌ่“ฌๆพๆพ': u'่“ฌ่“ฌ้ฌ†้ฌ†', u'่“ฌๅ‘': u'่“ฌ้ซฎ', u'่“ฌๆพ': u'่“ฌ้ฌ†', u'ๅ‚็ปฅ': u'่”˜็ถ', u'่‘ฑ้ƒ': u'่”ฅ้ฌฑ', u'่ž้บฆ้ข': u'่•Ž้บฅ้บต', u'่กๆฅ่กๅŽป': u'่•ฉไพ†่•ฉๅŽป', u'่กๅฅณ': u'่•ฉๅฅณ', u'่กๅฆ‡': u'่•ฉๅฉฆ', u'่กๅฏ‡': u'่•ฉๅฏ‡', u'่กๅนณ': u'่•ฉๅนณ', u'่กๆฐ”ๅ›ž่‚ ': u'่•ฉๆฐฃ่ฟด่…ธ', u'่กๆถค': u'่•ฉๆปŒ', u'่กๆผพ': u'่•ฉๆผพ', u'่ก็„ถ': u'่•ฉ็„ถ', u'่กไบง': u'่•ฉ็”ข', u'่ก่ˆŸ': u'่•ฉ่ˆŸ', u'่ก่ˆน': u'่•ฉ่ˆน', u'่ก่ก': u'่•ฉ่•ฉ', u'่งๅ‚': u'่•ญ่”˜', u'่–„ๅนธ': u'่–„ๅ€–', u'่–„ๅนฒ': u'่–„ๅนน', u'ๅงœๆ˜ฏ่€็š„่พฃ': u'่–‘ๆ˜ฏ่€็š„่พฃ', u'ๅงœๆœซ': u'่–‘ๆœซ', u'ๅงœๆก‚': u'่–‘ๆก‚', u'ๅงœๆฏ': u'่–‘ๆฏ', u'ๅงœๆฑ': u'่–‘ๆฑ', u'ๅงœๆฑค': u'่–‘ๆนฏ', u'ๅงœ็‰‡': u'่–‘็‰‡', u'ๅงœ็ณ–': u'่–‘็ณ–', u'ๅงœไธ': u'่–‘็ตฒ', u'ๅงœ่€่พฃ': u'่–‘่€่พฃ', u'ๅงœ่Œถ': u'่–‘่Œถ', u'ๅงœ่“‰': u'่–‘่“‰', u'ๅงœ้ฅผ': u'่–‘้ค…', u'ๅงœ้ป„': u'่–‘้ปƒ', u'่–™ๅ‘': u'่–™้ซฎ', u'่–ๅœ': u'่–่””', u'่‹งๆ‚ด': u'่–ดๆ‚ด', u'่–ด็ƒฏ': u'่–ด็ƒฏ', u'่‹ง็ƒฏ': u'่–ด็ƒฏ', u'ๅ€Ÿไปฅ': u'่—‰ไปฅ', u'ๅ€ŸๅŠฉ': u'่—‰ๅŠฉ', u'ๅ€Ÿๅฏ‡ๅ…ต': u'่—‰ๅฏ‡ๅ…ต', u'ๅ€Ÿๆ‰‹': u'่—‰ๆ‰‹', u'ๅ€Ÿๆœบ': u'่—‰ๆฉŸ', u'ๅ€Ÿๆญค': u'่—‰ๆญค', u'ๅ€Ÿ็”ฑ': u'่—‰็”ฑ', u'ๅ€Ÿ็ฎธไปฃ็ญน': u'่—‰็ฎธไปฃ็ฑŒ', u'ๅ€Ÿ็€': u'่—‰่‘—', u'ๅ€Ÿ่ต„': u'่—‰่ณ‡', u'่“ๆท€': 
u'่—ๆพฑ', u'่—ไบŽ': u'่—ๆ–ผ', u'่—ๅކ': u'่—ๆ›†', u'่—ๅކๅฒ': u'่—ๆญทๅฒ', u'่—่’™ๆญŒๅ„ฟ': u'่—็Ÿ‡ๆญŒๅ…’', u'่—คๅˆถ': u'่—ค่ฃฝ', u'่ฏไธธ': u'่—ฅไธธ', u'่ฏๅ…ธ': u'่—ฅๅ…ธ', u'่ฏๅˆฐๅ‘ฝ้™ค': u'่—ฅๅˆฐๅ‘ฝ้™ค', u'่ฏๅˆฐ็—…้™ค': u'่—ฅๅˆฐ็—…้™ค', u'่ฏๅ‰‚': u'่—ฅๅŠ‘', u'่ฏๅŠ›': u'่—ฅๅŠ›', u'่ฏๅŒ…': u'่—ฅๅŒ…', u'่ฏๅ': u'่—ฅๅ', u'่ฏๅ‘ณ': u'่—ฅๅ‘ณ', u'่ฏๅ“': u'่—ฅๅ“', u'่ฏๅ•†': u'่—ฅๅ•†', u'่ฏๅ•': u'่—ฅๅ–ฎ', u'่ฏๅฉ†': u'่—ฅๅฉ†', u'่ฏๅญฆ': u'่—ฅๅญธ', u'่ฏๅฎณ': u'่—ฅๅฎณ', u'่ฏไธ“': u'่—ฅๅฐˆ', u'่ฏๅฑ€': u'่—ฅๅฑ€', u'่ฏๅธˆ': u'่—ฅๅธซ', u'่ฏๅบ—': u'่—ฅๅบ—', u'่ฏๅŽ‚': u'่—ฅๅป ', u'่ฏๅผ•': u'่—ฅๅผ•', u'่ฏๆ€ง': u'่—ฅๆ€ง', u'่ฏๆˆฟ': u'่—ฅๆˆฟ', u'่ฏๆ•ˆ': u'่—ฅๆ•ˆ', u'่ฏๆ–น': u'่—ฅๆ–น', u'่ฏๆ': u'่—ฅๆ', u'่ฏๆฃ‰': u'่—ฅๆฃ‰', u'่ฏๆฃ€ๅฑ€': u'่—ฅๆชขๅฑ€', u'่ฏๆฐด': u'่—ฅๆฐด', u'่ฏๆฒน': u'่—ฅๆฒน', u'่ฏๆถฒ': u'่—ฅๆถฒ', u'่ฏๆธฃ': u'่—ฅๆธฃ', u'่ฏ็‰‡': u'่—ฅ็‰‡', u'่ฏ็‰ฉ': u'่—ฅ็‰ฉ', u'่ฏ็Ž‹': u'่—ฅ็Ž‹', u'่ฏ็†': u'่—ฅ็†', u'่ฏ็“ถ': u'่—ฅ็“ถ', u'่ฏ็”จ': u'่—ฅ็”จ', u'่ฏ็š‚': u'่—ฅ็š‚', u'่ฏ็›’': u'่—ฅ็›’', u'่ฏ็Ÿณ': u'่—ฅ็Ÿณ', u'่ฏ็ง‘': u'่—ฅ็ง‘', u'่ฏ็ฎฑ': u'่—ฅ็ฎฑ', u'่ฏ็ญพ': u'่—ฅ็ฑค', u'่ฏ็ฒ‰': u'่—ฅ็ฒ‰', u'่ฏ็ณ–': u'่—ฅ็ณ–', u'่ฏ็บฟ': u'่—ฅ็ทš', u'่ฏ็ฝ': u'่—ฅ็ฝ', u'่ฏ่†': u'่—ฅ่†', u'่ฏ่ˆ–': u'่—ฅ่ˆ–', u'่ฏ่Œถ': u'่—ฅ่Œถ', u'่ฏ่‰': u'่—ฅ่‰', u'่ฏ่กŒ': u'่—ฅ่กŒ', u'่ฏ่ดฉ': u'่—ฅ่ฒฉ', u'่ฏ่ดน': u'่—ฅ่ฒป', u'่ฏ้…’': u'่—ฅ้…’', u'่ฏๅŒปๅญฆ็ณป': u'่—ฅ้†ซๅญธ็ณป', u'่ฏ้‡': u'่—ฅ้‡', u'่ฏ้’ˆ': u'่—ฅ้‡', u'่ฏ้“บ': u'่—ฅ้‹ช', u'่ฏๅคด': u'่—ฅ้ ญ', u'่ฏ้ฅต': u'่—ฅ้คŒ', u'่ฏ้ขๅ„ฟ': u'่—ฅ้บตๅ…’', u'่‹ๆ˜†': u'่˜‡ๅด‘', u'่•ดๅซ็€': u'่˜Šๅซ่‘—', u'่•ดๆถต็€': u'่˜Šๆถต่‘—', u'่‹นๆžœๅนฒ': u'่˜‹ๆžœไนพ', u'่ๅœ': u'่˜ฟ่””', u'่ๅœๅนฒ': u'่˜ฟ่””ไนพ', u'่™Ž้กป': u'่™Ž้ฌš', u'่™Žๆ–—': u'่™Ž้ฌฅ', u'ๅทๅฟ—': u'่™Ÿ่ชŒ', u'่™ซ้ƒจ': u'่™ซ้ƒจ', u'่šŠๅŠจ็‰›ๆ–—': u'่šŠๅ‹•็‰›้ฌฅ', u'่›‡ๅ‘ๅฅณๅฆ–': u'่›‡้ซฎๅฅณๅฆ–', u'่›”่™ซ่ฏ': u'่›”่Ÿฒ่—ฅ', u'่œ‚ๅŽ': u'่œ‚ๅŽ', u'่œ‚ๆถŒ': u'่œ‚ๆนง', u'่œ‚ๅ‡†': u'่œ‚ๆบ–', u'่œœ้‡Œ่ฐƒๆฒน': 
u'่œœ่ฃก่ชฟๆฒน', u'่œกๆœˆ': u'่œกๆœˆ', u'่œก็ฅญ': u'่œก็ฅญ', u'่Ž่Ž่žซ่žซ': u'่Ž่Ž่žซ่žซ', u'่Ž่ฐฎ': u'่Ž่ญ–', u'่™ฎ่จ็›ธๅŠ': u'่Ÿฃ่จ็›ธๅผ”', u'่›ๅนฒ': u'่Ÿถไนพ', u'่šๅŽ': u'่ŸปๅŽ', u'่ŸปๅŽ': u'่ŸปๅŽ', u'่ ๅนฒ': u'่ ๅนน', u'่›ฎๅนฒ': u'่ ปๅนน', u'่ก€ๆ‹ผ': u'่ก€ๆ‹š', u'่ก€ไฝ™': u'่ก€้ค˜', u'่กŒไบ‹ๅކ': u'่กŒไบ‹ๆ›†', u'่กŒไบ‹ๅކๅฒ': u'่กŒไบ‹ๆญทๅฒ', u'่กŒๅ‡ถ': u'่กŒๅ…‡', u'่กŒๅ‡ถๅ‰': u'่กŒๅ…‡ๅ‰', u'่กŒๅ‡ถๅพŒ': u'่กŒๅ…‡ๅพŒ', u'่กŒไบŽ': u'่กŒๆ–ผ', u'่กŒ็™พ้‡Œ่€…ๅŠไบŽไนๅ': u'่กŒ็™พ้‡Œ่€…ๅŠๆ–ผไนๅ', u'่ƒกๅŒ': u'่กš่ก•', u'ๅซๆ˜Ÿ้’Ÿ': u'่ก›ๆ˜Ÿ้˜', u'ๅ†ฒไธŠ': u'่กไธŠ', u'ๅ†ฒไธ‹': u'่กไธ‹', u'ๅ†ฒๆฅ': u'่กไพ†', u'ๅ†ฒๅ€’': u'่กๅ€’', u'ๅ†ฒๅ† ': u'่กๅ† ', u'ๅ†ฒๅ‡บ': u'่กๅ‡บ', u'ๅ†ฒๅˆฐ': u'่กๅˆฐ', u'ๅ†ฒๅˆบ': u'่กๅˆบ', u'ๅ†ฒๅ…‹': u'่กๅ‰‹', u'ๅ†ฒๅŠ›': u'่กๅŠ›', u'ๅ†ฒๅŠฒ': u'่กๅ‹', u'ๅ†ฒๅŠจ': u'่กๅ‹•', u'ๅ†ฒๅŽป': u'่กๅŽป', u'ๅ†ฒๅฃ': u'่กๅฃ', u'ๅ†ฒๅžฎ': u'่กๅžฎ', u'ๅ†ฒๅ ‚': u'่กๅ ‚', u'ๅ†ฒๅš้™ท้˜ต': u'่กๅ …้™ท้™ฃ', u'ๅ†ฒๅŽ‹': u'่กๅฃ“', u'ๅ†ฒๅคฉ': u'่กๅคฉ', u'ๅ†ฒๅทžๆ’žๅบœ': u'่กๅทžๆ’žๅบœ', u'ๅ†ฒๅฟƒ': u'่กๅฟƒ', u'ๅ†ฒๆމ': u'่กๆމ', u'ๅ†ฒๆ’ž': u'่กๆ’ž', u'ๅ†ฒๅ‡ป': u'่กๆ“Š', u'ๅ†ฒๆ•ฃ': u'่กๆ•ฃ', u'ๅ†ฒๆ€': u'่กๆฎบ', u'ๅ†ฒๅ†ณ': u'่กๆฑบ', u'ๅ†ฒๆณข': u'่กๆณข', u'ๅ†ฒๆตช': u'่กๆตช', u'ๅ†ฒๆฟ€': u'่กๆฟ€', u'ๅ†ฒ็„ถ': u'่ก็„ถ', u'ๅ†ฒ็›น': u'่ก็›น', u'ๅ†ฒ็ ด': u'่ก็ ด', u'ๅ†ฒ็จ‹': u'่ก็จ‹', u'ๅ†ฒ็ช': u'่ก็ช', u'ๅ†ฒ็บฟ': u'่ก็ทš', u'ๅ†ฒ็€': u'่ก่‘—', u'ๅ†ฒ่ฆ': u'่ก่ฆ', u'ๅ†ฒ่ตท': u'่ก่ตท', u'ๅ†ฒ่ฝฆ': u'่ก่ปŠ', u'ๅ†ฒ่ฟ›': u'่ก้€ฒ', u'ๅ†ฒ่ฟ‡': u'่ก้Ž', u'ๅ†ฒ้‡': u'่ก้‡', u'ๅ†ฒ้”‹': u'่ก้‹’', u'ๅ†ฒ้™ท': u'่ก้™ท', u'ๅ†ฒๅคด้˜ต': u'่ก้ ญ้™ฃ', u'ๅ†ฒ้ฃŽ': u'่ก้ขจ', u'่กฃ็ปฃๆ˜ผ่กŒ': u'่กฃ็นกๆ™่กŒ', u'่กจๅพ': u'่กจๅพต', u'่กจ้‡Œ': u'่กจ่ฃก', u'่กจ้ข': u'่กจ้ข', u'่กทไบŽ': u'่กทๆ–ผ', u'่ข‹้‡Œ': u'่ข‹่ฃก', u'่ข‹่กจ': u'่ข‹้Œถ', u'่ข–้‡Œ': u'่ข–่ฃก', u'่ขซ้‡Œ': u'่ขซ่ฃก', u'่ขซๅค': u'่ขซ่ค‡', u'่ขซ่ฆ†็€': u'่ขซ่ฆ†่‘—', u'่ขซๅ‘ไฝฏ็‹‚': u'่ขซ้ซฎไฝฏ็‹‚', u'่ขซๅ‘ๅ…ฅๅฑฑ': u'่ขซ้ซฎๅ…ฅๅฑฑ', u'่ขซๅ‘ๅทฆ่กฝ': u'่ขซ้ซฎๅทฆ่กฝ', 
u'่ขซๅ‘็ผจๅ† ': u'่ขซ้ซฎ็บ“ๅ† ', u'่ขซๅ‘้˜ณ็‹‚': u'่ขซ้ซฎ้™ฝ็‹‚', u'่ฃๅนถ': u'่ฃไฝต', u'่ฃๅˆถ': u'่ฃ่ฃฝ', u'้‡Œๆ‰‹': u'่ฃๆ‰‹', u'้‡Œๆตท': u'่ฃๆตท', u'่กฅไบŽ': u'่ฃœๆ–ผ', u'่กฅ่ฏ': u'่ฃœ่—ฅ', u'่กฅ่ก€่ฏ': u'่ฃœ่ก€่—ฅ', u'่กฅๆณจ': u'่ฃœ่จป', u'่ฃ…ๆŠ˜': u'่ฃๆ‘บ', u'้‡Œๅ‹พๅค–่ฟž': u'่ฃกๅ‹พๅค–้€ฃ', u'้‡Œๅค–': u'่ฃกๅค–', u'้‡Œๅฑ‹': u'่ฃกๅฑ‹', u'้‡Œๅฑ‚': u'่ฃกๅฑค', u'้‡Œๅธƒ': u'่ฃกๅธƒ', u'้‡Œๅธฆ': u'่ฃกๅธถ', u'้‡Œๅผฆ': u'่ฃกๅผฆ', u'้‡Œๅบ”ๅค–ๅˆ': u'่ฃกๆ‡‰ๅค–ๅˆ', u'้‡Œ่„Š': u'่ฃก่„Š', u'้‡Œ่กฃ': u'่ฃก่กฃ', u'้‡Œ้€šๅค–ๅ›ฝ': u'่ฃก้€šๅค–ๅœ‹', u'้‡Œ้€šๅค–ๆ•Œ': u'่ฃก้€šๅค–ๆ•ต', u'้‡Œ่พน': u'่ฃก้‚Š', u'้‡Œ้—ด': u'่ฃก้–“', u'้‡Œ้ข': u'่ฃก้ข', u'้‡Œ้ขๅŒ…': u'่ฃก้ขๅŒ…', u'้‡Œๅคด': u'่ฃก้ ญ', u'ๅˆถไปถ': u'่ฃฝไปถ', u'ๅˆถไฝœ': u'่ฃฝไฝœ', u'ๅˆถๅš': u'่ฃฝๅš', u'ๅˆถๅค‡': u'่ฃฝๅ‚™', u'ๅˆถๅ†ฐ': u'่ฃฝๅ†ฐ', u'ๅˆถๅ†ท': u'่ฃฝๅ†ท', u'ๅˆถๅ‰‚': u'่ฃฝๅŠ‘', u'ๅˆถๅ–': u'่ฃฝๅ–', u'ๅˆถๅ“': u'่ฃฝๅ“', u'ๅˆถๅ›พ': u'่ฃฝๅœ–', u'ๅˆถๅพ—': u'่ฃฝๅพ—', u'ๅˆถๆˆ': u'่ฃฝๆˆ', u'ๅˆถๆณ•': u'่ฃฝๆณ•', u'ๅˆถๆต†': u'่ฃฝๆผฟ', u'ๅˆถไธบ': u'่ฃฝ็‚บ', u'ๅˆถ็‰‡': u'่ฃฝ็‰‡', u'ๅˆถ็‰ˆ': u'่ฃฝ็‰ˆ', u'ๅˆถ็จ‹': u'่ฃฝ็จ‹', u'ๅˆถ็ณ–': u'่ฃฝ็ณ–', u'ๅˆถ็บธ': u'่ฃฝ็ด™', u'ๅˆถ่ฏ': u'่ฃฝ่—ฅ', u'ๅˆถ่กจ': u'่ฃฝ่กจ', u'ๅˆถ้€ ': u'่ฃฝ้€ ', u'ๅˆถ้ฉ': u'่ฃฝ้ฉ', u'ๅˆถ้ž‹': u'่ฃฝ้ž‹', u'ๅˆถ็›': u'่ฃฝ้นฝ', u'ๅคไปžๅนดๅฆ‚': u'่ค‡ไปžๅนดๅฆ‚', u'ๅคไปฅ็™พไธ‡': u'่ค‡ไปฅ็™พ่ฌ', u'ๅคไฝ': u'่ค‡ไฝ', u'ๅคไฟก': u'่ค‡ไฟก', u'ๅคๅ…ƒ้Ÿณ': u'่ค‡ๅ…ƒ้Ÿณ', u'ๅคๅ‡ฝๆ•ฐ': u'่ค‡ๅ‡ฝๆ•ธ', u'ๅคๅˆ†ๆ•ฐ': u'่ค‡ๅˆ†ๆ•ธ', u'ๅคๅˆ†ๆž': u'่ค‡ๅˆ†ๆž', u'ๅคๅˆ†่งฃ': u'่ค‡ๅˆ†่งฃ', u'ๅคๅˆ—': u'่ค‡ๅˆ—', u'ๅคๅˆฉ': u'่ค‡ๅˆฉ', u'ๅคๅฐ': u'่ค‡ๅฐ', u'ๅคๅฅ': u'่ค‡ๅฅ', u'ๅคๅˆ': u'่ค‡ๅˆ', u'ๅคๅ': u'่ค‡ๅ', u'ๅคๅ‘˜': u'่ค‡ๅ“ก', u'ๅคๅฃ': u'่ค‡ๅฃ', u'ๅคๅฃฎ': u'่ค‡ๅฃฏ', u'ๅคๅง“': u'่ค‡ๅง“', u'ๅคๅญ—้”ฎ': u'่ค‡ๅญ—้ต', u'ๅคๅฎก': u'่ค‡ๅฏฉ', u'ๅคๅ†™': u'่ค‡ๅฏซ', u'ๅคๅฏนๆ•ฐ': u'่ค‡ๅฐๆ•ธ', u'ๅคๅนณ้ข': u'่ค‡ๅนณ้ข', u'ๅคๅผ': u'่ค‡ๅผ', u'ๅคๅค': u'่ค‡ๅพฉ', u'ๅคๆ•ฐ': u'่ค‡ๆ•ธ', u'ๅคๆœฌ': u'่ค‡ๆœฌ', u'ๅคๆŸฅ': u'่ค‡ๆŸฅ', u'ๅคๆ ธ': u'่ค‡ๆ 
ธ', u'ๅคๆฃ€': u'่ค‡ๆชข', u'ๅคๆฌก': u'่ค‡ๆฌก', u'ๅคๆฏ”': u'่ค‡ๆฏ”', u'ๅคๅ†ณ': u'่ค‡ๆฑบ', u'ๅคๆต': u'่ค‡ๆต', u'ๅคๆต‹': u'่ค‡ๆธฌ', u'ๅคไบฉ็': u'่ค‡็•็', u'ๅคๅ‘': u'่ค‡็™ผ', u'ๅค็›ฎ': u'่ค‡็›ฎ', u'ๅค็œผ': u'่ค‡็œผ', u'ๅค็ง': u'่ค‡็จฎ', u'ๅค็บฟ': u'่ค‡็ทš', u'ๅคไน ': u'่ค‡็ฟ’', u'ๅค่‰ฒ': u'่ค‡่‰ฒ', u'ๅคๅถ': u'่ค‡่‘‰', u'ๅคๅˆถ': u'่ค‡่ฃฝ', u'ๅค่ฏŠ': u'่ค‡่จบ', u'ๅค่ฏ„': u'่ค‡่ฉ•', u'ๅค่ฏ': u'่ค‡่ฉž', u'ๅค่ฏ•': u'่ค‡่ฉฆ', u'ๅค่ฏพ': u'่ค‡่ชฒ', u'ๅค่ฎฎ': u'่ค‡่ญฐ', u'ๅคๅ˜ๅ‡ฝๆ•ฐ': u'่ค‡่ฎŠๅ‡ฝๆ•ธ', u'ๅค่ต›': u'่ค‡่ณฝ', u'ๅค่พ…้Ÿณ': u'่ค‡่ผ”้Ÿณ', u'ๅค่ฟฐ': u'่ค‡่ฟฐ', u'ๅค้€‰': u'่ค‡้ธ', u'ๅค้’ฑ': u'่ค‡้Œข', u'ๅค้˜…': u'่ค‡้–ฑ', u'ๅคๆ‚': u'่ค‡้›œ', u'ๅค็”ต': u'่ค‡้›ป', u'ๅค้Ÿณ': u'่ค‡้Ÿณ', u'ๅค้Ÿต': u'่ค‡้Ÿป', u'่ค’่ตž': u'่ค’่ฎš', u'่กฌ้‡Œ': u'่ฅฏ่ฃก', u'่ฅฟๅ ': u'่ฅฟไฝ”', u'่ฅฟๅ‘จ้’Ÿ': u'่ฅฟๅ‘จ้˜', u'่ฅฟๅฒณ': u'่ฅฟๅถฝ', u'่ฅฟๆ™’': u'่ฅฟๆ™’', u'่ฅฟๅކ': u'่ฅฟๆ›†', u'่ฅฟๅކๅฒ': u'่ฅฟๆญทๅฒ', u'่ฅฟ็ฑณ่ฐท': u'่ฅฟ็ฑณ่ฐท', u'่ฅฟ่ฏ': u'่ฅฟ่—ฅ', u'่ฅฟ่ฐท็ฑณ': u'่ฅฟ่ฐท็ฑณ', u'่ฅฟๆธธ': u'่ฅฟ้Š', u'่ฆๅ ': u'่ฆไฝ”', u'่ฆๅ…‹ๅˆถ': u'่ฆๅ‰‹ๅˆถ', u'่ฆๅ ๅœ': u'่ฆๅ ๅœ', u'่ฆ่‡ชๅˆถ': u'่ฆ่‡ชๅˆถ', u'่ฆๅ†ฒ': u'่ฆ่ก', u'่ฆไนˆ': u'่ฆ้บผ', u'่ฆ†ไบก': u'่ฆ†ไบก', u'่ฆ†ๅ‘ฝ': u'่ฆ†ๅ‘ฝ', u'่ฆ†ๅทขไน‹ไธ‹ๆ— ๅฎŒๅต': u'่ฆ†ๅทขไน‹ไธ‹็„กๅฎŒๅต', u'่ฆ†ๆฐด้šพๆ”ถ': u'่ฆ†ๆฐด้›ฃๆ”ถ', u'่ฆ†ๆฒก': u'่ฆ†ๆฒ’', u'่ฆ†็€': u'่ฆ†่‘—', u'่ฆ†็›–': u'่ฆ†่“‹', u'่ฆ†็›–็€': u'่ฆ†่“‹่‘—', u'่ฆ†่พ™': u'่ฆ†่ฝ', u'่ฆ†้›จ็ฟปไบ‘': u'่ฆ†้›จ็ฟป้›ฒ', u'่งไบŽ': u'่ฆ‹ๆ–ผ', u'่งๆฃฑ่ง่ง’': u'่ฆ‹็จœ่ฆ‹่ง’', u'่ง็ด ๆŠฑๆœด': u'่ฆ‹็ด ๆŠฑๆจธ', u'่ง้’Ÿไธๆ‰“': u'่ฆ‹้˜ไธๆ‰“', u'่ง„ๅˆ’': u'่ฆๅŠƒ', u'่ง„่Œƒ': u'่ฆ็ฏ„', u'่ฆ–ๅฆ‚ๅฏ‡ไป‡': u'่ฆ–ๅฆ‚ๅฏ‡่ฎŽ', u'่ง†ไบŽ': u'่ฆ–ๆ–ผ', u'่ง‚้‡‡': u'่ง€ๆŽก', u'่ง’่ฝๅ‘': u'่ง’่ฝ็™ผ', u'่ง’่ฝ้‡Œ': u'่ง’่ฝ่ฃก', u'่งšๆฃฑ': u'่งš็จœ', u'่งฃ้›‡': u'่งฃๅƒฑ', u'่งฃ็—›่ฏ': u'่งฃ็—›่—ฅ', u'่งฃ่ฏ': u'่งฃ่—ฅ', u'่งฃ้“ƒไป้กป็ณป้“ƒไบบ': u'่งฃ้ˆดไป้ ˆ็นซ้ˆดไบบ', u'่งฃ้“ƒ่ฟ˜้กป็ณป้“ƒไบบ': u'่งฃ้ˆด้‚„้ ˆ็นซ้ˆดไบบ', u'่งฃๅ‘ไฝฏ็‹‚': 
u'่งฃ้ซฎไฝฏ็‹‚', u'่งฆ้กป': u'่งธ้ฌš', u'่จ€ไบ‘': u'่จ€ไบ‘', u'่จ€ๅคง่€Œๅคธ': u'่จ€ๅคง่€Œๅคธ', u'่จ€่พฉ่€Œ็กฎ': u'่จ€่พฏ่€Œ็กฎ', u'่ฎขๅˆถ': u'่จ‚่ฃฝ', u'่ฎกๅˆ’': u'่จˆๅŠƒ', u'่ฎกๆ—ถ่กจ': u'่จˆๆ™‚้Œถ', u'ๆ‰˜ไบ†': u'่จ—ไบ†', u'ๆ‰˜ไบ‹': u'่จ—ไบ‹', u'ๆ‰˜ไบค': u'่จ—ไบค', u'ๆ‰˜ไบบ': u'่จ—ไบบ', u'ๆ‰˜ไป˜': u'่จ—ไป˜', u'ๆ‰˜ๅ„ฟๆ‰€': u'่จ—ๅ…’ๆ‰€', u'ๆ‰˜ๅค่ฎฝไปŠ': u'่จ—ๅค่ซทไปŠ', u'ๆ‰˜ๅ': u'่จ—ๅ', u'ๆ‰˜ๅ‘ฝ': u'่จ—ๅ‘ฝ', u'ๆ‰˜ๅ’Ž': u'่จ—ๅ’Ž', u'ๆ‰˜ๆขฆ': u'่จ—ๅคข', u'ๆ‰˜ๅคง': u'่จ—ๅคง', u'ๆ‰˜ๅญค': u'่จ—ๅญค', u'ๆ‰˜ๅบ‡': u'่จ—ๅบ‡', u'ๆ‰˜ๆ•…': u'่จ—ๆ•…', u'ๆ‰˜็–พ': u'่จ—็–พ', u'ๆ‰˜็—…': u'่จ—็—…', u'ๆ‰˜็ฎก': u'่จ—็ฎก', u'ๆ‰˜่จ€': u'่จ—่จ€', u'ๆ‰˜่ฏ': u'่จ—่ฉž', u'ๆ‰˜ไนฐ': u'่จ—่ฒท', u'ๆ‰˜ๅ–': u'่จ—่ณฃ', u'ๆ‰˜่บซ': u'่จ—่บซ', u'ๆ‰˜่พž': u'่จ—่พญ', u'ๆ‰˜่ฟ': u'่จ—้‹', u'ๆ‰˜่ฟ‡': u'่จ—้Ž', u'ๆ‰˜้™„': u'่จ—้™„', u'่ฎธๆ„ฟ่ตท็ป': u'่จฑๆ„ฟ่ตท็ถ“', u'่ฏ‰่ฏด็€': u'่จด่ชช่‘—', u'ๆณจไธŠ': u'่จปไธŠ', u'ๆณจๅ†Œ': u'่จปๅ†Š', u'ๆณจๅคฑ': u'่จปๅคฑ', u'ๆณจๅฎš': u'่จปๅฎš', u'ๆณจๆ˜Ž': u'่จปๆ˜Ž', u'ๆณจๆ ‡': u'่จปๆจ™', u'ๆณจ็”Ÿๅจ˜ๅจ˜': u'่จป็”Ÿๅจ˜ๅจ˜', u'ๆณจ็–': u'่จป็–', u'ๆณจ่„š': u'่จป่…ณ', u'ๆณจ่งฃ': u'่จป่งฃ', u'ๆณจ่ฎฐ': u'่จป่จ˜', u'ๆณจ่ฏ‘': u'่จป่ญฏ', u'ๆณจ้”€': u'่จป้Šท', u'ๆณจ๏ผš': u'่จป๏ผš', u'่ฏ„ๆ–ญๅ‘': u'่ฉ•ๆ–ท็™ผ', u'่ฏ„ๆณจ': u'่ฉ•่จป', u'่ฏๅนฒ': u'่ฉžๅนน', u'่ฏๆฑ‡': u'่ฉžๅฝ™', u'่ฏไฝ™': u'่ฉž้ค˜', u'่ฏขไบŽ': u'่ฉขๆ–ผ', u'่ฏขไบŽๅˆ่›': u'่ฉขๆ–ผ่Šป่•˜', u'่ฏ•่ฏ': u'่ฉฆ่—ฅ', u'่ฏ•ๅˆถ': u'่ฉฆ่ฃฝ', u'่ฏ—ไบ‘': u'่ฉฉไบ‘', u'่ฉฉไบ‘': u'่ฉฉไบ‘', u'่ฏ—่ตž': u'่ฉฉ่ฎš', u'่ฏ—้’Ÿ': u'่ฉฉ้˜', u'่ฏ—ไฝ™': u'่ฉฉ้ค˜', u'่ฏ้‡Œๆœ‰่ฏ': u'่ฉฑ่ฃกๆœ‰่ฉฑ', u'่ฏฅ้’Ÿ': u'่ฉฒ้˜', u'่ฏฆๅพๅšๅผ•': u'่ฉณๅพตๅšๅผ•', u'่ฏฆๆณจ': u'่ฉณ่จป', u'่ฏ”่ตž': u'่ช„่ฎš', u'ๅคธๅคšๆ–—้ก': u'่ช‡ๅคš้ฌฅ้ก', u'ๅคธ่ƒฝๆ–—ๆ™บ': u'่ช‡่ƒฝ้ฌฅๆ™บ', u'ๅคธ่ตž': u'่ช‡่ฎš', u'ๅฟ—ๅ“€': u'่ชŒๅ“€', u'ๅฟ—ๅ–œ': u'่ชŒๅ–œ', u'ๅฟ—ๅบ†': u'่ชŒๆ…ถ', u'ๅฟ—ๅผ‚': u'่ชŒ็•ฐ', u'่ฎคๅ‡†': u'่ชๆบ–', u'่ฏฑๅฅธ': u'่ช˜ๅงฆ', u'่ฏญไบ‘': u'่ชžไบ‘', u'่ฏญๆฑ‡': u'่ชžๅฝ™', u'่ฏญๆœ‰ไบ‘': u'่ชžๆœ‰ไบ‘', u'่ชžๆœ‰ไบ‘': u'่ชžๆœ‰ไบ‘', u'่ฏšๅพ': u'่ช ๅพต', 
u'่ฏšๆœด': u'่ช ๆจธ', u'่ฏฌ่”‘': u'่ชฃ่กŠ', u'่ฏด็€': u'่ชช่‘—', u'่ฐๅนฒ็š„': u'่ชฐๅนน็š„', u'่ฏพๅพ': u'่ชฒๅพต', u'่ฏพไฝ™': u'่ชฒ้ค˜', u'่ฐƒๅ‡†': u'่ชฟๆบ–', u'่ฐƒๅˆถ': u'่ชฟ่ฃฝ', u'่ฐƒ่กจ': u'่ชฟ้Œถ', u'่ฐƒ้’Ÿ่กจ': u'่ชฟ้˜้Œถ', u'่ฐˆๅพ': u'่ซ‡ๅพต', u'่ฏทๅ‚้˜…': u'่ซ‹ๅƒ้–ฑ', u'่ฏทๅ›ๅ…ฅ็“ฎ': u'่ซ‹ๅ›ๅ…ฅ็”•', u'่ฏทๆ‰˜': u'่ซ‹่จ—', u'ๅ’จ่ฏข': u'่ซฎ่ฉข', u'่ฏธไฝ™': u'่ซธ้ค˜', u'่ฐ‹ๅนฒ': u'่ฌ€ๅนน', u'่ฐข็ปๅ‚่ง‚': u'่ฌ็ต•ๅƒ่ง€', u'่ฐฌ้‡‡่™šๅฃฐ': u'่ฌฌๆŽก่™›่ฒ', u'่ฐฌ่ตž': u'่ฌฌ่ฎš', u'่ฌทไธ‘': u'่ฌท้†œ', u'่ฐจไบŽๅฟƒ': u'่ฌนๆ–ผๅฟƒ', u'่ญฆไธ–้’Ÿ': u'่ญฆไธ–้˜', u'่ญฆๆŠฅ้’Ÿ': u'่ญฆๅ ฑ้˜', u'่ญฆ็คบ้’Ÿ': u'่ญฆ็คบ้˜', u'่ญฆ้’Ÿ': u'่ญฆ้˜', u'่ฏ‘ๆณจ': u'่ญฏ่จป', u'ๆŠคๅ‘': u'่ญท้ซฎ', u'ๅ˜ๅพ': u'่ฎŠๅพต', u'ๅ˜ไธ‘': u'่ฎŠ้†œ', u'ๅ˜่„': u'่ฎŠ้ซ’', u'ๅ˜้ซ’': u'่ฎŠ้ซ’', u'ไป‡ๅ•': u'่ฎŽๅ•', u'ไป‡ๅคท': u'่ฎŽๅคท', u'ไป‡ๆ ก': u'่ฎŽๆ ก', u'ไป‡ๆญฃ': u'่ฎŽๆญฃ', u'ไป‡้š™': u'่ฎŽ้š™', u'่ตžไธ็ปๅฃ': u'่ฎšไธ็ต•ๅฃ', u'่ตžไฝฉ': u'่ฎšไฝฉ', u'่ตžๅ‘—': u'่ฎšๅ”„', u'่ตžๅนไธๅทฒ': u'่ฎšๅ˜†ไธๅทฒ', u'่ตžๆ‰ฌ': u'่ฎšๆš', u'่ตžไน': u'่ฎšๆจ‚', u'่ตžๆญŒ': u'่ฎšๆญŒ', u'่ตžๅน': u'่ฎšๆญŽ', u'่ตž็พŽ': u'่ฎš็พŽ', u'่ตž็พก': u'่ฎš็พจ', u'่ตž่ฎธ': u'่ฎš่จฑ', u'่ตž่ฏ': u'่ฎš่ฉž', u'่ตž่ช‰': u'่ฎš่ญฝ', u'่ตž่ต': u'่ฎš่ณž', u'่ตž่พž': u'่ฎš่พญ', u'่ตž้ข‚': u'่ฎš้ Œ', u'่ฑ†ๅนฒ': u'่ฑ†ไนพ', u'่ฑ†่…ๅนฒ': u'่ฑ†่…ไนพ', u'็ซ–็€': u'่ฑŽ่‘—', u'็ซ–่ตท่„Šๆข': u'่ฑŽ่ตท่„Šๆข', u'ไธฐๆปจ': u'่ฑๆฟฑ', u'ไธฐๆปจไนก': u'่ฑๆฟฑ้„‰', u'่ฑกๅพ': u'่ฑกๅพต', u'่ฑกๅพ็€': u'่ฑกๅพต่‘—', u'่ดŸๅ€บ็ดฏ็ดฏ': u'่ฒ ๅ‚ต็บ็บ', u'่ดชๆฌฒ': u'่ฒชๆ…พ', u'่ดตไปท': u'่ฒดไปท', u'่ดตๅนฒ': u'่ฒดๅนน', u'่ดตๅพ': u'่ฒดๅพต', u'่ฒทๅ‡ถ': u'่ฒทๅ…‡', u'ไนฐๅ‡ถ': u'่ฒทๅ…‡', u'ไนฐๆ–ญๅ‘': u'่ฒทๆ–ท็™ผ', u'่ดนๅ ': u'่ฒปไฝ”', u'่ดป่Œƒ': u'่ฒฝ็ฏ„', u'่ต„้‡‘ๅ ็”จ': u'่ณ‡้‡‘ๅ ็”จ', u'่ดพๅŽ': u'่ณˆๅŽ', u'่ณˆๅŽ': u'่ณˆๅŽ', u'่ตˆ้ฅฅ': u'่ณ‘้ฅ‘', u'่ต่ตž': u'่ณž่ฎš', u'่ดคๅŽ': u'่ณขๅŽ', u'่ณขๅŽ': u'่ณขๅŽ', u'ๅ–ๆ–ญๅ‘': u'่ณฃๆ–ท็™ผ', u'ๅ–ๅ‘†': u'่ณฃ็ƒ', u'่ดจๆœด': u'่ณชๆจธ', u'่ตŒๅฐ': u'่ณญๆชฏ', u'่ตŒๆ–—': u'่ณญ้ฌฅ', 
u'่ณธไฝ™': u'่ณธ้ค˜', u'่ดญๅนถ': u'่ณผไฝต', u'่ดญไนฐๆฌฒ': u'่ณผ่ฒทๆ…พ', u'่ตขไฝ™': u'่ด้ค˜', u'่ตคๆœฏ': u'่ตคๆœฎ', u'่ตค็ปณ็ณป่ถณ': u'่ตค็นฉ็นซ่ถณ', u'่ตค้œ‰็ด ': u'่ตค้œ‰็ด ', u'่ตฐๅ›ž่ทฏ': u'่ตฐๅ›ž่ทฏ', u'่ตทๅค': u'่ตท่ค‡', u'่ตทๅ“„': u'่ตท้ฌจ', u'่ถ…็บงๆฏ': u'่ถ…็ดš็›ƒ', u'่ตถๅˆถ': u'่ถ•่ฃฝ', u'่ตถ้ขๆฃ': u'่ถ•้บตๆฃ', u'่ตตๆฒปๅ‹‹': u'่ถ™ๆฒปๅ‹ณ', u'่ตตๅบ„': u'่ถ™่ŽŠ', u'่ถฑๅนฒ': u'่ถฒๅนน', u'่ถณไบŽ': u'่ถณๆ–ผ', u'่ทŒๆ‰‘': u'่ทŒๆ‰‘', u'่ทŒ่ก': u'่ทŒ่•ฉ', u'่ทฏ็ญพ': u'่ทฏ็ฑค', u'่ทณๆขๅฐไธ‘': u'่ทณๆจ‘ๅฐไธ‘', u'่ทณ่ก': u'่ทณ่•ฉ', u'่ทณ่กจ': u'่ทณ้Œถ', u'่นชไบŽ': u'่นชๆ–ผ', u'่นญๆฃฑๅญ': u'่นญ็จœๅญ', u'่บ้ƒ': u'่บ้ฌฑ', u'่บซไบŽ': u'่บซๆ–ผ', u'่บซไฝ“ๅ‘่‚ค': u'่บซ้ซ”้ซฎ่†š', u'่บฏๅนฒ': u'่ป€ๅนน', u'่ฝฆๅบ“้‡Œ': u'่ปŠๅบซ่ฃก', u'่ฝฆ็ซ™้‡Œ': u'่ปŠ็ซ™่ฃก', u'่ฝฆ้‡Œ': u'่ปŠ่ฃก', u'่ฝจ่Œƒ': u'่ปŒ็ฏ„', u'ๅ†›้˜Ÿๅ…‹ๅˆถ': u'่ป้šŠๅ‰‹ๅˆถ', u'่ฝฉ่พŸ': u'่ป’้—ข', u'่พƒไบŽ': u'่ผƒๆ–ผ', u'ๆŒฝๆ›ฒ': u'่ผ“ๆ›ฒ', u'ๆŒฝๆญŒ': u'่ผ“ๆญŒ', u'ๆŒฝ่ฏ': u'่ผ“่ฏ', u'ๆŒฝ่”': u'่ผ“่ฏ', u'ๆŒฝ่ฉž': u'่ผ“่ฉž', u'ๆŒฝ่ฏ': u'่ผ“่ฉž', u'ๆŒฝ่ฏ—': u'่ผ“่ฉฉ', u'ๆŒฝ่ฉฉ': u'่ผ“่ฉฉ', u'่ฝปไบŽ': u'่ผ•ๆ–ผ', u'่ฝป่ฝปๆพๆพ': u'่ผ•่ผ•้ฌ†้ฌ†', u'่ฝปๆพ': u'่ผ•้ฌ†', u'่ฝฎๅฅธ': u'่ผชๅงฆ', u'่ฝฎๅ›ž': u'่ผช่ฟด', u'่ฝฌๅ‘ๅพ€': u'่ฝ‰ๅ‘ๅพ€', u'่ฝฌๅฐ': u'่ฝ‰ๆชฏ', u'่ฝฌๆ‰˜': u'่ฝ‰่จ—', u'่ฝฌๆ–—ๅƒ้‡Œ': u'่ฝ‰้ฌฅๅƒ้‡Œ', u'่พ›ไธ‘': u'่พ›ไธ‘', u'่พŸ่ฐท': u'่พŸ็ฉ€', u'ๅŠžๅ…ฌๅฐ': u'่พฆๅ…ฌๆชฏ', u'่พžๆฑ‡': u'่พญๅฝ™', u'่พซๅ‘': u'่พฎ้ซฎ', u'่พฉๆ–—': u'่พฏ้ฌฅ', u'ๅ†œๅކ': u'่พฒๆ›†', u'ๅ†œๅކๅฒ': u'่พฒๆญทๅฒ', u'ๅ†œๆฐ‘ๅކ': u'่พฒๆฐ‘ๆ›†', u'ๅ†œๆฐ‘ๅކๅฒ': u'่พฒๆฐ‘ๆญทๅฒ', u'ๅ†œๅบ„': u'่พฒ่ŽŠ', u'ๅ†œ่ฏ': u'่พฒ่—ฅ', u'่ฟ‚ๅ›ž': u'่ฟ‚่ฟด', u'่ฟ‘ๆ—ฅ็„กไป‡': u'่ฟ‘ๆ—ฅ็„ก่ฎŽ', u'่ฟ‘ๆ—ฅ้‡Œ': u'่ฟ‘ๆ—ฅ่ฃก', u'่ฟ”ๆœด': u'่ฟ”ๆจธ', u'่ฟฅ็„ถๅ›žๅผ‚': u'่ฟฅ็„ถ่ฟด็•ฐ', u'่ฟซไบŽ': u'่ฟซๆ–ผ', u'ๅ›žๅ…‰่ฟ”็…ง': u'่ฟดๅ…‰่ฟ”็…ง', u'ๅ›žๅ‘': u'่ฟดๅ‘', u'ๅ›žๅœˆ': u'่ฟดๅœˆ', u'ๅ›žๅปŠ': u'่ฟดๅปŠ', u'ๅ›žๅฝขๅคน': u'่ฟดๅฝขๅคพ', u'ๅ›žๆ–‡': u'่ฟดๆ–‡', u'ๅ›žๆ—‹': u'่ฟดๆ—‹', u'ๅ›žๆต': u'่ฟดๆต', u'ๅ›ž็Žฏ': u'่ฟด็’ฐ', u'ๅ›ž็บน้’ˆ': u'่ฟด็ด‹้‡', 
u'ๅ›ž็ป•': u'่ฟด็นž', u'ๅ›ž็ฟ”': u'่ฟด็ฟ”', u'ๅ›ž่‚ ': u'่ฟด่…ธ', u'ๅ›ž่ฏต': u'่ฟด่ชฆ', u'ๅ›ž่ทฏ': u'่ฟด่ทฏ', u'ๅ›ž่ฝฌ': u'่ฟด่ฝ‰', u'ๅ›ž้€’ๆ€ง': u'่ฟด้žๆ€ง', u'ๅ›ž้ฟ': u'่ฟด้ฟ', u'ๅ›ž้Šฎ': u'่ฟด้‘พ', u'ๅ›ž้Ÿณ': u'่ฟด้Ÿณ', u'ๅ›žๅ“': u'่ฟด้Ÿฟ', u'ๅ›ž้ฃŽ': u'่ฟด้ขจ', u'่ฟทๅนป่ฏ': u'่ฟทๅนป่—ฅ', u'่ฟทไบŽ': u'่ฟทๆ–ผ', u'่ฟท่’™': u'่ฟทๆฟ›', u'่ฟท่ฏ': u'่ฟท่—ฅ', u'่ฟท้ญ‚่ฏ': u'่ฟท้ญ‚่—ฅ', u'่ฟฝๅ‡ถ': u'่ฟฝๅ…‡', u'้€€ไผ™': u'้€€ๅคฅ', u'้€€็ƒง่ฏ': u'้€€็‡’่—ฅ', u'้€€่—ไบŽๅฏ†': u'้€€่—ๆ–ผๅฏ†', u'้€†้’Ÿ': u'้€†้˜', u'้€†้’Ÿๅ‘': u'้€†้˜ๅ‘', u'้€†้ฃŽๅŽ': u'้€†้ขจๅพŒ', u'้€‹ๅ‘': u'้€‹้ซฎ', u'้€้ฅๆธธ': u'้€้™้Š', u'้€่พŸ': u'้€้—ข', u'่ฟ™ๅชไธ': u'้€™ๅชไธ', u'่ฟ™ๅชๅ…': u'้€™ๅชๅ…', u'่ฟ™ๅชๅฎน': u'้€™ๅชๅฎน', u'่ฟ™ๅช้‡‡': u'้€™ๅชๆŽก', u'่ฟ™ๅชๆ˜ฏ': u'้€™ๅชๆ˜ฏ', u'่ฟ™ๅช็”จ': u'้€™ๅช็”จ', u'่ฟ™ไผ™ไบบ': u'้€™ๅคฅไบบ', u'่ฟ™้‡Œ': u'้€™่ฃก', u'่ฟ™้’Ÿ': u'้€™้˜', u'่ฟ™ๅช': u'้€™้šป', u'่ฟ™ไนˆ': u'้€™้บผ', u'่ฟ™ไนˆ็€': u'้€™้บผ่‘—', u'้€šๅฅธ': u'้€šๅงฆ', u'้€šๅฟƒ้ข': u'้€šๅฟƒ้บต', u'้€šไบŽ': u'้€šๆ–ผ', u'้€šๅކ': u'้€šๆ›†', u'้€šๅކๅฒ': u'้€šๆญทๅฒ', u'้€šๅบ„': u'้€š่ŽŠ', u'้€žๅ‡ถ้ฌฅ็‹ ': u'้€žๅ…‡้ฌฅ็‹ ', u'้€žๅ‡ถๆ–—็‹ ': u'้€žๅ…‡้ฌฅ็‹ ', u'้€ ้’Ÿ': u'้€ ้˜', u'้€ ้’Ÿ่กจ': u'้€ ้˜้Œถ', u'้€ ๆ›ฒ': u'้€ ้บฏ', u'่ฟžไธ‰ๅนถๅ››': u'้€ฃไธ‰ไฝตๅ››', u'่ฟžๅ ': u'้€ฃไฝ”', u'่ฟž้‡‡': u'้€ฃๆŽก', u'่ฟž็ณป': u'้€ฃ็นซ', u'่ฟžๅบ„': u'้€ฃ่ŽŠ', u'ๅ‘จๆธธไธ–็•Œ': u'้€ฑ้Šไธ–็•Œ', u'่ฟ›ๅ ': u'้€ฒไฝ”', u'้€ผๅนถ': u'้€ผไฝต', u'้‡้ฃŽๅŽ': u'้‡้ขจๅพŒ', u'ๆธธไบ†': u'้Šไบ†', u'ๆธธไบบ': u'้Šไบบ', u'ๆธธไป™': u'้Šไป™', u'ๆธธไผด': u'้Šไผด', u'ๆธธไพ ': u'้Šไฟ ', u'ๆธธๅ†ถ': u'้Šๅ†ถ', u'ๆธธๅˆƒๆœ‰ไฝ™': u'้Šๅˆƒๆœ‰้ค˜', u'ๆธธๅŠจ': u'้Šๅ‹•', u'ๆธธๅ›ญ': u'้Šๅœ’', u'ๆธธๅญ': u'้Šๅญ', u'ๆธธๅญฆ': u'้Šๅญธ', u'ๆธธๅฎข': u'้Šๅฎข', u'ๆธธๅฎฆ': u'้Šๅฎฆ', u'ๆธธๅฑฑ็Žฉๆฐด': u'้Šๅฑฑ็Žฉๆฐด', u'ๆธธๅฟ…ๆœ‰ๆ–น': u'้Šๅฟ…ๆœ‰ๆ–น', u'ๆธธๆ†ฉ': u'้Šๆ†ฉ', u'ๆธธๆˆ': u'้Šๆˆฒ', u'ๆธธๆ‰‹ๅฅฝ้—ฒ': u'้Šๆ‰‹ๅฅฝ้–’', u'ๆธธๆ–น': u'้Šๆ–น', u'ๆธธๆ˜Ÿ': u'้Šๆ˜Ÿ', u'ๆธธไน': u'้Šๆจ‚', u'ๆธธๆ ‡ๅกๅฐบ': 
u'้Šๆจ™ๅกๅฐบ', u'ๆธธๅކ': u'้Šๆญท', u'ๆธธๆฐ‘': u'้Šๆฐ‘', u'ๆธธๆฒณ': u'้Šๆฒณ', u'ๆธธ็ŒŽ': u'้Š็ต', u'ๆธธ็Žฉ': u'้Š็Žฉ', u'ๆธธ่ก': u'้Š็›ช', u'ๆธธ็›ฎ้ช‹ๆ€€': u'้Š็›ฎ้จๆ‡ท', u'ๆธธ็จ‹': u'้Š็จ‹', u'ๆธธไธ': u'้Š็ตฒ', u'ๆธธๅ…ด': u'้Š่ˆˆ', u'ๆธธ่ˆน': u'้Š่ˆน', u'ๆธธ่‰‡': u'้Š่‰‡', u'ๆธธ่กไธๅฝ’': u'้Š่•ฉไธๆญธ', u'ๆธธ่‰บ': u'้Š่—', u'ๆธธ่กŒ': u'้Š่กŒ', u'ๆธธ่ก—': u'้Š่ก—', u'ๆธธ่งˆ': u'้Š่ฆฝ', u'ๆธธ่ฎฐ': u'้Š่จ˜', u'ๆธธ่ฏด': u'้Š่ชช', u'ๆธธ่ต„': u'้Š่ณ‡', u'ๆธธ่ตฐ': u'้Š่ตฐ', u'ๆธธ่ธช': u'้Š่นค', u'ๆธธ้€›': u'้Š้€›', u'ๆธธ้”™': u'้Š้Œฏ', u'ๆธธ็ฆป': u'้Š้›ข', u'ๆธธ้ช‘ๅ…ต': u'้Š้จŽๅ…ต', u'ๆธธ้ญ‚': u'้Š้ญ‚', u'่ฟ‡ไบŽ': u'้Žๆ–ผ', u'่ฟ‡ๆ†': u'้Žๆ†', u'่ฟ‡ๆฐด้ข': u'้Žๆฐด้บต', u'้“่Œƒ': u'้“็ฏ„', u'้€ŠไบŽ': u'้œๆ–ผ', u'้€’ๅ›ž': u'้ž่ฟด', u'่ฟœๅŽฟๆ‰่‡ณ': u'้ ็ธฃ็บ”่‡ณ', u'่ฟœๆธธ': u'้ ้Š', u'้จๆธธ': u'้จ้Š', u'้ฎไธ‘': u'้ฎ้†œ', u'่ฟไบŽ': u'้ทๆ–ผ', u'้€‰ๆ‰‹่กจๆ˜Ž': u'้ธๆ‰‹่กจๆ˜Ž', u'้€‰ๆ‰‹่กจๅ†ณ': u'้ธๆ‰‹่กจๆฑบ', u'้€‰ๆ‰‹่กจ็Žฐ': u'้ธๆ‰‹่กจ็พ', u'้€‰ๆ‰‹่กจ็คบ': u'้ธๆ‰‹่กจ็คบ', u'้€‰ๆ‰‹่กจ่พพ': u'้ธๆ‰‹่กจ้”', u'้—ไผ ้’Ÿ': u'้บๅ‚ณ้˜', u'้—่Œƒ': u'้บ็ฏ„', u'้—่ฟน': u'้บ่ฟน', u'่พฝๆฒˆ': u'้ผ็€‹', u'้ฟๅญ•่ฏ': u'้ฟๅญ•่—ฅ', u'้‚€ๅคฉไน‹ๅนธ': u'้‚€ๅคฉไน‹ๅ€–', u'่ฟ˜ๅ ': u'้‚„ไฝ”', u'่ฟ˜้‡‡': u'้‚„ๆŽก', u'่ฟ˜ๅ†ฒ': u'้‚„่ก', u'้‚‹้‡Œ้‚‹้ข': u'้‚‹่ฃก้‚‹้ข', u'้‚ฃๅชๆ˜ฏ': u'้‚ฃๅชๆ˜ฏ', u'้‚ฃๅชๆœ‰': u'้‚ฃๅชๆœ‰', u'้‚ฃๅท': u'้‚ฃๆฒ', u'้‚ฃ้‡Œ': u'้‚ฃ่ฃก', u'้‚ฃๅช': u'้‚ฃ้šป', u'้‚ฃไนˆ': u'้‚ฃ้บผ', u'้‚ฃไนˆ็€': u'้‚ฃ้บผ่‘—', u'้ƒๆœด': u'้ƒๆจธ', u'้ƒ้ƒ่ฒ่ฒ': u'้ƒ้ƒ่ฒ่ฒ', u'้ƒŠๆธธ': u'้ƒŠ้Š', u'้ƒ˜้’Ÿ': u'้ƒ˜้˜', u'้ƒจ่ฝๅ‘': u'้ƒจ่ฝ็™ผ', u'้ƒฝไบŽ': u'้ƒฝๆ–ผ', u'ไนกๆ„ฟ': u'้„‰ๆ„ฟ', u'้„ญๅ‡ฑไบ‘': u'้„ญๅ‡ฑไบ‘', u'้ƒ‘ๅ‡ฏไบ‘': u'้„ญๅ‡ฑไบ‘', u'้ƒ‘ๅบ„ๅ…ฌ': u'้„ญ่ŽŠๅ…ฌ', u'้…ๅˆถ้ฅฒๆ–™': u'้…ๅˆถ้ฃผๆ–™', u'้…ๅˆ็€': u'้…ๅˆ่‘—', u'้…ๆฐดๅนฒ็ฎก': u'้…ๆฐดๅนน็ฎก', u'้…่ฏ': u'้…่—ฅ', u'้…ๅˆถ': u'้…่ฃฝ', u'้…’ๅธ˜': u'้…’ๅธ˜', u'้…’ๅ›': u'้…’็ฝˆ', u'้…’่‚ด': u'้…’่‚ด', u'้…’่ฏ': u'้…’่—ฅ', u'้…’้†ดๆ›ฒ่˜–': 
u'้…’้†ด้บดๆซฑ', u'้…’ๆ›ฒ': u'้…’้บด', u'้…ฅๆพ': u'้…ฅ้ฌ†', u'้†‡ๆœด': u'้†‡ๆจธ', u'้†‰ไบŽ': u'้†‰ๆ–ผ', u'้†‹ๅ›': u'้†‹็ฝˆ', u'ไธ‘ไธซๅคด': u'้†œไธซ้ ญ', u'ไธ‘ไบ‹': u'้†œไบ‹', u'ไธ‘ไบบ': u'้†œไบบ', u'ไธ‘ไพช': u'้†œๅ„•', u'ไธ‘ๅ…ซๆ€ช': u'้†œๅ…ซๆ€ช', u'ไธ‘ๅ‰Œๅ‰Œ': u'้†œๅ‰Œๅ‰Œ', u'ไธ‘ๅ‰ง': u'้†œๅЇ', u'ไธ‘ๅŒ–': u'้†œๅŒ–', u'ไธ‘ๅฒ': u'้†œๅฒ', u'ไธ‘ๅ': u'้†œๅ', u'ไธ‘ๅ’ค': u'้†œๅ’', u'ไธ‘ๅœฐ': u'้†œๅœฐ', u'ไธ‘ๅคท': u'้†œๅคท', u'ไธ‘ๅฅณ': u'้†œๅฅณ', u'ไธ‘ๅฅณๆ•ˆ้ขฆ': u'้†œๅฅณๆ•ˆ้กฐ', u'ไธ‘ๅฅดๅ„ฟ': u'้†œๅฅดๅ…’', u'ไธ‘ๅฆ‡': u'้†œๅฉฆ', u'ไธ‘ๅชณ': u'้†œๅชณ', u'ไธ‘ๅชณๅฆ‡': u'้†œๅชณๅฉฆ', u'ไธ‘ๅฐ้ธญ': u'้†œๅฐ้ดจ', u'ไธ‘ๅทดๆ€ช': u'้†œๅทดๆ€ช', u'ไธ‘ๅพ’': u'้†œๅพ’', u'ไธ‘ๆถ': u'้†œๆƒก', u'ไธ‘ๆ€': u'้†œๆ…‹', u'ไธ‘ๆฏ™ไบ†': u'้†œๆ–ƒไบ†', u'ไธ‘ไบŽ': u'้†œๆ–ผ', u'ไธ‘ๆœซ': u'้†œๆœซ', u'ไธ‘ๆ ท': u'้†œๆจฃ', u'ไธ‘ๆญป': u'้†œๆญป', u'ไธ‘ๆฏ”': u'้†œๆฏ”', u'ไธ‘ๆฒฎ': u'้†œๆฒฎ', u'ไธ‘็”ท': u'้†œ็”ท', u'ไธ‘้—ป': u'้†œ่ž', u'ไธ‘ๅฃฐ': u'้†œ่ฒ', u'ไธ‘ๅฃฐ่ฟœๆ’ญ': u'้†œ่ฒ้ ๆ’ญ', u'ไธ‘่„ธ': u'้†œ่‡‰', u'ไธ‘่™': u'้†œ่™œ', u'ไธ‘่กŒ': u'้†œ่กŒ', u'ไธ‘่จ€': u'้†œ่จ€', u'ไธ‘่ฏ‹': u'้†œ่ฉ†', u'ไธ‘่ฏ': u'้†œ่ฉฑ', u'ไธ‘่ฏญ': u'้†œ่ชž', u'ไธ‘่ดผ็”Ÿ': u'้†œ่ณŠ็”Ÿ', u'ไธ‘่พž': u'้†œ่พญ', u'ไธ‘่พฑ': u'้†œ่พฑ', u'ไธ‘้€†': u'้†œ้€†', u'ไธ‘ไธ‘': u'้†œ้†œ', u'ไธ‘้™‹': u'้†œ้™‹', u'ไธ‘ๆ‚': u'้†œ้›œ', u'ไธ‘ๅคดๆ€ช่„ธ': u'้†œ้ ญๆ€ช่‡‰', u'ไธ‘็ฑป': u'้†œ้กž', u'้…้…ฟ็€': u'้†ž้‡€่‘—', u'ๅŒป่ฏ': u'้†ซ่—ฅ', u'ๅŒป้™ข้‡Œ': u'้†ซ้™ข่ฃก', u'้…ฟๅˆถ': u'้‡€่ฃฝ', u'่ก…้’Ÿ': u'้‡้˜', u'้‡‡็Ÿณไน‹ๅฝน': u'้‡‡็Ÿณไน‹ๅฝน', u'้‡‡็Ÿณไน‹ๆˆ˜': u'้‡‡็Ÿณไน‹ๆˆฐ', u'้‡‡็Ÿณไน‹ๆˆฐ': u'้‡‡็Ÿณไน‹ๆˆฐ', u'้‡‡็Ÿณ็ฃฏ': u'้‡‡็Ÿณ็ฃฏ', u'้‡‡็Ÿณ็Ÿถ': u'้‡‡็Ÿณ็ฃฏ', u'้‡‰่ฏ': u'้‡‰่—ฅ', u'้‡Œ็จ‹่กจ': u'้‡Œ็จ‹้Œถ', u'้‡ๅˆ’': u'้‡ๅŠƒ', u'้‡ๅ›ž': u'้‡ๅ›ž', u'้‡ๆŠ˜': u'้‡ๆ‘บ', u'้‡ไบŽ': u'้‡ๆ–ผ', u'้‡็ฝ—้ข': u'้‡็พ…้บต', u'้‡ๅˆถ': u'้‡่ฃฝ', u'้‡ๅค': u'้‡่ค‡', u'้‡ๆ‰˜': u'้‡่จ—', u'้‡ๆธธ': u'้‡้Š', u'้‡้”ค': u'้‡้Žš', u'้‡Žๅงœ': u'้‡Ž่–‘', u'้‡Žๆธธ': u'้‡Ž้Š', u'ๅŽ˜ๅ‡บ': u'้‡ๅ‡บ', u'ๅŽ˜ๅ‡': u'้‡ๅ‡', u'ๅŽ˜ๅฎš': 
u'้‡ๅฎš', u'ๅŽ˜ๆญฃ': u'้‡ๆญฃ', u'ๅŽ˜ๆธ…': u'้‡ๆธ…', u'ๅŽ˜่ฎข': u'้‡่จ‚', u'้‡‘ไป†ๅง‘': u'้‡‘ๅƒ•ๅง‘', u'้‡‘ไป‘ๆบช': u'้‡‘ๅด™ๆบช', u'้‡‘ๅธƒ้“': u'้‡‘ๅธƒ้“', u'้‡‘่Œƒ': u'้‡‘็ฏ„', u'้‡‘่กจๆƒ…': u'้‡‘่กจๆƒ…', u'้‡‘่กจๆ€': u'้‡‘่กจๆ…‹', u'้‡‘่กจๆ‰ฌ': u'้‡‘่กจๆš', u'้‡‘่กจๆ˜Ž': u'้‡‘่กจๆ˜Ž', u'้‡‘่กจๆผ”': u'้‡‘่กจๆผ”', u'้‡‘่กจ็Žฐ': u'้‡‘่กจ็พ', u'้‡‘่กจ็คบ': u'้‡‘่กจ็คบ', u'้‡‘่กจ่พพ': u'้‡‘่กจ้”', u'้‡‘่กจ้œฒ': u'้‡‘่กจ้œฒ', u'้‡‘่กจ้ข': u'้‡‘่กจ้ข', u'้‡‘่ฃ…็މ้‡Œ': u'้‡‘่ฃ็މ่ฃก', u'้‡‘่กจ': u'้‡‘้Œถ', u'้‡‘้’Ÿ': u'้‡‘้˜', u'้‡‘้ฉฌไป‘้“': u'้‡‘้ฆฌๅด™้“', u'้‡‘ๅ‘': u'้‡‘้ซฎ', u'้’‰้”ค': u'้‡˜้Žš', u'้’ฉๅฟƒๆ–—่ง’': u'้ˆŽๅฟƒ้ฌฅ่ง’', u'้“ถๆœฑ': u'้Š€็กƒ', u'้“ถๅ‘': u'้Š€้ซฎ', u'้“œ่Œƒ': u'้Š…็ฏ„', u'้“œๅˆถ': u'้Š…่ฃฝ', u'้“œ้’Ÿ': u'้Š…้˜', u'้“ฏ้’Ÿ': u'้Šซ้˜', u'้“ๅˆถ': u'้‹่ฃฝ', u'้“บ้”ฆๅˆ—็ปฃ': u'้‹ช้Œฆๅˆ—็นก', u'้’ขไน‹็‚ผ้‡‘ๆœฏๅธˆ': u'้‹ผไน‹้Š้‡‘่ก“ๅธซ', u'้’ขๆข': u'้‹ผๆจ‘', u'้’ขๅˆถ': u'้‹ผ่ฃฝ', u'ๅฝ•็€': u'้Œ„่‘—', u'ๅฝ•ๅˆถ': u'้Œ„่ฃฝ', u'้”ค็‚ผ': u'้Œ˜้Š', u'้’ฑ่ฐท': u'้Œข็ฉ€', u'้’ฑ่Œƒ': u'้Œข็ฏ„', u'้’ฑๅบ„': u'้Œข่ŽŠ', u'้”ฆ็ปฃ่Šฑๅ›ญ': u'้Œฆ็ถ‰่Šฑๅœ’', u'้”ฆ็ปฃ': u'้Œฆ็นก', u'่กจๅœ': u'้Œถๅœ', u'่กจๅ† ': u'้Œถๅ† ', u'่กจๅธฆ': u'้Œถๅธถ', u'่กจๅบ—': u'้Œถๅบ—', u'่กจๅŽ‚': u'้Œถๅป ', u'่กจๅฟซ': u'้Œถๅฟซ', u'่กจๆ…ข': u'้Œถๆ…ข', u'่กจๆฟ': u'้Œถๆฟ', u'่กจๅฃณ': u'้Œถๆฎผ', u'่กจ็Ž‹': u'้Œถ็Ž‹', u'่กจ็š„ๅ˜€ๅ—’': u'้Œถ็š„ๅ˜€ๅ—’', u'่กจ็š„ๅކๅฒ': u'้Œถ็š„ๆญทๅฒ', u'่กจ็›˜': u'้Œถ็›ค', u'่กจ่’™ๅญ': u'้Œถ่’™ๅญ', u'่กจ่กŒ': u'้Œถ่กŒ', u'่กจ่ฝฌ': u'้Œถ่ฝ‰', u'่กจ้€Ÿ': u'้Œถ้€Ÿ', u'่กจ้’ˆ': u'้Œถ้‡', u'่กจ้“พ': u'้Œถ้ˆ', u'็‚ผๅ†ถ': u'้Šๅ†ถ', u'็‚ผๅฅ': u'้Šๅฅ', u'็‚ผๅญ—': u'้Šๅญ—', u'็‚ผๅธˆ': u'้Šๅธซ', u'็‚ผๅบฆ': u'้Šๅบฆ', u'็‚ผๅฝข': u'้Šๅฝข', u'็‚ผๆฐ”': u'้Šๆฐฃ', u'็‚ผๆฑž': u'้Šๆฑž', u'็‚ผ็Ÿณ': u'้Š็Ÿณ', u'็‚ผ่ดซ': u'้Š่ฒง', u'็‚ผ้‡‘ๆœฏ': u'้Š้‡‘่ก“', u'็‚ผ้’ข': u'้Š้‹ผ', u'้”…ๅบ„': u'้‹่ŽŠ', u'้”ป็‚ผๅ‡บ': u'้›้Šๅ‡บ', u'้”ฒ่€Œไธ่ˆ': u'้ฅ่€Œไธๆจ', u'้•ฐไป“': u'้ŽŒๅ€‰', u'้”คๅ„ฟ': u'้Žšๅ…’', u'้”คๅญ': u'้Žšๅญ', u'้”คๅคด': 
u'้Žš้ ญ', u'้”ˆ็—…': u'้ฝ็—…', u'้”ˆ่Œ': u'้ฝ่Œ', u'้”ˆ่š€': u'้ฝ่•', u'้’ŸไธŠ': u'้˜ไธŠ', u'้’Ÿไธ‹': u'้˜ไธ‹', u'้’Ÿไธ': u'้˜ไธ', u'้’Ÿไธๆ‰ฃไธ้ธฃ': u'้˜ไธๆ‰ฃไธ้ณด', u'้’Ÿไธๆ’žไธ้ธฃ': u'้˜ไธๆ’žไธ้ณด', u'้’Ÿไธๆ•ฒไธๅ“': u'้˜ไธๆ•ฒไธ้Ÿฟ', u'้’Ÿไธ็ฉบๅˆ™ๅ“‘': u'้˜ไธ็ฉบๅ‰‡ๅ•ž', u'้’Ÿไนณๆดž': u'้˜ไนณๆดž', u'้’Ÿไนณ็Ÿณ': u'้˜ไนณ็Ÿณ', u'้’Ÿๅœ': u'้˜ๅœ', u'้’ŸๅŒ ': u'้˜ๅŒ ', u'้’Ÿๅฃ': u'้˜ๅฃ', u'้’Ÿๅœจๅฏบ้‡Œ': u'้˜ๅœจๅฏบ่ฃก', u'้’Ÿๅก”': u'้˜ๅก”', u'้’Ÿๅฃ': u'้˜ๅฃ', u'้’Ÿๅคช': u'้˜ๅคช', u'้’Ÿๅฅฝ': u'้˜ๅฅฝ', u'้’Ÿๅฑฑ': u'้˜ๅฑฑ', u'้’Ÿๅทฆๅณ': u'้˜ๅทฆๅณ', u'้’Ÿๅทฎ': u'้˜ๅทฎ', u'้’Ÿๅบง': u'้˜ๅบง', u'้’Ÿๅฝข': u'้˜ๅฝข', u'้’Ÿๅฝข่™ซ': u'้˜ๅฝข่Ÿฒ', u'้’Ÿๅพ‹': u'้˜ๅพ‹', u'้’Ÿๅฟซ': u'้˜ๅฟซ', u'้’Ÿๆ„': u'้˜ๆ„', u'้’Ÿๆ…ข': u'้˜ๆ…ข', u'้’Ÿๆ‘†': u'้˜ๆ“บ', u'้’Ÿๆ•ฒ': u'้˜ๆ•ฒ', u'้’Ÿๆœ‰': u'้˜ๆœ‰', u'้’Ÿๆฅผ': u'้˜ๆจ“', u'้’Ÿๆจก': u'้˜ๆจก', u'้’Ÿๆฒก': u'้˜ๆฒ’', u'้’Ÿๆผ': u'้˜ๆผ', u'้’Ÿ็Ž‹': u'้˜็Ž‹', u'้’Ÿ็ด': u'้˜็ด', u'้’Ÿๅ‘้Ÿณ': u'้˜็™ผ้Ÿณ', u'้’Ÿ็š„': u'้˜็š„', u'้’Ÿ็›˜': u'้˜็›ค', u'้’Ÿ็›ธ': u'้˜็›ธ', u'้’Ÿ็ฃฌ': u'้˜็ฃฌ', u'้’Ÿ็บฝ': u'้˜็ด', u'้’Ÿ็ฝฉ': u'้˜็ฝฉ', u'้’Ÿๅฃฐ': u'้˜่ฒ', u'้’Ÿ่…ฐ': u'้˜่…ฐ', u'้’Ÿ่žบ': u'้˜่žบ', u'้’Ÿ่กŒ': u'้˜่กŒ', u'้’Ÿ่กจ้ข': u'้˜่กจ้ข', u'้’Ÿ่ขซ': u'้˜่ขซ', u'้’Ÿ่ฐƒ': u'้˜่ชฟ', u'้’Ÿ่บซ': u'้˜่บซ', u'้’Ÿ้€Ÿ': u'้˜้€Ÿ', u'้’Ÿ่กจ': u'้˜้Œถ', u'้’Ÿ่กจๅœ': u'้˜้Œถๅœ', u'้’Ÿ่กจๅฟซ': u'้˜้Œถๅฟซ', u'้’Ÿ่กจๆ…ข': u'้˜้Œถๆ…ข', u'้’Ÿ่กจๅކๅฒ': u'้˜้Œถๆญทๅฒ', u'้’Ÿ่กจ็Ž‹': u'้˜้Œถ็Ž‹', u'้’Ÿ่กจ็š„': u'้˜้Œถ็š„', u'้’Ÿ่กจ็š„ๅކๅฒ': u'้˜้Œถ็š„ๆญทๅฒ', u'้’Ÿ่กจ็›˜': u'้˜้Œถ็›ค', u'้’Ÿ่กจ่กŒ': u'้˜้Œถ่กŒ', u'้’Ÿ่กจ้€Ÿ': u'้˜้Œถ้€Ÿ', u'้’Ÿๅ…ณ': u'้˜้—œ', u'้’Ÿ้™ˆๅˆ—': u'้˜้™ณๅˆ—', u'้’Ÿ้ข': u'้˜้ข', u'้’Ÿๅ“': u'้˜้Ÿฟ', u'้’Ÿ้กถ': u'้˜้ ‚', u'้’Ÿๅคด': u'้˜้ ญ', u'้’Ÿไฝ“': u'้˜้ซ”', u'้’Ÿ้ธฃ': u'้˜้ณด', u'้’Ÿ็‚น': u'้˜้ปž', u'้’Ÿ้ผŽ': u'้˜้ผŽ', u'้’Ÿ้ผ“': u'้˜้ผ“', u'้“ๆ†': u'้ตๆ†', u'้“ๆ ๆ†': u'้ตๆฌ„ๆ†', u'้“้”ค': u'้ต้Žš', u'้“้”ˆ': 
u'้ต้ฝ', u'้“้’Ÿ': u'้ต้˜', u'้“ธ้’Ÿ': u'้‘„้˜', u'้‰ดไบŽ': u'้‘’ๆ–ผ', u'้•ฟๅ‡ ': u'้•ทๅ‡ ', u'้•ฟไบŽ': u'้•ทๆ–ผ', u'้•ฟๅކ': u'้•ทๆ›†', u'้•ฟๅކๅฒ': u'้•ทๆญทๅฒ', u'้•ฟ็”Ÿ่ฏ': u'้•ท็”Ÿ่—ฅ', u'้•ฟ่ƒก': u'้•ท้ฌ', u'้—จๅธ˜': u'้–€ๅธ˜', u'้—จๅŠๅ„ฟ': u'้–€ๅผ”ๅ…’', u'้—จ้‡Œ': u'้–€่ฃก', u'้—ซๆ€€็คผ': u'้–†ๆ‡ท็ฆฎ', u'ๅผ€ๅŠ': u'้–‹ๅผ”', u'ๅผ€ๅพ': u'้–‹ๅพต', u'ๅผ€้‡‡': u'้–‹ๆŽก', u'ๅผ€ๅ‘': u'้–‹็™ผ', u'ๅผ€่ฏ': u'้–‹่—ฅ', u'ๅผ€่พŸ': u'้–‹้—ข', u'ๅผ€ๅ“„': u'้–‹้ฌจ', u'้—ฒๆƒ…้€ธ่‡ด': u'้–’ๆƒ…้€ธ็ทป', u'้—ฒ่ก': u'้–’่•ฉ', u'้—ฒๆธธ': u'้–’้Š', u'้—ดไธๅฎนๅ‘': u'้–“ไธๅฎน้ซฎ', u'้—ต้‡‡ๅฐ”': u'้–”ๆŽก็ˆพ', u'ๅˆๅบœ': u'้–คๅบœ', u'้—บ่Œƒ': u'้–จ็ฏ„', u'้˜ƒ่Œƒ': u'้–ซ็ฏ„', u'้—ฏ่ก': u'้—–่•ฉ', u'้—ฏ็‚ผ': u'้—–้Š', u'ๅ…ณ็ณป': u'้—œไฟ‚', u'ๅ…ณ็ณป็€': u'้—œไฟ‚่‘—', u'ๅ…ณๅผ“ไธŽๆˆ‘็กฎ': u'้—œๅผ“่ˆ‡ๆˆ‘็กฎ', u'ๅ…ณไบŽ': u'้—œๆ–ผ', u'่พŸไฝ›': u'้—ขไฝ›', u'่พŸไฝœ': u'้—ขไฝœ', u'่พŸๅˆ’': u'้—ขๅŠƒ', u'่พŸๅœŸ': u'้—ขๅœŸ', u'่พŸๅœฐ': u'้—ขๅœฐ', u'่พŸๅฎค': u'้—ขๅฎค', u'่พŸๅปบ': u'้—ขๅปบ', u'่พŸไธบ': u'้—ข็‚บ', u'่พŸ็”ฐ': u'้—ข็”ฐ', u'่พŸ็ญ‘': u'้—ข็ฏ‰', u'่พŸ่ฐฃ': u'้—ข่ฌ ', u'่พŸ่พŸ': u'้—ข่พŸ', u'่พŸ้‚ชไปฅๅพ‹': u'้—ข้‚ชไปฅๅพ‹', u'้˜ฒๆ™’': u'้˜ฒๆ™’', u'้˜ฒๆฐด่กจ': u'้˜ฒๆฐด้Œถ', u'้˜ฒๅพก': u'้˜ฒ็ฆฆ', u'้˜ฒ่Œƒ': u'้˜ฒ็ฏ„', u'้˜ฒ้”ˆ': u'้˜ฒ้ฝ', u'้˜ฒๅฐ': u'้˜ฒ้ขฑ', u'้˜ปไบŽ': u'้˜ปๆ–ผ', u'้˜ฟๅ‘†็“œ': u'้˜ฟๅ‘†็“œ', u'้˜ฟๆ–ฏๅ›พ้‡Œไบšๆ–ฏ': u'้˜ฟๆ–ฏๅœ–้‡Œไบžๆ–ฏ', u'้˜ฟๅ‘†': u'้˜ฟ็ƒ', u'้™„ไบŽ': u'้™„ๆ–ผ', u'้™„ๆณจ': u'้™„่จป', u'้™ๅŽ‹่ฏ': u'้™ๅฃ“่—ฅ', u'้™ๅˆถ': u'้™ๅˆถ', u'ๅ‡ๅฎ˜': u'้™žๅฎ˜', u'้™ค่‡ญ่ฏ': u'้™ค่‡ญ่—ฅ', u'้™ชๅŠ': u'้™ชๅผ”', u'้˜ดๅนฒ': u'้™ฐไนพ', u'้˜ดๅކ': u'้™ฐๆ›†', u'้˜ดๅކๅฒ': u'้™ฐๆญทๅฒ', u'้˜ดๆฒŸ้‡Œ็ฟป่ˆน': u'้™ฐๆบ่ฃก็ฟป่ˆน', u'้˜ด้ƒ': u'้™ฐ้ฌฑ', u'้™ˆ็‚ผ': u'้™ณ้Š', u'้™†ๆธธ': u'้™ธ้Š', u'้˜ณๆ˜ฅ้ข': u'้™ฝๆ˜ฅ้บต', u'้˜ณๅކ': u'้™ฝๆ›†', u'้˜ณๅކๅฒ': u'้™ฝๆญทๅฒ', u'้š†ๅ‡†่ฎธ': u'้š†ๅ‡†่จฑ', u'้š†ๅ‡†': u'้š†ๆบ–', u'้šไบŽ': u'้šจๆ–ผ', u'้šๅ ': u'้šฑไฝ”', u'้šๅ‡ ': u'้šฑๅ‡ ', u'้šไบŽ': u'้šฑๆ–ผ', u'ๅชๅญ—': u'้šปๅญ—', u'ๅชๅฝฑ': u'้šปๅฝฑ', 
u'ๅชๆ‰‹้ฎๅคฉ': u'้šปๆ‰‹้ฎๅคฉ', u'ๅช็œผ': u'้šป็œผ', u'ๅช่จ€็‰‡่ฏญ': u'้šป่จ€็‰‡่ชž', u'ๅช่บซ': u'้šป่บซ', u'้›„ๆ–—ๆ–—': u'้›„ๆ–—ๆ–—', u'้›…่Œƒ': u'้›…็ฏ„', u'้›…่‡ด': u'้›…็ทป', u'้›†ไบŽ': u'้›†ๆ–ผ', u'้›†ๆธธๆณ•': u'้›†้Šๆณ•', u'้›•ๆข็”ปๆ ‹': u'้›•ๆจ‘็•ซๆฃŸ', u'ๅŒๆŠ˜ๅฐ„': u'้›™ๆŠ˜ๅฐ„', u'ๅŒๆŠ˜': u'้›™ๆ‘บ', u'ๅŒ่ƒœ็ฑป': u'้›™่ƒœ้กž', u'ๅŒ้›•': u'้›™้ตฐ', u'ๆ‚ๅˆ้ขๅ„ฟ': u'้›œๅˆ้บตๅ…’', u'ๆ‚ๅฟ—': u'้›œ่ชŒ', u'ๆ‚้ข': u'้›œ้บต', u'้ธกๅต้น…ๆ–—': u'้›žๅต้ต้ฌฅ', u'้ธกๅฅธ': u'้›žๅงฆ', u'้ธกไบ‰้น…ๆ–—': u'้›ž็ˆญ้ต้ฌฅ', u'้ธกไธ': u'้›ž็ตฒ', u'้ธกไธ้ข': u'้›ž็ตฒ้บต', u'้ธก่…ฟ้ข': u'้›ž่…ฟ้บต', u'้ธก่›‹้‡ŒๆŒ‘้ชจๅคด': u'้›ž่›‹่ฃกๆŒ‘้ชจ้ ญ', u'้ธกๅช': u'้›ž้šป', u'็ฆปไบŽ': u'้›ขๆ–ผ', u'้šพ่ˆ': u'้›ฃๆจ', u'้šพไบŽ': u'้›ฃๆ–ผ', u'้›ช็ช—่คๅ‡ ': u'้›ช็ช—่žขๅ‡ ', u'้›ช้‡Œ': u'้›ช่ฃก', u'้›ช้‡Œ็บข': u'้›ช่ฃก็ด…', u'้›ช้‡Œ่•ป': u'้›ช่ฃก่•ป', u'ไบ‘ๅ—็™ฝ่ฏ': u'้›ฒๅ—็™ฝ่—ฅ', u'ไบ‘็ฌˆไธƒ็ญพ': u'้›ฒ็ฌˆไธƒ็ฑค', u'ไบ‘ๆธธ': u'้›ฒ้Š', u'ไบ‘้กป': u'้›ฒ้ฌš', u'้›ถไธช': u'้›ถๅ€‹', u'้›ถๅคšๅช': u'้›ถๅคš้šป', u'้›ถๅคฉๅŽ': u'้›ถๅคฉๅพŒ', u'้›ถๅช': u'้›ถ้šป', u'้›ถไฝ™': u'้›ถ้ค˜', u'็”ตๅญ่กจๆ ผ': u'้›ปๅญ่กจๆ ผ', u'็”ตๅญ่กจ': u'้›ปๅญ้Œถ', u'็”ตๅญ้’Ÿ': u'้›ปๅญ้˜', u'็”ตๅญ้’Ÿ่กจ': u'้›ปๅญ้˜้Œถ', u'็”ตๆ†': u'้›ปๆ†', u'็”ต็ ่กจ': u'้›ป็ขผ่กจ', u'็”ต็บฟๆ†': u'้›ป็ทšๆ†', u'็”ตๅ†ฒ': u'้›ป่ก', u'็”ต่กจ': u'้›ป้Œถ', u'็”ต้’Ÿ': u'้›ป้˜', u'้œ‡ๆ —': u'้œ‡ๆ…„', u'้œ‡่ก': u'้œ‡่•ฉ', u'้›พ้‡Œ': u'้œง่ฃก', u'้œฒไธ‘': u'้œฒ้†œ', u'้œธๅ ': u'้œธไฝ”', u'้œ่Œƒ': u'้œฝ็ฏ„', u'็ต่ฏ': u'้ˆ่—ฅ', u'้’ๅฑฑไธ€ๅ‘': u'้’ๅฑฑไธ€้ซฎ', u'้’่‹น': u'้’่‹น', u'้’่‹นๆžœ': u'้’่˜‹ๆžœ', u'้’่‡ๅŠๅฎข': u'้’่ …ๅผ”ๅฎข', u'้’้œ‰็ด ': u'้’้œ‰็ด ', u'้’้œ‰': u'้’้ปด', u'้žๅ ไธๅฏ': u'้žไฝ”ไธๅฏ', u'้ขๅŒ…ไฝ': u'้ขๅŒ…ไฝ', u'้ขๅŒ…ๅซ': u'้ขๅŒ…ๅซ', u'้ขๅŒ…ๅ›ด': u'้ขๅŒ…ๅœ', u'้ขๅŒ…ๅฎน': u'้ขๅŒ…ๅฎน', u'้ขๅŒ…ๅบ‡': u'้ขๅŒ…ๅบ‡', u'้ขๅŒ…ๅŽข': u'้ขๅŒ…ๅป‚', u'้ขๅŒ…ๆŠ„': u'้ขๅŒ…ๆŠ„', u'้ขๅŒ…ๆ‹ฌ': u'้ขๅŒ…ๆ‹ฌ', u'้ขๅŒ…ๆฝ': u'้ขๅŒ…ๆ”ฌ', u'้ขๅŒ…ๆถต': u'้ขๅŒ…ๆถต', 
u'้ขๅŒ…็ฎก': u'้ขๅŒ…็ฎก', u'้ขๅŒ…ๆ‰Ž': u'้ขๅŒ…็ดฎ', u'้ขๅŒ…็ฝ—': u'้ขๅŒ…็พ…', u'้ขๅŒ…็€': u'้ขๅŒ…่‘—', u'้ขๅŒ…่—': u'้ขๅŒ…่—', u'้ขๅŒ…่ฃ…': u'้ขๅŒ…่ฃ', u'้ขๅŒ…่ฃน': u'้ขๅŒ…่ฃน', u'้ขๅŒ…่ตท': u'้ขๅŒ…่ตท', u'้ขๅŒ…ๅŠž': u'้ขๅŒ…่พฆ', u'้ขๅบ—่ˆ–': u'้ขๅบ—่ˆ–', u'้ขๆœ็€': u'้ขๆœ่‘—', u'้ขๆก็›ฎ': u'้ขๆข็›ฎ', u'้ขๆข็›ฎ': u'้ขๆข็›ฎ', u'้ข็ฒ‰็ขŽ': u'้ข็ฒ‰็ขŽ', u'้ข็ฒ‰็บข': u'้ข็ฒ‰็ด…', u'้ขไธด็€': u'้ข่‡จ่‘—', u'้ข้ฃŸ้ฅญ': u'้ข้ฃŸ้ฃฏ', u'้ข้ฃŸ้ข': u'้ข้ฃŸ้บต', u'้ž‹้‡Œ': u'้ž‹่ฃก', u'้žฃๅˆถ': u'้žฃ่ฃฝ', u'็ง‹ๅƒ': u'้žฆ้Ÿ†', u'้žญ่พŸๅ…ฅ้‡Œ': u'้žญ่พŸๅ…ฅ่ฃก', u'้Ÿฆๅบ„': u'้Ÿ‹่ŽŠ', u'้Ÿฉๅ›ฝๅˆถ': u'้Ÿ“ๅœ‹่ฃฝ', u'้Ÿฉๅˆถ': u'้Ÿ“่ฃฝ', u'้Ÿณๅ‡†': u'้Ÿณๆบ–', u'้Ÿณๅฃฐๅฆ‚้’Ÿ': u'้Ÿณ่ฒๅฆ‚้˜', u'้Ÿถๅฑฑๅ†ฒ': u'้Ÿถๅฑฑๆฒ–', u'ๅ“้’Ÿ': u'้Ÿฟ้˜', u'้ ้ข': u'้ ้ข', u'้กต้ข': u'้ ้ข', u'้ ‚ๅคš': u'้ ‚ๅคš', u'้กถๅคš': u'้ ‚ๅคš', u'้กนๅบ„': u'้ …่ŽŠ', u'้กบไบŽ': u'้ †ๆ–ผ', u'้กบ้’Ÿๅ‘': u'้ †้˜ๅ‘', u'้กบ้ฃŽๅŽ': u'้ †้ขจๅพŒ', u'้กปๆ นๆฎ': u'้ ˆๆ นๆ“š', u'้ข‚็ณป': u'้ Œ็นซ', u'้ข‚่ตž': u'้ Œ่ฎš', u'้ข„ๅˆถ': u'้ ่ฃฝ', u'้ข†ๅŸŸ้‡Œ': u'้ ˜ๅŸŸ่ฃก', u'้ข†่ข–ๆฌฒ': u'้ ˜่ข–ๆ…พ', u'ๅคดๅทพๅŠๅœจๆฐด้‡Œ': u'้ ญๅทพๅผ”ๅœจๆฐด่ฃก', u'ๅคด้‡Œ': u'้ ญ่ฃก', u'ๅคดๅ‘': u'้ ญ้ซฎ', u'้ขŠ้กป': u'้ ฐ้ฌš', u'้ข˜็ญพ': u'้กŒ็ฑค', u'้ขๅพ': u'้กๅพต', u'้ขๆˆ‘็•ฅๅކ': u'้กๆˆ‘็•ฅๆ›†', u'้ขๆˆ‘็•ฅๅކๅฒ': u'้กๆˆ‘็•ฅๆญทๅฒ', u'้ขœ่Œƒ': u'้ก็ฏ„', u'้ข ๅนฒๅ€’ๅค': u'้ก›ไนพๅ€’ๅค', u'้ข ่ฆ†': u'้ก›่ฆ†', u'้ข ้ข ไป†ไป†': u'้ก›้ก›ไป†ไป†', u'้ขคๆ —': u'้กซๆ…„', u'ๆ˜พ็คบ่กจ': u'้กฏ็คบ้Œถ', u'ๆ˜พ็คบ้’Ÿ': u'้กฏ็คบ้˜', u'ๆ˜พ็คบ้’Ÿ่กจ': u'้กฏ็คบ้˜้Œถ', u'ๆ˜พ่‘—ๆ ‡ๅฟ—': u'้กฏ่‘—ๆจ™ๅฟ—', u'้ฃŽๅนฒ': u'้ขจไนพ', u'้ฃŽๅŽ': u'้ขจๅŽ', u'้ฃŽๅœŸๅฟ—': u'้ขจๅœŸ่ชŒ', u'้ฃŽๅŽ๏ผŒ': u'้ขจๅพŒ๏ผŒ', u'้ฃŽๅทๆฎ‹ไบ‘': u'้ขจๆฒๆฎ˜้›ฒ', u'้ฃŽ็‰ฉๅฟ—': u'้ขจ็‰ฉ่ชŒ', u'้ฃŽ่Œƒ': u'้ขจ็ฏ„', u'้ฃŽ้‡Œ': u'้ขจ่ฃก', u'้ฃŽ่ตทไบ‘ๆถŒ': u'้ขจ่ตท้›ฒๆนง', u'้ฃŽ้‡‡': u'้ขจ้‡‡', u'้ขจ้‡‡': u'้ขจ้‡‡', u'ๅฐ้ฃŽ': u'้ขฑ้ขจ', u'ๅฐ้ฃŽๅŽ': u'้ขฑ้ขจๅพŒ', u'ๅˆฎไบ†': u'้ขณไบ†', u'ๅˆฎๅ€’': u'้ขณๅ€’', u'ๅˆฎๅŽป': 
u'้ขณๅŽป', u'ๅˆฎๅพ—': u'้ขณๅพ—', u'ๅˆฎ่ตฐ': u'้ขณ่ตฐ', u'ๅˆฎ่ตท': u'้ขณ่ตท', u'ๅˆฎ้›ช': u'้ขณ้›ช', u'ๅˆฎ้ฃŽ': u'้ขณ้ขจ', u'ๅˆฎ้ฃŽๅŽ': u'้ขณ้ขจๅพŒ', u'้ฃ˜่ก': u'้ฃ„่•ฉ', u'้ฃ˜ๆธธ': u'้ฃ„้Š', u'้ฃ˜้ฃ˜่ก่ก': u'้ฃ„้ฃ„่•ฉ่•ฉ', u'้ฃžๆ‰Ž': u'้ฃ›็ดฎ', u'้ฃžๅˆๆŒฝ็ฒŸ': u'้ฃ›่Šป่ผ“็ฒŸ', u'้ฃž่กŒ้’Ÿ': u'้ฃ›่กŒ้˜', u'้ฃŸๆฌฒ': u'้ฃŸๆ…พ', u'้ฃŸๆฌฒไธๆŒฏ': u'้ฃŸๆฌฒไธๆŒฏ', u'้ฃŸ้‡Žไน‹่‹น': u'้ฃŸ้‡Žไน‹่‹น', u'้ฃŸ้ข': u'้ฃŸ้บต', u'้ฅญๅŽ้’Ÿ': u'้ฃฏๅพŒ้˜', u'้ฅญๅ›ข': u'้ฃฏ็ณฐ', u'้ฅญๅบ„': u'้ฃฏ่ŽŠ', u'้ฅฒๅ–‚': u'้ฃผ้คต', u'้ฅผๅนฒ': u'้ค…ไนพ', u'้ฆ‚ไฝ™': u'้ค•้ค˜', u'ไฝ™0': u'้ค˜0', u'ไฝ™1': u'้ค˜1', u'ไฝ™2': u'้ค˜2', u'ไฝ™3': u'้ค˜3', u'ไฝ™4': u'้ค˜4', u'ไฝ™5': u'้ค˜5', u'ไฝ™6': u'้ค˜6', u'ไฝ™7': u'้ค˜7', u'ไฝ™8': u'้ค˜8', u'ไฝ™9': u'้ค˜9', u'ไฝ™ใ€‡': u'้ค˜ใ€‡', u'ไฝ™ไธ€': u'้ค˜ไธ€', u'ไฝ™ไธƒ': u'้ค˜ไธƒ', u'ไฝ™ไธ‰': u'้ค˜ไธ‰', u'ไฝ™ไธ‹': u'้ค˜ไธ‹', u'ไฝ™ไน': u'้ค˜ไน', u'ไฝ™ไบ‹': u'้ค˜ไบ‹', u'ไฝ™ไบŒ': u'้ค˜ไบŒ', u'ไฝ™ไบ”': u'้ค˜ไบ”', u'ไฝ™ไบบ': u'้ค˜ไบบ', u'ไฝ™ไฟ—': u'้ค˜ไฟ—', u'ไฝ™ๅ€': u'้ค˜ๅ€', u'ไฝ™ๅƒ‡': u'้ค˜ๅƒ‡', u'ไฝ™ๅ…‰': u'้ค˜ๅ…‰', u'ไฝ™ๅ…ซ': u'้ค˜ๅ…ซ', u'ไฝ™ๅ…ญ': u'้ค˜ๅ…ญ', u'ไฝ™ๅˆƒ': u'้ค˜ๅˆƒ', u'ไฝ™ๅˆ‡': u'้ค˜ๅˆ‡', u'ไฝ™ๅˆฉ': u'้ค˜ๅˆฉ', u'ไฝ™ๅ‰ฒ': u'้ค˜ๅ‰ฒ', u'ไฝ™ๅŠ›': u'้ค˜ๅŠ›', u'ไฝ™ๅ‹‡': u'้ค˜ๅ‹‡', u'ไฝ™ๅ': u'้ค˜ๅ', u'ไฝ™ๅ‘ณ': u'้ค˜ๅ‘ณ', u'ไฝ™ๅ–˜': u'้ค˜ๅ–˜', u'ไฝ™ๅ››': u'้ค˜ๅ››', u'ไฝ™ๅœฐ': u'้ค˜ๅœฐ', u'ไฝ™ๅขจ': u'้ค˜ๅขจ', u'ไฝ™ๅค–': u'้ค˜ๅค–', u'ไฝ™ๅฆ™': u'้ค˜ๅฆ™', u'ไฝ™ๅงš': u'้ค˜ๅงš', u'ไฝ™ๅจ': u'้ค˜ๅจ', u'ไฝ™ๅญ': u'้ค˜ๅญ', u'ไฝ™ๅญ˜': u'้ค˜ๅญ˜', u'ไฝ™ๅญฝ': u'้ค˜ๅญฝ', u'ไฝ™ๅผฆ': u'้ค˜ๅผฆ', u'ไฝ™ๆ€': u'้ค˜ๆ€', u'ไฝ™ๆ‚ธ': u'้ค˜ๆ‚ธ', u'ไฝ™ๅบ†': u'้ค˜ๆ…ถ', u'ไฝ™ๆ•ฐ': u'้ค˜ๆ•ธ', u'ไฝ™ๆ˜Ž': u'้ค˜ๆ˜Ž', u'ไฝ™ๆ˜ ': u'้ค˜ๆ˜ ', u'ไฝ™ๆš‡': u'้ค˜ๆš‡', u'ไฝ™ๆ™–': u'้ค˜ๆš‰', u'ไฝ™ๆญ': u'้ค˜ๆญ', u'ไฝ™ๆฏ': u'้ค˜ๆฏ', u'ไฝ™ๆกƒ': u'้ค˜ๆกƒ', u'ไฝ™ๆกถ': u'้ค˜ๆกถ', u'ไฝ™ไธš': u'้ค˜ๆฅญ', u'ไฝ™ๆฌพ': u'้ค˜ๆฌพ', u'ไฝ™ๆญฅ': u'้ค˜ๆญฅ', u'ไฝ™ๆฎƒ': u'้ค˜ๆฎƒ', u'ไฝ™ๆฏ’': u'้ค˜ๆฏ’', u'ไฝ™ๆฐ”': u'้ค˜ๆฐฃ', u'ไฝ™ๆณข': u'้ค˜ๆณข', u'ไฝ™ๆณข่กๆผพ': u'้ค˜ๆณข็›ชๆผพ', u'ไฝ™ๆธฉ': 
u'้ค˜ๆบซ', u'ไฝ™ๆณฝ': u'้ค˜ๆพค', u'ไฝ™ๆฒฅ': u'้ค˜็€', u'ไฝ™็ƒˆ': u'้ค˜็ƒˆ', u'ไฝ™็ƒญ': u'้ค˜็†ฑ', u'ไฝ™็ƒฌ': u'้ค˜็‡ผ', u'ไฝ™็': u'้ค˜็', u'ไฝ™็”Ÿ': u'้ค˜็”Ÿ', u'ไฝ™ไผ—': u'้ค˜็œพ', u'ไฝ™็ช': u'้ค˜็ซ…', u'ไฝ™็ฒฎ': u'้ค˜็ณง', u'ไฝ™็ปช': u'้ค˜็ท’', u'ไฝ™็ผบ': u'้ค˜็ผบ', u'ไฝ™็ฝช': u'้ค˜็ฝช', u'ไฝ™็พก': u'้ค˜็พจ', u'ไฝ™ๅฃฐ': u'้ค˜่ฒ', u'ไฝ™่†': u'้ค˜่†', u'ไฝ™ๅ…ด': u'้ค˜่ˆˆ', u'ไฝ™่“„': u'้ค˜่“„', u'ไฝ™่ซ': u'้ค˜่”ญ', u'ไฝ™่ฃ•': u'้ค˜่ฃ•', u'ไฝ™่ง’': u'้ค˜่ง’', u'ไฝ™่ฎบ': u'้ค˜่ซ–', u'ไฝ™่ดฃ': u'้ค˜่ฒฌ', u'ไฝ™่ฒพ': u'้ค˜่ฒพ', u'ไฝ™่พ‰': u'้ค˜่ผ', u'ไฝ™่พœ': u'้ค˜่พœ', u'ไฝ™้…ฒ': u'้ค˜้…ฒ', u'ไฝ™้—ฐ': u'้ค˜้–', u'ไฝ™้—ฒ': u'้ค˜้–’', u'ไฝ™้›ถ': u'้ค˜้›ถ', u'ไฝ™้œ‡': u'้ค˜้œ‡', u'ไฝ™้œž': u'้ค˜้œž', u'ไฝ™้Ÿณ': u'้ค˜้Ÿณ', u'ไฝ™้Ÿณ็ป•ๆข': u'้ค˜้Ÿณ็นžๆข', u'ไฝ™้Ÿต': u'้ค˜้Ÿป', u'ไฝ™ๅ“': u'้ค˜้Ÿฟ', u'ไฝ™้ข': u'้ค˜้ก', u'ไฝ™้ฃŽ': u'้ค˜้ขจ', u'ไฝ™้ฃŸ': u'้ค˜้ฃŸ', u'ไฝ™ๅ…š': u'้ค˜้ปจ', u'ไฝ™๏ผ': u'้ค˜๏ผ', u'ไฝ™๏ผ‘': u'้ค˜๏ผ‘', u'ไฝ™๏ผ’': u'้ค˜๏ผ’', u'ไฝ™๏ผ“': u'้ค˜๏ผ“', u'ไฝ™๏ผ”': u'้ค˜๏ผ”', u'ไฝ™๏ผ•': u'้ค˜๏ผ•', u'ไฝ™๏ผ–': u'้ค˜๏ผ–', u'ไฝ™๏ผ—': u'้ค˜๏ผ—', u'ไฝ™๏ผ˜': u'้ค˜๏ผ˜', u'ไฝ™๏ผ™': u'้ค˜๏ผ™', u'้ฆ„้ฅจ้ข': u'้ค›้ฃฉ้บต', u'้ฆ†่ฐท': u'้คจ็ฉ€', u'้ฆ†้‡Œ': u'้คจ่ฃก', u'ๅ–‚ไนณ': u'้คตไนณ', u'ๅ–‚ไบ†': u'้คตไบ†', u'ๅ–‚ๅฅถ': u'้คตๅฅถ', u'ๅ–‚็ป™': u'้คต็ตฆ', u'ๅ–‚็พŠ': u'้คต็พŠ', u'ๅ–‚็Œช': u'้คต่ฑฌ', u'ๅ–‚่ฟ‡': u'้คต้Ž', u'ๅ–‚้ธก': u'้คต้›ž', u'ๅ–‚้ฃŸ': u'้คต้ฃŸ', u'ๅ–‚้ฅฑ': u'้คต้ฃฝ', u'ๅ–‚ๅ…ป': u'้คต้คŠ', u'ๅ–‚้ฉด': u'้คต้ฉข', u'ๅ–‚้ฑผ': u'้คต้ญš', u'ๅ–‚้ธญ': u'้คต้ดจ', u'ๅ–‚้น…': u'้คต้ต', u'้ฅฅๅฏ’': u'้ฅ‘ๅฏ’', u'้ฅฅๆฐ‘': u'้ฅ‘ๆฐ‘', u'้ฅฅๆธด': u'้ฅ‘ๆธด', u'้ฅฅๆบบ': u'้ฅ‘ๆบบ', u'้ฅฅ่’': u'้ฅ‘่’', u'้ฅฅ้ฅฑ': u'้ฅ‘้ฃฝ', u'้ฅฅ้ฆ‘': u'้ฅ‘้ฅ‰', u'้ฆ–ๅฝ“ๅ…ถๅ†ฒ': u'้ฆ–็•ถๅ…ถ่ก', u'้ฆ–ๅ‘': u'้ฆ–็™ผ', u'้ฆ–ๅช': u'้ฆ–้šป', u'้ฆ™ๅนฒ': u'้ฆ™ไนพ', u'้ฆ™ๅฑฑๅบ„': u'้ฆ™ๅฑฑๅบ„', u'้ฉฌๅนฒ': u'้ฆฌไนพ', u'้ฉฌๅ ๅฑฑ': u'้ฆฌๅ ๅฑฑ', u'้ฆฌๅ ๅฑฑ': u'้ฆฌๅ ๅฑฑ', u'้ฉฌๆ†': u'้ฆฌๆ†', u'้ฆฌๆ ผ้‡Œๅธƒ': u'้ฆฌๆ ผ้‡Œๅธƒ', u'้ฉฌๆ ผ้‡Œๅธƒ': u'้ฆฌๆ ผ้‡Œๅธƒ', u'้ฉฌ่กจ': u'้ฆฌ้Œถ', 
u'้ฉปๆ‰Ž': u'้ง็ดฎ', u'้ช€่ก': u'้ง˜่•ฉ', u'่…พๅ†ฒ': u'้จฐ่ก', u'ๆƒŠ่ตž': u'้ฉš่ฎš', u'ๆƒŠ้’Ÿ': u'้ฉš้˜', u'้ชจๅญ้‡Œ': u'้ชจๅญ่ฃก', u'้ชจๅนฒ': u'้ชจๅนน', u'้ชจ็ฐๅ›': u'้ชจ็ฐ็ฝˆ', u'้ชจๅ›': u'้ชจ็ฝˆ', u'้ชจๅคด้‡ŒๆŒฃๅ‡บๆฅ็š„้’ฑๆ‰ๅšๅพ—่‚‰': u'้ชจ้ ญ่ฃกๆŽ™ๅ‡บไพ†็š„้Œข็บ”ๅšๅพ—่‚‰', u'่‚ฎ่‚ฎ่„่„': u'้ชฏ้ชฏ้ซ’้ซ’', u'่‚ฎ่„': u'้ชฏ้ซ’', u'่„ไนฑ': u'้ซ’ไบ‚', u'่„ไบ†': u'้ซ’ไบ†', u'่„ๅ…ฎๅ…ฎ': u'้ซ’ๅ…ฎๅ…ฎ', u'่„ๅญ—': u'้ซ’ๅญ—', u'่„ๅพ—': u'้ซ’ๅพ—', u'่„ๅฟƒ': u'้ซ’ๅฟƒ', u'่„ไธœ่ฅฟ': u'้ซ’ๆฑ่ฅฟ', u'่„ๆฐด': u'้ซ’ๆฐด', u'่„็š„': u'้ซ’็š„', u'่„่ฏ': u'้ซ’่ฉž', u'่„่ฏ': u'้ซ’่ฉฑ', u'่„้’ฑ': u'้ซ’้Œข', u'่„ๅ‘': u'้ซ’้ซฎ', u'ไฝ“่Œƒ': u'้ซ”็ฏ„', u'ไฝ“็ณป': u'้ซ”็ณป', u'้ซ˜ๅ‡ ': u'้ซ˜ๅ‡ ', u'้ซ˜ๅนฒๆ‰ฐ': u'้ซ˜ๅนฒๆ“พ', u'้ซ˜ๅนฒ้ข„': u'้ซ˜ๅนฒ้ ', u'้ซ˜ๅนฒ': u'้ซ˜ๅนน', u'้ซ˜ๅบฆ่‡ชๅˆถ': u'้ซ˜ๅบฆ่‡ชๅˆถ', u'้ซ˜ๆธ…ๆ„ฟ': u'้ซ˜ๆธ…ๆ„ฟ', u'้ซกๅ‘': u'้ซก้ซฎ', u'้ซญ่ƒก': u'้ซญ้ฌ', u'้ซญ้กป': u'้ซญ้ฌš', u'ๅ‘ไธŠๆŒ‡ๅ† ': u'้ซฎไธŠๆŒ‡ๅ† ', u'ๅ‘ไธŠๅ†ฒๅ† ': u'้ซฎไธŠๆฒ–ๅ† ', u'ๅ‘ไนณ': u'้ซฎไนณ', u'ๅ‘ๅ…‰ๅฏ้‰ด': u'้ซฎๅ…‰ๅฏ้‘‘', u'ๅ‘ๅŒช': u'้ซฎๅŒช', u'ๅ‘ๅž‹': u'้ซฎๅž‹', u'ๅ‘ๅคน': u'้ซฎๅคพ', u'ๅ‘ๅฆป': u'้ซฎๅฆป', u'ๅ‘ๅง': u'้ซฎๅง', u'ๅ‘ๅฑ‹': u'้ซฎๅฑ‹', u'ๅ‘ๅทฒ้œœ็™ฝ': u'้ซฎๅทฒ้œœ็™ฝ', u'ๅ‘ๅธฆ': u'้ซฎๅธถ', u'ๅ‘ๅปŠ': u'้ซฎๅปŠ', u'ๅ‘ๅผ': u'้ซฎๅผ', u'ๅ‘ๅผ•ๅƒ้’ง': u'้ซฎๅผ•ๅƒ้ˆž', u'ๅ‘ๆŒ‡': u'้ซฎๆŒ‡', u'ๅ‘ๅท': u'้ซฎๆฒ', u'ๅ‘ๆ น': u'้ซฎๆ น', u'ๅ‘ๆฒน': u'้ซฎๆฒน', u'ๅ‘ๆผ‚': u'้ซฎๆผ‚', u'ๅ‘ไธบ่ก€ไน‹ๆœฌ': u'้ซฎ็‚บ่ก€ไน‹ๆœฌ', u'ๅ‘็Šถ': u'้ซฎ็‹€', u'ๅ‘็™ฃ': u'้ซฎ็™ฌ', u'ๅ‘็Ÿญๅฟƒ้•ฟ': u'้ซฎ็Ÿญๅฟƒ้•ท', u'ๅ‘็ฆ': u'้ซฎ็ฆ', u'ๅ‘็ฌบ': u'้ซฎ็ฎ‹', u'ๅ‘็บฑ': u'้ซฎ็ด—', u'ๅ‘็ป“': u'้ซฎ็ต', u'ๅ‘ไธ': u'้ซฎ็ตฒ', u'ๅ‘็ฝ‘': u'้ซฎ็ถฒ', u'ๅ‘่„š': u'้ซฎ่…ณ', u'ๅ‘่‚ค': u'้ซฎ่†š', u'ๅ‘่ƒถ': u'้ซฎ่† ', u'ๅ‘่œ': u'้ซฎ่œ', u'ๅ‘่œก': u'้ซฎ่ Ÿ', u'ๅ‘่ธŠๅ†ฒๅ† ': u'้ซฎ่ธŠๆฒ–ๅ† ', u'ๅ‘่พซ': u'้ซฎ่พฎ', u'ๅ‘้’ˆ': u'้ซฎ้‡', u'ๅ‘้’—': u'้ซฎ้‡ต', u'ๅ‘้•ฟ': u'้ซฎ้•ท', u'ๅ‘้™…': u'้ซฎ้š›', u'ๅ‘้›•': u'้ซฎ้›•', u'ๅ‘้œœ': u'้ซฎ้œœ', u'ๅ‘้ฅฐ': u'้ซฎ้ฃพ', u'ๅ‘้ซป': 
u'้ซฎ้ซป', u'ๅ‘้ฌ“': u'้ซฎ้ฌข', u'้ซฏ่ƒก': u'้ซฏ้ฌ', u'้ซผๆพ': u'้ซผ้ฌ†', u'้ฌ…ๆพ': u'้ฌ…้ฌ†', u'ๆพไธ€ๅฃๆฐ”': u'้ฌ†ไธ€ๅฃๆฐฃ', u'ๆพไบ†': u'้ฌ†ไบ†', u'ๆพไบ›': u'้ฌ†ไบ›', u'ๆพๅ…ƒ้Ÿณ': u'้ฌ†ๅ…ƒ้Ÿณ', u'ๆพๅŠฒ': u'้ฌ†ๅ‹', u'ๆพๅŠจ': u'้ฌ†ๅ‹•', u'ๆพๅฃ': u'้ฌ†ๅฃ', u'ๆพๅ–‰': u'้ฌ†ๅ–‰', u'ๆพๅœŸ': u'้ฌ†ๅœŸ', u'ๆพๅฎฝ': u'้ฌ†ๅฏฌ', u'ๆพๅผ›': u'้ฌ†ๅผ›', u'ๆพๅฟซ': u'้ฌ†ๅฟซ', u'ๆพๆ‡ˆ': u'้ฌ†ๆ‡ˆ', u'ๆพๆ‰‹': u'้ฌ†ๆ‰‹', u'ๆพๆމ': u'้ฌ†ๆމ', u'ๆพๆ•ฃ': u'้ฌ†ๆ•ฃ', u'ๆพๆŸ”': u'้ฌ†ๆŸ”', u'ๆพๆฐ”': u'้ฌ†ๆฐฃ', u'ๆพๆตฎ': u'้ฌ†ๆตฎ', u'ๆพ็ป‘': u'้ฌ†็ถ', u'ๆพ็ดง': u'้ฌ†็ทŠ', u'ๆพ็ผ“': u'้ฌ†็ทฉ', u'ๆพ่„†': u'้ฌ†่„†', u'ๆพ่„ฑ': u'้ฌ†่„ซ', u'ๆพ่›‹': u'้ฌ†่›‹', u'ๆพ่ตท': u'้ฌ†่ตท', u'ๆพ่ฝฏ': u'้ฌ†่ปŸ', u'ๆพ้€š': u'้ฌ†้€š', u'ๆพๅผ€': u'้ฌ†้–‹', u'ๆพ้ฅผ': u'้ฌ†้ค…', u'ๆพๆพ': u'้ฌ†้ฌ†', u'้ฌˆๅ‘': u'้ฌˆ้ซฎ', u'่ƒกๅญ': u'้ฌๅญ', u'่ƒกๆขข': u'้ฌๆขข', u'่ƒกๆธฃ': u'้ฌๆธฃ', u'่ƒก้ซญ': u'้ฌ้ซญ', u'่ƒก้ซฏ': u'้ฌ้ซฏ', u'่ƒก้กป': u'้ฌ้ฌš', u'้ฌ’ๅ‘': u'้ฌ’้ซฎ', u'้กปๆ น': u'้ฌšๆ น', u'้กปๆฏ›': u'้ฌšๆฏ›', u'้กป็”Ÿ': u'้ฌš็”Ÿ', u'้กป็œ‰': u'้ฌš็œ‰', u'้กปๅ‘': u'้ฌš้ซฎ', u'้กป่ƒก': u'้ฌš้ฌ', u'้กป้กป': u'้ฌš้ฌš', u'้กป้ฒจ': u'้ฌš้ฏŠ', u'้กป้ฒธ': u'้ฌš้ฏจ', u'้ฌ“ๅ‘': u'้ฌข้ซฎ', u'ๆ–—ไธŠ': u'้ฌฅไธŠ', u'ๆ–—ไธ่ฟ‡': u'้ฌฅไธ้Ž', u'ๆ–—ไบ†': u'้ฌฅไบ†', u'ๆ–—ๆฅๆ–—ๅŽป': u'้ฌฅไพ†้ฌฅๅŽป', u'ๆ–—ๅ€’': u'้ฌฅๅ€’', u'ๆ–—ๅˆ†ๅญ': u'้ฌฅๅˆ†ๅญ', u'ๆ–—ๅŠ›': u'้ฌฅๅŠ›', u'ๆ–—ๅŠฒ': u'้ฌฅๅ‹', u'ๆ–—่ƒœ': u'้ฌฅๅ‹', u'ๆ–—ๅฃ': u'้ฌฅๅฃ', u'ๆ–—ๅˆ': u'้ฌฅๅˆ', u'ๆ–—ๅ˜ด': u'้ฌฅๅ˜ด', u'ๆ–—ๅœฐไธป': u'้ฌฅๅœฐไธป', u'ๆ–—ๅฃซ': u'้ฌฅๅฃซ', u'ๆ–—ๅฏŒ': u'้ฌฅๅฏŒ', u'ๆ–—ๅทง': u'้ฌฅๅทง', u'ๆ–—ๅนŒๅญ': u'้ฌฅๅนŒๅญ', u'ๆ–—ๅผ„': u'้ฌฅๅผ„', u'ๆ–—ๅผ•': u'้ฌฅๅผ•', u'ๆ–—ๅˆซๆฐ”': u'้ฌฅๅฝ†ๆฐฃ', u'ๆ–—ๅฝฉ': u'้ฌฅๅฝฉ', u'ๆ–—ๅฟƒ็œผ': u'้ฌฅๅฟƒ็œผ', u'ๆ–—ๅฟ—': u'้ฌฅๅฟ—', u'ๆ–—้—ท': u'้ฌฅๆ‚ถ', u'ๆ–—ๆˆ': u'้ฌฅๆˆ', u'ๆ–—ๆ‰“': u'้ฌฅๆ‰“', u'ๆ–—ๆ‰นๆ”น': u'้ฌฅๆ‰นๆ”น', u'ๆ–—ๆŠ€': u'้ฌฅๆŠ€', u'ๆ–—ๆ–‡': u'้ฌฅๆ–‡', u'ๆ–—ๆ™บ': u'้ฌฅๆ™บ', u'ๆ–—ๆšด': u'้ฌฅๆšด', u'ๆ–—ๆญฆ': u'้ฌฅๆญฆ', u'ๆ–—ๆฎด': u'้ฌฅๆฏ†', u'ๆ–—ๆฐ”': u'้ฌฅๆฐฃ', 
u'ๆ–—ๆณ•': u'้ฌฅๆณ•', u'ๆ–—ไบ‰': u'้ฌฅ็ˆญ', u'ๆ–—ไบ‰ๆ–—ๅˆ': u'้ฌฅ็ˆญ้ฌฅๅˆ', u'ๆ–—็‰Œ': u'้ฌฅ็‰Œ', u'ๆ–—็‰™ๆ‹Œ้ฝฟ': u'้ฌฅ็‰™ๆ‹Œ้ฝ’', u'ๆ–—็‰™ๆ–—้ฝฟ': u'้ฌฅ็‰™้ฌฅ้ฝ’', u'ๆ–—็‰›': u'้ฌฅ็‰›', u'ๆ–—็Š€ๅฐ': u'้ฌฅ็Š€่‡บ', u'ๆ–—็Šฌ': u'้ฌฅ็Šฌ', u'ๆ–—็‹ ': u'้ฌฅ็‹ ', u'ๆ–—ๅ ': u'้ฌฅ็–Š', u'ๆ–—็™พ่‰': u'้ฌฅ็™พ่‰', u'ๆ–—็œผ': u'้ฌฅ็œผ', u'ๆ–—็งๆ‰นไฟฎ': u'้ฌฅ็งๆ‰นไฟฎ', u'ๆ–—่€Œ้“ธๅ…ต': u'้ฌฅ่€Œ้‘„ๅ…ต', u'ๆ–—่€Œ้“ธ้”ฅ': u'้ฌฅ่€Œ้‘„้Œ', u'ๆ–—่„š': u'้ฌฅ่…ณ', u'ๆ–—่ˆฐ': u'้ฌฅ่‰ฆ', u'ๆ–—่Œถ': u'้ฌฅ่Œถ', u'ๆ–—่‰': u'้ฌฅ่‰', u'ๆ–—ๅถๅ„ฟ': u'้ฌฅ่‘‰ๅ…’', u'ๆ–—ๅถๅญ': u'้ฌฅ่‘‰ๅญ', u'ๆ–—็€': u'้ฌฅ่‘—', u'ๆ–—่Ÿ‹่Ÿ€': u'้ฌฅ่Ÿ‹่Ÿ€', u'ๆ–—่ฏ': u'้ฌฅ่ฉฑ', u'ๆ–—่‰ณ': u'้ฌฅ่ฑ”', u'ๆ–—่ตท': u'้ฌฅ่ตท', u'ๆ–—่ถฃ': u'้ฌฅ่ถฃ', u'ๆ–—้—ฒๆฐ”': u'้ฌฅ้–‘ๆฐฃ', u'ๆ–—้ธก': u'้ฌฅ้›ž', u'ๆ–—้›ช็บข': u'้ฌฅ้›ช็ด…', u'ๆ–—ๅคด': u'้ฌฅ้ ญ', u'ๆ–—้ฃŽ': u'้ฌฅ้ขจ', u'ๆ–—้ฅค': u'้ฌฅ้ฃฃ', u'ๆ–—ๆ–—': u'้ฌฅ้ฌฅ', u'ๆ–—ๅ“„': u'้ฌฅ้ฌจ', u'ๆ–—้ฑผ': u'้ฌฅ้ญš', u'ๆ–—้ธญ': u'้ฌฅ้ดจ', u'ๆ–—้นŒ้น‘': u'้ฌฅ้ตช้ถ‰', u'ๆ–—ไธฝ': u'้ฌฅ้บ—', u'้—น็€็Žฉๅ„ฟ': u'้ฌง่‘—็Žฉๅ…’', u'้—น่กจ': u'้ฌง้Œถ', u'้—น้’Ÿ': u'้ฌง้˜', u'ๅ“„ๅŠจ': u'้ฌจๅ‹•', u'ๅ“„ๅ ‚': u'้ฌจๅ ‚', u'ๅ“„็ฌ‘': u'้ฌจ็ฌ‘', u'้ƒไผŠ': u'้ฌฑไผŠ', u'้ƒๅ‹ƒ': u'้ฌฑๅ‹ƒ', u'้ƒๅ’': u'้ฌฑๅ’', u'้ƒๅ—': u'้ฌฑๅ—', u'้ƒๅ ™ไธๅถ': u'้ฌฑๅ ™ไธๅถ', u'้ƒๅกž': u'้ฌฑๅกž', u'้ƒๅž’': u'้ฌฑๅฃ˜', u'้ƒๅพ‹': u'้ฌฑๅพ‹', u'้ƒๆ‚’': u'้ฌฑๆ‚’', u'้ƒ้—ท': u'้ฌฑๆ‚ถ', u'้ƒๆ„ค': u'้ฌฑๆ†ค', u'้ƒๆŠ‘': u'้ฌฑๆŠ‘', u'้ƒๆŒน': u'้ฌฑๆŒน', u'้ƒๆž—': u'้ฌฑๆž—', u'้ƒๆฐ”': u'้ฌฑๆฐฃ', u'้ƒๆฑŸ': u'้ฌฑๆฑŸ', u'้ƒๆฒ‰ๆฒ‰': u'้ฌฑๆฒ‰ๆฒ‰', u'้ƒๆณฑ': u'้ฌฑๆณฑ', u'้ƒ็ซ': u'้ฌฑ็ซ', u'้ƒ็ƒญ': u'้ฌฑ็†ฑ', u'้ƒ็‡ ': u'้ฌฑ็‡ ', u'้ƒ็—‡': u'้ฌฑ็—‡', u'้ƒ็งฏ': u'้ฌฑ็ฉ', u'้ƒ็บก': u'้ฌฑ็ด†', u'้ƒ็ป“': u'้ฌฑ็ต', u'้ƒ่’ธ': u'้ฌฑ่’ธ', u'้ƒ่“Š': u'้ฌฑ่“Š', u'้ƒ่ก€': u'้ฌฑ่ก€', u'้ƒ้‚‘': u'้ฌฑ้‚‘', u'้ƒ้ƒ': u'้ฌฑ้ƒ', u'้ƒ้‡‘': u'้ฌฑ้‡‘', u'้ƒ้—ญ': u'้ฌฑ้–‰', u'้ƒ้™ถ': u'้ฌฑ้™ถ', u'้ƒ้ƒไธๅนณ': u'้ฌฑ้ฌฑไธๅนณ', u'้ƒ้ƒไธไน': u'้ฌฑ้ฌฑไธๆจ‚', u'้ƒ้ƒๅฏกๆฌข': u'้ฌฑ้ฌฑๅฏกๆญก', 
u'้ƒ้ƒ่€Œ็ปˆ': u'้ฌฑ้ฌฑ่€Œ็ต‚', u'้ƒ้ƒ่‘ฑ่‘ฑ': u'้ฌฑ้ฌฑ่”ฅ่”ฅ', u'้ƒ้ป‘': u'้ฌฑ้ป‘', u'้ฌผ่ฐทๅญ': u'้ฌผ่ฐทๅญ', u'้ญ‚็‰ตๆขฆ็ณป': u'้ญ‚็‰ฝๅคข็นซ', u'้ญๅพ': u'้ญๅพต', u'้ญ”่กจ': u'้ญ”้Œถ', u'้ฑผๅนฒ': u'้ญšไนพ', u'้ฑผๆพ': u'้ญš้ฌ†', u'้ฒธ้กป': u'้ฏจ้ฌš', u'้ฒ‡้ฑผ': u'้ฏฐ้ญš', u'้ธ ๅ ้นŠๅทข': u'้ณฉไฝ”้ตฒๅทข', u'ๅ‡คๅ‡ฐไบŽ้ฃž': u'้ณณๅ‡ฐไบŽ้ฃ›', u'ๅ‡คๆขจๅนฒ': u'้ณณๆขจไนพ', u'้ธฃ้’Ÿ': u'้ณด้˜', u'้ธฟๆกˆ็›ธๅบ„': u'้ดปๆกˆ็›ธ่ŽŠ', u'้ธฟ่Œƒ': u'้ดป็ฏ„', u'้ธฟ็ฏ‡ๅทจๅˆถ': u'้ดป็ฏ‡ๅทจ่ฃฝ', u'้น…ๅ‡†': u'้ตๆบ–', u'้น„ๅ‘': u'้ต ้ซฎ', u'้›•ๅฟƒ้›็ˆช': u'้ตฐๅฟƒ้›็ˆช', u'้›•ๆ‚': u'้ตฐๆ‚', u'้›•็ฟŽ': u'้ตฐ็ฟŽ', u'้›•้น—': u'้ตฐ้ถš', u'้นคๅŠ': u'้ถดๅผ”', u'้นคๅ‘': u'้ถด้ซฎ', u'้นฐ้›•': u'้นฐ้ตฐ', u'ๅ’ธๅ‘ณ': u'้นนๅ‘ณ', u'ๅ’ธๅ˜ดๆทก่ˆŒ': u'้นนๅ˜ดๆทก่ˆŒ', u'ๅ’ธๅœŸ': u'้นนๅœŸ', u'ๅ’ธๅบฆ': u'้นนๅบฆ', u'ๅ’ธๅพ—': u'้นนๅพ—', u'ๅ’ธๆ‰น': u'้นนๆ‰น', u'ๅ’ธๆฐด': u'้นนๆฐด', u'ๅ’ธๆดพ': u'้นนๆดพ', u'ๅ’ธๆตท': u'้นนๆตท', u'ๅ’ธๆทก': u'้นนๆทก', u'ๅ’ธๆน–': u'้นนๆน–', u'ๅ’ธๆฑค': u'้นนๆนฏ', u'ๅ’ธๆฝŸ': u'้นนๆฝŸ', u'ๅ’ธ็š„': u'้นน็š„', u'ๅ’ธ็ฒฅ': u'้นน็ฒฅ', u'ๅ’ธ่‚‰': u'้นน่‚‰', u'ๅ’ธ่œ': u'้นน่œ', u'ๅ’ธ่œๅนฒ': u'้นน่œไนพ', u'ๅ’ธ่›‹': u'้นน่›‹', u'ๅ’ธ็Œช่‚‰': u'้นน่ฑฌ่‚‰', u'ๅ’ธ็ฑป': u'้นน้กž', u'ๅ’ธ้ฃŸ': u'้นน้ฃŸ', u'ๅ’ธ้ฑผ': u'้นน้ญš', u'ๅ’ธ้ธญ่›‹': u'้นน้ดจ่›‹', u'ๅ’ธๅค': u'้นน้นต', u'ๅ’ธๅ’ธ': u'้นน้นน', u'็›ๆ‰“ๆ€Žไนˆๅ’ธ': u'้นฝๆ‰“ๆ€Ž้บผ้นน', u'็›ๅค': u'้นฝๆปท', u'็›ไฝ™': u'้นฝ้ค˜', u'ไธฝไบŽ': u'้บ—ๆ–ผ', u'ๆ›ฒๅฐ˜': u'้บดๅกต', u'ๆ›ฒ่˜–': u'้บดๆซฑ', u'ๆ›ฒ็”Ÿ': u'้บด็”Ÿ', u'ๆ›ฒ็ง€ๆ‰': u'้บด็ง€ๆ‰', u'ๆ›ฒ่Œ': u'้บด่Œ', u'ๆ›ฒ่ฝฆ': u'้บด่ปŠ', u'ๆ›ฒ้“ๅฃซ': u'้บด้“ๅฃซ', u'ๆ›ฒ้’ฑ': u'้บด้Œข', u'ๆ›ฒ้™ข': u'้บด้™ข', u'ๆ›ฒ้œ‰': u'้บด้ปด', u'้ขไบบๅ„ฟ': u'้บตไบบๅ…’', u'้ขไปท': u'้บตๅƒน', u'้ขๅŒ…': u'้บตๅŒ…', u'้ขๅŠ': u'้บตๅŠ', u'้ขๅฏๅ„ฟ': u'้บตๅฏๅ…’', u'้ขๅก‘': u'้บตๅก‘', u'้ขๅบ—': u'้บตๅบ—', u'้ขๅŽ‚': u'้บตๅป ', u'้ขๆ‘Š': u'้บตๆ”ค', u'้ขๆ–': u'้บตๆ–', u'้ขๆก': u'้บตๆข', u'้ขๆฑค': u'้บตๆนฏ', u'้ขๆต†': u'้บตๆผฟ', u'้ข็ฐ': u'้บต็ฐ', u'้ข็–™็˜ฉ': u'้บต็–™็˜ฉ', 
u'้ข็šฎ': u'้บต็šฎ', u'้ข็ ๅ„ฟ': u'้บต็ขผๅ…’', u'้ข็ญ‹': u'้บต็ญ‹', u'้ข็ฒ‰': u'้บต็ฒ‰', u'้ข็ณŠ': u'้บต็ณŠ', u'้ขๅ›ข': u'้บต็ณฐ', u'้ข็บฟ': u'้บต็ทš', u'้ข็ผธ': u'้บต็ผธ', u'้ข่Œถ': u'้บต่Œถ', u'้ข้ฃŸ': u'้บต้ฃŸ', u'้ข้ฅบ': u'้บต้คƒ', u'้ข้ฅผ': u'้บต้ค…', u'้ข้ฆ†': u'้บต้คจ', u'้บป่ฏ': u'้บป่—ฅ', u'้บป้†‰่ฏ': u'้บป้†‰่—ฅ', u'้บป้…ฑ้ข': u'้บป้†ฌ้บต', u'้ป„ๅนฒ้ป‘็˜ฆ': u'้ปƒไนพ้ป‘็˜ฆ', u'้ป„ๅކ': u'้ปƒๆ›†', u'้ป„ๆ›ฒ้œ‰': u'้ปƒๆ›ฒ้œ‰', u'้ป„ๅކๅฒ': u'้ปƒๆญทๅฒ', u'้ป„้‡‘่กจ': u'้ปƒ้‡‘่กจ', u'้ปƒ้ˆบ็ญ‘': u'้ปƒ้ˆบ็ญ‘', u'้ป„้’ฐ็ญ‘': u'้ปƒ้ˆบ็ญ‘', u'้ป„้’Ÿ': u'้ปƒ้˜', u'้ป„ๅ‘': u'้ปƒ้ซฎ', u'้ป„ๆ›ฒๆฏ’็ด ': u'้ปƒ้บดๆฏ’็ด ', u'้ป‘ๅฅดๅๅคฉๅฝ•': u'้ป‘ๅฅด็ฑฒๅคฉ้Œ„', u'้ป‘ๅ‘': u'้ป‘้ซฎ', u'็‚นๅŠ้’Ÿ': u'้ปžๅŠ้˜', u'็‚นๅคš้’Ÿ': u'้ปžๅคš้˜', u'็‚น้‡Œ': u'้ปž่ฃก', u'็‚น้’Ÿ': u'้ปž้˜', u'้œ‰ๆฏ’': u'้ปดๆฏ’', u'้œ‰็ด ': u'้ปด็ด ', u'้œ‰่Œ': u'้ปด่Œ', u'้œ‰้ป‘': u'้ปด้ป‘', u'้œ‰้ปง': u'้ปด้ปง', u'้ผ“้‡Œ': u'้ผ“่ฃก', u'ๅ†ฌๅ†ฌ้ผ“': u'้ผ•้ผ•้ผ“', u'้ผ ่ฏ': u'้ผ ่—ฅ', u'้ผ ๆ›ฒ่‰': u'้ผ ้บด่‰', u'้ผปๆขๅ„ฟ': u'้ผปๆขๅ…’', u'้ผปๆข': u'้ผปๆจ‘', u'้ผปๅ‡†': u'้ผปๆบ–', u'้ฝ็Ž‹่ˆ็‰›': u'้ฝŠ็Ž‹ๆจ็‰›', u'้ฝๅบ„': u'้ฝŠ่ŽŠ', u'้ฝฟๅฑๅ‘็ง€': u'้ฝ’ๅฑ้ซฎ็ง€', u'้ฝฟ่ฝๅ‘็™ฝ': u'้ฝ’่ฝ้ซฎ็™ฝ', u'้ฝฟๅ‘': u'้ฝ’้ซฎ', u'ๅ‡บๅ„ฟ': u'้ฝฃๅ…’', u'ๅ‡บๅ‰ง': u'้ฝฃๅЇ', u'ๅ‡บๅŠจ็”ป': u'้ฝฃๅ‹•็•ซ', u'ๅ‡บๅก้€š': u'้ฝฃๅก้€š', u'ๅ‡บๆˆ': u'้ฝฃๆˆฒ', u'ๅ‡บ่Š‚็›ฎ': u'้ฝฃ็ฏ€็›ฎ', u'ๅ‡บ็”ตๅฝฑ': u'้ฝฃ้›ปๅฝฑ', u'ๅ‡บ็”ต่ง†': u'้ฝฃ้›ป่ฆ–', u'้พ™ๅท': u'้พๆฒ', u'้พ™็œผๅนฒ': u'้พ็œผไนพ', u'้พ™้กป': u'้พ้ฌš', u'้พ™ๆ–—่™Žไผค': u'้พ้ฌฅ่™Žๅ‚ท', u'้พŸๅฑฑๅบ„': u'้พœๅฑฑๅบ„', u'๏ผๅ…‹ๅˆถ': u'๏ผๅ‰‹ๅˆถ', u'๏ผŒๅ…‹ๅˆถ': u'๏ผŒๅ‰‹ๅˆถ', u'๏ผๅคšๅช': u'๏ผๅคš้šป', u'๏ผๅคฉๅŽ': u'๏ผๅคฉๅพŒ', u'๏ผๅช': u'๏ผ้šป', u'๏ผไฝ™': u'๏ผ้ค˜', u'๏ผ‘ๅคฉๅŽ': u'๏ผ‘ๅคฉๅพŒ', u'๏ผ‘ๅช': u'๏ผ‘้šป', u'๏ผ‘ไฝ™': u'๏ผ‘้ค˜', u'๏ผ’ๅคฉๅŽ': u'๏ผ’ๅคฉๅพŒ', u'๏ผ’ๅช': u'๏ผ’้šป', u'๏ผ’ไฝ™': u'๏ผ’้ค˜', u'๏ผ“ๅคฉๅŽ': u'๏ผ“ๅคฉๅพŒ', u'๏ผ“ๅช': u'๏ผ“้šป', u'๏ผ“ไฝ™': u'๏ผ“้ค˜', u'๏ผ”ๅคฉๅŽ': u'๏ผ”ๅคฉๅพŒ', u'๏ผ”ๅช': u'๏ผ”้šป', 
u'๏ผ”ไฝ™': u'๏ผ”้ค˜', u'๏ผ•ๅคฉๅŽ': u'๏ผ•ๅคฉๅพŒ', u'๏ผ•ๅช': u'๏ผ•้šป', u'๏ผ•ไฝ™': u'๏ผ•้ค˜', u'๏ผ–ๅคฉๅŽ': u'๏ผ–ๅคฉๅพŒ', u'๏ผ–ๅช': u'๏ผ–้šป', u'๏ผ–ไฝ™': u'๏ผ–้ค˜', u'๏ผ—ๅคฉๅŽ': u'๏ผ—ๅคฉๅพŒ', u'๏ผ—ๅช': u'๏ผ—้šป', u'๏ผ—ไฝ™': u'๏ผ—้ค˜', u'๏ผ˜ๅคฉๅŽ': u'๏ผ˜ๅคฉๅพŒ', u'๏ผ˜ๅช': u'๏ผ˜้šป', u'๏ผ˜ไฝ™': u'๏ผ˜้ค˜', u'๏ผ™ๅคฉๅŽ': u'๏ผ™ๅคฉๅพŒ', u'๏ผ™ๅช': u'๏ผ™้šป', u'๏ผ™ไฝ™': u'๏ผ™้ค˜', u'๏ผšๅ…‹ๅˆถ': u'๏ผšๅ‰‹ๅˆถ', u'๏ผ›ๅ…‹ๅˆถ': u'๏ผ›ๅ‰‹ๅˆถ', u'๏ผŸๅ…‹ๅˆถ': u'๏ผŸๅ‰‹ๅˆถ', }
AdvancedLangConv
/AdvancedLangConv-0.01.tar.gz/AdvancedLangConv-0.01/langconv/defaulttables/zh_hant.py
zh_hant.py
from zh_hans import convtable as oldtable convtable = oldtable.copy() convtable.update({ u'16้€ฒไฝ': u'16่ฟ›ไฝ', u'16้€ฒไฝๅˆถ': u'16่ฟ›ไฝๅˆถ', u'ใ€Ž': u'โ€˜', u'ใ€': u'โ€™', u'ใ€Œ': u'โ€œ', u'ใ€': u'โ€', u'่ฌๆ›†': u'ไธ‡ๅކ', u'ไธ‰ๆฅต้ซ”': u'ไธ‰ๆž็ฎก', u'ไธ‰ๆฅต็ฎก': u'ไธ‰ๆž็ฎก', u'ไธฒๅˆ—ๅŠ ้€Ÿๅ™จ': u'ไธฒๅˆ—ๅŠ ้€Ÿๅ™จ', u'ไธฒๅˆ—': u'ไธฒ่กŒ', u'็ƒ่Œฒๅˆฅๅ…‹': u'ไนŒๅ…นๅˆซๅ…‹ๆ–ฏๅฆ', u'่‘‰้–€': u'ไนŸ้—จ', u'่Šๅฃซ': u'ไนพ้…ช', u'ไบŒๆฅต็ฎก': u'ไบŒๆž็ฎก', u'ไบŒๆฅต้ซ”': u'ไบŒๆž็ฎก', u'ไบŒ้€ฒไฝๅˆถ': u'ไบŒ่ฟ›ไฝๅˆถ', u'ไบŒ้€ฒไฝ': u'ไบŒ่ฟ›ๅˆถ', u'็ถฒ้š›็ถฒ่ทฏ': u'ไบ’่”็ฝ‘', u'ไบ’่ฏ็ถฒ': u'ไบ’่”็ฝ‘', u'ไบ’ๅ‹•ๅผ': u'ไบคไบ’ๅผ', u'ไบบๅทฅๆ™บๆ…ง': u'ไบบๅทฅๆ™บ่ƒฝ', u'็”š้บฝ': u'ไป€ไนˆ', u'็”š้บผ': u'ไป€ไนˆ', u'ไน™ๅคช็ถฒ': u'ไปฅๅคช็ฝ‘', u'่‡ช็”ฑ็ƒ': u'ไปปๆ„็ƒ', u'ๅ„ชๅ…ˆ้ †ๅบ': u'ไผ˜ๅ…ˆ็บง', u'ๆ„Ÿๆธฌ': u'ไผ ๆ„Ÿ', u'ไผฏๅˆฉ่Œฒ': u'ไผฏๅˆฉๅ…น', u'่ฒ้‡Œๆ–ฏ': u'ไผฏๅˆฉๅ…น', u'้ปž้™ฃๅœ–': u'ไฝๅ›พ', u'็ถญๅพท่ง’': u'ไฝ›ๅพ—่ง’', u'ๅธธๅผ': u'ไพ‹็จ‹', u'ไพๅ„ธ็ด€': u'ไพ็ฝ—็บช', u'ๆตท็Š': u'ไพฏ่ต›ๅ› ', u'ๆ”œๅธถๅž‹': u'ไพฟๆบๅผ', u'่ณ‡่จŠ็†่ซ–': u'ไฟกๆฏ่ฎบ', u'ๆฏ้Ÿณ': u'ๅ…ƒ้Ÿณ', u'ๆธธๆจ™': u'ๅ…‰ๆ ‡', u'ๅ…‰็ขŸ': u'ๅ…‰็›˜', u'ๅ…‰็ขŸๆฉŸ': u'ๅ…‰้ฉฑ', u'ๆŸฏๆž—้ “': u'ๅ…‹ๆž—้กฟ', u'ๅ…‹็พ…ๅŸƒ่ฅฟไบž': u'ๅ…‹็ฝ—ๅœฐไบš', u'้€ฒ็ƒ': u'ๅ…ฅ็ƒ', u'ๅ…จๅฝข': u'ๅ…จ่ง’', u'ๅ…ซ้€ฒไฝๅˆถ': u'ๅ…ซ่ฟ›ไฝๅˆถ', u'ๅ…ซ้€ฒไฝ': u'ๅ…ซ่ฟ›ๅˆถ', u'ๅ…ฌ่ปŠ': u'ๅ…ฌๅ…ฑๆฑฝ่ฝฆ', u'ๅ…ฌ่ปŠไธŠๆ›ธ': u'ๅ…ฌ่ฝฆไธŠไนฆ', u'ๅ…ญ้€ฒไฝๅˆถ': u'ๅ…ญ่ฟ›ไฝๅˆถ', u'ๅ…ญ้€ฒไฝ': u'ๅ…ญ่ฟ›ๅˆถ', u'่จ˜ๆ†ถ้ซ”': u'ๅ†…ๅญ˜', u'็”˜ๆฏ”ไบž': u'ๅ†ˆๆฏ”ไบš', u'้˜ฒๅฏซ': u'ๅ†™ไฟๆŠค', u'ๅ†ท่œ': u'ๅ‡‰่œ', u'ๅ†ท็›ค': u'ๅ‡‰่œ', u'ๅนพๅ…งไบžๆฏ”็ดข': u'ๅ‡ ๅ†…ไบšๆฏ”็ป', u'ๆขต่ฐท': u'ๅ‡ก้ซ˜', u'่จˆ็จ‹่ปŠ': u'ๅ‡บ็งŸ่ฝฆ', u'ๅˆ†ๆ•ฃๅผ': u'ๅˆ†ๅธƒๅผ', u'่งฃๆžๅบฆ': u'ๅˆ†่พจ็އ', u'ๅˆ—ๆ”ฏๆ•ฆๆ–ฏ็™ป': u'ๅˆ—ๆ”ฏๆ•ฆๅฃซ็™ป', u'่ณดๆฏ”็‘žไบž': u'ๅˆฉๆฏ”้‡Œไบš', u'่ฟฆ็ด': u'ๅŠ ็บณ', u'ๅŠ ๅฝญ': u'ๅŠ ่“ฌ', u'่ผ‰ๅ…ฅ': u'ๅŠ ่ฝฝ', u'ๅ้€ฒไฝๅˆถ': u'ๅ่ฟ›ไฝๅˆถ', u'ๅ้€ฒไฝ': u'ๅ่ฟ›ๅˆถ', u'ๅŠๅฝข': u'ๅŠ่ง’', u'ๅŽไน่ก—': u'ๅŽไน่ก—', u'ๆณขๆœญ้‚ฃ': 
u'ๅš่Œจ็“ฆ็บณ', u'็›งๅฎ‰้”': u'ๅขๆ—บ่พพ', u'่กž็”Ÿ': u'ๅซ็”Ÿ', u'่ก›็”Ÿ': u'ๅซ็”Ÿ', u'็“œๅœฐ้ฆฌๆ‹‰': u'ๅฑๅœฐ้ฉฌๆ‹‰', u'ๅŽ„็“œๅคš': u'ๅŽ„็“œๅคšๅฐ”', u'ๅŽ„็“œๅคš็ˆพ': u'ๅŽ„็“œๅคšๅฐ”', u'ๅŽ„็“œๅคšๅฐ”': u'ๅŽ„็“œๅคšๅฐ”', u'ๅŽ„ๅˆฉๅž‚ไบž': u'ๅŽ„็ซ‹็‰น้‡Œไบš', u'่ฎŠๆ•ธ': u'ๅ˜้‡', u'ๆ’ž็ƒ': u'ๅฐ็ƒ', u'ๆกŒ็ƒ': u'ๅฐ็ƒ', u'ๅ‰ๅธƒๅœฐ': u'ๅ‰ๅธƒๆ', u'ๅ“ˆ่–ฉๅ…‹': u'ๅ“ˆ่จๅ…‹ๆ–ฏๅฆ', u'ๅ“ฅๆ–ฏๅคง้ปŽๅŠ ': u'ๅ“ฅๆ–ฏ่พพ้ปŽๅŠ ', u'้›œ่จŠ': u'ๅ™ชๅฃฐ', u'ๅ› ๆ•ธ': u'ๅ› ๅญ', u'ๅ็“ฆ้ญฏ': u'ๅ›พ็“ฆๅข', u'ๅœŸๅบซๆ›ผ': u'ๅœŸๅบ“ๆ›ผๆ–ฏๅฆ', u'่–้œฒ่ฅฟไบž': u'ๅœฃๅข่ฅฟไบš', u'่–ๅ‰ๆ–ฏ็ดๅŸŸๆ–ฏ': u'ๅœฃๅŸบ่Œจๅ’Œๅฐผ็ปดๆ–ฏ', u'่–ๅ…‹้‡Œๆ–ฏๅคš็ฆๅŠๅฐผ็ถญๆ–ฏ': u'ๅœฃๅŸบ่Œจๅ’Œๅฐผ็ปดๆ–ฏ', u'่–ๆ–‡ๆฃฎๅŠๆ ผ็‘ž้‚ฃไธ': u'ๅœฃๆ–‡ๆฃฎ็‰นๅ’Œๆ ผๆž—็บณไธๆ–ฏ', u'่–้ฆฌๅˆฉ่ซพ': u'ๅœฃ้ฉฌๅŠ›่ฏบ', u'่“‹ไบž้‚ฃ': u'ๅœญไบš้‚ฃ', u'ๅฆๅฐšๅฐผไบž': u'ๅฆๆก‘ๅฐผไบš', u'่กฃ็ดขๆฏ”ไบž': u'ๅŸƒๅกžไฟ„ๆฏ”ไบš', u'่กฃ็ดขๅŒนไบž': u'ๅŸƒๅกžไฟ„ๆฏ”ไบš', u'ๅŠŸ่ƒฝ่ฎŠๆ•ธๅ็จฑ': u'ๅŸŸๅ', u'ๅ‰้‡Œๅทดๆ–ฏ': u'ๅŸบ้‡Œๅทดๆ–ฏ', u'ๅก”ๅ‰ๅ…‹': u'ๅก”ๅ‰ๅ…‹ๆ–ฏๅฆ', u'ๅกžๆ‹‰ๅˆฉๆ˜‚': u'ๅกžๆ‹‰ๅˆฉๆ˜‚', u'ๅกžๆ™ฎๅ‹’ๆ–ฏ': u'ๅกžๆตฆ่ทฏๆ–ฏ', u'ๅกžๅธญ็ˆพ': u'ๅกž่ˆŒๅฐ”', u'้Ÿณๆ•ˆๅก': u'ๅฃฐๅก', u'ๅคš็ฑณๅฐผๅ…‹': u'ๅคš็ฑณๅฐผๅŠ ๅ›ฝ', u'ๅคœๅญฆ': u'ๅคœๆ ก', u'็ฆๅฃซ': u'ๅคงไผ—', u'็ฆๆ–ฏ': u'ๅคงไผ—', u'ๅคง่ก›็ขงๅ’ธ': u'ๅคงๅซยท่ดๅ…‹ๆฑ‰ๅง†', u'้ ญๆงŒ': u'ๅคด็ƒ', u'่ณ“ๅฃซ': u'ๅฅ”้ฉฐ', u'ๅนณๆฒป': u'ๅฅ”้ฉฐ', u'ๅฟŒๅป‰': u'ๅฅถๆฒน', u'ๅญ—ๅ…ƒไผš': u'ๅญ—ๅ…ƒไผš', u'ๅญ—ๅ…ƒๆœƒ': u'ๅญ—ๅ…ƒไผš', u'ๅญ—ๅ…ƒๆฟŸ': u'ๅญ—ๅ…ƒๆตŽ', u'ๅญ—ๅ…ƒๆตŽ': u'ๅญ—ๅ…ƒๆตŽ', u'ๅญ—ๅž‹ๅคงๅฐ': u'ๅญ—ๅท', u'ๅญ—ๅž‹ๆช”': u'ๅญ—ๅบ“', u'ๆฌ„ไฝ': u'ๅญ—ๆฎต', u'ๅญ—ๅ…ƒ': u'ๅญ—็ฌฆ', u'ๅญ—็ฏ€': u'ๅญ—่Š‚', u'ไฝๅ…ƒ็ต„': u'ๅญ—่Š‚', u'ๅญ˜ๆช”': u'ๅญ˜็›˜', u'ๅฎ‰ๅœฐๅกๅŠๅทดๅธƒ้”': u'ๅฎ‰ๆ็“œๅ’Œๅทดๅธƒ่พพ', u'ๅทจ้›†': u'ๅฎ', u'ๅฏฌ้ ป': u'ๅฎฝๅธฆ', u'ๅฎšๅ€': u'ๅฏปๅ€', u'ๅฅˆๅŠๅˆฉไบž': u'ๅฐผๆ—ฅๅˆฉไบš', u'ๅฐผๆ—ฅๅˆฉไบž': u'ๅฐผๆ—ฅๅˆฉไบš', u'ๅฐผๆ—ฅๅˆฉไบš': u'ๅฐผๆ—ฅๅˆฉไบš', u'ๅฐผๆ—ฅ็ˆพ': u'ๅฐผๆ—ฅๅฐ”', u'ๅฐผๆ—ฅๅฐ”': u'ๅฐผๆ—ฅๅฐ”', u'็ซ ็ฏ€้™„่จป': u'ๅฐพๆณจ', u'ๅ€ๅŸŸ็ถฒ': u'ๅฑ€ๅŸŸ็ฝ‘', u'้‰…่ณˆ': u'ๅทจๅ•†', 
u'ๅทด่ฒๅคš': u'ๅทดๅทดๅคšๆ–ฏ', u'ๅทดๅธƒไบž็ดๅนพๅ…งไบž': u'ๅทดๅธƒไบšๆ–ฐๅ‡ ๅ†…ไบš', u'ๅธƒๅธŒ': u'ๅธƒไป€', u'ๅธƒๆฎŠ': u'ๅธƒไป€', u'ๅธƒๅŸบ็ดๆณ•็ดข': u'ๅธƒๅŸบ็บณๆณ•็ดข', u'ๅธƒๅ‰็ดๆณ•็ดข': u'ๅธƒๅŸบ็บณๆณ•็ดข', u'ๅธƒๅธŒไบž': u'ๅธƒๅธŒไบš', u'ๅธƒๅธŒไบš': u'ๅธƒๅธŒไบš', u'่’ฒ้š†ๅœฐ': u'ๅธƒ้š†่ฟช', u'ๅธŒ็‰นๆ‹‰': u'ๅธŒ็‰นๅ‹’', u'ๅธ›็‰': u'ๅธ•ๅŠณ', u'ๅนณๆฒปไน‹ไนฑ': u'ๅนณๆฒปไน‹ไนฑ', u'ๅนณๆฒปไน‹ไบ‚': u'ๅนณๆฒปไน‹ไนฑ', u'้žๅŒๆญฅ': u'ๅผ‚ๆญฅ', u'่ฟดๅœˆ': u'ๅพช็Žฏ', u'ๅฟซ้–ƒ่จ˜ๆ†ถ้ซ”': u'ๅฟซ้—ชๅญ˜ๅ‚จๅ™จ', u'ๅŒฏๆตๆŽ’': u'ๆ€ป็บฟ', u'็พฉๅคงๅˆฉ': u'ๆ„ๅคงๅˆฉ', u'้ป›ๅฎ‰ๅจœ': u'ๆˆดๅฎ‰ๅจœ', u'ๅฑ‹ไปท': u'ๆˆฟไปท', u'็ดข็พ…้–€็พคๅณถ': u'ๆ‰€็ฝ—้—จ็พคๅฒ›', u'ๆ‰“ๅฐ': u'ๆ‰“ๅฐ', u'ๅˆ—ๅฐ': u'ๆ‰“ๅฐ', u'ๅฐ่กจๆฉŸ': u'ๆ‰“ๅฐๆœบ', u'ๆ‰“ๅฐๆฉŸ': u'ๆ‰“ๅฐๆœบ', u'ๅฐ„้–€': u'ๆ‰“้—จ', u'ๆŽƒ็ž„ๅ™จ': u'ๆ‰ซ็ž„ไปช', u'ๆ‹ฌๅผง': u'ๆ‹ฌๅท', u'ๆ‹ฟ็ ดๅด™': u'ๆ‹ฟ็ ดไป‘', u'็ฉๆžถ': u'ๆท่ฑน', u'ไป‹้ข': u'ๆŽฅๅฃ', u'ๆŽงๅˆถ้ …': u'ๆŽงไปถ', u'่ณ‡ๆ–™ๅบซ': u'ๆ•ฐๆฎๅบ“', u'ๆฑถ่Š': u'ๆ–‡่Žฑ', u'ๅฒ็“ฆๆฟŸ่˜ญ': u'ๆ–ฏๅจๅฃซๅ…ฐ', u'ๆ–ฏๆด›็ถญๅฐผไบž': u'ๆ–ฏๆด›ๆ–‡ๅฐผไบš', u'็ด่ฅฟ่˜ญ': u'ๆ–ฐ่ฅฟๅ…ฐ', u'ๅณ้ฃŸ้บต': u'ๆ–นไพฟ้ข', u'ๅฟซ้€Ÿ้ข': u'ๆ–นไพฟ้ข', u'ๆณก้บต': u'ๆ–นไพฟ้ข', u'้€Ÿ้ฃŸ้บต': u'ๆ–นไพฟ้ข', u'ไผบๆœๅ™จ': u'ๆœๅŠกๅ™จ', u'ๆฉŸๆขฐไบบ': u'ๆœบๅ™จไบบ', u'ๆฉŸๅ™จไบบ': u'ๆœบๅ™จไบบ', u'่จฑๅฏๆฌŠ': u'ๆƒ้™', u'ๅฏถ็…': u'ๆ ‡ๅฟ—', u'ๆ ผ็‘ž้‚ฃ้”': u'ๆ ผๆž—็บณ่พพ', u'ๆฆดๆงค': u'ๆฆด่Žฒ', u'ๆฆดๆขฟ': u'ๆฆด่Žฒ', u'่Œ…ๅˆฉๅก”ๅฐผไบž': u'ๆฏ›้‡Œๅก”ๅฐผไบš', u'ๆฏ›้‡Œ่ฃ˜ๆ–ฏ': u'ๆฏ›้‡Œๆฑ‚ๆ–ฏ', u'ๆจก้‡Œ่ฅฟๆ–ฏ': u'ๆฏ›้‡Œๆฑ‚ๆ–ฏ', u'ๅŽไน': u'ๆฐ‘ไน', u'ไธญๆจ‚': u'ๆฐ‘ไน', u'ๆฐธๆ›†': u'ๆฐธๅކ', u'ๆฒ™ๅœฐ้˜ฟๆ‹‰ไผฏ': u'ๆฒ™็‰น้˜ฟๆ‹‰ไผฏ', u'ๆฒ™็ƒๅœฐ้˜ฟๆ‹‰ไผฏ': u'ๆฒ™็‰น้˜ฟๆ‹‰ไผฏ', u'ๆณขๅฃซๅฐผไบž่ตซๅกžๅ“ฅ็ถญ็ด': u'ๆณขๆ–ฏๅฐผไบšๅ’Œ้ป‘ๅกžๅ“ฅ็ปด้‚ฃ', u'่พ›ๅทดๅจ': u'ๆดฅๅทดๅธƒ้Ÿฆ', u'ๅฎ้ƒฝๆ‹‰ๆ–ฏ': u'ๆดช้ƒฝๆ‹‰ๆ–ฏ', u'ๆปฟ16้€ฒไฝ': u'ๆปก16่ฟ›ไฝ', u'ๆปฟไบŒ้€ฒไฝ': u'ๆปกไบŒ่ฟ›ไฝ', u'ๆปฟๅ…ซ้€ฒไฝ': u'ๆปกๅ…ซ่ฟ›ไฝ', u'ๆปฟๅ…ญ้€ฒไฝ': u'ๆปกๅ…ญ่ฟ›ไฝ', u'ๆปฟๅๅ…ญ้€ฒไฝ': u'ๆปกๅๅ…ญ่ฟ›ไฝ', u'ๆปฟๅ้€ฒไฝ': u'ๆปกๅ่ฟ›ไฝ', u'่“‹็ซ้‹': 
u'็ซ้”…็›–ๅธฝ', u'ๅƒ้‡Œ้”ๆ‰˜่ฒๅ“ฅ': u'็‰น็ซ‹ๅฐผ่พพๅ’Œๆ‰˜ๅทดๅ“ฅ', u'็‹—้šป': u'็Šฌๅช', u'ๅกไฝฉ้›…่’‚': u'็ๅฆฎๅผ—ยทๅกๆ™ฎ้‡Œไบš่’‚', u'่ซพ้ญฏ': u'็‘™้ฒ', u'่ฌ้‚ฃๆœ': u'็“ฆๅŠช้˜ฟๅ›พ', u'ๆบซ็ดๅœ–': u'็“ฆๅŠช้˜ฟๅ›พ', u'็ขŸ็‰‡': u'็›˜็‰‡', u'็Ÿญ่จŠ': u'็Ÿญไฟก', u'็ฐก่จŠ': u'็Ÿญไฟก', u'็Ÿฝๅฐ˜': u'็Ÿฝๅฐ˜', u'็Ÿฝๅกต': u'็Ÿฝๅฐ˜', u'็Ÿฝ่‚บ': u'็Ÿฝ่‚บ', u'็Ÿฝ้’ข': u'็Ÿฝ้’ข', u'็Ÿฝ้‹ผ': u'็Ÿฝ้’ข', u'็Ÿฝ': u'็ก…', u'็Ÿฝ็‰‡': u'็ก…็‰‡', u'็Ÿฝ่ฐท': u'็ก…่ฐท', u'็กฌ้ซ”': u'็กฌไปถ', u'็กฌ็ขŸ': u'็กฌ็›˜', u'็ฃ็ขŸ': u'็ฃ็›˜', u'็ฃ่ปŒ': u'็ฃ้“', u'่‘›ๆ‘ฉ': u'็ง‘ๆ‘ฉ็ฝ—', u'่ฑก็‰™ๆตทๅฒธ': u'็ง‘็‰น่ฟช็“ฆ', u'่กŒๅ‹•้›ป่ฉฑ': u'็งปๅŠจ็”ต่ฏ', u'ๆตๅ‹•้›ป่ฉฑ': u'็งปๅŠจ็”ต่ฏ', u'็จ‹ๅผๆŽงๅˆถ': u'็จ‹ๆŽง', u'็ชๅฐผ่ฅฟไบž': u'็ชๅฐผๆ–ฏ', u'่ฐๆ˜Ÿ': u'็ฌ‘ๆ˜Ÿ', u'็ญ‰ๆ–ผ': u'็ญ‰ไบŽ', u'้‹็ฎ—ๅ…ƒ': u'็ฎ—ๅญ', u'ๆผ”็ฎ—ๆณ•': u'็ฎ—ๆณ•', u'้ก†้€ฒ็ƒ': u'็ฒ’ๅ…ฅ็ƒ', u'็ดข้ฆฌๅˆฉไบž': u'็ดข้ฉฌ้‡Œ', u'็ถฒ่ทฏ': u'็ฝ‘็ปœ', u'็ถฒ็ตก': u'็ฝ‘็ปœ', u'ๅฏฎๅœ‹': u'่€ๆŒ', u'่‚ฏ้›…': u'่‚ฏๅฐผไบš', u'่‚ฏไบž': u'่‚ฏๅฐผไบš', u'่‡ช็”ฑ็ƒๅ‘˜': u'่‡ช็”ฑ็ƒๅ‘˜', u'่‡ช็”ฑ็ƒๅ“ก': u'่‡ช็”ฑ็ƒๅ‘˜', u'ๅ–ฎ่ปŠ': u'่‡ช่กŒ่ฝฆ', u'ๅคช็ฉบๆขญ': u'่ˆชๅคฉ้ฃžๆœบ', u'็ฉฟๆขญๆฉŸ': u'่ˆชๅคฉ้ฃžๆœบ', u'็ฏ€ๆ…ถ': u'่Š‚ๆ—ฅ', u'ๆ™ถๅ…ƒ': u'่Šฏ็‰‡', u'ๆ™ถ็‰‡': u'่Šฏ็‰‡', u'่˜‡ๅˆฉๅ—': u'่‹้‡Œๅ—', u'ๅฃซๅคšๅ•คๆขจ': u'่‰่Ž“', u'่Žซไธ‰ๆฏ”ๅ…‹': u'่Žซๆก‘ๆฏ”ๅ…‹', u'่ณด็ดขๆ‰˜': u'่Žฑ็ดขๆ‰˜', u'่พญๅฝ™': u'่ฏๆฑ‡', u'็‰‡่ชž': u'่ฏ็ป„', u'่ชฟๅˆถ่งฃ่ชฟๅ™จ': u'่ฐƒๅˆถ่งฃ่ฐƒๅ™จ', u'ๆ•ธๆ“šๆฉŸ': u'่ฐƒๅˆถ่งฃ่ฐƒๅ™จ', u'่ฒๅ—': u'่ดๅฎ', u'ๅฐšๆฏ”ไบž': u'่ตžๆฏ”ไบš', u'็ป‘็ดง่ทณ': u'่นฆๆž่ทณ', u'็ฌจ่ฑฌ่ทณ': u'่นฆๆž่ทณ', u'่ปŸ้ซ”': u'่ฝฏไปถ', u'่ปŸไปถ': u'่ฝฏไปถ', u'่ปŸ็ขŸๆฉŸ': u'่ฝฏ้ฉฑ', u'็ฑณ้ซ˜ๅฅง้›ฒ': u'่ฟˆๅ…‹ๅฐ”ยทๆฌงๆ–‡', u'่ˆ’้บฅๅŠ ': u'่ฟˆๅ…‹ๅฐ”ยท่ˆ’้ฉฌ่ตซ', u'้ ็จ‹ๆŽงๅˆถ': u'่ฟœ็จ‹ๆŽงๅˆถ', u'่ฟœ็จ‹ๆŽงๅˆถ': u'่ฟœ็จ‹ๆŽงๅˆถ', u'ไบžๅกžๆ‹œ็„ถ': u'้˜ฟๅกžๆ‹œ็–†', u'้˜ฟๆ‹‰ไผฏ่ฏๅˆๅคงๅ…ฌๅœ‹': u'้˜ฟๆ‹‰ไผฏ่”ๅˆ้…‹้•ฟๅ›ฝ', u'ๆ•ฃ้’ฑ': u'้›ถ้’ฑ', u'ๅ—้Ÿ“': u'้Ÿฉๅ›ฝ', u'้ฆฌ็ˆพๅœฐๅคซ': u'้ฉฌๅฐ”ไปฃๅคซ', u'ๆฒ™่Šฌ': u'้ฉฌๆ‹‰็‰นยท่จ่Šฌ', u'้ฆฌ็ˆพไป–': 
u'้ฉฌ่€ณไป–', u'่ฌไบ‹ๅพ—': u'้ฉฌ่‡ช่พพ', u'้ฆฌๅˆฉๅ…ฑๅ’Œๅœ‹': u'้ฉฌ้‡Œๅ…ฑๅ’Œๅ›ฝ', u'้ ่จญ': u'้ป˜่ฎค', u'ๆป‘้ผ ': u'้ผ ๆ ‡', })
AdvancedLangConv
/AdvancedLangConv-0.01.tar.gz/AdvancedLangConv-0.01/langconv/defaulttables/zh_cn.py
zh_cn.py
from zh_hant import convtable as oldtable convtable = oldtable.copy() convtable.update({ u'โ€œ': u'ใ€Œ', u'โ€': u'ใ€', u'โ€˜': u'ใ€Ž', u'โ€™': u'ใ€', u'ไธ‰ๆฅต้ซ”': u'ไธ‰ๆฅต็ฎก', u'ไธ่‘—็—•่ทก': u'ไธ็€็—•่ทก', u'ไธ่‘—้‚Š้š›': u'ไธ็€้‚Š้š›', u'ไธ–็•Œ่ฃก': u'ไธ–็•Œ่ฃ', u'ไธ–็•Œ้‡Œ': u'ไธ–็•Œ่ฃ', u'ไธญๆ–‡้‡Œ': u'ไธญๆ–‡่ฃ', u'ไธญๆ–‡่ฃก': u'ไธญๆ–‡่ฃ', u'ๆฐ‘ไน': u'ไธญๆจ‚', u'ๅŽไน': u'ไธญๆจ‚', u'ๆŸฅๅพท': u'ไนๅพ—', u'ไน˜่‘—': u'ไน˜็€', u'ไน˜่‘—ไฝœ': u'ไน˜่‘—ไฝœ', u'ไน˜่‘—ๅ': u'ไน˜่‘—ๅ', u'ไน˜่‘—ๆ›ธ': u'ไน˜่‘—ๆ›ธ', u'ไน˜่‘—็จฑ': u'ไน˜่‘—็จฑ', u'ไน˜่‘—่€…': u'ไน˜่‘—่€…', u'ไน˜่‘—่ฟฐ': u'ไน˜่‘—่ฟฐ', u'ไน˜่‘—้Œ„': u'ไน˜่‘—้Œ„', u'่‘‰้–€': u'ไนŸ้–€', u'ไบŒๆฅต้ซ”': u'ไบŒๆฅต็ฎก', u'็ถฒ้š›็ถฒ่ทฏ': u'ไบ’่ฏ็ถฒ', u'ๅ› ็‰น็ฝ‘': u'ไบ’่ฏ็ถฒ', u'ไบฎ่‘—': u'ไบฎ็€', u'ไบฎ่‘—ไฝœ': u'ไบฎ่‘—ไฝœ', u'ไบฎ่‘—ๅ': u'ไบฎ่‘—ๅ', u'ไบฎ่‘—ๆ›ธ': u'ไบฎ่‘—ๆ›ธ', u'ไบฎ่‘—็จฑ': u'ไบฎ่‘—็จฑ', u'ไบฎ่‘—่€…': u'ไบฎ่‘—่€…', u'ไบฎ่‘—่ฟฐ': u'ไบฎ่‘—่ฟฐ', u'ไบฎ่‘—้Œ„': u'ไบฎ่‘—้Œ„', u'ไบบๅทฅๆ™บๆ…ง': u'ไบบๅทฅๆ™บ่ƒฝ', u'ไป—่‘—': u'ไป—็€', u'ไป—่‘—ไฝœ': u'ไป—่‘—ไฝœ', u'ไป—่‘—ๅ': u'ไป—่‘—ๅ', u'ไป—่‘—ๆ›ธ': u'ไป—่‘—ๆ›ธ', u'ไป—่‘—็จฑ': u'ไป—่‘—็จฑ', u'ไป—่‘—่€…': u'ไป—่‘—่€…', u'ไป—่‘—่ฟฐ': u'ไป—่‘—่ฟฐ', u'ไป—่‘—้Œ„': u'ไป—่‘—้Œ„', u'ไปฃ่กจ่‘—': u'ไปฃ่กจ็€', u'ไปฃ่กจ่‘—ไฝœ': u'ไปฃ่กจ่‘—ไฝœ', u'ไปฃ่กจ่‘—ๅ': u'ไปฃ่กจ่‘—ๅ', u'ไปฃ่กจ่‘—ๆ›ธ': u'ไปฃ่กจ่‘—ๆ›ธ', u'ไปฃ่กจ่‘—็จฑ': u'ไปฃ่กจ่‘—็จฑ', u'ไปฃ่กจ่‘—่€…': u'ไปฃ่กจ่‘—่€…', u'ไปฃ่กจ่‘—่ฟฐ': u'ไปฃ่กจ่‘—่ฟฐ', u'ไปฃ่กจ่‘—้Œ„': u'ไปฃ่กจ่‘—้Œ„', u'่ฒ้‡Œๆ–ฏ': u'ไผฏๅˆฉ่Œฒ', u'ไผด่‘—': u'ไผด็€', u'ไผด่‘—ไฝœ': u'ไผด่‘—ไฝœ', u'ไผด่‘—ๅ': u'ไผด่‘—ๅ', u'ไผด่‘—ๆ›ธ': u'ไผด่‘—ๆ›ธ', u'ไผด่‘—็จฑ': u'ไผด่‘—็จฑ', u'ไผด่‘—่€…': u'ไผด่‘—่€…', u'ไผด่‘—่ฟฐ': u'ไผด่‘—่ฟฐ', u'ไผด่‘—้Œ„': u'ไผด่‘—้Œ„', u'ๅญ—็ฏ€': u'ไฝๅ…ƒ็ต„', u'ๅญ—่Š‚': u'ไฝๅ…ƒ็ต„', u'ไฝŽ่‘—': u'ไฝŽ็€', u'ไฝŽ่‘—ไฝœ': u'ไฝŽ่‘—ไฝœ', u'ไฝŽ่‘—ๅ': u'ไฝŽ่‘—ๅ', u'ไฝŽ่‘—ๆ›ธ': u'ไฝŽ่‘—ๆ›ธ', u'ไฝŽ่‘—็จฑ': u'ไฝŽ่‘—็จฑ', u'ไฝŽ่‘—่€…': u'ไฝŽ่‘—่€…', u'ไฝŽ่‘—่ฟฐ': u'ไฝŽ่‘—่ฟฐ', u'ไฝŽ่‘—้Œ„': 
u'ไฝŽ่‘—้Œ„', u'ไฝ่‘—': u'ไฝ็€', u'ไฝ่‘—ไฝœ': u'ไฝ่‘—ไฝœ', u'ไฝ่‘—ๅ': u'ไฝ่‘—ๅ', u'ไฝ่‘—ๆ›ธ': u'ไฝ่‘—ๆ›ธ', u'ไฝ่‘—็จฑ': u'ไฝ่‘—็จฑ', u'ไฝ่‘—่€…': u'ไฝ่‘—่€…', u'ไฝ่‘—่ฟฐ': u'ไฝ่‘—่ฟฐ', u'ไฝ่‘—้Œ„': u'ไฝ่‘—้Œ„', u'็ถญๅพท่ง’': u'ไฝ›ๅพ—่ง’', u'ไฝœๅ“่ฃก': u'ไฝœๅ“่ฃ', u'ไฝœๅ“้‡Œ': u'ไฝœๅ“่ฃ', u'ไพ†่‘—': u'ไพ†็€', u'ไพ†่‘—ไฝœ': u'ไพ†่‘—ไฝœ', u'ไพ†่‘—ๅ': u'ไพ†่‘—ๅ', u'ไพ†่‘—ๆ›ธ': u'ไพ†่‘—ๆ›ธ', u'ไพ†่‘—็จฑ': u'ไพ†่‘—็จฑ', u'ไพ†่‘—่€…': u'ไพ†่‘—่€…', u'ไพ†่‘—่ฟฐ': u'ไพ†่‘—่ฟฐ', u'ไพ†่‘—้Œ„': u'ไพ†่‘—้Œ„', u'ๆตท็Š': u'ไพฏ่ณฝๅ› ', u'ไฟ้šœ่‘—': u'ไฟ้šœ็€', u'ไฟ้šœ่‘—ไฝœ': u'ไฟ้šœ่‘—ไฝœ', u'ไฟ้šœ่‘—ๅ': u'ไฟ้šœ่‘—ๅ', u'ไฟ้šœ่‘—ๆ›ธ': u'ไฟ้šœ่‘—ๆ›ธ', u'ไฟ้šœ่‘—็จฑ': u'ไฟ้šœ่‘—็จฑ', u'ไฟ้šœ่‘—่€…': u'ไฟ้šœ่‘—่€…', u'ไฟ้šœ่‘—่ฟฐ': u'ไฟ้šœ่‘—่ฟฐ', u'ไฟ้šœ่‘—้Œ„': u'ไฟ้šœ่‘—้Œ„', u'ไฟก่‘—': u'ไฟก็€', u'ไฟก่‘—ไฝœ': u'ไฟก่‘—ไฝœ', u'ไฟก่‘—ๅ': u'ไฟก่‘—ๅ', u'ไฟก่‘—ๆ›ธ': u'ไฟก่‘—ๆ›ธ', u'ไฟก่‘—็จฑ': u'ไฟก่‘—็จฑ', u'ไฟก่‘—่€…': u'ไฟก่‘—่€…', u'ไฟก่‘—่ฟฐ': u'ไฟก่‘—่ฟฐ', u'ไฟก่‘—้Œ„': u'ไฟก่‘—้Œ„', u'ๅ€™่‘—': u'ๅ€™็€', u'ๅ€™่‘—ไฝœ': u'ๅ€™่‘—ไฝœ', u'ๅ€™่‘—ๅ': u'ๅ€™่‘—ๅ', u'ๅ€™่‘—ๆ›ธ': u'ๅ€™่‘—ๆ›ธ', u'ๅ€™่‘—็จฑ': u'ๅ€™่‘—็จฑ', u'ๅ€™่‘—่€…': u'ๅ€™่‘—่€…', u'ๅ€™่‘—่ฟฐ': u'ๅ€™่‘—่ฟฐ', u'ๅ€™่‘—้Œ„': u'ๅ€™่‘—้Œ„', u'ๅ€Ÿ่‘—': u'ๅ€Ÿ็€', u'ๅ€Ÿ่‘—ไฝœ': u'ๅ€Ÿ่‘—ไฝœ', u'ๅ€Ÿ่‘—ๅ': u'ๅ€Ÿ่‘—ๅ', u'ๅ€Ÿ่‘—ๆ›ธ': u'ๅ€Ÿ่‘—ๆ›ธ', u'ๅ€Ÿ่‘—็จฑ': u'ๅ€Ÿ่‘—็จฑ', u'ๅ€Ÿ่‘—่€…': u'ๅ€Ÿ่‘—่€…', u'ๅ€Ÿ่‘—่ฟฐ': u'ๅ€Ÿ่‘—่ฟฐ', u'ๅ€Ÿ่‘—้Œ„': u'ๅ€Ÿ่‘—้Œ„', u'ๅš่‘—': u'ๅš็€', u'ๅš่‘—ไฝœ': u'ๅš่‘—ไฝœ', u'ๅš่‘—ๅ': u'ๅš่‘—ๅ', u'ๅš่‘—ๆ›ธ': u'ๅš่‘—ๆ›ธ', u'ๅš่‘—็จฑ': u'ๅš่‘—็จฑ', u'ๅš่‘—่€…': u'ๅš่‘—่€…', u'ๅš่‘—่ฟฐ': u'ๅš่‘—่ฟฐ', u'ๅš่‘—้Œ„': u'ๅš่‘—้Œ„', u'ๅด่‘—': u'ๅด็€', u'ๅด่‘—ไฝœ': u'ๅด่‘—ไฝœ', u'ๅด่‘—ๅ': u'ๅด่‘—ๅ', u'ๅด่‘—ๆ›ธ': u'ๅด่‘—ๆ›ธ', u'ๅด่‘—็จฑ': u'ๅด่‘—็จฑ', u'ๅด่‘—่€…': u'ๅด่‘—่€…', u'ๅด่‘—่ฟฐ': u'ๅด่‘—่ฟฐ', u'ๅด่‘—้Œ„': u'ๅด่‘—้Œ„', u'ๅท่‘—': u'ๅท็€', u'ๅท่‘—ไฝœ': u'ๅท่‘—ไฝœ', u'ๅท่‘—ๅ': u'ๅท่‘—ๅ', 
u'ๅท่‘—ๆ›ธ': u'ๅท่‘—ๆ›ธ', u'ๅท่‘—็จฑ': u'ๅท่‘—็จฑ', u'ๅท่‘—่€…': u'ๅท่‘—่€…', u'ๅท่‘—่ฟฐ': u'ๅท่‘—่ฟฐ', u'ๅท่‘—้Œ„': u'ๅท่‘—้Œ„', u'ๅ‚™่‘—': u'ๅ‚™็€', u'ๅ‚™่‘—ไฝœ': u'ๅ‚™่‘—ไฝœ', u'ๅ‚™่‘—ๅ': u'ๅ‚™่‘—ๅ', u'ๅ‚™่‘—ๆ›ธ': u'ๅ‚™่‘—ๆ›ธ', u'ๅ‚™่‘—็จฑ': u'ๅ‚™่‘—็จฑ', u'ๅ‚™่‘—่€…': u'ๅ‚™่‘—่€…', u'ๅ‚™่‘—่ฟฐ': u'ๅ‚™่‘—่ฟฐ', u'ๅ‚™่‘—้Œ„': u'ๅ‚™่‘—้Œ„', u'ๅ‡ถๆฎ˜': u'ๅ…‡ๆฎ˜', u'ๅ‡ถๆฎบ': u'ๅ…‡ๆฎบ', u'้›ช้“้พ™': u'ๅ…ˆ้€ฒ', u'้›ช้ต้พ': u'ๅ…ˆ้€ฒ', u'ๅ…‰่‘—': u'ๅ…‰็€', u'ๅ…‰่‘—ไฝœ': u'ๅ…‰่‘—ไฝœ', u'ๅ…‰่‘—ๅ': u'ๅ…‰่‘—ๅ', u'ๅ…‰่‘—ๆ›ธ': u'ๅ…‰่‘—ๆ›ธ', u'ๅ…‰่‘—็จฑ': u'ๅ…‰่‘—็จฑ', u'ๅ…‰่‘—่€…': u'ๅ…‰่‘—่€…', u'ๅ…‰่‘—่ฟฐ': u'ๅ…‰่‘—่ฟฐ', u'ๅ…‰่‘—้Œ„': u'ๅ…‰่‘—้Œ„', u'ๆŸฏๆž—้ “': u'ๅ…‹ๆž—้ “', u'ๅ…‹็พ…ๅŸƒ่ฅฟไบž': u'ๅ…‹็พ…ๅœฐไบž', u'ๅ…ฌ่ปŠไธŠๆ›ธ': u'ๅ…ฌ่ปŠไธŠๆ›ธ', u'ๅ†€่‘—': u'ๅ†€็€', u'ๅ†€่‘—ไฝœ': u'ๅ†€่‘—ไฝœ', u'ๅ†€่‘—ๅ': u'ๅ†€่‘—ๅ', u'ๅ†€่‘—ๆ›ธ': u'ๅ†€่‘—ๆ›ธ', u'ๅ†€่‘—็จฑ': u'ๅ†€่‘—็จฑ', u'ๅ†€่‘—่€…': u'ๅ†€่‘—่€…', u'ๅ†€่‘—่ฟฐ': u'ๅ†€่‘—่ฟฐ', u'ๅ†€่‘—้Œ„': u'ๅ†€่‘—้Œ„', u'ๅ†’่‘—': u'ๅ†’็€', u'ๅ†’่‘—ไฝœ': u'ๅ†’่‘—ไฝœ', u'ๅ†’่‘—ๅ': u'ๅ†’่‘—ๅ', u'ๅ†’่‘—ๆ›ธ': u'ๅ†’่‘—ๆ›ธ', u'ๅ†’่‘—็จฑ': u'ๅ†’่‘—็จฑ', u'ๅ†’่‘—่€…': u'ๅ†’่‘—่€…', u'ๅ†’่‘—่ฟฐ': u'ๅ†’่‘—่ฟฐ', u'ๅ†’่‘—้Œ„': u'ๅ†’่‘—้Œ„', u'ๅ†ฌๅคฉ้‡Œ': u'ๅ†ฌๅคฉ่ฃ', u'ๅ†ฌๅคฉ่ฃก': u'ๅ†ฌๅคฉ่ฃ', u'ๅ†ฌๆ—ฅ่ฃก': u'ๅ†ฌๆ—ฅ่ฃ', u'ๅ†ฌๆ—ฅ้‡Œ': u'ๅ†ฌๆ—ฅ่ฃ', u'ๅˆ†ๅธƒ': u'ๅˆ†ไฝˆ', u'ๅˆ†ๅธƒๆ–ผ': u'ๅˆ†ไฝˆๆ–ผ', u'ๅˆ†ๅธƒไบŽ': u'ๅˆ†ไฝˆๆ–ผ', u'ๅˆ—ๆ”ฏๆ•ฆๆ–ฏ็™ป': u'ๅˆ—ๆ”ฏๆ•ฆๅฃซ็™ป', u'่ณดๆฏ”็‘žไบž': u'ๅˆฉๆฏ”้‡Œไบž', u'ๅˆถ่‘—': u'ๅˆถ็€', u'ๅˆถ่‘—ไฝœ': u'ๅˆถ่‘—ไฝœ', u'ๅˆถ่‘—ๅ': u'ๅˆถ่‘—ๅ', u'ๅˆถ่‘—ๆ›ธ': u'ๅˆถ่‘—ๆ›ธ', u'ๅˆถ่‘—็จฑ': u'ๅˆถ่‘—็จฑ', u'ๅˆถ่‘—่€…': u'ๅˆถ่‘—่€…', u'ๅˆถ่‘—่ฟฐ': u'ๅˆถ่‘—่ฟฐ', u'ๅˆถ่‘—้Œ„': u'ๅˆถ่‘—้Œ„', u'ๅˆป่‘—': u'ๅˆป็€', u'ๅˆป่‘—ไฝœ': u'ๅˆป่‘—ไฝœ', u'ๅˆป่‘—ๅ': u'ๅˆป่‘—ๅ', u'ๅˆป่‘—ๆ›ธ': u'ๅˆป่‘—ๆ›ธ', u'ๅˆป่‘—็จฑ': u'ๅˆป่‘—็จฑ', u'ๅˆป่‘—่€…': u'ๅˆป่‘—่€…', u'ๅˆป่‘—่ฟฐ': u'ๅˆป่‘—่ฟฐ', u'ๅˆป่‘—้Œ„': u'ๅˆป่‘—้Œ„', u'่ฟฆ็ด': u'ๅŠ ็ด', u'ๅŠ ๅฝญ': u'ๅŠ ่“ฌ', u'ๅŠชๅŠ›่‘—': u'ๅŠชๅŠ›็€', u'ๅŠชๅŠ›่‘—ไฝœ': 
u'ๅŠชๅŠ›่‘—ไฝœ', u'ๅŠชๅŠ›่‘—ๅ': u'ๅŠชๅŠ›่‘—ๅ', u'ๅŠชๅŠ›่‘—ๆ›ธ': u'ๅŠชๅŠ›่‘—ๆ›ธ', u'ๅŠชๅŠ›่‘—็จฑ': u'ๅŠชๅŠ›่‘—็จฑ', u'ๅŠชๅŠ›่‘—่€…': u'ๅŠชๅŠ›่‘—่€…', u'ๅŠชๅŠ›่‘—่ฟฐ': u'ๅŠชๅŠ›่‘—่ฟฐ', u'ๅŠชๅŠ›่‘—้Œ„': u'ๅŠชๅŠ›่‘—้Œ„', u'ๅŠช่‘—': u'ๅŠช็€', u'ๅŠช่‘—ไฝœ': u'ๅŠช่‘—ไฝœ', u'ๅŠช่‘—ๅ': u'ๅŠช่‘—ๅ', u'ๅŠช่‘—ๆ›ธ': u'ๅŠช่‘—ๆ›ธ', u'ๅŠช่‘—็จฑ': u'ๅŠช่‘—็จฑ', u'ๅŠช่‘—่€…': u'ๅŠช่‘—่€…', u'ๅŠช่‘—่ฟฐ': u'ๅŠช่‘—่ฟฐ', u'ๅŠช่‘—้Œ„': u'ๅŠช่‘—้Œ„', u'ๅ‹•่‘—': u'ๅ‹•็€', u'ๅ‹•่‘—ไฝœ': u'ๅ‹•่‘—ไฝœ', u'ๅ‹•่‘—ๅ': u'ๅ‹•่‘—ๅ', u'ๅ‹•่‘—ๆ›ธ': u'ๅ‹•่‘—ๆ›ธ', u'ๅ‹•่‘—็จฑ': u'ๅ‹•่‘—็จฑ', u'ๅ‹•่‘—่€…': u'ๅ‹•่‘—่€…', u'ๅ‹•่‘—่ฟฐ': u'ๅ‹•่‘—่ฟฐ', u'ๅ‹•่‘—้Œ„': u'ๅ‹•่‘—้Œ„', u'ๅŒป้™ข้‡Œ': u'ๅŒป้™ข่ฃ', u'ๆณขๆœญ้‚ฃ': u'ๅš่Œจ็“ฆ็ด', u'็ๅฆฎๅผ—ยทๅกๆ™ฎ้‡Œไบš่’‚': u'ๅกไฝฉ้›…่’‚', u'ๅฐ่‘—': u'ๅฐ็€', u'ๅฐ่‘—ไฝœ': u'ๅฐ่‘—ไฝœ', u'ๅฐ่‘—ๅ': u'ๅฐ่‘—ๅ', u'ๅฐ่‘—ๆ›ธ': u'ๅฐ่‘—ๆ›ธ', u'ๅฐ่‘—็จฑ': u'ๅฐ่‘—็จฑ', u'ๅฐ่‘—่€…': u'ๅฐ่‘—่€…', u'ๅฐ่‘—่ฟฐ': u'ๅฐ่‘—่ฟฐ', u'ๅฐ่‘—้Œ„': u'ๅฐ่‘—้Œ„', u'็“œๅœฐ้ฆฌๆ‹‰': u'ๅฑๅœฐ้ฆฌๆ‹‰', u'ๆณก้บต': u'ๅณ้ฃŸ้บต', u'ๆ–นไพฟ้ข': u'ๅณ้ฃŸ้บต', u'ๅฟซ้€Ÿ้ข': u'ๅณ้ฃŸ้บต', u'้€Ÿ้ฃŸ้บต': u'ๅณ้ฃŸ้บต', u'ๅŽ„็“œๅคš': u'ๅŽ„็“œๅคš็ˆพ', u'ๅŽ„็“œๅคš็ˆพ': u'ๅŽ„็“œๅคš็ˆพ', u'ๅŽ„็“œๅคšๅฐ”': u'ๅŽ„็“œๅคš็ˆพ', u'ๅŽ„ๅˆฉๅž‚ไบž': u'ๅŽ„็ซ‹็‰น้‡Œไบž', u'ๅŽป่‘—': u'ๅŽป็€', u'ๅŽป่‘—ไฝœ': u'ๅŽป่‘—ไฝœ', u'ๅŽป่‘—ๅ': u'ๅŽป่‘—ๅ', u'ๅŽป่‘—ๆ›ธ': u'ๅŽป่‘—ๆ›ธ', u'ๅŽป่‘—็จฑ': u'ๅŽป่‘—็จฑ', u'ๅŽป่‘—่€…': u'ๅŽป่‘—่€…', u'ๅŽป่‘—่ฟฐ': u'ๅŽป่‘—่ฟฐ', u'ๅŽป่‘—้Œ„': u'ๅŽป่‘—้Œ„', u'ๅ—่‘—': u'ๅ—็€', u'ๅ—่‘—ไฝœ': u'ๅ—่‘—ไฝœ', u'ๅ—่‘—ๅ': u'ๅ—่‘—ๅ', u'ๅ—่‘—ๆ›ธ': u'ๅ—่‘—ๆ›ธ', u'ๅ—่‘—็จฑ': u'ๅ—่‘—็จฑ', u'ๅ—่‘—่€…': u'ๅ—่‘—่€…', u'ๅ—่‘—่ฟฐ': u'ๅ—่‘—่ฟฐ', u'ๅ—่‘—้Œ„': u'ๅ—่‘—้Œ„', u'ๅซ่‘—': u'ๅซ็€', u'ๅซ่‘—ไฝœ': u'ๅซ่‘—ไฝœ', u'ๅซ่‘—ๅ': u'ๅซ่‘—ๅ', u'ๅซ่‘—ๆ›ธ': u'ๅซ่‘—ๆ›ธ', u'ๅซ่‘—็จฑ': u'ๅซ่‘—็จฑ', u'ๅซ่‘—่€…': u'ๅซ่‘—่€…', u'ๅซ่‘—่ฟฐ': u'ๅซ่‘—่ฟฐ', u'ๅซ่‘—้Œ„': u'ๅซ่‘—้Œ„', u'ๅฑๅ’': u'ๅฑๅ’ค', u'ๅฑๅ’ค': u'ๅฑๅ’ค', u'ๅƒไธ่‘—': u'ๅƒไธ็€', 
u'ๅƒๅพ—่‘—': u'ๅƒๅพ—็€', u'ๅƒ่‘—': u'ๅƒ็€', u'ๅ‰ๅธƒๅœฐ': u'ๅ‰ๅธƒๅ ค', u'ๅ‘่‘—': u'ๅ‘็€', u'ๅ‘่‘—ไฝœ': u'ๅ‘่‘—ไฝœ', u'ๅ‘่‘—ๅ': u'ๅ‘่‘—ๅ', u'ๅ‘่‘—ๆ›ธ': u'ๅ‘่‘—ๆ›ธ', u'ๅ‘่‘—็จฑ': u'ๅ‘่‘—็จฑ', u'ๅ‘่‘—่€…': u'ๅ‘่‘—่€…', u'ๅ‘่‘—่ฟฐ': u'ๅ‘่‘—่ฟฐ', u'ๅ‘่‘—้Œ„': u'ๅ‘่‘—้Œ„', u'ๅซ่‘—': u'ๅซ็€', u'ๅซ่‘—ไฝœ': u'ๅซ่‘—ไฝœ', u'ๅซ่‘—ๅ': u'ๅซ่‘—ๅ', u'ๅซ่‘—ๆ›ธ': u'ๅซ่‘—ๆ›ธ', u'ๅซ่‘—็จฑ': u'ๅซ่‘—็จฑ', u'ๅซ่‘—่€…': u'ๅซ่‘—่€…', u'ๅซ่‘—่ฟฐ': u'ๅซ่‘—่ฟฐ', u'ๅซ่‘—้Œ„': u'ๅซ่‘—้Œ„', u'ๅน่‘—': u'ๅน็€', u'ๅน่‘—ไฝœ': u'ๅน่‘—ไฝœ', u'ๅน่‘—ๅ': u'ๅน่‘—ๅ', u'ๅน่‘—ๆ›ธ': u'ๅน่‘—ๆ›ธ', u'ๅน่‘—็จฑ': u'ๅน่‘—็จฑ', u'ๅน่‘—่€…': u'ๅน่‘—่€…', u'ๅน่‘—่ฟฐ': u'ๅน่‘—่ฟฐ', u'ๅน่‘—้Œ„': u'ๅน่‘—้Œ„', u'ๅ‘ณ่‘—': u'ๅ‘ณ็€', u'ๅ‘ณ่‘—ไฝœ': u'ๅ‘ณ่‘—ไฝœ', u'ๅ‘ณ่‘—ๅ': u'ๅ‘ณ่‘—ๅ', u'ๅ‘ณ่‘—ๆ›ธ': u'ๅ‘ณ่‘—ๆ›ธ', u'ๅ‘ณ่‘—็จฑ': u'ๅ‘ณ่‘—็จฑ', u'ๅ‘ณ่‘—่€…': u'ๅ‘ณ่‘—่€…', u'ๅ‘ณ่‘—่ฟฐ': u'ๅ‘ณ่‘—่ฟฐ', u'ๅ‘ณ่‘—้Œ„': u'ๅ‘ณ่‘—้Œ„', u'ๅ’ค': u'ๅ’ค', u'ๅ“ฅๆ–ฏๅคง้ปŽๅŠ ': u'ๅ“ฅๆ–ฏ้”้ปŽๅŠ ', u'ๅ“ญ่‘—': u'ๅ“ญ็€', u'ๅ“ญ่‘—ไฝœ': u'ๅ“ญ่‘—ไฝœ', u'ๅ“ญ่‘—ๅ': u'ๅ“ญ่‘—ๅ', u'ๅ“ญ่‘—ๆ›ธ': u'ๅ“ญ่‘—ๆ›ธ', u'ๅ“ญ่‘—็จฑ': u'ๅ“ญ่‘—็จฑ', u'ๅ“ญ่‘—่€…': u'ๅ“ญ่‘—่€…', u'ๅ“ญ่‘—่ฟฐ': u'ๅ“ญ่‘—่ฟฐ', u'ๅ“ญ่‘—้Œ„': u'ๅ“ญ่‘—้Œ„', u'ๅ”ฑ่‘—': u'ๅ”ฑ็€', u'ๅ”ฑ่‘—ไฝœ': u'ๅ”ฑ่‘—ไฝœ', u'ๅ”ฑ่‘—ๅ': u'ๅ”ฑ่‘—ๅ', u'ๅ”ฑ่‘—ๆ›ธ': u'ๅ”ฑ่‘—ๆ›ธ', u'ๅ”ฑ่‘—็จฑ': u'ๅ”ฑ่‘—็จฑ', u'ๅ”ฑ่‘—่€…': u'ๅ”ฑ่‘—่€…', u'ๅ”ฑ่‘—่ฟฐ': u'ๅ”ฑ่‘—่ฟฐ', u'ๅ”ฑ่‘—้Œ„': u'ๅ”ฑ่‘—้Œ„', u'ๅ–่‘—': u'ๅ–็€', u'ๅ–่‘—ไฝœ': u'ๅ–่‘—ไฝœ', u'ๅ–่‘—ๅ': u'ๅ–่‘—ๅ', u'ๅ–่‘—ๆ›ธ': u'ๅ–่‘—ๆ›ธ', u'ๅ–่‘—็จฑ': u'ๅ–่‘—็จฑ', u'ๅ–่‘—่€…': u'ๅ–่‘—่€…', u'ๅ–่‘—่ฟฐ': u'ๅ–่‘—่ฟฐ', u'ๅ–่‘—้Œ„': u'ๅ–่‘—้Œ„', u'่‡ช่กŒ่ฝฆ': u'ๅ–ฎ่ปŠ', u'ๅ—…ไธ่‘—': u'ๅ—…ไธ็€', u'ๅ—…ๅพ—่‘—': u'ๅ—…ๅพ—็€', u'ๅ—…่‘—': u'ๅ—…็€', u'ๅ˜ด้‡Œ': u'ๅ˜ด่ฃ', u'ๅ˜ด่ฃก': u'ๅ˜ด่ฃ', u'ๅšท่‘—': u'ๅšท็€', u'ๅšท่‘—ไฝœ': u'ๅšท่‘—ไฝœ', u'ๅšท่‘—ๅ': u'ๅšท่‘—ๅ', u'ๅšท่‘—ๆ›ธ': u'ๅšท่‘—ๆ›ธ', u'ๅšท่‘—็จฑ': u'ๅšท่‘—็จฑ', u'ๅšท่‘—่€…': u'ๅšท่‘—่€…', u'ๅšท่‘—่ฟฐ': u'ๅšท่‘—่ฟฐ', 
u'ๅšท่‘—้Œ„': u'ๅšท่‘—้Œ„', u'ๅ› ่‘—': u'ๅ› ็€', u'ๅ› ่‘—ไฝœ': u'ๅ› ่‘—ไฝœ', u'ๅ› ่‘—ๅ': u'ๅ› ่‘—ๅ', u'ๅ› ่‘—ๆ›ธ': u'ๅ› ่‘—ๆ›ธ', u'ๅ› ่‘—็จฑ': u'ๅ› ่‘—็จฑ', u'ๅ› ่‘—่€…': u'ๅ› ่‘—่€…', u'ๅ› ่‘—่ฟฐ': u'ๅ› ่‘—่ฟฐ', u'ๅ› ่‘—้Œ„': u'ๅ› ่‘—้Œ„', u'ๅ›ฐ่‘—': u'ๅ›ฐ็€', u'ๅ›ฐ่‘—ไฝœ': u'ๅ›ฐ่‘—ไฝœ', u'ๅ›ฐ่‘—ๅ': u'ๅ›ฐ่‘—ๅ', u'ๅ›ฐ่‘—ๆ›ธ': u'ๅ›ฐ่‘—ๆ›ธ', u'ๅ›ฐ่‘—็จฑ': u'ๅ›ฐ่‘—็จฑ', u'ๅ›ฐ่‘—่€…': u'ๅ›ฐ่‘—่€…', u'ๅ›ฐ่‘—่ฟฐ': u'ๅ›ฐ่‘—่ฟฐ', u'ๅ›ฐ่‘—้Œ„': u'ๅ›ฐ่‘—้Œ„', u'ๅœ่‘—': u'ๅœ็€', u'ๅœ่‘—ไฝœ': u'ๅœ่‘—ไฝœ', u'ๅœ่‘—ๅ': u'ๅœ่‘—ๅ', u'ๅœ่‘—ๆ›ธ': u'ๅœ่‘—ๆ›ธ', u'ๅœ่‘—็จฑ': u'ๅœ่‘—็จฑ', u'ๅœ่‘—่€…': u'ๅœ่‘—่€…', u'ๅœ่‘—่ฟฐ': u'ๅœ่‘—่ฟฐ', u'ๅœ่‘—้Œ„': u'ๅœ่‘—้Œ„', u'ๅ็“ฆ้ญฏ': u'ๅœ–็“ฆ็›ง', u'ๅœŸ่ฑ†็ถฒ': u'ๅœŸ่ฑ†็ถฒ', u'ๅœŸ่ฑ†็ฝ‘': u'ๅœŸ่ฑ†็ถฒ', u'ๅœจ่‘—': u'ๅœจ็€', u'ๅœจ่‘—ไฝœ': u'ๅœจ่‘—ไฝœ', u'ๅœจ่‘—ๅ': u'ๅœจ่‘—ๅ', u'ๅœจ่‘—ๆ›ธ': u'ๅœจ่‘—ๆ›ธ', u'ๅœจ่‘—็จฑ': u'ๅœจ่‘—็จฑ', u'ๅœจ่‘—่€…': u'ๅœจ่‘—่€…', u'ๅœจ่‘—่ฟฐ': u'ๅœจ่‘—่ฟฐ', u'ๅœจ่‘—้Œ„': u'ๅœจ่‘—้Œ„', u'่“‹ไบž้‚ฃ': u'ๅœญไบž้‚ฃ', u'ๅ่‘—': u'ๅ็€', u'ๅ่‘—ไฝœ': u'ๅ่‘—ไฝœ', u'ๅ่‘—ๅ': u'ๅ่‘—ๅ', u'ๅ่‘—ๆ›ธ': u'ๅ่‘—ๆ›ธ', u'ๅ่‘—็จฑ': u'ๅ่‘—็จฑ', u'ๅ่‘—่€…': u'ๅ่‘—่€…', u'ๅ่‘—่ฟฐ': u'ๅ่‘—่ฟฐ', u'ๅ่‘—้Œ„': u'ๅ่‘—้Œ„', u'ๅฆๅฐšๅฐผไบž': u'ๅฆๆก‘ๅฐผไบž', u'่กฃ็ดขๅŒนไบž': u'ๅŸƒๅกžไฟ„ๆฏ”ไบž', u'่กฃ็ดขๆฏ”ไบž': u'ๅŸƒๅกžไฟ„ๆฏ”ไบž', u'ๅ‰้‡Œๅทดๆ–ฏ': u'ๅŸบ้‡Œๅทดๆ–ฏ', u'ๅกžๆ™ฎๅ‹’ๆ–ฏ': u'ๅกžๆตฆ่ทฏๆ–ฏ', u'ๅกžๅธญ็ˆพ': u'ๅกž่ˆŒ็ˆพ', u'ๅฃ“่‘—': u'ๅฃ“็€', u'ๅฃ“่‘—ไฝœ': u'ๅฃ“่‘—ไฝœ', u'ๅฃ“่‘—ๅ': u'ๅฃ“่‘—ๅ', u'ๅฃ“่‘—ๆ›ธ': u'ๅฃ“่‘—ๆ›ธ', u'ๅฃ“่‘—็จฑ': u'ๅฃ“่‘—็จฑ', u'ๅฃ“่‘—่€…': u'ๅฃ“่‘—่€…', u'ๅฃ“่‘—่ฟฐ': u'ๅฃ“่‘—่ฟฐ', u'ๅฃ“่‘—้Œ„': u'ๅฃ“่‘—้Œ„', u'ๅคๅคฉ้‡Œ': u'ๅคๅคฉ่ฃ', u'ๅคๅคฉ่ฃก': u'ๅคๅคฉ่ฃ', u'ๅคๆ—ฅ้‡Œ': u'ๅคๆ—ฅ่ฃ', u'ๅคๆ—ฅ่ฃก': u'ๅคๆ—ฅ่ฃ', u'ๅคข่‘—': u'ๅคข็€', u'ๅคข่‘—ไฝœ': u'ๅคข่‘—ไฝœ', u'ๅคข่‘—ๅ': u'ๅคข่‘—ๅ', u'ๅคข่‘—ๆ›ธ': u'ๅคข่‘—ๆ›ธ', u'ๅคข่‘—็จฑ': u'ๅคข่‘—็จฑ', u'ๅคข่‘—่€…': u'ๅคข่‘—่€…', u'ๅคข่‘—่ฟฐ': u'ๅคข่‘—่ฟฐ', u'ๅคข่‘—้Œ„': u'ๅคข่‘—้Œ„', 
u'ๅคงๅซยท่ดๅ…‹ๆฑ‰ๅง†': u'ๅคง่ก›็ขงๅ’ธ', u'ๅคพ่‘—': u'ๅคพ็€', u'ๅคพ่‘—ไฝœ': u'ๅคพ่‘—ไฝœ', u'ๅคพ่‘—ๅ': u'ๅคพ่‘—ๅ', u'ๅคพ่‘—ๆ›ธ': u'ๅคพ่‘—ๆ›ธ', u'ๅคพ่‘—็จฑ': u'ๅคพ่‘—็จฑ', u'ๅคพ่‘—่€…': u'ๅคพ่‘—่€…', u'ๅคพ่‘—่ฟฐ': u'ๅคพ่‘—่ฟฐ', u'ๅคพ่‘—้Œ„': u'ๅคพ่‘—้Œ„', u'ๅญค่‘—': u'ๅญค็€', u'ๅญค่‘—ไฝœ': u'ๅญค่‘—ไฝœ', u'ๅญค่‘—ๅ': u'ๅญค่‘—ๅ', u'ๅญค่‘—ๆ›ธ': u'ๅญค่‘—ๆ›ธ', u'ๅญค่‘—็จฑ': u'ๅญค่‘—็จฑ', u'ๅญค่‘—่€…': u'ๅญค่‘—่€…', u'ๅญค่‘—่ฟฐ': u'ๅญค่‘—่ฟฐ', u'ๅญค่‘—้Œ„': u'ๅญค่‘—้Œ„', u'ๅญธ่‘—': u'ๅญธ็€', u'ๅญธ่‘—ไฝœ': u'ๅญธ่‘—ไฝœ', u'ๅญธ่‘—ๅ': u'ๅญธ่‘—ๅ', u'ๅญธ่‘—ๆ›ธ': u'ๅญธ่‘—ๆ›ธ', u'ๅญธ่‘—็จฑ': u'ๅญธ่‘—็จฑ', u'ๅญธ่‘—่€…': u'ๅญธ่‘—่€…', u'ๅญธ่‘—่ฟฐ': u'ๅญธ่‘—่ฟฐ', u'ๅญธ่‘—้Œ„': u'ๅญธ่‘—้Œ„', u'ๅญธ่ฃก': u'ๅญธ่ฃ', u'ๅญฆ้‡Œ': u'ๅญธ่ฃ', u'ๅฎˆ่‘—': u'ๅฎˆ็€', u'ๅฎˆ่‘—ไฝœ': u'ๅฎˆ่‘—ไฝœ', u'ๅฎˆ่‘—ๅ': u'ๅฎˆ่‘—ๅ', u'ๅฎˆ่‘—ๆ›ธ': u'ๅฎˆ่‘—ๆ›ธ', u'ๅฎˆ่‘—็จฑ': u'ๅฎˆ่‘—็จฑ', u'ๅฎˆ่‘—่€…': u'ๅฎˆ่‘—่€…', u'ๅฎˆ่‘—่ฟฐ': u'ๅฎˆ่‘—่ฟฐ', u'ๅฎˆ่‘—้Œ„': u'ๅฎˆ่‘—้Œ„', u'ๅฎ‰ๅœฐๅกๅŠๅทดๅธƒ้”': u'ๅฎ‰ๆ็“œๅ’Œๅทดๅธƒ้”', u'ๅฎš่‘—': u'ๅฎš็€', u'ๅฎš่‘—ไฝœ': u'ๅฎš่‘—ไฝœ', u'ๅฎš่‘—ๅ': u'ๅฎš่‘—ๅ', u'ๅฎš่‘—ๆ›ธ': u'ๅฎš่‘—ๆ›ธ', u'ๅฎš่‘—็จฑ': u'ๅฎš่‘—็จฑ', u'ๅฎš่‘—่€…': u'ๅฎš่‘—่€…', u'ๅฎš่‘—่ฟฐ': u'ๅฎš่‘—่ฟฐ', u'ๅฎš่‘—้Œ„': u'ๅฎš่‘—้Œ„', u'ๆฒƒๅฐ“ๆฒƒ': u'ๅฏŒ่ฑช', u'ๅฏ’ๅ‡่ฃก': u'ๅฏ’ๅ‡่ฃ', u'ๅฏ’ๅ‡้‡Œ': u'ๅฏ’ๅ‡่ฃ', u'ๅฏซ่‘—': u'ๅฏซ็€', u'ๅฏซ่‘—ไฝœ': u'ๅฏซ่‘—ไฝœ', u'ๅฏซ่‘—ๅ': u'ๅฏซ่‘—ๅ', u'ๅฏซ่‘—ๆ›ธ': u'ๅฏซ่‘—ๆ›ธ', u'ๅฏซ่‘—็จฑ': u'ๅฏซ่‘—็จฑ', u'ๅฏซ่‘—่€…': u'ๅฏซ่‘—่€…', u'ๅฏซ่‘—่ฟฐ': u'ๅฏซ่‘—่ฟฐ', u'ๅฏซ่‘—้Œ„': u'ๅฏซ่‘—้Œ„', u'ไธ“่พ‘้‡Œ': u'ๅฐˆ่ผฏ่ฃ', u'ๅฐˆ่ผฏ่ฃก': u'ๅฐˆ่ผฏ่ฃ', u'ๅฐ‹่‘—': u'ๅฐ‹็€', u'ๅฐ‹่‘—ไฝœ': u'ๅฐ‹่‘—ไฝœ', u'ๅฐ‹่‘—ๅ': u'ๅฐ‹่‘—ๅ', u'ๅฐ‹่‘—ๆ›ธ': u'ๅฐ‹่‘—ๆ›ธ', u'ๅฐ‹่‘—็จฑ': u'ๅฐ‹่‘—็จฑ', u'ๅฐ‹่‘—่€…': u'ๅฐ‹่‘—่€…', u'ๅฐ‹่‘—่ฟฐ': u'ๅฐ‹่‘—่ฟฐ', u'ๅฐ‹่‘—้Œ„': u'ๅฐ‹่‘—้Œ„', u'ๅฐ่‘—': u'ๅฐ็€', u'ๅฐ่‘—ไฝœ': u'ๅฐ่‘—ไฝœ', u'ๅฐ่‘—ๅ': u'ๅฐ่‘—ๅ', u'ๅฐ่‘—ๆ›ธ': u'ๅฐ่‘—ๆ›ธ', u'ๅฐ่‘—็จฑ': u'ๅฐ่‘—็จฑ', u'ๅฐ่‘—่€…': u'ๅฐ่‘—่€…', u'ๅฐ่‘—่ฟฐ': u'ๅฐ่‘—่ฟฐ', 
u'ๅฐ่‘—้Œ„': u'ๅฐ่‘—้Œ„', u'ๅฅˆๅŠๅˆฉไบž': u'ๅฐผๆ—ฅๅˆฉไบž', u'ๅฐผๆ—ฅๅˆฉไบš': u'ๅฐผๆ—ฅๅˆฉไบž', u'ๅฐผๆ—ฅๅˆฉไบž': u'ๅฐผๆ—ฅๅˆฉไบž', u'ๅฐผๆ—ฅๅฐ”': u'ๅฐผๆ—ฅ็ˆพ', u'ๅฐผๆ—ฅ็ˆพ': u'ๅฐผๆ—ฅ็ˆพ', u'ๅฐผๆ—ฅ': u'ๅฐผๆ—ฅ็ˆพ', u'ๅฑ•่‘—': u'ๅฑ•็€', u'ๅฑ•่‘—ไฝœ': u'ๅฑ•่‘—ไฝœ', u'ๅฑ•่‘—ๅ': u'ๅฑ•่‘—ๅ', u'ๅฑ•่‘—ๆ›ธ': u'ๅฑ•่‘—ๆ›ธ', u'ๅฑ•่‘—็จฑ': u'ๅฑ•่‘—็จฑ', u'ๅฑ•่‘—่€…': u'ๅฑ•่‘—่€…', u'ๅฑ•่‘—่ฟฐ': u'ๅฑ•่‘—่ฟฐ', u'ๅฑ•่‘—้Œ„': u'ๅฑ•่‘—้Œ„', u'ๅฑฑๆดž่ฃก': u'ๅฑฑๆดž่ฃ', u'ๅฑฑๆดž้‡Œ': u'ๅฑฑๆดž่ฃ', u'็”˜ๆฏ”ไบž': u'ๅฒกๆฏ”ไบž', u'ๅ…ฌ่ปŠ': u'ๅทดๅฃซ', u'ๅทด่ฒๅคš': u'ๅทดๅทดๅคšๆ–ฏ', u'ๅทดๅธƒไบž็ดๅนพๅ…งไบž': u'ๅทดๅธƒไบžๆ–ฐ็•ฟๅ…งไบž', u'ๅธƒๅ‰็ดๆณ•็ดข': u'ๅธƒๅŸบ็ดๆณ•็ดข', u'ๅธƒๅธŒไบž': u'ๅธƒๅธŒไบž', u'ๅธƒๅธŒไบš': u'ๅธƒๅธŒไบž', u'ๅธƒๅธŒ': u'ๅธƒๆฎŠ', u'ๅธƒไป€': u'ๅธƒๆฎŠ', u'่’ฒ้š†ๅœฐ': u'ๅธƒ้š†่ฟช', u'ๅธŒ็‰นๅ‹’': u'ๅธŒ็‰นๆ‹‰', u'ๅธ•ๅŠณ': u'ๅธ›็‰', u'ๅธถ่‘—': u'ๅธถ็€', u'ๅธถ่‘—ไฝœ': u'ๅธถ่‘—ไฝœ', u'ๅธถ่‘—ๅ': u'ๅธถ่‘—ๅ', u'ๅธถ่‘—ๆ›ธ': u'ๅธถ่‘—ๆ›ธ', u'ๅธถ่‘—็จฑ': u'ๅธถ่‘—็จฑ', u'ๅธถ่‘—่€…': u'ๅธถ่‘—่€…', u'ๅธถ่‘—่ฟฐ': u'ๅธถ่‘—่ฟฐ', u'ๅธถ่‘—้Œ„': u'ๅธถ่‘—้Œ„', u'ๅนซ่‘—': u'ๅนซ็€', u'ๅนซ่‘—ไฝœ': u'ๅนซ่‘—ไฝœ', u'ๅนซ่‘—ๅ': u'ๅนซ่‘—ๅ', u'ๅนซ่‘—ๆ›ธ': u'ๅนซ่‘—ๆ›ธ', u'ๅนซ่‘—็จฑ': u'ๅนซ่‘—็จฑ', u'ๅนซ่‘—่€…': u'ๅนซ่‘—่€…', u'ๅนซ่‘—่ฟฐ': u'ๅนซ่‘—่ฟฐ', u'ๅนซ่‘—้Œ„': u'ๅนซ่‘—้Œ„', u'ๅนฒ็€ๆ€ฅ': u'ๅนฒ็€ๆ€ฅ', u'่ณ“ๅฃซ': u'ๅนณๆฒป', u'ๅนดไปฃ้‡Œ': u'ๅนดไปฃ่ฃ', u'ๅนดไปฃ่ฃก': u'ๅนดไปฃ่ฃ', u'ๅนน่‘—': u'ๅนน็€', u'ๅนฒ็€': u'ๅนน็€', u'ๅนพๅ…งไบžๆฏ”็ดข': u'ๅนพๅ…งไบžๆฏ”็ดน', u'ๅบท่‘—': u'ๅบท็€', u'ๅบท่‘—ไฝœ': u'ๅบท่‘—ไฝœ', u'ๅบท่‘—ๅ': u'ๅบท่‘—ๅ', u'ๅบท่‘—ๆ›ธ': u'ๅบท่‘—ๆ›ธ', u'ๅบท่‘—็จฑ': u'ๅบท่‘—็จฑ', u'ๅบท่‘—่€…': u'ๅบท่‘—่€…', u'ๅบท่‘—่ฟฐ': u'ๅบท่‘—่ฟฐ', u'ๅบท่‘—้Œ„': u'ๅบท่‘—้Œ„', u'ๅพ…่‘—': u'ๅพ…็€', u'ๅพ…่‘—ไฝœ': u'ๅพ…่‘—ไฝœ', u'ๅพ…่‘—ๅ': u'ๅพ…่‘—ๅ', u'ๅพ…่‘—ๆ›ธ': u'ๅพ…่‘—ๆ›ธ', u'ๅพ…่‘—็จฑ': u'ๅพ…่‘—็จฑ', u'ๅพ…่‘—่€…': u'ๅพ…่‘—่€…', u'ๅพ…่‘—่ฟฐ': u'ๅพ…่‘—่ฟฐ', u'ๅพ…่‘—้Œ„': u'ๅพ…่‘—้Œ„', u'ๅพ—่‘—': u'ๅพ—็€', u'ๅพ—่‘—ไฝœ': u'ๅพ—่‘—ไฝœ', u'ๅพ—่‘—ๅ': u'ๅพ—่‘—ๅ', u'ๅพ—่‘—ๆ›ธ': 
u'ๅพ—่‘—ๆ›ธ', u'ๅพ—่‘—็จฑ': u'ๅพ—่‘—็จฑ', u'ๅพ—่‘—่€…': u'ๅพ—่‘—่€…', u'ๅพ—่‘—่ฟฐ': u'ๅพ—่‘—่ฟฐ', u'ๅพ—่‘—้Œ„': u'ๅพ—่‘—้Œ„', u'ๅพช่‘—': u'ๅพช็€', u'ๅพช่‘—ไฝœ': u'ๅพช่‘—ไฝœ', u'ๅพช่‘—ๅ': u'ๅพช่‘—ๅ', u'ๅพช่‘—ๆ›ธ': u'ๅพช่‘—ๆ›ธ', u'ๅพช่‘—็จฑ': u'ๅพช่‘—็จฑ', u'ๅพช่‘—่€…': u'ๅพช่‘—่€…', u'ๅพช่‘—่ฟฐ': u'ๅพช่‘—่ฟฐ', u'ๅพช่‘—้Œ„': u'ๅพช่‘—้Œ„', u'ๅฟƒ่‘—': u'ๅฟƒ็€', u'ๅฟƒ็นซ่‘—': u'ๅฟƒ็นซ็€', u'ๅฟƒ่‘—ไฝœ': u'ๅฟƒ่‘—ไฝœ', u'ๅฟƒ่‘—ๅ': u'ๅฟƒ่‘—ๅ', u'ๅฟƒ่‘—ๆ›ธ': u'ๅฟƒ่‘—ๆ›ธ', u'ๅฟƒ่‘—็จฑ': u'ๅฟƒ่‘—็จฑ', u'ๅฟƒ่‘—่€…': u'ๅฟƒ่‘—่€…', u'ๅฟƒ่‘—่ฟฐ': u'ๅฟƒ่‘—่ฟฐ', u'ๅฟƒ่‘—้Œ„': u'ๅฟƒ่‘—้Œ„', u'ๅฟƒ่ฃก': u'ๅฟƒ่ฃ', u'ๅฟƒ้‡Œ': u'ๅฟƒ่ฃ', u'ๅฟ่‘—': u'ๅฟ็€', u'ๅฟ่‘—ไฝœ': u'ๅฟ่‘—ไฝœ', u'ๅฟ่‘—ๅ': u'ๅฟ่‘—ๅ', u'ๅฟ่‘—ๆ›ธ': u'ๅฟ่‘—ๆ›ธ', u'ๅฟ่‘—็จฑ': u'ๅฟ่‘—็จฑ', u'ๅฟ่‘—่€…': u'ๅฟ่‘—่€…', u'ๅฟ่‘—่ฟฐ': u'ๅฟ่‘—่ฟฐ', u'ๅฟ่‘—้Œ„': u'ๅฟ่‘—้Œ„', u'ๅฟ—่‘—': u'ๅฟ—็€', u'ๅฟ—่‘—ไฝœ': u'ๅฟ—่‘—ไฝœ', u'ๅฟ—่‘—ๅ': u'ๅฟ—่‘—ๅ', u'ๅฟ—่‘—ๆ›ธ': u'ๅฟ—่‘—ๆ›ธ', u'ๅฟ—่‘—็จฑ': u'ๅฟ—่‘—็จฑ', u'ๅฟ—่‘—่€…': u'ๅฟ—่‘—่€…', u'ๅฟ—่‘—่ฟฐ': u'ๅฟ—่‘—่ฟฐ', u'ๅฟ—่‘—้Œ„': u'ๅฟ—่‘—้Œ„', u'ๅฟ™่‘—': u'ๅฟ™็€', u'ๅฟ™่‘—ไฝœ': u'ๅฟ™่‘—ไฝœ', u'ๅฟ™่‘—ๅ': u'ๅฟ™่‘—ๅ', u'ๅฟ™่‘—ๆ›ธ': u'ๅฟ™่‘—ๆ›ธ', u'ๅฟ™่‘—็จฑ': u'ๅฟ™่‘—็จฑ', u'ๅฟ™่‘—่€…': u'ๅฟ™่‘—่€…', u'ๅฟ™่‘—่ฟฐ': u'ๅฟ™่‘—่ฟฐ', u'ๅฟ™่‘—้Œ„': u'ๅฟ™่‘—้Œ„', u'ๆ€ฅ่‘—': u'ๆ€ฅ็€', u'ๆ€ฅ่‘—ไฝœ': u'ๆ€ฅ่‘—ไฝœ', u'ๆ€ฅ่‘—ๅ': u'ๆ€ฅ่‘—ๅ', u'ๆ€ฅ่‘—ๆ›ธ': u'ๆ€ฅ่‘—ๆ›ธ', u'ๆ€ฅ่‘—็จฑ': u'ๆ€ฅ่‘—็จฑ', u'ๆ€ฅ่‘—่€…': u'ๆ€ฅ่‘—่€…', u'ๆ€ฅ่‘—่ฟฐ': u'ๆ€ฅ่‘—่ฟฐ', u'ๆ€ฅ่‘—้Œ„': u'ๆ€ฅ่‘—้Œ„', u'ๆ€ง่‘—': u'ๆ€ง็€', u'ๆ€ง่‘—ไฝœ': u'ๆ€ง่‘—ไฝœ', u'ๆ€ง่‘—ๅ': u'ๆ€ง่‘—ๅ', u'ๆ€ง่‘—ๆ›ธ': u'ๆ€ง่‘—ๆ›ธ', u'ๆ€ง่‘—็จฑ': u'ๆ€ง่‘—็จฑ', u'ๆ€ง่‘—่€…': u'ๆ€ง่‘—่€…', u'ๆ€ง่‘—่ฟฐ': u'ๆ€ง่‘—่ฟฐ', u'ๆ€ง่‘—้Œ„': u'ๆ€ง่‘—้Œ„', u'ๆ‚ ่‘—': u'ๆ‚ ็€', u'ๆ‚ ่‘—ไฝœ': u'ๆ‚ ่‘—ไฝœ', u'ๆ‚ ่‘—ๅ': u'ๆ‚ ่‘—ๅ', u'ๆ‚ ่‘—ๆ›ธ': u'ๆ‚ ่‘—ๆ›ธ', u'ๆ‚ ่‘—็จฑ': u'ๆ‚ ่‘—็จฑ', u'ๆ‚ ่‘—่€…': u'ๆ‚ ่‘—่€…', u'ๆ‚ ่‘—่ฟฐ': u'ๆ‚ ่‘—่ฟฐ', u'ๆ‚ ่‘—้Œ„': u'ๆ‚ ่‘—้Œ„', u'ๆƒณ่ฑก': u'ๆƒณๅƒ', u'ๆƒณ่‘—': u'ๆƒณ็€', u'ๆƒณ่‘—ไฝœ': 
u'ๆƒณ่‘—ไฝœ', u'ๆƒณ่‘—ๅ': u'ๆƒณ่‘—ๅ', u'ๆƒณ่‘—ๆ›ธ': u'ๆƒณ่‘—ๆ›ธ', u'ๆƒณ่‘—็จฑ': u'ๆƒณ่‘—็จฑ', u'ๆƒณ่‘—่€…': u'ๆƒณ่‘—่€…', u'ๆƒณ่‘—่ฟฐ': u'ๆƒณ่‘—่ฟฐ', u'ๆƒณ่‘—้Œ„': u'ๆƒณ่‘—้Œ„', u'็พฉๅคงๅˆฉ': u'ๆ„ๅคงๅˆฉ', u'ๆ„›่‘—': u'ๆ„›็€', u'ๆ„›่‘—ไฝœ': u'ๆ„›่‘—ไฝœ', u'ๆ„›่‘—ๅ': u'ๆ„›่‘—ๅ', u'ๆ„›่‘—ๆ›ธ': u'ๆ„›่‘—ๆ›ธ', u'ๆ„›่‘—็จฑ': u'ๆ„›่‘—็จฑ', u'ๆ„›่‘—่€…': u'ๆ„›่‘—่€…', u'ๆ„›่‘—่ฟฐ': u'ๆ„›่‘—่ฟฐ', u'ๆ„›่‘—้Œ„': u'ๆ„›่‘—้Œ„', u'ๆ…ฃ่‘—': u'ๆ…ฃ็€', u'ๆ…ฃ่‘—ไฝœ': u'ๆ…ฃ่‘—ไฝœ', u'ๆ…ฃ่‘—ๅ': u'ๆ…ฃ่‘—ๅ', u'ๆ…ฃ่‘—ๆ›ธ': u'ๆ…ฃ่‘—ๆ›ธ', u'ๆ…ฃ่‘—็จฑ': u'ๆ…ฃ่‘—็จฑ', u'ๆ…ฃ่‘—่€…': u'ๆ…ฃ่‘—่€…', u'ๆ…ฃ่‘—่ฟฐ': u'ๆ…ฃ่‘—่ฟฐ', u'ๆ…ฃ่‘—้Œ„': u'ๆ…ฃ่‘—้Œ„', u'ๆ‡‰่‘—': u'ๆ‡‰็€', u'ๆ‡‰่‘—ไฝœ': u'ๆ‡‰่‘—ไฝœ', u'ๆ‡‰่‘—ๅ': u'ๆ‡‰่‘—ๅ', u'ๆ‡‰่‘—ๆ›ธ': u'ๆ‡‰่‘—ๆ›ธ', u'ๆ‡‰่‘—็จฑ': u'ๆ‡‰่‘—็จฑ', u'ๆ‡‰่‘—่€…': u'ๆ‡‰่‘—่€…', u'ๆ‡‰่‘—่ฟฐ': u'ๆ‡‰่‘—่ฟฐ', u'ๆ‡‰่‘—้Œ„': u'ๆ‡‰่‘—้Œ„', u'ๆ‡ท่‘—': u'ๆ‡ท็€', u'ๆ‡ท่‘—ไฝœ': u'ๆ‡ท่‘—ไฝœ', u'ๆ‡ท่‘—ๅ': u'ๆ‡ท่‘—ๅ', u'ๆ‡ท่‘—ๆ›ธ': u'ๆ‡ท่‘—ๆ›ธ', u'ๆ‡ท่‘—็จฑ': u'ๆ‡ท่‘—็จฑ', u'ๆ‡ท่‘—่€…': u'ๆ‡ท่‘—่€…', u'ๆ‡ท่‘—่ฟฐ': u'ๆ‡ท่‘—่ฟฐ', u'ๆ‡ท่‘—้Œ„': u'ๆ‡ท่‘—้Œ„', u'ๆˆ€่‘—': u'ๆˆ€็€', u'ๆˆ€่‘—ไฝœ': u'ๆˆ€่‘—ไฝœ', u'ๆˆ€่‘—ๅ': u'ๆˆ€่‘—ๅ', u'ๆˆ€่‘—ๆ›ธ': u'ๆˆ€่‘—ๆ›ธ', u'ๆˆ€่‘—็จฑ': u'ๆˆ€่‘—็จฑ', u'ๆˆ€่‘—่€…': u'ๆˆ€่‘—่€…', u'ๆˆ€่‘—่ฟฐ': u'ๆˆ€่‘—่ฟฐ', u'ๆˆ€่‘—้Œ„': u'ๆˆ€่‘—้Œ„', u'ๆˆฐ่‘—': u'ๆˆฐ็€', u'ๆˆฐ่‘—ไฝœ': u'ๆˆฐ่‘—ไฝœ', u'ๆˆฐ่‘—ๅ': u'ๆˆฐ่‘—ๅ', u'ๆˆฐ่‘—ๆ›ธ': u'ๆˆฐ่‘—ๆ›ธ', u'ๆˆฐ่‘—็จฑ': u'ๆˆฐ่‘—็จฑ', u'ๆˆฐ่‘—่€…': u'ๆˆฐ่‘—่€…', u'ๆˆฐ่‘—่ฟฐ': u'ๆˆฐ่‘—่ฟฐ', u'ๆˆฐ่‘—้Œ„': u'ๆˆฐ่‘—้Œ„', u'ๆˆฒ่ฃก': u'ๆˆฒ่ฃ', u'ๆˆ้‡Œ': u'ๆˆฒ่ฃ', u'้ป›ๅฎ‰ๅจœ': u'ๆˆดๅฎ‰ๅจœ', u'็‹„ๅฎ‰ๅจœ': u'ๆˆดๅฎ‰ๅจœ', u'ๆˆด่‘—': u'ๆˆด็€', u'ๆˆด่‘—ไฝœ': u'ๆˆด่‘—ไฝœ', u'ๆˆด่‘—ๅ': u'ๆˆด่‘—ๅ', u'ๆˆด่‘—ๆ›ธ': u'ๆˆด่‘—ๆ›ธ', u'ๆˆด่‘—็จฑ': u'ๆˆด่‘—็จฑ', u'ๆˆด่‘—่€…': u'ๆˆด่‘—่€…', u'ๆˆด่‘—่ฟฐ': u'ๆˆด่‘—่ฟฐ', u'ๆˆด่‘—้Œ„': u'ๆˆด่‘—้Œ„', u'็ดข็พ…้–€็พคๅณถ': u'ๆ‰€็พ…้–€็พคๅณถ', u'ๅˆ—ๅฐ': u'ๆ‰“ๅฐ', u'ๅฐ่กจๆฉŸ': u'ๆ‰“ๅฐๆฉŸ', u'ๆ‰“่‘—': u'ๆ‰“็€', u'ๆ‰“่‘—ไฝœ': u'ๆ‰“่‘—ไฝœ', u'ๆ‰“่‘—ๅ': 
u'ๆ‰“่‘—ๅ', u'ๆ‰“่‘—ๆ›ธ': u'ๆ‰“่‘—ๆ›ธ', u'ๆ‰“่‘—็จฑ': u'ๆ‰“่‘—็จฑ', u'ๆ‰“่‘—่€…': u'ๆ‰“่‘—่€…', u'ๆ‰“่‘—่ฟฐ': u'ๆ‰“่‘—่ฟฐ', u'ๆ‰“่‘—้Œ„': u'ๆ‰“่‘—้Œ„', u'ๆ‰›่‘—': u'ๆ‰›็€', u'ๆ‰›่‘—ไฝœ': u'ๆ‰›่‘—ไฝœ', u'ๆ‰›่‘—ๅ': u'ๆ‰›่‘—ๅ', u'ๆ‰›่‘—ๆ›ธ': u'ๆ‰›่‘—ๆ›ธ', u'ๆ‰›่‘—็จฑ': u'ๆ‰›่‘—็จฑ', u'ๆ‰›่‘—่€…': u'ๆ‰›่‘—่€…', u'ๆ‰›่‘—่ฟฐ': u'ๆ‰›่‘—่ฟฐ', u'ๆ‰›่‘—้Œ„': u'ๆ‰›่‘—้Œ„', u'ๆ‰พไธ่‘—': u'ๆ‰พไธ็€', u'ๆ‰พๅพ—่‘—': u'ๆ‰พๅพ—็€', u'ๆŠ“่‘—': u'ๆŠ“็€', u'ๆŠ“่‘—ไฝœ': u'ๆŠ“่‘—ไฝœ', u'ๆŠ“่‘—ๅ': u'ๆŠ“่‘—ๅ', u'ๆŠ“่‘—็จฑ': u'ๆŠ“่‘—็จฑ', u'ๆŠ“่‘—่€…': u'ๆŠ“่‘—่€…', u'ๆŠ“่‘—่ฟฐ': u'ๆŠ“่‘—่ฟฐ', u'ๆŠ“่‘—้Œ„': u'ๆŠ“่‘—้Œ„', u'ๆŠซ่‘—': u'ๆŠซ็€', u'ๆŠซ่‘—ไฝœ': u'ๆŠซ่‘—ไฝœ', u'ๆŠซ่‘—ๅ': u'ๆŠซ่‘—ๅ', u'ๆŠซ่‘—ๆ›ธ': u'ๆŠซ่‘—ๆ›ธ', u'ๆŠซ่‘—็จฑ': u'ๆŠซ่‘—็จฑ', u'ๆŠซ่‘—่€…': u'ๆŠซ่‘—่€…', u'ๆŠซ่‘—่ฟฐ': u'ๆŠซ่‘—่ฟฐ', u'ๆŠซ่‘—้Œ„': u'ๆŠซ่‘—้Œ„', u'ๆŠฌ่‘—': u'ๆŠฌ็€', u'ๆŠฌ่‘—ไฝœ': u'ๆŠฌ่‘—ไฝœ', u'ๆŠฌ่‘—ๅ': u'ๆŠฌ่‘—ๅ', u'ๆŠฌ่‘—็จฑ': u'ๆŠฌ่‘—็จฑ', u'ๆŠฌ่‘—่€…': u'ๆŠฌ่‘—่€…', u'ๆŠฌ่‘—่ฟฐ': u'ๆŠฌ่‘—่ฟฐ', u'ๆŠฌ่‘—้Œ„': u'ๆŠฌ่‘—้Œ„', u'ๆŠฑ่‘—': u'ๆŠฑ็€', u'ๆŠฑ่‘—ไฝœ': u'ๆŠฑ่‘—ไฝœ', u'ๆŠฑ่‘—ๅ': u'ๆŠฑ่‘—ๅ', u'ๆŠฑ่‘—็จฑ': u'ๆŠฑ่‘—็จฑ', u'ๆŠฑ่‘—่€…': u'ๆŠฑ่‘—่€…', u'ๆŠฑ่‘—่ฟฐ': u'ๆŠฑ่‘—่ฟฐ', u'ๆŠฑ่‘—้Œ„': u'ๆŠฑ่‘—้Œ„', u'ๆ‹‰่‘—': u'ๆ‹‰็€', u'ๆ‹‰่‘—ไฝœ': u'ๆ‹‰่‘—ไฝœ', u'ๆ‹‰่‘—ๅ': u'ๆ‹‰่‘—ๅ', u'ๆ‹‰่‘—ๆ›ธ': u'ๆ‹‰่‘—ๆ›ธ', u'ๆ‹‰่‘—็จฑ': u'ๆ‹‰่‘—็จฑ', u'ๆ‹‰่‘—่€…': u'ๆ‹‰่‘—่€…', u'ๆ‹‰่‘—่ฟฐ': u'ๆ‹‰่‘—่ฟฐ', u'ๆ‹‰่‘—้Œ„': u'ๆ‹‰่‘—้Œ„', u'ๆ‹Ž่‘—': u'ๆ‹Ž็€', u'ๆ‹Ž่‘—ไฝœ': u'ๆ‹Ž่‘—ไฝœ', u'ๆ‹Ž่‘—ๅ': u'ๆ‹Ž่‘—ๅ', u'ๆ‹Ž่‘—็จฑ': u'ๆ‹Ž่‘—็จฑ', u'ๆ‹Ž่‘—่€…': u'ๆ‹Ž่‘—่€…', u'ๆ‹Ž่‘—่ฟฐ': u'ๆ‹Ž่‘—่ฟฐ', u'ๆ‹Ž่‘—้Œ„': u'ๆ‹Ž่‘—้Œ„', u'ๆ‹–่‘—': u'ๆ‹–็€', u'ๆ‹–่‘—ไฝœ': u'ๆ‹–่‘—ไฝœ', u'ๆ‹–่‘—ๅ': u'ๆ‹–่‘—ๅ', u'ๆ‹–่‘—็จฑ': u'ๆ‹–่‘—็จฑ', u'ๆ‹–่‘—่€…': u'ๆ‹–่‘—่€…', u'ๆ‹–่‘—่ฟฐ': u'ๆ‹–่‘—่ฟฐ', u'ๆ‹–่‘—้Œ„': u'ๆ‹–่‘—้Œ„', u'ๆ‹ผ่‘—': u'ๆ‹ผ็€', u'ๆ‹ผ่‘—ไฝœ': u'ๆ‹ผ่‘—ไฝœ', u'ๆ‹ผ่‘—ๅ': u'ๆ‹ผ่‘—ๅ', u'ๆ‹ผ่‘—็จฑ': u'ๆ‹ผ่‘—็จฑ', u'ๆ‹ผ่‘—่€…': u'ๆ‹ผ่‘—่€…', u'ๆ‹ผ่‘—่ฟฐ': u'ๆ‹ผ่‘—่ฟฐ', u'ๆ‹ผ่‘—้Œ„': 
u'ๆ‹ผ่‘—้Œ„', u'ๆ‹ฟ่‘—': u'ๆ‹ฟ็€', u'ๆ‹ฟ็ ดๅด™': u'ๆ‹ฟ็ ดไพ–', u'ๆ‹ฟ่‘—ไฝœ': u'ๆ‹ฟ่‘—ไฝœ', u'ๆ‹ฟ่‘—ๅ': u'ๆ‹ฟ่‘—ๅ', u'ๆ‹ฟ่‘—็จฑ': u'ๆ‹ฟ่‘—็จฑ', u'ๆ‹ฟ่‘—่€…': u'ๆ‹ฟ่‘—่€…', u'ๆ‹ฟ่‘—่ฟฐ': u'ๆ‹ฟ่‘—่ฟฐ', u'ๆ‹ฟ่‘—้Œ„': u'ๆ‹ฟ่‘—้Œ„', u'ๆŒ่‘—': u'ๆŒ็€', u'ๆŒ่‘—ไฝœ': u'ๆŒ่‘—ไฝœ', u'ๆŒ่‘—ๅ': u'ๆŒ่‘—ๅ', u'ๆŒ่‘—็จฑ': u'ๆŒ่‘—็จฑ', u'ๆŒ่‘—่€…': u'ๆŒ่‘—่€…', u'ๆŒ่‘—่ฟฐ': u'ๆŒ่‘—่ฟฐ', u'ๆŒ่‘—้Œ„': u'ๆŒ่‘—้Œ„', u'ๆŒ‘่‘—': u'ๆŒ‘็€', u'ๆŒ‘่‘—ไฝœ': u'ๆŒ‘่‘—ไฝœ', u'ๆŒ‘่‘—ๅ': u'ๆŒ‘่‘—ๅ', u'ๆŒ‘่‘—็จฑ': u'ๆŒ‘่‘—็จฑ', u'ๆŒ‘่‘—่€…': u'ๆŒ‘่‘—่€…', u'ๆŒ‘่‘—่ฟฐ': u'ๆŒ‘่‘—่ฟฐ', u'ๆŒ‘่‘—้Œ„': u'ๆŒ‘่‘—้Œ„', u'ๆŒจ่‘—': u'ๆŒจ็€', u'ๆŒจ่‘—ไฝœ': u'ๆŒจ่‘—ไฝœ', u'ๆŒจ่‘—ๅ': u'ๆŒจ่‘—ๅ', u'ๆŒจ่‘—็จฑ': u'ๆŒจ่‘—็จฑ', u'ๆŒจ่‘—่€…': u'ๆŒจ่‘—่€…', u'ๆŒจ่‘—่ฟฐ': u'ๆŒจ่‘—่ฟฐ', u'ๆŒจ่‘—้Œ„': u'ๆŒจ่‘—้Œ„', u'ๆ†่‘—': u'ๆ†็€', u'ๆ†่‘—ไฝœ': u'ๆ†่‘—ไฝœ', u'ๆ†่‘—ๅ': u'ๆ†่‘—ๅ', u'ๆ†่‘—็จฑ': u'ๆ†่‘—็จฑ', u'ๆ†่‘—่€…': u'ๆ†่‘—่€…', u'ๆ†่‘—่ฟฐ': u'ๆ†่‘—่ฟฐ', u'ๆ†่‘—้Œ„': u'ๆ†่‘—้Œ„', u'ๆŽ–่‘—': u'ๆŽ–็€', u'ๆŽ–่‘—ไฝœ': u'ๆŽ–่‘—ไฝœ', u'ๆŽ–่‘—ๅ': u'ๆŽ–่‘—ๅ', u'ๆŽ–่‘—็จฑ': u'ๆŽ–่‘—็จฑ', u'ๆŽ–่‘—่€…': u'ๆŽ–่‘—่€…', u'ๆŽ–่‘—่ฟฐ': u'ๆŽ–่‘—่ฟฐ', u'ๆŽ–่‘—้Œ„': u'ๆŽ–่‘—้Œ„', u'ๆŽ™่‘—': u'ๆŽ™็€', u'ๆŽ™่‘—ไฝœ': u'ๆŽ™่‘—ไฝœ', u'ๆŽ™่‘—ๅ': u'ๆŽ™่‘—ๅ', u'ๆŽ™่‘—ๆ›ธ': u'ๆŽ™่‘—ๆ›ธ', u'ๆŽ™่‘—็จฑ': u'ๆŽ™่‘—็จฑ', u'ๆŽ™่‘—่€…': u'ๆŽ™่‘—่€…', u'ๆŽ™่‘—่ฟฐ': u'ๆŽ™่‘—่ฟฐ', u'ๆŽ™่‘—้Œ„': u'ๆŽ™่‘—้Œ„', u'ๆŽ›้‰ค': u'ๆŽ›้ˆŽ', u'ๆŽฅ่‘—': u'ๆŽฅ็€', u'ๆŽฅ่‘—ไฝœ': u'ๆŽฅ่‘—ไฝœ', u'ๆŽฅ่‘—ๅ': u'ๆŽฅ่‘—ๅ', u'ๆŽฅ่‘—็จฑ': u'ๆŽฅ่‘—็จฑ', u'ๆŽฅ่‘—่€…': u'ๆŽฅ่‘—่€…', u'ๆŽฅ่‘—่ฟฐ': u'ๆŽฅ่‘—่ฟฐ', u'ๆŽฅ่‘—้Œ„': u'ๆŽฅ่‘—้Œ„', u'ๆ‰่‘—': u'ๆ‰็€', u'ๆ‰่‘—ไฝœ': u'ๆ‰่‘—ไฝœ', u'ๆ‰่‘—ๅ': u'ๆ‰่‘—ๅ', u'ๆ‰่‘—ๆ›ธ': u'ๆ‰่‘—ๆ›ธ', u'ๆ‰่‘—็จฑ': u'ๆ‰่‘—็จฑ', u'ๆ‰่‘—่€…': u'ๆ‰่‘—่€…', u'ๆ‰่‘—่ฟฐ': u'ๆ‰่‘—่ฟฐ', u'ๆ‰่‘—้Œ„': u'ๆ‰่‘—้Œ„', u'ๆ่‘—': u'ๆ็€', u'ๆ่‘—ไฝœ': u'ๆ่‘—ไฝœ', u'ๆ่‘—ๅ': u'ๆ่‘—ๅ', u'ๆ่‘—็จฑ': u'ๆ่‘—็จฑ', u'ๆ่‘—่€…': u'ๆ่‘—่€…', u'ๆ่‘—่ฟฐ': u'ๆ่‘—่ฟฐ', 
u'ๆ่‘—้Œ„': u'ๆ่‘—้Œ„', u'ๆฎ่‘—': u'ๆฎ็€', u'ๆฎ่‘—ไฝœ': u'ๆฎ่‘—ไฝœ', u'ๆฎ่‘—ๅ': u'ๆฎ่‘—ๅ', u'ๆฎ่‘—็จฑ': u'ๆฎ่‘—็จฑ', u'ๆฎ่‘—่€…': u'ๆฎ่‘—่€…', u'ๆฎ่‘—่ฟฐ': u'ๆฎ่‘—่ฟฐ', u'ๆฎ่‘—้Œ„': u'ๆฎ่‘—้Œ„', u'ๆ‘Ÿ่‘—': u'ๆ‘Ÿ็€', u'ๆ‘Ÿ่‘—ไฝœ': u'ๆ‘Ÿ่‘—ไฝœ', u'ๆ‘Ÿ่‘—ๅ': u'ๆ‘Ÿ่‘—ๅ', u'ๆ‘Ÿ่‘—็จฑ': u'ๆ‘Ÿ่‘—็จฑ', u'ๆ‘Ÿ่‘—่€…': u'ๆ‘Ÿ่‘—่€…', u'ๆ‘Ÿ่‘—่ฟฐ': u'ๆ‘Ÿ่‘—่ฟฐ', u'ๆ‘Ÿ่‘—้Œ„': u'ๆ‘Ÿ่‘—้Œ„', u'ๆ’ผ่‘—': u'ๆ’ผ็€', u'ๆ’ผ่‘—ไฝœ': u'ๆ’ผ่‘—ไฝœ', u'ๆ’ผ่‘—ๅ': u'ๆ’ผ่‘—ๅ', u'ๆ’ผ่‘—ๆ›ธ': u'ๆ’ผ่‘—ๆ›ธ', u'ๆ’ผ่‘—็จฑ': u'ๆ’ผ่‘—็จฑ', u'ๆ’ผ่‘—่€…': u'ๆ’ผ่‘—่€…', u'ๆ’ผ่‘—่ฟฐ': u'ๆ’ผ่‘—่ฟฐ', u'ๆ’ผ่‘—้Œ„': u'ๆ’ผ่‘—้Œ„', u'ๆ“‹่‘—': u'ๆ“‹็€', u'ๆ“‹่‘—ไฝœ': u'ๆ“‹่‘—ไฝœ', u'ๆ“‹่‘—ๅ': u'ๆ“‹่‘—ๅ', u'ๆ“‹่‘—็จฑ': u'ๆ“‹่‘—็จฑ', u'ๆ“‹่‘—่€…': u'ๆ“‹่‘—่€…', u'ๆ“‹่‘—่ฟฐ': u'ๆ“‹่‘—่ฟฐ', u'ๆ“‹่‘—้Œ„': u'ๆ“‹่‘—้Œ„', u'ๆ“š่‘—': u'ๆ“š็€', u'ๆ“š่‘—ไฝœ': u'ๆ“š่‘—ไฝœ', u'ๆ“š่‘—ๅ': u'ๆ“š่‘—ๅ', u'ๆ“š่‘—ๆ›ธ': u'ๆ“š่‘—ๆ›ธ', u'ๆ“š่‘—็จฑ': u'ๆ“š่‘—็จฑ', u'ๆ“š่‘—่€…': u'ๆ“š่‘—่€…', u'ๆ“š่‘—่ฟฐ': u'ๆ“š่‘—่ฟฐ', u'ๆ“š่‘—้Œ„': u'ๆ“š่‘—้Œ„', u'ๆ“บ่‘—': u'ๆ“บ็€', u'ๆ“บ่‘—ไฝœ': u'ๆ“บ่‘—ไฝœ', u'ๆ“บ่‘—ๅ': u'ๆ“บ่‘—ๅ', u'ๆ“บ่‘—็จฑ': u'ๆ“บ่‘—็จฑ', u'ๆ“บ่‘—่€…': u'ๆ“บ่‘—่€…', u'ๆ“บ่‘—่ฟฐ': u'ๆ“บ่‘—่ฟฐ', u'ๆ“บ่‘—้Œ„': u'ๆ“บ่‘—้Œ„', u'ๆ•…ไบ‹้‡Œ': u'ๆ•…ไบ‹่ฃ', u'ๆ•…ไบ‹่ฃก': u'ๆ•…ไบ‹่ฃ', u'ๆ•ž่‘—': u'ๆ•ž็€', u'ๆ•ž่‘—ไฝœ': u'ๆ•ž่‘—ไฝœ', u'ๆ•ž่‘—ๅ': u'ๆ•ž่‘—ๅ', u'ๆ•ž่‘—็จฑ': u'ๆ•ž่‘—็จฑ', u'ๆ•ž่‘—่€…': u'ๆ•ž่‘—่€…', u'ๆ•ž่‘—่ฟฐ': u'ๆ•ž่‘—่ฟฐ', u'ๆ•ž่‘—้Œ„': u'ๆ•ž่‘—้Œ„', u'ๆ•ธ่‘—': u'ๆ•ธ็€', u'ๆ•ธ่‘—ไฝœ': u'ๆ•ธ่‘—ไฝœ', u'ๆ•ธ่‘—ๅ': u'ๆ•ธ่‘—ๅ', u'ๆ•ธ่‘—็จฑ': u'ๆ•ธ่‘—็จฑ', u'ๆ•ธ่‘—่€…': u'ๆ•ธ่‘—่€…', u'ๆ•ธ่‘—่ฟฐ': u'ๆ•ธ่‘—่ฟฐ', u'ๆ•ธ่‘—้Œ„': u'ๆ•ธ่‘—้Œ„', u'ๆ–ฅ่‘—': u'ๆ–ฅ็€', u'ๆ–ฅ่‘—ไฝœ': u'ๆ–ฅ่‘—ไฝœ', u'ๆ–ฅ่‘—ๅ': u'ๆ–ฅ่‘—ๅ', u'ๆ–ฅ่‘—ๆ›ธ': u'ๆ–ฅ่‘—ๆ›ธ', u'ๆ–ฅ่‘—็จฑ': u'ๆ–ฅ่‘—็จฑ', u'ๆ–ฅ่‘—่€…': u'ๆ–ฅ่‘—่€…', u'ๆ–ฅ่‘—่ฟฐ': u'ๆ–ฅ่‘—่ฟฐ', u'ๆ–ฅ่‘—้Œ„': u'ๆ–ฅ่‘—้Œ„', u'ๅฒ็“ฆๆฟŸ่˜ญ': u'ๆ–ฏๅจๅฃซ่˜ญ', u'ๆ–ฏๆด›็ถญๅฐผไบž': u'ๆ–ฏๆด›ๆ–‡ๅฐผไบž', u'ๆ–ฐ่‘—้พ่™Ž้–€': u'ๆ–ฐ่‘—้พ่™Ž้–€', 
u'็ด่ฅฟ่˜ญ': u'ๆ–ฐ่ฅฟ่˜ญ', u'ๆ—ฅๅญ้‡Œ': u'ๆ—ฅๅญ่ฃ', u'ๆ—ฅๅญ่ฃก': u'ๆ—ฅๅญ่ฃ', u'ๆ˜‚่‘—': u'ๆ˜‚็€', u'ๆ˜‚่‘—ไฝœ': u'ๆ˜‚่‘—ไฝœ', u'ๆ˜‚่‘—ๅ': u'ๆ˜‚่‘—ๅ', u'ๆ˜‚่‘—ๆ›ธ': u'ๆ˜‚่‘—ๆ›ธ', u'ๆ˜‚่‘—็จฑ': u'ๆ˜‚่‘—็จฑ', u'ๆ˜‚่‘—่€…': u'ๆ˜‚่‘—่€…', u'ๆ˜‚่‘—่ฟฐ': u'ๆ˜‚่‘—่ฟฐ', u'ๆ˜‚่‘—้Œ„': u'ๆ˜‚่‘—้Œ„', u'ๆ˜ ่‘—': u'ๆ˜ ็€', u'ๆ˜ ่‘—ไฝœ': u'ๆ˜ ่‘—ไฝœ', u'ๆ˜ ่‘—ๅ': u'ๆ˜ ่‘—ๅ', u'ๆ˜ ่‘—ๆ›ธ': u'ๆ˜ ่‘—ๆ›ธ', u'ๆ˜ ่‘—็จฑ': u'ๆ˜ ่‘—็จฑ', u'ๆ˜ ่‘—่€…': u'ๆ˜ ่‘—่€…', u'ๆ˜ ่‘—่ฟฐ': u'ๆ˜ ่‘—่ฟฐ', u'ๆ˜ ่‘—้Œ„': u'ๆ˜ ่‘—้Œ„', u'ๆ˜ฅๅ‡้‡Œ': u'ๆ˜ฅๅ‡่ฃ', u'ๆ˜ฅๅ‡่ฃก': u'ๆ˜ฅๅ‡่ฃ', u'ๆ˜ฅๅคฉ่ฃก': u'ๆ˜ฅๅคฉ่ฃ', u'ๆ˜ฅๅคฉ้‡Œ': u'ๆ˜ฅๅคฉ่ฃ', u'ๆ˜ฅๆ—ฅ่ฃก': u'ๆ˜ฅๆ—ฅ่ฃ', u'ๆ˜ฅๆ—ฅ้‡Œ': u'ๆ˜ฅๆ—ฅ่ฃ', u'ๆ—ถ้—ด้‡Œ': u'ๆ™‚้–“่ฃ', u'ๆ™‚้–“่ฃก': u'ๆ™‚้–“่ฃ', u'ๆ™ƒ่‘—': u'ๆ™ƒ็€', u'ๆ™ƒ่‘—ไฝœ': u'ๆ™ƒ่‘—ไฝœ', u'ๆ™ƒ่‘—ๅ': u'ๆ™ƒ่‘—ๅ', u'ๆ™ƒ่‘—็จฑ': u'ๆ™ƒ่‘—็จฑ', u'ๆ™ƒ่‘—่€…': u'ๆ™ƒ่‘—่€…', u'ๆ™ƒ่‘—่ฟฐ': u'ๆ™ƒ่‘—่ฟฐ', u'ๆ™ƒ่‘—้Œ„': u'ๆ™ƒ่‘—้Œ„', u'ๆš‘ๅ‡้‡Œ': u'ๆš‘ๅ‡่ฃ', u'ๆš‘ๅ‡่ฃก': u'ๆš‘ๅ‡่ฃ', u'ๆš—่‘—': u'ๆš—็€', u'ๆš—่‘—ไฝœ': u'ๆš—่‘—ไฝœ', u'ๆš—่‘—ๅ': u'ๆš—่‘—ๅ', u'ๆš—่‘—ๆ›ธ': u'ๆš—่‘—ๆ›ธ', u'ๆš—่‘—็จฑ': u'ๆš—่‘—็จฑ', u'ๆš—่‘—่€…': u'ๆš—่‘—่€…', u'ๆš—่‘—่ฟฐ': u'ๆš—่‘—่ฟฐ', u'ๆš—่‘—้Œ„': u'ๆš—่‘—้Œ„', u'ๆœ‰่‘—': u'ๆœ‰็€', u'ๆœ‰่‘—ไฝœ': u'ๆœ‰่‘—ไฝœ', u'ๆœ‰่‘—ๅ': u'ๆœ‰่‘—ๅ', u'ๆœ‰่‘—ๆ›ธ': u'ๆœ‰่‘—ๆ›ธ', u'ๆœ‰่‘—็จฑ': u'ๆœ‰่‘—็จฑ', u'ๆœ‰่‘—่€…': u'ๆœ‰่‘—่€…', u'ๆœ‰่‘—่ฟฐ': u'ๆœ‰่‘—่ฟฐ', u'ๆœ‰่‘—้Œ„': u'ๆœ‰่‘—้Œ„', u'ๆœ›่‘—': u'ๆœ›็€', u'ๆœ›่‘—ไฝœ': u'ๆœ›่‘—ไฝœ', u'ๆœ›่‘—ๅ': u'ๆœ›่‘—ๅ', u'ๆœ›่‘—ๆ›ธ': u'ๆœ›่‘—ๆ›ธ', u'ๆœ›่‘—็จฑ': u'ๆœ›่‘—็จฑ', u'ๆœ›่‘—่€…': u'ๆœ›่‘—่€…', u'ๆœ›่‘—่ฟฐ': u'ๆœ›่‘—่ฟฐ', u'ๆœ›่‘—้Œ„': u'ๆœ›่‘—้Œ„', u'ๆœ่‘—': u'ๆœ็€', u'ๆœ่‘—ไฝœ': u'ๆœ่‘—ไฝœ', u'ๆœ่‘—ๅ': u'ๆœ่‘—ๅ', u'ๆœ่‘—็จฑ': u'ๆœ่‘—็จฑ', u'ๆœ่‘—่€…': u'ๆœ่‘—่€…', u'ๆœ่‘—่ฟฐ': u'ๆœ่‘—่ฟฐ', u'ๆœ่‘—้Œ„': u'ๆœ่‘—้Œ„', u'ๆœฌ่‘—': u'ๆœฌ็€', u'ๆœฌ่‘—ไฝœ': u'ๆœฌ่‘—ไฝœ', u'ๆœฌ่‘—ๅ': u'ๆœฌ่‘—ๅ', u'ๆœฌ่‘—ๆ›ธ': u'ๆœฌ่‘—ๆ›ธ', u'ๆœฌ่‘—็จฑ': u'ๆœฌ่‘—็จฑ', u'ๆœฌ่‘—่€…': u'ๆœฌ่‘—่€…', 
u'ๆœฌ่‘—่ฟฐ': u'ๆœฌ่‘—่ฟฐ', u'ๆœฌ่‘—้Œ„': u'ๆœฌ่‘—้Œ„', u'ๆ‘ๅญ้‡Œ': u'ๆ‘ๅญ่ฃ', u'ๆ‘ๅญ่ฃก': u'ๆ‘ๅญ่ฃ', u'ๆž•่‘—': u'ๆž•็€', u'ๆž•่‘—ไฝœ': u'ๆž•่‘—ไฝœ', u'ๆž•่‘—ๅ': u'ๆž•่‘—ๅ', u'ๆž•่‘—็จฑ': u'ๆž•่‘—็จฑ', u'ๆž•่‘—่€…': u'ๆž•่‘—่€…', u'ๆž•่‘—่ฟฐ': u'ๆž•่‘—่ฟฐ', u'ๆž•่‘—้Œ„': u'ๆž•่‘—้Œ„', u'ๆ ผ็‘ž้‚ฃ้”': u'ๆ ผๆž—็ด้”', u'ๆ’ž็ƒ': u'ๆกŒ็ƒ', u'ๅฐ็ƒ': u'ๆกŒ็ƒ', u'ๆขณ่‘—': u'ๆขณ็€', u'ๆขณ่‘—ไฝœ': u'ๆขณ่‘—ไฝœ', u'ๆขณ่‘—ๅ': u'ๆขณ่‘—ๅ', u'ๆขณ่‘—็จฑ': u'ๆขณ่‘—็จฑ', u'ๆขณ่‘—่€…': u'ๆขณ่‘—่€…', u'ๆขณ่‘—่ฟฐ': u'ๆขณ่‘—่ฟฐ', u'ๆขณ่‘—้Œ„': u'ๆขณ่‘—้Œ„', u'ๆฃฎๆž—่ฃก': u'ๆฃฎๆž—่ฃ', u'ๆฃฎๆž—้‡Œ': u'ๆฃฎๆž—่ฃ', u'ๆฃบๆ่ฃก': u'ๆฃบๆ่ฃ', u'ๆฃบๆ้‡Œ': u'ๆฃบๆ่ฃ', u'ๆฆด่“ฎ': u'ๆฆดๆงค', u'ๆฆด่Žฒ': u'ๆฆดๆงค', u'ๆจ‚่‘—': u'ๆจ‚็€', u'ๆจ‚่‘—ไฝœ': u'ๆจ‚่‘—ไฝœ', u'ๆจ‚่‘—ๅ': u'ๆจ‚่‘—ๅ', u'ๆจ‚่‘—ๆ›ธ': u'ๆจ‚่‘—ๆ›ธ', u'ๆจ‚่‘—็จฑ': u'ๆจ‚่‘—็จฑ', u'ๆจ‚่‘—่€…': u'ๆจ‚่‘—่€…', u'ๆจ‚่‘—่ฟฐ': u'ๆจ‚่‘—่ฟฐ', u'ๆจ‚่‘—้Œ„': u'ๆจ‚่‘—้Œ„', u'ๅฏถ็…': u'ๆจ™่‡ด', u'ๆจ™่ชŒ่‘—': u'ๆจ™่ชŒ็€', u'ๆฉŸๅ™จไบบ': u'ๆฉŸๆขฐไบบ', u'ๆœบๅ™จไบบ': u'ๆฉŸๆขฐไบบ', u'ๅކๅฒ้‡Œ': u'ๆญทๅฒ่ฃ', u'ๆญทๅฒ่ฃก': u'ๆญทๅฒ่ฃ', u'ๆฎบ่‘—': u'ๆฎบ็€', u'ๆฎบ่‘—ไฝœ': u'ๆฎบ่‘—ไฝœ', u'ๆฎบ่‘—ๅ': u'ๆฎบ่‘—ๅ', u'ๆฎบ่‘—ๆ›ธ': u'ๆฎบ่‘—ๆ›ธ', u'ๆฎบ่‘—็จฑ': u'ๆฎบ่‘—็จฑ', u'ๆฎบ่‘—่€…': u'ๆฎบ่‘—่€…', u'ๆฎบ่‘—่ฟฐ': u'ๆฎบ่‘—่ฟฐ', u'ๆฎบ่‘—้Œ„': u'ๆฎบ่‘—้Œ„', u'่Œ…ๅˆฉๅก”ๅฐผไบž': u'ๆฏ›้‡Œๅก”ๅฐผไบž', u'ๆฏ›้‡Œๆฑ‚ๆ–ฏ': u'ๆฏ›้‡Œ่ฃ˜ๆ–ฏ', u'ๆจก้‡Œ่ฅฟๆ–ฏ': u'ๆฏ›้‡Œ่ฃ˜ๆ–ฏ', u'ๆฑ‚่‘—': u'ๆฑ‚็€', u'ๆฑ‚่‘—ไฝœ': u'ๆฑ‚่‘—ไฝœ', u'ๆฑ‚่‘—ๅ': u'ๆฑ‚่‘—ๅ', u'ๆฑ‚่‘—ๆ›ธ': u'ๆฑ‚่‘—ๆ›ธ', u'ๆฑ‚่‘—็จฑ': u'ๆฑ‚่‘—็จฑ', u'ๆฑ‚่‘—่€…': u'ๆฑ‚่‘—่€…', u'ๆฑ‚่‘—่ฟฐ': u'ๆฑ‚่‘—่ฟฐ', u'ๆฑ‚่‘—้Œ„': u'ๆฑ‚่‘—้Œ„', u'ๆ–‡่Žฑ': u'ๆฑถ่Š', u'ๆฒ‰่‘—': u'ๆฒ‰็€', u'ๆฒ‰่‘—ไฝœ': u'ๆฒ‰่‘—ไฝœ', u'ๆฒ‰่‘—ๅ': u'ๆฒ‰่‘—ๅ', u'ๆฒ‰่‘—ๆ›ธ': u'ๆฒ‰่‘—ๆ›ธ', u'ๆฒ‰่‘—็จฑ': u'ๆฒ‰่‘—็จฑ', u'ๆฒ‰่‘—่€…': u'ๆฒ‰่‘—่€…', u'ๆฒ‰่‘—่ฟฐ': u'ๆฒ‰่‘—่ฟฐ', u'ๆฒ‰่‘—้Œ„': u'ๆฒ‰่‘—้Œ„', u'ๆฒ™ๅœฐ้˜ฟๆ‹‰ไผฏ': u'ๆฒ™็‰น้˜ฟๆ‹‰ไผฏ', u'ๆฒ™็ƒๅœฐ้˜ฟๆ‹‰ไผฏ': u'ๆฒ™็‰น้˜ฟๆ‹‰ไผฏ', u'้ฉฌๆ‹‰็‰นยท่จ่Šฌ': 
u'ๆฒ™่Šฌ', u'ๆฒฟ่‘—': u'ๆฒฟ็€', u'ๆฒฟ่‘—ไฝœ': u'ๆฒฟ่‘—ไฝœ', u'ๆฒฟ่‘—ๅ': u'ๆฒฟ่‘—ๅ', u'ๆฒฟ่‘—ๆ›ธ': u'ๆฒฟ่‘—ๆ›ธ', u'ๆฒฟ่‘—็จฑ': u'ๆฒฟ่‘—็จฑ', u'ๆฒฟ่‘—่€…': u'ๆฒฟ่‘—่€…', u'ๆฒฟ่‘—่ฟฐ': u'ๆฒฟ่‘—่ฟฐ', u'ๆฒฟ่‘—้Œ„': u'ๆฒฟ่‘—้Œ„', u'ๆณขๅฃซๅฐผไบž่ตซๅกžๅ“ฅ็ถญ็ด': u'ๆณขๆ–ฏๅฐผไบž้ป‘ๅกžๅ“ฅ็ถญ้‚ฃ', u'่พ›ๅทดๅจ': u'ๆดฅๅทดๅธƒ้Ÿ‹', u'ๅฎ้ƒฝๆ‹‰ๆ–ฏ': u'ๆดช้ƒฝๆ‹‰ๆ–ฏ', u'ๆดป่‘—': u'ๆดป็€', u'ๆดป่‘—ไฝœ': u'ๆดป่‘—ไฝœ', u'ๆดป่‘—ๅ': u'ๆดป่‘—ๅ', u'ๆดป่‘—ๆ›ธ': u'ๆดป่‘—ๆ›ธ', u'ๆดป่‘—็จฑ': u'ๆดป่‘—็จฑ', u'ๆดป่‘—่€…': u'ๆดป่‘—่€…', u'ๆดป่‘—่ฟฐ': u'ๆดป่‘—่ฟฐ', u'ๆดป่‘—้Œ„': u'ๆดป่‘—้Œ„', u'่กŒๅ‹•้›ป่ฉฑ': u'ๆตๅ‹•้›ป่ฉฑ', u'็งปๅŠจ็”ต่ฏ': u'ๆตๅ‹•้›ป่ฉฑ', u'ๆต่‘—': u'ๆต็€', u'ๆต่‘—ไฝœ': u'ๆต่‘—ไฝœ', u'ๆต่‘—ๅ': u'ๆต่‘—ๅ', u'ๆต่‘—ๆ›ธ': u'ๆต่‘—ๆ›ธ', u'ๆต่‘—็จฑ': u'ๆต่‘—็จฑ', u'ๆต่‘—่€…': u'ๆต่‘—่€…', u'ๆต่‘—่ฟฐ': u'ๆต่‘—่ฟฐ', u'ๆต่‘—้Œ„': u'ๆต่‘—้Œ„', u'ๆต้œฒ่‘—': u'ๆต้œฒ็€', u'ๆตฎ่‘—': u'ๆตฎ็€', u'ๆตฎ่‘—ไฝœ': u'ๆตฎ่‘—ไฝœ', u'ๆตฎ่‘—ๅ': u'ๆตฎ่‘—ๅ', u'ๆตฎ่‘—ๆ›ธ': u'ๆตฎ่‘—ๆ›ธ', u'ๆตฎ่‘—็จฑ': u'ๆตฎ่‘—็จฑ', u'ๆตฎ่‘—่€…': u'ๆตฎ่‘—่€…', u'ๆตฎ่‘—่ฟฐ': u'ๆตฎ่‘—่ฟฐ', u'ๆตฎ่‘—้Œ„': u'ๆตฎ่‘—้Œ„', u'ๆถต่‘—': u'ๆถต็€', u'ๆถต่‘—ไฝœ': u'ๆถต่‘—ไฝœ', u'ๆถต่‘—ๅ': u'ๆถต่‘—ๅ', u'ๆถต่‘—ๆ›ธ': u'ๆถต่‘—ๆ›ธ', u'ๆถต่‘—็จฑ': u'ๆถต่‘—็จฑ', u'ๆถต่‘—่€…': u'ๆถต่‘—่€…', u'ๆถต่‘—่ฟฐ': u'ๆถต่‘—่ฟฐ', u'ๆถต่‘—้Œ„': u'ๆถต่‘—้Œ„', u'ๆถผ่‘—': u'ๆถผ็€', u'ๆถผ่‘—ไฝœ': u'ๆถผ่‘—ไฝœ', u'ๆถผ่‘—ๅ': u'ๆถผ่‘—ๅ', u'ๆถผ่‘—ๆ›ธ': u'ๆถผ่‘—ๆ›ธ', u'ๆถผ่‘—็จฑ': u'ๆถผ่‘—็จฑ', u'ๆถผ่‘—่€…': u'ๆถผ่‘—่€…', u'ๆถผ่‘—่ฟฐ': u'ๆถผ่‘—่ฟฐ', u'ๆถผ่‘—้Œ„': u'ๆถผ่‘—้Œ„', u'ๆทฑๆทต่ฃก': u'ๆทฑๆทต่ฃ', u'ๆทฑๆธŠ้‡Œ': u'ๆทฑๆธŠ่ฃ', u'ๆธด่‘—': u'ๆธด็€', u'ๆธด่‘—ไฝœ': u'ๆธด่‘—ไฝœ', u'ๆธด่‘—ๅ': u'ๆธด่‘—ๅ', u'ๆธด่‘—ๆ›ธ': u'ๆธด่‘—ๆ›ธ', u'ๆธด่‘—็จฑ': u'ๆธด่‘—็จฑ', u'ๆธด่‘—่€…': u'ๆธด่‘—่€…', u'ๆธด่‘—่ฟฐ': u'ๆธด่‘—่ฟฐ', u'ๆธด่‘—้Œ„': u'ๆธด่‘—้Œ„', u'ๆบข่‘—': u'ๆบข็€', u'ๆบข่‘—ไฝœ': u'ๆบข่‘—ไฝœ', u'ๆบข่‘—ๅ': u'ๆบข่‘—ๅ', u'ๆบข่‘—ๆ›ธ': u'ๆบข่‘—ๆ›ธ', u'ๆบข่‘—็จฑ': u'ๆบข่‘—็จฑ', u'ๆบข่‘—่€…': u'ๆบข่‘—่€…', u'ๆบข่‘—่ฟฐ': 
u'ๆบข่‘—่ฟฐ', u'ๆบข่‘—้Œ„': u'ๆบข่‘—้Œ„', u'ๆผ”่‘—': u'ๆผ”็€', u'ๆผ”่‘—ไฝœ': u'ๆผ”่‘—ไฝœ', u'ๆผ”่‘—ๅ': u'ๆผ”่‘—ๅ', u'ๆผ”่‘—ๆ›ธ': u'ๆผ”่‘—ๆ›ธ', u'ๆผ”่‘—็จฑ': u'ๆผ”่‘—็จฑ', u'ๆผ”่‘—่€…': u'ๆผ”่‘—่€…', u'ๆผ”่‘—่ฟฐ': u'ๆผ”่‘—่ฟฐ', u'ๆผ”่‘—้Œ„': u'ๆผ”่‘—้Œ„', u'ๆผซ่‘—': u'ๆผซ็€', u'ๆผซ่‘—ไฝœ': u'ๆผซ่‘—ไฝœ', u'ๆผซ่‘—ๅ': u'ๆผซ่‘—ๅ', u'ๆผซ่‘—ๆ›ธ': u'ๆผซ่‘—ๆ›ธ', u'ๆผซ่‘—็จฑ': u'ๆผซ่‘—็จฑ', u'ๆผซ่‘—่€…': u'ๆผซ่‘—่€…', u'ๆผซ่‘—่ฟฐ': u'ๆผซ่‘—่ฟฐ', u'ๆผซ่‘—้Œ„': u'ๆผซ่‘—้Œ„', u'ๆฝค่‘—': u'ๆฝค็€', u'ๆฝค่‘—ไฝœ': u'ๆฝค่‘—ไฝœ', u'ๆฝค่‘—ๅ': u'ๆฝค่‘—ๅ', u'ๆฝค่‘—ๆ›ธ': u'ๆฝค่‘—ๆ›ธ', u'ๆฝค่‘—็จฑ': u'ๆฝค่‘—็จฑ', u'ๆฝค่‘—่€…': u'ๆฝค่‘—่€…', u'ๆฝค่‘—่ฟฐ': u'ๆฝค่‘—่ฟฐ', u'ๆฝค่‘—้Œ„': u'ๆฝค่‘—้Œ„', u'่ธ': u'็…™', u'็…ง่‘—': u'็…ง็€', u'็…ง่‘—ไฝœ': u'็…ง่‘—ไฝœ', u'็…ง่‘—ๅ': u'็…ง่‘—ๅ', u'็…ง่‘—ๆ›ธ': u'็…ง่‘—ๆ›ธ', u'็…ง่‘—็จฑ': u'็…ง่‘—็จฑ', u'็…ง่‘—่€…': u'็…ง่‘—่€…', u'็…ง่‘—่ฟฐ': u'็…ง่‘—่ฟฐ', u'็…ง่‘—้Œ„': u'็…ง่‘—้Œ„', u'็‡’่‘—': u'็‡’็€', u'็‡’่‘—ไฝœ': u'็‡’่‘—ไฝœ', u'็‡’่‘—ๅ': u'็‡’่‘—ๅ', u'็‡’่‘—ๆ›ธ': u'็‡’่‘—ๆ›ธ', u'็‡’่‘—็จฑ': u'็‡’่‘—็จฑ', u'็‡’่‘—่€…': u'็‡’่‘—่€…', u'็‡’่‘—่ฟฐ': u'็‡’่‘—่ฟฐ', u'็‡’่‘—้Œ„': u'็‡’่‘—้Œ„', u'็ˆญ่‘—': u'็ˆญ็€', u'็ˆญ่‘—ไฝœ': u'็ˆญ่‘—ไฝœ', u'็ˆญ่‘—ๅ': u'็ˆญ่‘—ๅ', u'็ˆญ่‘—ๆ›ธ': u'็ˆญ่‘—ๆ›ธ', u'็ˆญ่‘—็จฑ': u'็ˆญ่‘—็จฑ', u'็ˆญ่‘—่€…': u'็ˆญ่‘—่€…', u'็ˆญ่‘—่ฟฐ': u'็ˆญ่‘—่ฟฐ', u'็ˆญ่‘—้Œ„': u'็ˆญ่‘—้Œ„', u'ๅƒ้‡Œ้”ๆ‰˜่ฒๅ“ฅ': u'็‰น็ซ‹ๅฐผ้”ๅ’Œๅคšๅทดๅ“ฅ', u'็‰ฝ่‘—': u'็‰ฝ็€', u'็‰ฝ่‘—ไฝœ': u'็‰ฝ่‘—ไฝœ', u'็‰ฝ่‘—ๅ': u'็‰ฝ่‘—ๅ', u'็‰ฝ่‘—ๆ›ธ': u'็‰ฝ่‘—ๆ›ธ', u'็‰ฝ่‘—็จฑ': u'็‰ฝ่‘—็จฑ', u'็‰ฝ่‘—่€…': u'็‰ฝ่‘—่€…', u'็‰ฝ่‘—่ฟฐ': u'็‰ฝ่‘—่ฟฐ', u'็‰ฝ่‘—้Œ„': u'็‰ฝ่‘—้Œ„', u'็Šฏไธ่‘—': u'็Šฏไธ็€', u'็Šฏไธ่‘—ไฝœ': u'็Šฏไธ่‘—ไฝœ', u'็Šฏไธ่‘—ๅ': u'็Šฏไธ่‘—ๅ', u'็Šฏไธ่‘—ๆ›ธ': u'็Šฏไธ่‘—ๆ›ธ', u'็Šฏไธ่‘—็จฑ': u'็Šฏไธ่‘—็จฑ', u'็Šฏไธ่‘—่€…': u'็Šฏไธ่‘—่€…', u'็Šฏไธ่‘—่ฟฐ': u'็Šฏไธ่‘—่ฟฐ', u'็Šฏไธ่‘—้Œ„': u'็Šฏไธ่‘—้Œ„', u'็Šฏๅพ—่‘—': u'็Šฏๅพ—็€', u'็Šฌๅช': u'็‹—้šป', u'็Œœ่‘—': u'็Œœ็€', u'็Œœ่‘—ไฝœ': 
u'็Œœ่‘—ไฝœ', u'็Œœ่‘—ๅ': u'็Œœ่‘—ๅ', u'็Œœ่‘—ๆ›ธ': u'็Œœ่‘—ๆ›ธ', u'็Œœ่‘—็จฑ': u'็Œœ่‘—็จฑ', u'็Œœ่‘—่€…': u'็Œœ่‘—่€…', u'็Œœ่‘—่ฟฐ': u'็Œœ่‘—่ฟฐ', u'็Œœ่‘—้Œ„': u'็Œœ่‘—้Œ„', u'็‹ฑ้‡Œ': u'็„่ฃ', u'็„่ฃก': u'็„่ฃ', u'็จ่‘—': u'็จ็€', u'็จ่‘—ไฝœ': u'็จ่‘—ไฝœ', u'็จ่‘—ๅ': u'็จ่‘—ๅ', u'็จ่‘—ๆ›ธ': u'็จ่‘—ๆ›ธ', u'็จ่‘—็จฑ': u'็จ่‘—็จฑ', u'็จ่‘—่€…': u'็จ่‘—่€…', u'็จ่‘—่ฟฐ': u'็จ่‘—่ฟฐ', u'็จ่‘—้Œ„': u'็จ่‘—้Œ„', u'็ฒ่‘—': u'็ฒ็€', u'็ฒ่‘—ไฝœ': u'็ฒ่‘—ไฝœ', u'็ฒ่‘—ๅ': u'็ฒ่‘—ๅ', u'็ฒ่‘—ๆ›ธ': u'็ฒ่‘—ๆ›ธ', u'็ฒ่‘—็จฑ': u'็ฒ่‘—็จฑ', u'็ฒ่‘—่€…': u'็ฒ่‘—่€…', u'็ฒ่‘—่ฟฐ': u'็ฒ่‘—่ฟฐ', u'็ฒ่‘—้Œ„': u'็ฒ่‘—้Œ„', u'่ซพ้ญฏ': u'็‘™้ญฏ', u'่ฌ้‚ฃๆœ': u'็“ฆๅŠช้˜ฟๅœ–', u'็”œ่‘—': u'็”œ็€', u'็”œ่‘—ไฝœ': u'็”œ่‘—ไฝœ', u'็”œ่‘—ๅ': u'็”œ่‘—ๅ', u'็”œ่‘—ๆ›ธ': u'็”œ่‘—ๆ›ธ', u'็”œ่‘—็จฑ': u'็”œ่‘—็จฑ', u'็”œ่‘—่€…': u'็”œ่‘—่€…', u'็”œ่‘—่ฟฐ': u'็”œ่‘—่ฟฐ', u'็”œ่‘—้Œ„': u'็”œ่‘—้Œ„', u'็”จไธ่‘—': u'็”จไธ็€', u'็”จๅพ—่‘—': u'็”จๅพ—็€', u'็”จ่‘—': u'็”จ็€', u'็”จ่‘—ไฝœ': u'็”จ่‘—ไฝœ', u'็”จ่‘—ๅ': u'็”จ่‘—ๅ', u'็”จ่‘—ๆ›ธ': u'็”จ่‘—ๆ›ธ', u'็”จ่‘—็จฑ': u'็”จ่‘—็จฑ', u'็”จ่‘—่€…': u'็”จ่‘—่€…', u'็”จ่‘—่ฟฐ': u'็”จ่‘—่ฟฐ', u'็”จ่‘—้Œ„': u'็”จ่‘—้Œ„', u'็•™่‘—': u'็•™็€', u'็•™่‘—ไฝœ': u'็•™่‘—ไฝœ', u'็•™่‘—ๅ': u'็•™่‘—ๅ', u'็•™่‘—ๆ›ธ': u'็•™่‘—ๆ›ธ', u'็•™่‘—็จฑ': u'็•™่‘—็จฑ', u'็•™่‘—่€…': u'็•™่‘—่€…', u'็•™่‘—่ฟฐ': u'็•™่‘—่ฟฐ', u'็•™่‘—้Œ„': u'็•™่‘—้Œ„', u'็•ถ่‘—': u'็•ถ็€', u'็•ถ่‘—ไฝœ': u'็•ถ่‘—ไฝœ', u'็•ถ่‘—ๅ': u'็•ถ่‘—ๅ', u'็•ถ่‘—ๆ›ธ': u'็•ถ่‘—ๆ›ธ', u'็•ถ่‘—็จฑ': u'็•ถ่‘—็จฑ', u'็•ถ่‘—่€…': u'็•ถ่‘—่€…', u'็•ถ่‘—่ฟฐ': u'็•ถ่‘—่ฟฐ', u'็•ถ่‘—้Œ„': u'็•ถ่‘—้Œ„', u'็–‘่‘—': u'็–‘็€', u'็–‘่‘—ไฝœ': u'็–‘่‘—ไฝœ', u'็–‘่‘—ๅ': u'็–‘่‘—ๅ', u'็–‘่‘—ๆ›ธ': u'็–‘่‘—ๆ›ธ', u'็–‘่‘—็จฑ': u'็–‘่‘—็จฑ', u'็–‘่‘—่€…': u'็–‘่‘—่€…', u'็–‘่‘—่ฟฐ': u'็–‘่‘—่ฟฐ', u'็–‘่‘—้Œ„': u'็–‘่‘—้Œ„', u'ๅ‘ๅธƒ': u'็™ผไฝˆ', u'็™ผๅธƒ': u'็™ผไฝˆ', u'็™พ็ง‘่ฃก': u'็™พ็ง‘่ฃ', u'็™พ็ง‘้‡Œ': u'็™พ็ง‘่ฃ', u'่จˆ็จ‹่ปŠ': u'็š„ๅฃซ', u'ๅ‡บ็งŸ่ฝฆ': 
u'็š„ๅฃซ', u'็šฎ้‡Œ้˜ณ็ง‹': u'็šฎ่ฃ้™ฝ็ง‹', u'็šฎ่ฃก้™ฝ็ง‹': u'็šฎ่ฃ้™ฝ็ง‹', u'็šบ่‘—': u'็šบ็€', u'็šบ่‘—ไฝœ': u'็šบ่‘—ไฝœ', u'็šบ่‘—ๅ': u'็šบ่‘—ๅ', u'็šบ่‘—ๆ›ธ': u'็šบ่‘—ๆ›ธ', u'็šบ่‘—็จฑ': u'็šบ่‘—็จฑ', u'็šบ่‘—่€…': u'็šบ่‘—่€…', u'็šบ่‘—่ฟฐ': u'็šบ่‘—่ฟฐ', u'็šบ่‘—้Œ„': u'็šบ่‘—้Œ„', u'็››่‘—': u'็››็€', u'็››่‘—ไฝœ': u'็››่‘—ไฝœ', u'็››่‘—ๅ': u'็››่‘—ๅ', u'็››่‘—ๆ›ธ': u'็››่‘—ๆ›ธ', u'็››่‘—็จฑ': u'็››่‘—็จฑ', u'็››่‘—่€…': u'็››่‘—่€…', u'็››่‘—่ฟฐ': u'็››่‘—่ฟฐ', u'็››่‘—้Œ„': u'็››่‘—้Œ„', u'็›งๅฎ‰้”': u'็›งๆ—บ้”', u'็›ฏ่‘—': u'็›ฏ็€', u'็›ฏ่‘—ไฝœ': u'็›ฏ่‘—ไฝœ', u'็›ฏ่‘—ๅ': u'็›ฏ่‘—ๅ', u'็›ฏ่‘—ๆ›ธ': u'็›ฏ่‘—ๆ›ธ', u'็›ฏ่‘—็จฑ': u'็›ฏ่‘—็จฑ', u'็›ฏ่‘—่€…': u'็›ฏ่‘—่€…', u'็›ฏ่‘—่ฟฐ': u'็›ฏ่‘—่ฟฐ', u'็›ฏ่‘—้Œ„': u'็›ฏ่‘—้Œ„', u'็›พ่‘—': u'็›พ็€', u'็›พ่‘—ไฝœ': u'็›พ่‘—ไฝœ', u'็›พ่‘—ๅ': u'็›พ่‘—ๅ', u'็›พ่‘—ๆ›ธ': u'็›พ่‘—ๆ›ธ', u'็›พ่‘—็จฑ': u'็›พ่‘—็จฑ', u'็›พ่‘—่€…': u'็›พ่‘—่€…', u'็›พ่‘—่ฟฐ': u'็›พ่‘—่ฟฐ', u'็›พ่‘—้Œ„': u'็›พ่‘—้Œ„', u'็œ‹ไธ่‘—': u'็œ‹ไธ็€', u'็œ‹ๅพ—่‘—': u'็œ‹ๅพ—็€', u'็œ‹่‘—': u'็œ‹็€', u'็œ‹่‘—ไฝœ': u'็œ‹่‘—ไฝœ', u'็œ‹่‘—ๅ': u'็œ‹่‘—ๅ', u'็œ‹่‘—ๆ›ธ': u'็œ‹่‘—ๆ›ธ', u'็œ‹่‘—็จฑ': u'็œ‹่‘—็จฑ', u'็œ‹่‘—่€…': u'็œ‹่‘—่€…', u'็œ‹่‘—่ฟฐ': u'็œ‹่‘—่ฟฐ', u'็œ‹่‘—้Œ„': u'็œ‹่‘—้Œ„', u'็œผ็›่ฃก': u'็œผ็›่ฃ', u'็œผ็›้‡Œ': u'็œผ็›่ฃ', u'่‘—ไป€้บผๆ€ฅ': u'็€ไป€้บผๆ€ฅ', u'่‘—ไป–': u'็€ไป–', u'่‘—ไฝ ': u'็€ไฝ ', u'่‘—ๅŠ›': u'็€ๅŠ›', u'่‘—ๅœฐ': u'็€ๅœฐ', u'่‘—ๅขจ': u'็€ๅขจ', u'่‘—ๅฅน': u'็€ๅฅน', u'่‘—ๅฆณ': u'็€ๅฆณ', u'่‘—ๅฎƒ': u'็€ๅฎƒ', u'่‘—ๅฏฆ': u'็€ๅฏฆ', u'่‘—ๅฟ™': u'็€ๅฟ™', u'่‘—ๆ€ฅ': u'็€ๆ€ฅ', u'่‘—ๆƒณ': u'็€ๆƒณ', u'่‘—ๆ„': u'็€ๆ„', u'่‘—ๆˆ‘': u'็€ๆˆ‘', u'่‘—ๆ‰‹': u'็€ๆ‰‹', u'่‘—ๆ•ธ': u'็€ๆ•ธ', u'่‘—ๆณ•': u'็€ๆณ•', u'่‘—ๆถผ': u'็€ๆถผ', u'่‘—็ซ': u'็€็ซ', u'่‘—็œผ': u'็€็œผ', u'่‘—็ฅ‚': u'็€็ฅ‚', u'่‘—็ญ†': u'็€็ญ†', u'่‘—็ตฒ': u'็€็ตฒ', u'่‘—็ทŠ': u'็€็ทŠ', u'่‘—่…ณ': u'็€่…ณ', u'่‘—่‰ฆ': u'็€่‰ฆ', u'่‘—่‰ฒ': u'็€่‰ฒ', u'่‘—่ฝ': u'็€่ฝ', u'่‘—่กฃ': u'็€่กฃ', u'่‘—่ฃ': 
u'็€่ฃ', u'่‘—่ฟท': u'็€่ฟท', u'่‘—้‡': u'็€้‡', u'่‘—้Œ„': u'็€้Œ„', u'่‘—้™ธ': u'็€้™ธ', u'่‘—้žญ': u'็€้žญ', u'็กไธ่‘—': u'็กไธ็€', u'็กๅพ—่‘—': u'็กๅพ—็€', u'็ก่‘—': u'็ก็€', u'็ก่‘—ไฝœ': u'็ก่‘—ไฝœ', u'็ก่‘—ๅ': u'็ก่‘—ๅ', u'็ก่‘—ๆ›ธ': u'็ก่‘—ๆ›ธ', u'็ก่‘—็จฑ': u'็ก่‘—็จฑ', u'็ก่‘—่€…': u'็ก่‘—่€…', u'็ก่‘—่ฟฐ': u'็ก่‘—่ฟฐ', u'็ก่‘—้Œ„': u'็ก่‘—้Œ„', u'็žž่‘—': u'็žž็€', u'็žž่‘—ไฝœ': u'็žž่‘—ไฝœ', u'็žž่‘—ๅ': u'็žž่‘—ๅ', u'็žž่‘—ๆ›ธ': u'็žž่‘—ๆ›ธ', u'็žž่‘—็จฑ': u'็žž่‘—็จฑ', u'็žž่‘—่€…': u'็žž่‘—่€…', u'็žž่‘—่ฟฐ': u'็žž่‘—่ฟฐ', u'็žž่‘—้Œ„': u'็žž่‘—้Œ„', u'็žช่‘—': u'็žช็€', u'็žช่‘—ไฝœ': u'็žช่‘—ไฝœ', u'็žช่‘—ๅ': u'็žช่‘—ๅ', u'็žช่‘—ๆ›ธ': u'็žช่‘—ๆ›ธ', u'็žช่‘—็จฑ': u'็žช่‘—็จฑ', u'็žช่‘—่€…': u'็žช่‘—่€…', u'็žช่‘—่ฟฐ': u'็žช่‘—่ฟฐ', u'็žช่‘—้Œ„': u'็žช่‘—้Œ„', u'็ฐก่จŠ': u'็Ÿญ่จŠ', u'็Ÿญไฟก': u'็Ÿญ่จŠ', u'็กฌไปถ': u'็กฌไปถ', u'็กฌ้ซ”': u'็กฌไปถ', u'็ฆๆ–ฏ': u'็ฆๅฃซ', u'็ฆ่‘—': u'็ฆ็€', u'็ฆ่‘—ไฝœ': u'็ฆ่‘—ไฝœ', u'็ฆ่‘—ๅ': u'็ฆ่‘—ๅ', u'็ฆ่‘—ๆ›ธ': u'็ฆ่‘—ๆ›ธ', u'็ฆ่‘—็จฑ': u'็ฆ่‘—็จฑ', u'็ฆ่‘—่€…': u'็ฆ่‘—่€…', u'็ฆ่‘—่ฟฐ': u'็ฆ่‘—่ฟฐ', u'็ฆ่‘—้Œ„': u'็ฆ่‘—้Œ„', u'็ง‹ๅ‡่ฃก': u'็ง‹ๅ‡่ฃ', u'็ง‹ๅ‡้‡Œ': u'็ง‹ๅ‡่ฃ', u'็ง‹ๅคฉ่ฃก': u'็ง‹ๅคฉ่ฃ', u'็ง‹ๅคฉ้‡Œ': u'็ง‹ๅคฉ่ฃ', u'็ง‹ๆ—ฅ้‡Œ': u'็ง‹ๆ—ฅ่ฃ', u'็ง‹ๆ—ฅ่ฃก': u'็ง‹ๆ—ฅ่ฃ', u'่‘›ๆ‘ฉ': u'็ง‘ๆ‘ฉ็พ…', u'ๆท่ฑน': u'็ฉๆžถ', u'็ฉบ่‘—': u'็ฉบ็€', u'็ฉบ่‘—ไฝœ': u'็ฉบ่‘—ไฝœ', u'็ฉบ่‘—ๅ': u'็ฉบ่‘—ๅ', u'็ฉบ่‘—ๆ›ธ': u'็ฉบ่‘—ๆ›ธ', u'็ฉบ่‘—็จฑ': u'็ฉบ่‘—็จฑ', u'็ฉบ่‘—่€…': u'็ฉบ่‘—่€…', u'็ฉบ่‘—่ฟฐ': u'็ฉบ่‘—่ฟฐ', u'็ฉบ่‘—้Œ„': u'็ฉบ่‘—้Œ„', u'ๅคช็ฉบๆขญ': u'็ฉฟๆขญๆฉŸ', u'่ˆชๅคฉ้ฃžๆœบ': u'็ฉฟๆขญๆฉŸ', u'็ฉฟ่‘—': u'็ฉฟ็€', u'็ฉฟ่‘—ไฝœ': u'็ฉฟ่‘—ไฝœ', u'็ฉฟ่‘—ๅ': u'็ฉฟ่‘—ๅ', u'็ฉฟ่‘—ๆ›ธ': u'็ฉฟ่‘—ๆ›ธ', u'็ฉฟ่‘—็จฑ': u'็ฉฟ่‘—็จฑ', u'็ฉฟ่‘—่€…': u'็ฉฟ่‘—่€…', u'็ฉฟ่‘—่ฟฐ': u'็ฉฟ่‘—่ฟฐ', u'็ฉฟ่‘—้Œ„': u'็ฉฟ่‘—้Œ„', u'็ซ™่‘—': u'็ซ™็€', u'็ซ™่‘—ไฝœ': u'็ซ™่‘—ไฝœ', u'็ซ™่‘—ๅ': u'็ซ™่‘—ๅ', u'็ซ™่‘—ๆ›ธ': u'็ซ™่‘—ๆ›ธ', u'็ซ™่‘—็จฑ': 
u'็ซ™่‘—็จฑ', u'็ซ™่‘—่€…': u'็ซ™่‘—่€…', u'็ซ™่‘—่ฟฐ': u'็ซ™่‘—่ฟฐ', u'็ซ™่‘—้Œ„': u'็ซ™่‘—้Œ„', u'็ฌ‘่‘—': u'็ฌ‘็€', u'็ฌ‘่‘—ไฝœ': u'็ฌ‘่‘—ไฝœ', u'็ฌ‘่‘—ๅ': u'็ฌ‘่‘—ๅ', u'็ฌ‘่‘—ๆ›ธ': u'็ฌ‘่‘—ๆ›ธ', u'็ฌ‘่‘—็จฑ': u'็ฌ‘่‘—็จฑ', u'็ฌ‘่‘—่€…': u'็ฌ‘่‘—่€…', u'็ฌ‘่‘—่ฟฐ': u'็ฌ‘่‘—่ฟฐ', u'็ฌ‘่‘—้Œ„': u'็ฌ‘่‘—้Œ„', u'็ฎก่‘—': u'็ฎก็€', u'็ฎก่‘—ไฝœ': u'็ฎก่‘—ไฝœ', u'็ฎก่‘—ๅ': u'็ฎก่‘—ๅ', u'็ฎก่‘—ๆ›ธ': u'็ฎก่‘—ๆ›ธ', u'็ฎก่‘—็จฑ': u'็ฎก่‘—็จฑ', u'็ฎก่‘—่€…': u'็ฎก่‘—่€…', u'็ฎก่‘—่ฟฐ': u'็ฎก่‘—่ฟฐ', u'็ฎก่‘—้Œ„': u'็ฎก่‘—้Œ„', u'่ฟˆๅ…‹ๅฐ”ยทๆฌงๆ–‡': u'็ฑณ้ซ˜ๅฅง้›ฒ', u'็ณปๅˆ—่ฃก': u'็ณปๅˆ—่ฃ', u'็ณปๅˆ—้‡Œ': u'็ณปๅˆ—่ฃ', u'็ดข้ฆฌๅˆฉไบž': u'็ดข้ฆฌ้‡Œ', u'็ดฎ่‘—': u'็ดฎ็€', u'็ดฎ่‘—ไฝœ': u'็ดฎ่‘—ไฝœ', u'็ดฎ่‘—ๅ': u'็ดฎ่‘—ๅ', u'็ดฎ่‘—ๆ›ธ': u'็ดฎ่‘—ๆ›ธ', u'็ดฎ่‘—็จฑ': u'็ดฎ่‘—็จฑ', u'็ดฎ่‘—่€…': u'็ดฎ่‘—่€…', u'็ดฎ่‘—่ฟฐ': u'็ดฎ่‘—่ฟฐ', u'็ดฎ่‘—้Œ„': u'็ดฎ่‘—้Œ„', u'็ถ่‘—': u'็ถ็€', u'็ถ่‘—ไฝœ': u'็ถ่‘—ไฝœ', u'็ถ่‘—ๅ': u'็ถ่‘—ๅ', u'็ถ่‘—ๆ›ธ': u'็ถ่‘—ๆ›ธ', u'็ถ่‘—็จฑ': u'็ถ่‘—็จฑ', u'็ถ่‘—่€…': u'็ถ่‘—่€…', u'็ถ่‘—่ฟฐ': u'็ถ่‘—่ฟฐ', u'็ถ่‘—้Œ„': u'็ถ่‘—้Œ„', u'็ถฒ่ทฏ': u'็ถฒ็ตก', u'็ทๅ‡ถ': u'็ทๅ…‡', u'็นž่‘—': u'็นž็€', u'็นž่‘—ไฝœ': u'็นž่‘—ไฝœ', u'็นž่‘—ๅ': u'็นž่‘—ๅ', u'็นž่‘—ๆ›ธ': u'็นž่‘—ๆ›ธ', u'็นž่‘—็จฑ': u'็นž่‘—็จฑ', u'็นž่‘—่€…': u'็นž่‘—่€…', u'็นž่‘—่ฟฐ': u'็นž่‘—่ฟฐ', u'็นž่‘—้Œ„': u'็นž่‘—้Œ„', u'็บ่‘—': u'็บ็€', u'็บ่‘—ไฝœ': u'็บ่‘—ไฝœ', u'็บ่‘—ๅ': u'็บ่‘—ๅ', u'็บ่‘—ๆ›ธ': u'็บ่‘—ๆ›ธ', u'็บ่‘—็จฑ': u'็บ่‘—็จฑ', u'็บ่‘—่€…': u'็บ่‘—่€…', u'็บ่‘—่ฟฐ': u'็บ่‘—่ฟฐ', u'็บ่‘—้Œ„': u'็บ่‘—้Œ„', u'็ฝฉ่‘—': u'็ฝฉ็€', u'็ฝฉ่‘—ไฝœ': u'็ฝฉ่‘—ไฝœ', u'็ฝฉ่‘—ๅ': u'็ฝฉ่‘—ๅ', u'็ฝฉ่‘—ๆ›ธ': u'็ฝฉ่‘—ๆ›ธ', u'็ฝฉ่‘—็จฑ': u'็ฝฉ่‘—็จฑ', u'็ฝฉ่‘—่€…': u'็ฝฉ่‘—่€…', u'็ฝฉ่‘—่ฟฐ': u'็ฝฉ่‘—่ฟฐ', u'็ฝฉ่‘—้Œ„': u'็ฝฉ่‘—้Œ„', u'็ฝต่‘—': u'็ฝต็€', u'็ฝต่‘—ไฝœ': u'็ฝต่‘—ไฝœ', u'็ฝต่‘—ๅ': u'็ฝต่‘—ๅ', u'็ฝต่‘—ๆ›ธ': u'็ฝต่‘—ๆ›ธ', u'็ฝต่‘—็จฑ': u'็ฝต่‘—็จฑ', u'็ฝต่‘—่€…': u'็ฝต่‘—่€…', u'็ฝต่‘—่ฟฐ': u'็ฝต่‘—่ฟฐ', u'็ฝต่‘—้Œ„': 
u'็ฝต่‘—้Œ„', u'็พŽ่‘—': u'็พŽ็€', u'็พŽ่‘—ไฝœ': u'็พŽ่‘—ไฝœ', u'็พŽ่‘—ๅ': u'็พŽ่‘—ๅ', u'็พŽ่‘—ๆ›ธ': u'็พŽ่‘—ๆ›ธ', u'็พŽ่‘—็จฑ': u'็พŽ่‘—็จฑ', u'็พŽ่‘—่€…': u'็พŽ่‘—่€…', u'็พŽ่‘—่ฟฐ': u'็พŽ่‘—่ฟฐ', u'็พŽ่‘—้Œ„': u'็พŽ่‘—้Œ„', u'่€€่‘—': u'่€€็€', u'่€€่‘—ไฝœ': u'่€€่‘—ไฝœ', u'่€€่‘—ๅ': u'่€€่‘—ๅ', u'่€€่‘—ๆ›ธ': u'่€€่‘—ๆ›ธ', u'่€€่‘—็จฑ': u'่€€่‘—็จฑ', u'่€€่‘—่€…': u'่€€่‘—่€…', u'่€€่‘—่ฟฐ': u'่€€่‘—่ฟฐ', u'่€€่‘—้Œ„': u'่€€่‘—้Œ„', u'ๅฏฎๅœ‹': u'่€ๆ’พ', u'่€ƒ่‘—': u'่€ƒ็€', u'่€ƒ่‘—ไฝœ': u'่€ƒ่‘—ไฝœ', u'่€ƒ่‘—ๅ': u'่€ƒ่‘—ๅ', u'่€ƒ่‘—ๆ›ธ': u'่€ƒ่‘—ๆ›ธ', u'่€ƒ่‘—็จฑ': u'่€ƒ่‘—็จฑ', u'่€ƒ่‘—่€…': u'่€ƒ่‘—่€…', u'่€ƒ่‘—่ฟฐ': u'่€ƒ่‘—่ฟฐ', u'่€ƒ่‘—้Œ„': u'่€ƒ่‘—้Œ„', u'ๅœฃๅŸบ่Œจๅ’Œๅฐผ็ปดๆ–ฏ': u'่–ๅ‰ๆ–ฏ็ดๅŸŸๆ–ฏ', u'่–ๅ…‹้‡Œๆ–ฏๅคš็ฆๅŠๅฐผ็ถญๆ–ฏ': u'่–ๅ‰ๆ–ฏ็ดๅŸŸๆ–ฏ', u'่–ๆ–‡ๆฃฎๅŠๆ ผ็‘ž้‚ฃไธ': u'่–ๆ–‡ๆฃฎ็‰นๅ’Œๆ ผๆž—็ดไธๆ–ฏ', u'่–้œฒ่ฅฟไบž': u'่–็›ง่ฅฟไบž', u'่–้ฆฌๅˆฉ่ซพ': u'่–้ฆฌๅŠ›่ซพ', u'่ฝไธ่‘—': u'่ฝไธ็€', u'่ฝๅพ—่‘—': u'่ฝๅพ—็€', u'่ฝ่‘—': u'่ฝ็€', u'่ฝ่‘—ไฝœ': u'่ฝ่‘—ไฝœ', u'่ฝ่‘—ๅ': u'่ฝ่‘—ๅ', u'่ฝ่‘—ๆ›ธ': u'่ฝ่‘—ๆ›ธ', u'่ฝ่‘—็จฑ': u'่ฝ่‘—็จฑ', u'่ฝ่‘—่€…': u'่ฝ่‘—่€…', u'่ฝ่‘—่ฟฐ': u'่ฝ่‘—่ฟฐ', u'่ฝ่‘—้Œ„': u'่ฝ่‘—้Œ„', u'่‚š้‡Œ': u'่‚š่ฃ', u'่‚š่ฃก': u'่‚š่ฃ', u'่‚ฏๅฐผไบš': u'่‚ฏ้›…', u'่‚ฏไบž': u'่‚ฏ้›…', u'่ƒŒ่‘—': u'่ƒŒ็€', u'่ƒŒ่‘—ไฝœ': u'่ƒŒ่‘—ไฝœ', u'่ƒŒ่‘—ๅ': u'่ƒŒ่‘—ๅ', u'่ƒŒ่‘—ๆ›ธ': u'่ƒŒ่‘—ๆ›ธ', u'่ƒŒ่‘—็จฑ': u'่ƒŒ่‘—็จฑ', u'่ƒŒ่‘—่€…': u'่ƒŒ่‘—่€…', u'่ƒŒ่‘—่ฟฐ': u'่ƒŒ่‘—่ฟฐ', u'่ƒŒ่‘—้Œ„': u'่ƒŒ่‘—้Œ„', u'่† ่‘—': u'่† ็€', u'่† ่‘—ไฝœ': u'่† ่‘—ไฝœ', u'่† ่‘—ๅ': u'่† ่‘—ๅ', u'่† ่‘—ๆ›ธ': u'่† ่‘—ๆ›ธ', u'่† ่‘—็จฑ': u'่† ่‘—็จฑ', u'่† ่‘—่€…': u'่† ่‘—่€…', u'่† ่‘—่ฟฐ': u'่† ่‘—่ฟฐ', u'่† ่‘—้Œ„': u'่† ่‘—้Œ„', u'่‡จ่‘—': u'่‡จ็€', u'่‡จ่‘—ไฝœ': u'่‡จ่‘—ไฝœ', u'่‡จ่‘—ๅ': u'่‡จ่‘—ๅ', u'่‡จ่‘—ๆ›ธ': u'่‡จ่‘—ๆ›ธ', u'่‡จ่‘—็จฑ': u'่‡จ่‘—็จฑ', u'่‡จ่‘—่€…': u'่‡จ่‘—่€…', u'่‡จ่‘—่ฟฐ': u'่‡จ่‘—่ฟฐ', u'่‡จ่‘—้Œ„': u'่‡จ่‘—้Œ„', u'่ˆ‡่‘—': u'่ˆ‡็€', u'่ˆ‡่‘—ไฝœ': u'่ˆ‡่‘—ไฝœ', 
u'่ˆ‡่‘—ๅ': u'่ˆ‡่‘—ๅ', u'่ˆ‡่‘—ๆ›ธ': u'่ˆ‡่‘—ๆ›ธ', u'่ˆ‡่‘—็จฑ': u'่ˆ‡่‘—็จฑ', u'่ˆ‡่‘—่€…': u'่ˆ‡่‘—่€…', u'่ˆ‡่‘—่ฟฐ': u'่ˆ‡่‘—่ฟฐ', u'่ˆ‡่‘—้Œ„': u'่ˆ‡่‘—้Œ„', u'่ฟˆๅ…‹ๅฐ”ยท่ˆ’้ฉฌ่ตซ': u'่ˆ’้บฅๅŠ ', u'่‹ฆ่‘—': u'่‹ฆ็€', u'่‹ฆ่‘—ไฝœ': u'่‹ฆ่‘—ไฝœ', u'่‹ฆ่‘—ๅ': u'่‹ฆ่‘—ๅ', u'่‹ฆ่‘—ๆ›ธ': u'่‹ฆ่‘—ๆ›ธ', u'่‹ฆ่‘—็จฑ': u'่‹ฆ่‘—็จฑ', u'่‹ฆ่‘—่€…': u'่‹ฆ่‘—่€…', u'่‹ฆ่‘—่ฟฐ': u'่‹ฆ่‘—่ฟฐ', u'่‹ฆ่‘—้Œ„': u'่‹ฆ่‘—้Œ„', u'่‹ฆ้‡Œ': u'่‹ฆ่ฃ', u'่‹ฆ่ฃก': u'่‹ฆ่ฃ', u'่Žซไธ‰ๆฏ”ๅ…‹': u'่Žซๆก‘ๆฏ”ๅ…‹', u'่ณด็ดขๆ‰˜': u'่Š็ดขๆ‰˜', u'้ฆฌ่‡ช้”': u'่ฌไบ‹ๅพ—', u'้ฉฌ่‡ช่พพ': u'่ฌไบ‹ๅพ—', u'่ฝ่‘—': u'่ฝ็€', u'่ฝ่‘—ไฝœ': u'่ฝ่‘—ไฝœ', u'่ฝ่‘—ๅ': u'่ฝ่‘—ๅ', u'่ฝ่‘—ๆ›ธ': u'่ฝ่‘—ๆ›ธ', u'่ฝ่‘—็จฑ': u'่ฝ่‘—็จฑ', u'่ฝ่‘—่€…': u'่ฝ่‘—่€…', u'่ฝ่‘—่ฟฐ': u'่ฝ่‘—่ฟฐ', u'่ฝ่‘—้Œ„': u'่ฝ่‘—้Œ„', u'่’™่‘—': u'่’™็€', u'่’™่‘—ไฝœ': u'่’™่‘—ไฝœ', u'่’™่‘—ๅ': u'่’™่‘—ๅ', u'่’™่‘—ๆ›ธ': u'่’™่‘—ๆ›ธ', u'่’™่‘—็จฑ': u'่’™่‘—็จฑ', u'่’™่‘—่€…': u'่’™่‘—่€…', u'่’™่‘—่ฟฐ': u'่’™่‘—่ฟฐ', u'่’™่‘—้Œ„': u'่’™่‘—้Œ„', u'่จ่พพๅง†': u'่–ฉ้”ๅง†', u'่—‰่‘—': u'่—‰็€', u'่—่‘—': u'่—็€', u'่—่‘—ไฝœ': u'่—่‘—ไฝœ', u'่—่‘—ๅ': u'่—่‘—ๅ', u'่—่‘—ๆ›ธ': u'่—่‘—ๆ›ธ', u'่—่‘—็จฑ': u'่—่‘—็จฑ', u'่—่‘—่€…': u'่—่‘—่€…', u'่—่‘—่ฟฐ': u'่—่‘—่ฟฐ', u'่—่‘—้Œ„': u'่—่‘—้Œ„', u'่—่‘—': u'่—็€', u'่—่‘—ไฝœ': u'่—่‘—ไฝœ', u'่—่‘—ๅ': u'่—่‘—ๅ', u'่—่‘—ๆ›ธ': u'่—่‘—ๆ›ธ', u'่—่‘—็จฑ': u'่—่‘—็จฑ', u'่—่‘—่€…': u'่—่‘—่€…', u'่—่‘—่ฟฐ': u'่—่‘—่ฟฐ', u'่—่‘—้Œ„': u'่—่‘—้Œ„', u'่˜ธ่‘—': u'่˜ธ็€', u'่˜ธ่‘—ไฝœ': u'่˜ธ่‘—ไฝœ', u'่˜ธ่‘—ๅ': u'่˜ธ่‘—ๅ', u'่˜ธ่‘—ๆ›ธ': u'่˜ธ่‘—ๆ›ธ', u'่˜ธ่‘—็จฑ': u'่˜ธ่‘—็จฑ', u'่˜ธ่‘—่€…': u'่˜ธ่‘—่€…', u'่˜ธ่‘—่ฟฐ': u'่˜ธ่‘—่ฟฐ', u'่˜ธ่‘—้Œ„': u'่˜ธ่‘—้Œ„', u'่กŒ่‘—': u'่กŒ็€', u'่กŒ่‘—ไฝœ': u'่กŒ่‘—ไฝœ', u'่กŒ่‘—ๅ': u'่กŒ่‘—ๅ', u'่กŒ่‘—ๆ›ธ': u'่กŒ่‘—ๆ›ธ', u'่กŒ่‘—็จฑ': u'่กŒ่‘—็จฑ', u'่กŒ่‘—่€…': u'่กŒ่‘—่€…', u'่กŒ่‘—่ฟฐ': u'่กŒ่‘—่ฟฐ', u'่กŒ่‘—้Œ„': u'่กŒ่‘—้Œ„', u'่ก›': u'่กž', u'่กฃ่‘—': u'่กฃ็€', u'่กฃ่‘—ไฝœ': 
u'่กฃ่‘—ไฝœ', u'่กฃ่‘—ๅ': u'่กฃ่‘—ๅ', u'่กฃ่‘—ๆ›ธ': u'่กฃ่‘—ๆ›ธ', u'่กฃ่‘—็จฑ': u'่กฃ่‘—็จฑ', u'่กฃ่‘—่€…': u'่กฃ่‘—่€…', u'่กฃ่‘—่ฟฐ': u'่กฃ่‘—่ฟฐ', u'่กฃ่‘—้Œ„': u'่กฃ่‘—้Œ„', u'่ฃกๅ‹พๅค–้€ฃ': u'่ฃๅ‹พๅค–้€ฃ', u'้‡Œๅ‹พๅค–่ฟž': u'่ฃๅ‹พๅค–้€ฃ', u'้‡Œ้ข': u'่ฃ้ข', u'่ฃก้ข': u'่ฃ้ข', u'่ฃ่‘—': u'่ฃ็€', u'่ฃ่‘—ไฝœ': u'่ฃ่‘—ไฝœ', u'่ฃ่‘—ๅ': u'่ฃ่‘—ๅ', u'่ฃ่‘—ๆ›ธ': u'่ฃ่‘—ๆ›ธ', u'่ฃ่‘—็จฑ': u'่ฃ่‘—็จฑ', u'่ฃ่‘—่€…': u'่ฃ่‘—่€…', u'่ฃ่‘—่ฟฐ': u'่ฃ่‘—่ฟฐ', u'่ฃ่‘—้Œ„': u'่ฃ่‘—้Œ„', u'่ฃน่‘—': u'่ฃน็€', u'่ฃน่‘—ไฝœ': u'่ฃน่‘—ไฝœ', u'่ฃน่‘—ๅ': u'่ฃน่‘—ๅ', u'่ฃน่‘—ๆ›ธ': u'่ฃน่‘—ๆ›ธ', u'่ฃน่‘—็จฑ': u'่ฃน่‘—็จฑ', u'่ฃน่‘—่€…': u'่ฃน่‘—่€…', u'่ฃน่‘—่ฟฐ': u'่ฃน่‘—่ฟฐ', u'่ฃน่‘—้Œ„': u'่ฃน่‘—้Œ„', u'่ฆ‹่‘—': u'่ฆ‹็€', u'่ฆ‹่‘—ไฝœ': u'่ฆ‹่‘—ไฝœ', u'่ฆ‹่‘—ๅ': u'่ฆ‹่‘—ๅ', u'่ฆ‹่‘—ๆ›ธ': u'่ฆ‹่‘—ๆ›ธ', u'่ฆ‹่‘—็จฑ': u'่ฆ‹่‘—็จฑ', u'่ฆ‹่‘—่€…': u'่ฆ‹่‘—่€…', u'่ฆ‹่‘—่ฟฐ': u'่ฆ‹่‘—่ฟฐ', u'่ฆ‹่‘—้Œ„': u'่ฆ‹่‘—้Œ„', u'่จ˜่‘—': u'่จ˜็€', u'่จ˜่‘—ไฝœ': u'่จ˜่‘—ไฝœ', u'่จ˜่‘—ๅ': u'่จ˜่‘—ๅ', u'่จ˜่‘—ๆ›ธ': u'่จ˜่‘—ๆ›ธ', u'่จ˜่‘—็จฑ': u'่จ˜่‘—็จฑ', u'่จ˜่‘—่€…': u'่จ˜่‘—่€…', u'่จ˜่‘—่ฟฐ': u'่จ˜่‘—่ฟฐ', u'่จ˜่‘—้Œ„': u'่จ˜่‘—้Œ„', u'่ฉฆ่‘—': u'่ฉฆ็€', u'่ฉฆ่‘—ไฝœ': u'่ฉฆ่‘—ไฝœ', u'่ฉฆ่‘—ๅ': u'่ฉฆ่‘—ๅ', u'่ฉฆ่‘—ๆ›ธ': u'่ฉฆ่‘—ๆ›ธ', u'่ฉฆ่‘—็จฑ': u'่ฉฆ่‘—็จฑ', u'่ฉฆ่‘—่€…': u'่ฉฆ่‘—่€…', u'่ฉฆ่‘—่ฟฐ': u'่ฉฆ่‘—่ฟฐ', u'่ฉฆ่‘—้Œ„': u'่ฉฆ่‘—้Œ„', u'่ชž่‘—': u'่ชž็€', u'่ชž่‘—ไฝœ': u'่ชž่‘—ไฝœ', u'่ชž่‘—ๅ': u'่ชž่‘—ๅ', u'่ชž่‘—ๆ›ธ': u'่ชž่‘—ๆ›ธ', u'่ชž่‘—็จฑ': u'่ชž่‘—็จฑ', u'่ชž่‘—่€…': u'่ชž่‘—่€…', u'่ชž่‘—่ฟฐ': u'่ชž่‘—่ฟฐ', u'่ชž่‘—้Œ„': u'่ชž่‘—้Œ„', u'ๆ•ธๆ“šๆฉŸ': u'่ชฟๅˆถ่งฃ่ชฟๅ™จ', u'่ฎŠ่‘—': u'่ฎŠ็€', u'่ฎŠ่‘—ไฝœ': u'่ฎŠ่‘—ไฝœ', u'่ฎŠ่‘—ๅ': u'่ฎŠ่‘—ๅ', u'่ฎŠ่‘—ๆ›ธ': u'่ฎŠ่‘—ๆ›ธ', u'่ฎŠ่‘—็จฑ': u'่ฎŠ่‘—็จฑ', u'่ฎŠ่‘—่€…': u'่ฎŠ่‘—่€…', u'่ฎŠ่‘—่ฟฐ': u'่ฎŠ่‘—่ฟฐ', u'่ฎŠ่‘—้Œ„': u'่ฎŠ่‘—้Œ„', u'่ฑŽ่‘—': u'่ฑŽ็€', u'่ฑŽ่‘—ไฝœ': u'่ฑŽ่‘—ไฝœ', u'่ฑŽ่‘—ๅ': u'่ฑŽ่‘—ๅ', u'่ฑŽ่‘—ๆ›ธ': u'่ฑŽ่‘—ๆ›ธ', u'่ฑŽ่‘—็จฑ': u'่ฑŽ่‘—็จฑ', 
u'่ฑŽ่‘—่€…': u'่ฑŽ่‘—่€…', u'่ฑŽ่‘—่ฟฐ': u'่ฑŽ่‘—่ฟฐ', u'่ฑŽ่‘—้Œ„': u'่ฑŽ่‘—้Œ„', u'่ฑซ่‘—': u'่ฑซ็€', u'่ฑซ่‘—ไฝœ': u'่ฑซ่‘—ไฝœ', u'่ฑซ่‘—ๅ': u'่ฑซ่‘—ๅ', u'่ฑซ่‘—ๆ›ธ': u'่ฑซ่‘—ๆ›ธ', u'่ฑซ่‘—็จฑ': u'่ฑซ่‘—็จฑ', u'่ฑซ่‘—่€…': u'่ฑซ่‘—่€…', u'่ฑซ่‘—่ฟฐ': u'่ฑซ่‘—่ฟฐ', u'่ฑซ่‘—้Œ„': u'่ฑซ่‘—้Œ„', u'่ฒๅ—': u'่ฒๅฏง', u'่ฒž่‘—': u'่ฒž็€', u'่ฒž่‘—ไฝœ': u'่ฒž่‘—ไฝœ', u'่ฒž่‘—ๅ': u'่ฒž่‘—ๅ', u'่ฒž่‘—ๆ›ธ': u'่ฒž่‘—ๆ›ธ', u'่ฒž่‘—็จฑ': u'่ฒž่‘—็จฑ', u'่ฒž่‘—่€…': u'่ฒž่‘—่€…', u'่ฒž่‘—่ฟฐ': u'่ฒž่‘—่ฟฐ', u'่ฒž่‘—้Œ„': u'่ฒž่‘—้Œ„', u'่ฒทๅ‡ถ': u'่ฒทๅ…‡', u'ๅฐšๆฏ”ไบž': u'่ดŠๆฏ”ไบž', u'่ตฐ่‘—': u'่ตฐ็€', u'่ตฐ่‘—ไฝœ': u'่ตฐ่‘—ไฝœ', u'่ตฐ่‘—ๅ': u'่ตฐ่‘—ๅ', u'่ตฐ่‘—ๆ›ธ': u'่ตฐ่‘—ๆ›ธ', u'่ตฐ่‘—็จฑ': u'่ตฐ่‘—็จฑ', u'่ตฐ่‘—่€…': u'่ตฐ่‘—่€…', u'่ตฐ่‘—่ฟฐ': u'่ตฐ่‘—่ฟฐ', u'่ตฐ่‘—้Œ„': u'่ตฐ่‘—้Œ„', u'่ถ•่‘—': u'่ถ•็€', u'่ถ•่‘—ไฝœ': u'่ถ•่‘—ไฝœ', u'่ถ•่‘—ๅ': u'่ถ•่‘—ๅ', u'่ถ•่‘—ๆ›ธ': u'่ถ•่‘—ๆ›ธ', u'่ถ•่‘—็จฑ': u'่ถ•่‘—็จฑ', u'่ถ•่‘—่€…': u'่ถ•่‘—่€…', u'่ถ•่‘—่ฟฐ': u'่ถ•่‘—่ฟฐ', u'่ถ•่‘—้Œ„': u'่ถ•่‘—้Œ„', u'่ถด่‘—': u'่ถด็€', u'่ถด่‘—ไฝœ': u'่ถด่‘—ไฝœ', u'่ถด่‘—ๅ': u'่ถด่‘—ๅ', u'่ถด่‘—ๆ›ธ': u'่ถด่‘—ๆ›ธ', u'่ถด่‘—็จฑ': u'่ถด่‘—็จฑ', u'่ถด่‘—่€…': u'่ถด่‘—่€…', u'่ถด่‘—่ฟฐ': u'่ถด่‘—่ฟฐ', u'่ถด่‘—้Œ„': u'่ถด่‘—้Œ„', u'่ท‘่‘—': u'่ท‘็€', u'่ท‘่‘—ไฝœ': u'่ท‘่‘—ไฝœ', u'่ท‘่‘—ๅ': u'่ท‘่‘—ๅ', u'่ท‘่‘—ๆ›ธ': u'่ท‘่‘—ๆ›ธ', u'่ท‘่‘—็จฑ': u'่ท‘่‘—็จฑ', u'่ท‘่‘—่€…': u'่ท‘่‘—่€…', u'่ท‘่‘—่ฟฐ': u'่ท‘่‘—่ฟฐ', u'่ท‘่‘—้Œ„': u'่ท‘่‘—้Œ„', u'่ทŸ่‘—': u'่ทŸ็€', u'่ทŸ่‘—ไฝœ': u'่ทŸ่‘—ไฝœ', u'่ทŸ่‘—ๅ': u'่ทŸ่‘—ๅ', u'่ทŸ่‘—ๆ›ธ': u'่ทŸ่‘—ๆ›ธ', u'่ทŸ่‘—็จฑ': u'่ทŸ่‘—็จฑ', u'่ทŸ่‘—่€…': u'่ทŸ่‘—่€…', u'่ทŸ่‘—่ฟฐ': u'่ทŸ่‘—่ฟฐ', u'่ทŸ่‘—้Œ„': u'่ทŸ่‘—้Œ„', u'่ทช่‘—': u'่ทช็€', u'่ทช่‘—ไฝœ': u'่ทช่‘—ไฝœ', u'่ทช่‘—ๅ': u'่ทช่‘—ๅ', u'่ทช่‘—ๆ›ธ': u'่ทช่‘—ๆ›ธ', u'่ทช่‘—็จฑ': u'่ทช่‘—็จฑ', u'่ทช่‘—่€…': u'่ทช่‘—่€…', u'่ทช่‘—่ฟฐ': u'่ทช่‘—่ฟฐ', u'่ทช่‘—้Œ„': u'่ทช่‘—้Œ„', u'่ทณ่‘—': u'่ทณ็€', u'่ทณ่‘—ไฝœ': u'่ทณ่‘—ไฝœ', u'่ทณ่‘—ๅ': u'่ทณ่‘—ๅ', u'่ทณ่‘—ๆ›ธ': 
u'่ทณ่‘—ๆ›ธ', u'่ทณ่‘—็จฑ': u'่ทณ่‘—็จฑ', u'่ทณ่‘—่€…': u'่ทณ่‘—่€…', u'่ทณ่‘—่ฟฐ': u'่ทณ่‘—่ฟฐ', u'่ทณ่‘—้Œ„': u'่ทณ่‘—้Œ„', u'่ธ่‘—': u'่ธ็€', u'่ธ่‘—ไฝœ': u'่ธ่‘—ไฝœ', u'่ธ่‘—ๅ': u'่ธ่‘—ๅ', u'่ธ่‘—็จฑ': u'่ธ่‘—็จฑ', u'่ธ่‘—่€…': u'่ธ่‘—่€…', u'่ธ่‘—่ฟฐ': u'่ธ่‘—่ฟฐ', u'่ธ่‘—้Œ„': u'่ธ่‘—้Œ„', u'่ธฉ่‘—': u'่ธฉ็€', u'่ธฉ่‘—ไฝœ': u'่ธฉ่‘—ไฝœ', u'่ธฉ่‘—ๅ': u'่ธฉ่‘—ๅ', u'่ธฉ่‘—ๆ›ธ': u'่ธฉ่‘—ๆ›ธ', u'่ธฉ่‘—็จฑ': u'่ธฉ่‘—็จฑ', u'่ธฉ่‘—่€…': u'่ธฉ่‘—่€…', u'่ธฉ่‘—่ฟฐ': u'่ธฉ่‘—่ฟฐ', u'่ธฉ่‘—้Œ„': u'่ธฉ่‘—้Œ„', u'่บ่‘—': u'่บ็€', u'่บ่‘—ไฝœ': u'่บ่‘—ไฝœ', u'่บ่‘—ๅ': u'่บ่‘—ๅ', u'่บ่‘—ๆ›ธ': u'่บ่‘—ๆ›ธ', u'่บ่‘—็จฑ': u'่บ่‘—็จฑ', u'่บ่‘—่€…': u'่บ่‘—่€…', u'่บ่‘—่ฟฐ': u'่บ่‘—่ฟฐ', u'่บ่‘—้Œ„': u'่บ่‘—้Œ„', u'่บซ่‘—': u'่บซ็€', u'่บซ่‘—ไฝœ': u'่บซ่‘—ไฝœ', u'่บซ่‘—ๅ': u'่บซ่‘—ๅ', u'่บซ่‘—ๆ›ธ': u'่บซ่‘—ๆ›ธ', u'่บซ่‘—็จฑ': u'่บซ่‘—็จฑ', u'่บซ่‘—่€…': u'่บซ่‘—่€…', u'่บซ่‘—่ฟฐ': u'่บซ่‘—่ฟฐ', u'่บซ่‘—้Œ„': u'่บซ่‘—้Œ„', u'่บบ่‘—': u'่บบ็€', u'่บบ่‘—ไฝœ': u'่บบ่‘—ไฝœ', u'่บบ่‘—ๅ': u'่บบ่‘—ๅ', u'่บบ่‘—ๆ›ธ': u'่บบ่‘—ๆ›ธ', u'่บบ่‘—็จฑ': u'่บบ่‘—็จฑ', u'่บบ่‘—่€…': u'่บบ่‘—่€…', u'่บบ่‘—่ฟฐ': u'่บบ่‘—่ฟฐ', u'่บบ่‘—้Œ„': u'่บบ่‘—้Œ„', u'่ปŸ้ซ”': u'่ปŸไปถ', u'่ผ‰่‘—': u'่ผ‰็€', u'่ผ‰่‘—ไฝœ': u'่ผ‰่‘—ไฝœ', u'่ผ‰่‘—ๅ': u'่ผ‰่‘—ๅ', u'่ผ‰่‘—ๆ›ธ': u'่ผ‰่‘—ๆ›ธ', u'่ผ‰่‘—็จฑ': u'่ผ‰่‘—็จฑ', u'่ผ‰่‘—่€…': u'่ผ‰่‘—่€…', u'่ผ‰่‘—่ฟฐ': u'่ผ‰่‘—่ฟฐ', u'่ผ‰่‘—้Œ„': u'่ผ‰่‘—้Œ„', u'่ฝ‰่‘—': u'่ฝ‰็€', u'่ฝ‰่‘—ไฝœ': u'่ฝ‰่‘—ไฝœ', u'่ฝ‰่‘—ๅ': u'่ฝ‰่‘—ๅ', u'่ฝ‰่‘—ๆ›ธ': u'่ฝ‰่‘—ๆ›ธ', u'่ฝ‰่‘—็จฑ': u'่ฝ‰่‘—็จฑ', u'่ฝ‰่‘—่€…': u'่ฝ‰่‘—่€…', u'่ฝ‰่‘—่ฟฐ': u'่ฝ‰่‘—่ฟฐ', u'่ฝ‰่‘—้Œ„': u'่ฝ‰่‘—้Œ„', u'่พฆ่‘—': u'่พฆ็€', u'่พฆ่‘—ไฝœ': u'่พฆ่‘—ไฝœ', u'่พฆ่‘—ๅ': u'่พฆ่‘—ๅ', u'่พฆ่‘—ๆ›ธ': u'่พฆ่‘—ๆ›ธ', u'่พฆ่‘—็จฑ': u'่พฆ่‘—็จฑ', u'่พฆ่‘—่€…': u'่พฆ่‘—่€…', u'่พฆ่‘—่ฟฐ': u'่พฆ่‘—่ฟฐ', u'่พฆ่‘—้Œ„': u'่พฆ่‘—้Œ„', u'่ฟ‘่ง’่ชไฟก': u'่ฟ‘่ง’่ฐไฟก', u'่ฟ‘่ง’่ฐไฟก': u'่ฟ‘่ง’่ฐไฟก', u'่ฟซ่‘—': u'่ฟซ็€', u'่ฟฝ่‘—': u'่ฟฝ็€', u'่ฟฝ่‘—ไฝœ': 
u'่ฟฝ่‘—ไฝœ', u'่ฟฝ่‘—ๅ': u'่ฟฝ่‘—ๅ', u'่ฟฝ่‘—ๆ›ธ': u'่ฟฝ่‘—ๆ›ธ', u'่ฟฝ่‘—็จฑ': u'่ฟฝ่‘—็จฑ', u'่ฟฝ่‘—่€…': u'่ฟฝ่‘—่€…', u'่ฟฝ่‘—่ฟฐ': u'่ฟฝ่‘—่ฟฐ', u'่ฟฝ่‘—้Œ„': u'่ฟฝ่‘—้Œ„', u'้€†่‘—': u'้€†็€', u'้€†่‘—ไฝœ': u'้€†่‘—ไฝœ', u'้€†่‘—ๅ': u'้€†่‘—ๅ', u'้€†่‘—ๆ›ธ': u'้€†่‘—ๆ›ธ', u'้€†่‘—็จฑ': u'้€†่‘—็จฑ', u'้€†่‘—่€…': u'้€†่‘—่€…', u'้€†่‘—่ฟฐ': u'้€†่‘—่ฟฐ', u'้€†่‘—้Œ„': u'้€†่‘—้Œ„', u'้€™้‡Œ': u'้€™่ฃ', u'้€™่ฃก': u'้€™่ฃ', u'้€ฃ่‘—': u'้€ฃ็€', u'้€ฃ่‘—ไฝœ': u'้€ฃ่‘—ไฝœ', u'้€ฃ่‘—ๅ': u'้€ฃ่‘—ๅ', u'้€ฃ่‘—ๆ›ธ': u'้€ฃ่‘—ๆ›ธ', u'้€ฃ่‘—็จฑ': u'้€ฃ่‘—็จฑ', u'้€ฃ่‘—่€…': u'้€ฃ่‘—่€…', u'้€ฃ่‘—่ฟฐ': u'้€ฃ่‘—่ฟฐ', u'้€ฃ่‘—้Œ„': u'้€ฃ่‘—้Œ„', u'้€ผ่‘—': u'้€ผ็€', u'้€ผ่‘—ไฝœ': u'้€ผ่‘—ไฝœ', u'้€ผ่‘—ๅ': u'้€ผ่‘—ๅ', u'้€ผ่‘—ๆ›ธ': u'้€ผ่‘—ๆ›ธ', u'้€ผ่‘—็จฑ': u'้€ผ่‘—็จฑ', u'้€ผ่‘—่€…': u'้€ผ่‘—่€…', u'้€ผ่‘—่ฟฐ': u'้€ผ่‘—่ฟฐ', u'้€ผ่‘—้Œ„': u'้€ผ่‘—้Œ„', u'้‡่‘—': u'้‡็€', u'้‡่‘—ไฝœ': u'้‡่‘—ไฝœ', u'้‡่‘—ๅ': u'้‡่‘—ๅ', u'้‡่‘—ๆ›ธ': u'้‡่‘—ๆ›ธ', u'้‡่‘—็จฑ': u'้‡่‘—็จฑ', u'้‡่‘—่€…': u'้‡่‘—่€…', u'้‡่‘—่ฟฐ': u'้‡่‘—่ฟฐ', u'้‡่‘—้Œ„': u'้‡่‘—้Œ„', u'้”่‘—': u'้”็€', u'้”่‘—ไฝœ': u'้”่‘—ไฝœ', u'้”่‘—ๅ': u'้”่‘—ๅ', u'้”่‘—ๆ›ธ': u'้”่‘—ๆ›ธ', u'้”่‘—็จฑ': u'้”่‘—็จฑ', u'้”่‘—่€…': u'้”่‘—่€…', u'้”่‘—่ฟฐ': u'้”่‘—่ฟฐ', u'้”่‘—้Œ„': u'้”่‘—้Œ„', u'้ ่‘—': u'้ ็€', u'้ ่‘—ไฝœ': u'้ ่‘—ไฝœ', u'้ ่‘—ๅ': u'้ ่‘—ๅ', u'้ ่‘—ๆ›ธ': u'้ ่‘—ๆ›ธ', u'้ ่‘—็จฑ': u'้ ่‘—็จฑ', u'้ ่‘—่€…': u'้ ่‘—่€…', u'้ ่‘—่ฟฐ': u'้ ่‘—่ฟฐ', u'้ ่‘—้Œ„': u'้ ่‘—้Œ„', u'้…่‘—': u'้…็€', u'้…่‘—ไฝœ': u'้…่‘—ไฝœ', u'้…่‘—ๅ': u'้…่‘—ๅ', u'้…่‘—ๆ›ธ': u'้…่‘—ๆ›ธ', u'้…่‘—็จฑ': u'้…่‘—็จฑ', u'้…่‘—่€…': u'้…่‘—่€…', u'้…่‘—่ฟฐ': u'้…่‘—่ฟฐ', u'้…่‘—้Œ„': u'้…่‘—้Œ„', u'้†ฏ': u'้…ฐ', u'้†œ่‘—': u'้†œ็€', u'้†œ่‘—ไฝœ': u'้†œ่‘—ไฝœ', u'้†œ่‘—ๅ': u'้†œ่‘—ๅ', u'้†œ่‘—ๆ›ธ': u'้†œ่‘—ๆ›ธ', u'้†œ่‘—็จฑ': u'้†œ่‘—็จฑ', u'้†œ่‘—่€…': u'้†œ่‘—่€…', u'้†œ่‘—่ฟฐ': u'้†œ่‘—่ฟฐ', u'้†œ่‘—้Œ„': u'้†œ่‘—้Œ„', u'้†ซ้™ข่ฃก': 
u'้†ซ้™ข่ฃ', u'้†ฏๅฃบ': u'้†ฏๅฃบ', u'้†ฏๅฃถ': u'้†ฏๅฃบ', u'้†ฏ้†‹': u'้†ฏ้†‹', u'้†ฏ้†ข': u'้†ฏ้†ข', u'้†ฏ้†ฌ': u'้†ฏ้†ฌ', u'้†ฏ้…ฑ': u'้†ฏ้†ฌ', u'้†ฏ้ธก': u'้†ฏ้›ž', u'้†ฏ้›ž': u'้†ฏ้›ž', u'้‡€่‘—': u'้‡€็€', u'้‡€่‘—ไฝœ': u'้‡€่‘—ไฝœ', u'้‡€่‘—ๅ': u'้‡€่‘—ๅ', u'้‡€่‘—ๆ›ธ': u'้‡€่‘—ๆ›ธ', u'้‡€่‘—็จฑ': u'้‡€่‘—็จฑ', u'้‡€่‘—่€…': u'้‡€่‘—่€…', u'้‡€่‘—่ฟฐ': u'้‡€่‘—่ฟฐ', u'้‡€่‘—้Œ„': u'้‡€่‘—้Œ„', u'้‰ค': u'้ˆŽ', u'้‰คๅฟƒ้ฌฅ่ง’': u'้ˆŽๅฟƒ้ฌฅ่ง’', u'้‹ช่‘—': u'้‹ช็€', u'้‹ช่‘—ไฝœ': u'้‹ช่‘—ไฝœ', u'้‹ช่‘—ๅ': u'้‹ช่‘—ๅ', u'้‹ช่‘—ๆ›ธ': u'้‹ช่‘—ๆ›ธ', u'้‹ช่‘—็จฑ': u'้‹ช่‘—็จฑ', u'้‹ช่‘—่€…': u'้‹ช่‘—่€…', u'้‹ช่‘—่ฟฐ': u'้‹ช่‘—่ฟฐ', u'้‹ช่‘—้Œ„': u'้‹ช่‘—้Œ„', u'้–‰่‘—': u'้–‰็€', u'้–‰่‘—ไฝœ': u'้–‰่‘—ไฝœ', u'้–‰่‘—ๅ': u'้–‰่‘—ๅ', u'้–‰่‘—ๆ›ธ': u'้–‰่‘—ๆ›ธ', u'้–‰่‘—็จฑ': u'้–‰่‘—็จฑ', u'้–‰่‘—่€…': u'้–‰่‘—่€…', u'้–‰่‘—่ฟฐ': u'้–‰่‘—่ฟฐ', u'้–‰่‘—้Œ„': u'้–‰่‘—้Œ„', u'้–‹่‘—': u'้–‹็€', u'้–‹่‘—ไฝœ': u'้–‹่‘—ไฝœ', u'้–‹่‘—ๅ': u'้–‹่‘—ๅ', u'้–‹่‘—ๆ›ธ': u'้–‹่‘—ๆ›ธ', u'้–‹่‘—็จฑ': u'้–‹่‘—็จฑ', u'้–‹่‘—่€…': u'้–‹่‘—่€…', u'้–‹่‘—่ฟฐ': u'้–‹่‘—่ฟฐ', u'้–‹่‘—้Œ„': u'้–‹่‘—้Œ„', u'้–‘่‘—': u'้–‘็€', u'้–‘่‘—ไฝœ': u'้–‘่‘—ไฝœ', u'้–‘่‘—ๅ': u'้–‘่‘—ๅ', u'้–‘่‘—ๆ›ธ': u'้–‘่‘—ๆ›ธ', u'้–‘่‘—็จฑ': u'้–‘่‘—็จฑ', u'้–‘่‘—่€…': u'้–‘่‘—่€…', u'้–‘่‘—่ฟฐ': u'้–‘่‘—่ฟฐ', u'้–‘่‘—้Œ„': u'้–‘่‘—้Œ„', u'้—œ่‘—': u'้—œ็€', u'้—œ่‘—ไฝœ': u'้—œ่‘—ไฝœ', u'้—œ่‘—ๅ': u'้—œ่‘—ๅ', u'้—œ่‘—ๆ›ธ': u'้—œ่‘—ๆ›ธ', u'้—œ่‘—็จฑ': u'้—œ่‘—็จฑ', u'้—œ่‘—่€…': u'้—œ่‘—่€…', u'้—œ่‘—่ฟฐ': u'้—œ่‘—่ฟฐ', u'้—œ่‘—้Œ„': u'้—œ่‘—้Œ„', u'่žไธ่‘—': u'้—ปไธ็€', u'่žๅพ—่‘—': u'้—ปๅพ—็€', u'่ž่‘—': u'้—ป็€', u'ไบžๅกžๆ‹œ็„ถ': u'้˜ฟๅกžๆ‹œ็–†', u'้˜ฟๆ‹‰ไผฏ่ฏๅˆๅคงๅ…ฌๅœ‹': u'้˜ฟๆ‹‰ไผฏ่ฏๅˆ้…‹้•ทๅœ‹', u'้™„่‘—': u'้™„็€', u'้™„่‘—ไฝœ': u'้™„่‘—ไฝœ', u'้™„่‘—ๅ': u'้™„่‘—ๅ', u'้™„่‘—ๆ›ธ': u'้™„่‘—ๆ›ธ', u'้™„่‘—็จฑ': u'้™„่‘—็จฑ', u'้™„่‘—่€…': u'้™„่‘—่€…', u'้™„่‘—่ฟฐ': u'้™„่‘—่ฟฐ', u'้™„่‘—้Œ„': u'้™„่‘—้Œ„', u'้™‹่‘—': u'้™‹็€', u'้™‹่‘—ไฝœ': u'้™‹่‘—ไฝœ', u'้™‹่‘—ๅ': 
u'้™‹่‘—ๅ', u'้™‹่‘—ๆ›ธ': u'้™‹่‘—ๆ›ธ', u'้™‹่‘—็จฑ': u'้™‹่‘—็จฑ', u'้™‹่‘—่€…': u'้™‹่‘—่€…', u'้™‹่‘—่ฟฐ': u'้™‹่‘—่ฟฐ', u'้™‹่‘—้Œ„': u'้™‹่‘—้Œ„', u'้™ช่‘—': u'้™ช็€', u'้™ช่‘—ไฝœ': u'้™ช่‘—ไฝœ', u'้™ช่‘—ๅ': u'้™ช่‘—ๅ', u'้™ช่‘—ๆ›ธ': u'้™ช่‘—ๆ›ธ', u'้™ช่‘—็จฑ': u'้™ช่‘—็จฑ', u'้™ช่‘—่€…': u'้™ช่‘—่€…', u'้™ช่‘—่ฟฐ': u'้™ช่‘—่ฟฐ', u'้™ช่‘—้Œ„': u'้™ช่‘—้Œ„', u'้š”่‘—': u'้š”็€', u'้š”่‘—ไฝœ': u'้š”่‘—ไฝœ', u'้š”่‘—ๅ': u'้š”่‘—ๅ', u'้š”่‘—ๆ›ธ': u'้š”่‘—ๆ›ธ', u'้š”่‘—็จฑ': u'้š”่‘—็จฑ', u'้š”่‘—่€…': u'้š”่‘—่€…', u'้š”่‘—่ฟฐ': u'้š”่‘—่ฟฐ', u'้š”่‘—้Œ„': u'้š”่‘—้Œ„', u'้šจ่‘—': u'้šจ็€', u'้šจ่‘—ไฝœ': u'้šจ่‘—ไฝœ', u'้šจ่‘—ๅ': u'้šจ่‘—ๅ', u'้šจ่‘—ๆ›ธ': u'้šจ่‘—ๆ›ธ', u'้šจ่‘—็จฑ': u'้šจ่‘—็จฑ', u'้šจ่‘—่€…': u'้šจ่‘—่€…', u'้šจ่‘—่ฟฐ': u'้šจ่‘—่ฟฐ', u'้šจ่‘—้Œ„': u'้šจ่‘—้Œ„', u'้›…่‘—': u'้›…็€', u'้›…่‘—ไฝœ': u'้›…่‘—ไฝœ', u'้›…่‘—ๅ': u'้›…่‘—ๅ', u'้›…่‘—ๆ›ธ': u'้›…่‘—ๆ›ธ', u'้›…่‘—็จฑ': u'้›…่‘—็จฑ', u'้›…่‘—่€…': u'้›…่‘—่€…', u'้›…่‘—่ฟฐ': u'้›…่‘—่ฟฐ', u'้›…่‘—้Œ„': u'้›…่‘—้Œ„', u'้›œ่‘—': u'้›œ็€', u'้›œ่‘—ไฝœ': u'้›œ่‘—ไฝœ', u'้›œ่‘—ๅ': u'้›œ่‘—ๅ', u'้›œ่‘—ๆ›ธ': u'้›œ่‘—ๆ›ธ', u'้›œ่‘—็จฑ': u'้›œ่‘—็จฑ', u'้›œ่‘—่€…': u'้›œ่‘—่€…', u'้›œ่‘—่ฟฐ': u'้›œ่‘—่ฟฐ', u'้›œ่‘—้Œ„': u'้›œ่‘—้Œ„', u'ๅ†ฐๆท‡ๆท‹': u'้›ช็ณ•', u'้›ช้‡Œ็บข': u'้›ช่ฃ็ด…', u'้›ช่ฃก็ด…': u'้›ช่ฃ็ด…', u'้›ช่ฃก่•ป': u'้›ช่ฃ่•ป', u'้›ช้‡Œ่•ป': u'้›ช่ฃ่•ป', u'้ ่‘—': u'้ ็€', u'้ ่‘—ไฝœ': u'้ ่‘—ไฝœ', u'้ ่‘—ๅ': u'้ ่‘—ๅ', u'้ ่‘—็จฑ': u'้ ่‘—็จฑ', u'้ ่‘—็งฐ': u'้ ่‘—็จฑ', u'้ ่‘—่€…': u'้ ่‘—่€…', u'้ ่‘—่ฟฐ': u'้ ่‘—่ฟฐ', u'้ ่‘—้Œ„': u'้ ่‘—้Œ„', u'้ ่‘—ๅฝ•': u'้ ่‘—้Œ„', u'้Ÿฟ่‘—': u'้Ÿฟ็€', u'้Ÿฟ่‘—ไฝœ': u'้Ÿฟ่‘—ไฝœ', u'้Ÿฟ่‘—ๅ': u'้Ÿฟ่‘—ๅ', u'้Ÿฟ่‘—ๆ›ธ': u'้Ÿฟ่‘—ๆ›ธ', u'้Ÿฟ่‘—็จฑ': u'้Ÿฟ่‘—็จฑ', u'้Ÿฟ่‘—่€…': u'้Ÿฟ่‘—่€…', u'้Ÿฟ่‘—่ฟฐ': u'้Ÿฟ่‘—่ฟฐ', u'้Ÿฟ่‘—้Œ„': u'้Ÿฟ่‘—้Œ„', u'้ ‚่‘—': u'้ ‚็€', u'้ ‚่‘—ไฝœ': u'้ ‚่‘—ไฝœ', u'้ ‚่‘—ๅ': u'้ ‚่‘—ๅ', u'้ ‚่‘—ๆ›ธ': u'้ ‚่‘—ๆ›ธ', u'้ ‚่‘—็จฑ': u'้ ‚่‘—็จฑ', u'้ ‚่‘—่€…': u'้ 
‚่‘—่€…', u'้ ‚่‘—่ฟฐ': u'้ ‚่‘—่ฟฐ', u'้ ‚่‘—้Œ„': u'้ ‚่‘—้Œ„', u'้ †่‘—': u'้ †็€', u'้ †่‘—ไฝœ': u'้ †่‘—ไฝœ', u'้ †่‘—ๅ': u'้ †่‘—ๅ', u'้ †่‘—ๆ›ธ': u'้ †่‘—ๆ›ธ', u'้ †่‘—็จฑ': u'้ †่‘—็จฑ', u'้ †่‘—่€…': u'้ †่‘—่€…', u'้ †่‘—่ฟฐ': u'้ †่‘—่ฟฐ', u'้ †่‘—้Œ„': u'้ †่‘—้Œ„', u'้ ’ๅธƒ': u'้ ’ไฝˆ', u'้ขๅธƒ': u'้ ’ไฝˆ', u'้ ˜ๅŸŸ่ฃก': u'้ ˜ๅŸŸ่ฃ', u'้ข†ๅŸŸ้‡Œ': u'้ ˜ๅŸŸ่ฃ', u'้ ˜่‘—': u'้ ˜็€', u'้ ˜่‘—ไฝœ': u'้ ˜่‘—ไฝœ', u'้ ˜่‘—ๅ': u'้ ˜่‘—ๅ', u'้ ˜่‘—ๆ›ธ': u'้ ˜่‘—ๆ›ธ', u'้ ˜่‘—็จฑ': u'้ ˜่‘—็จฑ', u'้ ˜่‘—่€…': u'้ ˜่‘—่€…', u'้ ˜่‘—่ฟฐ': u'้ ˜่‘—่ฟฐ', u'้ ˜่‘—้Œ„': u'้ ˜่‘—้Œ„', u'้ฃ„่‘—': u'้ฃ„็€', u'้ฃ„่‘—ไฝœ': u'้ฃ„่‘—ไฝœ', u'้ฃ„่‘—ๅ': u'้ฃ„่‘—ๅ', u'้ฃ„่‘—ๆ›ธ': u'้ฃ„่‘—ๆ›ธ', u'้ฃ„่‘—็จฑ': u'้ฃ„่‘—็จฑ', u'้ฃ„่‘—่€…': u'้ฃ„่‘—่€…', u'้ฃ„่‘—่ฟฐ': u'้ฃ„่‘—่ฟฐ', u'้ฃ„่‘—้Œ„': u'้ฃ„่‘—้Œ„', u'้คจ่ฃก': u'้คจ่ฃ', u'้ฆ†้‡Œ': u'้คจ่ฃ', u'้ฆฌ็ˆพๅœฐๅคซ': u'้ฆฌ็ˆพไปฃๅคซ', u'้ฆฌๅˆฉๅ…ฑๅ’Œๅœ‹': u'้ฆฌ้‡Œๅ…ฑๅ’Œๅœ‹', u'ๅœŸ่ฑ†': u'้ฆฌ้ˆด่–ฏ', u'้ง•่‘—': u'้ง•็€', u'้ง•่‘—ไฝœ': u'้ง•่‘—ไฝœ', u'้ง•่‘—ๅ': u'้ง•่‘—ๅ', u'้ง•่‘—ๆ›ธ': u'้ง•่‘—ๆ›ธ', u'้ง•่‘—็จฑ': u'้ง•่‘—็จฑ', u'้ง•่‘—่€…': u'้ง•่‘—่€…', u'้ง•่‘—่ฟฐ': u'้ง•่‘—่ฟฐ', u'้ง•่‘—้Œ„': u'้ง•่‘—้Œ„', u'้จŽ่‘—': u'้จŽ็€', u'้จŽ่‘—ไฝœ': u'้จŽ่‘—ไฝœ', u'้จŽ่‘—ๅ': u'้จŽ่‘—ๅ', u'้จŽ่‘—ๆ›ธ': u'้จŽ่‘—ๆ›ธ', u'้จŽ่‘—็จฑ': u'้จŽ่‘—็จฑ', u'้จŽ่‘—่€…': u'้จŽ่‘—่€…', u'้จŽ่‘—่ฟฐ': u'้จŽ่‘—่ฟฐ', u'้จŽ่‘—้Œ„': u'้จŽ่‘—้Œ„', u'้จ™่‘—': u'้จ™็€', u'้จ™่‘—ไฝœ': u'้จ™่‘—ไฝœ', u'้จ™่‘—ๅ': u'้จ™่‘—ๅ', u'้จ™่‘—ๆ›ธ': u'้จ™่‘—ๆ›ธ', u'้จ™่‘—็จฑ': u'้จ™่‘—็จฑ', u'้จ™่‘—่€…': u'้จ™่‘—่€…', u'้จ™่‘—่ฟฐ': u'้จ™่‘—่ฟฐ', u'้จ™่‘—้Œ„': u'้จ™่‘—้Œ„', u'้ซ˜่‘—': u'้ซ˜็€', u'้ซ˜่‘—ไฝœ': u'้ซ˜่‘—ไฝœ', u'้ซ˜่‘—ๅ': u'้ซ˜่‘—ๅ', u'้ซ˜่‘—ๆ›ธ': u'้ซ˜่‘—ๆ›ธ', u'้ซ˜่‘—็จฑ': u'้ซ˜่‘—็จฑ', u'้ซ˜่‘—่€…': u'้ซ˜่‘—่€…', u'้ซ˜่‘—่ฟฐ': u'้ซ˜่‘—่ฟฐ', u'้ซ˜่‘—้Œ„': u'้ซ˜่‘—้Œ„', u'้ซญ่‘—': u'้ซญ็€', u'้ซญ่‘—ไฝœ': u'้ซญ่‘—ไฝœ', u'้ซญ่‘—ๅ': u'้ซญ่‘—ๅ', u'้ซญ่‘—ๆ›ธ': u'้ซญ่‘—ๆ›ธ', u'้ซญ่‘—็จฑ': u'้ซญ่‘—็จฑ', u'้ซญ่‘—่€…': u'้ซญ่‘—่€…', 
u'้ซญ่‘—่ฟฐ': u'้ซญ่‘—่ฟฐ', u'้ซญ่‘—้Œ„': u'้ซญ่‘—้Œ„', u'้ฌฅ่‘—': u'้ฌฅ็€', u'้ฌฅ่‘—ไฝœ': u'้ฌฅ่‘—ไฝœ', u'้ฌฅ่‘—ๅ': u'้ฌฅ่‘—ๅ', u'้ฌฅ่‘—ๆ›ธ': u'้ฌฅ่‘—ๆ›ธ', u'้ฌฅ่‘—็จฑ': u'้ฌฅ่‘—็จฑ', u'้ฌฅ่‘—่€…': u'้ฌฅ่‘—่€…', u'้ฌฅ่‘—่ฟฐ': u'้ฌฅ่‘—่ฟฐ', u'้ฌฅ่‘—้Œ„': u'้ฌฅ่‘—้Œ„', u'้บ—่‘—': u'้บ—็€', u'้บ—่‘—ไฝœ': u'้บ—่‘—ไฝœ', u'้บ—่‘—ๅ': u'้บ—่‘—ๅ', u'้บ—่‘—ๆ›ธ': u'้บ—่‘—ๆ›ธ', u'้บ—่‘—็จฑ': u'้บ—่‘—็จฑ', u'้บ—่‘—่€…': u'้บ—่‘—่€…', u'้บ—่‘—่ฟฐ': u'้บ—่‘—่ฟฐ', u'้บ—่‘—้Œ„': u'้บ—่‘—้Œ„', u'้ป่‘—': u'้ป็€', u'้ป่‘—ไฝœ': u'้ป่‘—ไฝœ', u'้ป่‘—ๅ': u'้ป่‘—ๅ', u'้ป่‘—ๆ›ธ': u'้ป่‘—ๆ›ธ', u'้ป่‘—็จฑ': u'้ป่‘—็จฑ', u'้ป่‘—่€…': u'้ป่‘—่€…', u'้ป่‘—่ฟฐ': u'้ป่‘—่ฟฐ', u'้ป่‘—้Œ„': u'้ป่‘—้Œ„', u'้ปž่‘—': u'้ปž็€', u'้ปž่‘—ไฝœ': u'้ปž่‘—ไฝœ', u'้ปž่‘—ๅ': u'้ปž่‘—ๅ', u'้ปž่‘—ๆ›ธ': u'้ปž่‘—ๆ›ธ', u'้ปž่‘—็จฑ': u'้ปž่‘—็จฑ', u'้ปž่‘—่€…': u'้ปž่‘—่€…', u'้ปž่‘—่ฟฐ': u'้ปž่‘—่ฟฐ', u'้ปž่‘—้Œ„': u'้ปž่‘—้Œ„', u'้ปž่ฃก': u'้ปž่ฃ', u'็‚น้‡Œ': u'้ปž่ฃ', })
AdvancedLangConv
/AdvancedLangConv-0.01.tar.gz/AdvancedLangConv-0.01/langconv/defaulttables/zh_hk.py
zh_hk.py
from zh_hant import convtable as oldtable convtable = oldtable.copy() convtable.update({ u'โ€œ': u'ใ€Œ', u'โ€': u'ใ€', u'โ€˜': u'ใ€Ž', u'โ€™': u'ใ€', u'ไธ‰ๆฅต็ฎก': u'ไธ‰ๆฅต้ซ”', u'ไธ‰ๆž็ฎก': u'ไธ‰ๆฅต้ซ”', u'ไธ–็•Œ่ฃ': u'ไธ–็•Œ่ฃก', u'ไธญๆ–‡่ฃ': u'ไธญๆ–‡่ฃก', u'ไธฒ่กŒ': u'ไธฒๅˆ—', u'ไธฒๅˆ—ๅŠ ้€Ÿๅ™จ': u'ไธฒๅˆ—ๅŠ ้€Ÿๅ™จ', u'ไปฅๅคช็ฝ‘': u'ไน™ๅคช็ถฒ', u'ๅฅถ้…ช': u'ไนณ้…ช', u'ไบŒๆฅต็ฎก': u'ไบŒๆฅต้ซ”', u'ไบŒๆž็ฎก': u'ไบŒๆฅต้ซ”', u'ไบคไบ’ๅผ': u'ไบ’ๅ‹•ๅผ', u'้˜ฟๅกžๆ‹œ็–†': u'ไบžๅกžๆ‹œ็„ถ', u'ไบบๅทฅๆ™บ่ƒฝ': u'ไบบๅทฅๆ™บๆ…ง', u'ๆŽฅๅฃ': u'ไป‹้ข', u'ไปปๆ„็ƒๅ“ก': u'ไปปๆ„็ƒๅ“ก', u'ไปปๆ„็ƒๅ‘˜': u'ไปปๆ„็ƒๅ“ก', u'ๆœๅŠกๅ™จ': u'ไผบๆœๅ™จ', u'ๅญ—็ฏ€': u'ไฝๅ…ƒ็ต„', u'ๅญ—่Š‚': u'ไฝๅ…ƒ็ต„', u'ไฝœๅ“่ฃ': u'ไฝœๅ“่ฃก', u'ไผ˜ๅ…ˆ็บง': u'ๅ„ชๅ…ˆ้ †ๅบ', u'ๅ…ƒๅ…‡': u'ๅ…ƒๅ‡ถ', u'ๅ…ƒๅ‡ถ': u'ๅ…ƒๅ‡ถ', u'ๅ…‰็›˜': u'ๅ…‰็ขŸ', u'ๅ…‰้ฉฑ': u'ๅ…‰็ขŸๆฉŸ', u'ๅ…‹็พ…ๅœฐไบž': u'ๅ…‹็พ…ๅŸƒ่ฅฟไบž', u'ๅ…‹็ฝ—ๅœฐไบš': u'ๅ…‹็พ…ๅŸƒ่ฅฟไบž', u'ๅ…จ่ง’': u'ๅ…จๅฝข', u'ๅ†ฌๅคฉ่ฃ': u'ๅ†ฌๅคฉ่ฃก', u'ๅ†ฌๆ—ฅ่ฃ': u'ๅ†ฌๆ—ฅ่ฃก', u'ๅ‡‰่œ': u'ๅ†ท็›ค', u'ๅ†ท่œ': u'ๅ†ท็›ค', u'ๅ‡ถๅ™จ': u'ๅ‡ถๅ™จ', u'ๅ…‡ๅ™จ': u'ๅ‡ถๅ™จ', u'ๅ‡ถๅพ’': u'ๅ‡ถๅพ’', u'ๅ…‡ๅพ’': u'ๅ‡ถๅพ’', u'ๅ…‡ๆ‰‹': u'ๅ‡ถๆ‰‹', u'ๅ‡ถๆ‰‹': u'ๅ‡ถๆ‰‹', u'ๅ…‡ๆกˆ': u'ๅ‡ถๆกˆ', u'ๅ‡ถๆกˆ': u'ๅ‡ถๆกˆ', u'ๅ‡ถๆฎ˜': u'ๅ‡ถๆฎ˜', u'ๅ…‡ๆฎ˜': u'ๅ‡ถๆฎ˜', u'ๅ‡ถๆฎ‹': u'ๅ‡ถๆฎ˜', u'ๅ…‡ๆฎบ': u'ๅ‡ถๆฎบ', u'ๅ‡ถๆ€': u'ๅ‡ถๆฎบ', u'ๅ‡ถๆฎบ': u'ๅ‡ถๆฎบ', u'ๅˆ†ๅธƒๅผ': u'ๅˆ†ๆ•ฃๅผ', u'ๆ‰“ๅฐ': u'ๅˆ—ๅฐ', u'ๅˆ—ๆ”ฏๆ•ฆๅฃซ็™ป': u'ๅˆ—ๆ”ฏๆ•ฆๆ–ฏ็™ป', u'ๅ‰ชๅฝฉ': u'ๅ‰ช็ถต', u'ๅŠ ่“ฌ': u'ๅŠ ๅฝญ', u'ๆ€ป็บฟ': u'ๅŒฏๆตๆŽ’', u'ๅฑ€ๅŸŸ็ฝ‘': u'ๅ€ๅŸŸ็ถฒ', u'็‰น็ซ‹ๅฐผ้”ๅ’Œๅคšๅทดๅ“ฅ': u'ๅƒ้‡Œ้”ๆ‰˜่ฒๅ“ฅ', u'็‰น็ซ‹ๅฐผ่พพๅ’Œๆ‰˜ๅทดๅ“ฅ': u'ๅƒ้‡Œ้”ๆ‰˜่ฒๅ“ฅ', u'ๅŠ่ง’': u'ๅŠๅฝข', u'ๅกๅก”็ˆพ': u'ๅก้”', u'ๅกๅก”ๅฐ”': u'ๅก้”', u'ๆ‰“ๅฐๆฉŸ': u'ๅฐ่กจๆฉŸ', u'ๆ‰“ๅฐๆœบ': u'ๅฐ่กจๆฉŸ', u'ๅŽ„็ซ‹็‰น้‡Œไบž': u'ๅŽ„ๅˆฉๅž‚ไบž', u'ๅŽ„็ซ‹็‰น้‡Œไบš': u'ๅŽ„ๅˆฉๅž‚ไบž', u'ๅŽ„็“œๅคšๅฐ”': u'ๅŽ„็“œๅคš', u'ๅŽ„็“œๅคš็ˆพ': u'ๅŽ„็“œๅคš', u'ๆ–ฏๅจๅฃซๅ…ฐ': u'ๅฒ็“ฆๆฟŸ่˜ญ', u'ๆ–ฏๅจๅฃซ่˜ญ': u'ๅฒ็“ฆๆฟŸ่˜ญ', u'ๅ‰ๅธƒๆ': 
u'ๅ‰ๅธƒๅœฐ', u'ๅ‰ๅธƒๅ ค': u'ๅ‰ๅธƒๅœฐ', u'ๅŸบ้‡Œๅทดๆ–ฏ': u'ๅ‰้‡Œๅทดๆ–ฏ', u'ๅœ–็“ฆ็›ง': u'ๅ็“ฆ้ญฏ', u'ๅ›พ็“ฆๅข': u'ๅ็“ฆ้ญฏ', u'ๅ“ˆ่จๅ…‹ๆ–ฏๅฆ': u'ๅ“ˆ่–ฉๅ…‹', u'ๅ“ฅๆ–ฏ้”้ปŽๅŠ ': u'ๅ“ฅๆ–ฏๅคง้ปŽๅŠ ', u'ๅ“ฅๆ–ฏ่พพ้ปŽๅŠ ': u'ๅ“ฅๆ–ฏๅคง้ปŽๅŠ ', u'ๆ ผ้ญฏๅ‰ไบž': u'ๅ–ฌๆฒปไบž', u'ๆ ผ้ฒๅ‰ไบš': u'ๅ–ฌๆฒปไบž', u'ไฝๆฒปไบš': u'ๅ–ฌๆฒปไบž', u'ไฝๆฒปไบž': u'ๅ–ฌๆฒปไบž', u'ๅ˜ด่ฃ': u'ๅ˜ด่ฃก', u'ๅœŸๅบ“ๆ›ผๆ–ฏๅฆ': u'ๅœŸๅบซๆ›ผ', u'่–ฏไป”': u'ๅœŸ่ฑ†', u'ๅœŸ่ฑ†็ถฒ': u'ๅœŸ่ฑ†็ถฒ', u'ๅœŸ่ฑ†็ฝ‘': u'ๅœŸ่ฑ†็ถฒ', u'ๅฆๆก‘ๅฐผไบš': u'ๅฆๅฐšๅฐผไบž', u'ๅฆๆก‘ๅฐผไบž': u'ๅฆๅฐšๅฐผไบž', u'็ซฏๅฃ': u'ๅŸ ', u'ๅก”ๅ‰ๅ…‹ๆ–ฏๅฆ': u'ๅก”ๅ‰ๅ…‹', u'ๅกž่ˆŒๅฐ”': u'ๅกžๅธญ็ˆพ', u'ๅกž่ˆŒ็ˆพ': u'ๅกžๅธญ็ˆพ', u'ๅกžๆตฆ่ทฏๆ–ฏ': u'ๅกžๆ™ฎๅ‹’ๆ–ฏ', u'ๅคๅคฉ่ฃ': u'ๅคๅคฉ่ฃก', u'ๅคๆ—ฅ่ฃ': u'ๅคๆ—ฅ่ฃก', u'ๅคšๆ˜ŽๅฐผๅŠ ๅ…ฑๅ’Œๅœ‹': u'ๅคšๆ˜ŽๅฐผๅŠ ', u'ๅคš็ฑณๅฐผๅŠ ๅ…ฑๅ’Œๅ›ฝ': u'ๅคšๆ˜ŽๅฐผๅŠ ', u'ๅคš็ฑณๅฐผๅŠ ๅ…ฑๅ’Œๅœ‹': u'ๅคšๆ˜ŽๅฐผๅŠ ', u'ๅคš็ฑณๅฐผๅŠ ๅ›ฝ': u'ๅคš็ฑณๅฐผๅ…‹', u'ๅคšๆ˜ŽๅฐผๅŠ ๅœ‹': u'ๅคš็ฑณๅฐผๅ…‹', u'็ฉฟๆขญๆฉŸ': u'ๅคช็ฉบๆขญ', u'่ˆชๅคฉ้ฃžๆœบ': u'ๅคช็ฉบๆขญ', u'ๅฐผๆ—ฅๅˆฉไบš': u'ๅฅˆๅŠๅˆฉไบž', u'ๅฐผๆ—ฅๅˆฉไบž': u'ๅฅˆๅŠๅˆฉไบž', u'ๅญ—็ฌฆ': u'ๅญ—ๅ…ƒ', u'ๅญ—ๅท': u'ๅญ—ๅž‹ๅคงๅฐ', u'ๅญ—ๅบ“': u'ๅญ—ๅž‹ๆช”', u'ๅญ—็ฌฆ้›†': u'ๅญ—็ฌฆ้›†', u'ๅญ˜็›˜': u'ๅญ˜ๆช”', u'ๅญธ่ฃ': u'ๅญธ่ฃก', u'ๅฎ‰ๆ็“œๅ’Œๅทดๅธƒ้”': u'ๅฎ‰ๅœฐๅกๅŠๅทดๅธƒ้”', u'ๅฎ‰ๆ็“œๅ’Œๅทดๅธƒ่พพ': u'ๅฎ‰ๅœฐๅกๅŠๅทดๅธƒ้”', u'ๅฎ‹ๅ…ƒ': u'ๅฎ‹ๅ…ƒ', u'ๆดช้ƒฝๆ‹‰ๆ–ฏ': u'ๅฎ้ƒฝๆ‹‰ๆ–ฏ', u'ๅฏปๅ€': u'ๅฎšๅ€', u'ๅฏ’ๅ‡่ฃ': u'ๅฏ’ๅ‡่ฃก', u'ๅฎฝๅธฆ': u'ๅฏฌ้ ป', u'่€ๆ’พ': u'ๅฏฎๅœ‹', u'่€ๆŒ': u'ๅฏฎๅœ‹', u'ๆ‰“้—จ': u'ๅฐ„้–€', u'ๅฐˆ่ผฏ่ฃ': u'ๅฐˆ่ผฏ่ฃก', u'่ดŠๆฏ”ไบž': u'ๅฐšๆฏ”ไบž', u'่ตžๆฏ”ไบš': u'ๅฐšๆฏ”ไบž', u'ๅฐผๆ—ฅ็ˆพ': u'ๅฐผๆ—ฅ', u'ๅฐผๆ—ฅๅฐ”': u'ๅฐผๆ—ฅ', u'ๅฑฑๆดž่ฃ': u'ๅฑฑๆดž่ฃก', u'ๅทดๅธƒไบžๆ–ฐ็•ฟๅ…งไบž': u'ๅทดๅธƒไบž็ดๅนพๅ…งไบž', u'ๅทดๅธƒไบšๆ–ฐๅ‡ ๅ†…ไบš': u'ๅทดๅธƒไบž็ดๅนพๅ…งไบž', u'ๅทดๅทดๅคšๆ–ฏ': u'ๅทด่ฒๅคš', u'ๅธƒๅŸบ็บณๆณ•็ดข': u'ๅธƒๅ‰็ดๆณ•็ดข', u'ๅธƒๅŸบ็ดๆณ•็ดข': u'ๅธƒๅ‰็ดๆณ•็ดข', u'ๅธƒไป€': u'ๅธƒๅธŒ', u'ๅธƒๆฎŠ': u'ๅธƒๅธŒ', u'ๅธ•ๅŠณ': u'ๅธ›็‰', u'ไพ‹็จ‹': 
u'ๅธธๅผ', u'ๅนณๆฒปไน‹ไนฑ': u'ๅนณๆฒปไน‹ไบ‚', u'ๅนณๆฒปไน‹ไบ‚': u'ๅนณๆฒปไน‹ไบ‚', u'ๅนดไปฃ่ฃ': u'ๅนดไปฃ่ฃก', u'ๅ‡ ๅ†…ไบšๆฏ”็ป': u'ๅนพๅ…งไบžๆฏ”็ดข', u'ๅนพๅ…งไบžๆฏ”็ดน': u'ๅนพๅ…งไบžๆฏ”็ดข', u'ๅฝฉๅธฆ': u'ๅฝฉๅธถ', u'ๅฝฉๆŽ’': u'ๅฝฉๆŽ’', u'ๅฝฉๆฅผ': u'ๅฝฉๆจ“', u'ๅฝฉ็‰Œๆฅผ': u'ๅฝฉ็‰Œๆจ“', u'ๅพฉ่˜‡': u'ๅพฉ็”ฆ', u'ๅค่‹': u'ๅพฉ็”ฆ', u'ๅฟƒ่ฃ': u'ๅฟƒ่ฃก', u'ๅฟซ้—ชๅญ˜ๅ‚จๅ™จ': u'ๅฟซ้–ƒ่จ˜ๆ†ถ้ซ”', u'้—ชๅญ˜': u'ๅฟซ้–ƒ่จ˜ๆ†ถ้ซ”', u'ๆƒณ่ฑก': u'ๆƒณๅƒ', u'ไผ ๆ„Ÿ': u'ๆ„Ÿๆธฌ', u'ไน ็”จ': u'ๆ…ฃ็”จ', u'ๆˆๅฝฉๅจฑไบฒ': u'ๆˆฒ็ถตๅจ›่ฆช', u'ๆˆฒ่ฃ': u'ๆˆฒ่ฃก', u'ๆ‰‹็”ต็ญ’': u'ๆ‰‹้›ป็ญ’', u'ๆ‰‹็”ต': u'ๆ‰‹้›ป็ญ’', u'ๆ‹ฌๅท': u'ๆ‹ฌๅผง', u'ๆ‹ฟ็ ดไพ–': u'ๆ‹ฟ็ ดๅด™', u'ๆ‹ฟ็ ดไป‘': u'ๆ‹ฟ็ ดๅด™', u'็ฉๆžถ': u'ๆท่ฑน', u'ๆ‰ซ็ž„ไปช': u'ๆŽƒ็ž„ๅ™จ', u'ๆŒ‚้’ฉ': u'ๆŽ›้‰ค', u'ๆŽ›้ˆŽ': u'ๆŽ›้‰ค', u'ๆŽงไปถ': u'ๆŽงๅˆถ้ …', u'ๅฐ็ƒ': u'ๆ’ž็ƒ', u'ๆกŒ็ƒ': u'ๆ’ž็ƒ', u'ไพฟๆบๅผ': u'ๆ”œๅธถๅž‹', u'ๆ•…ไบ‹่ฃ': u'ๆ•…ไบ‹่ฃก', u'่ฐƒๅˆถ่งฃ่ฐƒๅ™จ': u'ๆ•ธๆ“šๆฉŸ', u'่ชฟๅˆถ่งฃ่ชฟๅ™จ': u'ๆ•ธๆ“šๆฉŸ', u'ๆ–ฏๆด›ๆ–‡ๅฐผไบž': u'ๆ–ฏๆด›็ถญๅฐผไบž', u'ๆ–ฏๆด›ๆ–‡ๅฐผไบš': u'ๆ–ฏๆด›็ถญๅฐผไบž', u'ๆ–ฐ็บชๅ…ƒ': u'ๆ–ฐ็ด€ๅ…ƒ', u'ๆ–ฐ็ด€ๅ…ƒ': u'ๆ–ฐ็ด€ๅ…ƒ', u'ๆ—ฅๅญ่ฃ': u'ๆ—ฅๅญ่ฃก', u'ๆ˜ฅๅ‡่ฃ': u'ๆ˜ฅๅ‡่ฃก', u'ๆ˜ฅๅคฉ่ฃ': u'ๆ˜ฅๅคฉ่ฃก', u'ๆ˜ฅๆ—ฅ่ฃ': u'ๆ˜ฅๆ—ฅ่ฃก', u'ๆ™‚้–“่ฃ': u'ๆ™‚้–“่ฃก', u'่Šฏ็‰‡': u'ๆ™ถๅ…ƒ', u'ๆš‘ๅ‡่ฃ': u'ๆš‘ๅ‡่ฃก', u'ๆ‘ๅญ่ฃ': u'ๆ‘ๅญ่ฃก', u'ไนๅพ—': u'ๆŸฅๅพท', u'ๅ…‹ๆž—้ “': u'ๆŸฏๆž—้ “', u'ๅ…‹ๆž—้กฟ': u'ๆŸฏๆž—้ “', u'ๆ ผๆž—็ด้”': u'ๆ ผ็‘ž้‚ฃ้”', u'ๆ ผๆž—็บณ่พพ': u'ๆ ผ็‘ž้‚ฃ้”', u'ๅ‡ก้ซ˜': u'ๆขต่ฐท', u'ๆฃฎๆž—่ฃ': u'ๆฃฎๆž—่ฃก', u'ๆฃบๆ่ฃ': u'ๆฃบๆ่ฃก', u'ๆฆด่“ฎ': u'ๆฆดๆงค', u'ๆฆด่Žฒ': u'ๆฆดๆงค', u'ไปฟ็œŸ': u'ๆจกๆ“ฌ', u'ๆฏ›้‡Œ่ฃ˜ๆ–ฏ': u'ๆจก้‡Œ่ฅฟๆ–ฏ', u'ๆฏ›้‡Œๆฑ‚ๆ–ฏ': u'ๆจก้‡Œ่ฅฟๆ–ฏ', u'ๆฉŸๆขฐไบบ': u'ๆฉŸๅ™จไบบ', u'ๆœบๅ™จไบบ': u'ๆฉŸๅ™จไบบ', u'ๅญ—ๆฎต': u'ๆฌ„ไฝ', u'ๆญทๅฒ่ฃ': u'ๆญทๅฒ่ฃก', u'ๅ…ƒ้Ÿณ': u'ๆฏ้Ÿณ', u'ๆฐธๅކ': u'ๆฐธๆ›†', u'ๆ–‡่Žฑ': u'ๆฑถ่Š', u'ๆฒ™็‰น้˜ฟๆ‹‰ไผฏ': u'ๆฒ™็ƒๅœฐ้˜ฟๆ‹‰ไผฏ', u'ๆฒ™ๅœฐ้˜ฟๆ‹‰ไผฏ': u'ๆฒ™็ƒๅœฐ้˜ฟๆ‹‰ไผฏ', u'ๆณขๆ–ฏๅฐผไบž้ป‘ๅกžๅ“ฅ็ถญ้‚ฃ': u'ๆณขๅฃซๅฐผไบž่ตซๅกžๅ“ฅ็ถญ็ด', 
u'ๆณขๆ–ฏๅฐผไบšๅ’Œ้ป‘ๅกžๅ“ฅ็ปด้‚ฃ': u'ๆณขๅฃซๅฐผไบž่ตซๅกžๅ“ฅ็ถญ็ด', u'ๅš่Œจ็“ฆ็บณ': u'ๆณขๆœญ้‚ฃ', u'ๅš่Œจ็“ฆ็ด': u'ๆณขๆœญ้‚ฃ', u'ไพฏ่ต›ๅ› ': u'ๆตท็Š', u'ไพฏ่ณฝๅ› ': u'ๆตท็Š', u'ๆทฑๆทต่ฃ': u'ๆทฑๆทต่ฃก', u'ๅ…‰ๆ ‡': u'ๆธธๆจ™', u'้ผ ๆ ‡': u'ๆป‘้ผ ', u'็ฎ—ๆณ•': u'ๆผ”็ฎ—ๆณ•', u'ไนŒๅ…นๅˆซๅ…‹ๆ–ฏๅฆ': u'็ƒ่Œฒๅˆฅๅ…‹', u'่ฏ็ป„': u'็‰‡่ชž', u'็„่ฃ': u'็„่ฃก', u'ๅกžๆ‹‰ๅˆฉๆ˜‚': u'็…ๅญๅฑฑ', u'ๅฑๅœฐ้ฉฌๆ‹‰': u'็“œๅœฐ้ฆฌๆ‹‰', u'ๅฑๅœฐ้ฆฌๆ‹‰': u'็“œๅœฐ้ฆฌๆ‹‰', u'ๅ†ˆๆฏ”ไบš': u'็”˜ๆฏ”ไบž', u'ๅฒกๆฏ”ไบž': u'็”˜ๆฏ”ไบž', u'็–‘ๅ…‡': u'็–‘ๅ‡ถ', u'็–‘ๅ‡ถ': u'็–‘ๅ‡ถ', u'็™พ็ง‘่ฃ': u'็™พ็ง‘่ฃก', u'็šฎ่ฃ้™ฝ็ง‹': u'็šฎ่ฃก้™ฝ็ง‹', u'็›งๆ—บ้”': u'็›งๅฎ‰้”', u'ๅขๆ—บ่พพ': u'็›งๅฎ‰้”', u'็œŸๅ‡ถ': u'็œŸๅ‡ถ', u'็œŸๅ…‡': u'็œŸๅ‡ถ', u'็œผ็›่ฃ': u'็œผ็›่ฃก', u'็ก…็‰‡': u'็Ÿฝ็‰‡', u'็ก…่ฐท': u'็Ÿฝ่ฐท', u'็กฌ็›˜': u'็กฌ็ขŸ', u'็กฌไปถ': u'็กฌ้ซ”', u'็›˜็‰‡': u'็ขŸ็‰‡', u'็ฃ็›˜': u'็ฃ็ขŸ', u'็ฃ้“': u'็ฃ่ปŒ', u'็ง‹ๅ‡่ฃ': u'็ง‹ๅ‡่ฃก', u'็ง‹ๅคฉ่ฃ': u'็ง‹ๅคฉ่ฃก', u'็ง‹ๆ—ฅ่ฃ': u'็ง‹ๆ—ฅ่ฃก', u'็จ‹ๆŽง': u'็จ‹ๅผๆŽงๅˆถ', u'็ชๅฐผๆ–ฏ': u'็ชๅฐผ่ฅฟไบž', u'ๅฐพๆณจ': u'็ซ ็ฏ€้™„่จป', u'่นฆๆž่ทณ': u'็ฌจ่ฑฌ่ทณ', u'็ป‘็ดง่ทณ': u'็ฌจ่ฑฌ่ทณ', u'็ญ‰ไบŽ': u'็ญ‰ๆ–ผ', u'็Ÿญ่จŠ': u'็ฐก่จŠ', u'็Ÿญไฟก': u'็ฐก่จŠ', u'็ณปๅˆ—่ฃ': u'็ณปๅˆ—่ฃก', u'ๆ–ฐ่ฅฟ่˜ญ': u'็ด่ฅฟ่˜ญ', u'ๆ–ฐ่ฅฟๅ…ฐ': u'็ด่ฅฟ่˜ญ', u'ๆ‰€็ฝ—้—จ็พคๅฒ›': u'็ดข็พ…้–€็พคๅณถ', u'ๆ‰€็พ…้–€็พคๅณถ': u'็ดข็พ…้–€็พคๅณถ', u'็ดข้ฆฌ้‡Œ': u'็ดข้ฆฌๅˆฉไบž', u'็ดข้ฉฌ้‡Œ': u'็ดข้ฆฌๅˆฉไบž', u'็ป“ๅฝฉ': u'็ต็ถต', u'ไฝ›ๅพ—่ง’': u'็ถญๅพท่ง’', u'็ถฒ็ตก': u'็ถฒ่ทฏ', u'็ฝ‘็ปœ': u'็ถฒ่ทฏ', u'ไบ’่ฏ็ถฒ': u'็ถฒ้š›็ถฒ่ทฏ', u'ๅ› ็‰น็ฝ‘': u'็ถฒ้š›็ถฒ่ทฏ', u'ๅฝฉ็ƒ': u'็ถต็ƒ', u'ๅฝฉ็ปธ': u'็ถต็ถข', u'ๅฝฉ็บฟ': u'็ถต็ทš', u'ๅฝฉ่ˆน': u'็ถต่ˆน', u'ๅฝฉ่กฃ': u'็ถต่กฃ', u'็ผ‰ๅ‡ถ': u'็ทๅ‡ถ', u'็ทๅ…‡': u'็ทๅ‡ถ', u'็ทๅ‡ถ': u'็ทๅ‡ถ', u'ๆ„ๅคงๅˆฉ': u'็พฉๅคงๅˆฉ', u'่€ๅญ—ๅท': u'่€ๅญ—่™Ÿ', u'ๅœฃๅŸบ่Œจๅ’Œๅฐผ็ปดๆ–ฏ': u'่–ๅ…‹้‡Œๆ–ฏๅคš็ฆๅŠๅฐผ็ถญๆ–ฏ', u'่–ๅ‰ๆ–ฏ็ดๅŸŸๆ–ฏ': u'่–ๅ…‹้‡Œๆ–ฏๅคš็ฆๅŠๅฐผ็ถญๆ–ฏ', u'่–ๆ–‡ๆฃฎ็‰นๅ’Œๆ ผๆž—็ดไธๆ–ฏ': u'่–ๆ–‡ๆฃฎๅŠๆ ผ็‘ž้‚ฃไธ', 
u'ๅœฃๆ–‡ๆฃฎ็‰นๅ’Œๆ ผๆž—็บณไธๆ–ฏ': u'่–ๆ–‡ๆฃฎๅŠๆ ผ็‘ž้‚ฃไธ', u'ๅœฃๅข่ฅฟไบš': u'่–้œฒ่ฅฟไบž', u'่–็›ง่ฅฟไบž': u'่–้œฒ่ฅฟไบž', u'ๅœฃ้ฉฌๅŠ›่ฏบ': u'่–้ฆฌๅˆฉ่ซพ', u'่–้ฆฌๅŠ›่ซพ': u'่–้ฆฌๅˆฉ่ซพ', u'่‚š่ฃ': u'่‚š่ฃก', u'่‚ฏๅฐผไบš': u'่‚ฏไบž', u'่‚ฏ้›…': u'่‚ฏไบž', u'ไปปๆ„็ƒ': u'่‡ช็”ฑ็ƒ', u'่ˆชๅคฉๅคงๅญฆ': u'่ˆชๅคฉๅคงๅญธ', u'่‹ฆ่ฃ': u'่‹ฆ่ฃก', u'ๆฏ›้‡Œๅก”ๅฐผไบš': u'่Œ…ๅˆฉๅก”ๅฐผไบž', u'ๆฏ›้‡Œๅก”ๅฐผไบž': u'่Œ…ๅˆฉๅก”ๅฐผไบž', u'่Žซๆก‘ๆฏ”ๅ…‹': u'่Žซไธ‰ๆฏ”ๅ…‹', u'ไธ‡ๅކ': u'่ฌๆ›†', u'็“ฆๅŠช้˜ฟๅ›พ': u'่ฌ้‚ฃๆœ', u'็“ฆๅŠช้˜ฟๅœ–': u'่ฌ้‚ฃๆœ', u'ไนŸ้–€': u'่‘‰้–€', u'ไนŸ้—จ': u'่‘‰้–€', u'็€': u'่‘—', u'็ง‘ๆ‘ฉ็พ…': u'่‘›ๆ‘ฉ', u'็ง‘ๆ‘ฉ็ฝ—': u'่‘›ๆ‘ฉ', u'ๅธƒ้š†่ฟช': u'่’ฒ้š†ๅœฐ', u'ๅœญไบž้‚ฃ': u'่“‹ไบž้‚ฃ', u'ๅœญไบš้‚ฃ': u'่“‹ไบž้‚ฃ', u'็ซ้”…็›–ๅธฝ': u'่“‹็ซ้‹', u'่‹้‡Œๅ—': u'่˜‡ๅˆฉๅ—', u'่กŒๅ‡ถ': u'่กŒๅ‡ถ', u'่กŒๅ…‡': u'่กŒๅ‡ถ', u'่กŒๅ‡ถๅŽ': u'่กŒๅ‡ถๅพŒ', u'่กŒๅ…‡ๅพŒ': u'่กŒๅ‡ถๅพŒ', u'่กŒๅ‡ถๅพŒ': u'่กŒๅ‡ถๅพŒ', u'ๆตๅ‹•้›ป่ฉฑ': u'่กŒๅ‹•้›ป่ฉฑ', u'็งปๅŠจ็”ต่ฏ': u'่กŒๅ‹•้›ป่ฉฑ', u'่กŒ็จ‹ๆŽงๅˆถ': u'่กŒ็จ‹ๆŽงๅˆถ', u'่กž': u'่ก›', u'ๅซ็”Ÿ': u'่ก›็”Ÿ', u'่กž็”Ÿ': u'่ก›็”Ÿ', u'ๅŸƒๅกžไฟ„ๆฏ”ไบš': u'่กฃ็ดขๆฏ”ไบž', u'ๅŸƒๅกžไฟ„ๆฏ”ไบž': u'่กฃ็ดขๆฏ”ไบž', u'่ฃๅ‹พๅค–้€ฃ': u'่ฃกๅ‹พๅค–้€ฃ', u'่ฃ้ข': u'่ฃก้ข', u'ๅˆ†่พจ็އ': u'่งฃๆžๅบฆ', u'่ฏ‘็ ': u'่งฃ็ขผ', u'ๅ‡บ็งŸ่ฝฆ': u'่จˆ็จ‹่ปŠ', u'ๆƒ้™': u'่จฑๅฏๆฌŠ', u'็‘™้ฒ': u'่ซพ้ญฏ', u'็‘™้ญฏ': u'่ซพ้ญฏ', u'ๅ˜้‡': u'่ฎŠๆ•ธ', u'็ง‘็‰น่ฟช็“ฆ': u'่ฑก็‰™ๆตทๅฒธ', u'่ฒๅฏง': u'่ฒๅ—', u'่ดๅฎ': u'่ฒๅ—', u'ไผฏๅˆฉ่Œฒ': u'่ฒ้‡Œๆ–ฏ', u'ไผฏๅˆฉๅ…น': u'่ฒ้‡Œๆ–ฏ', u'่ฒทๅ…‡': u'่ฒทๅ‡ถ', u'ไนฐๅ‡ถ': u'่ฒทๅ‡ถ', u'่ฒทๅ‡ถ': u'่ฒทๅ‡ถ', u'ๆ•ฐๆฎๅบ“': u'่ณ‡ๆ–™ๅบซ', u'ไฟกๆฏ่ฎบ': u'่ณ‡่จŠ็†่ซ–', u'ๅฅ”้ฉฐ': u'่ณ“ๅฃซ', u'ๅนณๆฒป': u'่ณ“ๅฃซ', u'ๅˆฉๆฏ”้‡Œไบš': u'่ณดๆฏ”็‘žไบž', u'ๅˆฉๆฏ”้‡Œไบž': u'่ณดๆฏ”็‘žไบž', u'่Š็ดขๆ‰˜': u'่ณด็ดขๆ‰˜', u'่Žฑ็ดขๆ‰˜': u'่ณด็ดขๆ‰˜', u'่ฝฏ้ฉฑ': u'่ปŸ็ขŸๆฉŸ', u'่ปŸไปถ': u'่ปŸ้ซ”', u'่ฝฏไปถ': u'่ปŸ้ซ”', u'ๅŠ ่ฝฝ': u'่ผ‰ๅ…ฅ', u'ๆดฅๅทดๅธƒ้Ÿฆ': u'่พ›ๅทดๅจ', u'ๆดฅๅทดๅธƒ้Ÿ‹': u'่พ›ๅทดๅจ', u'่ฏๆฑ‡': u'่พญๅฝ™', u'ๅŠ 
็บณ': u'่ฟฆ็ด', u'ๅŠ ็ด': u'่ฟฆ็ด', u'่ฟฝๅ‡ถ': u'่ฟฝๅ‡ถ', u'่ฟฝๅ…‡': u'่ฟฝๅ‡ถ', u'้€™่ฃ': u'้€™่ฃก', u'ไฟก้“': u'้€š้“', u'้€žๅ‡ถ้ฌฅ็‹ ': u'้€žๅ‡ถ้ฌฅ็‹ ', u'้€žๅ…‡้ฌฅ็‹ ': u'้€žๅ‡ถ้ฌฅ็‹ ', u'้€žๅ‡ถๆ–—็‹ ': u'้€žๅ‡ถ้ฌฅ็‹ ', u'ๅณ้ฃŸ้บต': u'้€Ÿ้ฃŸ้บต', u'ๆ–นไพฟ้ข': u'้€Ÿ้ฃŸ้บต', u'ๅฟซ้€Ÿ้ข': u'้€Ÿ้ฃŸ้บต', u'่ฟžๅญ—ๅท': u'้€ฃๅญ—่™Ÿ', u'่ฟ›ๅˆถ': u'้€ฒไฝ', u'ๅ…ฅ็ƒ': u'้€ฒ็ƒ', u'็ฎ—ๅญ': u'้‹็ฎ—ๅ…ƒ', u'้ ็จ‹ๆŽงๅˆถ': u'้ ็จ‹ๆŽงๅˆถ', u'่ฟœ็จ‹ๆŽงๅˆถ': u'้ ็จ‹ๆŽงๅˆถ', u'ๆบซ็ดๅœ–่ฌ': u'้‚ฃๆœ', u'้†ซ้™ข่ฃ': u'้†ซ้™ข่ฃก', u'้…ฐ': u'้†ฏ', u'ๅทจๅ•†': u'้‰…่ณˆ', u'้’ฉ': u'้‰ค', u'้ˆŽ': u'้‰ค', u'้’ฉๅฟƒๆ–—่ง’': u'้‰คๅฟƒ้ฌฅ่ง’', u'้ˆŽๅฟƒ้ฌฅ่ง’': u'้‰คๅฟƒ้ฌฅ่ง’', u'ๅ†™ไฟๆŠค': u'้˜ฒๅฏซ', u'้˜ฟๆ‹‰ไผฏ่”ๅˆ้…‹้•ฟๅ›ฝ': u'้˜ฟๆ‹‰ไผฏ่ฏๅˆๅคงๅ…ฌๅœ‹', u'้˜ฟๆ‹‰ไผฏ่ฏๅˆ้…‹้•ทๅœ‹': u'้˜ฟๆ‹‰ไผฏ่ฏๅˆๅคงๅ…ฌๅœ‹', u'ๅ™ชๅฃฐ': u'้›œ่จŠ', u'่„ฑๆœบ': u'้›ข็ทš', u'้›ช่ฃ็ด…': u'้›ช่ฃก็ด…', u'้›ช่ฃ่•ป': u'้›ช่ฃก่•ป', u'้›ช้“้พ™': u'้›ช้ต้พ', u'้’้œ‰็ด ': u'้’้ปด็ด ', u'ๅผ‚ๆญฅ': u'้žๅŒๆญฅ', u'ๅฃฐๅก': u'้Ÿณๆ•ˆๅก', u'็ผบ็œ': u'้ ่จญ', u'้ขๅธƒ': u'้ ’ๅธƒ', u'้ ’ไฝˆ': u'้ ’ๅธƒ', u'้ ˜ๅŸŸ่ฃ': u'้ ˜ๅŸŸ่ฃก', u'ๅคด็ƒ': u'้ ญๆงŒ', u'็ฒ’ๅ…ฅ็ƒ': u'้ก†้€ฒ็ƒ', u'้คจ่ฃ': u'้คจ่ฃก', u'้ฉฌ้‡Œๅ…ฑๅ’Œๅ›ฝ': u'้ฆฌๅˆฉๅ…ฑๅ’Œๅœ‹', u'้ฆฌ้‡Œๅ…ฑๅ’Œๅœ‹': u'้ฆฌๅˆฉๅ…ฑๅ’Œๅœ‹', u'้ฉฌ่€ณไป–': u'้ฆฌ็ˆพไป–', u'้ฉฌๅฐ”ไปฃๅคซ': u'้ฆฌ็ˆพๅœฐๅคซ', u'้ฆฌ็ˆพไปฃๅคซ': u'้ฆฌ็ˆพๅœฐๅคซ', u'่ฌไบ‹ๅพ—': u'้ฆฌ่‡ช้”', u'็‹„ๅฎ‰ๅจœ': u'้ป›ๅฎ‰ๅจœ', u'ๆˆดๅฎ‰ๅจœ': u'้ป›ๅฎ‰ๅจœ', u'้ปž่ฃ': u'้ปž่ฃก', u'ไฝๅ›พ': u'้ปž้™ฃๅœ–', })
AdvancedLangConv
/AdvancedLangConv-0.01.tar.gz/AdvancedLangConv-0.01/langconv/defaulttables/zh_tw.py
zh_tw.py
convtable = { u'ใ‘ณ': u'ใ‘‡', u'ใžž': u'๐ชจŠ', u'ใ ': u'ใŸ†', u'ใฉœ': u'ใจซ', u'ไ‰ฌ': u'๐ซ‚ˆ', u'ไŠท': u'ไŒถ', u'ไ‹™': u'ไŒบ', u'ไ‹ป': u'ไŒพ', u'ไŒˆ': u'๐ฆˆ–', u'ไผ': u'ไž', u'ไช': u'๐ฉผ', u'ไช—': u'๐ฉ€', u'ไช˜': u'๐ฉฟ', u'ไซด': u'๐ฉ–—', u'ไฌ˜': u'๐ฉ™ฎ', u'ไฌ': u'๐ฉ™ฏ', u'ไญ€': u'๐ฉ ‡', u'ไญƒ': u'๐ฉ ˆ', u'ไญฟ': u'๐ฉงญ', u'ไฎ': u'๐ฉงฐ', u'ไฎž': u'๐ฉจ', u'ไฎ ': u'๐ฉงฟ', u'ไฎณ': u'๐ฉจ', u'ไฎพ': u'๐ฉงช', u'ไฏ€': u'ไฏ…', u'ไฐพ': u'้ฒƒ', u'ไฑ™': u'๐ฉพˆ', u'ไฑฌ': u'๐ฉพŠ', u'ไฑฐ': u'๐ฉพ‹', u'ไฑท': u'ไฒฃ', u'ไฑฝ': u'ไฒ', u'ไฒ': u'้ณš', u'ไฒฐ': u'๐ช‰‚', u'ไดฌ': u'๐ชŽˆ', u'ไดด': u'๐ชŽ‹', u'ไธŸ': u'ไธข', u'ไธฆ': u'ๅนถ', u'ไนพ': u'ๅนฒ', u'ไบ‚': u'ไนฑ', u'ไบ™': u'ไบ˜', u'ไบž': u'ไบš', u'ไฝ‡': u'ไผซ', u'ไฝˆ': u'ๅธƒ', u'ไฝ”': u'ๅ ', u'ไฝต': u'ๅนถ', u'ไพ†': u'ๆฅ', u'ไพ–': u'ไป‘', u'ไพถ': u'ไพฃ', u'ไฟ': u'ไฟฃ', u'ไฟ‚': u'็ณป', u'ไฟ”': u'ไผฃ', u'ไฟ ': u'ไพ ', u'ๅ€€': u'ไผฅ', u'ๅ€†': u'ไฟฉ', u'ๅ€ˆ': u'ไฟซ', u'ๅ€‰': u'ไป“', u'ๅ€‹': u'ไธช', u'ๅ€‘': u'ไปฌ', u'ๅ€–': u'ๅนธ', u'ๅ€ซ': u'ไผฆ', u'ๅ‰': u'ไผŸ', u'ๅด': u'ไพง', u'ๅต': u'ไพฆ', u'ๅฝ': u'ไผช', u'ๅ‚‘': u'ๆฐ', u'ๅ‚–': u'ไผง', u'ๅ‚˜': u'ไผž', u'ๅ‚™': u'ๅค‡', u'ๅ‚ข': u'ๅฎถ', u'ๅ‚ญ': u'ไฝฃ', u'ๅ‚ฏ': u'ๅฌ', u'ๅ‚ณ': u'ไผ ', u'ๅ‚ด': u'ไผ›', u'ๅ‚ต': u'ๅ€บ', u'ๅ‚ท': u'ไผค', u'ๅ‚พ': u'ๅ€พ', u'ๅƒ‚': u'ๅป', u'ๅƒ…': u'ไป…', u'ๅƒ‰': u'ไฝฅ', u'ๅƒ‘': u'ไพจ', u'ๅƒ•': u'ไป†', u'ๅƒž': u'ไผช', u'ๅƒฅ': u'ไพฅ', u'ๅƒจ': u'ๅพ', u'ๅƒฑ': u'้›‡', u'ๅƒน': u'ไปท', u'ๅ„€': u'ไปช', u'ๅ„‚': u'ไพฌ', u'ๅ„„': u'ไบฟ', u'ๅ„ˆ': u'ไพฉ', u'ๅ„‰': u'ไฟญ', u'ๅ„': u'ๅ‚ง', u'ๅ„”': u'ไฟฆ', u'ๅ„•': u'ไพช', u'ๅ„˜': u'ๅฐฝ', u'ๅ„Ÿ': u'ๅฟ', u'ๅ„ช': u'ไผ˜', u'ๅ„ฒ': u'ๅ‚จ', u'ๅ„ท': u'ไฟช', u'ๅ„ธ': u'ใ‘ฉ', u'ๅ„บ': u'ๅ‚ฉ', u'ๅ„ป': u'ๅ‚ฅ', u'ๅ„ผ': u'ไฟจ', u'ๅ…‡': u'ๅ‡ถ', u'ๅ…Œ': u'ๅ…‘', u'ๅ…’': u'ๅ„ฟ', u'ๅ…—': u'ๅ…–', u'ๅ…ง': u'ๅ†…', u'ๅ…ฉ': u'ไธค', u'ๅ†Š': u'ๅ†Œ', u'ๅ†ช': u'ๅน‚', u'ๅ‡ˆ': u'ๅ‡€', u'ๅ‡': u'ๅ†ป', u'ๅ‡™': u'๐ชž', u'ๅ‡œ': u'ๅ‡›', u'ๅ‡ฑ': u'ๅ‡ฏ', u'ๅˆฅ': u'ๅˆซ', u'ๅˆช': u'ๅˆ ', u'ๅ‰„': u'ๅˆญ', u'ๅ‰‡': u'ๅˆ™', u'ๅ‰‹': u'ๅ…‹', u'ๅ‰Ž': u'ๅˆน', u'ๅ‰—': 
u'ๅˆฌ', u'ๅ‰›': u'ๅˆš', u'ๅ‰': u'ๅ‰ฅ', u'ๅ‰ฎ': u'ๅ‰', u'ๅ‰ด': u'ๅ‰€', u'ๅ‰ต': u'ๅˆ›', u'ๅ‰ท': u'้“ฒ', u'ๅŠƒ': u'ๅˆ’', u'ๅЇ': u'ๅ‰ง', u'ๅЉ': u'ๅˆ˜', u'ๅŠŠ': u'ๅˆฝ', u'ๅŠŒ': u'ๅˆฟ', u'ๅŠ': u'ๅ‰‘', u'ๅŠ': u'ใ“ฅ', u'ๅŠ‘': u'ๅ‰‚', u'ๅŠš': u'ใ”‰', u'ๅ‹': u'ๅŠฒ', u'ๅ‹•': u'ๅŠจ', u'ๅ‹™': u'ๅŠก', u'ๅ‹›': u'ๅ‹‹', u'ๅ‹': u'่ƒœ', u'ๅ‹ž': u'ๅŠณ', u'ๅ‹ข': u'ๅŠฟ', u'ๅ‹ฉ': u'ๅ‹š', u'ๅ‹ฑ': u'ๅŠข', u'ๅ‹ณ': u'ๅ‹‹', u'ๅ‹ต': u'ๅŠฑ', u'ๅ‹ธ': u'ๅŠ', u'ๅ‹ป': u'ๅŒ€', u'ๅŒญ': u'ๅŒฆ', u'ๅŒฏ': u'ๆฑ‡', u'ๅŒฑ': u'ๅŒฎ', u'ๅ€': u'ๅŒบ', u'ๅ”': u'ๅ', u'ๅป': u'ๅด', u'ๅฝ': u'ๅณ', u'ๅŽ™': u'ๅŽ', u'ๅŽ ': u'ๅŽ•', u'ๅŽค': u'ๅކ', u'ๅŽญ': u'ๅŽŒ', u'ๅŽฒ': u'ๅމ', u'ๅŽด': u'ๅŽฃ', u'ๅƒ': u'ๅ‚', u'ๅ„': u'ๅ', u'ๅข': u'ไธ›', u'ๅ’': u'ๅ’ค', u'ๅณ': u'ๅด', u'ๅถ': u'ๅ‘', u'ๅ‘‚': u'ๅ•', u'ๅ’ผ': u'ๅ‘™', u'ๅ“ก': u'ๅ‘˜', u'ๅ”„': u'ๅ‘—', u'ๅ”š': u'ๅฃ', u'ๅ•': u'้—ฎ', u'ๅ•“': u'ๅฏ', u'ๅ•ž': u'ๅ“‘', u'ๅ•Ÿ': u'ๅฏ', u'ๅ•ข': u'ๅ”ก', u'ๅ–Ž': u'ใ–ž', u'ๅ–š': u'ๅ”ค', u'ๅ–ช': u'ไธง', u'ๅ–ซ': u'ๅƒ', u'ๅ–ฌ': u'ไน”', u'ๅ–ฎ': u'ๅ•', u'ๅ–ฒ': u'ๅ“Ÿ', u'ๅ—†': u'ๅ‘›', u'ๅ—‡': u'ๅ•ฌ', u'ๅ—Š': u'ๅ”', u'ๅ—Ž': u'ๅ—', u'ๅ—š': u'ๅ‘œ', u'ๅ—ฉ': u'ๅ”ข', u'ๅ—ฐ': u'๐ ฎถ', u'ๅ—ถ': u'ๅ“”', u'ๅ—น': u'๐ชก', u'ๅ˜†': u'ๅน', u'ๅ˜': u'ๅ–ฝ', u'ๅ˜”': u'ๅ‘•', u'ๅ˜–': u'ๅ•ง', u'ๅ˜—': u'ๅฐ', u'ๅ˜œ': u'ๅ”›', u'ๅ˜ฉ': u'ๅ“—', u'ๅ˜ฎ': u'ๅ” ', u'ๅ˜ฏ': u'ๅ•ธ', u'ๅ˜ฐ': u'ๅฝ', u'ๅ˜ต': u'ๅ““', u'ๅ˜ธ': u'ๅ‘’', u'ๅ˜ฝ': u'ๅ•ด', u'ๅ™': u'ๆถ', u'ๅ™“': u'ๅ˜˜', u'ๅ™š': u'ใ–Š', u'ๅ™': u'ๅ’', u'ๅ™ ': u'ๅ“’', u'ๅ™ฅ': u'ๅ“', u'ๅ™ฆ': u'ๅ“•', u'ๅ™ฏ': u'ๅ—ณ', u'ๅ™ฒ': u'ๅ“™', u'ๅ™ด': u'ๅ–ท', u'ๅ™ธ': u'ๅจ', u'ๅ™น': u'ๅฝ“', u'ๅš€': u'ๅ’›', u'ๅš‡': u'ๅ“', u'ๅšŒ': u'ๅ“œ', u'ๅš': u'ๅฐ', u'ๅš•': u'ๅ™œ', u'ๅš™': u'ๅ•ฎ', u'ๅšฅ': u'ๅ’ฝ', u'ๅšฆ': u'ๅ‘–', u'ๅšจ': u'ๅ’™', u'ๅšฎ': u'ๅ‘', u'ๅšฒ': u'ไบธ', u'ๅšณ': u'ๅ–พ', u'ๅšด': u'ไธฅ', u'ๅšถ': u'ๅ˜ค', u'ๅ›€': u'ๅ•ญ', u'ๅ›': u'ๅ—ซ', u'ๅ›‚': u'ๅšฃ', u'ๅ›…': u'ๅ†', u'ๅ›ˆ': u'ๅ‘“', u'ๅ›Œ': u'่‹', u'ๅ›‘': u'ๅ˜ฑ', u'ๅ›ช': u'ๅ›ฑ', u'ๅœ‡': u'ๅ›ต', u'ๅœ‹': u'ๅ›ฝ', u'ๅœ': u'ๅ›ด', u'ๅœ’': u'ๅ›ญ', u'ๅœ“': 
u'ๅœ†', u'ๅœ–': u'ๅ›พ', u'ๅœ˜': u'ๅ›ข', u'ๅœž': u'๐ชขฎ', u'ๅžต': u'ๅŸฏ', u'ๅŸก': u'ๅžญ', u'ๅŸฐ': u'้‡‡', u'ๅŸท': u'ๆ‰ง', u'ๅ …': u'ๅš', u'ๅ Š': u'ๅžฉ', u'ๅ –': u'ๅžด', u'ๅ ': u'ๅŸš', u'ๅ ฏ': u'ๅฐง', u'ๅ ฑ': u'ๆŠฅ', u'ๅ ด': u'ๅœบ', u'ๅกŠ': u'ๅ—', u'ๅก‹': u'่Œ”', u'ๅก': u'ๅžฒ', u'ๅก’': u'ๅŸ˜', u'ๅก—': u'ๆถ‚', u'ๅกš': u'ๅ†ข', u'ๅกข': u'ๅž', u'ๅกค': u'ๅŸ™', u'ๅกต': u'ๅฐ˜', u'ๅกน': u'ๅ ‘', u'ๅขŠ': u'ๅžซ', u'ๅขœ': u'ๅ ', u'ๅขฎ': u'ๅ •', u'ๅขฐ': u'ๅ›', u'ๅขณ': u'ๅŸ', u'ๅขป': u'ๅข™', u'ๅขพ': u'ๅžฆ', u'ๅฃ‡': u'ๅ›', u'ๅฃˆ': u'๐ก’„', u'ๅฃ‹': u'ๅžฑ', u'ๅฃ“': u'ๅŽ‹', u'ๅฃ˜': u'ๅž’', u'ๅฃ™': u'ๅœน', u'ๅฃš': u'ๅž†', u'ๅฃœ': u'ๅ›', u'ๅฃž': u'ๅ', u'ๅฃŸ': u'ๅž„', u'ๅฃ ': u'ๅž…', u'ๅฃข': u'ๅœ', u'ๅฃฉ': u'ๅ', u'ๅฃฏ': u'ๅฃฎ', u'ๅฃบ': u'ๅฃถ', u'ๅฃผ': u'ๅฃธ', u'ๅฃฝ': u'ๅฏฟ', u'ๅค ': u'ๅคŸ', u'ๅคข': u'ๆขฆ', u'ๅคฅ': u'ไผ™', u'ๅคพ': u'ๅคน', u'ๅฅ': u'ๅฅ‚', u'ๅฅง': u'ๅฅฅ', u'ๅฅฉ': u'ๅฅ', u'ๅฅช': u'ๅคบ', u'ๅฅฌ': u'ๅฅ–', u'ๅฅฎ': u'ๅฅ‹', u'ๅฅผ': u'ๅงน', u'ๅฆ': u'ๅฆ†', u'ๅง': u'ๅง—', u'ๅงฆ': u'ๅฅธ', u'ๅจ›': u'ๅจฑ', u'ๅฉ': u'ๅจ„', u'ๅฉฆ': u'ๅฆ‡', u'ๅฉญ': u'ๅจ…', u'ๅชง': u'ๅจฒ', u'ๅชฏ': u'ๅฆซ', u'ๅชผ': u'ๅชช', u'ๅชฝ': u'ๅฆˆ', u'ๅซ—': u'ๅฆช', u'ๅซต': u'ๅฆฉ', u'ๅซป': u'ๅจด', u'ๅซฟ': u'ๅฉณ', u'ๅฌ€': u'ๅฆซ', u'ๅฌˆ': u'ๅจ†', u'ๅฌ‹': u'ๅฉต', u'ๅฌŒ': u'ๅจ‡', u'ๅฌ™': u'ๅซฑ', u'ๅฌก': u'ๅซ’', u'ๅฌค': u'ๅฌท', u'ๅฌช': u'ๅซ”', u'ๅฌฐ': u'ๅฉด', u'ๅฌธ': u'ๅฉถ', u'ๅญŒ': u'ๅจˆ', u'ๅญซ': u'ๅญ™', u'ๅญธ': u'ๅญฆ', u'ๅญฟ': u'ๅญช', u'ๅฎฎ': u'ๅฎซ', u'ๅฏ€': u'้‡‡', u'ๅฏข': u'ๅฏ', u'ๅฏฆ': u'ๅฎž', u'ๅฏง': u'ๅฎ', u'ๅฏฉ': u'ๅฎก', u'ๅฏซ': u'ๅ†™', u'ๅฏฌ': u'ๅฎฝ', u'ๅฏต': u'ๅฎ ', u'ๅฏถ': u'ๅฎ', u'ๅฐ‡': u'ๅฐ†', u'ๅฐˆ': u'ไธ“', u'ๅฐ‹': u'ๅฏป', u'ๅฐ': u'ๅฏน', u'ๅฐŽ': u'ๅฏผ', u'ๅฐท': u'ๅฐด', u'ๅฑ†': u'ๅฑŠ', u'ๅฑ': u'ๅฐธ', u'ๅฑ“': u'ๅฑƒ', u'ๅฑœ': u'ๅฑ‰', u'ๅฑข': u'ๅฑก', u'ๅฑค': u'ๅฑ‚', u'ๅฑจ': u'ๅฑฆ', u'ๅฑฉ': u'๐ชจ—', u'ๅฑฌ': u'ๅฑž', u'ๅฒก': u'ๅ†ˆ', u'ๅณด': u'ๅฒ˜', u'ๅณถ': u'ๅฒ›', u'ๅณฝ': u'ๅณก', u'ๅด': u'ๅดƒ', u'ๅด‘': u'ๆ˜†', u'ๅด—': u'ๅฒ—', u'ๅด™': u'ไป‘', u'ๅดข': u'ๅณฅ', u'ๅดฌ': u'ๅฒฝ', u'ๅต': u'ๅฒš', 
u'ๅต—': u'ๅฒ', u'ๅถ': u'ๅต', u'ๅถ„': u'ๅดญ', u'ๅถ‡': u'ๅฒ–', u'ๅถ”': u'ๅตš', u'ๅถ—': u'ๅด‚', u'ๅถ ': u'ๅณค', u'ๅถข': u'ๅณฃ', u'ๅถง': u'ๅณ„', u'ๅถฎ': u'ๅด„', u'ๅถด': u'ๅฒ™', u'ๅถธ': u'ๅต˜', u'ๅถบ': u'ๅฒญ', u'ๅถผ': u'ๅฑฟ', u'ๅถฝ': u'ๅฒณ', u'ๅท‹': u'ๅฒฟ', u'ๅท’': u'ๅณฆ', u'ๅท”': u'ๅท…', u'ๅท–': u'ๅฒฉ', u'ๅทฐ': u'ๅทฏ', u'ๅทน': u'ๅบ', u'ๅธฅ': u'ๅธ…', u'ๅธซ': u'ๅธˆ', u'ๅธณ': u'ๅธ', u'ๅธถ': u'ๅธฆ', u'ๅน€': u'ๅธง', u'ๅนƒ': u'ๅธ', u'ๅน—': u'ๅธผ', u'ๅน˜': u'ๅธป', u'ๅนŸ': u'ๅธœ', u'ๅนฃ': u'ๅธ', u'ๅนซ': u'ๅธฎ', u'ๅนฌ': u'ๅธฑ', u'ๅนน': u'ๅนฒ', u'ๅนบ': u'ไนˆ', u'ๅนพ': u'ๅ‡ ', u'ๅบซ': u'ๅบ“', u'ๅป': u'ๅŽ•', u'ๅป‚': u'ๅŽข', u'ๅป„': u'ๅŽฉ', u'ๅปˆ': u'ๅŽฆ', u'ๅปš': u'ๅŽจ', u'ๅป': u'ๅŽฎ', u'ๅปŸ': u'ๅบ™', u'ๅป ': u'ๅŽ‚', u'ๅปก': u'ๅบ‘', u'ๅปข': u'ๅบŸ', u'ๅปฃ': u'ๅนฟ', u'ๅปฉ': u'ๅปช', u'ๅปฌ': u'ๅบ', u'ๅปณ': u'ๅŽ…', u'ๅผ’': u'ๅผ‘', u'ๅผ”': u'ๅŠ', u'ๅผณ': u'ๅผช', u'ๅผต': u'ๅผ ', u'ๅผท': u'ๅผบ', u'ๅฝ†': u'ๅˆซ', u'ๅฝˆ': u'ๅผน', u'ๅฝŒ': u'ๅผฅ', u'ๅฝŽ': u'ๅผฏ', u'ๅฝ™': u'ๆฑ‡', u'ๅฝž': u'ๅฝ', u'ๅฝฅ': u'ๅฝฆ', u'ๅพŒ': u'ๅŽ', u'ๅพ‘': u'ๅพ„', u'ๅพž': u'ไปŽ', u'ๅพ ': u'ๅพ•', u'ๅพฉ': u'ๅค', u'ๅพต': u'ๅพ', u'ๅพน': u'ๅฝป', u'ๆ†': u'ๆ’', u'ๆฅ': u'่€ป', u'ๆ‚…': u'ๆ‚ฆ', u'ๆ‚ž': u'ๆ‚ฎ', u'ๆ‚ต': u'ๆ€…', u'ๆ‚ถ': u'้—ท', u'ๆƒก': u'ๆถ', u'ๆƒฑ': u'ๆผ', u'ๆƒฒ': u'ๆฝ', u'ๆƒป': u'ๆป', u'ๆ„›': u'็ˆฑ', u'ๆ„œ': u'ๆƒฌ', u'ๆ„จ': u'ๆ‚ซ', u'ๆ„ด': u'ๆ€†', u'ๆ„ท': u'ๆบ', u'ๆ„พ': u'ๅฟพ', u'ๆ…„': u'ๆ —', u'ๆ…‹': u'ๆ€', u'ๆ…': u'ๆ„ ', u'ๆ…˜': u'ๆƒจ', u'ๆ…š': u'ๆƒญ', u'ๆ…Ÿ': u'ๆธ', u'ๆ…ฃ': u'ๆƒฏ', u'ๆ…ค': u'ๆ‚ซ', u'ๆ…ช': u'ๆ€„', u'ๆ…ซ': u'ๆ€‚', u'ๆ…ฎ': u'่™‘', u'ๆ…ณ': u'ๆ‚ญ', u'ๆ…ถ': u'ๅบ†', u'ๆ…ผ': u'ๆˆš', u'ๆ…พ': u'ๆฌฒ', u'ๆ†‚': u'ๅฟง', u'ๆ†Š': u'ๆƒซ', u'ๆ†': u'ๆ€œ', u'ๆ†‘': u'ๅ‡ญ', u'ๆ†’': u'ๆ„ฆ', u'ๆ†š': u'ๆƒฎ', u'ๆ†ค': u'ๆ„ค', u'ๆ†ซ': u'ๆ‚ฏ', u'ๆ†ฎ': u'ๆ€ƒ', u'ๆ†ฒ': u'ๅฎช', u'ๆ†ถ': u'ๅฟ†', u'ๆ‡‡': u'ๆณ', u'ๆ‡‰': u'ๅบ”', u'ๆ‡Œ': u'ๆ€ฟ', u'ๆ‡': u'ๆ‡”', u'ๆ‡ž': u'่’™', u'ๆ‡Ÿ': u'ๆ€ผ', u'ๆ‡ฃ': u'ๆ‡‘', u'ๆ‡จ': u'ๆน', u'ๆ‡ฒ': u'ๆƒฉ', u'ๆ‡ถ': u'ๆ‡’', u'ๆ‡ท': u'ๆ€€', u'ๆ‡ธ': u'ๆ‚ฌ', u'ๆ‡บ': u'ๅฟ', 
u'ๆ‡ผ': u'ๆƒง', u'ๆ‡พ': u'ๆ…‘', u'ๆˆ€': u'ๆ‹', u'ๆˆ‡': u'ๆˆ†', u'ๆˆ”': u'ๆˆ‹', u'ๆˆง': u'ๆˆ—', u'ๆˆฉ': u'ๆˆฌ', u'ๆˆฐ': u'ๆˆ˜', u'ๆˆฑ': u'ๆˆฏ', u'ๆˆฒ': u'ๆˆ', u'ๆˆถ': u'ๆˆท', u'ๆ‹‹': u'ๆŠ›', u'ๆ‹š': u'ๆ‹ผ', u'ๆŒฉ': u'ๆ', u'ๆŒฑ': u'ๆŒฒ', u'ๆŒพ': u'ๆŒŸ', u'ๆจ': u'่ˆ', u'ๆซ': u'ๆ‰ช', u'ๆฑ': u'ๆŒจ', u'ๆฒ': u'ๅท', u'ๆŽƒ': u'ๆ‰ซ', u'ๆŽ„': u'ๆŠก', u'ๆŽ—': u'ๆŒœ', u'ๆŽ™': u'ๆŒฃ', u'ๆŽ›': u'ๆŒ‚', u'ๆŽก': u'้‡‡', u'ๆ€': u'ๆ‹ฃ', u'ๆš': u'ๆ‰ฌ', u'ๆ›': u'ๆข', u'ๆฎ': u'ๆŒฅ', u'ๆ': u'ๆŸ', u'ๆ–': u'ๆ‘‡', u'ๆ—': u'ๆฃ', u'ๆต': u'ๆพ', u'ๆถ': u'ๆŠข', u'ๆ‘‘': u'ๆŽด', u'ๆ‘œ': u'ๆŽผ', u'ๆ‘Ÿ': u'ๆ‚', u'ๆ‘ฏ': u'ๆŒš', u'ๆ‘ณ': u'ๆŠ ', u'ๆ‘ถ': u'ๆŠŸ', u'ๆ‘บ': u'ๆŠ˜', u'ๆ‘ป': u'ๆŽบ', u'ๆ’ˆ': u'ๆž', u'ๆ’': u'ๆŒฆ', u'ๆ’': u'ๆ’‘', u'ๆ’“': u'ๆŒ ', u'ๆ’': u'ใง‘', u'ๆ’Ÿ': u'ๆŒข', u'ๆ’ฃ': u'ๆŽธ', u'ๆ’ฅ': u'ๆ‹จ', u'ๆ’ซ': u'ๆŠš', u'ๆ’ฒ': u'ๆ‰‘', u'ๆ’ณ': u'ๆฟ', u'ๆ’ป': u'ๆŒž', u'ๆ’พ': u'ๆŒ', u'ๆ’ฟ': u'ๆก', u'ๆ“': u'ๆ‹ฅ', u'ๆ“„': u'ๆŽณ', u'ๆ“‡': u'ๆ‹ฉ', u'ๆ“Š': u'ๅ‡ป', u'ๆ“‹': u'ๆŒก', u'ๆ““': u'ใงŸ', u'ๆ“”': u'ๆ‹…', u'ๆ“š': u'ๆฎ', u'ๆ“ ': u'ๆŒค', u'ๆ“ฌ': u'ๆ‹Ÿ', u'ๆ“ฏ': u'ๆ‘ˆ', u'ๆ“ฐ': u'ๆ‹ง', u'ๆ“ฑ': u'ๆ', u'ๆ“ฒ': u'ๆŽท', u'ๆ“ด': u'ๆ‰ฉ', u'ๆ“ท': u'ๆ’ท', u'ๆ“บ': u'ๆ‘†', u'ๆ“ป': u'ๆ“ž', u'ๆ“ผ': u'ๆ’ธ', u'ๆ“พ': u'ๆ‰ฐ', u'ๆ”„': u'ๆ‘…', u'ๆ”†': u'ๆ’ต', u'ๆ”': u'ๆ‹ข', u'ๆ””': u'ๆ‹ฆ', u'ๆ”–': u'ๆ’„', u'ๆ”™': u'ๆ€', u'ๆ”›': u'ๆ’บ', u'ๆ”œ': u'ๆบ', u'ๆ”': u'ๆ‘„', u'ๆ”ข': u'ๆ”’', u'ๆ”ฃ': u'ๆŒ›', u'ๆ”ค': u'ๆ‘Š', u'ๆ”ช': u'ๆ…', u'ๆ”ฌ': u'ๆฝ', u'ๆ•—': u'่ดฅ', u'ๆ•˜': u'ๅ™', u'ๆ•ต': u'ๆ•Œ', u'ๆ•ธ': u'ๆ•ฐ', u'ๆ–‚': u'ๆ•›', u'ๆ–ƒ': u'ๆฏ™', u'ๆ–•': u'ๆ–“', u'ๆ–ฌ': u'ๆ–ฉ', u'ๆ–ท': u'ๆ–ญ', u'ๆ–ผ': u'ไบŽ', u'ๆ—‚': u'ๆ——', u'ๆ—ฃ': u'ๆ—ข', u'ๆ˜‡': u'ๅ‡', u'ๆ™‚': u'ๆ—ถ', u'ๆ™‰': u'ๆ™‹', u'ๆ™': u'ๆ˜ผ', u'ๆšˆ': u'ๆ™•', u'ๆš‰': u'ๆ™–', u'ๆš˜': u'ๆ—ธ', u'ๆšข': u'็•…', u'ๆšซ': u'ๆš‚', u'ๆ›„': u'ๆ™”', u'ๆ›†': u'ๅކ', u'ๆ›‡': u'ๆ˜™', u'ๆ›‰': u'ๆ™“', u'ๆ›': u'ๅ‘', u'ๆ›–': u'ๆšง', u'ๆ› ': u'ๆ—ท', u'ๆ›จ': u'ๆ˜ฝ', u'ๆ›ฌ': u'ๆ™’', u'ๆ›ธ': u'ไนฆ', u'ๆœƒ': u'ไผš', u'ๆœง': u'่ƒง', u'ๆœฎ': u'ๆœฏ', 
u'ๆฑ': u'ไธœ', u'ๆด': u'้”จ', u'ๆŸต': u'ๆ …', u'ๆกฟ': u'ๆ†', u'ๆข”': u'ๆ €', u'ๆข˜': u'ๆžง', u'ๆข': u'ๆก', u'ๆขŸ': u'ๆžญ', u'ๆขฒ': u'ๆฃ', u'ๆฃ„': u'ๅผƒ', u'ๆฃŠ': u'ๆฃ‹', u'ๆฃ–': u'ๆžจ', u'ๆฃ—': u'ๆžฃ', u'ๆฃŸ': u'ๆ ‹', u'ๆฃก': u'๎ ญ', u'ๆฃง': u'ๆ ˆ', u'ๆฃฒ': u'ๆ –', u'ๆฃถ': u'ๆขพ', u'ๆค': u'ๆก ', u'ๆฅŠ': u'ๆจ', u'ๆฅ“': u'ๆžซ', u'ๆฅจ': u'ๆกข', u'ๆฅญ': u'ไธš', u'ๆฅต': u'ๆž', u'ๆฆฆ': u'ๅนฒ', u'ๆฆช': u'ๆฉ', u'ๆฆฎ': u'่ฃ', u'ๆฆฒ': u'ๆฆ…', u'ๆฆฟ': u'ๆกค', u'ๆง‹': u'ๆž„', u'ๆง': u'ๆžช', u'ๆง“': u'ๆ ', u'ๆงค': u'ๆขฟ', u'ๆงง': u'ๆค ', u'ๆงจ': u'ๆค', u'ๆงณ': u'ๆกจ', u'ๆจ': u'ๆกฉ', u'ๆจ‚': u'ไน', u'ๆจ…': u'ๆžž', u'ๆจ‘': u'ๆข', u'ๆจ“': u'ๆฅผ', u'ๆจ™': u'ๆ ‡', u'ๆจž': u'ๆžข', u'ๆจฃ': u'ๆ ท', u'ๆจธ': u'ๆœด', u'ๆจน': u'ๆ ‘', u'ๆจบ': u'ๆกฆ', u'ๆฉˆ': u'ๆกก', u'ๆฉ‹': u'ๆกฅ', u'ๆฉŸ': u'ๆœบ', u'ๆฉข': u'ๆคญ', u'ๆฉซ': u'ๆจช', u'ๆช': u'ๆชฉ', u'ๆช‰': u'ๆŸฝ', u'ๆช”': u'ๆกฃ', u'ๆชœ': u'ๆกง', u'ๆชŸ': u'ๆงš', u'ๆชข': u'ๆฃ€', u'ๆชฃ': u'ๆจฏ', u'ๆชฎ': u'ๆขผ', u'ๆชฏ': u'ๅฐ', u'ๆชณ': u'ๆงŸ', u'ๆชธ': u'ๆŸ ', u'ๆชป': u'ๆง›', u'ๆซƒ': u'ๆŸœ', u'ๆซ“': u'ๆฉน', u'ๆซš': u'ๆฆˆ', u'ๆซ›': u'ๆ ‰', u'ๆซ': u'ๆคŸ', u'ๆซž': u'ๆฉผ', u'ๆซŸ': u'ๆ Ž', u'ๆซฅ': u'ๆฉฑ', u'ๆซง': u'ๆง ', u'ๆซจ': u'ๆ Œ', u'ๆซช': u'ๆžฅ', u'ๆซซ': u'ๆฉฅ', u'ๆซฌ': u'ๆฆ‡', u'ๆซฑ': u'่˜–', u'ๆซณ': u'ๆ Š', u'ๆซธ': u'ๆฆ‰', u'ๆซป': u'ๆจฑ', u'ๆฌ„': u'ๆ ', u'ๆฌ…': u'ๆฆ‰', u'ๆฌŠ': u'ๆƒ', u'ๆฌ': u'ๆคค', u'ๆฌ’': u'ๆ พ', u'ๆฌ–': u'ๆฆ„', u'ๆฌž': u'ๆฃ‚', u'ๆฌฝ': u'้’ฆ', u'ๆญŽ': u'ๅน', u'ๆญ': u'ๆฌง', u'ๆญŸ': u'ๆฌค', u'ๆญก': u'ๆฌข', u'ๆญฒ': u'ๅฒ', u'ๆญท': u'ๅކ', u'ๆญธ': u'ๅฝ’', u'ๆญฟ': u'ๆฎ', u'ๆฎ˜': u'ๆฎ‹', u'ๆฎž': u'ๆฎ’', u'ๆฎค': u'ๆฎ‡', u'ๆฎจ': u'ใฑฎ', u'ๆฎซ': u'ๆฎš', u'ๆฎญ': u'ๅƒต', u'ๆฎฎ': u'ๆฎ“', u'ๆฎฏ': u'ๆฎก', u'ๆฎฐ': u'ใฑฉ', u'ๆฎฒ': u'ๆญผ', u'ๆฎบ': u'ๆ€', u'ๆฎป': u'ๅฃณ', u'ๆฎผ': u'ๅฃณ', u'ๆฏ€': u'ๆฏ', u'ๆฏ†': u'ๆฎด', u'ๆฏฟ': u'ๆฏต', u'ๆฐ‚': u'็‰ฆ', u'ๆฐˆ': u'ๆฏก', u'ๆฐŒ': u'ๆฐ‡', u'ๆฐฃ': u'ๆฐ”', u'ๆฐซ': u'ๆฐข', u'ๆฐฌ': u'ๆฐฉ', u'ๆฐณ': u'ๆฐฒ', u'ๆฑ™': u'ๆฑก', u'ๆฑบ': u'ๅ†ณ', u'ๆฒ’': u'ๆฒก', u'ๆฒ–': u'ๅ†ฒ', u'ๆณ': u'ๅ†ต', 
u'ๆณ': u'ๆบฏ', u'ๆดฉ': u'ๆณ„', u'ๆดถ': u'ๆฑน', u'ๆตน': u'ๆตƒ', u'ๆถ‡': u'ๆณพ', u'ๆถผ': u'ๅ‡‰', u'ๆท’': u'ๅ‡„', u'ๆทš': u'ๆณช', u'ๆทฅ': u'ๆธŒ', u'ๆทจ': u'ๅ‡€', u'ๆทฉ': u'ๅ‡Œ', u'ๆทช': u'ๆฒฆ', u'ๆทต': u'ๆธŠ', u'ๆทถ': u'ๆถž', u'ๆทบ': u'ๆต…', u'ๆธ™': u'ๆถฃ', u'ๆธ›': u'ๅ‡', u'ๆธฆ': u'ๆถก', u'ๆธฌ': u'ๆต‹', u'ๆธพ': u'ๆต‘', u'ๆนŠ': u'ๅ‡‘', u'ๆนž': u'ๆตˆ', u'ๆนง': u'ๆถŒ', u'ๆนฏ': u'ๆฑค', u'ๆบˆ': u'ๆฒฉ', u'ๆบ–': u'ๅ‡†', u'ๆบ': u'ๆฒŸ', u'ๆบซ': u'ๆธฉ', u'ๆป„': u'ๆฒง', u'ๆป…': u'็ญ', u'ๆปŒ': u'ๆถค', u'ๆปŽ': u'่ฅ', u'ๆป™': u'ๆฑ‡', u'ๆปฌ': u'ๆฒช', u'ๆปฏ': u'ๆปž', u'ๆปฒ': u'ๆธ—', u'ๆปท': u'ๅค', u'ๆปธ': u'ๆต’', u'ๆปป': u'ๆต', u'ๆปพ': u'ๆปš', u'ๆปฟ': u'ๆปก', u'ๆผ': u'ๆธ”', u'ๆผš': u'ๆฒค', u'ๆผข': u'ๆฑ‰', u'ๆผฃ': u'ๆถŸ', u'ๆผฌ': u'ๆธ', u'ๆผฒ': u'ๆถจ', u'ๆผต': u'ๆบ†', u'ๆผธ': u'ๆธ', u'ๆผฟ': u'ๆต†', u'ๆฝ': u'้ข', u'ๆฝ‘': u'ๆณผ', u'ๆฝ”': u'ๆด', u'ๆฝ™': u'ๆฒฉ', u'ๆฝ›': u'ๆฝœ', u'ๆฝค': u'ๆถฆ', u'ๆฝฏ': u'ๆต”', u'ๆฝฐ': u'ๆบƒ', u'ๆฝท': u'ๆป—', u'ๆฝฟ': u'ๆถ ', u'ๆพ€': u'ๆถฉ', u'ๆพ†': u'ๆต‡', u'ๆพ‡': u'ๆถ', u'ๆพ': u'ๆฒ„', u'ๆพ—': u'ๆถง', u'ๆพ ': u'ๆธ‘', u'ๆพค': u'ๆณฝ', u'ๆพฆ': u'ๆปช', u'ๆพฉ': u'ๆณถ', u'ๆพฎ': u'ๆต', u'ๆพฑ': u'ๆท€', u'ๆพพ': u'ใณ ', u'ๆฟ': u'ๆตŠ', u'ๆฟƒ': u'ๆต“', u'ๆฟ•': u'ๆนฟ', u'ๆฟ˜': u'ๆณž', u'ๆฟŸ': u'ๆตŽ', u'ๆฟค': u'ๆถ›', u'ๆฟซ': u'ๆปฅ', u'ๆฟฐ': u'ๆฝ', u'ๆฟฑ': u'ๆปจ', u'ๆฟบ': u'ๆบ…', u'ๆฟผ': u'ๆณบ', u'ๆฟพ': u'ๆปค', u'็€…': u'ๆปข', u'็€†': u'ๆธŽ', u'็€‡': u'ใฒฟ', u'็€‰': u'ๆณป', u'็€‹': u'ๆฒˆ', u'็€': u'ๆต', u'็€•': u'ๆฟ’', u'็€˜': u'ๆณธ', u'็€': u'ๆฒฅ', u'็€Ÿ': u'ๆฝ‡', u'็€ ': u'ๆฝ†', u'็€ฆ': u'ๆฝด', u'็€ง': u'ๆณท', u'็€จ': u'ๆฟ‘', u'็€ฐ': u'ๅผฅ', u'็€ฒ': u'ๆฝ‹', u'็€พ': u'ๆพœ', u'็ƒ': u'ๆฒฃ', u'็„': u'ๆป ', u'็‘': u'ๆด’', u'็•': u'ๆผ“', u'็˜': u'ๆปฉ', u'็': u'็', u'็ ': u'ๆผค', u'็ฃ': u'ๆนพ', u'็ค': u'ๆปฆ', u'็ง': u'ๆปŸ', u'็ฝ': u'็พ', u'็‚บ': u'ไธบ', u'็ƒ': u'ไนŒ', u'็ƒด': u'็ƒƒ', u'็„ก': u'ๆ— ', u'็…‰': u'็‚ผ', u'็…’': u'็‚œ', u'็…™': u'็ƒŸ', u'็…ข': u'่Œ•', u'็…ฅ': u'็„•', u'็…ฉ': u'็ƒฆ', u'็…ฌ': u'็‚€', u'็…ฑ': u'ใถฝ', u'็†…': u'็…ด', 
u'็†’': u'่ง', u'็†—': u'็‚', u'็†ฑ': u'็ƒญ', u'็†ฒ': u'้ขŽ', u'็†พ': u'็‚ฝ', u'็‡': u'็ƒจ', u'็‡ˆ': u'็ฏ', u'็‡‰': u'็‚–', u'็‡’': u'็ƒง', u'็‡™': u'็ƒซ', u'็‡œ': u'็„–', u'็‡Ÿ': u'่ฅ', u'็‡ฆ': u'็ฟ', u'็‡ฌ': u'ๆฏ', u'็‡ญ': u'็ƒ›', u'็‡ด': u'็ƒฉ', u'็‡ถ': u'ใถถ', u'็‡ผ': u'็ƒฌ', u'็‡พ': u'็„˜', u'็ˆ': u'็ƒ', u'็ˆ': u'็‚‰', u'็ˆ›': u'็ƒ‚', u'็ˆญ': u'ไบ‰', u'็ˆฒ': u'ไธบ', u'็ˆบ': u'็ˆท', u'็ˆพ': u'ๅฐ”', u'็‰†': u'ๅข™', u'็‰˜': u'็‰', u'็‰ฝ': u'็‰ต', u'็Š–': u'่ฆ', u'็Šข': u'็ŠŠ', u'็Šง': u'็‰บ', u'็‹€': u'็Šถ', u'็‹น': u'็‹ญ', u'็‹ฝ': u'็‹ˆ', u'็Œ™': u'็‹ฐ', u'็Œถ': u'็Šน', u'็Œป': u'็‹ฒ', u'็': u'็Šธ', u'็ƒ': u'ๅ‘†', u'็„': u'็‹ฑ', u'็…': u'็‹ฎ', u'็Ž': u'ๅฅ–', u'็จ': u'็‹ฌ', u'็ช': u'็‹ฏ', u'็ซ': u'็Œƒ', u'็ฎ': u'็‹', u'็ฐ': u'็‹ž', u'็ฑ': u'ใบ', u'็ฒ': u'่Žท', u'็ต': u'็ŒŽ', u'็ท': u'็Šท', u'็ธ': u'ๅ…ฝ', u'็บ': u'็ญ', u'็ป': u'็Œฎ', u'็ผ': u'็Œ•', u'็Ž€': u'็Œก', u'็พ': u'็Žฐ', u'็บ': u'็', u'็ฟ': u'็ฒ', u'็‘‹': u'็Žฎ', u'็‘’': u'็Žš', u'็‘ฃ': u'็', u'็‘ค': u'็‘ถ', u'็‘ฉ': u'่Žน', u'็‘ช': u'็Ž›', u'็‘ฒ': u'็Žฑ', u'็‘ฝ': u'๐ชป', u'็’‰': u'็', u'็’ฃ': u'็Ž‘', u'็’ฆ': u'็‘ท', u'็’ซ': u'็ฐ', u'็’ฐ': u'็Žฏ', u'็’ฝ': u'็Žบ', u'็“Š': u'็ผ', u'็“': u'็‘', u'็“”': u'็’Ž', u'็“š': u'็“’', u'็”Œ': u'็“ฏ', u'็”•': u'็“ฎ', u'็”ข': u'ไบง', u'็”ฃ': u'ไบง', u'็”ฆ': u'่‹', u'็”ฏ': u'ๅฎ', u'็•': u'ไบฉ', u'็•ข': u'ๆฏ•', u'็•ซ': u'็”ป', u'็•ฐ': u'ๅผ‚', u'็•ต': u'็”ป', u'็•ถ': u'ๅฝ“', u'็–‡': u'็•ด', u'็–Š': u'ๅ ', u'็—™': u'็—‰', u'็— ': u'้…ธ', u'็—พ': u'็–ด', u'็˜‚': u'็—–', u'็˜‹': u'็–ฏ', u'็˜': u'็–ก', u'็˜“': u'็—ช', u'็˜ž': u'็˜—', u'็˜ก': u'็–ฎ', u'็˜ง': u'็–Ÿ', u'็˜ฎ': u'็˜†', u'็˜ฒ': u'็–ญ', u'็˜บ': u'็˜˜', u'็˜ป': u'็˜˜', u'็™‚': u'็–—', u'็™†': u'็—จ', u'็™‡': u'็—ซ', u'็™‰': u'็˜…', u'็™’': u'ๆ„ˆ', u'็™˜': u'็– ', u'็™Ÿ': u'็˜ช', u'็™ก': u'็—ด', u'็™ข': u'็—’', u'็™ค': u'็––', u'็™ฅ': u'็—‡', u'็™ง': u'็–ฌ', u'็™ฉ': u'็™ž', u'็™ฌ': u'็™ฃ', u'็™ญ': u'็˜ฟ', u'็™ฎ': u'็˜พ', u'็™ฐ': u'็—ˆ', u'็™ฑ': u'็˜ซ', u'็™ฒ': 
u'็™ซ', u'็™ผ': u'ๅ‘', u'็šš': u'็š‘', u'็šฐ': u'็–ฑ', u'็šธ': u'็šฒ', u'็šบ': u'็šฑ', u'็›ƒ': u'ๆฏ', u'็›œ': u'็›—', u'็›ž': u'็›', u'็›ก': u'ๅฐฝ', u'็›ฃ': u'็›‘', u'็›ค': u'็›˜', u'็›ง': u'ๅข', u'็›ช': u'่ก', u'็œž': u'็œŸ', u'็œฅ': u'็œฆ', u'็œพ': u'ไผ—', u'็': u'๐ชพข', u'็': u'ๅ›ฐ', u'็œ': u'็', u'็ž': u'็', u'็ž˜': u'็œ', u'็žœ': u'ไ–', u'็žž': u'็ž’', u'็žญ': u'ไบ†', u'็žถ': u'็ž†', u'็žผ': u'็‘', u'็Ÿ‡': u'่’™', u'็Ÿ“': u'็œฌ', u'็Ÿš': u'็žฉ', u'็Ÿฏ': u'็Ÿซ', u'็กƒ': u'ๆœฑ', u'็กœ': u'็ก', u'็กค': u'็ก–', u'็กจ': u'็ —', u'็กฏ': u'็ š', u'็ข•': u'ๅŸผ', u'็ขฉ': u'็ก•', u'็ขญ': u'็ €', u'็ขธ': u'็ œ', u'็ขบ': u'็กฎ', u'็ขผ': u'็ ', u'็ฃ‘': u'็ก™', u'็ฃš': u'็ –', u'็ฃฃ': u'็ขœ', u'็ฃง': u'็ข›', u'็ฃฏ': u'็Ÿถ', u'็ฃฝ': u'็ก—', u'็ค†': u'็กท', u'็คŽ': u'็ก€', u'็ค™': u'็ข', u'็คฆ': u'็Ÿฟ', u'็คช': u'็ บ', u'็คซ': u'็ พ', u'็คฌ': u'็Ÿพ', u'็คฑ': u'็ ป', u'็ฅ˜': u'็ฎ—', u'็ฅฟ': u'็ฆ„', u'็ฆ': u'็ฅธ', u'็ฆŽ': u'็ฅฏ', u'็ฆ•': u'็ฅŽ', u'็ฆก': u'็ฅƒ', u'็ฆฆ': u'ๅพก', u'็ฆช': u'็ฆ…', u'็ฆฎ': u'็คผ', u'็ฆฐ': u'็ฅข', u'็ฆฑ': u'็ฅท', u'็ฆฟ': u'็งƒ', u'็งˆ': u'็ฑผ', u'็จ…': u'็จŽ', u'็จˆ': u'็ง†', u'็จ': u'ไ…‰', u'็จœ': u'ๆฃฑ', u'็จŸ': u'็ฆ€', u'็จฎ': u'็ง', u'็จฑ': u'็งฐ', u'็ฉ€': u'่ฐท', u'็ฉŒ': u'็จฃ', u'็ฉ': u'็งฏ', u'็ฉŽ': u'้ข–', u'็ฉ ': u'็งพ', u'็ฉก': u'็ฉ‘', u'็ฉข': u'็งฝ', u'็ฉฉ': u'็จณ', u'็ฉซ': u'่Žท', u'็ฉญ': u'็จ†', u'็ชฉ': u'็ช', u'็ชช': u'ๆดผ', u'็ชฎ': u'็ฉท', u'็ชฏ': u'็ช‘', u'็ชต': u'็ชŽ', u'็ชถ': u'็ชญ', u'็ชบ': u'็ชฅ', u'็ซ„': u'็ชœ', u'็ซ…': u'็ช', u'็ซ‡': u'็ชฆ', u'็ซˆ': u'็ถ', u'็ซŠ': u'็ชƒ', u'็ซช': u'็ซ–', u'็ซถ': u'็ซž', u'็ญ†': u'็ฌ”', u'็ญ': u'็ฌ‹', u'็ญง': u'็ฌ•', u'็ญด': u'ไ‡ฒ', u'็ฎ‡': u'ไธช', u'็ฎ‹': u'็ฌบ', u'็ฎ': u'็ญ', u'็ฏ€': u'่Š‚', u'็ฏ„': u'่Œƒ', u'็ฏ‰': u'็ญ‘', u'็ฏ‹': u'็ฎง', u'็ฏ”': u'็ญผ', u'็ฏค': u'็ฌƒ', u'็ฏฉ': u'็ญ›', u'็ฏณ': u'็ญš', u'็ฐ€': u'็ฎฆ', u'็ฐ': u'็ฏ“', u'็ฐ‘': u'่“‘', u'็ฐž': u'็ฎช', u'็ฐก': u'็ฎ€', u'็ฐฃ': u'็ฏ‘', u'็ฐซ': u'็ฎซ', u'็ฐน': u'็ญœ', u'็ฐฝ': u'็ญพ', u'็ฐพ': u'ๅธ˜', 
u'็ฑƒ': u'็ฏฎ', u'็ฑŒ': u'็ญน', u'็ฑ™': u'็ฎ“', u'็ฑœ': u'็ฎจ', u'็ฑŸ': u'็ฑ', u'็ฑ ': u'็ฌผ', u'็ฑค': u'็ญพ', u'็ฑฉ': u'็ฌพ', u'็ฑช': u'็ฐ–', u'็ฑฌ': u'็ฏฑ', u'็ฑฎ': u'็ฎฉ', u'็ฑฒ': u'ๅ', u'็ฒต': u'็ฒค', u'็ณ': u'็ณ', u'็ณž': u'็ฒช', u'็ณง': u'็ฒฎ', u'็ณฐ': u'ๅ›ข', u'็ณฒ': u'็ฒ', u'็ณด': u'็ฑด', u'็ณถ': u'็ฒœ', u'็ณน': u'็บŸ', u'็ณพ': u'็บ ', u'็ด€': u'็บช', u'็ด‚': u'็บฃ', u'็ด„': u'็บฆ', u'็ด…': u'็บข', u'็ด†': u'็บก', u'็ด‡': u'็บฅ', u'็ดˆ': u'็บจ', u'็ด‰': u'็บซ', u'็ด‹': u'็บน', u'็ด': u'็บณ', u'็ด': u'็บฝ', u'็ด“': u'็บพ', u'็ด”': u'็บฏ', u'็ด•': u'็บฐ', u'็ด–': u'็บผ', u'็ด—': u'็บฑ', u'็ด˜': u'็บฎ', u'็ด™': u'็บธ', u'็ดš': u'็บง', u'็ด›': u'็บท', u'็ดœ': u'็บญ', u'็ด': u'็บด', u'็ดก': u'็บบ', u'็ดฌ': u'ไŒท', u'็ดฎ': u'ๆ‰Ž', u'็ดฐ': u'็ป†', u'็ดฑ': u'็ป‚', u'็ดฒ': u'็ป', u'็ดณ': u'็ป…', u'็ดต': u'็บป', u'็ดน': u'็ป', u'็ดบ': u'็ป€', u'็ดผ': u'็ป‹', u'็ดฟ': u'็ป', u'็ต€': u'็ปŒ', u'็ต‚': u'็ปˆ', u'็ต„': u'็ป„', u'็ต…': u'ไŒน', u'็ต†': u'็ปŠ', u'็ตŽ': u'็ป—', u'็ต': u'็ป“', u'็ต•': u'็ป', u'็ต›': u'็ปฆ', u'็ต': u'็ป”', u'็ตž': u'็ปž', u'็ตก': u'็ปœ', u'็ตข': u'็ปš', u'็ตฆ': u'็ป™', u'็ตจ': u'็ป’', u'็ตฐ': u'็ป–', u'็ตฑ': u'็ปŸ', u'็ตฒ': u'ไธ', u'็ตณ': u'็ป›', u'็ตถ': u'็ป', u'็ตน': u'็ปข', u'็ตบ': u'๐ซ„จ', u'็ถ': u'็ป‘', u'็ถƒ': u'็ปก', u'็ถ†': u'็ป ', u'็ถˆ': u'็ปจ', u'็ถ‰': u'็ปฃ', u'็ถŒ': u'็ปค', u'็ถ': u'็ปฅ', u'็ถ': u'ไŒผ', u'็ถ“': u'็ป', u'็ถœ': u'็ปผ', u'็ถž': u'็ผ', u'็ถ ': u'็ปฟ', u'็ถข': u'็ปธ', u'็ถฃ': u'็ปป', u'็ถซ': u'็บฟ', u'็ถฌ': u'็ปถ', u'็ถญ': u'็ปด', u'็ถฏ': u'็ปน', u'็ถฐ': u'็ปพ', u'็ถฑ': u'็บฒ', u'็ถฒ': u'็ฝ‘', u'็ถณ': u'็ปท', u'็ถด': u'็ผ€', u'็ถต': u'ๅฝฉ', u'็ถธ': u'็บถ', u'็ถน': u'็ปบ', u'็ถบ': u'็ปฎ', u'็ถป': u'็ปฝ', u'็ถฝ': u'็ปฐ', u'็ถพ': u'็ปซ', u'็ถฟ': u'็ปต', u'็ท„': u'็ปฒ', u'็ท‡': u'็ผ', u'็ทŠ': u'็ดง', u'็ท‹': u'็ปฏ', u'็ท‘': u'็ปฟ', u'็ท’': u'็ปช', u'็ท“': u'็ปฌ', u'็ท”': u'็ปฑ', u'็ท—': u'็ผƒ', u'็ท˜': u'็ผ„', u'็ท™': u'็ผ‚', u'็ทš': u'็บฟ', u'็ท': u'็ผ‰', u'็ทž': u'็ผŽ', u'็ท ': u'็ผ”', u'็ทก': 
u'็ผ—', u'็ทฃ': u'็ผ˜', u'็ทฆ': u'็ผŒ', u'็ทจ': u'็ผ–', u'็ทฉ': u'็ผ“', u'็ทฌ': u'็ผ…', u'็ทฏ': u'็บฌ', u'็ทฑ': u'็ผ‘', u'็ทฒ': u'็ผˆ', u'็ทด': u'็ปƒ', u'็ทถ': u'็ผ', u'็ทน': u'็ผ‡', u'็ทป': u'่‡ด', u'็ธˆ': u'่ฆ', u'็ธ‰': u'็ผ™', u'็ธŠ': u'็ผข', u'็ธ‹': u'็ผ’', u'็ธ': u'็ป‰', u'็ธ‘': u'็ผฃ', u'็ธ•': u'็ผŠ', u'็ธ—': u'็ผž', u'็ธ›': u'็ผš', u'็ธ': u'็ผœ', u'็ธž': u'็ผŸ', u'็ธŸ': u'็ผ›', u'็ธฃ': u'ๅŽฟ', u'็ธง': u'็ปฆ', u'็ธซ': u'็ผ', u'็ธญ': u'็ผก', u'็ธฎ': u'็ผฉ', u'็ธฑ': u'็บต', u'็ธฒ': u'็ผง', u'็ธณ': u'ไŒธ', u'็ธด': u'็บค', u'็ธต': u'็ผฆ', u'็ธถ': u'็ตท', u'็ธท': u'็ผ•', u'็ธน': u'็ผฅ', u'็ธฝ': u'ๆ€ป', u'็ธพ': u'็ปฉ', u'็นƒ': u'็ปท', u'็น…': u'็ผซ', u'็น†': u'็ผช', u'็น': u'็ฉ—', u'็น’': u'็ผฏ', u'็น”': u'็ป‡', u'็น•': u'็ผฎ', u'็นš': u'็ผญ', u'็นž': u'็ป•', u'็นก': u'็ปฃ', u'็นข': u'็ผ‹', u'็นฉ': u'็ปณ', u'็นช': u'็ป˜', u'็นซ': u'็ณป', u'็นญ': u'่Œง', u'็นฎ': u'็ผฐ', u'็นฏ': u'็ผณ', u'็นฐ': u'็ผฒ', u'็นณ': u'็ผด', u'็นธ': u'ไ', u'็นน': u'็ปŽ', u'็นผ': u'็ปง', u'็นฝ': u'็ผค', u'็นพ': u'็ผฑ', u'็นฟ': u'ไ€', u'็บ': u'๐ซ„ธ', u'็บˆ': u'็ผฌ', u'็บŠ': u'็บฉ', u'็บŒ': u'็ปญ', u'็บ': u'็ดฏ', u'็บ': u'็ผ ', u'็บ“': u'็ผจ', u'็บ”': u'ๆ‰', u'็บ–': u'็บค', u'็บ˜': u'็ผต', u'็บœ': u'็ผ†', u'็ผฝ': u'้’ต', u'็ฝˆ': u'ๅ›', u'็ฝŒ': u'็ฝ‚', u'็ฝŽ': u'ๅ›', u'็ฝฐ': u'็ฝš', u'็ฝต': u'้ช‚', u'็ฝท': u'็ฝข', u'็พ…': u'็ฝ—', u'็พ†': u'็ฝด', u'็พˆ': u'็พ', u'็พ‹': u'่Šˆ', u'็พฅ': u'็พŸ', u'็พจ': u'็พก', u'็พฉ': u'ไน‰', u'็ฟ’': u'ไน ', u'็ฟน': u'็ฟ˜', u'่€ฌ': u'่€ง', u'่€ฎ': u'่€ข', u'่–': u'ๅœฃ', u'่ž': u'้—ป', u'่ฏ': u'่”', u'่ฐ': u'่ช', u'่ฒ': u'ๅฃฐ', u'่ณ': u'่€ธ', u'่ต': u'่ฉ', u'่ถ': u'่‚', u'่ท': u'่Œ', u'่น': u'่', u'่ฝ': u'ๅฌ', u'่พ': u'่‹', u'่‚…': u'่‚ƒ', u'่„…': u'่ƒ', u'่„ˆ': u'่„‰', u'่„›': u'่ƒซ', u'่„ฃ': u'ๅ”‡', u'่„ซ': u'่„ฑ', u'่„น': u'่ƒ€', u'่…Ž': u'่‚พ', u'่…–': u'่ƒจ', u'่…ก': u'่„ถ', u'่…ฆ': u'่„‘', u'่…ซ': u'่‚ฟ', u'่…ณ': u'่„š', u'่…ธ': u'่‚ ', u'่†ƒ': u'่…ฝ', u'่†š': u'่‚ค', u'่† ': u'่ƒถ', u'่†ฉ': u'่…ป', u'่†ฝ': u'่ƒ†', 
u'่†พ': u'่„', u'่†ฟ': u'่„“', u'่‡‰': u'่„ธ', u'่‡': u'่„', u'่‡': u'่†‘', u'่‡˜': u'่…Š', u'่‡š': u'่ƒช', u'่‡Ÿ': u'่„', u'่‡ ': u'่„”', u'่‡ข': u'่‡œ', u'่‡ฅ': u'ๅง', u'่‡จ': u'ไธด', u'่‡บ': u'ๅฐ', u'่ˆ‡': u'ไธŽ', u'่ˆˆ': u'ๅ…ด', u'่ˆ‰': u'ไธพ', u'่ˆŠ': u'ๆ—ง', u'่ˆ˜': u'้ฆ†', u'่‰™': u'่ˆฑ', u'่‰ค': u'่ˆฃ', u'่‰ฆ': u'่ˆฐ', u'่‰ซ': u'่ˆป', u'่‰ฑ': u'่‰ฐ', u'่‰ท': u'่‰ณ', u'่Šป': u'ๅˆ', u'่‹ง': u'่‹Ž', u'่Œฒ': u'ๅ…น', u'่Š': u'่†', u'่ŽŠ': u'ๅบ„', u'่Ž–': u'่ŒŽ', u'่Žข': u'่š', u'่Žง': u'่‹‹', u'่ฏ': u'ๅŽ', u'่ด': u'ๅบต', u'่‡': u'่‹Œ', u'่Š': u'่Žฑ', u'่ฌ': u'ไธ‡', u'่ต': u'่Žด', u'่‘‰': u'ๅถ', u'่‘’': u'่ญ', u'่‘ค': u'่ฎ', u'่‘ฆ': u'่‹‡', u'่‘ฏ': u'่ฏ', u'่‘ท': u'่ค', u'่’“': u'่Žผ', u'่’”': u'่Žณ', u'่’ž': u'่Ž…', u'่’ผ': u'่‹', u'่“€': u'่ช', u'่“‹': u'็›–', u'่“ฎ': u'่Žฒ', u'่“ฏ': u'่‹', u'่“ด': u'่Žผ', u'่“ฝ': u'่œ', u'่””': u'ๅœ', u'่”˜': u'ๅ‚', u'่”ž': u'่’Œ', u'่”ฃ': u'่’‹', u'่”ฅ': u'่‘ฑ', u'่”ฆ': u'่Œ‘', u'่”ญ': u'่ซ', u'่•': u'่จ', u'่•†': u'่’‡', u'่•Ž': u'่ž', u'่•’': u'่ฌ', u'่•“': u'่Šธ', u'่••': u'่Žธ', u'่•˜': u'่›', u'่•ข': u'่’‰', u'่•ฉ': u'่ก', u'่•ช': u'่Šœ', u'่•ญ': u'่ง', u'่•ท': u'่“ฃ', u'่–€': u'่•ฐ', u'่–ˆ': u'่Ÿ', u'่–Š': u'่“Ÿ', u'่–Œ': u'่Š—', u'่–‘': u'ๅงœ', u'่–”': u'่”ท', u'่–˜': u'่™', u'่–Ÿ': u'่Žถ', u'่–ฆ': u'่', u'่–ฉ': u'่จ', u'่–ณ': u'ไ“•', u'่–ด': u'่‹ง', u'่–บ': u'่ ', u'่—': u'่“', u'่—Ž': u'่ฉ', u'่—': u'่‰บ', u'่—ฅ': u'่ฏ', u'่—ช': u'่–ฎ', u'่—ด': u'่•ด', u'่—ถ': u'่‹ˆ', u'่—น': u'่”ผ', u'่—บ': u'่”บ', u'่˜„': u'่•ฒ', u'่˜†': u'่Šฆ', u'่˜‡': u'่‹', u'่˜Š': u'่•ด', u'่˜‹': u'่‹น', u'่˜š': u'่—“', u'่˜ž': u'่”น', u'่˜ข': u'่Œ', u'่˜ญ': u'ๅ…ฐ', u'่˜บ': u'่“ ', u'่˜ฟ': u'่', u'่™†': u'่”‚', u'่™•': u'ๅค„', u'่™›': u'่™š', u'่™œ': u'่™', u'่™Ÿ': u'ๅท', u'่™ง': u'ไบ', u'่™ฏ': u'่™ฌ', u'่›บ': u'่›ฑ', u'่›ป': u'่œ•', u'่œ†': u'่šฌ', u'่•': u'่š€', u'่Ÿ': u'็Œฌ', u'่ฆ': u'่™พ', u'่ธ': u'่œ—', u'่ž„': u'่›ณ', u'่žž': u'่š‚', u'่žข': u'่ค', u'่žฎ': u'ไ—–', u'่žป': u'่ผ', 
u'่žฟ': u'่ž€', u'่Ÿ„': u'่›ฐ', u'่Ÿˆ': u'่ˆ', u'่ŸŽ': u'่žจ', u'่Ÿฃ': u'่™ฎ', u'่Ÿฌ': u'่‰', u'่Ÿฏ': u'่›ฒ', u'่Ÿฒ': u'่™ซ', u'่Ÿถ': u'่›', u'่Ÿป': u'่š', u'่ …': u'่‡', u'่ †': u'่™ฟ', u'่ ': u'่Ž', u'่ ': u'่›ด', u'่ ‘': u'่พ', u'่ Ÿ': u'่œก', u'่ ฃ': u'่›Ž', u'่ จ': u'่Ÿ', u'่ ฑ': u'่›Š', u'่ ถ': u'่š•', u'่ ป': u'่›ฎ', u'่ก†': u'ไผ—', u'่กŠ': u'่”‘', u'่ก“': u'ๆœฏ', u'่ก•': u'ๅŒ', u'่กš': u'่ƒก', u'่ก›': u'ๅซ', u'่ก': u'ๅ†ฒ', u'่กน': u'ๅช', u'่ขž': u'่กฎ', u'่ฃŠ': u'่ข…', u'่ฃ': u'้‡Œ', u'่ฃœ': u'่กฅ', u'่ฃ': u'่ฃ…', u'่ฃก': u'้‡Œ', u'่ฃฝ': u'ๅˆถ', u'่ค‡': u'ๅค', u'่คŒ': u'่ฃˆ', u'่ค˜': u'่ข†', u'่คฒ': u'่ฃค', u'่คณ': u'่ฃข', u'่คธ': u'่ค›', u'่คป': u'ไบต', u'่ฅ€': u'๐ซŒ€', u'่ฅ†': u'ๅนž', u'่ฅ‡': u'่ฃฅ', u'่ฅ': u'่ขฏ', u'่ฅ–': u'่ข„', u'่ฅ': u'่ฃฃ', u'่ฅ ': u'่ฃ†', u'่ฅค': u'่คด', u'่ฅช': u'่ขœ', u'่ฅฌ': u'ไ™“', u'่ฅฏ': u'่กฌ', u'่ฅฒ': u'่ขญ', u'่ฆ‹': u'่ง', u'่ฆŽ': u'่งƒ', u'่ฆ': u'่ง„', u'่ฆ“': u'่ง…', u'่ฆ–': u'่ง†', u'่ฆ˜': u'่ง‡', u'่ฆก': u'่ง‹', u'่ฆฅ': u'่ง', u'่ฆฆ': u'่งŽ', u'่ฆช': u'ไบฒ', u'่ฆฌ': u'่งŠ', u'่ฆฏ': u'่ง', u'่ฆฒ': u'่ง', u'่ฆท': u'่ง‘', u'่ฆบ': u'่ง‰', u'่ฆผ': u'๐ซŒจ', u'่ฆฝ': u'่งˆ', u'่ฆฟ': u'่งŒ', u'่ง€': u'่ง‚', u'่งด': u'่งž', u'่งถ': u'่งฏ', u'่งธ': u'่งฆ', u'่จ': u'่ฎ ', u'่จ‚': u'่ฎข', u'่จƒ': u'่ฎฃ', u'่จˆ': u'่ฎก', u'่จŠ': u'่ฎฏ', u'่จŒ': u'่ฎง', u'่จŽ': u'่ฎจ', u'่จ': u'่ฎฆ', u'่จ‘': u'๐ซ™', u'่จ’': u'่ฎฑ', u'่จ“': u'่ฎญ', u'่จ•': u'่ฎช', u'่จ–': u'่ฎซ', u'่จ—': u'ๆ‰˜', u'่จ˜': u'่ฎฐ', u'่จ›': u'่ฎน', u'่จ': u'่ฎถ', u'่จŸ': u'่ฎผ', u'่จข': u'ไœฃ', u'่จฃ': u'่ฏ€', u'่จฅ': u'่ฎท', u'่จฉ': u'่ฎป', u'่จช': u'่ฎฟ', u'่จญ': u'่ฎพ', u'่จฑ': u'่ฎธ', u'่จด': u'่ฏ‰', u'่จถ': u'่ฏƒ', u'่จบ': u'่ฏŠ', u'่จป': u'ๆณจ', u'่ฉ': u'่ฏ‚', u'่ฉ†': u'่ฏ‹', u'่ฉŽ': u'่ฎต', u'่ฉ': u'่ฏˆ', u'่ฉ’': u'่ฏ’', u'่ฉ”': u'่ฏ', u'่ฉ•': u'่ฏ„', u'่ฉ–': u'่ฏ', u'่ฉ—': u'่ฏ‡', u'่ฉ˜': u'่ฏŽ', u'่ฉ›': u'่ฏ…', u'่ฉž': u'่ฏ', u'่ฉ ': u'ๅ’', u'่ฉก': u'่ฏฉ', u'่ฉข': u'่ฏข', u'่ฉฃ': u'่ฏฃ', u'่ฉฆ': u'่ฏ•', u'่ฉฉ': u'่ฏ—', u'่ฉซ': 
u'่ฏง', u'่ฉฌ': u'่ฏŸ', u'่ฉญ': u'่ฏก', u'่ฉฎ': u'่ฏ ', u'่ฉฐ': u'่ฏ˜', u'่ฉฑ': u'่ฏ', u'่ฉฒ': u'่ฏฅ', u'่ฉณ': u'่ฏฆ', u'่ฉต': u'่ฏœ', u'่ฉผ': u'่ฏ™', u'่ฉฟ': u'่ฏ–', u'่ช„': u'่ฏ”', u'่ช…': u'่ฏ›', u'่ช†': u'่ฏ“', u'่ช‡': u'ๅคธ', u'่ชŒ': u'ๅฟ—', u'่ช': u'่ฎค', u'่ช‘': u'่ฏณ', u'่ช’': u'่ฏถ', u'่ช•': u'่ฏž', u'่ช˜': u'่ฏฑ', u'่ชš': u'่ฏฎ', u'่ชž': u'่ฏญ', u'่ช ': u'่ฏš', u'่ชก': u'่ฏซ', u'่ชฃ': u'่ฏฌ', u'่ชค': u'่ฏฏ', u'่ชฅ': u'่ฏฐ', u'่ชฆ': u'่ฏต', u'่ชจ': u'่ฏฒ', u'่ชช': u'่ฏด', u'่ชฌ': u'่ฏด', u'่ชฐ': u'่ฐ', u'่ชฒ': u'่ฏพ', u'่ชถ': u'่ฐ‡', u'่ชน': u'่ฏฝ', u'่ชผ': u'่ฐŠ', u'่ชพ': u'่จš', u'่ชฟ': u'่ฐƒ', u'่ซ‚': u'่ฐ„', u'่ซ„': u'่ฐ†', u'่ซ‡': u'่ฐˆ', u'่ซ‰': u'่ฏฟ', u'่ซ‹': u'่ฏท', u'่ซ': u'่ฏค', u'่ซ': u'่ฏน', u'่ซ‘': u'่ฏผ', u'่ซ’': u'่ฐ…', u'่ซ–': u'่ฎบ', u'่ซ—': u'่ฐ‚', u'่ซ›': u'่ฐ€', u'่ซœ': u'่ฐ', u'่ซ': u'่ฐž', u'่ซž': u'่ฐ', u'่ซข': u'่ฏจ', u'่ซค': u'่ฐ”', u'่ซฆ': u'่ฐ›', u'่ซง': u'่ฐ', u'่ซซ': u'่ฐ', u'่ซญ': u'่ฐ•', u'่ซฎ': u'ๅ’จ', u'่ซฐ': u'๐ซฐ', u'่ซฑ': u'่ฎณ', u'่ซณ': u'่ฐ™', u'่ซถ': u'่ฐŒ', u'่ซท': u'่ฎฝ', u'่ซธ': u'่ฏธ', u'่ซบ': u'่ฐš', u'่ซผ': u'่ฐ–', u'่ซพ': u'่ฏบ', u'่ฌ€': u'่ฐ‹', u'่ฌ': u'่ฐ’', u'่ฌ‚': u'่ฐ“', u'่ฌ„': u'่ชŠ', u'่ฌ…': u'่ฏŒ', u'่ฌŠ': u'่ฐŽ', u'่ฌŽ': u'่ฐœ', u'่ฌ': u'๐ซฒ', u'่ฌ': u'่ฐง', u'่ฌ”': u'่ฐ‘', u'่ฌ–': u'่ฐก', u'่ฌ—': u'่ฐค', u'่ฌ™': u'่ฐฆ', u'่ฌš': u'่ฐฅ', u'่ฌ›': u'่ฎฒ', u'่ฌ': u'่ฐข', u'่ฌ ': u'่ฐฃ', u'่ฌก': u'่ฐฃ', u'่ฌจ': u'่ฐŸ', u'่ฌซ': u'่ฐช', u'่ฌฌ': u'่ฐฌ', u'่ฌญ': u'่ฐซ', u'่ฌณ': u'่ฎด', u'่ฌน': u'่ฐจ', u'่ฌพ': u'่ฐฉ', u'่ญ…': u'ไœง', u'่ญ‰': u'่ฏ', u'่ญŠ': u'๐ซข', u'่ญŽ': u'่ฐฒ', u'่ญ': u'่ฎฅ', u'่ญ–': u'่ฐฎ', u'่ญ˜': u'่ฏ†', u'่ญ™': u'่ฐฏ', u'่ญš': u'่ฐญ', u'่ญœ': u'่ฐฑ', u'่ญซ': u'่ฐต', u'่ญญ': u'ๆฏ', u'่ญฏ': u'่ฏ‘', u'่ญฐ': u'่ฎฎ', u'่ญด': u'่ฐด', u'่ญท': u'ๆŠค', u'่ญธ': u'่ฏช', u'่ญฝ': u'่ช‰', u'่ญพ': u'่ฐซ', u'่ฎ€': u'่ฏป', u'่ฎŠ': u'ๅ˜', u'่ฎŽ': u'ไป‡', u'่ฎ’': u'่ฐ—', u'่ฎ“': u'่ฎฉ', u'่ฎ•': u'่ฐฐ', u'่ฎ–': u'่ฐถ', u'่ฎš': u'่ตž', u'่ฎœ': u'่ฐ ', u'่ฎž': u'่ฐณ', u'่ฑˆ': u'ๅฒ‚', 
u'่ฑŽ': u'็ซ–', u'่ฑ': u'ไธฐ', u'่ฑ”': u'่‰ณ', u'่ฑฌ': u'็Œช', u'่ฑถ': u'่ฑฎ', u'่ฒ“': u'็Œซ', u'่ฒ™': u'ไ™', u'่ฒ': u'่ด', u'่ฒž': u'่ดž', u'่ฒŸ': u'่ด ', u'่ฒ ': u'่ดŸ', u'่ฒก': u'่ดข', u'่ฒข': u'่ดก', u'่ฒง': u'่ดซ', u'่ฒจ': u'่ดง', u'่ฒฉ': u'่ดฉ', u'่ฒช': u'่ดช', u'่ฒซ': u'่ดฏ', u'่ฒฌ': u'่ดฃ', u'่ฒฏ': u'่ดฎ', u'่ฒฐ': u'่ดณ', u'่ฒฒ': u'่ต€', u'่ฒณ': u'่ดฐ', u'่ฒด': u'่ดต', u'่ฒถ': u'่ดฌ', u'่ฒท': u'ไนฐ', u'่ฒธ': u'่ดท', u'่ฒบ': u'่ดถ', u'่ฒป': u'่ดน', u'่ฒผ': u'่ดด', u'่ฒฝ': u'่ดป', u'่ฒฟ': u'่ดธ', u'่ณ€': u'่ดบ', u'่ณ': u'่ดฒ', u'่ณ‚': u'่ต‚', u'่ณƒ': u'่ต', u'่ณ„': u'่ดฟ', u'่ณ…': u'่ต…', u'่ณ‡': u'่ต„', u'่ณˆ': u'่ดพ', u'่ณŠ': u'่ดผ', u'่ณ‘': u'่ตˆ', u'่ณ’': u'่ตŠ', u'่ณ“': u'ๅฎพ', u'่ณ•': u'่ต‡', u'่ณ™': u'่ต’', u'่ณš': u'่ต‰', u'่ณœ': u'่ต', u'่ณž': u'่ต', u'่ณ ': u'่ต”', u'่ณก': u'่ต“', u'่ณข': u'่ดค', u'่ณฃ': u'ๅ–', u'่ณค': u'่ดฑ', u'่ณฆ': u'่ต‹', u'่ณง': u'่ต•', u'่ณช': u'่ดจ', u'่ณซ': u'่ต', u'่ณฌ': u'่ดฆ', u'่ณญ': u'่ตŒ', u'่ณฐ': u'ไž', u'่ณด': u'่ต–', u'่ณต': u'่ต—', u'่ณบ': u'่ตš', u'่ณป': u'่ต™', u'่ณผ': u'่ดญ', u'่ณฝ': u'่ต›', u'่ณพ': u'่ตœ', u'่ด„': u'่ดฝ', u'่ด…': u'่ต˜', u'่ด‡': u'่ตŸ', u'่ดˆ': u'่ต ', u'่ดŠ': u'่ตž', u'่ด‹': u'่ต', u'่ด': u'่ตก', u'่ด': u'่ตข', u'่ด': u'่ต†', u'่ด“': u'่ตƒ', u'่ด”': u'่ต‘', u'่ด–': u'่ตŽ', u'่ด—': u'่ต', u'่ด›': u'่ตฃ', u'่ดœ': u'่ตƒ', u'่ตฌ': u'่ตช', u'่ถ•': u'่ตถ', u'่ถ™': u'่ตต', u'่ถจ': u'่ถ‹', u'่ถฒ': u'่ถฑ', u'่ทก': u'่ฟน', u'่ธ': u'่ทต', u'่ธด': u'่ธŠ', u'่นŒ': u'่ท„', u'่น•': u'่ทธ', u'่นฃ': u'่น’', u'่นค': u'่ธช', u'่นบ': u'่ทท', u'่นป': u'๐ซ‹', u'่บ‚': u'่ทถ', u'่บ‰': u'่ถธ', u'่บŠ': u'่ธŒ', u'่บ‹': u'่ทป', u'่บ': u'่ทƒ', u'่บ‘': u'่ธฏ', u'่บ’': u'่ทž', u'่บ“': u'่ธฌ', u'่บ•': u'่นฐ', u'่บš': u'่ทน', u'่บก': u'่น‘', u'่บฅ': u'่นฟ', u'่บฆ': u'่บœ', u'่บช': u'่บ', u'่ป€': u'่บฏ', u'่ปŠ': u'่ฝฆ', u'่ป‹': u'่ฝง', u'่ปŒ': u'่ฝจ', u'่ป': u'ๅ†›', u'่ป': u'๐ซ„', u'่ป‘': u'่ฝช', u'่ป’': u'่ฝฉ', u'่ป”': u'่ฝซ', u'่ป›': u'่ฝญ', u'่ปŸ': u'่ฝฏ', u'่ปค': u'่ฝท', u'่ปจ': u'๐ซ‰', u'่ปซ': 
u'่ฝธ', u'่ปฒ': u'่ฝฑ', u'่ปธ': u'่ฝด', u'่ปน': u'่ฝต', u'่ปบ': u'่ฝบ', u'่ปป': u'่ฝฒ', u'่ปผ': u'่ฝถ', u'่ปพ': u'่ฝผ', u'่ผƒ': u'่พƒ', u'่ผ…': u'่พ‚', u'่ผ‡': u'่พ', u'่ผˆ': u'่พ€', u'่ผ‰': u'่ฝฝ', u'่ผŠ': u'่ฝพ', u'่ผ’': u'่พ„', u'่ผ“': u'ๆŒฝ', u'่ผ”': u'่พ…', u'่ผ•': u'่ฝป', u'่ผ—': u'๐ซ', u'่ผ›': u'่พ†', u'่ผœ': u'่พŽ', u'่ผ': u'่พ‰', u'่ผž': u'่พ‹', u'่ผŸ': u'่พ', u'่ผฅ': u'่พŠ', u'่ผฆ': u'่พ‡', u'่ผฉ': u'่พˆ', u'่ผช': u'่ฝฎ', u'่ผฌ': u'่พŒ', u'่ผฎ': u'๐ซ“', u'่ผฏ': u'่พ‘', u'่ผณ': u'่พ', u'่ผธ': u'่พ“', u'่ผป': u'่พ', u'่ผพ': u'่พ—', u'่ผฟ': u'่ˆ†', u'่ฝ€': u'่พ’', u'่ฝ‚': u'ๆฏ‚', u'่ฝ„': u'่พ–', u'่ฝ…': u'่พ•', u'่ฝ†': u'่พ˜', u'่ฝ‰': u'่ฝฌ', u'่ฝ': u'่พ™', u'่ฝŽ': u'่ฝฟ', u'่ฝ”': u'่พš', u'่ฝŸ': u'่ฝฐ', u'่ฝก': u'่พ”', u'่ฝข': u'่ฝน', u'่ฝฃ': u'๐ซ†', u'่ฝค': u'่ฝณ', u'่พฆ': u'ๅŠž', u'่พญ': u'่พž', u'่พฎ': u'่พซ', u'่พฏ': u'่พฉ', u'่พฒ': u'ๅ†œ', u'่ฟด': u'ๅ›ž', u'้€•': u'่ฟณ', u'้€™': u'่ฟ™', u'้€ฃ': u'่ฟž', u'้€ฑ': u'ๅ‘จ', u'้€ฒ': u'่ฟ›', u'้Š': u'ๆธธ', u'้‹': u'่ฟ', u'้Ž': u'่ฟ‡', u'้”': u'่พพ', u'้•': u'่ฟ', u'้™': u'้ฅ', u'้œ': u'้€Š', u'้ž': u'้€’', u'้ ': u'่ฟœ', u'้ก': u'ๆบฏ', u'้ฉ': u'้€‚', u'้ฒ': u'่ฟŸ', u'้ท': u'่ฟ', u'้ธ': u'้€‰', u'้บ': u'้—', u'้ผ': u'่พฝ', u'้‚': u'่ฟˆ', u'้‚„': u'่ฟ˜', u'้‚‡': u'่ฟฉ', u'้‚Š': u'่พน', u'้‚': u'้€ป', u'้‚': u'้€ฆ', u'้ƒŸ': u'้ƒ', u'้ƒต': u'้‚ฎ', u'้„†': u'้ƒ“', u'้„‰': u'ไนก', u'้„’': u'้‚น', u'้„”': u'้‚ฌ', u'้„–': u'้ƒง', u'้„ง': u'้‚“', u'้„ญ': u'้ƒ‘', u'้„ฐ': u'้‚ป', u'้„ฒ': u'้ƒธ', u'้„ด': u'้‚บ', u'้„ถ': u'้ƒ', u'้„บ': u'้‚', u'้…‡': u'้…‚', u'้…ˆ': u'้ƒฆ', u'้†–': u'้…', u'้†œ': u'ไธ‘', u'้†ž': u'้…', u'้†ฃ': u'็ณ–', u'้†ซ': u'ๅŒป', u'้†ฌ': u'้…ฑ', u'้†ฏ': u'้…ฐ', u'้†ฑ': u'้…ฆ', u'้‡€': u'้…ฟ', u'้‡': u'่ก…', u'้‡ƒ': u'้…พ', u'้‡…': u'้…ฝ', u'้‡‹': u'้‡Š', u'้‡': u'ๅŽ˜', u'้‡’': u'้’…', u'้‡“': u'้’†', u'้‡”': u'้’‡', u'้‡•': u'้’Œ', u'้‡—': u'้’Š', u'้‡˜': u'้’‰', u'้‡™': u'้’‹', u'้‡': u'้’ˆ', u'้‡ฃ': u'้’“', u'้‡ค': u'้’', u'้‡ง': u'้’', u'้‡ฉ': u'้’’', 
u'้‡ณ': u'๐จฐฟ', u'้‡ต': u'้’—', u'้‡ท': u'้’', u'้‡น': u'้’•', u'้‡บ': u'้’Ž', u'้‡พ': u'ไฅบ', u'้ˆ€': u'้’ฏ', u'้ˆ': u'้’ซ', u'้ˆƒ': u'้’˜', u'้ˆ„': u'้’ญ', u'้ˆ‡': u'๐ซ“ง', u'้ˆˆ': u'้’š', u'้ˆ‰': u'้’ ', u'้ˆ‹': u'๐จฑ‚', u'้ˆ': u'้’', u'้ˆŽ': u'้’ฉ', u'้ˆ': u'้’ค', u'้ˆ‘': u'้’ฃ', u'้ˆ’': u'้’‘', u'้ˆ”': u'้’ž', u'้ˆ•': u'้’ฎ', u'้ˆž': u'้’ง', u'้ˆ ': u'๐จฑ', u'้ˆฃ': u'้’™', u'้ˆฅ': u'้’ฌ', u'้ˆฆ': u'้’›', u'้ˆง': u'้’ช', u'้ˆฎ': u'้“Œ', u'้ˆฏ': u'๐จฑ„', u'้ˆฐ': u'้“ˆ', u'้ˆฒ': u'๐จฑƒ', u'้ˆณ': u'้’ถ', u'้ˆด': u'้“ƒ', u'้ˆท': u'้’ด', u'้ˆธ': u'้’น', u'้ˆน': u'้“', u'้ˆบ': u'้’ฐ', u'้ˆฝ': u'้’ธ', u'้ˆพ': u'้“€', u'้ˆฟ': u'้’ฟ', u'้‰€': u'้’พ', u'้‰': u'๐จฑ…', u'้‰…': u'้’œ', u'้‰ˆ': u'้“Š', u'้‰‰': u'้“‰', u'้‰‹': u'้“‡', u'้‰': u'้“‹', u'้‰‘': u'้“‚', u'้‰•': u'้’ท', u'้‰—': u'้’ณ', u'้‰š': u'้“†', u'้‰›': u'้“…', u'้‰ž': u'้’บ', u'้‰ข': u'้’ต', u'้‰ค': u'้’ฉ', u'้‰ฆ': u'้’ฒ', u'้‰ฌ': u'้’ผ', u'้‰ญ': u'้’ฝ', u'้‰ถ': u'้“', u'้‰ธ': u'้“ฐ', u'้‰บ': u'้“’', u'้‰ป': u'้“ฌ', u'้‰ฟ': u'้“ช', u'้Š€': u'้“ถ', u'้Šƒ': u'้“ณ', u'้Š…': u'้“œ', u'้Š': u'้“š', u'้Š‘': u'้“ฃ', u'้Š“': u'้“จ', u'้Š–': u'้“ข', u'้Š˜': u'้“ญ', u'้Šš': u'้“ซ', u'้Š›': u'้“ฆ', u'้Šœ': u'่ก”', u'้Š ': u'้“‘', u'้Šฃ': u'้“ท', u'้Šฅ': u'้“ฑ', u'้Šฆ': u'้“Ÿ', u'้Šจ': u'้“ต', u'้Šฉ': u'้“ฅ', u'้Šช': u'้“•', u'้Šซ': u'้“ฏ', u'้Šฌ': u'้“', u'้Šฑ': u'้“ž', u'้Šณ': u'้”', u'้Šถ': u'๐จฑ‡', u'้Šท': u'้”€', u'้Šน': u'้”ˆ', u'้Šป': u'้”‘', u'้Šผ': u'้”‰', u'้‹': u'้“', u'้‹ƒ': u'้”’', u'้‹…': u'้”Œ', u'้‹‡': u'้’ก', u'้‹‰': u'๐จฑˆ', u'้‹Œ': u'้“ค', u'้‹': u'้“—', u'้‹’': u'้”‹', u'้‹™': u'้“ป', u'้‹': u'้”Š', u'้‹Ÿ': u'้”“', u'้‹ฃ': u'้“˜', u'้‹ค': u'้”„', u'้‹ฅ': u'้”ƒ', u'้‹ฆ': u'้””', u'้‹จ': u'้”‡', u'้‹ฉ': u'้““', u'้‹ช': u'้“บ', u'้‹ญ': u'้”', u'้‹ฎ': u'้“–', u'้‹ฏ': u'้”†', u'้‹ฐ': u'้”‚', u'้‹ฑ': u'้“ฝ', u'้‹ถ': u'้”', u'้‹ธ': u'้”ฏ', u'้‹ผ': u'้’ข', u'้Œ': u'้”ž', u'้Œ‚': u'๐จฑ‹', u'้Œ„': u'ๅฝ•', u'้Œ†': u'้”–', u'้Œ‡': u'้”ซ', u'้Œˆ': u'้”ฉ', u'้Œ': u'้“”', u'้Œ': u'้”ฅ', 
u'้Œ’': u'้”•', u'้Œ•': u'้”Ÿ', u'้Œ˜': u'้”ค', u'้Œ™': u'้”ฑ', u'้Œš': u'้“ฎ', u'้Œ›': u'้”›', u'้ŒŸ': u'้”ฌ', u'้Œ ': u'้”ญ', u'้Œก': u'้”œ', u'้Œข': u'้’ฑ', u'้Œฆ': u'้”ฆ', u'้Œจ': u'้”š', u'้Œฉ': u'้” ', u'้Œซ': u'้”ก', u'้Œฎ': u'้”ข', u'้Œฏ': u'้”™', u'้Œฒ': u'ๅฝ•', u'้Œณ': u'้”ฐ', u'้Œถ': u'่กจ', u'้Œธ': u'้“ผ', u'้€': u'้”', u'้': u'้”จ', u'้ƒ': u'้”ช', u'้„': u'๐จฑ‰', u'้†': u'้’”', u'้‡': u'้”ด', u'้ˆ': u'้”ณ', u'้Š': u'็‚ผ', u'้‹': u'้”…', u'้': u'้•€', u'้”': u'้”ท', u'้˜': u'้“ก', u'้š': u'้’–', u'้›': u'้”ป', u'้ ': u'้”ฝ', u'้ค': u'้”ธ', u'้ฅ': u'้”ฒ', u'้ฉ': u'้”˜', u'้ฌ': u'้”น', u'้ฎ': u'๐จฑŽ', u'้ฐ': u'้”พ', u'้ต': u'้”ฎ', u'้ถ': u'้”ถ', u'้บ': u'้”—', u'้พ': u'้’Ÿ', u'้Ž‚': u'้•', u'้Ž„': u'้”ฟ', u'้އ': u'้•…', u'้ŽŠ': u'้•‘', u'้ŽŒ': u'้•ฐ', u'้Ž”': u'้••', u'้Ž–': u'้”', u'้Ž˜': u'้•‰', u'้Žš': u'้”ค', u'้Ž›': u'้•ˆ', u'้Ž': u'๐จฑ', u'้Žก': u'้•ƒ', u'้Žข': u'้’จ', u'้Žฃ': u'่“ฅ', u'้Žฆ': u'้•', u'้Žง': u'้“ ', u'้Žฉ': u'้“ฉ', u'้Žช': u'้”ผ', u'้Žฌ': u'้•', u'้Žญ': u'้Žฎ', u'้Žฎ': u'้•‡', u'้Žฏ': u'๐จฑ', u'้Žฐ': u'้•’', u'้Žฒ': u'้•‹', u'้Žณ': u'้•', u'้Žต': u'้•“', u'้Žท': u'๐จฐพ', u'้Žธ': u'้•Œ', u'้Žฟ': u'้•Ž', u'้ƒ': u'้•ž', u'้†': u'๐จฑŒ', u'้‡': u'้•Ÿ', u'้ˆ': u'้“พ', u'้‰': u'๐จฑ’', u'้Œ': u'้•†', u'้': u'้•™', u'้': u'้• ', u'้‘': u'้•', u'้—': u'้“ฟ', u'้˜': u'้”ต', u'้š': u'ๆˆš', u'้œ': u'้•—', u'้': u'้•˜', u'้ž': u'้•›', u'้Ÿ': u'้“ฒ', u'้ก': u'้•œ', u'้ข': u'้•–', u'้ค': u'้•‚', u'้ฆ': u'๐ซ“ฉ', u'้จ': u'้Œพ', u'้ฐ': u'้•š', u'้ต': u'้“ง', u'้ท': u'้•ค', u'้น': u'้•ช', u'้บ': u'ไฅฝ', u'้ฝ': u'้”ˆ', u'้ƒ': u'้“™', u'้‹': u'้“ด', u'้': u'๐ซ”Ž', u'้Ž': u'๐จฑ“', u'้': u'๐จฑ”', u'้': u'้•ฃ', u'้’': u'้“น', u'้“': u'้•ฆ', u'้”': u'้•ก', u'้˜': u'้’Ÿ', u'้™': u'้•ซ', u'้': u'้•ข', u'้ ': u'้•จ', u'้ฅ': u'ไฆ…', u'้ฆ': u'้”Ž', u'้ง': u'้”', u'้จ': u'้•„', u'้ซ': u'้•Œ', u'้ฎ': u'้•ฐ', u'้ฏ': u'ไฆƒ', u'้ฒ': u'้•ฏ', u'้ณ': u'้•ญ', u'้ต': u'้“', 
u'้ถ': u'้•ฎ', u'้ธ': u'้“Ž', u'้บ': u'้“›', u'้ฟ': u'้•ฑ', u'้‘„': u'้“ธ', u'้‘Š': u'้•ฌ', u'้‘Œ': u'้•”', u'้‘‘': u'้‰ด', u'้‘’': u'้‰ด', u'้‘”': u'้•ฒ', u'้‘•': u'้”ง', u'้‘ž': u'้•ด', u'้‘ ': u'้“„', u'้‘ฃ': u'้•ณ', u'้‘ฅ': u'้•ฅ', u'้‘ญ': u'้•ง', u'้‘ฐ': u'้’ฅ', u'้‘ฑ': u'้•ต', u'้‘ฒ': u'้•ถ', u'้‘ท': u'้•Š', u'้‘น': u'้•ฉ', u'้‘ผ': u'้”ฃ', u'้‘ฝ': u'้’ป', u'้‘พ': u'้Šฎ', u'้‘ฟ': u'ๅ‡ฟ', u'้’': u'้•ข', u'้•Ÿ': u'ๆ—‹', u'้•ท': u'้•ฟ', u'้–€': u'้—จ', u'้–‚': u'้—ฉ', u'้–ƒ': u'้—ช', u'้–†': u'้—ซ', u'้–ˆ': u'้—ฌ', u'้–‰': u'้—ญ', u'้–‹': u'ๅผ€', u'้–Œ': u'้—ถ', u'้–': u'๐จธ‚', u'้–Ž': u'้—ณ', u'้–': u'้—ฐ', u'้–': u'๐จธƒ', u'้–‘': u'้—ฒ', u'้–’': u'้—ฒ', u'้–“': u'้—ด', u'้–”': u'้—ต', u'้–˜': u'้—ธ', u'้–ก': u'้˜‚', u'้–ฃ': u'้˜', u'้–ค': u'ๅˆ', u'้–ฅ': u'้˜€', u'้–จ': u'้—บ', u'้–ฉ': u'้—ฝ', u'้–ซ': u'้˜ƒ', u'้–ฌ': u'้˜†', u'้–ญ': u'้—พ', u'้–ฑ': u'้˜…', u'้–ฒ': u'้˜…', u'้–ถ': u'้˜Š', u'้–น': u'้˜‰', u'้–ป': u'้˜Ž', u'้–ผ': u'้˜', u'้–ฝ': u'้˜', u'้–พ': u'้˜ˆ', u'้–ฟ': u'้˜Œ', u'้—ƒ': u'้˜’', u'้—†': u'ๆฟ', u'้—ˆ': u'้—ฑ', u'้—Š': u'้˜”', u'้—‹': u'้˜•', u'้—Œ': u'้˜‘', u'้—': u'้˜‡', u'้—': u'้˜—', u'้—’': u'้˜˜', u'้—“': u'้—ฟ', u'้—”': u'้˜–', u'้—•': u'้˜™', u'้—–': u'้—ฏ', u'้—œ': u'ๅ…ณ', u'้—ž': u'้˜š', u'้— ': u'้˜“', u'้—ก': u'้˜', u'้—ข': u'่พŸ', u'้—ค': u'้˜›', u'้—ฅ': u'้—ผ', u'้™˜': u'้™‰', u'้™': u'้™•', u'้™ž': u'ๅ‡', u'้™ฃ': u'้˜ต', u'้™ฐ': u'้˜ด', u'้™ณ': u'้™ˆ', u'้™ธ': u'้™†', u'้™ฝ': u'้˜ณ', u'้š‰': u'้™ง', u'้šŠ': u'้˜Ÿ', u'้šŽ': u'้˜ถ', u'้š•': u'้™จ', u'้š›': u'้™…', u'้šจ': u'้š', u'้šช': u'้™ฉ', u'้šฑ': u'้š', u'้šด': u'้™‡', u'้šธ': u'้šถ', u'้šป': u'ๅช', u'้›‹': u'้šฝ', u'้›–': u'่™ฝ', u'้›™': u'ๅŒ', u'้››': u'้›', u'้›œ': u'ๆ‚', u'้›ž': u'้ธก', u'้›ข': u'็ฆป', u'้›ฃ': u'้šพ', u'้›ฒ': u'ไบ‘', u'้›ป': u'็”ต', u'้œข': u'้œก', u'้œง': u'้›พ', u'้œฝ': u'้œ', u'้‚': u'้›ณ', u'้„': u'้œญ', u'้ˆ': u'็ต', u'้š': u'้“', u'้œ': u'้™', u'้ฆ': u'่…ผ', u'้จ': u'้ฅ', u'้ž€': u'้ผ—', u'้ž': u'ๅทฉ', u'้ž': 
u'็ปฑ', u'้žฆ': u'็ง‹', u'้žฝ': u'้ž’', u'้Ÿ': u'็ผฐ', u'้Ÿƒ': u'้ž‘', u'้Ÿ†': u'ๅƒ', u'้Ÿ‰': u'้žฏ', u'้Ÿ‹': u'้Ÿฆ', u'้ŸŒ': u'้Ÿง', u'้Ÿ': u'้Ÿจ', u'้Ÿ“': u'้Ÿฉ', u'้Ÿ™': u'้Ÿช', u'้Ÿœ': u'้Ÿฌ', u'้Ÿ': u'้žฒ', u'้Ÿž': u'้Ÿซ', u'้Ÿป': u'้Ÿต', u'้Ÿฟ': u'ๅ“', u'้ ': u'้กต', u'้ ‚': u'้กถ', u'้ ƒ': u'้กท', u'้ …': u'้กน', u'้ †': u'้กบ', u'้ ‡': u'้กธ', u'้ ˆ': u'้กป', u'้ Š': u'้กผ', u'้ Œ': u'้ข‚', u'้ Ž': u'้ข€', u'้ ': u'้ขƒ', u'้ ': u'้ข„', u'้ ‘': u'้กฝ', u'้ ’': u'้ข', u'้ “': u'้กฟ', u'้ —': u'้ข‡', u'้ ˜': u'้ข†', u'้ œ': u'้ขŒ', u'้ ก': u'้ข‰', u'้ ค': u'้ข', u'้ ฆ': u'้ข', u'้ ญ': u'ๅคด', u'้ ฎ': u'้ข’', u'้ ฐ': u'้ขŠ', u'้ ฒ': u'้ข‹', u'้ ด': u'้ข•', u'้ ท': u'้ข”', u'้ ธ': u'้ขˆ', u'้ น': u'้ข“', u'้ ป': u'้ข‘', u'้ ฝ': u'้ข“', u'้กƒ': u'๐ฉ––', u'้ก†': u'้ข—', u'้กŒ': u'้ข˜', u'้ก': u'้ข', u'้กŽ': u'้ขš', u'้ก': u'้ขœ', u'้ก’': u'้ข™', u'้ก“': u'้ข›', u'้ก”': u'้ขœ', u'้ก˜': u'ๆ„ฟ', u'้ก™': u'้ขก', u'้ก›': u'้ข ', u'้กž': u'็ฑป', u'้กข': u'้ขŸ', u'้กฅ': u'้ขข', u'้กง': u'้กพ', u'้กซ': u'้ขค', u'้กฌ': u'้ขฅ', u'้กฏ': u'ๆ˜พ', u'้กฐ': u'้ขฆ', u'้กฑ': u'้ข…', u'้กณ': u'้ขž', u'้กด': u'้ขง', u'้ขจ': u'้ฃŽ', u'้ขญ': u'้ฃ', u'้ขฎ': u'้ฃ‘', u'้ขฏ': u'้ฃ’', u'้ขฐ': u'๐ฉ™ฅ', u'้ขฑ': u'ๅฐ', u'้ขณ': u'ๅˆฎ', u'้ขถ': u'้ฃ“', u'้ขท': u'๐ฉ™ช', u'้ขธ': u'้ฃ”', u'้ขบ': u'้ฃ', u'้ขป': u'้ฃ–', u'้ขผ': u'้ฃ•', u'้ขพ': u'๐ฉ™ซ', u'้ฃ€': u'้ฃ—', u'้ฃ„': u'้ฃ˜', u'้ฃ†': u'้ฃ™', u'้ฃˆ': u'้ฃš', u'้ฃ›': u'้ฃž', u'้ฃ ': u'้ฅฃ', u'้ฃข': u'้ฅฅ', u'้ฃฃ': u'้ฅค', u'้ฃฅ': u'้ฅฆ', u'้ฃฉ': u'้ฅจ', u'้ฃช': u'้ฅช', u'้ฃซ': u'้ฅซ', u'้ฃญ': u'้ฅฌ', u'้ฃฏ': u'้ฅญ', u'้ฃฑ': u'้ฃง', u'้ฃฒ': u'้ฅฎ', u'้ฃด': u'้ฅด', u'้ฃผ': u'้ฅฒ', u'้ฃฝ': u'้ฅฑ', u'้ฃพ': u'้ฅฐ', u'้ฃฟ': u'้ฅณ', u'้คƒ': u'้ฅบ', u'้ค„': u'้ฅธ', u'้ค…': u'้ฅผ', u'้ค‰': u'้ฅท', u'้คŠ': u'ๅ…ป', u'้คŒ': u'้ฅต', u'้คŽ': u'้ฅน', u'้ค': u'้ฅป', u'้ค‘': u'้ฅฝ', u'้ค’': u'้ฆ', u'้ค“': u'้ฅฟ', u'้ค”': u'๐ซ—ฆ', u'้ค•': u'้ฆ‚', u'้ค–': u'้ฅพ', u'้ค—': u'๐ซ—ง', u'้ค˜': u'ไฝ™', u'้คš': u'่‚ด', u'้ค›': u'้ฆ„', u'้คœ': u'้ฆƒ', 
u'้คž': u'้ฅฏ', u'้คก': u'้ฆ…', u'้คฆ': u'๐ซ— ', u'้คจ': u'้ฆ†', u'้คญ': u'๐ซ—ฎ', u'้คฑ': u'็ณ‡', u'้คณ': u'้ฅง', u'้คต': u'ๅ–‚', u'้คถ': u'้ฆ‰', u'้คท': u'้ฆ‡', u'้คธ': u'๐ฉ Œ', u'้คบ': u'้ฆŽ', u'้คผ': u'้ฅฉ', u'้คพ': u'้ฆ', u'้คฟ': u'้ฆŠ', u'้ฅ': u'้ฆŒ', u'้ฅƒ': u'้ฆ', u'้ฅ…': u'้ฆ’', u'้ฅˆ': u'้ฆ', u'้ฅ‰': u'้ฆ‘', u'้ฅŠ': u'้ฆ“', u'้ฅ‹': u'้ฆˆ', u'้ฅŒ': u'้ฆ”', u'้ฅ‘': u'้ฅฅ', u'้ฅ’': u'้ฅถ', u'้ฅ—': u'้ฃจ', u'้ฅ˜': u'๐ซ—ด', u'้ฅœ': u'้ค', u'้ฅž': u'้ฆ‹', u'้ฅข': u'้ฆ•', u'้ฆฌ': u'้ฉฌ', u'้ฆญ': u'้ฉญ', u'้ฆฎ': u'ๅ†ฏ', u'้ฆฑ': u'้ฉฎ', u'้ฆณ': u'้ฉฐ', u'้ฆด': u'้ฉฏ', u'้ฆน': u'้ฉฒ', u'้ง': u'้ฉณ', u'้งƒ': u'๐ซ˜', u'้งŽ': u'๐ฉงจ', u'้ง': u'้ฉป', u'้ง‘': u'้ฉฝ', u'้ง’': u'้ฉน', u'้ง”': u'้ฉต', u'้ง•': u'้ฉพ', u'้ง˜': u'้ช€', u'้ง™': u'้ฉธ', u'้งš': u'๐ฉงซ', u'้ง›': u'้ฉถ', u'้ง': u'้ฉผ', u'้งŸ': u'้ฉท', u'้งก': u'้ช‚', u'้งข': u'้ชˆ', u'้งง': u'๐ฉงฒ', u'้งฉ': u'๐ฉงด', u'้งญ': u'้ช‡', u'้งฐ': u'้ชƒ', u'้งฑ': u'้ช†', u'้งถ': u'๐ฉงบ', u'้งธ': u'้ชŽ', u'้งป': u'๐ซ˜ฃ', u'้งฟ': u'้ช', u'้จ': u'้ช‹', u'้จ‚': u'้ช', u'้จƒ': u'๐ซ˜ค', u'้จ…': u'้ช“', u'้จŒ': u'้ช”', u'้จ': u'้ช’', u'้จŽ': u'้ช‘', u'้จ': u'้ช', u'้จ”': u'๐ฉจ€', u'้จ–': u'้ช›', u'้จ™': u'้ช—', u'้จš': u'๐ฉจŠ', u'้จ': u'๐ฉจƒ', u'้จŸ': u'๐ฉจˆ', u'้จ ': u'๐ซ˜จ', u'้จค': u'้ช™', u'้จง': u'ไฏ„', u'้จช': u'๐ฉจ„', u'้จซ': u'้ชž', u'้จญ': u'้ช˜', u'้จฎ': u'้ช', u'้จฐ': u'่…พ', u'้จถ': u'้ฉบ', u'้จท': u'้ชš', u'้จธ': u'้ชŸ', u'้จพ': u'้ชก', u'้ฉ€': u'่“ฆ', u'้ฉ': u'้ชœ', u'้ฉ‚': u'้ช–', u'้ฉƒ': u'้ช ', u'้ฉ„': u'้ชข', u'้ฉ…': u'้ฉฑ', u'้ฉŠ': u'้ช…', u'้ฉ‹': u'๐ฉงฏ', u'้ฉŒ': u'้ช•', u'้ฉ': u'้ช', u'้ฉ': u'้ชฃ', u'้ฉ•': u'้ช„', u'้ฉ—': u'้ชŒ', u'้ฉš': u'ๆƒŠ', u'้ฉ›': u'้ฉฟ', u'้ฉŸ': u'้ชค', u'้ฉข': u'้ฉด', u'้ฉค': u'้ชง', u'้ฉฅ': u'้ชฅ', u'้ฉฆ': u'้ชฆ', u'้ฉช': u'้ชŠ', u'้ฉซ': u'้ช‰', u'้ชฏ': u'่‚ฎ', u'้ซ': u'้ซ…', u'้ซ’': u'่„', u'้ซ”': u'ไฝ“', u'้ซ•': u'้ซŒ', u'้ซ–': u'้ซ‹', u'้ซฎ': u'ๅ‘', u'้ฌ†': u'ๆพ', u'้ฌ': u'่ƒก', u'้ฌš': u'้กป', u'้ฌข': u'้ฌ“', u'้ฌฅ': u'ๆ–—', u'้ฌง': u'้—น', u'้ฌจ': 
u'ๅ“„', u'้ฌฉ': u'้˜‹', u'้ฌฎ': u'้˜„', u'้ฌฑ': u'้ƒ', u'้ญŽ': u'้ญ‰', u'้ญ˜': u'้ญ‡', u'้ญš': u'้ฑผ', u'้ญ›': u'้ฑฝ', u'้ญŸ': u'๐ซš‰', u'้ญข': u'้ฑพ', u'้ญฅ': u'๐ฉฝน', u'้ญจ': u'้ฒ€', u'้ญฏ': u'้ฒ', u'้ญด': u'้ฒ‚', u'้ญท': u'้ฑฟ', u'้ญบ': u'้ฒ„', u'้ฎ': u'้ฒ…', u'้ฎƒ': u'้ฒ†', u'้ฎ„': u'๐ซš’', u'้ฎŠ': u'้ฒŒ', u'้ฎ‹': u'้ฒ‰', u'้ฎ': u'้ฒ', u'้ฎŽ': u'้ฒ‡', u'้ฎ': u'้ฒ', u'้ฎ‘': u'้ฒ', u'้ฎ’': u'้ฒ‹', u'้ฎ“': u'้ฒŠ', u'้ฎ•': u'๐ฉพ€', u'้ฎš': u'้ฒ’', u'้ฎœ': u'้ฒ˜', u'้ฎ': u'้ฒž', u'้ฎž': u'้ฒ•', u'้ฎŸ': u'๐ฉฝพ', u'้ฎฃ': u'ไฒŸ', u'้ฎฆ': u'้ฒ–', u'้ฎช': u'้ฒ”', u'้ฎซ': u'้ฒ›', u'้ฎญ': u'้ฒ‘', u'้ฎฎ': u'้ฒœ', u'้ฎฐ': u'๐ซš”', u'้ฎณ': u'้ฒ“', u'้ฎถ': u'้ฒช', u'้ฎธ': u'๐ฉพƒ', u'้ฎบ': u'้ฒ', u'้ฏ€': u'้ฒง', u'้ฏ': u'้ฒ ', u'้ฏ„': u'๐ฉพ', u'้ฏ†': u'๐ซš™', u'้ฏ‡': u'้ฒฉ', u'้ฏ‰': u'้ฒค', u'้ฏŠ': u'้ฒจ', u'้ฏ’': u'้ฒฌ', u'้ฏ”': u'้ฒป', u'้ฏ•': u'้ฒฏ', u'้ฏ–': u'้ฒญ', u'้ฏ—': u'้ฒž', u'้ฏ›': u'้ฒท', u'้ฏ': u'้ฒด', u'้ฏก': u'้ฒฑ', u'้ฏข': u'้ฒต', u'้ฏค': u'้ฒฒ', u'้ฏง': u'้ฒณ', u'้ฏจ': u'้ฒธ', u'้ฏช': u'้ฒฎ', u'้ฏซ': u'้ฒฐ', u'้ฏฐ': u'้ฒ‡', u'้ฏฑ': u'๐ฉพ‡', u'้ฏด': u'้ฒบ', u'้ฏถ': u'๐ฉฝผ', u'้ฏท': u'้ณ€', u'้ฏฝ': u'้ฒซ', u'้ฏฟ': u'้ณŠ', u'้ฐ': u'้ณˆ', u'้ฐ‚': u'้ฒ—', u'้ฐƒ': u'้ณ‚', u'้ฐ†': u'ไฒ ', u'้ฐˆ': u'้ฒฝ', u'้ฐ‰': u'้ณ‡', u'้ฐŒ': u'ไฒก', u'้ฐ': u'้ณ…', u'้ฐ': u'้ฒพ', u'้ฐ': u'้ณ„', u'้ฐ’': u'้ณ†', u'้ฐ“': u'้ณƒ', u'้ฐœ': u'้ณ’', u'้ฐŸ': u'้ณ‘', u'้ฐ ': u'้ณ‹', u'้ฐฃ': u'้ฒฅ', u'้ฐค': u'๐ซš•', u'้ฐฅ': u'้ณ', u'้ฐง': u'ไฒข', u'้ฐจ': u'้ณŽ', u'้ฐฉ': u'้ณ', u'้ฐญ': u'้ณ', u'้ฐฎ': u'้ณ', u'้ฐฑ': u'้ฒข', u'้ฐฒ': u'้ณŒ', u'้ฐณ': u'้ณ“', u'้ฐต': u'้ณ˜', u'้ฐท': u'้ฒฆ', u'้ฐน': u'้ฒฃ', u'้ฐบ': u'้ฒน', u'้ฐป': u'้ณ—', u'้ฐผ': u'้ณ›', u'้ฐพ': u'้ณ”', u'้ฑ‚': u'้ณ‰', u'้ฑ…': u'้ณ™', u'้ฑ‡': u'๐ฉพŒ', u'้ฑˆ': u'้ณ•', u'้ฑ‰': u'้ณ–', u'้ฑ’': u'้ณŸ', u'้ฑ”': u'้ณ', u'้ฑ–': u'้ณœ', u'้ฑ—': u'้ณž', u'้ฑ˜': u'้ฒŸ', u'้ฑ': u'้ฒผ', u'้ฑŸ': u'้ฒŽ', u'้ฑ ': u'้ฒ™', u'้ฑฃ': u'้ณฃ', u'้ฑค': u'้ณก', u'้ฑง': u'้ณข', u'้ฑจ': u'้ฒฟ', u'้ฑญ': u'้ฒš', u'้ฑฎ': u'๐ซšˆ', u'้ฑฏ': 
u'้ณ ', u'้ฑท': u'้ณ„', u'้ฑธ': u'้ฒˆ', u'้ฑบ': u'้ฒก', u'้ณฅ': u'้ธŸ', u'้ณง': u'ๅ‡ซ', u'้ณฉ': u'้ธ ', u'้ณฌ': u'ๅ‡ซ', u'้ณฒ': u'้ธค', u'้ณณ': u'ๅ‡ค', u'้ณด': u'้ธฃ', u'้ณถ': u'้ธข', u'้ณท': u'๐ซ››', u'้ณผ': u'๐ช‰ƒ', u'้ณพ': u'ได“', u'้ดƒ': u'๐ซ›ž', u'้ด†': u'้ธฉ', u'้ด‡': u'้ธจ', u'้ด‰': u'้ธฆ', u'้ด’': u'้ธฐ', u'้ด•': u'้ธต', u'้ด—': u'๐ซก', u'้ด›': u'้ธณ', u'้ดœ': u'๐ช‰ˆ', u'้ด': u'้ธฒ', u'้ดž': u'้ธฎ', u'้ดŸ': u'้ธฑ', u'้ดฃ': u'้ธช', u'้ดฆ': u'้ธฏ', u'้ดจ': u'้ธญ', u'้ดฏ': u'้ธธ', u'้ดฐ': u'้ธน', u'้ดฒ': u'๐ช‰†', u'้ดด': u'้ธป', u'้ดท': u'ได•', u'้ดป': u'้ธฟ', u'้ดฟ': u'้ธฝ', u'้ต': u'ได”', u'้ต‚': u'้ธบ', u'้ตƒ': u'้ธผ', u'้ต': u'้น€', u'้ต‘': u'้นƒ', u'้ต’': u'้น†', u'้ต“': u'้น', u'้ตš': u'๐ช‰', u'้ตœ': u'้นˆ', u'้ต': u'้น…', u'้ต ': u'้น„', u'้ตก': u'้น‰', u'้ตช': u'้นŒ', u'้ตฌ': u'้น', u'้ตฎ': u'้น', u'้ตฏ': u'้นŽ', u'้ตฐ': u'้›•', u'้ตฒ': u'้นŠ', u'้ตท': u'้น“', u'้ตพ': u'้น', u'้ถ„': u'ได–', u'้ถ‡': u'้ธซ', u'้ถ‰': u'้น‘', u'้ถŠ': u'้น’', u'้ถ’': u'๐ซ›ถ', u'้ถ“': u'้น‹', u'้ถ–': u'้น™', u'้ถ—': u'๐ซ›ธ', u'้ถ˜': u'้น•', u'้ถš': u'้น—', u'้ถก': u'้น–', u'้ถฅ': u'้น›', u'้ถฉ': u'้นœ', u'้ถช': u'ได—', u'้ถฌ': u'้ธง', u'้ถฏ': u'่Žบ', u'้ถฒ': u'้นŸ', u'้ถด': u'้นค', u'้ถน': u'้น ', u'้ถบ': u'้นก', u'้ถป': u'้น˜', u'้ถผ': u'้นฃ', u'้ถฟ': u'้นš', u'้ท€': u'้นš', u'้ท': u'้นข', u'้ท‚': u'้นž', u'้ท„': u'้ธก', u'้ทˆ': u'ได˜', u'้ทŠ': u'้น', u'้ท“': u'้นง', u'้ท”': u'๐ช‰‘', u'้ท–': u'้นฅ', u'้ท—': u'้ธฅ', u'้ท™': u'้ธท', u'้ทš': u'้นจ', u'้ทฅ': u'้ธถ', u'้ทฆ': u'้นช', u'้ทจ': u'๐ช‰Š', u'้ทซ': u'้น”', u'้ทฏ': u'้นฉ', u'้ทฒ': u'้นซ', u'้ทณ': u'้น‡', u'้ทธ': u'้นฌ', u'้ทน': u'้นฐ', u'้ทบ': u'้นญ', u'้ทฝ': u'้ธด', u'้ทฟ': u'ได™', u'้ธ‚': u'ใถ‰', u'้ธ‡': u'้นฏ', u'้ธ‹': u'๐ซ›ข', u'้ธŒ': u'้นฑ', u'้ธ': u'้นฒ', u'้ธ•': u'้ธฌ', u'้ธ˜': u'้นด', u'้ธš': u'้นฆ', u'้ธ›': u'้นณ', u'้ธ': u'้น‚', u'้ธž': u'้ธพ', u'้นต': u'ๅค', u'้นน': u'ๅ’ธ', u'้นบ': u'้นพ', u'้นผ': u'็ขฑ', u'้นฝ': u'็›', u'้บ—': u'ไธฝ', u'้บฅ': u'้บฆ', u'้บจ': u'๐ชŽŠ', u'้บฉ': u'้บธ', u'้บช': 
u'้ข', u'้บซ': u'้ข', u'้บฏ': u'ๆ›ฒ', u'้บฒ': u'๐ชމ', u'้บณ': u'๐ชŽŒ', u'้บด': u'ๆ›ฒ', u'้บต': u'้ข', u'้บผ': u'ไนˆ', u'้บฝ': u'ไนˆ', u'้ปƒ': u'้ป„', u'้ปŒ': u'้ป‰', u'้ปž': u'็‚น', u'้ปจ': u'ๅ…š', u'้ปฒ': u'้ปช', u'้ปด': u'้œ‰', u'้ปถ': u'้ปก', u'้ปท': u'้ปฉ', u'้ปฝ': u'้ปพ', u'้ปฟ': u'้ผ‹', u'้ผ‰': u'้ผ', u'้ผ•': u'ๅ†ฌ', u'้ผด': u'้ผน', u'้ฝ‡': u'้ฝ„', u'้ฝŠ': u'้ฝ', u'้ฝ‹': u'ๆ–‹', u'้ฝŽ': u'่ต', u'้ฝ': u'้ฝ‘', u'้ฝ’': u'้ฝฟ', u'้ฝ”': u'้พ€', u'้ฝ•': u'้พ', u'้ฝ—': u'้พ‚', u'้ฝ™': u'้พ…', u'้ฝœ': u'้พ‡', u'้ฝŸ': u'้พƒ', u'้ฝ ': u'้พ†', u'้ฝก': u'้พ„', u'้ฝฃ': u'ๅ‡บ', u'้ฝฆ': u'้พˆ', u'้ฝช': u'้พŠ', u'้ฝฌ': u'้พ‰', u'้ฝฒ': u'้พ‹', u'้ฝถ': u'่…ญ', u'้ฝท': u'้พŒ', u'้พ': u'้พ™', u'้พŽ': u'ๅސ', u'้พ': u'ๅบž', u'้พ”': u'้พš', u'้พ•': u'้พ›', u'้พœ': u'้พŸ', u'๐กžต': u'ใ›Ÿ', u'๐ก น': u'ใ›ฟ', u'๐กขƒ': u'ใ› ', u'๐กป•': u'ๅฒ', u'๐คชบ': u'ใป˜', u'๐คซฉ': u'ใป', u'๐ฆช™': u'ไ‘ฝ', u'๐งœต': u'ไ™Š', u'๐งž': u'ไ˜›', u'๐งฆง': u'๐ซŸ', u'๐งฉ™': u'ไœฅ', u'๐งตณ': u'ไžŒ', u'๐จ‹ข': u'ไข‚', u'๐จฅ›': u'๐จฑ€', u'๐จฆซ': u'ไฆ€', u'๐จงœ': u'ไฆ', u'๐จงฑ': u'๐จฑŠ', u'๐จซ’': u'๐จฑ', u'๐จฎ‚': u'๐จฑ•', u'๐จฏ…': u'ไฅฟ', u'๐ฉŽข': u'๐ฉพ', u'๐ฉช': u'๐ฉฝ', u'๐ฉ“ฃ': u'๐ฉ–•', u'๐ฉ—€': u'๐ฉ™ฆ', u'๐ฉ—ก': u'๐ฉ™ง', u'๐ฉ˜€': u'๐ฉ™ฉ', u'๐ฉ˜': u'๐ฉ™ญ', u'๐ฉ˜น': u'๐ฉ™จ', u'๐ฉ˜บ': u'๐ฉ™ฌ', u'๐ฉ™ˆ': u'๐ฉ™ฐ', u'๐ฉœฆ': u'๐ฉ †', u'๐ฉ”': u'๐ฉ ‹', u'๐ฉžฏ': u'ไญช', u'๐ฉŸ': u'๐ฉ …', u'๐ฉกบ': u'๐ฉงฆ', u'๐ฉขก': u'๐ฉงฌ', u'๐ฉขด': u'๐ฉงต', u'๐ฉขธ': u'๐ฉงณ', u'๐ฉขพ': u'๐ฉงฎ', u'๐ฉฃ': u'๐ฉงถ', u'๐ฉฃ‘': u'ไฏƒ', u'๐ฉฃต': u'๐ฉงป', u'๐ฉฃบ': u'๐ฉงผ', u'๐ฉคŠ': u'๐ฉงฉ', u'๐ฉค™': u'๐ฉจ†', u'๐ฉคฒ': u'๐ฉจ‰', u'๐ฉคธ': u'๐ฉจ…', u'๐ฉฅ„': u'๐ฉจ‹', u'๐ฉฅ‡': u'๐ฉจ', u'๐ฉฅ‰': u'๐ฉงฑ', u'๐ฉฅ‘': u'๐ฉจŒ', u'๐ฉง†': u'๐ฉจ', u'๐ฉตฉ': u'๐ฉฝบ', u'๐ฉตน': u'๐ฉฝป', u'๐ฉถ˜': u'ไฒž', u'๐ฉถฐ': u'๐ฉฝฟ', u'๐ฉถฑ': u'๐ฉฝฝ', u'๐ฉทฐ': u'๐ฉพ„', u'๐ฉธƒ': u'๐ฉพ…', u'๐ฉธฆ': u'๐ฉพ†', u'๐ฉฝ‡': u'๐ฉพŽ', u'๐ฉฟช': u'๐ช‰„', u'๐ช€ฆ': u'๐ช‰…', u'๐ช€พ': u'๐ช‰‹', u'๐ชˆ': u'๐ช‰‰', u'๐ช–': u'๐ช‰Œ', u'๐ช‚†': u'๐ช‰Ž', u'๐ชƒ': u'๐ช‰', u'๐ชƒ': u'๐ช‰', 
u'๐ช„†': u'๐ช‰”', u'๐ช„•': u'๐ช‰’', u'๐ช‡ณ': u'๐ช‰•', u'๐ช˜€': u'๐ชš', u'๐ช˜ฏ': u'๐ชš', u'๐ซš’': u'่ปฟ', u'ใ€Šๆ˜“ไนพ': u'ใ€Šๆ˜“ไนพ', u'ไธ่‘—็—•่ทก': u'ไธ็€็—•่ฟน', u'ไธ่‘—้‚Š้š›': u'ไธ็€่พน้™…', u'่ˆ‡่‘—': u'ไธŽ็€', u'่ˆ‡่‘—ๆ›ธ': u'ไธŽ่‘—ไนฆ', u'่ˆ‡่‘—ไฝœ': u'ไธŽ่‘—ไฝœ', u'่ˆ‡่‘—ๅ': u'ไธŽ่‘—ๅ', u'่ˆ‡่‘—้Œ„': u'ไธŽ่‘—ๅฝ•', u'่ˆ‡่‘—็จฑ': u'ไธŽ่‘—็งฐ', u'่ˆ‡่‘—่€…': u'ไธŽ่‘—่€…', u'่ˆ‡่‘—่ฟฐ': u'ไธŽ่‘—่ฟฐ', u'ไธ‘่‘—': u'ไธ‘็€', u'ไธ‘่‘—ๆ›ธ': u'ไธ‘่‘—ไนฆ', u'ไธ‘่‘—ไฝœ': u'ไธ‘่‘—ไฝœ', u'ไธ‘่‘—ๅ': u'ไธ‘่‘—ๅ', u'ไธ‘่‘—้Œ„': u'ไธ‘่‘—ๅฝ•', u'ไธ‘่‘—็จฑ': u'ไธ‘่‘—็งฐ', u'ไธ‘่‘—่€…': u'ไธ‘่‘—่€…', u'ไธ‘่‘—่ฟฐ': u'ไธ‘่‘—่ฟฐ', u'ๅฐˆ่‘—': u'ไธ“่‘—', u'่‡จ่‘—': u'ไธด็€', u'่‡จ่‘—ๆ›ธ': u'ไธด่‘—ไนฆ', u'่‡จ่‘—ไฝœ': u'ไธด่‘—ไฝœ', u'่‡จ่‘—ๅ': u'ไธด่‘—ๅ', u'่‡จ่‘—้Œ„': u'ไธด่‘—ๅฝ•', u'่‡จ่‘—็จฑ': u'ไธด่‘—็งฐ', u'่‡จ่‘—่€…': u'ไธด่‘—่€…', u'่‡จ่‘—่ฟฐ': u'ไธด่‘—่ฟฐ', u'้บ—่‘—': u'ไธฝ็€', u'้บ—่‘—ๆ›ธ': u'ไธฝ่‘—ไนฆ', u'้บ—่‘—ไฝœ': u'ไธฝ่‘—ไฝœ', u'้บ—่‘—ๅ': u'ไธฝ่‘—ๅ', u'้บ—่‘—้Œ„': u'ไธฝ่‘—ๅฝ•', u'้บ—่‘—็จฑ': u'ไธฝ่‘—็งฐ', u'้บ—่‘—่€…': u'ไธฝ่‘—่€…', u'้บ—่‘—่ฟฐ': u'ไธฝ่‘—่ฟฐ', u'ๆจ‚่‘—': u'ไน็€', u'ๆจ‚่‘—ๆ›ธ': u'ไน่‘—ไนฆ', u'ๆจ‚่‘—ไฝœ': u'ไน่‘—ไฝœ', u'ๆจ‚่‘—ๅ': u'ไน่‘—ๅ', u'ๆจ‚่‘—้Œ„': u'ไน่‘—ๅฝ•', u'ๆจ‚่‘—็จฑ': u'ไน่‘—็งฐ', u'ๆจ‚่‘—่€…': u'ไน่‘—่€…', u'ๆจ‚่‘—่ฟฐ': u'ไน่‘—่ฟฐ', u'ไน˜่‘—': u'ไน˜็€', u'ไน˜่‘—ๆ›ธ': u'ไน˜่‘—ไนฆ', u'ไน˜่‘—ไฝœ': u'ไน˜่‘—ไฝœ', u'ไน˜่‘—ๅ': u'ไน˜่‘—ๅ', u'ไน˜่‘—้Œ„': u'ไน˜่‘—ๅฝ•', u'ไน˜่‘—็จฑ': u'ไน˜่‘—็งฐ', u'ไน˜่‘—่€…': u'ไน˜่‘—่€…', u'ไน˜่‘—่ฟฐ': u'ไน˜่‘—่ฟฐ', u'ไนพไธ€ๅ›': u'ไนพไธ€ๅ›', u'ไนพไธ€ๅฃ‡': u'ไนพไธ€ๅ›', u'ไนพไธ€็ป„': u'ไนพไธ€็ป„', u'ไนพไธ€็ต„': u'ไนพไธ€็ป„', u'ไนพไธŠไนพไธ‹': u'ไนพไธŠไนพไธ‹', u'ไนพ็‚บๅคฉ': u'ไนพไธบๅคฉ', u'ไนพ็‚บ้™ฝ': u'ไนพไธบ้˜ณ', u'ไนพไน': u'ไนพไน', u'ไนพไนพ': u'ไนพไนพ', u'ไนพไบจ': u'ไนพไบจ', u'ไนพๅ„€': u'ไนพไปช', u'ไนพไปช': u'ไนพไปช', u'ไนพไฝ': u'ไนพไฝ', u'ไนพๅฅ': u'ไนพๅฅ', u'ไนพๅฅไนŸ': u'ไนพๅฅไนŸ', u'ไนพๅ…ƒ': u'ไนพๅ…ƒ', u'ไนพๅ…‰': u'ไนพๅ…‰', u'ไนพๅ…ด': u'ไนพๅ…ด', u'ไนพ่ˆˆ': 
u'ไนพๅ…ด', u'ไนพๅ†ˆ': u'ไนพๅ†ˆ', u'ไนพๅฒก': u'ไนพๅ†ˆ', u'ไนพๅЉ': u'ไนพๅˆ˜', u'ไนพๅˆ˜': u'ไนพๅˆ˜', u'ไนพๅ‰›': u'ไนพๅˆš', u'ไนพๅˆš': u'ไนพๅˆš', u'ไนพๅ‹™': u'ไนพๅŠก', u'ไนพๅŠก': u'ไนพๅŠก', u'ไนพๅŒ–': u'ไนพๅŒ–', u'ไนพๅฆ': u'ไนพๅฆ', u'ไนพๅŽฟ': u'ไนพๅŽฟ', u'ไนพ็ธฃ': u'ไนพๅŽฟ', u'ไนพๅฐ': u'ไนพๅฐ', u'ไนพๅ‰': u'ไนพๅ‰', u'ไนพๅ•Ÿ': u'ไนพๅฏ', u'ไนพๅฏ': u'ไนพๅฏ', u'ไนพๅ‘ฝ': u'ไนพๅ‘ฝ', u'ไนพๅ’Œ': u'ไนพๅ’Œ', u'ไนพๅ˜‰': u'ไนพๅ˜‰', u'ไนพๅœ–': u'ไนพๅ›พ', u'ไนพๅ›พ': u'ไนพๅ›พ', u'ไนพๅค': u'ไนพๅค', u'ไนพๅŸŽ': u'ไนพๅŸŽ', u'ไนพๅŸบ': u'ไนพๅŸบ', u'ไนพๅคฉไนŸ': u'ไนพๅคฉไนŸ', u'ไนพๅง‹': u'ไนพๅง‹', u'ไนพๅง“': u'ไนพๅง“', u'ไนพๅฏง': u'ไนพๅฎ', u'ไนพๅฎ': u'ไนพๅฎ', u'ไนพๅฎ…': u'ไนพๅฎ…', u'ไนพๅฎ‡': u'ไนพๅฎ‡', u'ไนพๅฎ‰': u'ไนพๅฎ‰', u'ไนพๅฎš': u'ไนพๅฎš', u'ไนพๅฐ': u'ไนพๅฐ', u'ไนพๅฑ…': u'ไนพๅฑ…', u'ไนพๅด—': u'ไนพๅฒ—', u'ไนพๅฒ—': u'ไนพๅฒ—', u'ไนพๅท›': u'ไนพๅท›', u'ไนพๅทž': u'ไนพๅทž', u'ไนพๅผ': u'ไนพๅผ', u'ไนพ้Œ„': u'ไนพๅฝ•', u'ไนพๅฝ•': u'ไนพๅฝ•', u'ไนพๅพ‹': u'ไนพๅพ‹', u'ไนพๅพท': u'ไนพๅพท', u'ไนพๅฟƒ': u'ไนพๅฟƒ', u'ไนพๅฟ ': u'ไนพๅฟ ', u'ไนพๆ–‡': u'ไนพๆ–‡', u'ไนพๆ–ท': u'ไนพๆ–ญ', u'ไนพๆ–ญ': u'ไนพๆ–ญ', u'ไนพๆ–น': u'ไนพๆ–น', u'ไนพๆ–ฝ': u'ไนพๆ–ฝ', u'ไนพๆ—ฆ': u'ไนพๆ—ฆ', u'ไนพๆ˜Ž': u'ไนพๆ˜Ž', u'ไนพๆ˜ง': u'ไนพๆ˜ง', u'ไนพๆš‰': u'ไนพๆ™–', u'ไนพๆ™–': u'ไนพๆ™–', u'ไนพๆ™ฏ': u'ไนพๆ™ฏ', u'ไนพๆ™ท': u'ไนพๆ™ท', u'ไนพๆ›œ': u'ไนพๆ›œ', u'ไนพๆž„': u'ไนพๆž„', u'ไนพๆง‹': u'ไนพๆž„', u'ไนพๆžข': u'ไนพๆžข', u'ไนพๆจž': u'ไนพๆžข', u'ไนพๆ ‹': u'ไนพๆ ‹', u'ไนพๆฃŸ': u'ไนพๆ ‹', u'ไนพๆญฅ': u'ไนพๆญฅ', u'ไนพๆฐ': u'ไนพๆฐ', u'ไนพๆฒ“ๅ’Œ': u'ไนพๆฒ“ๅ’Œ', u'ไนพๆฒ“ๅฉ†': u'ไนพๆฒ“ๅฉ†', u'ไนพๆณ‰': u'ไนพๆณ‰', u'ไนพๆทณ': u'ไนพๆทณ', u'ไนพๆธ…ๅฎฎ': u'ไนพๆธ…ๅฎซ', u'ไนพๆธ…ๅฎซ': u'ไนพๆธ…ๅฎซ', u'ไนพๆธฅ': u'ไนพๆธฅ', u'ไนพ้ˆ': u'ไนพ็ต', u'ไนพ็ต': u'ไนพ็ต', u'ไนพ็”ท': u'ไนพ็”ท', u'ไนพ็š‹': u'ไนพ็š‹', u'ไนพ็››ไธ–': u'ไนพ็››ไธ–', u'ไนพ็Ÿข': u'ไนพ็Ÿข', u'ไนพ็ฅ': u'ไนพ็ฅ', u'ไนพ็ฉน': u'ไนพ็ฉน', u'ไนพ็ซ‡': u'ไนพ็ชฆ', u'ไนพ็ชฆ': u'ไนพ็ชฆ', u'ไนพ็ซบ': u'ไนพ็ซบ', u'ไนพ็ฏค': u'ไนพ็ฌƒ', u'ไนพ็ฌƒ': u'ไนพ็ฌƒ', u'ไนพ็ฌฆ': u'ไนพ็ฌฆ', u'ไนพ็ญ–': 
u'ไนพ็ญ–', u'ไนพ็ฒพ': u'ไนพ็ฒพ', u'ไนพ็ด…': u'ไนพ็บข', u'ไนพ็บข': u'ไนพ็บข', u'ไนพ็ถฑ': u'ไนพ็บฒ', u'ไนพ็บฒ': u'ไนพ็บฒ', u'ไนพ็บฝ': u'ไนพ็บฝ', u'ไนพ็ด': u'ไนพ็บฝ', u'ไนพ็ตก': u'ไนพ็ปœ', u'ไนพ็ปœ': u'ไนพ็ปœ', u'ไนพ็ตฑ': u'ไนพ็ปŸ', u'ไนพ็ปŸ': u'ไนพ็ปŸ', u'ไนพ็ถญ': u'ไนพ็ปด', u'ไนพ็ปด': u'ไนพ็ปด', u'ไนพ็พ…': u'ไนพ็ฝ—', u'ไนพ็ฝ—': u'ไนพ็ฝ—', u'ไนพ่Šฑ': u'ไนพ่Šฑ', u'ไนพ่”ญ': u'ไนพ่ซ', u'ไนพ่ซ': u'ไนพ่ซ', u'ไนพ่กŒ': u'ไนพ่กŒ', u'ไนพ่กก': u'ไนพ่กก', u'ไนพ่ฆ†': u'ไนพ่ฆ†', u'ไนพ่ฑก': u'ไนพ่ฑก', u'ไนพ่ฑกๆญท': u'ไนพ่ฑกๅކ', u'ไนพ่ฑกๅކ': u'ไนพ่ฑกๅކ', u'ไนพ่ดž': u'ไนพ่ดž', u'ไนพ่ฒž': u'ไนพ่ดž', u'ไนพ่ฒบ': u'ไนพ่ดถ', u'ไนพ่ดถ': u'ไนพ่ดถ', u'ไนพ่ฝฆ': u'ไนพ่ฝฆ', u'ไนพ่ปŠ': u'ไนพ่ฝฆ', u'ไนพ่ฝด': u'ไนพ่ฝด', u'ไนพ่ปธ': u'ไนพ่ฝด', u'ไนพ้€š': u'ไนพ้€š', u'ไนพ้€ ': u'ไนพ้€ ', u'ไนพ้“': u'ไนพ้“', u'ไนพ้‘’': u'ไนพ้‰ด', u'ไนพ้‰ด': u'ไนพ้‰ด', u'ไนพ้’ง': u'ไนพ้’ง', u'ไนพ้ˆž': u'ไนพ้’ง', u'ไนพ้—ผ': u'ไนพ้—ผ', u'ไนพ้—ฅ': u'ไนพ้—ผ', u'ไนพ้™€': u'ไนพ้™€', u'ไนพ้™ต': u'ไนพ้™ต', u'ไนพ้š†': u'ไนพ้š†', u'ไนพ้Ÿณ': u'ไนพ้Ÿณ', u'ไนพ้กพ': u'ไนพ้กพ', u'ไนพ้กง': u'ไนพ้กพ', u'ไนพ้ฃŽ': u'ไนพ้ฃŽ', u'ไนพ้ขจ': u'ไนพ้ฃŽ', u'ไนพ้ฆ–': u'ไนพ้ฆ–', u'ไนพ้ฆฌ': u'ไนพ้ฉฌ', u'ไนพ้ฉฌ': u'ไนพ้ฉฌ', u'ไนพ้ต ': u'ไนพ้น„', u'ไนพ้น„': u'ไนพ้น„', u'ไนพ้ตฒ': u'ไนพ้นŠ', u'ไนพ้นŠ': u'ไนพ้นŠ', u'ไนพ้พ': u'ไนพ้พ™', u'ไนพ้พ™': u'ไนพ้พ™', u'ไนพ๏ผŒๅฅไนŸ': u'ไนพ๏ผŒๅฅไนŸ', u'ไนพ๏ผŒๅคฉไนŸ': u'ไนพ๏ผŒๅคฉไนŸ', u'็ˆญ่‘—': u'ไบ‰็€', u'็ˆญ่‘—ๆ›ธ': u'ไบ‰่‘—ไนฆ', u'็ˆญ่‘—ไฝœ': u'ไบ‰่‘—ไฝœ', u'็ˆญ่‘—ๅ': u'ไบ‰่‘—ๅ', u'็ˆญ่‘—้Œ„': u'ไบ‰่‘—ๅฝ•', u'็ˆญ่‘—็จฑ': u'ไบ‰่‘—็งฐ', u'็ˆญ่‘—่€…': u'ไบ‰่‘—่€…', u'็ˆญ่‘—่ฟฐ': u'ไบ‰่‘—่ฟฐ', u'ไบ”็ฎ‡ๅฑฑ': u'ไบ”็ฎ‡ๅฑฑ', u'ไบฎ่‘—': u'ไบฎ็€', u'ไบฎ่‘—ๆ›ธ': u'ไบฎ่‘—ไนฆ', u'ไบฎ่‘—ไฝœ': u'ไบฎ่‘—ไฝœ', u'ไบฎ่‘—ๅ': u'ไบฎ่‘—ๅ', u'ไบฎ่‘—้Œ„': u'ไบฎ่‘—ๅฝ•', u'ไบฎ่‘—็จฑ': u'ไบฎ่‘—็งฐ', u'ไบฎ่‘—่€…': u'ไบฎ่‘—่€…', u'ไบฎ่‘—่ฟฐ': u'ไบฎ่‘—่ฟฐ', u'ไป—่‘—': u'ไป—็€', u'ไป—่‘—ๆ›ธ': u'ไป—่‘—ไนฆ', u'ไป—่‘—ไฝœ': u'ไป—่‘—ไฝœ', u'ไป—่‘—ๅ': u'ไป—่‘—ๅ', u'ไป—่‘—้Œ„': u'ไป—่‘—ๅฝ•', u'ไป—่‘—็จฑ': u'ไป—่‘—็งฐ', 
u'ไป—่‘—่€…': u'ไป—่‘—่€…', u'ไป—่‘—่ฟฐ': u'ไป—่‘—่ฟฐ', u'ไปฃ่กจ่‘—': u'ไปฃ่กจ็€', u'ไปฃ่กจ่‘—ๆ›ธ': u'ไปฃ่กจ่‘—ไนฆ', u'ไปฃ่กจ่‘—ไฝœ': u'ไปฃ่กจ่‘—ไฝœ', u'ไปฃ่กจ่‘—ๅ': u'ไปฃ่กจ่‘—ๅ', u'ไปฃ่กจ่‘—้Œ„': u'ไปฃ่กจ่‘—ๅฝ•', u'ไปฃ่กจ่‘—็จฑ': u'ไปฃ่กจ่‘—็งฐ', u'ไปฃ่กจ่‘—่€…': u'ไปฃ่กจ่‘—่€…', u'ไปฃ่กจ่‘—่ฟฐ': u'ไปฃ่กจ่‘—่ฟฐ', u'ไปฅๅพฎ็Ÿฅ่‘—': u'ไปฅๅพฎ็Ÿฅ่‘—', u'ไปฐๅฑ‹่‘—ๆ›ธ': u'ไปฐๅฑ‹่‘—ไนฆ', u'ๅฝทๅฝฟ': u'ไปฟไฝ›', u'ๅคฅ่จˆ': u'ไผ™่ฎก', u'ๅ‚ณ่‘—': u'ไผ ็€', u'ๅ‚ณ่‘—ๆ›ธ': u'ไผ ่‘—ไนฆ', u'ๅ‚ณ่‘—ไฝœ': u'ไผ ่‘—ไฝœ', u'ๅ‚ณ่‘—ๅ': u'ไผ ่‘—ๅ', u'ๅ‚ณ่‘—้Œ„': u'ไผ ่‘—ๅฝ•', u'ๅ‚ณ่‘—็จฑ': u'ไผ ่‘—็งฐ', u'ๅ‚ณ่‘—่€…': u'ไผ ่‘—่€…', u'ๅ‚ณ่‘—่ฟฐ': u'ไผ ่‘—่ฟฐ', u'ไผด่‘—': u'ไผด็€', u'ไผด่‘—ๆ›ธ': u'ไผด่‘—ไนฆ', u'ไผด่‘—ไฝœ': u'ไผด่‘—ไฝœ', u'ไผด่‘—ๅ': u'ไผด่‘—ๅ', u'ไผด่‘—้Œ„': u'ไผด่‘—ๅฝ•', u'ไผด่‘—็จฑ': u'ไผด่‘—็งฐ', u'ไผด่‘—่€…': u'ไผด่‘—่€…', u'ไผด่‘—่ฟฐ': u'ไผด่‘—่ฟฐ', u'ไฝŽ่‘—': u'ไฝŽ็€', u'ไฝŽ่‘—ๆ›ธ': u'ไฝŽ่‘—ไนฆ', u'ไฝŽ่‘—ไฝœ': u'ไฝŽ่‘—ไฝœ', u'ไฝŽ่‘—ๅ': u'ไฝŽ่‘—ๅ', u'ไฝŽ่‘—้Œ„': u'ไฝŽ่‘—ๅฝ•', u'ไฝŽ่‘—็จฑ': u'ไฝŽ่‘—็งฐ', u'ไฝŽ่‘—่€…': u'ไฝŽ่‘—่€…', u'ไฝŽ่‘—่ฟฐ': u'ไฝŽ่‘—่ฟฐ', u'ไฝ่‘—': u'ไฝ็€', u'ไฝ่‘—ๆ›ธ': u'ไฝ่‘—ไนฆ', u'ไฝ่‘—ไฝœ': u'ไฝ่‘—ไฝœ', u'ไฝ่‘—ๅ': u'ไฝ่‘—ๅ', u'ไฝ่‘—้Œ„': u'ไฝ่‘—ๅฝ•', u'ไฝ่‘—็จฑ': u'ไฝ่‘—็งฐ', u'ไฝ่‘—่€…': u'ไฝ่‘—่€…', u'ไฝ่‘—่ฟฐ': u'ไฝ่‘—่ฟฐ', u'ไฝ›้ ญ่‘—็ณž': u'ไฝ›ๅคด่‘—็ฒช', u'ไพๅ„ธ็ด€': u'ไพ็ฝ—็บช', u'ๅด่‘—': u'ไพง็€', u'ๅด่‘—ๆ›ธ': u'ไพง่‘—ไนฆ', u'ๅด่‘—ไฝœ': u'ไพง่‘—ไฝœ', u'ๅด่‘—ๅ': u'ไพง่‘—ๅ', u'ๅด่‘—้Œ„': u'ไพง่‘—ๅฝ•', u'ๅด่‘—็จฑ': u'ไพง่‘—็งฐ', u'ๅด่‘—่€…': u'ไพง่‘—่€…', u'ๅด่‘—่ฟฐ': u'ไพง่‘—่ฟฐ', u'ไฟ่ญท่‘—': u'ไฟๆŠค็€', u'ไฟ้šœ่‘—': u'ไฟ้šœ็€', u'ไฟ้šœ่‘—ๆ›ธ': u'ไฟ้šœ่‘—ไนฆ', u'ไฟ้šœ่‘—ไฝœ': u'ไฟ้šœ่‘—ไฝœ', u'ไฟ้šœ่‘—ๅ': u'ไฟ้šœ่‘—ๅ', u'ไฟ้šœ่‘—้Œ„': u'ไฟ้šœ่‘—ๅฝ•', u'ไฟ้šœ่‘—็จฑ': u'ไฟ้šœ่‘—็งฐ', u'ไฟ้šœ่‘—่€…': u'ไฟ้šœ่‘—่€…', u'ไฟ้šœ่‘—่ฟฐ': u'ไฟ้šœ่‘—่ฟฐ', u'ไฟก่‘—': u'ไฟก็€', u'ไฟก่‘—ๆ›ธ': u'ไฟก่‘—ไนฆ', u'ไฟก่‘—ไฝœ': u'ไฟก่‘—ไฝœ', u'ไฟก่‘—ๅ': u'ไฟก่‘—ๅ', u'ไฟก่‘—้Œ„': 
u'ไฟก่‘—ๅฝ•', u'ไฟก่‘—็จฑ': u'ไฟก่‘—็งฐ', u'ไฟก่‘—่€…': u'ไฟก่‘—่€…', u'ไฟก่‘—่ฟฐ': u'ไฟก่‘—่ฟฐ', u'ไฟฎ้Š': u'ไฟฎ็‚ผ', u'ๅ€™่‘—': u'ๅ€™็€', u'ๅ€™่‘—ๆ›ธ': u'ๅ€™่‘—ไนฆ', u'ๅ€™่‘—ไฝœ': u'ๅ€™่‘—ไฝœ', u'ๅ€™่‘—ๅ': u'ๅ€™่‘—ๅ', u'ๅ€™่‘—้Œ„': u'ๅ€™่‘—ๅฝ•', u'ๅ€™่‘—็จฑ': u'ๅ€™่‘—็งฐ', u'ๅ€™่‘—่€…': u'ๅ€™่‘—่€…', u'ๅ€™่‘—่ฟฐ': u'ๅ€™่‘—่ฟฐ', u'่—‰ๅŠฉ': u'ๅ€ŸๅŠฉ', u'่—‰ๅฃ': u'ๅ€Ÿๅฃ', u'่—‰ๆ‰‹': u'ๅ€Ÿๆ‰‹', u'่—‰ๆ•…': u'ๅ€Ÿๆ•…', u'่—‰ๆฉŸ': u'ๅ€Ÿๆœบ', u'่—‰ๆญค': u'ๅ€Ÿๆญค', u'่—‰็”ฑ': u'ๅ€Ÿ็”ฑ', u'ๅ€Ÿ่‘—': u'ๅ€Ÿ็€', u'่—‰็€': u'ๅ€Ÿ็€', u'่—‰่‘—': u'ๅ€Ÿ็€', u'่—‰็ซฏ': u'ๅ€Ÿ็ซฏ', u'ๅ€Ÿ่‘—ๆ›ธ': u'ๅ€Ÿ่‘—ไนฆ', u'ๅ€Ÿ่‘—ไฝœ': u'ๅ€Ÿ่‘—ไฝœ', u'ๅ€Ÿ่‘—ๅ': u'ๅ€Ÿ่‘—ๅ', u'ๅ€Ÿ่‘—้Œ„': u'ๅ€Ÿ่‘—ๅฝ•', u'ๅ€Ÿ่‘—็จฑ': u'ๅ€Ÿ่‘—็งฐ', u'ๅ€Ÿ่‘—่€…': u'ๅ€Ÿ่‘—่€…', u'ๅ€Ÿ่‘—่ฟฐ': u'ๅ€Ÿ่‘—่ฟฐ', u'่—‰่ฉž': u'ๅ€Ÿ่ฏ', u'ๅš่‘—': u'ๅš็€', u'ๅš่‘—ๆ›ธ': u'ๅš่‘—ไนฆ', u'ๅš่‘—ไฝœ': u'ๅš่‘—ไฝœ', u'ๅš่‘—ๅ': u'ๅš่‘—ๅ', u'ๅš่‘—้Œ„': u'ๅš่‘—ๅฝ•', u'ๅš่‘—็จฑ': u'ๅš่‘—็งฐ', u'ๅš่‘—่€…': u'ๅš่‘—่€…', u'ๅš่‘—่ฟฐ': u'ๅš่‘—่ฟฐ', u'ๅท่‘—': u'ๅท็€', u'ๅท่‘—ๆ›ธ': u'ๅท่‘—ไนฆ', u'ๅท่‘—ไฝœ': u'ๅท่‘—ไฝœ', u'ๅท่‘—ๅ': u'ๅท่‘—ๅ', u'ๅท่‘—้Œ„': u'ๅท่‘—ๅฝ•', u'ๅท่‘—็จฑ': u'ๅท่‘—็งฐ', u'ๅท่‘—่€…': u'ๅท่‘—่€…', u'ๅท่‘—่ฟฐ': u'ๅท่‘—่ฟฐ', u'ๅ‚ขไฟฌ': u'ๅ‚ขไฟฌ', u'ๅ…‰่‘—': u'ๅ…‰็€', u'ๅ…‰่‘—ๆ›ธ': u'ๅ…‰่‘—ไนฆ', u'ๅ…‰่‘—ไฝœ': u'ๅ…‰่‘—ไฝœ', u'ๅ…‰่‘—ๅ': u'ๅ…‰่‘—ๅ', u'ๅ…‰่‘—้Œ„': u'ๅ…‰่‘—ๅฝ•', u'ๅ…‰่‘—็จฑ': u'ๅ…‰่‘—็งฐ', u'ๅ…‰่‘—่€…': u'ๅ…‰่‘—่€…', u'ๅ…‰่‘—่ฟฐ': u'ๅ…‰่‘—่ฟฐ', u'้—œ่‘—': u'ๅ…ณ็€', u'้—œ่‘—ๆ›ธ': u'ๅ…ณ่‘—ไนฆ', u'้—œ่‘—ไฝœ': u'ๅ…ณ่‘—ไฝœ', u'้—œ่‘—ๅ': u'ๅ…ณ่‘—ๅ', u'้—œ่‘—้Œ„': u'ๅ…ณ่‘—ๅฝ•', u'้—œ่‘—็จฑ': u'ๅ…ณ่‘—็งฐ', u'้—œ่‘—่€…': u'ๅ…ณ่‘—่€…', u'้—œ่‘—่ฟฐ': u'ๅ…ณ่‘—่ฟฐ', u'ๅ†€่‘—': u'ๅ†€็€', u'ๅ†€่‘—ๆ›ธ': u'ๅ†€่‘—ไนฆ', u'ๅ†€่‘—ไฝœ': u'ๅ†€่‘—ไฝœ', u'ๅ†€่‘—ๅ': u'ๅ†€่‘—ๅ', u'ๅ†€่‘—้Œ„': u'ๅ†€่‘—ๅฝ•', u'ๅ†€่‘—็จฑ': u'ๅ†€่‘—็งฐ', u'ๅ†€่‘—่€…': u'ๅ†€่‘—่€…', u'ๅ†€่‘—่ฟฐ': u'ๅ†€่‘—่ฟฐ', u'ๅ†’่‘—': u'ๅ†’็€', u'ๅ†’่‘—ๆ›ธ': u'ๅ†’่‘—ไนฆ', u'ๅ†’่‘—ไฝœ': u'ๅ†’่‘—ไฝœ', 
u'ๅ†’่‘—ๅ': u'ๅ†’่‘—ๅ', u'ๅ†’่‘—้Œ„': u'ๅ†’่‘—ๅฝ•', u'ๅ†’่‘—็จฑ': u'ๅ†’่‘—็งฐ', u'ๅ†’่‘—่€…': u'ๅ†’่‘—่€…', u'ๅ†’่‘—่ฟฐ': u'ๅ†’่‘—่ฟฐ', u'ๅฏซ่‘—': u'ๅ†™็€', u'ๅฏซ่‘—ๆ›ธ': u'ๅ†™่‘—ไนฆ', u'ๅฏซ่‘—ไฝœ': u'ๅ†™่‘—ไฝœ', u'ๅฏซ่‘—ๅ': u'ๅ†™่‘—ๅ', u'ๅฏซ่‘—้Œ„': u'ๅ†™่‘—ๅฝ•', u'ๅฏซ่‘—็จฑ': u'ๅ†™่‘—็งฐ', u'ๅฏซ่‘—่€…': u'ๅ†™่‘—่€…', u'ๅฏซ่‘—่ฟฐ': u'ๅ†™่‘—่ฟฐ', u'ๆถผ่‘—': u'ๅ‡‰็€', u'ๆถผ่‘—ๆ›ธ': u'ๅ‡‰่‘—ไนฆ', u'ๆถผ่‘—ไฝœ': u'ๅ‡‰่‘—ไฝœ', u'ๆถผ่‘—ๅ': u'ๅ‡‰่‘—ๅ', u'ๆถผ่‘—้Œ„': u'ๅ‡‰่‘—ๅฝ•', u'ๆถผ่‘—็จฑ': u'ๅ‡‰่‘—็งฐ', u'ๆถผ่‘—่€…': u'ๅ‡‰่‘—่€…', u'ๆถผ่‘—่ฟฐ': u'ๅ‡‰่‘—่ฟฐ', u'ๆ†‘่—‰': u'ๅ‡ญๅ€Ÿ', u'ๅˆถ่‘—': u'ๅˆถ็€', u'ๅˆถ่‘—ๆ›ธ': u'ๅˆถ่‘—ไนฆ', u'ๅˆถ่‘—ไฝœ': u'ๅˆถ่‘—ไฝœ', u'ๅˆถ่‘—ๅ': u'ๅˆถ่‘—ๅ', u'ๅˆถ่‘—้Œ„': u'ๅˆถ่‘—ๅฝ•', u'ๅˆถ่‘—็จฑ': u'ๅˆถ่‘—็งฐ', u'ๅˆถ่‘—่€…': u'ๅˆถ่‘—่€…', u'ๅˆถ่‘—่ฟฐ': u'ๅˆถ่‘—่ฟฐ', u'ๅˆป่‘—': u'ๅˆป็€', u'ๅˆป่‘—ๆ›ธ': u'ๅˆป่‘—ไนฆ', u'ๅˆป่‘—ไฝœ': u'ๅˆป่‘—ไฝœ', u'ๅˆป่‘—ๅ': u'ๅˆป่‘—ๅ', u'ๅˆป่‘—้Œ„': u'ๅˆป่‘—ๅฝ•', u'ๅˆป่‘—็จฑ': u'ๅˆป่‘—็งฐ', u'ๅˆป่‘—่€…': u'ๅˆป่‘—่€…', u'ๅˆป่‘—่ฟฐ': u'ๅˆป่‘—่ฟฐ', u'่พฆ่‘—': u'ๅŠž็€', u'่พฆ่‘—ๆ›ธ': u'ๅŠž่‘—ไนฆ', u'่พฆ่‘—ไฝœ': u'ๅŠž่‘—ไฝœ', u'่พฆ่‘—ๅ': u'ๅŠž่‘—ๅ', u'่พฆ่‘—้Œ„': u'ๅŠž่‘—ๅฝ•', u'่พฆ่‘—็จฑ': u'ๅŠž่‘—็งฐ', u'่พฆ่‘—่€…': u'ๅŠž่‘—่€…', u'่พฆ่‘—่ฟฐ': u'ๅŠž่‘—่ฟฐ', u'ๅ‹•่‘—': u'ๅŠจ็€', u'ๅ‹•่‘—ๆ›ธ': u'ๅŠจ่‘—ไนฆ', u'ๅ‹•่‘—ไฝœ': u'ๅŠจ่‘—ไฝœ', u'ๅ‹•่‘—ๅ': u'ๅŠจ่‘—ๅ', u'ๅ‹•่‘—้Œ„': u'ๅŠจ่‘—ๅฝ•', u'ๅ‹•่‘—็จฑ': u'ๅŠจ่‘—็งฐ', u'ๅ‹•่‘—่€…': u'ๅŠจ่‘—่€…', u'ๅ‹•่‘—่ฟฐ': u'ๅŠจ่‘—่ฟฐ', u'ๅŠชๅŠ›่‘—': u'ๅŠชๅŠ›็€', u'ๅŠชๅŠ›่‘—ๆ›ธ': u'ๅŠชๅŠ›่‘—ไนฆ', u'ๅŠชๅŠ›่‘—ไฝœ': u'ๅŠชๅŠ›่‘—ไฝœ', u'ๅŠชๅŠ›่‘—ๅ': u'ๅŠชๅŠ›่‘—ๅ', u'ๅŠชๅŠ›่‘—้Œ„': u'ๅŠชๅŠ›่‘—ๅฝ•', u'ๅŠชๅŠ›่‘—็จฑ': u'ๅŠชๅŠ›่‘—็งฐ', u'ๅŠชๅŠ›่‘—่€…': u'ๅŠชๅŠ›่‘—่€…', u'ๅŠชๅŠ›่‘—่ฟฐ': u'ๅŠชๅŠ›่‘—่ฟฐ', u'ๅŠช่‘—': u'ๅŠช็€', u'ๅŠช่‘—ๆ›ธ': u'ๅŠช่‘—ไนฆ', u'ๅŠช่‘—ไฝœ': u'ๅŠช่‘—ไฝœ', u'ๅŠช่‘—ๅ': u'ๅŠช่‘—ๅ', u'ๅŠช่‘—้Œ„': u'ๅŠช่‘—ๅฝ•', u'ๅŠช่‘—็จฑ': u'ๅŠช่‘—็งฐ', u'ๅŠช่‘—่€…': u'ๅŠช่‘—่€…', u'ๅŠช่‘—่ฟฐ': u'ๅŠช่‘—่ฟฐ', u'ๅ“่‘—': u'ๅ“่‘—', u'ๅฐ่‘—': u'ๅฐ็€', 
u'ๅฐ่‘—ๆ›ธ': u'ๅฐ่‘—ไนฆ', u'ๅฐ่‘—ไฝœ': u'ๅฐ่‘—ไฝœ', u'ๅฐ่‘—ๅ': u'ๅฐ่‘—ๅ', u'ๅฐ่‘—้Œ„': u'ๅฐ่‘—ๅฝ•', u'ๅฐ่‘—็จฑ': u'ๅฐ่‘—็งฐ', u'ๅฐ่‘—่€…': u'ๅฐ่‘—่€…', u'ๅฐ่‘—่ฟฐ': u'ๅฐ่‘—่ฟฐ', u'ๅท่ˆŒ': u'ๅท่ˆŒ', u'ๅฃ“่‘—': u'ๅŽ‹็€', u'ๅฃ“่‘—ๆ›ธ': u'ๅŽ‹่‘—ไนฆ', u'ๅฃ“่‘—ไฝœ': u'ๅŽ‹่‘—ไฝœ', u'ๅฃ“่‘—ๅ': u'ๅŽ‹่‘—ๅ', u'ๅฃ“่‘—้Œ„': u'ๅŽ‹่‘—ๅฝ•', u'ๅฃ“่‘—็จฑ': u'ๅŽ‹่‘—็งฐ', u'ๅฃ“่‘—่€…': u'ๅŽ‹่‘—่€…', u'ๅฃ“่‘—่ฟฐ': u'ๅŽ‹่‘—่ฟฐ', u'ๅŽŸ่‘—': u'ๅŽŸ่‘—', u'ๅŽป่‘—': u'ๅŽป็€', u'ๅŽป่‘—ๆ›ธ': u'ๅŽป่‘—ไนฆ', u'ๅŽป่‘—ไฝœ': u'ๅŽป่‘—ไฝœ', u'ๅŽป่‘—ๅ': u'ๅŽป่‘—ๅ', u'ๅŽป่‘—้Œ„': u'ๅŽป่‘—ๅฝ•', u'ๅŽป่‘—็จฑ': u'ๅŽป่‘—็งฐ', u'ๅŽป่‘—่€…': u'ๅŽป่‘—่€…', u'ๅŽป่‘—่ฟฐ': u'ๅŽป่‘—่ฟฐ', u'ๅๅ่ฆ†่ฆ†': u'ๅๅๅคๅค', u'ๅ่ฆ†': u'ๅๅค', u'ๅ—่‘—': u'ๅ—็€', u'ๅ—่‘—ๆ›ธ': u'ๅ—่‘—ไนฆ', u'ๅ—่‘—ไฝœ': u'ๅ—่‘—ไฝœ', u'ๅ—่‘—ๅ': u'ๅ—่‘—ๅ', u'ๅ—่‘—้Œ„': u'ๅ—่‘—ๅฝ•', u'ๅ—่‘—็จฑ': u'ๅ—่‘—็งฐ', u'ๅ—่‘—่€…': u'ๅ—่‘—่€…', u'ๅ—่‘—่ฟฐ': u'ๅ—่‘—่ฟฐ', u'่ฎŠ่‘—': u'ๅ˜็€', u'่ฎŠ่‘—ๆ›ธ': u'ๅ˜่‘—ไนฆ', u'่ฎŠ่‘—ไฝœ': u'ๅ˜่‘—ไฝœ', u'่ฎŠ่‘—ๅ': u'ๅ˜่‘—ๅ', u'่ฎŠ่‘—้Œ„': u'ๅ˜่‘—ๅฝ•', u'่ฎŠ่‘—็จฑ': u'ๅ˜่‘—็งฐ', u'่ฎŠ่‘—่€…': u'ๅ˜่‘—่€…', u'่ฎŠ่‘—่ฟฐ': u'ๅ˜่‘—่ฟฐ', u'ๅซ่‘—': u'ๅซ็€', u'ๅซ่‘—ๆ›ธ': u'ๅซ่‘—ไนฆ', u'ๅซ่‘—ไฝœ': u'ๅซ่‘—ไฝœ', u'ๅซ่‘—ๅ': u'ๅซ่‘—ๅ', u'ๅซ่‘—้Œ„': u'ๅซ่‘—ๅฝ•', u'ๅซ่‘—็จฑ': u'ๅซ่‘—็งฐ', u'ๅซ่‘—่€…': u'ๅซ่‘—่€…', u'ๅซ่‘—่ฟฐ': u'ๅซ่‘—่ฟฐ', u'ๅฏ็ฉฟ่‘—': u'ๅฏ็ฉฟ่‘—', u'ๅฑๅ’': u'ๅฑๅ’', u'ๅƒไธ่‘—': u'ๅƒไธ็€', u'ๅƒๅพ—่‘—': u'ๅƒๅพ—็€', u'ๅƒ่‘—': u'ๅƒ็€', u'ๅƒ่กฃ่‘—้ฃฏ': u'ๅƒ่กฃ่‘—้ฅญ', u'ๅˆ่‘—': u'ๅˆ่‘—', u'ๅ่‘—': u'ๅ่‘—', u'ๅ‘่‘—': u'ๅ‘็€', u'ๅ‘่‘—ๆ›ธ': u'ๅ‘่‘—ไนฆ', u'ๅ‘่‘—ไฝœ': u'ๅ‘่‘—ไฝœ', u'ๅ‘่‘—ๅ': u'ๅ‘่‘—ๅ', u'ๅ‘่‘—้Œ„': u'ๅ‘่‘—ๅฝ•', u'ๅ‘่‘—็จฑ': u'ๅ‘่‘—็งฐ', u'ๅ‘่‘—่€…': u'ๅ‘่‘—่€…', u'ๅ‘่‘—่ฟฐ': u'ๅ‘่‘—่ฟฐ', u'ๅซ่‘—': u'ๅซ็€', u'ๅซ่‘—ๆ›ธ': u'ๅซ่‘—ไนฆ', u'ๅซ่‘—ไฝœ': u'ๅซ่‘—ไฝœ', u'ๅซ่‘—ๅ': u'ๅซ่‘—ๅ', u'ๅซ่‘—้Œ„': u'ๅซ่‘—ๅฝ•', u'ๅซ่‘—็จฑ': u'ๅซ่‘—็งฐ', u'ๅซ่‘—่€…': u'ๅซ่‘—่€…', 
u'ๅซ่‘—่ฟฐ': u'ๅซ่‘—่ฟฐ', u'่ฝไธ่‘—': u'ๅฌไธ็€', u'่ฝๅพ—่‘—': u'ๅฌๅพ—็€', u'่ฝ่‘—': u'ๅฌ็€', u'่ฝ่‘—ๆ›ธ': u'ๅฌ่‘—ไนฆ', u'่ฝ่‘—ไฝœ': u'ๅฌ่‘—ไฝœ', u'่ฝ่‘—ๅ': u'ๅฌ่‘—ๅ', u'่ฝ่‘—้Œ„': u'ๅฌ่‘—ๅฝ•', u'่ฝ่‘—็จฑ': u'ๅฌ่‘—็งฐ', u'่ฝ่‘—่€…': u'ๅฌ่‘—่€…', u'่ฝ่‘—่ฟฐ': u'ๅฌ่‘—่ฟฐ', u'ๅดๅ…ถๆฟฌ': u'ๅดๅ…ถๆฟฌ', u'ๅณๅ…ถๆฟฌ': u'ๅดๅ…ถๆฟฌ', u'ๅน่‘—': u'ๅน็€', u'ๅน่‘—ๆ›ธ': u'ๅน่‘—ไนฆ', u'ๅน่‘—ไฝœ': u'ๅน่‘—ไฝœ', u'ๅน่‘—ๅ': u'ๅน่‘—ๅ', u'ๅน่‘—้Œ„': u'ๅน่‘—ๅฝ•', u'ๅน่‘—็จฑ': u'ๅน่‘—็งฐ', u'ๅน่‘—่€…': u'ๅน่‘—่€…', u'ๅน่‘—่ฟฐ': u'ๅน่‘—่ฟฐ', u'ๅ‘จๆ˜“ไนพ': u'ๅ‘จๆ˜“ไนพ', u'ๅ‘ณ่‘—': u'ๅ‘ณ็€', u'ๅ‘ณ่‘—ๆ›ธ': u'ๅ‘ณ่‘—ไนฆ', u'ๅ‘ณ่‘—ไฝœ': u'ๅ‘ณ่‘—ไฝœ', u'ๅ‘ณ่‘—ๅ': u'ๅ‘ณ่‘—ๅ', u'ๅ‘ณ่‘—้Œ„': u'ๅ‘ณ่‘—ๅฝ•', u'ๅ‘ณ่‘—็จฑ': u'ๅ‘ณ่‘—็งฐ', u'ๅ‘ณ่‘—่€…': u'ๅ‘ณ่‘—่€…', u'ๅ‘ณ่‘—่ฟฐ': u'ๅ‘ณ่‘—่ฟฐ', u'ๅ‘ผๅนบๅ–ๅ…ญ': u'ๅ‘ผๅนบๅ–ๅ…ญ', u'้Ÿฟ่‘—': u'ๅ“็€', u'้Ÿฟ่‘—ๆ›ธ': u'ๅ“่‘—ไนฆ', u'้Ÿฟ่‘—ไฝœ': u'ๅ“่‘—ไฝœ', u'้Ÿฟ่‘—ๅ': u'ๅ“่‘—ๅ', u'้Ÿฟ่‘—้Œ„': u'ๅ“่‘—ๅฝ•', u'้Ÿฟ่‘—็จฑ': u'ๅ“่‘—็งฐ', u'้Ÿฟ่‘—่€…': u'ๅ“่‘—่€…', u'้Ÿฟ่‘—่ฟฐ': u'ๅ“่‘—่ฟฐ', u'ๅ“ชๅ’': u'ๅ“ชๅ’', u'ๅ“ญ่‘—': u'ๅ“ญ็€', u'ๅ“ญ่‘—ๆ›ธ': u'ๅ“ญ่‘—ไนฆ', u'ๅ“ญ่‘—ไฝœ': u'ๅ“ญ่‘—ไฝœ', u'ๅ“ญ่‘—ๅ': u'ๅ“ญ่‘—ๅ', u'ๅ“ญ่‘—้Œ„': u'ๅ“ญ่‘—ๅฝ•', u'ๅ“ญ่‘—็จฑ': u'ๅ“ญ่‘—็งฐ', u'ๅ“ญ่‘—่€…': u'ๅ“ญ่‘—่€…', u'ๅ“ญ่‘—่ฟฐ': u'ๅ“ญ่‘—่ฟฐ', u'ๅ”ฑ่‘—': u'ๅ”ฑ็€', u'ๅ”ฑ่‘—ๆ›ธ': u'ๅ”ฑ่‘—ไนฆ', u'ๅ”ฑ่‘—ไฝœ': u'ๅ”ฑ่‘—ไฝœ', u'ๅ”ฑ่‘—ๅ': u'ๅ”ฑ่‘—ๅ', u'ๅ”ฑ่‘—้Œ„': u'ๅ”ฑ่‘—ๅฝ•', u'ๅ”ฑ่‘—็จฑ': u'ๅ”ฑ่‘—็งฐ', u'ๅ”ฑ่‘—่€…': u'ๅ”ฑ่‘—่€…', u'ๅ”ฑ่‘—่ฟฐ': u'ๅ”ฑ่‘—่ฟฐ', u'ๅ–่‘—': u'ๅ–็€', u'ๅ–่‘—ๆ›ธ': u'ๅ–่‘—ไนฆ', u'ๅ–่‘—ไฝœ': u'ๅ–่‘—ไฝœ', u'ๅ–่‘—ๅ': u'ๅ–่‘—ๅ', u'ๅ–่‘—้Œ„': u'ๅ–่‘—ๅฝ•', u'ๅ–่‘—็จฑ': u'ๅ–่‘—็งฐ', u'ๅ–่‘—่€…': u'ๅ–่‘—่€…', u'ๅ–่‘—่ฟฐ': u'ๅ–่‘—่ฟฐ', u'ๅ—…ไธ่‘—': u'ๅ—…ไธ็€', u'ๅ—…ๅพ—่‘—': u'ๅ—…ๅพ—็€', u'ๅ—…่‘—': u'ๅ—…็€', u'ๅšท่‘—': u'ๅšท็€', u'ๅšท่‘—ๆ›ธ': u'ๅšท่‘—ไนฆ', u'ๅšท่‘—ไฝœ': u'ๅšท่‘—ไฝœ', u'ๅšท่‘—ๅ': u'ๅšท่‘—ๅ', u'ๅšท่‘—้Œ„': u'ๅšท่‘—ๅฝ•', u'ๅšท่‘—็จฑ': u'ๅšท่‘—็งฐ', 
u'ๅšท่‘—่€…': u'ๅšท่‘—่€…', u'ๅšท่‘—่ฟฐ': u'ๅšท่‘—่ฟฐ', u'ๅ›ž่ฆ†': u'ๅ›žๅค', u'ๅ› ่‘—': u'ๅ› ็€', u'ๅ› ่‘—ใ€ˆ': u'ๅ› ่‘—ใ€ˆ', u'ๅ› ่‘—ใ€Š': u'ๅ› ่‘—ใ€Š', u'ๅ› ่‘—ๆ›ธ': u'ๅ› ่‘—ไนฆ', u'ๅ› ่‘—ไฝœ': u'ๅ› ่‘—ไฝœ', u'ๅ› ่‘—ๅ': u'ๅ› ่‘—ๅ', u'ๅ› ่‘—้Œ„': u'ๅ› ่‘—ๅฝ•', u'ๅ› ่‘—็จฑ': u'ๅ› ่‘—็งฐ', u'ๅ› ่‘—่€…': u'ๅ› ่‘—่€…', u'ๅ› ่‘—่ฟฐ': u'ๅ› ่‘—่ฟฐ', u'ๅ›ฐ่‘—': u'ๅ›ฐ็€', u'ๅ›ฐ่‘—ๆ›ธ': u'ๅ›ฐ่‘—ไนฆ', u'ๅ›ฐ่‘—ไฝœ': u'ๅ›ฐ่‘—ไฝœ', u'ๅ›ฐ่‘—ๅ': u'ๅ›ฐ่‘—ๅ', u'ๅ›ฐ่‘—้Œ„': u'ๅ›ฐ่‘—ๅฝ•', u'ๅ›ฐ่‘—็จฑ': u'ๅ›ฐ่‘—็งฐ', u'ๅ›ฐ่‘—่€…': u'ๅ›ฐ่‘—่€…', u'ๅ›ฐ่‘—่ฟฐ': u'ๅ›ฐ่‘—่ฟฐ', u'ๅœ่‘—': u'ๅ›ด็€', u'ๅœ่‘—ๆ›ธ': u'ๅ›ด่‘—ไนฆ', u'ๅœ่‘—ไฝœ': u'ๅ›ด่‘—ไฝœ', u'ๅœ่‘—ๅ': u'ๅ›ด่‘—ๅ', u'ๅœ่‘—้Œ„': u'ๅ›ด่‘—ๅฝ•', u'ๅœ่‘—็จฑ': u'ๅ›ด่‘—็งฐ', u'ๅœ่‘—่€…': u'ๅ›ด่‘—่€…', u'ๅœ่‘—่ฟฐ': u'ๅ›ด่‘—่ฟฐ', u'ๅœŸ่‘—': u'ๅœŸ่‘—', u'ๅœจ่‘—': u'ๅœจ็€', u'ๅœจ่‘—ๆ›ธ': u'ๅœจ่‘—ไนฆ', u'ๅœจ่‘—ไฝœ': u'ๅœจ่‘—ไฝœ', u'ๅœจ่‘—ๅ': u'ๅœจ่‘—ๅ', u'ๅœจ่‘—้Œ„': u'ๅœจ่‘—ๅฝ•', u'ๅœจ่‘—็จฑ': u'ๅœจ่‘—็งฐ', u'ๅœจ่‘—่€…': u'ๅœจ่‘—่€…', u'ๅœจ่‘—่ฟฐ': u'ๅœจ่‘—่ฟฐ', u'ๅ่‘—': u'ๅ็€', u'ๅ่‘—ๆ›ธ': u'ๅ่‘—ไนฆ', u'ๅ่‘—ไฝœ': u'ๅ่‘—ไฝœ', u'ๅ่‘—ๅ': u'ๅ่‘—ๅ', u'ๅ่‘—้Œ„': u'ๅ่‘—ๅฝ•', u'ๅ่‘—็จฑ': u'ๅ่‘—็งฐ', u'ๅ่‘—่€…': u'ๅ่‘—่€…', u'ๅ่‘—่ฟฐ': u'ๅ่‘—่ฟฐ', u'ๅคไนพ': u'ๅคไนพ', u'ๅ‚™่‘—': u'ๅค‡็€', u'ๅ‚™่‘—ๆ›ธ': u'ๅค‡่‘—ไนฆ', u'ๅ‚™่‘—ไฝœ': u'ๅค‡่‘—ไฝœ', u'ๅ‚™่‘—ๅ': u'ๅค‡่‘—ๅ', u'ๅ‚™่‘—้Œ„': u'ๅค‡่‘—ๅฝ•', u'ๅ‚™่‘—็จฑ': u'ๅค‡่‘—็งฐ', u'ๅ‚™่‘—่€…': u'ๅค‡่‘—่€…', u'ๅ‚™่‘—่ฟฐ': u'ๅค‡่‘—่ฟฐ', u'ๅคฉ้“ไธบไนพ': u'ๅคฉ้“ไธบไนพ', u'ๅคฉ้“็‚บไนพ': u'ๅคฉ้“ไธบไนพ', u'ๅคพ่‘—': u'ๅคน็€', u'ๅคพ่‘—ๆ›ธ': u'ๅคน่‘—ไนฆ', u'ๅคพ่‘—ไฝœ': u'ๅคน่‘—ไฝœ', u'ๅคพ่‘—ๅ': u'ๅคน่‘—ๅ', u'ๅคพ่‘—้Œ„': u'ๅคน่‘—ๅฝ•', u'ๅคพ่‘—็จฑ': u'ๅคน่‘—็งฐ', u'ๅคพ่‘—่€…': u'ๅคน่‘—่€…', u'ๅคพ่‘—่ฟฐ': u'ๅคน่‘—่ฟฐ', u'ๅฅงๅ€': u'ๅฅงๅŒบ', u'ๅง“ๅนบ': u'ๅง“ๅนบ', u'ๅญ˜ๆ‘บ': u'ๅญ˜ๆ‘บ', u'ๅญค่‘—': u'ๅญค็€', u'ๅญค่‘—ๆ›ธ': u'ๅญค่‘—ไนฆ', u'ๅญค่‘—ไฝœ': u'ๅญค่‘—ไฝœ', u'ๅญค่‘—ๅ': u'ๅญค่‘—ๅ', u'ๅญค่‘—้Œ„': u'ๅญค่‘—ๅฝ•', u'ๅญค่‘—็จฑ': u'ๅญค่‘—็งฐ', 
u'ๅญค่‘—่€…': u'ๅญค่‘—่€…', u'ๅญค่‘—่ฟฐ': u'ๅญค่‘—่ฟฐ', u'ๅญธ่‘—': u'ๅญฆ็€', u'ๅญธ่‘—ๆ›ธ': u'ๅญฆ่‘—ไนฆ', u'ๅญธ่‘—ไฝœ': u'ๅญฆ่‘—ไฝœ', u'ๅญธ่‘—ๅ': u'ๅญฆ่‘—ๅ', u'ๅญธ่‘—้Œ„': u'ๅญฆ่‘—ๅฝ•', u'ๅญธ่‘—็จฑ': u'ๅญฆ่‘—็งฐ', u'ๅญธ่‘—่€…': u'ๅญฆ่‘—่€…', u'ๅญธ่‘—่ฟฐ': u'ๅญฆ่‘—่ฟฐ', u'ๅฎˆ่‘—': u'ๅฎˆ็€', u'ๅฎˆ่‘—ๆ›ธ': u'ๅฎˆ่‘—ไนฆ', u'ๅฎˆ่‘—ไฝœ': u'ๅฎˆ่‘—ไฝœ', u'ๅฎˆ่‘—ๅ': u'ๅฎˆ่‘—ๅ', u'ๅฎˆ่‘—้Œ„': u'ๅฎˆ่‘—ๅฝ•', u'ๅฎˆ่‘—็จฑ': u'ๅฎˆ่‘—็งฐ', u'ๅฎˆ่‘—่€…': u'ๅฎˆ่‘—่€…', u'ๅฎˆ่‘—่ฟฐ': u'ๅฎˆ่‘—่ฟฐ', u'ๅฎš่‘—': u'ๅฎš็€', u'ๅฎš่‘—ๆ›ธ': u'ๅฎš่‘—ไนฆ', u'ๅฎš่‘—ไฝœ': u'ๅฎš่‘—ไฝœ', u'ๅฎš่‘—ๅ': u'ๅฎš่‘—ๅ', u'ๅฎš่‘—้Œ„': u'ๅฎš่‘—ๅฝ•', u'ๅฎš่‘—็จฑ': u'ๅฎš่‘—็งฐ', u'ๅฎš่‘—่€…': u'ๅฎš่‘—่€…', u'ๅฎš่‘—่ฟฐ': u'ๅฎš่‘—่ฟฐ', u'ๅฐ่‘—': u'ๅฏน็€', u'ๅฐ่‘—ๆ›ธ': u'ๅฏน่‘—ไนฆ', u'ๅฐ่‘—ไฝœ': u'ๅฏน่‘—ไฝœ', u'ๅฐ่‘—ๅ': u'ๅฏน่‘—ๅ', u'ๅฐ่‘—้Œ„': u'ๅฏน่‘—ๅฝ•', u'ๅฐ่‘—็จฑ': u'ๅฏน่‘—็งฐ', u'ๅฐ่‘—่€…': u'ๅฏน่‘—่€…', u'ๅฐ่‘—่ฟฐ': u'ๅฏน่‘—่ฟฐ', u'ๅฐ‹่‘—': u'ๅฏป็€', u'ๅฐ‹่‘—ๆ›ธ': u'ๅฏป่‘—ไนฆ', u'ๅฐ‹่‘—ไฝœ': u'ๅฏป่‘—ไฝœ', u'ๅฐ‹่‘—ๅ': u'ๅฏป่‘—ๅ', u'ๅฐ‹่‘—้Œ„': u'ๅฏป่‘—ๅฝ•', u'ๅฐ‹่‘—็จฑ': u'ๅฏป่‘—็งฐ', u'ๅฐ‹่‘—่€…': u'ๅฏป่‘—่€…', u'ๅฐ‹่‘—่ฟฐ': u'ๅฏป่‘—่ฟฐ', u'ๅฐ‡่ปๆŠฝ่ปŠ': u'ๅฐ†ๅ†›ๆŠฝ่ปŠ', u'ๅฐผไนพ้™€': u'ๅฐผไนพ้™€', u'ๅฑ•่‘—': u'ๅฑ•็€', u'ๅฑ•่‘—ๆ›ธ': u'ๅฑ•่‘—ไนฆ', u'ๅฑ•่‘—ไฝœ': u'ๅฑ•่‘—ไฝœ', u'ๅฑ•่‘—ๅ': u'ๅฑ•่‘—ๅ', u'ๅฑ•่‘—้Œ„': u'ๅฑ•่‘—ๅฝ•', u'ๅฑ•่‘—็จฑ': u'ๅฑ•่‘—็งฐ', u'ๅฑ•่‘—่€…': u'ๅฑ•่‘—่€…', u'ๅฑ•่‘—่ฟฐ': u'ๅฑ•่‘—่ฟฐ', u'ๅทจ่‘—': u'ๅทจ่‘—', u'ๅธถ่‘—': u'ๅธฆ็€', u'ๅธถ่‘—ๆ›ธ': u'ๅธฆ่‘—ไนฆ', u'ๅธถ่‘—ไฝœ': u'ๅธฆ่‘—ไฝœ', u'ๅธถ่‘—ๅ': u'ๅธฆ่‘—ๅ', u'ๅธถ่‘—้Œ„': u'ๅธฆ่‘—ๅฝ•', u'ๅธถ่‘—็จฑ': u'ๅธฆ่‘—็งฐ', u'ๅธถ่‘—่€…': u'ๅธฆ่‘—่€…', u'ๅธถ่‘—่ฟฐ': u'ๅธฆ่‘—่ฟฐ', u'ๅนซ่‘—': u'ๅธฎ็€', u'ๅนซ่‘—ๆ›ธ': u'ๅธฎ่‘—ไนฆ', u'ๅนซ่‘—ไฝœ': u'ๅธฎ่‘—ไฝœ', u'ๅนซ่‘—ๅ': u'ๅธฎ่‘—ๅ', u'ๅนซ่‘—้Œ„': u'ๅธฎ่‘—ๅฝ•', u'ๅนซ่‘—็จฑ': u'ๅธฎ่‘—็งฐ', u'ๅนซ่‘—่€…': u'ๅธฎ่‘—่€…', u'ๅนซ่‘—่ฟฐ': u'ๅธฎ่‘—่ฟฐ', u'ไนพไนพๆทจๆทจ': u'ๅนฒๅนฒๅ‡€ๅ‡€', u'ไนพไนพ่„†่„†': u'ๅนฒๅนฒ่„†่„†', u'ไนพๆณ‰ๆฐด': u'ๅนฒๆณ‰ๆฐด', u'ๅนน่‘—': 
u'ๅนฒ็€', u'ไนˆไบŒไธ‰': u'ๅนบไบŒไธ‰', u'ๅนบไบŒไธ‰': u'ๅนบไบŒไธ‰', u'ไนˆๅ…ƒ': u'ๅนบๅ…ƒ', u'ๅนบๅ…ƒ': u'ๅนบๅ…ƒ', u'ๅนบ้ณณ': u'ๅนบๅ‡ค', u'ไนˆ้ณณ': u'ๅนบๅ‡ค', u'ไนˆๅŠ็พค': u'ๅนบๅŠ็พค', u'ๅนบๅŠ็พค': u'ๅนบๅŠ็พค', u'ๅนบๅป': u'ๅนบๅŽฎ', u'ๅนบๅŽฎ': u'ๅนบๅŽฎ', u'ไนˆๅ”': u'ๅนบๅ”', u'ๅนบๅ”': u'ๅนบๅ”', u'ไนˆๅชฝ': u'ๅนบๅฆˆ', u'ๅนบๅชฝ': u'ๅนบๅฆˆ', u'ไนˆๅฆน': u'ๅนบๅฆน', u'ๅนบๅฆน': u'ๅนบๅฆน', u'ไนˆๅง“': u'ๅนบๅง“', u'ๅนบๅง“': u'ๅนบๅง“', u'ไนˆๅงจ': u'ๅนบๅงจ', u'ๅนบๅงจ': u'ๅนบๅงจ', u'ไนˆๅจ˜': u'ๅนบๅจ˜', u'ไนˆๅญƒ': u'ๅนบๅจ˜', u'ๅนบๅจ˜': u'ๅนบๅจ˜', u'ๅนบๅญƒ': u'ๅนบๅจ˜', u'ๅนบๅฐ': u'ๅนบๅฐ', u'ไนˆๅฐ': u'ๅนบๅฐ', u'ๅนบๆฐ': u'ๅนบๆฐ', u'ไนˆๆฐ': u'ๅนบๆฐ', u'ไนˆ็ˆธ': u'ๅนบ็ˆธ', u'ๅนบ็ˆธ': u'ๅนบ็ˆธ', u'ๅนบ็ˆน': u'ๅนบ็ˆน', u'ไนˆ็ˆน': u'ๅนบ็ˆน', u'ไนˆ็ฏ‡': u'ๅนบ็ฏ‡', u'ๅนบ็ฏ‡': u'ๅนบ็ฏ‡', u'ไนˆ่ˆ…': u'ๅนบ่ˆ…', u'ๅนบ่ˆ…': u'ๅนบ่ˆ…', u'ไนˆ่›พๅญ': u'ๅนบ่›พๅญ', u'ๅนบ่›พๅญ': u'ๅนบ่›พๅญ', u'ไนˆ่ฌ™': u'ๅนบ่ฐฆ', u'ๅนบ่ฌ™': u'ๅนบ่ฐฆ', u'ๅนบ้บฝ': u'ๅนบ้บฝ', u'ไนˆ้บผ': u'ๅนบ้บฝ', u'ๅนบ้บฝๅฐไธ‘': u'ๅนบ้บฝๅฐไธ‘', u'ไนˆ้บผๅฐไธ‘': u'ๅนบ้บฝๅฐไธ‘', u'ๅบ‡่ญท่‘—': u'ๅบ‡ๆŠค็€', u'ๆ‡‰่‘—': u'ๅบ”็€', u'ๆ‡‰่‘—ๆ›ธ': u'ๅบ”่‘—ไนฆ', u'ๆ‡‰่‘—ไฝœ': u'ๅบ”่‘—ไฝœ', u'ๆ‡‰่‘—ๅ': u'ๅบ”่‘—ๅ', u'ๆ‡‰่‘—้Œ„': u'ๅบ”่‘—ๅฝ•', u'ๆ‡‰่‘—็จฑ': u'ๅบ”่‘—็งฐ', u'ๆ‡‰่‘—่€…': u'ๅบ”่‘—่€…', u'ๆ‡‰่‘—่ฟฐ': u'ๅบ”่‘—่ฟฐ', u'ๅบทไนพ': u'ๅบทไนพ', u'ๅบท่‘—': u'ๅบท็€', u'ๅบท่‘—ๆ›ธ': u'ๅบท่‘—ไนฆ', u'ๅบท่‘—ไฝœ': u'ๅบท่‘—ไฝœ', u'ๅบท่‘—ๅ': u'ๅบท่‘—ๅ', u'ๅบท่‘—้Œ„': u'ๅบท่‘—ๅฝ•', u'ๅบท่‘—็จฑ': u'ๅบท่‘—็งฐ', u'ๅบท่‘—่€…': u'ๅบท่‘—่€…', u'ๅบท่‘—่ฟฐ': u'ๅบท่‘—่ฟฐ', u'้–‹่‘—': u'ๅผ€็€', u'้–‹่‘—ๆ›ธ': u'ๅผ€่‘—ไนฆ', u'้–‹่‘—ไฝœ': u'ๅผ€่‘—ไฝœ', u'้–‹่‘—ๅ': u'ๅผ€่‘—ๅ', u'้–‹่‘—้Œ„': u'ๅผ€่‘—ๅฝ•', u'้–‹่‘—็จฑ': u'ๅผ€่‘—็งฐ', u'้–‹่‘—่€…': u'ๅผ€่‘—่€…', u'้–‹่‘—่ฟฐ': u'ๅผ€่‘—่ฟฐ', u'ๅผตๆณ•ไนพ': u'ๅผ ๆณ•ไนพ', u'ๅผ ๆณ•ไนพ': u'ๅผ ๆณ•ไนพ', u'็•ถ่‘—': u'ๅฝ“็€', u'็•ถ่‘—ๆ›ธ': u'ๅฝ“่‘—ไนฆ', u'็•ถ่‘—ไฝœ': u'ๅฝ“่‘—ไฝœ', u'็•ถ่‘—ๅ': u'ๅฝ“่‘—ๅ', u'็•ถ่‘—้Œ„': u'ๅฝ“่‘—ๅฝ•', u'็•ถ่‘—็จฑ': u'ๅฝ“่‘—็งฐ', u'็•ถ่‘—่€…': u'ๅฝ“่‘—่€…', 
u'็•ถ่‘—่ฟฐ': u'ๅฝ“่‘—่ฟฐ', u'ๅฝฐๆ˜Ž่ผƒ่‘—': u'ๅฝฐๆ˜Ž่พƒ่‘—', u'ๅพ…่‘—': u'ๅพ…็€', u'ๅพ…่‘—ๆ›ธ': u'ๅพ…่‘—ไนฆ', u'ๅพ…่‘—ไฝœ': u'ๅพ…่‘—ไฝœ', u'ๅพ…่‘—ๅ': u'ๅพ…่‘—ๅ', u'ๅพ…่‘—้Œ„': u'ๅพ…่‘—ๅฝ•', u'ๅพ…่‘—็จฑ': u'ๅพ…่‘—็งฐ', u'ๅพ…่‘—่€…': u'ๅพ…่‘—่€…', u'ๅพ…่‘—่ฟฐ': u'ๅพ…่‘—่ฟฐ', u'ๅพ—่‘—': u'ๅพ—็€', u'ๅพ—่‘—ๆ›ธ': u'ๅพ—่‘—ไนฆ', u'ๅพ—่‘—ไฝœ': u'ๅพ—่‘—ไฝœ', u'ๅพ—่‘—ๅ': u'ๅพ—่‘—ๅ', u'ๅพ—่‘—้Œ„': u'ๅพ—่‘—ๅฝ•', u'ๅพ—่‘—็จฑ': u'ๅพ—่‘—็งฐ', u'ๅพ—่‘—่€…': u'ๅพ—่‘—่€…', u'ๅพ—่‘—่ฟฐ': u'ๅพ—่‘—่ฟฐ', u'ๅพช่‘—': u'ๅพช็€', u'ๅพช่‘—ๆ›ธ': u'ๅพช่‘—ไนฆ', u'ๅพช่‘—ไฝœ': u'ๅพช่‘—ไฝœ', u'ๅพช่‘—ๅ': u'ๅพช่‘—ๅ', u'ๅพช่‘—้Œ„': u'ๅพช่‘—ๅฝ•', u'ๅพช่‘—็จฑ': u'ๅพช่‘—็งฐ', u'ๅพช่‘—่€…': u'ๅพช่‘—่€…', u'ๅพช่‘—่ฟฐ': u'ๅพช่‘—่ฟฐ', u'ๅฟƒ่‘—': u'ๅฟƒ็€', u'ๅฟƒ่‘—ๆ›ธ': u'ๅฟƒ่‘—ไนฆ', u'ๅฟƒ่‘—ไฝœ': u'ๅฟƒ่‘—ไฝœ', u'ๅฟƒ่‘—ๅ': u'ๅฟƒ่‘—ๅ', u'ๅฟƒ่‘—้Œ„': u'ๅฟƒ่‘—ๅฝ•', u'ๅฟƒ่‘—็จฑ': u'ๅฟƒ่‘—็งฐ', u'ๅฟƒ่‘—่€…': u'ๅฟƒ่‘—่€…', u'ๅฟƒ่‘—่ฟฐ': u'ๅฟƒ่‘—่ฟฐ', u'ๅฟ่‘—': u'ๅฟ็€', u'ๅฟ่‘—ๆ›ธ': u'ๅฟ่‘—ไนฆ', u'ๅฟ่‘—ไฝœ': u'ๅฟ่‘—ไฝœ', u'ๅฟ่‘—ๅ': u'ๅฟ่‘—ๅ', u'ๅฟ่‘—้Œ„': u'ๅฟ่‘—ๅฝ•', u'ๅฟ่‘—็จฑ': u'ๅฟ่‘—็งฐ', u'ๅฟ่‘—่€…': u'ๅฟ่‘—่€…', u'ๅฟ่‘—่ฟฐ': u'ๅฟ่‘—่ฟฐ', u'ๅฟ—่‘—': u'ๅฟ—็€', u'ๅฟ—่‘—ๆ›ธ': u'ๅฟ—่‘—ไนฆ', u'ๅฟ—่‘—ไฝœ': u'ๅฟ—่‘—ไฝœ', u'ๅฟ—่‘—ๅ': u'ๅฟ—่‘—ๅ', u'ๅฟ—่‘—้Œ„': u'ๅฟ—่‘—ๅฝ•', u'ๅฟ—่‘—็จฑ': u'ๅฟ—่‘—็งฐ', u'ๅฟ—่‘—่€…': u'ๅฟ—่‘—่€…', u'ๅฟ—่‘—่ฟฐ': u'ๅฟ—่‘—่ฟฐ', u'ๅฟ™่‘—': u'ๅฟ™็€', u'ๅฟ™่‘—ๆ›ธ': u'ๅฟ™่‘—ไนฆ', u'ๅฟ™่‘—ไฝœ': u'ๅฟ™่‘—ไฝœ', u'ๅฟ™่‘—ๅ': u'ๅฟ™่‘—ๅ', u'ๅฟ™่‘—้Œ„': u'ๅฟ™่‘—ๅฝ•', u'ๅฟ™่‘—็จฑ': u'ๅฟ™่‘—็งฐ', u'ๅฟ™่‘—่€…': u'ๅฟ™่‘—่€…', u'ๅฟ™่‘—่ฟฐ': u'ๅฟ™่‘—่ฟฐ', u'ๆ‡ท่‘—': u'ๆ€€็€', u'ๆ‡ท่‘—ๆ›ธ': u'ๆ€€่‘—ไนฆ', u'ๆ‡ท่‘—ไฝœ': u'ๆ€€่‘—ไฝœ', u'ๆ‡ท่‘—ๅ': u'ๆ€€่‘—ๅ', u'ๆ‡ท่‘—้Œ„': u'ๆ€€่‘—ๅฝ•', u'ๆ‡ท่‘—็จฑ': u'ๆ€€่‘—็งฐ', u'ๆ‡ท่‘—่€…': u'ๆ€€่‘—่€…', u'ๆ‡ท่‘—่ฟฐ': u'ๆ€€่‘—่ฟฐ', u'ๆ€ฅ่‘—': u'ๆ€ฅ็€', u'ๆ€ฅ่‘—ๆ›ธ': u'ๆ€ฅ่‘—ไนฆ', u'ๆ€ฅ่‘—ไฝœ': u'ๆ€ฅ่‘—ไฝœ', u'ๆ€ฅ่‘—ๅ': u'ๆ€ฅ่‘—ๅ', u'ๆ€ฅ่‘—้Œ„': u'ๆ€ฅ่‘—ๅฝ•', u'ๆ€ฅ่‘—็จฑ': u'ๆ€ฅ่‘—็งฐ', u'ๆ€ฅ่‘—่€…': u'ๆ€ฅ่‘—่€…', 
u'ๆ€ฅ่‘—่ฟฐ': u'ๆ€ฅ่‘—่ฟฐ', u'ๆ€ง่‘—': u'ๆ€ง็€', u'ๆ€ง่‘—ๆ›ธ': u'ๆ€ง่‘—ไนฆ', u'ๆ€ง่‘—ไฝœ': u'ๆ€ง่‘—ไฝœ', u'ๆ€ง่‘—ๅ': u'ๆ€ง่‘—ๅ', u'ๆ€ง่‘—้Œ„': u'ๆ€ง่‘—ๅฝ•', u'ๆ€ง่‘—็จฑ': u'ๆ€ง่‘—็งฐ', u'ๆ€ง่‘—่€…': u'ๆ€ง่‘—่€…', u'ๆ€ง่‘—่ฟฐ': u'ๆ€ง่‘—่ฟฐ', u'ๆˆ€่‘—': u'ๆ‹็€', u'ๆˆ€่‘—ๆ›ธ': u'ๆ‹่‘—ไนฆ', u'ๆˆ€่‘—ไฝœ': u'ๆ‹่‘—ไฝœ', u'ๆˆ€่‘—ๅ': u'ๆ‹่‘—ๅ', u'ๆˆ€่‘—้Œ„': u'ๆ‹่‘—ๅฝ•', u'ๆˆ€่‘—็จฑ': u'ๆ‹่‘—็งฐ', u'ๆˆ€่‘—่€…': u'ๆ‹่‘—่€…', u'ๆˆ€่‘—่ฟฐ': u'ๆ‹่‘—่ฟฐ', u'ๆฉๅจไธฆ่‘—': u'ๆฉๅจๅนถ่‘—', u'ๆ‚ ่‘—': u'ๆ‚ ็€', u'ๆ‚ ่‘—ๆ›ธ': u'ๆ‚ ่‘—ไนฆ', u'ๆ‚ ่‘—ไฝœ': u'ๆ‚ ่‘—ไฝœ', u'ๆ‚ ่‘—ๅ': u'ๆ‚ ่‘—ๅ', u'ๆ‚ ่‘—้Œ„': u'ๆ‚ ่‘—ๅฝ•', u'ๆ‚ ่‘—็จฑ': u'ๆ‚ ่‘—็งฐ', u'ๆ‚ ่‘—่€…': u'ๆ‚ ่‘—่€…', u'ๆ‚ ่‘—่ฟฐ': u'ๆ‚ ่‘—่ฟฐ', u'ๆ…ฃ่‘—': u'ๆƒฏ็€', u'ๆ…ฃ่‘—ๆ›ธ': u'ๆƒฏ่‘—ไนฆ', u'ๆ…ฃ่‘—ไฝœ': u'ๆƒฏ่‘—ไฝœ', u'ๆ…ฃ่‘—ๅ': u'ๆƒฏ่‘—ๅ', u'ๆ…ฃ่‘—้Œ„': u'ๆƒฏ่‘—ๅฝ•', u'ๆ…ฃ่‘—็จฑ': u'ๆƒฏ่‘—็งฐ', u'ๆ…ฃ่‘—่€…': u'ๆƒฏ่‘—่€…', u'ๆ…ฃ่‘—่ฟฐ': u'ๆƒฏ่‘—่ฟฐ', u'ๆƒณ่‘—': u'ๆƒณ็€', u'ๆƒณ่‘—ๆ›ธ': u'ๆƒณ่‘—ไนฆ', u'ๆƒณ่‘—ไฝœ': u'ๆƒณ่‘—ไฝœ', u'ๆƒณ่‘—ๅ': u'ๆƒณ่‘—ๅ', u'ๆƒณ่‘—้Œ„': u'ๆƒณ่‘—ๅฝ•', u'ๆƒณ่‘—็จฑ': u'ๆƒณ่‘—็งฐ', u'ๆƒณ่‘—่€…': u'ๆƒณ่‘—่€…', u'ๆƒณ่‘—่ฟฐ': u'ๆƒณ่‘—่ฟฐ', u'ๆˆฐ่‘—': u'ๆˆ˜็€', u'ๆˆฐ่‘—ๆ›ธ': u'ๆˆ˜่‘—ไนฆ', u'ๆˆฐ่‘—ไฝœ': u'ๆˆ˜่‘—ไฝœ', u'ๆˆฐ่‘—ๅ': u'ๆˆ˜่‘—ๅ', u'ๆˆฐ่‘—้Œ„': u'ๆˆ˜่‘—ๅฝ•', u'ๆˆฐ่‘—็จฑ': u'ๆˆ˜่‘—็งฐ', u'ๆˆฐ่‘—่€…': u'ๆˆ˜่‘—่€…', u'ๆˆฐ่‘—่ฟฐ': u'ๆˆ˜่‘—่ฟฐ', u'ๆˆด่‘—': u'ๆˆด็€', u'ๆˆด่‘—ๆ›ธ': u'ๆˆด่‘—ไนฆ', u'ๆˆด่‘—ไฝœ': u'ๆˆด่‘—ไฝœ', u'ๆˆด่‘—ๅ': u'ๆˆด่‘—ๅ', u'ๆˆด่‘—้Œ„': u'ๆˆด่‘—ๅฝ•', u'ๆˆด่‘—็จฑ': u'ๆˆด่‘—็งฐ', u'ๆˆด่‘—่€…': u'ๆˆด่‘—่€…', u'ๆˆด่‘—่ฟฐ': u'ๆˆด่‘—่ฟฐ', u'ๆ‰Ž่‘—': u'ๆ‰Ž็€', u'ๆ‰Ž่‘—ๆ›ธ': u'ๆ‰Ž่‘—ไนฆ', u'ๆ‰Ž่‘—ไฝœ': u'ๆ‰Ž่‘—ไฝœ', u'ๆ‰Ž่‘—ๅ': u'ๆ‰Ž่‘—ๅ', u'ๆ‰Ž่‘—้Œ„': u'ๆ‰Ž่‘—ๅฝ•', u'ๆ‰Ž่‘—็จฑ': u'ๆ‰Ž่‘—็งฐ', u'ๆ‰Ž่‘—่€…': u'ๆ‰Ž่‘—่€…', u'ๆ‰Ž่‘—่ฟฐ': u'ๆ‰Ž่‘—่ฟฐ', u'ๆ‰“่‘—': u'ๆ‰“็€', u'ๆ‰“่‘—ๆ›ธ': u'ๆ‰“่‘—ไนฆ', u'ๆ‰“่‘—ไฝœ': u'ๆ‰“่‘—ไฝœ', u'ๆ‰“่‘—ๅ': u'ๆ‰“่‘—ๅ', u'ๆ‰“่‘—้Œ„': u'ๆ‰“่‘—ๅฝ•', u'ๆ‰“่‘—็จฑ': u'ๆ‰“่‘—็งฐ', u'ๆ‰“่‘—่€…': u'ๆ‰“่‘—่€…', 
u'ๆ‰“่‘—่ฟฐ': u'ๆ‰“่‘—่ฟฐ', u'ๆ‰›่‘—': u'ๆ‰›็€', u'ๆ‰›่‘—ๆ›ธ': u'ๆ‰›่‘—ไนฆ', u'ๆ‰›่‘—ไฝœ': u'ๆ‰›่‘—ไฝœ', u'ๆ‰›่‘—ๅ': u'ๆ‰›่‘—ๅ', u'ๆ‰›่‘—้Œ„': u'ๆ‰›่‘—ๅฝ•', u'ๆ‰›่‘—็จฑ': u'ๆ‰›่‘—็งฐ', u'ๆ‰›่‘—่€…': u'ๆ‰›่‘—่€…', u'ๆ‰›่‘—่ฟฐ': u'ๆ‰›่‘—่ฟฐ', u'ๅŸท่‘—': u'ๆ‰ง่‘—', u'ๆ‰พไธ่‘—': u'ๆ‰พไธ็€', u'ๆ‰พๅพ—่‘—': u'ๆ‰พๅพ—็€', u'ๆŠ“่‘—': u'ๆŠ“็€', u'ๆŠ“่‘—ไฝœ': u'ๆŠ“่‘—ไฝœ', u'ๆŠ“่‘—ๅ': u'ๆŠ“่‘—ๅ', u'ๆŠ“่‘—้Œ„': u'ๆŠ“่‘—ๅฝ•', u'ๆŠ“่‘—็จฑ': u'ๆŠ“่‘—็งฐ', u'ๆŠ“่‘—่€…': u'ๆŠ“่‘—่€…', u'ๆŠ“่‘—่ฟฐ': u'ๆŠ“่‘—่ฟฐ', u'่ญท่‘—': u'ๆŠค็€', u'่ญท่‘—ๆ›ธ': u'ๆŠค่‘—ไนฆ', u'่ญท่‘—ไฝœ': u'ๆŠค่‘—ไฝœ', u'่ญท่‘—ๅ': u'ๆŠค่‘—ๅ', u'่ญท่‘—้Œ„': u'ๆŠค่‘—ๅฝ•', u'่ญท่‘—็จฑ': u'ๆŠค่‘—็งฐ', u'่ญท่‘—่€…': u'ๆŠค่‘—่€…', u'่ญท่‘—่ฟฐ': u'ๆŠค่‘—่ฟฐ', u'ๆŠซ่‘—': u'ๆŠซ็€', u'ๆŠซ่‘—ๆ›ธ': u'ๆŠซ่‘—ไนฆ', u'ๆŠซ่‘—ไฝœ': u'ๆŠซ่‘—ไฝœ', u'ๆŠซ่‘—ๅ': u'ๆŠซ่‘—ๅ', u'ๆŠซ่‘—้Œ„': u'ๆŠซ่‘—ๅฝ•', u'ๆŠซ่‘—็จฑ': u'ๆŠซ่‘—็งฐ', u'ๆŠซ่‘—่€…': u'ๆŠซ่‘—่€…', u'ๆŠซ่‘—่ฟฐ': u'ๆŠซ่‘—่ฟฐ', u'ๆŠฌ่‘—': u'ๆŠฌ็€', u'ๆŠฌ่‘—ไฝœ': u'ๆŠฌ่‘—ไฝœ', u'ๆŠฌ่‘—ๅ': u'ๆŠฌ่‘—ๅ', u'ๆŠฌ่‘—้Œ„': u'ๆŠฌ่‘—ๅฝ•', u'ๆŠฌ่‘—็จฑ': u'ๆŠฌ่‘—็งฐ', u'ๆŠฌ่‘—่€…': u'ๆŠฌ่‘—่€…', u'ๆŠฌ่‘—่ฟฐ': u'ๆŠฌ่‘—่ฟฐ', u'ๆŠฑ่‘—': u'ๆŠฑ็€', u'ๆŠฑ่‘—ไฝœ': u'ๆŠฑ่‘—ไฝœ', u'ๆŠฑ่‘—ๅ': u'ๆŠฑ่‘—ๅ', u'ๆŠฑ่‘—้Œ„': u'ๆŠฑ่‘—ๅฝ•', u'ๆŠฑ่‘—็จฑ': u'ๆŠฑ่‘—็งฐ', u'ๆŠฑ่‘—่€…': u'ๆŠฑ่‘—่€…', u'ๆŠฑ่‘—่ฟฐ': u'ๆŠฑ่‘—่ฟฐ', u'ๆ‹‰่‘—': u'ๆ‹‰็€', u'ๆ‹‰่‘—ๆ›ธ': u'ๆ‹‰่‘—ไนฆ', u'ๆ‹‰่‘—ไฝœ': u'ๆ‹‰่‘—ไฝœ', u'ๆ‹‰่‘—ๅ': u'ๆ‹‰่‘—ๅ', u'ๆ‹‰่‘—้Œ„': u'ๆ‹‰่‘—ๅฝ•', u'ๆ‹‰่‘—็จฑ': u'ๆ‹‰่‘—็งฐ', u'ๆ‹‰่‘—่€…': u'ๆ‹‰่‘—่€…', u'ๆ‹‰่‘—่ฟฐ': u'ๆ‹‰่‘—่ฟฐ', u'ๆ‹‰้Š': u'ๆ‹‰้“พ', u'ๆ‹Ž่‘—': u'ๆ‹Ž็€', u'ๆ‹Ž่‘—ไฝœ': u'ๆ‹Ž่‘—ไฝœ', u'ๆ‹Ž่‘—ๅ': u'ๆ‹Ž่‘—ๅ', u'ๆ‹Ž่‘—้Œ„': u'ๆ‹Ž่‘—ๅฝ•', u'ๆ‹Ž่‘—็จฑ': u'ๆ‹Ž่‘—็งฐ', u'ๆ‹Ž่‘—่€…': u'ๆ‹Ž่‘—่€…', u'ๆ‹Ž่‘—่ฟฐ': u'ๆ‹Ž่‘—่ฟฐ', u'ๆ‹–่‘—': u'ๆ‹–็€', u'ๆ‹–่‘—ไฝœ': u'ๆ‹–่‘—ไฝœ', u'ๆ‹–่‘—ๅ': u'ๆ‹–่‘—ๅ', u'ๆ‹–่‘—้Œ„': u'ๆ‹–่‘—ๅฝ•', u'ๆ‹–่‘—็จฑ': u'ๆ‹–่‘—็งฐ', u'ๆ‹–่‘—่€…': u'ๆ‹–่‘—่€…', u'ๆ‹–่‘—่ฟฐ': u'ๆ‹–่‘—่ฟฐ', u'ๆ‹™่‘—': u'ๆ‹™่‘—', u'ๆ‹šๅ‘ฝ': u'ๆ‹šๅ‘ฝ', 
u'ๆ‹šๆ': u'ๆ‹šๆ', u'ๆ‹šๆญป': u'ๆ‹šๆญป', u'ๆ‹ผ่‘—': u'ๆ‹ผ็€', u'ๆ‹ผ่‘—ไฝœ': u'ๆ‹ผ่‘—ไฝœ', u'ๆ‹ผ่‘—ๅ': u'ๆ‹ผ่‘—ๅ', u'ๆ‹ผ่‘—้Œ„': u'ๆ‹ผ่‘—ๅฝ•', u'ๆ‹ผ่‘—็จฑ': u'ๆ‹ผ่‘—็งฐ', u'ๆ‹ผ่‘—่€…': u'ๆ‹ผ่‘—่€…', u'ๆ‹ผ่‘—่ฟฐ': u'ๆ‹ผ่‘—่ฟฐ', u'ๆ‹ฟ่‘—': u'ๆ‹ฟ็€', u'ๆ‹ฟ่‘—ไฝœ': u'ๆ‹ฟ่‘—ไฝœ', u'ๆ‹ฟ่‘—ๅ': u'ๆ‹ฟ่‘—ๅ', u'ๆ‹ฟ่‘—้Œ„': u'ๆ‹ฟ่‘—ๅฝ•', u'ๆ‹ฟ่‘—็จฑ': u'ๆ‹ฟ่‘—็งฐ', u'ๆ‹ฟ่‘—่€…': u'ๆ‹ฟ่‘—่€…', u'ๆ‹ฟ่‘—่ฟฐ': u'ๆ‹ฟ่‘—่ฟฐ', u'ๆŒ่‘—': u'ๆŒ็€', u'ๆŒ่‘—ไฝœ': u'ๆŒ่‘—ไฝœ', u'ๆŒ่‘—ๅ': u'ๆŒ่‘—ๅ', u'ๆŒ่‘—้Œ„': u'ๆŒ่‘—ๅฝ•', u'ๆŒ่‘—็จฑ': u'ๆŒ่‘—็งฐ', u'ๆŒ่‘—่€…': u'ๆŒ่‘—่€…', u'ๆŒ่‘—่ฟฐ': u'ๆŒ่‘—่ฟฐ', u'ๆŒ‘่‘—': u'ๆŒ‘็€', u'ๆŒ‘่‘—ไฝœ': u'ๆŒ‘่‘—ไฝœ', u'ๆŒ‘่‘—ๅ': u'ๆŒ‘่‘—ๅ', u'ๆŒ‘่‘—้Œ„': u'ๆŒ‘่‘—ๅฝ•', u'ๆŒ‘่‘—็จฑ': u'ๆŒ‘่‘—็งฐ', u'ๆŒ‘่‘—่€…': u'ๆŒ‘่‘—่€…', u'ๆŒ‘่‘—่ฟฐ': u'ๆŒ‘่‘—่ฟฐ', u'ๆ“‹่‘—': u'ๆŒก็€', u'ๆ“‹่‘—ไฝœ': u'ๆŒก่‘—ไฝœ', u'ๆ“‹่‘—ๅ': u'ๆŒก่‘—ๅ', u'ๆ“‹่‘—้Œ„': u'ๆŒก่‘—ๅฝ•', u'ๆ“‹่‘—็จฑ': u'ๆŒก่‘—็งฐ', u'ๆ“‹่‘—่€…': u'ๆŒก่‘—่€…', u'ๆ“‹่‘—่ฟฐ': u'ๆŒก่‘—่ฟฐ', u'ๆŽ™่‘—': u'ๆŒฃ็€', u'ๆŽ™่‘—ๆ›ธ': u'ๆŒฃ่‘—ไนฆ', u'ๆŽ™่‘—ไฝœ': u'ๆŒฃ่‘—ไฝœ', u'ๆŽ™่‘—ๅ': u'ๆŒฃ่‘—ๅ', u'ๆŽ™่‘—้Œ„': u'ๆŒฃ่‘—ๅฝ•', u'ๆŽ™่‘—็จฑ': u'ๆŒฃ่‘—็งฐ', u'ๆŽ™่‘—่€…': u'ๆŒฃ่‘—่€…', u'ๆŽ™่‘—่ฟฐ': u'ๆŒฃ่‘—่ฟฐ', u'ๆฎ่‘—': u'ๆŒฅ็€', u'ๆฎ่‘—ไฝœ': u'ๆŒฅ่‘—ไฝœ', u'ๆฎ่‘—ๅ': u'ๆŒฅ่‘—ๅ', u'ๆฎ่‘—้Œ„': u'ๆŒฅ่‘—ๅฝ•', u'ๆฎ่‘—็จฑ': u'ๆŒฅ่‘—็งฐ', u'ๆฎ่‘—่€…': u'ๆŒฅ่‘—่€…', u'ๆฎ่‘—่ฟฐ': u'ๆŒฅ่‘—่ฟฐ', u'ๆŒจ่‘—': u'ๆŒจ็€', u'ๆŒจ่‘—ไฝœ': u'ๆŒจ่‘—ไฝœ', u'ๆŒจ่‘—ๅ': u'ๆŒจ่‘—ๅ', u'ๆŒจ่‘—้Œ„': u'ๆŒจ่‘—ๅฝ•', u'ๆŒจ่‘—็จฑ': u'ๆŒจ่‘—็งฐ', u'ๆŒจ่‘—่€…': u'ๆŒจ่‘—่€…', u'ๆŒจ่‘—่ฟฐ': u'ๆŒจ่‘—่ฟฐ', u'ๆ†่‘—': u'ๆ†็€', u'ๆ†่‘—ไฝœ': u'ๆ†่‘—ไฝœ', u'ๆ†่‘—ๅ': u'ๆ†่‘—ๅ', u'ๆ†่‘—้Œ„': u'ๆ†่‘—ๅฝ•', u'ๆ†่‘—็จฑ': u'ๆ†่‘—็งฐ', u'ๆ†่‘—่€…': u'ๆ†่‘—่€…', u'ๆ†่‘—่ฟฐ': u'ๆ†่‘—่ฟฐ', u'ๆ“š่‘—': u'ๆฎ็€', u'ๆ“š่‘—ๆ›ธ': u'ๆฎ่‘—ไนฆ', u'ๆ“š่‘—ไฝœ': u'ๆฎ่‘—ไฝœ', u'ๆ“š่‘—ๅ': u'ๆฎ่‘—ๅ', u'ๆ“š่‘—้Œ„': u'ๆฎ่‘—ๅฝ•', u'ๆ“š่‘—็จฑ': u'ๆฎ่‘—็งฐ', u'ๆ“š่‘—่€…': u'ๆฎ่‘—่€…', u'ๆ“š่‘—่ฟฐ': u'ๆฎ่‘—่ฟฐ', 
u'ๆŽ–่‘—': u'ๆŽ–็€', u'ๆŽ–่‘—ไฝœ': u'ๆŽ–่‘—ไฝœ', u'ๆŽ–่‘—ๅ': u'ๆŽ–่‘—ๅ', u'ๆŽ–่‘—้Œ„': u'ๆŽ–่‘—ๅฝ•', u'ๆŽ–่‘—็จฑ': u'ๆŽ–่‘—็งฐ', u'ๆŽ–่‘—่€…': u'ๆŽ–่‘—่€…', u'ๆŽ–่‘—่ฟฐ': u'ๆŽ–่‘—่ฟฐ', u'ๆŽฅ่‘—': u'ๆŽฅ็€', u'ๆŽฅ่‘—ไฝœ': u'ๆŽฅ่‘—ไฝœ', u'ๆŽฅ่‘—ๅ': u'ๆŽฅ่‘—ๅ', u'ๆŽฅ่‘—้Œ„': u'ๆŽฅ่‘—ๅฝ•', u'ๆŽฅ่‘—็จฑ': u'ๆŽฅ่‘—็งฐ', u'ๆŽฅ่‘—่€…': u'ๆŽฅ่‘—่€…', u'ๆŽฅ่‘—่ฟฐ': u'ๆŽฅ่‘—่ฟฐ', u'ๆ‰่‘—': u'ๆ‰็€', u'ๆ‰่‘—ๆ›ธ': u'ๆ‰่‘—ไนฆ', u'ๆ‰่‘—ไฝœ': u'ๆ‰่‘—ไฝœ', u'ๆ‰่‘—ๅ': u'ๆ‰่‘—ๅ', u'ๆ‰่‘—้Œ„': u'ๆ‰่‘—ๅฝ•', u'ๆ‰่‘—็จฑ': u'ๆ‰่‘—็งฐ', u'ๆ‰่‘—่€…': u'ๆ‰่‘—่€…', u'ๆ‰่‘—่ฟฐ': u'ๆ‰่‘—่ฟฐ', u'ๆ่‘—': u'ๆ็€', u'ๆ่‘—ไฝœ': u'ๆ่‘—ไฝœ', u'ๆ่‘—ๅ': u'ๆ่‘—ๅ', u'ๆ่‘—้Œ„': u'ๆ่‘—ๅฝ•', u'ๆ่‘—็จฑ': u'ๆ่‘—็งฐ', u'ๆ่‘—่€…': u'ๆ่‘—่€…', u'ๆ่‘—่ฟฐ': u'ๆ่‘—่ฟฐ', u'ๆ‘Ÿ่‘—': u'ๆ‚็€', u'ๆ‘Ÿ่‘—ไฝœ': u'ๆ‚่‘—ไฝœ', u'ๆ‘Ÿ่‘—ๅ': u'ๆ‚่‘—ๅ', u'ๆ‘Ÿ่‘—้Œ„': u'ๆ‚่‘—ๅฝ•', u'ๆ‘Ÿ่‘—็จฑ': u'ๆ‚่‘—็งฐ', u'ๆ‘Ÿ่‘—่€…': u'ๆ‚่‘—่€…', u'ๆ‘Ÿ่‘—่ฟฐ': u'ๆ‚่‘—่ฟฐ', u'ๆ“บ่‘—': u'ๆ‘†็€', u'ๆ“บ่‘—ไฝœ': u'ๆ‘†่‘—ไฝœ', u'ๆ“บ่‘—ๅ': u'ๆ‘†่‘—ๅ', u'ๆ“บ่‘—้Œ„': u'ๆ‘†่‘—ๅฝ•', u'ๆ“บ่‘—็จฑ': u'ๆ‘†่‘—็งฐ', u'ๆ“บ่‘—่€…': u'ๆ‘†่‘—่€…', u'ๆ“บ่‘—่ฟฐ': u'ๆ‘†่‘—่ฟฐ', u'ๆ’ฐ่‘—': u'ๆ’ฐ่‘—', u'ๆ’ผ่‘—': u'ๆ’ผ็€', u'ๆ’ผ่‘—ๆ›ธ': u'ๆ’ผ่‘—ไนฆ', u'ๆ’ผ่‘—ไฝœ': u'ๆ’ผ่‘—ไฝœ', u'ๆ’ผ่‘—ๅ': u'ๆ’ผ่‘—ๅ', u'ๆ’ผ่‘—้Œ„': u'ๆ’ผ่‘—ๅฝ•', u'ๆ’ผ่‘—็จฑ': u'ๆ’ผ่‘—็งฐ', u'ๆ’ผ่‘—่€…': u'ๆ’ผ่‘—่€…', u'ๆ’ผ่‘—่ฟฐ': u'ๆ’ผ่‘—่ฟฐ', u'ๆ•ž่‘—': u'ๆ•ž็€', u'ๆ•ž่‘—ไฝœ': u'ๆ•ž่‘—ไฝœ', u'ๆ•ž่‘—ๅ': u'ๆ•ž่‘—ๅ', u'ๆ•ž่‘—้Œ„': u'ๆ•ž่‘—ๅฝ•', u'ๆ•ž่‘—็จฑ': u'ๆ•ž่‘—็งฐ', u'ๆ•ž่‘—่€…': u'ๆ•ž่‘—่€…', u'ๆ•ž่‘—่ฟฐ': u'ๆ•ž่‘—่ฟฐ', u'ๆ•ธ่‘—': u'ๆ•ฐ็€', u'ๆ•ธ่‘—ไฝœ': u'ๆ•ฐ่‘—ไฝœ', u'ๆ•ธ่‘—ๅ': u'ๆ•ฐ่‘—ๅ', u'ๆ•ธ่‘—้Œ„': u'ๆ•ฐ่‘—ๅฝ•', u'ๆ•ธ่‘—็จฑ': u'ๆ•ฐ่‘—็งฐ', u'ๆ•ธ่‘—่€…': u'ๆ•ฐ่‘—่€…', u'ๆ•ธ่‘—่ฟฐ': u'ๆ•ฐ่‘—่ฟฐ', u'ๆ–—่‘—': u'ๆ–—็€', u'ๆ–—่‘—ๆ›ธ': u'ๆ–—่‘—ไนฆ', u'ๆ–—่‘—ไฝœ': u'ๆ–—่‘—ไฝœ', u'ๆ–—่‘—ๅ': u'ๆ–—่‘—ๅ', u'ๆ–—่‘—้Œ„': u'ๆ–—่‘—ๅฝ•', u'ๆ–—่‘—็จฑ': u'ๆ–—่‘—็งฐ', u'ๆ–—่‘—่€…': u'ๆ–—่‘—่€…', u'ๆ–—่‘—่ฟฐ': 
u'ๆ–—่‘—่ฟฐ', u'ๆ–ฅ่‘—': u'ๆ–ฅ็€', u'ๆ–ฅ่‘—ๆ›ธ': u'ๆ–ฅ่‘—ไนฆ', u'ๆ–ฅ่‘—ไฝœ': u'ๆ–ฅ่‘—ไฝœ', u'ๆ–ฅ่‘—ๅ': u'ๆ–ฅ่‘—ๅ', u'ๆ–ฅ่‘—้Œ„': u'ๆ–ฅ่‘—ๅฝ•', u'ๆ–ฅ่‘—็จฑ': u'ๆ–ฅ่‘—็งฐ', u'ๆ–ฅ่‘—่€…': u'ๆ–ฅ่‘—่€…', u'ๆ–ฅ่‘—่ฟฐ': u'ๆ–ฅ่‘—่ฟฐ', u'ๆ–ฐ่‘—': u'ๆ–ฐ่‘—', u'ๆ–ฐ่‘—้พ่™Ž้–€': u'ๆ–ฐ่‘—้พ™่™Ž้—จ', u'ๆ–ผไธ–ๆˆ': u'ๆ–ผไธ–ๆˆ', u'ๆ–ผไนŽ': u'ๆ–ผไนŽ', u'ๆ–ผไน™ไบŽๅŒ': u'ๆ–ผไน™ไบŽๅŒ', u'ๆ–ผไน™ๅฎ‡ๅŒ': u'ๆ–ผไน™ๅฎ‡ๅŒ', u'ๆ–ผไบŽๅŒ': u'ๆ–ผไบŽๅŒ', u'ๆ–ผๅ“ฒ': u'ๆ–ผๅ“ฒ', u'ๆ–ผๅคซ็ฝ—': u'ๆ–ผๅคซ็ฝ—', u'ๆ–ผๅคซ็พ…': u'ๆ–ผๅคซ็ฝ—', u'ๆ–ผๅง“': u'ๆ–ผๅง“', u'ๆ–ผๅฎ‡ๅŒ': u'ๆ–ผๅฎ‡ๅŒ', u'ๆ–ผๅด‡ๆ–‡': u'ๆ–ผๅด‡ๆ–‡', u'ๆ–ผๅฟ—่ณ€': u'ๆ–ผๅฟ—่ดบ', u'ๆ–ผๅฟ—่ดบ': u'ๆ–ผๅฟ—่ดบ', u'ๆ–ผๆˆฒ': u'ๆ–ผๆˆ', u'ๆ–ผๆขจ่ฏ': u'ๆ–ผๆขจๅŽ', u'ๆ–ผๆขจๅŽ': u'ๆ–ผๆขจๅŽ', u'ๆ–ผๆฐ': u'ๆ–ผๆฐ', u'ๆ–ผๆฝ›็ธฃ': u'ๆ–ผๆฝœๅŽฟ', u'ๆ–ผๆฝœๅŽฟ': u'ๆ–ผๆฝœๅŽฟ', u'ๆ–ผ็ฅฅ็މ': u'ๆ–ผ็ฅฅ็މ', u'ๆ–ผ่Ÿ': u'ๆ–ผ่Ÿ', u'ๆ–ผ่ณขๅพท': u'ๆ–ผ่ดคๅพท', u'ๆ–ผ้™ค้žฌ': u'ๆ–ผ้™ค้žฌ', u'ๆ—‹ไนพ่ฝฌๅค': u'ๆ—‹ไนพ่ฝฌๅค', u'ๆ—‹ไนพ่ฝ‰ๅค': u'ๆ—‹ไนพ่ฝฌๅค', u'ๆ› ่‹ฅ็™ผ็Ÿ‡': u'ๆ—ท่‹ฅๅ‘็Ÿ‡', u'ๆ˜‚่‘—': u'ๆ˜‚็€', u'ๆ˜‚่‘—ๆ›ธ': u'ๆ˜‚่‘—ไนฆ', u'ๆ˜‚่‘—ไฝœ': u'ๆ˜‚่‘—ไฝœ', u'ๆ˜‚่‘—ๅ': u'ๆ˜‚่‘—ๅ', u'ๆ˜‚่‘—้Œ„': u'ๆ˜‚่‘—ๅฝ•', u'ๆ˜‚่‘—็จฑ': u'ๆ˜‚่‘—็งฐ', u'ๆ˜‚่‘—่€…': u'ๆ˜‚่‘—่€…', u'ๆ˜‚่‘—่ฟฐ': u'ๆ˜‚่‘—่ฟฐ', u'ๆ˜“ยทไนพ': u'ๆ˜“ยทไนพ', u'ๆ˜“็ถ“ยทไนพ': u'ๆ˜“็ปยทไนพ', u'ๆ˜“็ปยทไนพ': u'ๆ˜“็ปยทไนพ', u'ๆ˜“็ถ“ไนพ': u'ๆ˜“็ปไนพ', u'ๆ˜“็ปไนพ': u'ๆ˜“็ปไนพ', u'ๆ˜ ่‘—': u'ๆ˜ ็€', u'ๆ˜ ่‘—ๆ›ธ': u'ๆ˜ ่‘—ไนฆ', u'ๆ˜ ่‘—ไฝœ': u'ๆ˜ ่‘—ไฝœ', u'ๆ˜ ่‘—ๅ': u'ๆ˜ ่‘—ๅ', u'ๆ˜ ่‘—้Œ„': u'ๆ˜ ่‘—ๅฝ•', u'ๆ˜ ่‘—็จฑ': u'ๆ˜ ่‘—็งฐ', u'ๆ˜ ่‘—่€…': u'ๆ˜ ่‘—่€…', u'ๆ˜ ่‘—่ฟฐ': u'ๆ˜ ่‘—่ฟฐ', u'ๆ˜ญ่‘—': u'ๆ˜ญ่‘—', u'้กฏ่‘—': u'ๆ˜พ่‘—', u'ๆ˜พ่‘—': u'ๆ˜พ่‘—', u'ๆ™ƒ่‘—': u'ๆ™ƒ็€', u'ๆ™ƒ่‘—ไฝœ': u'ๆ™ƒ่‘—ไฝœ', u'ๆ™ƒ่‘—ๅ': u'ๆ™ƒ่‘—ๅ', u'ๆ™ƒ่‘—้Œ„': u'ๆ™ƒ่‘—ๅฝ•', u'ๆ™ƒ่‘—็จฑ': u'ๆ™ƒ่‘—็งฐ', u'ๆ™ƒ่‘—่€…': u'ๆ™ƒ่‘—่€…', u'ๆ™ƒ่‘—่ฟฐ': u'ๆ™ƒ่‘—่ฟฐ', u'ๆš—่‘—': u'ๆš—็€', u'ๆš—่‘—ๆ›ธ': u'ๆš—่‘—ไนฆ', u'ๆš—่‘—ไฝœ': u'ๆš—่‘—ไฝœ', u'ๆš—่‘—ๅ': u'ๆš—่‘—ๅ', u'ๆš—่‘—้Œ„': u'ๆš—่‘—ๅฝ•', u'ๆš—่‘—็จฑ': 
u'ๆš—่‘—็งฐ', u'ๆš—่‘—่€…': u'ๆš—่‘—่€…', u'ๆš—่‘—่ฟฐ': u'ๆš—่‘—่ฟฐ', u'ๆœ‰่‘—': u'ๆœ‰็€', u'ๆœ‰่‘—ๆ›ธ': u'ๆœ‰่‘—ไนฆ', u'ๆœ‰่‘—ไฝœ': u'ๆœ‰่‘—ไฝœ', u'ๆœ‰่‘—ๅ': u'ๆœ‰่‘—ๅ', u'ๆœ‰่‘—้Œ„': u'ๆœ‰่‘—ๅฝ•', u'ๆœ‰่‘—็จฑ': u'ๆœ‰่‘—็งฐ', u'ๆœ‰่‘—่€…': u'ๆœ‰่‘—่€…', u'ๆœ‰่‘—่ฟฐ': u'ๆœ‰่‘—่ฟฐ', u'ๆœ›่‘—': u'ๆœ›็€', u'ๆœ›่‘—ไฝœ': u'ๆœ›่‘—ไฝœ', u'ๆœ›่‘—ๅ': u'ๆœ›่‘—ๅ', u'ๆœ›่‘—้Œ„': u'ๆœ›่‘—ๅฝ•', u'ๆœ›่‘—็จฑ': u'ๆœ›่‘—็งฐ', u'ๆœ›่‘—่€…': u'ๆœ›่‘—่€…', u'ๆœ›่‘—่ฟฐ': u'ๆœ›่‘—่ฟฐ', u'ๆœไนพๅค•ๆƒ•': u'ๆœไนพๅค•ๆƒ•', u'ๆœ่‘—': u'ๆœ็€', u'ๆœ่‘—ไฝœ': u'ๆœ่‘—ไฝœ', u'ๆœ่‘—ๅ': u'ๆœ่‘—ๅ', u'ๆœ่‘—้Œ„': u'ๆœ่‘—ๅฝ•', u'ๆœ่‘—็จฑ': u'ๆœ่‘—็งฐ', u'ๆœ่‘—่€…': u'ๆœ่‘—่€…', u'ๆœ่‘—่ฟฐ': u'ๆœ่‘—่ฟฐ', u'ๆœฌ่‘—': u'ๆœฌ็€', u'ๆœฌ่‘—ๆ›ธ': u'ๆœฌ่‘—ไนฆ', u'ๆœฌ่‘—ไฝœ': u'ๆœฌ่‘—ไฝœ', u'ๆœฌ่‘—ๅ': u'ๆœฌ่‘—ๅ', u'ๆœฌ่‘—้Œ„': u'ๆœฌ่‘—ๅฝ•', u'ๆœฌ่‘—็จฑ': u'ๆœฌ่‘—็งฐ', u'ๆœฌ่‘—่€…': u'ๆœฌ่‘—่€…', u'ๆœฌ่‘—่ฟฐ': u'ๆœฌ่‘—่ฟฐ', u'ๆœดๆ–ผๅฎ‡ๅŒ': u'ๆœดๆ–ผๅฎ‡ๅŒ', u'ๆฎบ่‘—': u'ๆ€็€', u'ๆฎบ่‘—ๆ›ธ': u'ๆ€่‘—ไนฆ', u'ๆฎบ่‘—ไฝœ': u'ๆ€่‘—ไฝœ', u'ๆฎบ่‘—ๅ': u'ๆ€่‘—ๅ', u'ๆฎบ่‘—้Œ„': u'ๆ€่‘—ๅฝ•', u'ๆฎบ่‘—็จฑ': u'ๆ€่‘—็งฐ', u'ๆฎบ่‘—่€…': u'ๆ€่‘—่€…', u'ๆฎบ่‘—่ฟฐ': u'ๆ€่‘—่ฟฐ', u'้›œ่‘—': u'ๆ‚็€', u'้›œ่‘—ๆ›ธ': u'ๆ‚่‘—ไนฆ', u'้›œ่‘—ไฝœ': u'ๆ‚่‘—ไฝœ', u'้›œ่‘—ๅ': u'ๆ‚่‘—ๅ', u'้›œ่‘—้Œ„': u'ๆ‚่‘—ๅฝ•', u'้›œ่‘—็จฑ': u'ๆ‚่‘—็งฐ', u'้›œ่‘—่€…': u'ๆ‚่‘—่€…', u'้›œ่‘—่ฟฐ': u'ๆ‚่‘—่ฟฐ', u'ๆŽไนพๅพท': u'ๆŽไนพๅพท', u'ๆŽไนพ้ †': u'ๆŽไนพ้กบ', u'ๆŽไนพ้กบ': u'ๆŽไนพ้กบ', u'ๆŽๆพค้‰…': u'ๆŽๆณฝ้’œ', u'ไพ†่‘—': u'ๆฅ็€', u'ไพ†่‘—ๆ›ธ': u'ๆฅ่‘—ไนฆ', u'ไพ†่‘—ไฝœ': u'ๆฅ่‘—ไฝœ', u'ไพ†่‘—ๅ': u'ๆฅ่‘—ๅ', u'ไพ†่‘—้Œ„': u'ๆฅ่‘—ๅฝ•', u'ไพ†่‘—็จฑ': u'ๆฅ่‘—็งฐ', u'ไพ†่‘—่€…': u'ๆฅ่‘—่€…', u'ไพ†่‘—่ฟฐ': u'ๆฅ่‘—่ฟฐ', u'ๆฅŠๅนบ': u'ๆจๅนบ', u'ๆž•่‘—': u'ๆž•็€', u'ๆž•่‘—ไฝœ': u'ๆž•่‘—ไฝœ', u'ๆž•่‘—ๅ': u'ๆž•่‘—ๅ', u'ๆž•่‘—้Œ„': u'ๆž•่‘—ๅฝ•', u'ๆž•่‘—็จฑ': u'ๆž•่‘—็งฐ', u'ๆž•่‘—่€…': u'ๆž•่‘—่€…', u'ๆž•่‘—่ฟฐ': u'ๆž•่‘—่ฟฐ', u'ๆŸณ่ฉ’ๅพต': u'ๆŸณ่ฏ’ๅพต', u'ๆŸณ่ฏ’ๅพต': u'ๆŸณ่ฏ’ๅพต', 
u'ๆจ™ๅฟ—่‘—': u'ๆ ‡ๅฟ—็€', u'ๆจ™่ชŒ่‘—': u'ๆ ‡ๅฟ—็€', u'ๅคข่‘—': u'ๆขฆ็€', u'ๅคข่‘—ๆ›ธ': u'ๆขฆ่‘—ไนฆ', u'ๅคข่‘—ไฝœ': u'ๆขฆ่‘—ไฝœ', u'ๅคข่‘—ๅ': u'ๆขฆ่‘—ๅ', u'ๅคข่‘—้Œ„': u'ๆขฆ่‘—ๅฝ•', u'ๅคข่‘—็จฑ': u'ๆขฆ่‘—็งฐ', u'ๅคข่‘—่€…': u'ๆขฆ่‘—่€…', u'ๅคข่‘—่ฟฐ': u'ๆขฆ่‘—่ฟฐ', u'ๆขณ่‘—': u'ๆขณ็€', u'ๆขณ่‘—ไฝœ': u'ๆขณ่‘—ไฝœ', u'ๆขณ่‘—ๅ': u'ๆขณ่‘—ๅ', u'ๆขณ่‘—้Œ„': u'ๆขณ่‘—ๅฝ•', u'ๆขณ่‘—็จฑ': u'ๆขณ่‘—็งฐ', u'ๆขณ่‘—่€…': u'ๆขณ่‘—่€…', u'ๆขณ่‘—่ฟฐ': u'ๆขณ่‘—่ฟฐ', u'ๆจŠๆ–ผๆœŸ': u'ๆจŠๆ–ผๆœŸ', u'ๆฐ†ๆฐŒ': u'ๆฐ†ๆฐŒ', u'ๆฑ‚่‘—': u'ๆฑ‚็€', u'ๆฑ‚่‘—ๆ›ธ': u'ๆฑ‚่‘—ไนฆ', u'ๆฑ‚่‘—ไฝœ': u'ๆฑ‚่‘—ไฝœ', u'ๆฑ‚่‘—ๅ': u'ๆฑ‚่‘—ๅ', u'ๆฑ‚่‘—้Œ„': u'ๆฑ‚่‘—ๅฝ•', u'ๆฑ‚่‘—็จฑ': u'ๆฑ‚่‘—็งฐ', u'ๆฑ‚่‘—่€…': u'ๆฑ‚่‘—่€…', u'ๆฑ‚่‘—่ฟฐ': u'ๆฑ‚่‘—่ฟฐ', u'ๆฒˆๆฒ’': u'ๆฒ‰ๆฒก', u'ๆฒ‰่‘—': u'ๆฒ‰็€', u'ๆฒˆ็ฉ': u'ๆฒ‰็งฏ', u'ๆฒˆ่ˆน': u'ๆฒ‰่ˆน', u'ๆฒ‰่‘—ๆ›ธ': u'ๆฒ‰่‘—ไนฆ', u'ๆฒ‰่‘—ไฝœ': u'ๆฒ‰่‘—ไฝœ', u'ๆฒ‰่‘—ๅ': u'ๆฒ‰่‘—ๅ', u'ๆฒ‰่‘—้Œ„': u'ๆฒ‰่‘—ๅฝ•', u'ๆฒ‰่‘—็จฑ': u'ๆฒ‰่‘—็งฐ', u'ๆฒ‰่‘—่€…': u'ๆฒ‰่‘—่€…', u'ๆฒ‰่‘—่ฟฐ': u'ๆฒ‰่‘—่ฟฐ', u'ๆฒˆ้ป˜': u'ๆฒ‰้ป˜', u'ๆฒฟ่‘—': u'ๆฒฟ็€', u'ๆฒฟ่‘—ๆ›ธ': u'ๆฒฟ่‘—ไนฆ', u'ๆฒฟ่‘—ไฝœ': u'ๆฒฟ่‘—ไฝœ', u'ๆฒฟ่‘—ๅ': u'ๆฒฟ่‘—ๅ', u'ๆฒฟ่‘—้Œ„': u'ๆฒฟ่‘—ๅฝ•', u'ๆฒฟ่‘—็จฑ': u'ๆฒฟ่‘—็งฐ', u'ๆฒฟ่‘—่€…': u'ๆฒฟ่‘—่€…', u'ๆฒฟ่‘—่ฟฐ': u'ๆฒฟ่‘—่ฟฐ', u'ๆฐพๆฟซ': u'ๆณ›ๆปฅ', u'ๆด—้Š': u'ๆด—็ปƒ', u'ๆดป่‘—': u'ๆดป็€', u'ๆดป่‘—ๆ›ธ': u'ๆดป่‘—ไนฆ', u'ๆดป่‘—ไฝœ': u'ๆดป่‘—ไฝœ', u'ๆดป่‘—ๅ': u'ๆดป่‘—ๅ', u'ๆดป่‘—้Œ„': u'ๆดป่‘—ๅฝ•', u'ๆดป่‘—็จฑ': u'ๆดป่‘—็งฐ', u'ๆดป่‘—่€…': u'ๆดป่‘—่€…', u'ๆดป่‘—่ฟฐ': u'ๆดป่‘—่ฟฐ', u'ๆต่‘—': u'ๆต็€', u'ๆต่‘—ๆ›ธ': u'ๆต่‘—ไนฆ', u'ๆต่‘—ไฝœ': u'ๆต่‘—ไฝœ', u'ๆต่‘—ๅ': u'ๆต่‘—ๅ', u'ๆต่‘—้Œ„': u'ๆต่‘—ๅฝ•', u'ๆต่‘—็จฑ': u'ๆต่‘—็งฐ', u'ๆต่‘—่€…': u'ๆต่‘—่€…', u'ๆต่‘—่ฟฐ': u'ๆต่‘—่ฟฐ', u'ๆต้œฒ่‘—': u'ๆต้œฒ็€', u'ๆตฎ่‘—': u'ๆตฎ็€', u'ๆตฎ่‘—ๆ›ธ': u'ๆตฎ่‘—ไนฆ', u'ๆตฎ่‘—ไฝœ': u'ๆตฎ่‘—ไฝœ', u'ๆตฎ่‘—ๅ': u'ๆตฎ่‘—ๅ', u'ๆตฎ่‘—้Œ„': u'ๆตฎ่‘—ๅฝ•', u'ๆตฎ่‘—็จฑ': u'ๆตฎ่‘—็งฐ', u'ๆตฎ่‘—่€…': u'ๆตฎ่‘—่€…', u'ๆตฎ่‘—่ฟฐ': u'ๆตฎ่‘—่ฟฐ', u'ๆฝค่‘—': 
u'ๆถฆ็€', u'ๆฝค่‘—ๆ›ธ': u'ๆถฆ่‘—ไนฆ', u'ๆฝค่‘—ไฝœ': u'ๆถฆ่‘—ไฝœ', u'ๆฝค่‘—ๅ': u'ๆถฆ่‘—ๅ', u'ๆฝค่‘—้Œ„': u'ๆถฆ่‘—ๅฝ•', u'ๆฝค่‘—็จฑ': u'ๆถฆ่‘—็งฐ', u'ๆฝค่‘—่€…': u'ๆถฆ่‘—่€…', u'ๆฝค่‘—่ฟฐ': u'ๆถฆ่‘—่ฟฐ', u'ๆถต่‘—': u'ๆถต็€', u'ๆถต่‘—ๆ›ธ': u'ๆถต่‘—ไนฆ', u'ๆถต่‘—ไฝœ': u'ๆถต่‘—ไฝœ', u'ๆถต่‘—ๅ': u'ๆถต่‘—ๅ', u'ๆถต่‘—้Œ„': u'ๆถต่‘—ๅฝ•', u'ๆถต่‘—็จฑ': u'ๆถต่‘—็งฐ', u'ๆถต่‘—่€…': u'ๆถต่‘—่€…', u'ๆถต่‘—่ฟฐ': u'ๆถต่‘—่ฟฐ', u'ๆธด่‘—': u'ๆธด็€', u'ๆธด่‘—ๆ›ธ': u'ๆธด่‘—ไนฆ', u'ๆธด่‘—ไฝœ': u'ๆธด่‘—ไฝœ', u'ๆธด่‘—ๅ': u'ๆธด่‘—ๅ', u'ๆธด่‘—้Œ„': u'ๆธด่‘—ๅฝ•', u'ๆธด่‘—็จฑ': u'ๆธด่‘—็งฐ', u'ๆธด่‘—่€…': u'ๆธด่‘—่€…', u'ๆธด่‘—่ฟฐ': u'ๆธด่‘—่ฟฐ', u'ๆบข่‘—': u'ๆบข็€', u'ๆบข่‘—ๆ›ธ': u'ๆบข่‘—ไนฆ', u'ๆบข่‘—ไฝœ': u'ๆบข่‘—ไฝœ', u'ๆบข่‘—ๅ': u'ๆบข่‘—ๅ', u'ๆบข่‘—้Œ„': u'ๆบข่‘—ๅฝ•', u'ๆบข่‘—็จฑ': u'ๆบข่‘—็งฐ', u'ๆบข่‘—่€…': u'ๆบข่‘—่€…', u'ๆบข่‘—่ฟฐ': u'ๆบข่‘—่ฟฐ', u'ๆผ”่‘—': u'ๆผ”็€', u'ๆผ”่‘—ๆ›ธ': u'ๆผ”่‘—ไนฆ', u'ๆผ”่‘—ไฝœ': u'ๆผ”่‘—ไฝœ', u'ๆผ”่‘—ๅ': u'ๆผ”่‘—ๅ', u'ๆผ”่‘—้Œ„': u'ๆผ”่‘—ๅฝ•', u'ๆผ”่‘—็จฑ': u'ๆผ”่‘—็งฐ', u'ๆผ”่‘—่€…': u'ๆผ”่‘—่€…', u'ๆผ”่‘—่ฟฐ': u'ๆผ”่‘—่ฟฐ', u'ๆผซ่‘—': u'ๆผซ็€', u'ๆผซ่‘—ๆ›ธ': u'ๆผซ่‘—ไนฆ', u'ๆผซ่‘—ไฝœ': u'ๆผซ่‘—ไฝœ', u'ๆผซ่‘—ๅ': u'ๆผซ่‘—ๅ', u'ๆผซ่‘—้Œ„': u'ๆผซ่‘—ๅฝ•', u'ๆผซ่‘—็จฑ': u'ๆผซ่‘—็งฐ', u'ๆผซ่‘—่€…': u'ๆผซ่‘—่€…', u'ๆผซ่‘—่ฟฐ': u'ๆผซ่‘—่ฟฐ', u'้ปž่‘—': u'็‚น็€', u'้ปž่‘—ไฝœ': u'็‚น่‘—ไฝœ', u'้ปž่‘—ๅ': u'็‚น่‘—ๅ', u'้ปž่‘—้Œ„': u'็‚น่‘—ๅฝ•', u'้ปž่‘—็จฑ': u'็‚น่‘—็งฐ', u'้ปž่‘—่€…': u'็‚น่‘—่€…', u'้ปž่‘—่ฟฐ': u'็‚น่‘—่ฟฐ', u'็‡’่‘—': u'็ƒง็€', u'็‡’่‘—ไฝœ': u'็ƒง่‘—ไฝœ', u'็‡’่‘—ๅ': u'็ƒง่‘—ๅ', u'็‡’่‘—้Œ„': u'็ƒง่‘—ๅฝ•', u'็‡’่‘—็จฑ': u'็ƒง่‘—็งฐ', u'็‡’่‘—่€…': u'็ƒง่‘—่€…', u'็‡’่‘—่ฟฐ': u'็ƒง่‘—่ฟฐ', u'็…ง่‘—': u'็…ง็€', u'็…ง่‘—ๆ›ธ': u'็…ง่‘—ไนฆ', u'็…ง่‘—ไฝœ': u'็…ง่‘—ไฝœ', u'็…ง่‘—ๅ': u'็…ง่‘—ๅ', u'็…ง่‘—้Œ„': u'็…ง่‘—ๅฝ•', u'็…ง่‘—็จฑ': u'็…ง่‘—็งฐ', u'็…ง่‘—่€…': u'็…ง่‘—่€…', u'็…ง่‘—่ฟฐ': u'็…ง่‘—่ฟฐ', u'ๆ„›่ญท่‘—': u'็ˆฑๆŠค็€', u'ๆ„›่‘—': u'็ˆฑ็€', u'ๆ„›่‘—ๆ›ธ': u'็ˆฑ่‘—ไนฆ', u'ๆ„›่‘—ไฝœ': 
u'็ˆฑ่‘—ไฝœ', u'ๆ„›่‘—ๅ': u'็ˆฑ่‘—ๅ', u'ๆ„›่‘—้Œ„': u'็ˆฑ่‘—ๅฝ•', u'ๆ„›่‘—็จฑ': u'็ˆฑ่‘—็งฐ', u'ๆ„›่‘—่€…': u'็ˆฑ่‘—่€…', u'ๆ„›่‘—่ฟฐ': u'็ˆฑ่‘—่ฟฐ', u'็‰ฝ่‘—': u'็‰ต็€', u'็‰ฝ่‘—ๆ›ธ': u'็‰ต่‘—ไนฆ', u'็‰ฝ่‘—ไฝœ': u'็‰ต่‘—ไฝœ', u'็‰ฝ่‘—ๅ': u'็‰ต่‘—ๅ', u'็‰ฝ่‘—้Œ„': u'็‰ต่‘—ๅฝ•', u'็‰ฝ่‘—็จฑ': u'็‰ต่‘—็งฐ', u'็‰ฝ่‘—่€…': u'็‰ต่‘—่€…', u'็‰ฝ่‘—่ฟฐ': u'็‰ต่‘—่ฟฐ', u'็Šฏไธ่‘—': u'็Šฏไธ็€', u'็Šฏๅพ—่‘—': u'็Šฏๅพ—็€', u'็จ่‘—': u'็‹ฌ็€', u'็จ่‘—ๆ›ธ': u'็‹ฌ่‘—ไนฆ', u'็จ่‘—ไฝœ': u'็‹ฌ่‘—ไฝœ', u'็จ่‘—ๅ': u'็‹ฌ่‘—ๅ', u'็จ่‘—้Œ„': u'็‹ฌ่‘—ๅฝ•', u'็จ่‘—็จฑ': u'็‹ฌ่‘—็งฐ', u'็จ่‘—่€…': u'็‹ฌ่‘—่€…', u'็จ่‘—่ฟฐ': u'็‹ฌ่‘—่ฟฐ', u'็Œœ่‘—': u'็Œœ็€', u'็Œœ่‘—ๆ›ธ': u'็Œœ็€ไนฆ', u'็Œœ่‘—ไฝœ': u'็Œœ่‘—ไฝœ', u'็Œœ่‘—ๅ': u'็Œœ่‘—ๅ', u'็Œœ่‘—้Œ„': u'็Œœ่‘—ๅฝ•', u'็Œœ่‘—็จฑ': u'็Œœ่‘—็งฐ', u'็Œœ่‘—่€…': u'็Œœ่‘—่€…', u'็Œœ่‘—่ฟฐ': u'็Œœ่‘—่ฟฐ', u'็Žฉ่‘—': u'็Žฉ็€', u'็”œ่‘—': u'็”œ็€', u'็”œ่‘—ๆ›ธ': u'็”œ่‘—ไนฆ', u'็”œ่‘—ไฝœ': u'็”œ่‘—ไฝœ', u'็”œ่‘—ๅ': u'็”œ่‘—ๅ', u'็”œ่‘—้Œ„': u'็”œ่‘—ๅฝ•', u'็”œ่‘—็จฑ': u'็”œ่‘—็งฐ', u'็”œ่‘—่€…': u'็”œ่‘—่€…', u'็”œ่‘—่ฟฐ': u'็”œ่‘—่ฟฐ', u'็”จไธ่‘—': u'็”จไธ็€', u'็”จๅพ—่‘—': u'็”จๅพ—็€', u'็”จ่‘—': u'็”จ็€', u'็”จ่‘—ๆ›ธ': u'็”จ่‘—ไนฆ', u'็”จ่‘—ไฝœ': u'็”จ่‘—ไฝœ', u'็”จ่‘—ๅ': u'็”จ่‘—ๅ', u'็”จ่‘—้Œ„': u'็”จ่‘—ๅฝ•', u'็”จ่‘—็จฑ': u'็”จ่‘—็งฐ', u'็”จ่‘—่€…': u'็”จ่‘—่€…', u'็”จ่‘—่ฟฐ': u'็”จ่‘—่ฟฐ', u'็”ทไธบไนพ': u'็”ทไธบไนพ', u'็”ท็ˆฒไนพ': u'็”ทไธบไนพ', u'็”ท็‚บไนพ': u'็”ทไธบไนพ', u'็”ทๆ€ง็‚บไนพ': u'็”ทๆ€งไธบไนพ', u'็”ทๆ€ง็ˆฒไนพ': u'็”ทๆ€งไธบไนพ', u'็”ทๆ€งไธบไนพ': u'็”ทๆ€งไธบไนพ', u'็•™่‘—': u'็•™็€', u'็•™่‘—ๆ›ธ': u'็•™็€ไนฆ', u'็•™่‘—ไฝœ': u'็•™่‘—ไฝœ', u'็•™่‘—ๅ': u'็•™่‘—ๅ', u'็•™่‘—้Œ„': u'็•™่‘—ๅฝ•', u'็•™่‘—็จฑ': u'็•™่‘—็งฐ', u'็•™่‘—่€…': u'็•™่‘—่€…', u'็•™่‘—่ฟฐ': u'็•™่‘—่ฟฐ', u'็–‘่‘—': u'็–‘็€', u'็–‘่‘—ๆ›ธ': u'็–‘่‘—ไนฆ', u'็–‘่‘—ไฝœ': u'็–‘่‘—ไฝœ', u'็–‘่‘—ๅ': u'็–‘่‘—ๅ', u'็–‘่‘—้Œ„': u'็–‘่‘—ๅฝ•', u'็–‘่‘—็จฑ': u'็–‘่‘—็งฐ', u'็–‘่‘—่€…': u'็–‘่‘—่€…', u'็–‘่‘—่ฟฐ': u'็–‘่‘—่ฟฐ', 
u'็™ฅ็˜•': u'็™ฅ็˜•', u'็šบ่‘—': u'็šฑ็€', u'็šบ่‘—ๆ›ธ': u'็šฑ่‘—ไนฆ', u'็šบ่‘—ไฝœ': u'็šฑ่‘—ไฝœ', u'็šบ่‘—ๅ': u'็šฑ่‘—ๅ', u'็šบ่‘—้Œ„': u'็šฑ่‘—ๅฝ•', u'็šบ่‘—็จฑ': u'็šฑ่‘—็งฐ', u'็šบ่‘—่€…': u'็šฑ่‘—่€…', u'็šบ่‘—่ฟฐ': u'็šฑ่‘—่ฟฐ', u'็››่‘—': u'็››็€', u'็››่‘—ๆ›ธ': u'็››่‘—ไนฆ', u'็››่‘—ไฝœ': u'็››่‘—ไฝœ', u'็››่‘—ๅ': u'็››่‘—ๅ', u'็››่‘—้Œ„': u'็››่‘—ๅฝ•', u'็››่‘—็จฑ': u'็››่‘—็งฐ', u'็››่‘—่€…': u'็››่‘—่€…', u'็››่‘—่ฟฐ': u'็››่‘—่ฟฐ', u'็›ฏ่‘—': u'็›ฏ็€', u'็›ฏ่‘—ๆ›ธ': u'็›ฏ็€ไนฆ', u'็›ฏ่‘—ไฝœ': u'็›ฏ่‘—ไฝœ', u'็›ฏ่‘—ๅ': u'็›ฏ่‘—ๅ', u'็›ฏ่‘—้Œ„': u'็›ฏ่‘—ๅฝ•', u'็›ฏ่‘—็จฑ': u'็›ฏ่‘—็งฐ', u'็›ฏ่‘—่€…': u'็›ฏ่‘—่€…', u'็›ฏ่‘—่ฟฐ': u'็›ฏ่‘—่ฟฐ', u'็›พ่‘—': u'็›พ็€', u'็›พ่‘—ๆ›ธ': u'็›พ่‘—ไนฆ', u'็›พ่‘—ไฝœ': u'็›พ่‘—ไฝœ', u'็›พ่‘—ๅ': u'็›พ่‘—ๅ', u'็›พ่‘—้Œ„': u'็›พ่‘—ๅฝ•', u'็›พ่‘—็จฑ': u'็›พ่‘—็งฐ', u'็›พ่‘—่€…': u'็›พ่‘—่€…', u'็›พ่‘—่ฟฐ': u'็›พ่‘—่ฟฐ', u'็œ‹ไธ่‘—': u'็œ‹ไธ็€', u'็œ‹ๅพ—่‘—': u'็œ‹ๅพ—็€', u'็œ‹่‘—': u'็œ‹็€', u'็œ‹่‘—ๆ›ธ': u'็œ‹็€ไนฆ', u'็œ‹่‘—ไฝœ': u'็œ‹่‘—ไฝœ', u'็œ‹่‘—ๅ': u'็œ‹่‘—ๅ', u'็œ‹่‘—้Œ„': u'็œ‹่‘—ๅฝ•', u'็œ‹่‘—็จฑ': u'็œ‹่‘—็งฐ', u'็œ‹่‘—่€…': u'็œ‹่‘—่€…', u'็œ‹่‘—่ฟฐ': u'็œ‹่‘—่ฟฐ', u'่‘—ๆฅญ': u'็€ไธš', u'่‘—็ตฒ': u'็€ไธ', u'่‘—ไนˆ': u'็€ไนˆ', u'่‘—ไบบ': u'็€ไบบ', u'่‘—ไป€ไนˆๆ€ฅ': u'็€ไป€ไนˆๆ€ฅ', u'่‘—ไป–': u'็€ไป–', u'่‘—ไปค': u'็€ไปค', u'่‘—ไฝ': u'็€ไฝ', u'่‘—้ซ”': u'็€ไฝ“', u'่‘—ไฝ ': u'็€ไฝ ', u'่‘—ไพฟ': u'็€ไพฟ', u'่‘—ๆถผ': u'็€ๅ‡‰', u'่‘—ๅŠ›': u'็€ๅŠ›', u'่‘—ๅ‹': u'็€ๅŠฒ', u'่‘—่™Ÿ': u'็€ๅท', u'่‘—ๅ‘ข': u'็€ๅ‘ข', u'่‘—ๅ“ฉ': u'็€ๅ“ฉ', u'่‘—ๅœฐ': u'็€ๅœฐ', u'่‘—ๅขจ': u'็€ๅขจ', u'่‘—่ฒ': u'็€ๅฃฐ', u'่‘—่™•': u'็€ๅค„', u'่‘—ๅฅน': u'็€ๅฅน', u'่‘—ๅฆณ': u'็€ๅฆณ', u'่‘—ๅง“': u'็€ๅง“', u'่‘—ๅฎƒ': u'็€ๅฎƒ', u'่‘—ๅฎš': u'็€ๅฎš', u'่‘—ๅฏฆ': u'็€ๅฎž', u'่‘—ๅทฑ': u'็€ๅทฑ', u'่‘—ๅธณ': u'็€ๅธ', u'่‘—ๅบŠ': u'็€ๅบŠ', u'่‘—ๅบธ': u'็€ๅบธ', u'่‘—ๅผ': u'็€ๅผ', u'่‘—้Œ„': u'็€ๅฝ•', u'่‘—ๅฟƒ': u'็€ๅฟƒ', u'่‘—ๅฟ—': u'็€ๅฟ—', u'่‘—ๅฟ™': u'็€ๅฟ™', u'่‘—ๆ€ฅ': u'็€ๆ€ฅ', 
u'่‘—ๆƒฑ': u'็€ๆผ', u'่‘—้ฉš': u'็€ๆƒŠ', u'่‘—ๆƒณ': u'็€ๆƒณ', u'่‘—ๆ„': u'็€ๆ„', u'่‘—ๆ…Œ': u'็€ๆ…Œ', u'่‘—ๆˆ‘': u'็€ๆˆ‘', u'่‘—ๆ‰‹': u'็€ๆ‰‹', u'่‘—ๆŠน': u'็€ๆŠน', u'่‘—ๆ‘ธ': u'็€ๆ‘ธ', u'่‘—ๆ’ฐ': u'็€ๆ’ฐ', u'่‘—ๆ•ธ': u'็€ๆ•ฐ', u'่‘—ๆ˜Ž': u'็€ๆ˜Ž', u'่‘—ๆœซ': u'็€ๆœซ', u'่‘—ๆฅต': u'็€ๆž', u'่‘—ๆ ผ': u'็€ๆ ผ', u'่‘—ๆฃ‹': u'็€ๆฃ‹', u'่‘—ๆง': u'็€ๆง', u'่‘—ๆฐฃ': u'็€ๆฐ”', u'่‘—ๆณ•': u'็€ๆณ•', u'่‘—ๆทบ': u'็€ๆต…', u'่‘—็ซ': u'็€็ซ', u'่‘—็„ถ': u'็€็„ถ', u'่‘—็”š': u'็€็”š', u'่‘—็”Ÿ': u'็€็”Ÿ', u'่‘—็–‘': u'็€็–‘', u'่‘—็™ฝ': u'็€็™ฝ', u'่‘—็›ธ': u'็€็›ธ', u'่‘—็œผ': u'็€็œผ', u'่‘—่‘—': u'็€็€', u'่‘—็ฅ‚': u'็€็ฅ‚', u'่‘—็ฉ': u'็€็งฏ', u'่‘—็จฟ': u'็€็จฟ', u'่‘—็ญ†': u'็€็ฌ”', u'่‘—็ฑ': u'็€็ฑ', u'่‘—็ทŠ': u'็€็ดง', u'่‘—็ท‘': u'็€็ท‘', u'่‘—็ต†': u'็€็ปŠ', u'่‘—็ธพ': u'็€็ปฉ', u'่‘—็ท‹': u'็€็ปฏ', u'่‘—็ถ ': u'็€็ปฟ', u'่‘—่‚‰': u'็€่‚‰', u'่‘—่…ณ': u'็€่„š', u'่‘—่‰ฆ': u'็€่ˆฐ', u'่‘—่‰ฒ': u'็€่‰ฒ', u'่‘—็ฏ€': u'็€่Š‚', u'่‘—่Šฑ': u'็€่Šฑ', u'่‘—่Žซ': u'็€่Žซ', u'่‘—่ฝ': u'็€่ฝ', u'่‘—่—': u'็€่—', u'่‘—่กฃ': u'็€่กฃ', u'่‘—่ฃ': u'็€่ฃ…', u'่‘—่ฆ': u'็€่ฆ', u'่‘—่ญฆ': u'็€่ญฆ', u'่‘—่ถฃ': u'็€่ถฃ', u'่‘—้‚Š': u'็€่พน', u'่‘—่ฟท': u'็€่ฟท', u'่‘—่ทก': u'็€่ฟน', u'่‘—้‡': u'็€้‡', u'่‘—้Œฒ': u'็€้Œฒ', u'่‘—่ž': u'็€้—ป', u'่‘—้™ธ': u'็€้™†', u'่‘—้›': u'็€้›', u'่‘—้žญ': u'็€้žญ', u'่‘—้กŒ': u'็€้ข˜', u'่‘—้ญ”': u'็€้ญ”', u'็กไธ่‘—': u'็กไธ็€', u'็กๅพ—่‘—': u'็กๅพ—็€', u'็ก่‘—': u'็ก็€', u'็ก่‘—ๆ›ธ': u'็ก่‘—ไนฆ', u'็ก่‘—ไฝœ': u'็ก่‘—ไฝœ', u'็ก่‘—ๅ': u'็ก่‘—ๅ', u'็ก่‘—้Œ„': u'็ก่‘—ๅฝ•', u'็ก่‘—็จฑ': u'็ก่‘—็งฐ', u'็ก่‘—่€…': u'็ก่‘—่€…', u'็ก่‘—่ฟฐ': u'็ก่‘—่ฟฐ', u'็นๅพฎ็Ÿฅ่‘—': u'็นๅพฎ็Ÿฅ่‘—', u'็ชไธธ': u'็พไธธ', u'็žž่‘—': u'็ž’็€', u'็žž่‘—ๆ›ธ': u'็ž’่‘—ไนฆ', u'็žž่‘—ไฝœ': u'็ž’่‘—ไฝœ', u'็žž่‘—ๅ': u'็ž’่‘—ๅ', u'็žž่‘—้Œ„': u'็ž’่‘—ๅฝ•', u'็žž่‘—็จฑ': u'็ž’่‘—็งฐ', u'็žž่‘—่€…': u'็ž’่‘—่€…', u'็žž่‘—่ฟฐ': u'็ž’่‘—่ฟฐ', u'็žง่‘—': u'็žง็€', 
u'็žง่‘—ๆ›ธ': u'็žง็€ไนฆ', u'็žง่‘—ไฝœ': u'็žง่‘—ไฝœ', u'็žง่‘—ๅ': u'็žง่‘—ๅ', u'็žง่‘—้Œ„': u'็žง่‘—ๅฝ•', u'็žง่‘—็จฑ': u'็žง่‘—็งฐ', u'็žง่‘—่€…': u'็žง่‘—่€…', u'็žง่‘—่ฟฐ': u'็žง่‘—่ฟฐ', u'็žช่‘—': u'็žช็€', u'็žช่‘—ๆ›ธ': u'็žช่‘—ไนฆ', u'็žช่‘—ไฝœ': u'็žช่‘—ไฝœ', u'็žช่‘—ๅ': u'็žช่‘—ๅ', u'็žช่‘—้Œ„': u'็žช่‘—ๅฝ•', u'็žช่‘—็จฑ': u'็žช่‘—็งฐ', u'็žช่‘—่€…': u'็žช่‘—่€…', u'็žช่‘—่ฟฐ': u'็žช่‘—่ฟฐ', u'็žญๆœ›': u'็žญๆœ›', u'็Ÿณ็ข้•‡': u'็Ÿณ็ข้•‡', u'็Ÿณ็ข้Žฎ': u'็Ÿณ็ข้•‡', u'็ฆ่‘—': u'็ฆ็€', u'็ฆ่‘—ๆ›ธ': u'็ฆ่‘—ไนฆ', u'็ฆ่‘—ไฝœ': u'็ฆ่‘—ไฝœ', u'็ฆ่‘—ๅ': u'็ฆ่‘—ๅ', u'็ฆ่‘—้Œ„': u'็ฆ่‘—ๅฝ•', u'็ฆ่‘—็จฑ': u'็ฆ่‘—็งฐ', u'็ฆ่‘—่€…': u'็ฆ่‘—่€…', u'็ฆ่‘—่ฟฐ': u'็ฆ่‘—่ฟฐ', u'็ฉ€ๆข': u'็ฉ€ๆข', u'็ฉบ่‘—': u'็ฉบ็€', u'็ฉบ่‘—ๆ›ธ': u'็ฉบ่‘—ไนฆ', u'็ฉบ่‘—ไฝœ': u'็ฉบ่‘—ไฝœ', u'็ฉบ่‘—ๅ': u'็ฉบ่‘—ๅ', u'็ฉบ่‘—้Œ„': u'็ฉบ่‘—ๅฝ•', u'็ฉบ่‘—็จฑ': u'็ฉบ่‘—็งฐ', u'็ฉบ่‘—่€…': u'็ฉบ่‘—่€…', u'็ฉบ่‘—่ฟฐ': u'็ฉบ่‘—่ฟฐ', u'็ฉฟ่‘—': u'็ฉฟ็€', u'็ฉฟ่‘—ๆ›ธ': u'็ฉฟ่‘—ไนฆ', u'็ฉฟ่‘—ไฝœ': u'็ฉฟ่‘—ไฝœ', u'็ฉฟ่‘—ๅ': u'็ฉฟ่‘—ๅ', u'็ฉฟ่‘—้Œ„': u'็ฉฟ่‘—ๅฝ•', u'็ฉฟ่‘—็จฑ': u'็ฉฟ่‘—็งฐ', u'็ฉฟ่‘—่€…': u'็ฉฟ่‘—่€…', u'็ฉฟ่‘—่ฟฐ': u'็ฉฟ่‘—่ฟฐ', u'่ฑŽ่‘—': u'็ซ–็€', u'่ฑŽ่‘—ๆ›ธ': u'็ซ–่‘—ไนฆ', u'่ฑŽ่‘—ไฝœ': u'็ซ–่‘—ไฝœ', u'่ฑŽ่‘—ๅ': u'็ซ–่‘—ๅ', u'่ฑŽ่‘—้Œ„': u'็ซ–่‘—ๅฝ•', u'่ฑŽ่‘—็จฑ': u'็ซ–่‘—็งฐ', u'่ฑŽ่‘—่€…': u'็ซ–่‘—่€…', u'่ฑŽ่‘—่ฟฐ': u'็ซ–่‘—่ฟฐ', u'็ซ™่‘—': u'็ซ™็€', u'็ซ™่‘—ๆ›ธ': u'็ซ™่‘—ไนฆ', u'็ซ™่‘—ไฝœ': u'็ซ™่‘—ไฝœ', u'็ซ™่‘—ๅ': u'็ซ™่‘—ๅ', u'็ซ™่‘—้Œ„': u'็ซ™่‘—ๅฝ•', u'็ซ™่‘—็จฑ': u'็ซ™่‘—็งฐ', u'็ซ™่‘—่€…': u'็ซ™่‘—่€…', u'็ซ™่‘—่ฟฐ': u'็ซ™่‘—่ฟฐ', u'็ฌ‘่‘—': u'็ฌ‘็€', u'็ฌ‘่‘—ๆ›ธ': u'็ฌ‘่‘—ไนฆ', u'็ฌ‘่‘—ไฝœ': u'็ฌ‘่‘—ไฝœ', u'็ฌ‘่‘—ๅ': u'็ฌ‘่‘—ๅ', u'็ฌ‘่‘—้Œ„': u'็ฌ‘่‘—ๅฝ•', u'็ฌ‘่‘—็จฑ': u'็ฌ‘่‘—็งฐ', u'็ฌ‘่‘—่€…': u'็ฌ‘่‘—่€…', u'็ฌ‘่‘—่ฟฐ': u'็ฌ‘่‘—่ฟฐ', u'็ญ”่ฆ†': u'็ญ”ๅค', u'็ฎก่‘—': u'็ฎก็€', u'็ฎก่‘—ๆ›ธ': u'็ฎก่‘—ไนฆ', u'็ฎก่‘—ไฝœ': u'็ฎก่‘—ไฝœ', u'็ฎก่‘—ๅ': u'็ฎก่‘—ๅ', u'็ฎก่‘—้Œ„': u'็ฎก่‘—ๅฝ•', u'็ฎก่‘—็จฑ': 
u'็ฎก่‘—็งฐ', u'็ฎก่‘—่€…': u'็ฎก่‘—่€…', u'็ฎก่‘—่ฟฐ': u'็ฎก่‘—่ฟฐ', u'็ถ่‘—': u'็ป‘็€', u'็ถ่‘—ๆ›ธ': u'็ป‘่‘—ไนฆ', u'็ถ่‘—ไฝœ': u'็ป‘่‘—ไฝœ', u'็ถ่‘—ๅ': u'็ป‘่‘—ๅ', u'็ถ่‘—้Œ„': u'็ป‘่‘—ๅฝ•', u'็ถ่‘—็จฑ': u'็ป‘่‘—็งฐ', u'็ถ่‘—่€…': u'็ป‘่‘—่€…', u'็ถ่‘—่ฟฐ': u'็ป‘่‘—่ฟฐ', u'็นž่‘—': u'็ป•็€', u'็นž่‘—ๆ›ธ': u'็ป•่‘—ไนฆ', u'็นž่‘—ไฝœ': u'็ป•่‘—ไฝœ', u'็นž่‘—ๅ': u'็ป•่‘—ๅ', u'็นž่‘—้Œ„': u'็ป•่‘—ๅฝ•', u'็นž่‘—็จฑ': u'็ป•่‘—็งฐ', u'็นž่‘—่€…': u'็ป•่‘—่€…', u'็นž่‘—่ฟฐ': u'็ป•่‘—่ฟฐ', u'็ทจ่‘—': u'็ผ–่‘—', u'็บ่‘—': u'็ผ ็€', u'็บ่‘—ๆ›ธ': u'็ผ ่‘—ไนฆ', u'็บ่‘—ไฝœ': u'็ผ ่‘—ไฝœ', u'็บ่‘—ๅ': u'็ผ ่‘—ๅ', u'็บ่‘—้Œ„': u'็ผ ่‘—ๅฝ•', u'็บ่‘—็จฑ': u'็ผ ่‘—็งฐ', u'็บ่‘—่€…': u'็ผ ่‘—่€…', u'็บ่‘—่ฟฐ': u'็ผ ่‘—่ฟฐ', u'็ฝฉ่‘—': u'็ฝฉ็€', u'็ฝฉ่‘—ๆ›ธ': u'็ฝฉ่‘—ไนฆ', u'็ฝฉ่‘—ไฝœ': u'็ฝฉ่‘—ไฝœ', u'็ฝฉ่‘—ๅ': u'็ฝฉ่‘—ๅ', u'็ฝฉ่‘—้Œ„': u'็ฝฉ่‘—ๅฝ•', u'็ฝฉ่‘—็จฑ': u'็ฝฉ่‘—็งฐ', u'็ฝฉ่‘—่€…': u'็ฝฉ่‘—่€…', u'็ฝฉ่‘—่ฟฐ': u'็ฝฉ่‘—่ฟฐ', u'็พŽ่‘—': u'็พŽ็€', u'็พŽ่‘—ๆ›ธ': u'็พŽ่‘—ไนฆ', u'็พŽ่‘—ไฝœ': u'็พŽ่‘—ไฝœ', u'็พŽ่‘—ๅ': u'็พŽ่‘—ๅ', u'็พŽ่‘—้Œ„': u'็พŽ่‘—ๅฝ•', u'็พŽ่‘—็จฑ': u'็พŽ่‘—็งฐ', u'็พŽ่‘—่€…': u'็พŽ่‘—่€…', u'็พŽ่‘—่ฟฐ': u'็พŽ่‘—่ฟฐ', u'่€€่‘—': u'่€€็€', u'่€€่‘—ๆ›ธ': u'่€€่‘—ไนฆ', u'่€€่‘—ไฝœ': u'่€€่‘—ไฝœ', u'่€€่‘—ๅ': u'่€€่‘—ๅ', u'่€€่‘—้Œ„': u'่€€่‘—ๅฝ•', u'่€€่‘—็จฑ': u'่€€่‘—็งฐ', u'่€€่‘—่€…': u'่€€่‘—่€…', u'่€€่‘—่ฟฐ': u'่€€่‘—่ฟฐ', u'่€ๅนบ': u'่€ๅนบ', u'่€ƒ่‘—': u'่€ƒ็€', u'่€ƒ่‘—ๆ›ธ': u'่€ƒ่‘—ไนฆ', u'่€ƒ่‘—ไฝœ': u'่€ƒ่‘—ไฝœ', u'่€ƒ่‘—ๅ': u'่€ƒ่‘—ๅ', u'่€ƒ่‘—้Œ„': u'่€ƒ่‘—ๅฝ•', u'่€ƒ่‘—็จฑ': u'่€ƒ่‘—็งฐ', u'่€ƒ่‘—่€…': u'่€ƒ่‘—่€…', u'่€ƒ่‘—่ฟฐ': u'่€ƒ่‘—่ฟฐ', u'่‚‰ไนพไนพ': u'่‚‰ๅนฒๅนฒ', u'่‚˜ๆ‰‹้Š่ถณ': u'่‚˜ๆ‰‹้“พ่ถณ', u'่ƒŒ่‘—': u'่ƒŒ็€', u'่ƒŒ่‘—ๆ›ธ': u'่ƒŒ่‘—ไนฆ', u'่ƒŒ่‘—ไฝœ': u'่ƒŒ่‘—ไฝœ', u'่ƒŒ่‘—ๅ': u'่ƒŒ่‘—ๅ', u'่ƒŒ่‘—้Œ„': u'่ƒŒ่‘—ๅฝ•', u'่ƒŒ่‘—็จฑ': u'่ƒŒ่‘—็งฐ', u'่ƒŒ่‘—่€…': u'่ƒŒ่‘—่€…', u'่ƒŒ่‘—่ฟฐ': u'่ƒŒ่‘—่ฟฐ', u'่† ่‘—': u'่ƒถ็€', u'่† ่‘—ๆ›ธ': u'่ƒถ่‘—ไนฆ', u'่† ่‘—ไฝœ': u'่ƒถ่‘—ไฝœ', 
u'่† ่‘—ๅ': u'่ƒถ่‘—ๅ', u'่† ่‘—้Œ„': u'่ƒถ่‘—ๅฝ•', u'่† ่‘—็จฑ': u'่ƒถ่‘—็งฐ', u'่† ่‘—่€…': u'่ƒถ่‘—่€…', u'่† ่‘—่ฟฐ': u'่ƒถ่‘—่ฟฐ', u'่—่‘—': u'่‰บ็€', u'่—่‘—ๆ›ธ': u'่‰บ่‘—ไนฆ', u'่—่‘—ไฝœ': u'่‰บ่‘—ไฝœ', u'่—่‘—ๅ': u'่‰บ่‘—ๅ', u'่—่‘—้Œ„': u'่‰บ่‘—ๅฝ•', u'่—่‘—็จฑ': u'่‰บ่‘—็งฐ', u'่—่‘—่€…': u'่‰บ่‘—่€…', u'่—่‘—่ฟฐ': u'่‰บ่‘—่ฟฐ', u'่‹ฆ่‘—': u'่‹ฆ็€', u'่‹ฆ่‘—ๆ›ธ': u'่‹ฆ่‘—ไนฆ', u'่‹ฆ่‘—ไฝœ': u'่‹ฆ่‘—ไฝœ', u'่‹ฆ่‘—ๅ': u'่‹ฆ่‘—ๅ', u'่‹ฆ่‘—้Œ„': u'่‹ฆ่‘—ๅฝ•', u'่‹ฆ่‘—็จฑ': u'่‹ฆ่‘—็งฐ', u'่‹ฆ่‘—่€…': u'่‹ฆ่‘—่€…', u'่‹ฆ่‘—่ฟฐ': u'่‹ฆ่‘—่ฟฐ', u'่‹ง็ƒฏ': u'่‹ง็ƒฏ', u'่–ด็ƒฏ': u'่‹ง็ƒฏ', u'็ฒ่‘—': u'่Žท็€', u'็ฒ่‘—ๆ›ธ': u'่Žท่‘—ไนฆ', u'็ฒ่‘—ไฝœ': u'่Žท่‘—ไฝœ', u'็ฒ่‘—ๅ': u'่Žท่‘—ๅ', u'็ฒ่‘—้Œ„': u'่Žท่‘—ๅฝ•', u'็ฒ่‘—็จฑ': u'่Žท่‘—็งฐ', u'็ฒ่‘—่€…': u'่Žท่‘—่€…', u'็ฒ่‘—่ฟฐ': u'่Žท่‘—่ฟฐ', u'่•ญไนพ': u'่งไนพ', u'่งไนพ': u'่งไนพ', u'่ฝ่‘—': u'่ฝ็€', u'่ฝ่‘—ๆ›ธ': u'่ฝ่‘—ไนฆ', u'่ฝ่‘—ไฝœ': u'่ฝ่‘—ไฝœ', u'่ฝ่‘—ๅ': u'่ฝ่‘—ๅ', u'่ฝ่‘—้Œ„': u'่ฝ่‘—ๅฝ•', u'่ฝ่‘—็จฑ': u'่ฝ่‘—็งฐ', u'่ฝ่‘—่€…': u'่ฝ่‘—่€…', u'่ฝ่‘—่ฟฐ': u'่ฝ่‘—่ฟฐ', u'่‘—ๆ›ธ': u'่‘—ไนฆ', u'่‘—ๆ›ธ็ซ‹่ชช': u'่‘—ไนฆ็ซ‹่ฏด', u'่‘—ไฝœ': u'่‘—ไฝœ', u'่‘—ๅ': u'่‘—ๅ', u'่‘—้Œ„่ฆๅ‰‡': u'่‘—ๅฝ•่ง„ๅˆ™', u'่‘—ๆ–‡': u'่‘—ๆ–‡', u'่‘—ๆœ‰': u'่‘—ๆœ‰', u'่‘—็จฑ': u'่‘—็งฐ', u'่‘—่€…': u'่‘—่€…', u'่‘—่บซ': u'่‘—่บซ', u'่‘—่ฟฐ': u'่‘—่ฟฐ', u'่’™่‘—': u'่’™็€', u'่’™่‘—ๆ›ธ': u'่’™่‘—ไนฆ', u'่’™่‘—ไฝœ': u'่’™่‘—ไฝœ', u'่’™่‘—ๅ': u'่’™่‘—ๅ', u'่’™่‘—้Œ„': u'่’™่‘—ๅฝ•', u'่’™่‘—็จฑ': u'่’™่‘—็งฐ', u'่’™่‘—่€…': u'่’™่‘—่€…', u'่’™่‘—่ฟฐ': u'่’™่‘—่ฟฐ', u'่—่‘—': u'่—็€', u'่—่‘—ๆ›ธ': u'่—่‘—ไนฆ', u'่—่‘—ไฝœ': u'่—่‘—ไฝœ', u'่—่‘—ๅ': u'่—่‘—ๅ', u'่—่‘—้Œ„': u'่—่‘—ๅฝ•', u'่—่‘—็จฑ': u'่—่‘—็งฐ', u'่—่‘—่€…': u'่—่‘—่€…', u'่—่‘—่ฟฐ': u'่—่‘—่ฟฐ', u'่˜ธ่‘—': u'่˜ธ็€', u'่˜ธ่‘—ๆ›ธ': u'่˜ธ่‘—ไนฆ', u'่˜ธ่‘—ไฝœ': u'่˜ธ่‘—ไฝœ', u'่˜ธ่‘—ๅ': u'่˜ธ่‘—ๅ', u'่˜ธ่‘—้Œ„': u'่˜ธ่‘—ๅฝ•', u'่˜ธ่‘—็จฑ': u'่˜ธ่‘—็งฐ', u'่˜ธ่‘—่€…': u'่˜ธ่‘—่€…', 
u'่˜ธ่‘—่ฟฐ': u'่˜ธ่‘—่ฟฐ', u'่กŒ่‘—': u'่กŒ็€', u'่กŒ่‘—ๆ›ธ': u'่กŒ่‘—ไนฆ', u'่กŒ่‘—ไฝœ': u'่กŒ่‘—ไฝœ', u'่กŒ่‘—ๅ': u'่กŒ่‘—ๅ', u'่กŒ่‘—้Œ„': u'่กŒ่‘—ๅฝ•', u'่กŒ่‘—็จฑ': u'่กŒ่‘—็งฐ', u'่กŒ่‘—่€…': u'่กŒ่‘—่€…', u'่กŒ่‘—่ฟฐ': u'่กŒ่‘—่ฟฐ', u'่กฃ่‘—': u'่กฃ็€', u'่กฃ่‘—ๆ›ธ': u'่กฃ่‘—ไนฆ', u'่กฃ่‘—ไฝœ': u'่กฃ่‘—ไฝœ', u'่กฃ่‘—ๅ': u'่กฃ่‘—ๅ', u'่กฃ่‘—้Œ„': u'่กฃ่‘—ๅฝ•', u'่กฃ่‘—็จฑ': u'่กฃ่‘—็งฐ', u'่กฃ่‘—่€…': u'่กฃ่‘—่€…', u'่กฃ่‘—่ฟฐ': u'่กฃ่‘—่ฟฐ', u'่ฃ่‘—': u'่ฃ…็€', u'่ฃ่‘—ๆ›ธ': u'่ฃ…่‘—ไนฆ', u'่ฃ่‘—ไฝœ': u'่ฃ…่‘—ไฝœ', u'่ฃ่‘—ๅ': u'่ฃ…่‘—ๅ', u'่ฃ่‘—้Œ„': u'่ฃ…่‘—ๅฝ•', u'่ฃ่‘—็จฑ': u'่ฃ…่‘—็งฐ', u'่ฃ่‘—่€…': u'่ฃ…่‘—่€…', u'่ฃ่‘—่ฟฐ': u'่ฃ…่‘—่ฟฐ', u'่ฃน่‘—': u'่ฃน็€', u'่ฃน่‘—ๆ›ธ': u'่ฃน่‘—ไนฆ', u'่ฃน่‘—ไฝœ': u'่ฃน่‘—ไฝœ', u'่ฃน่‘—ๅ': u'่ฃน่‘—ๅ', u'่ฃน่‘—้Œ„': u'่ฃน่‘—ๅฝ•', u'่ฃน่‘—็จฑ': u'่ฃน่‘—็งฐ', u'่ฃน่‘—่€…': u'่ฃน่‘—่€…', u'่ฃน่‘—่ฟฐ': u'่ฃน่‘—่ฟฐ', u'่ฆ†่“‹': u'่ฆ†่“‹', u'่ฆ‹ๅพฎ็Ÿฅ่‘—': u'่งๅพฎ็Ÿฅ่‘—', u'่ฆ‹่‘—': u'่ง็€', u'่ฆ‹่‘—ๆ›ธ': u'่ง่‘—ไนฆ', u'่ฆ‹่‘—ไฝœ': u'่ง่‘—ไฝœ', u'่ฆ‹่‘—ๅ': u'่ง่‘—ๅ', u'่ฆ‹่‘—้Œ„': u'่ง่‘—ๅฝ•', u'่ฆ‹่‘—็จฑ': u'่ง่‘—็งฐ', u'่ฆ‹่‘—่€…': u'่ง่‘—่€…', u'่ฆ‹่‘—่ฟฐ': u'่ง่‘—่ฟฐ', u'่ฆ–ๅพฎ็Ÿฅ่‘—': u'่ง†ๅพฎ็Ÿฅ่‘—', u'่จ€ๅนพๆž็†': u'่จ€ๅนพๆž็†', u'่จ˜่‘—': u'่ฎฐ็€', u'่จ˜่‘—ๆ›ธ': u'่ฎฐ่‘—ไนฆ', u'่จ˜่‘—ไฝœ': u'่ฎฐ่‘—ไฝœ', u'่จ˜่‘—ๅ': u'่ฎฐ่‘—ๅ', u'่จ˜่‘—้Œ„': u'่ฎฐ่‘—ๅฝ•', u'่จ˜่‘—็จฑ': u'่ฎฐ่‘—็งฐ', u'่จ˜่‘—่€…': u'่ฎฐ่‘—่€…', u'่จ˜่‘—่ฟฐ': u'่ฎฐ่‘—่ฟฐ', u'่ซ–่‘—': u'่ฎบ่‘—', u'่ญฏ่‘—': u'่ฏ‘่‘—', u'่ฉฆ่‘—': u'่ฏ•็€', u'่ฉฆ่‘—ๆ›ธ': u'่ฏ•่‘—ไนฆ', u'่ฉฆ่‘—ไฝœ': u'่ฏ•่‘—ไฝœ', u'่ฉฆ่‘—ๅ': u'่ฏ•่‘—ๅ', u'่ฉฆ่‘—้Œ„': u'่ฏ•่‘—ๅฝ•', u'่ฉฆ่‘—็จฑ': u'่ฏ•่‘—็งฐ', u'่ฉฆ่‘—่€…': u'่ฏ•่‘—่€…', u'่ฉฆ่‘—่ฟฐ': u'่ฏ•่‘—่ฟฐ', u'่ชž่‘—': u'่ฏญ็€', u'่ชž่‘—ๆ›ธ': u'่ฏญ่‘—ไนฆ', u'่ชž่‘—ไฝœ': u'่ฏญ่‘—ไฝœ', u'่ชž่‘—ๅ': u'่ฏญ่‘—ๅ', u'่ชž่‘—้Œ„': u'่ฏญ่‘—ๅฝ•', u'่ชž่‘—็จฑ': u'่ฏญ่‘—็งฐ', u'่ชž่‘—่€…': u'่ฏญ่‘—่€…', u'่ชž่‘—่ฟฐ': u'่ฏญ่‘—่ฟฐ', u'่ฑซ่‘—': u'่ฑซ็€', u'่ฑซ่‘—ๆ›ธ': u'่ฑซ่‘—ไนฆ', 
u'่ฑซ่‘—ไฝœ': u'่ฑซ่‘—ไฝœ', u'่ฑซ่‘—ๅ': u'่ฑซ่‘—ๅ', u'่ฑซ่‘—้Œ„': u'่ฑซ่‘—ๅฝ•', u'่ฑซ่‘—็จฑ': u'่ฑซ่‘—็งฐ', u'่ฑซ่‘—่€…': u'่ฑซ่‘—่€…', u'่ฑซ่‘—่ฟฐ': u'่ฑซ่‘—่ฟฐ', u'่ฒž่‘—': u'่ดž็€', u'่ฒž่‘—ๆ›ธ': u'่ดž่‘—ไนฆ', u'่ฒž่‘—ไฝœ': u'่ดž่‘—ไฝœ', u'่ฒž่‘—ๅ': u'่ดž่‘—ๅ', u'่ฒž่‘—้Œ„': u'่ดž่‘—ๅฝ•', u'่ฒž่‘—็จฑ': u'่ดž่‘—็งฐ', u'่ฒž่‘—่€…': u'่ดž่‘—่€…', u'่ฒž่‘—่ฟฐ': u'่ดž่‘—่ฟฐ', u'่ตฐ่‘—': u'่ตฐ็€', u'่ตฐ่‘—ๆ›ธ': u'่ตฐ่‘—ไนฆ', u'่ตฐ่‘—ไฝœ': u'่ตฐ่‘—ไฝœ', u'่ตฐ่‘—ๅ': u'่ตฐ่‘—ๅ', u'่ตฐ่‘—้Œ„': u'่ตฐ่‘—ๅฝ•', u'่ตฐ่‘—็จฑ': u'่ตฐ่‘—็งฐ', u'่ตฐ่‘—่€…': u'่ตฐ่‘—่€…', u'่ตฐ่‘—่ฟฐ': u'่ตฐ่‘—่ฟฐ', u'่ถ•่‘—': u'่ตถ็€', u'่ถ•่‘—ๆ›ธ': u'่ตถ่‘—ไนฆ', u'่ถ•่‘—ไฝœ': u'่ตถ่‘—ไฝœ', u'่ถ•่‘—ๅ': u'่ตถ่‘—ๅ', u'่ถ•่‘—้Œ„': u'่ตถ่‘—ๅฝ•', u'่ถ•่‘—็จฑ': u'่ตถ่‘—็งฐ', u'่ถ•่‘—่€…': u'่ตถ่‘—่€…', u'่ถ•่‘—่ฟฐ': u'่ตถ่‘—่ฟฐ', u'่ถด่‘—': u'่ถด็€', u'่ถด่‘—ๆ›ธ': u'่ถด่‘—ไนฆ', u'่ถด่‘—ไฝœ': u'่ถด่‘—ไฝœ', u'่ถด่‘—ๅ': u'่ถด่‘—ๅ', u'่ถด่‘—้Œ„': u'่ถด่‘—ๅฝ•', u'่ถด่‘—็จฑ': u'่ถด่‘—็งฐ', u'่ถด่‘—่€…': u'่ถด่‘—่€…', u'่ถด่‘—่ฟฐ': u'่ถด่‘—่ฟฐ', u'่บ่‘—': u'่ทƒ็€', u'่บ่‘—ๆ›ธ': u'่ทƒ่‘—ไนฆ', u'่บ่‘—ไฝœ': u'่ทƒ่‘—ไฝœ', u'่บ่‘—ๅ': u'่ทƒ่‘—ๅ', u'่บ่‘—้Œ„': u'่ทƒ่‘—ๅฝ•', u'่บ่‘—็จฑ': u'่ทƒ่‘—็งฐ', u'่บ่‘—่€…': u'่ทƒ่‘—่€…', u'่บ่‘—่ฟฐ': u'่ทƒ่‘—่ฟฐ', u'่ท‘่‘—': u'่ท‘็€', u'่ท‘่‘—ๆ›ธ': u'่ท‘่‘—ไนฆ', u'่ท‘่‘—ไฝœ': u'่ท‘่‘—ไฝœ', u'่ท‘่‘—ๅ': u'่ท‘่‘—ๅ', u'่ท‘่‘—้Œ„': u'่ท‘่‘—ๅฝ•', u'่ท‘่‘—็จฑ': u'่ท‘่‘—็งฐ', u'่ท‘่‘—่€…': u'่ท‘่‘—่€…', u'่ท‘่‘—่ฟฐ': u'่ท‘่‘—่ฟฐ', u'่ทŸ่‘—': u'่ทŸ็€', u'่ทŸ่‘—ๆ›ธ': u'่ทŸ่‘—ไนฆ', u'่ทŸ่‘—ไฝœ': u'่ทŸ่‘—ไฝœ', u'่ทŸ่‘—ๅ': u'่ทŸ่‘—ๅ', u'่ทŸ่‘—้Œ„': u'่ทŸ่‘—ๅฝ•', u'่ทŸ่‘—็จฑ': u'่ทŸ่‘—็งฐ', u'่ทŸ่‘—่€…': u'่ทŸ่‘—่€…', u'่ทŸ่‘—่ฟฐ': u'่ทŸ่‘—่ฟฐ', u'่ทช่‘—': u'่ทช็€', u'่ทช่‘—ๆ›ธ': u'่ทช่‘—ไนฆ', u'่ทช่‘—ไฝœ': u'่ทช่‘—ไฝœ', u'่ทช่‘—ๅ': u'่ทช่‘—ๅ', u'่ทช่‘—้Œ„': u'่ทช่‘—ๅฝ•', u'่ทช่‘—็จฑ': u'่ทช่‘—็งฐ', u'่ทช่‘—่€…': u'่ทช่‘—่€…', u'่ทช่‘—่ฟฐ': u'่ทช่‘—่ฟฐ', u'่ทณ่‘—': u'่ทณ็€', u'่ทณ่‘—ๆ›ธ': u'่ทณ่‘—ไนฆ', u'่ทณ่‘—ไฝœ': u'่ทณ่‘—ไฝœ', 
u'่ทณ่‘—ๅ': u'่ทณ่‘—ๅ', u'่ทณ่‘—้Œ„': u'่ทณ่‘—ๅฝ•', u'่ทณ่‘—็จฑ': u'่ทณ่‘—็งฐ', u'่ทณ่‘—่€…': u'่ทณ่‘—่€…', u'่ทณ่‘—่ฟฐ': u'่ทณ่‘—่ฟฐ', u'่บŠ่บ‡ๆปฟๅฟ—': u'่ธŒ่บ‡ๆปฟๅฟ—', u'่ธ่‘—': u'่ธ็€', u'่ธ่‘—ๆ›ธ': u'่ธ่‘—ไนฆ', u'่ธ่‘—ไฝœ': u'่ธ่‘—ไฝœ', u'่ธ่‘—ๅ': u'่ธ่‘—ๅ', u'่ธ่‘—้Œ„': u'่ธ่‘—ๅฝ•', u'่ธ่‘—็จฑ': u'่ธ่‘—็งฐ', u'่ธ่‘—่€…': u'่ธ่‘—่€…', u'่ธ่‘—่ฟฐ': u'่ธ่‘—่ฟฐ', u'่ธฉ่‘—': u'่ธฉ็€', u'่ธฉ่‘—ๆ›ธ': u'่ธฉ่‘—ไนฆ', u'่ธฉ่‘—ไฝœ': u'่ธฉ่‘—ไฝœ', u'่ธฉ่‘—ๅ': u'่ธฉ่‘—ๅ', u'่ธฉ่‘—้Œ„': u'่ธฉ่‘—ๅฝ•', u'่ธฉ่‘—็จฑ': u'่ธฉ่‘—็งฐ', u'่ธฉ่‘—่€…': u'่ธฉ่‘—่€…', u'่ธฉ่‘—่ฟฐ': u'่ธฉ่‘—่ฟฐ', u'่บซ่‘—': u'่บซ็€', u'่บซ่‘—ๆ›ธ': u'่บซ่‘—ไนฆ', u'่บซ่‘—ไฝœ': u'่บซ่‘—ไฝœ', u'่บซ่‘—ๅ': u'่บซ่‘—ๅ', u'่บซ่‘—้Œ„': u'่บซ่‘—ๅฝ•', u'่บซ่‘—็จฑ': u'่บซ่‘—็งฐ', u'่บซ่‘—่€…': u'่บซ่‘—่€…', u'่บซ่‘—่ฟฐ': u'่บซ่‘—่ฟฐ', u'่บบ่‘—': u'่บบ็€', u'่บบ่‘—ๆ›ธ': u'่บบ่‘—ไนฆ', u'่บบ่‘—ไฝœ': u'่บบ่‘—ไฝœ', u'่บบ่‘—ๅ': u'่บบ่‘—ๅ', u'่บบ่‘—้Œ„': u'่บบ่‘—ๅฝ•', u'่บบ่‘—็จฑ': u'่บบ่‘—็งฐ', u'่บบ่‘—่€…': u'่บบ่‘—่€…', u'่บบ่‘—่ฟฐ': u'่บบ่‘—่ฟฐ', u'่ฝ‰่‘—': u'่ฝฌ็€', u'่ฝ‰่‘—ๆ›ธ': u'่ฝฌ่‘—ไนฆ', u'่ฝ‰่‘—ไฝœ': u'่ฝฌ่‘—ไฝœ', u'่ฝ‰่‘—ๅ': u'่ฝฌ่‘—ๅ', u'่ฝ‰่‘—้Œ„': u'่ฝฌ่‘—ๅฝ•', u'่ฝ‰่‘—็จฑ': u'่ฝฌ่‘—็งฐ', u'่ฝ‰่‘—่€…': u'่ฝฌ่‘—่€…', u'่ฝ‰่‘—่ฟฐ': u'่ฝฌ่‘—่ฟฐ', u'่ผ‰่‘—': u'่ฝฝ็€', u'่ผ‰่‘—ๆ›ธ': u'่ฝฝ่‘—ไนฆ', u'่ผ‰่‘—ไฝœ': u'่ฝฝ่‘—ไฝœ', u'่ผ‰่‘—ๅ': u'่ฝฝ่‘—ๅ', u'่ผ‰่‘—้Œ„': u'่ฝฝ่‘—ๅฝ•', u'่ผ‰่‘—็จฑ': u'่ฝฝ่‘—็งฐ', u'่ผ‰่‘—่€…': u'่ฝฝ่‘—่€…', u'่ผ‰่‘—่ฟฐ': u'่ฝฝ่‘—่ฟฐ', u'่ผƒ่‘—': u'่พƒ่‘—', u'้”่‘—': u'่พพ็€', u'้”่‘—ๆ›ธ': u'่พพ่‘—ไนฆ', u'้”่‘—ไฝœ': u'่พพ่‘—ไฝœ', u'้”่‘—ๅ': u'่พพ่‘—ๅ', u'้”่‘—้Œ„': u'่พพ่‘—ๅฝ•', u'้”่‘—็จฑ': u'่พพ่‘—็งฐ', u'้”่‘—่€…': u'่พพ่‘—่€…', u'้”่‘—่ฟฐ': u'่พพ่‘—่ฟฐ', u'่ฟ‘่ง’่ชไฟก': u'่ฟ‘่ง’่ชไฟก', u'่ฟ‘่ง’่ฐไฟก': u'่ฟ‘่ง’่ชไฟก', u'้ ่‘—': u'่ฟœ็€', u'้ ่‘—ๆ›ธ': u'่ฟœ่‘—ไนฆ', u'้ ่‘—ไฝœ': u'่ฟœ่‘—ไฝœ', u'้ ่‘—ๅ': u'่ฟœ่‘—ๅ', u'้ ่‘—้Œ„': u'่ฟœ่‘—ๅฝ•', u'้ ่‘—็จฑ': u'่ฟœ่‘—็งฐ', u'้ ่‘—่€…': u'่ฟœ่‘—่€…', u'้ ่‘—่ฟฐ': 
u'่ฟœ่‘—่ฟฐ', u'้€ฃ่‘—': u'่ฟž็€', u'้€ฃ่‘—ๆ›ธ': u'่ฟž่‘—ไนฆ', u'้€ฃ่‘—ไฝœ': u'่ฟž่‘—ไฝœ', u'้€ฃ่‘—ๅ': u'่ฟž่‘—ๅ', u'้€ฃ่‘—้Œ„': u'่ฟž่‘—ๅฝ•', u'้€ฃ่‘—็จฑ': u'่ฟž่‘—็งฐ', u'้€ฃ่‘—่€…': u'่ฟž่‘—่€…', u'้€ฃ่‘—่ฟฐ': u'่ฟž่‘—่ฟฐ', u'่ฟซ่‘—': u'่ฟซ็€', u'่ฟฝ่‘—': u'่ฟฝ็€', u'่ฟฝ่‘—ๆ›ธ': u'่ฟฝ่‘—ไนฆ', u'่ฟฝ่‘—ไฝœ': u'่ฟฝ่‘—ไฝœ', u'่ฟฝ่‘—ๅ': u'่ฟฝ่‘—ๅ', u'่ฟฝ่‘—้Œ„': u'่ฟฝ่‘—ๅฝ•', u'่ฟฝ่‘—็จฑ': u'่ฟฝ่‘—็งฐ', u'่ฟฝ่‘—่€…': u'่ฟฝ่‘—่€…', u'่ฟฝ่‘—่ฟฐ': u'่ฟฝ่‘—่ฟฐ', u'้€†่‘—': u'้€†็€', u'้€†่‘—ๆ›ธ': u'้€†่‘—ไนฆ', u'้€†่‘—ไฝœ': u'้€†่‘—ไฝœ', u'้€†่‘—ๅ': u'้€†่‘—ๅ', u'้€†่‘—้Œ„': u'้€†่‘—ๅฝ•', u'้€†่‘—็จฑ': u'้€†่‘—็งฐ', u'้€†่‘—่€…': u'้€†่‘—่€…', u'้€†่‘—่ฟฐ': u'้€†่‘—่ฟฐ', u'้€ผ่‘—': u'้€ผ็€', u'้€ผ่‘—ๆ›ธ': u'้€ผ่‘—ไนฆ', u'้€ผ่‘—ไฝœ': u'้€ผ่‘—ไฝœ', u'้€ผ่‘—ๅ': u'้€ผ่‘—ๅ', u'้€ผ่‘—้Œ„': u'้€ผ่‘—ๅฝ•', u'้€ผ่‘—็จฑ': u'้€ผ่‘—็งฐ', u'้€ผ่‘—่€…': u'้€ผ่‘—่€…', u'้€ผ่‘—่ฟฐ': u'้€ผ่‘—่ฟฐ', u'้‡่‘—': u'้‡็€', u'้‡่‘—ๆ›ธ': u'้‡่‘—ไนฆ', u'้‡่‘—ไฝœ': u'้‡่‘—ไฝœ', u'้‡่‘—ๅ': u'้‡่‘—ๅ', u'้‡่‘—้Œ„': u'้‡่‘—ๅฝ•', u'้‡่‘—็จฑ': u'้‡่‘—็งฐ', u'้‡่‘—่€…': u'้‡่‘—่€…', u'้‡่‘—่ฟฐ': u'้‡่‘—่ฟฐ', u'้บ่‘—': u'้—่‘—', u'้‚ฃ้บฝ': u'้‚ฃ้บฝ', u'้ƒญๅญไนพ': u'้ƒญๅญไนพ', u'้…่‘—': u'้…็€', u'้…่‘—ๆ›ธ': u'้…่‘—ไนฆ', u'้…่‘—ไฝœ': u'้…่‘—ไฝœ', u'้…่‘—ๅ': u'้…่‘—ๅ', u'้…่‘—้Œ„': u'้…่‘—ๅฝ•', u'้…่‘—็จฑ': u'้…่‘—็งฐ', u'้…่‘—่€…': u'้…่‘—่€…', u'้…่‘—่ฟฐ': u'้…่‘—่ฟฐ', u'้‡€่‘—': u'้…ฟ็€', u'้‡€่‘—ๆ›ธ': u'้…ฟ่‘—ไนฆ', u'้‡€่‘—ไฝœ': u'้…ฟ่‘—ไฝœ', u'้‡€่‘—ๅ': u'้…ฟ่‘—ๅ', u'้‡€่‘—้Œ„': u'้…ฟ่‘—ๅฝ•', u'้‡€่‘—็จฑ': u'้…ฟ่‘—็งฐ', u'้‡€่‘—่€…': u'้…ฟ่‘—่€…', u'้‡€่‘—่ฟฐ': u'้…ฟ่‘—่ฟฐ', u'้†ฏๅฃบ': u'้†ฏๅฃถ', u'้†ฏๅฃถ': u'้†ฏๅฃถ', u'้†ฏ้…ฑ': u'้†ฏ้…ฑ', u'้†ฏ้†ฌ': u'้†ฏ้…ฑ', u'้†ฏ้†‹': u'้†ฏ้†‹', u'้†ฏ้†ข': u'้†ฏ้†ข', u'้†ฏ้ธก': u'้†ฏ้ธก', u'้†ฏ้›ž': u'้†ฏ้ธก', u'้‡่ฆ†': u'้‡ๅค', u'้‡‘้Š': u'้‡‘้“พ', u'้ต้Š': u'้“้“พ', u'้‰ธ้Š': u'้“ฐ้“พ', u'้Š€้Š': u'้“ถ้“พ', u'้‹ช่‘—': u'้“บ็€', u'้‹ช่‘—ๆ›ธ': u'้“บ่‘—ไนฆ', u'้‹ช่‘—ไฝœ': u'้“บ่‘—ไฝœ', 
u'้‹ช่‘—ๅ': u'้“บ่‘—ๅ', u'้‹ช่‘—้Œ„': u'้“บ่‘—ๅฝ•', u'้‹ช่‘—็จฑ': u'้“บ่‘—็งฐ', u'้‹ช่‘—่€…': u'้“บ่‘—่€…', u'้‹ช่‘—่ฟฐ': u'้“บ่‘—่ฟฐ', u'้Šๅญ': u'้“พๅญ', u'้Šๆข': u'้“พๆก', u'้Š้Ž–': u'้“พ้”', u'้Š้Œ˜': u'้“พ้”ค', u'้Ž–้Š': u'้”้“พ', u'้พ้›': u'้”บ้”ป', u'้›้พ': u'้”ป้”บ', u'้–ปๆ‡ท็ฆฎ': u'้—ซๆ€€็คผ', u'้–‰่‘—': u'้—ญ็€', u'้–‰่‘—ๆ›ธ': u'้—ญ่‘—ไนฆ', u'้–‰่‘—ไฝœ': u'้—ญ่‘—ไฝœ', u'้–‰่‘—ๅ': u'้—ญ่‘—ๅ', u'้–‰่‘—้Œ„': u'้—ญ่‘—ๅฝ•', u'้–‰่‘—็จฑ': u'้—ญ่‘—็งฐ', u'้–‰่‘—่€…': u'้—ญ่‘—่€…', u'้–‰่‘—่ฟฐ': u'้—ญ่‘—่ฟฐ', u'้–‘่‘—': u'้—ฒ็€', u'้–‘่‘—ๆ›ธ': u'้—ฒ่‘—ไนฆ', u'้–‘่‘—ไฝœ': u'้—ฒ่‘—ไฝœ', u'้–‘่‘—ๅ': u'้—ฒ่‘—ๅ', u'้–‘่‘—้Œ„': u'้—ฒ่‘—ๅฝ•', u'้–‘่‘—็จฑ': u'้—ฒ่‘—็งฐ', u'้–‘่‘—่€…': u'้—ฒ่‘—่€…', u'้–‘่‘—่ฟฐ': u'้—ฒ่‘—่ฟฐ', u'่žไธ่‘—': u'้—ปไธ็€', u'่žๅพ—่‘—': u'้—ปๅพ—็€', u'่ž่‘—': u'้—ป็€', u'้˜ณไธบไนพ': u'้˜ณไธบไนพ', u'้™ฝ็ˆฒไนพ': u'้˜ณไธบไนพ', u'้™ฝ็‚บไนพ': u'้˜ณไธบไนพ', u'้˜ฟ้ƒจๆญฃ็žญ': u'้˜ฟ้ƒจๆญฃ็žญ', u'้™„่‘—': u'้™„็€', u'้™„็ช': u'้™„็พ', u'้™„่‘—ๆ›ธ': u'้™„่‘—ไนฆ', u'้™„่‘—ไฝœ': u'้™„่‘—ไฝœ', u'้™„่‘—ๅ': u'้™„่‘—ๅ', u'้™„่‘—้Œ„': u'้™„่‘—ๅฝ•', u'้™„่‘—็จฑ': u'้™„่‘—็งฐ', u'้™„่‘—่€…': u'้™„่‘—่€…', u'้™„่‘—่ฟฐ': u'้™„่‘—่ฟฐ', u'้™‹่‘—': u'้™‹็€', u'้™‹่‘—ๆ›ธ': u'้™‹่‘—ไนฆ', u'้™‹่‘—ไฝœ': u'้™‹่‘—ไฝœ', u'้™‹่‘—ๅ': u'้™‹่‘—ๅ', u'้™‹่‘—้Œ„': u'้™‹่‘—ๅฝ•', u'้™‹่‘—็จฑ': u'้™‹่‘—็งฐ', u'้™‹่‘—่€…': u'้™‹่‘—่€…', u'้™‹่‘—่ฟฐ': u'้™‹่‘—่ฟฐ', u'้™ช่‘—': u'้™ช็€', u'้™ช่‘—ๆ›ธ': u'้™ช่‘—ไนฆ', u'้™ช่‘—ไฝœ': u'้™ช่‘—ไฝœ', u'้™ช่‘—ๅ': u'้™ช่‘—ๅ', u'้™ช่‘—้Œ„': u'้™ช่‘—ๅฝ•', u'้™ช่‘—็จฑ': u'้™ช่‘—็งฐ', u'้™ช่‘—่€…': u'้™ช่‘—่€…', u'้™ช่‘—่ฟฐ': u'้™ช่‘—่ฟฐ', u'้™ณๅ ต': u'้™ณๅ ต', u'้™ณ็ฆ•': u'้™ณ็ฆ•', u'้šจ่‘—': u'้š็€', u'้šจ่‘—ๆ›ธ': u'้š่‘—ไนฆ', u'้šจ่‘—ไฝœ': u'้š่‘—ไฝœ', u'้šจ่‘—ๅ': u'้š่‘—ๅ', u'้šจ่‘—้Œ„': u'้š่‘—ๅฝ•', u'้šจ่‘—็จฑ': u'้š่‘—็งฐ', u'้šจ่‘—่€…': u'้š่‘—่€…', u'้šจ่‘—่ฟฐ': u'้š่‘—่ฟฐ', u'้š”่‘—': u'้š”็€', u'้š”่‘—ๆ›ธ': u'้š”่‘—ไนฆ', u'้š”่‘—ไฝœ': u'้š”่‘—ไฝœ', u'้š”่‘—ๅ': u'้š”่‘—ๅ', 
u'้š”่‘—้Œ„': u'้š”่‘—ๅฝ•', u'้š”่‘—็จฑ': u'้š”่‘—็งฐ', u'้š”่‘—่€…': u'้š”่‘—่€…', u'้š”่‘—่ฟฐ': u'้š”่‘—่ฟฐ', u'้šฑ็ช': u'้šฑ็พ', u'้›…่‘—': u'้›…็€', u'้›…่‘—ๆ›ธ': u'้›…่‘—ไนฆ', u'้›…่‘—ไฝœ': u'้›…่‘—ไฝœ', u'้›…่‘—ๅ': u'้›…่‘—ๅ', u'้›…่‘—้Œ„': u'้›…่‘—ๅฝ•', u'้›…่‘—็จฑ': u'้›…่‘—็งฐ', u'้›…่‘—่€…': u'้›…่‘—่€…', u'้›…่‘—่ฟฐ': u'้›…่‘—่ฟฐ', u'้›ไนพ': u'้›ไนพ', u'้ ่‘—': u'้ ็€', u'้ ่‘—ไฝœ': u'้ ่‘—ไฝœ', u'้ ่‘—ๅ': u'้ ่‘—ๅ', u'้ ่‘—้Œ„': u'้ ่‘—ๅฝ•', u'้ ่‘—็จฑ': u'้ ่‘—็งฐ', u'้ ่‘—่€…': u'้ ่‘—่€…', u'้ ่‘—่ฟฐ': u'้ ่‘—่ฟฐ', u'้ ‚่‘—': u'้กถ็€', u'้ ‚่‘—ๆ›ธ': u'้กถ่‘—ไนฆ', u'้ ‚่‘—ไฝœ': u'้กถ่‘—ไฝœ', u'้ ‚่‘—ๅ': u'้กถ่‘—ๅ', u'้ ‚่‘—้Œ„': u'้กถ่‘—ๅฝ•', u'้ ‚่‘—็จฑ': u'้กถ่‘—็งฐ', u'้ ‚่‘—่€…': u'้กถ่‘—่€…', u'้ ‚่‘—่ฟฐ': u'้กถ่‘—่ฟฐ', u'้ …้Š': u'้กน้“พ', u'้ †่‘—': u'้กบ็€', u'้ †่‘—ๆ›ธ': u'้กบ่‘—ไนฆ', u'้ †่‘—ไฝœ': u'้กบ่‘—ไฝœ', u'้ †่‘—ๅ': u'้กบ่‘—ๅ', u'้ †่‘—้Œ„': u'้กบ่‘—ๅฝ•', u'้ †่‘—็จฑ': u'้กบ่‘—็งฐ', u'้ †่‘—่€…': u'้กบ่‘—่€…', u'้ †่‘—่ฟฐ': u'้กบ่‘—่ฟฐ', u'้ ˜่‘—': u'้ข†็€', u'้ ˜่‘—ๆ›ธ': u'้ข†่‘—ไนฆ', u'้ ˜่‘—ไฝœ': u'้ข†่‘—ไฝœ', u'้ ˜่‘—ๅ': u'้ข†่‘—ๅ', u'้ ˜่‘—้Œ„': u'้ข†่‘—ๅฝ•', u'้ ˜่‘—็จฑ': u'้ข†่‘—็งฐ', u'้ ˜่‘—่€…': u'้ข†่‘—่€…', u'้ ˜่‘—่ฟฐ': u'้ข†่‘—่ฟฐ', u'้ฃ„่‘—': u'้ฃ˜็€', u'้ฃ„่‘—ๆ›ธ': u'้ฃ˜่‘—ไนฆ', u'้ฃ„่‘—ไฝœ': u'้ฃ˜่‘—ไฝœ', u'้ฃ„่‘—ๅ': u'้ฃ˜่‘—ๅ', u'้ฃ„่‘—้Œ„': u'้ฃ˜่‘—ๅฝ•', u'้ฃ„่‘—็จฑ': u'้ฃ˜่‘—็งฐ', u'้ฃ„่‘—่€…': u'้ฃ˜่‘—่€…', u'้ฃ„่‘—่ฟฐ': u'้ฃ˜่‘—่ฟฐ', u'้ฃญไปค': u'้ฃญไปค', u'้ง•่‘—': u'้ฉพ็€', u'้ง•่‘—ๆ›ธ': u'้ฉพ่‘—ไนฆ', u'้ง•่‘—ไฝœ': u'้ฉพ่‘—ไฝœ', u'้ง•่‘—ๅ': u'้ฉพ่‘—ๅ', u'้ง•่‘—้Œ„': u'้ฉพ่‘—ๅฝ•', u'้ง•่‘—็จฑ': u'้ฉพ่‘—็งฐ', u'้ง•่‘—่€…': u'้ฉพ่‘—่€…', u'้ง•่‘—่ฟฐ': u'้ฉพ่‘—่ฟฐ', u'็ฝต่‘—': u'้ช‚็€', u'็ฝต่‘—ๆ›ธ': u'้ช‚่‘—ไนฆ', u'็ฝต่‘—ไฝœ': u'้ช‚่‘—ไฝœ', u'็ฝต่‘—ๅ': u'้ช‚่‘—ๅ', u'็ฝต่‘—้Œ„': u'้ช‚่‘—ๅฝ•', u'็ฝต่‘—็จฑ': u'้ช‚่‘—็งฐ', u'็ฝต่‘—่€…': u'้ช‚่‘—่€…', u'็ฝต่‘—่ฟฐ': u'้ช‚่‘—่ฟฐ', u'้จŽ่‘—': u'้ช‘็€', u'้จŽ่‘—ๆ›ธ': u'้ช‘่‘—ไนฆ', u'้จŽ่‘—ไฝœ': u'้ช‘่‘—ไฝœ', 
u'้จŽ่‘—ๅ': u'้ช‘่‘—ๅ', u'้จŽ่‘—้Œ„': u'้ช‘่‘—ๅฝ•', u'้จŽ่‘—็จฑ': u'้ช‘่‘—็งฐ', u'้จŽ่‘—่€…': u'้ช‘่‘—่€…', u'้จŽ่‘—่ฟฐ': u'้ช‘่‘—่ฟฐ', u'้จ™่‘—': u'้ช—็€', u'้จ™่‘—ๆ›ธ': u'้ช—่‘—ไนฆ', u'้จ™่‘—ไฝœ': u'้ช—่‘—ไฝœ', u'้จ™่‘—ๅ': u'้ช—่‘—ๅ', u'้จ™่‘—้Œ„': u'้ช—่‘—ๅฝ•', u'้จ™่‘—็จฑ': u'้ช—่‘—็งฐ', u'้จ™่‘—่€…': u'้ช—่‘—่€…', u'้จ™่‘—่ฟฐ': u'้ช—่‘—่ฟฐ', u'้ซ˜่‘—': u'้ซ˜็€', u'้ซ˜่‘—ๆ›ธ': u'้ซ˜่‘—ไนฆ', u'้ซ˜่‘—ไฝœ': u'้ซ˜่‘—ไฝœ', u'้ซ˜่‘—ๅ': u'้ซ˜่‘—ๅ', u'้ซ˜่‘—้Œ„': u'้ซ˜่‘—ๅฝ•', u'้ซ˜่‘—็จฑ': u'้ซ˜่‘—็งฐ', u'้ซ˜่‘—่€…': u'้ซ˜่‘—่€…', u'้ซ˜่‘—่ฟฐ': u'้ซ˜่‘—่ฟฐ', u'้ซญ่‘—': u'้ซญ็€', u'้ซญ่‘—ๆ›ธ': u'้ซญ่‘—ไนฆ', u'้ซญ่‘—ไฝœ': u'้ซญ่‘—ไฝœ', u'้ซญ่‘—ๅ': u'้ซญ่‘—ๅ', u'้ซญ่‘—้Œ„': u'้ซญ่‘—ๅฝ•', u'้ซญ่‘—็จฑ': u'้ซญ่‘—็งฐ', u'้ซญ่‘—่€…': u'้ซญ่‘—่€…', u'้ซญ่‘—่ฟฐ': u'้ซญ่‘—่ฟฐ', u'้ฌฑๅง“': u'้ฌฑๅง“', u'้ฌฑๆฐ': u'้ฌฑๆฐ', u'้ญๅพต': u'้ญๅพต', u'้ญšไนพไนพ': u'้ฑผๅนฒๅนฒ', u'้ฏฐ้ญš': u'้ฒถ้ฑผ', u'้บฏๅด‡่ฃ•': u'้บฏๅด‡่ฃ•', u'้บด็พฉ': u'้บดไน‰', u'้บดไน‰': u'้บดไน‰', u'้บด่‹ฑ': u'้บด่‹ฑ', u'้บฝๆฐ': u'้บฝๆฐ', u'้บฝ้บฝ': u'้บฝ้บฝ', u'้บผ้บผ': u'้บฝ้บฝ', u'้ป„ๆถฆไนพ': u'้ป„ๆถฆไนพ', u'้ปƒๆฝคไนพ': u'้ป„ๆถฆไนพ', u'้ป่‘—': u'้ป็€', u'้ป่‘—ๆ›ธ': u'้ป่‘—ไนฆ', u'้ป่‘—ไฝœ': u'้ป่‘—ไฝœ', u'้ป่‘—ๅ': u'้ป่‘—ๅ', u'้ป่‘—้Œ„': u'้ป่‘—ๅฝ•', u'้ป่‘—็จฑ': u'้ป่‘—็งฐ', u'้ป่‘—่€…': u'้ป่‘—่€…', u'้ป่‘—่ฟฐ': u'้ป่‘—่ฟฐ', } zh2tw = { u'โ€œ': u'ใ€Œ', u'โ€': u'ใ€', u'โ€˜': u'ใ€Ž', u'โ€™': u'ใ€', u'ไธ‰ๆฅต็ฎก': u'ไธ‰ๆฅต้ซ”', u'ไธ‰ๆž็ฎก': u'ไธ‰ๆฅต้ซ”', u'ไธ–็•Œ่ฃ': u'ไธ–็•Œ่ฃก', u'ไธญๆ–‡่ฃ': u'ไธญๆ–‡่ฃก', u'ไธฒ่กŒ': u'ไธฒๅˆ—', u'ไธฒๅˆ—ๅŠ ้€Ÿๅ™จ': u'ไธฒๅˆ—ๅŠ ้€Ÿๅ™จ', u'ไปฅๅคช็ฝ‘': u'ไน™ๅคช็ถฒ', u'ๅฅถ้…ช': u'ไนณ้…ช', u'ไบŒๆฅต็ฎก': u'ไบŒๆฅต้ซ”', u'ไบŒๆž็ฎก': u'ไบŒๆฅต้ซ”', u'ไบคไบ’ๅผ': u'ไบ’ๅ‹•ๅผ', u'้˜ฟๅกžๆ‹œ็–†': u'ไบžๅกžๆ‹œ็„ถ', u'ไบบๅทฅๆ™บ่ƒฝ': u'ไบบๅทฅๆ™บๆ…ง', u'ๆŽฅๅฃ': u'ไป‹้ข', u'ไปปๆ„็ƒๅ“ก': u'ไปปๆ„็ƒๅ“ก', u'ไปปๆ„็ƒๅ‘˜': u'ไปปๆ„็ƒๅ“ก', u'ๆœๅŠกๅ™จ': u'ไผบๆœๅ™จ', u'ๅญ—็ฏ€': u'ไฝๅ…ƒ็ต„', u'ๅญ—่Š‚': u'ไฝๅ…ƒ็ต„', u'ไฝœๅ“่ฃ': u'ไฝœๅ“่ฃก', 
u'ไผ˜ๅ…ˆ็บง': u'ๅ„ชๅ…ˆ้ †ๅบ', u'ๅ…ƒๅ…‡': u'ๅ…ƒๅ‡ถ', u'ๅ…ƒๅ‡ถ': u'ๅ…ƒๅ‡ถ', u'ๅ…‰็›˜': u'ๅ…‰็ขŸ', u'ๅ…‰้ฉฑ': u'ๅ…‰็ขŸๆฉŸ', u'ๅ…‹็พ…ๅœฐไบž': u'ๅ…‹็พ…ๅŸƒ่ฅฟไบž', u'ๅ…‹็ฝ—ๅœฐไบš': u'ๅ…‹็พ…ๅŸƒ่ฅฟไบž', u'ๅ…จ่ง’': u'ๅ…จๅฝข', u'ๅ†ฌๅคฉ่ฃ': u'ๅ†ฌๅคฉ่ฃก', u'ๅ†ฌๆ—ฅ่ฃ': u'ๅ†ฌๆ—ฅ่ฃก', u'ๅ‡‰่œ': u'ๅ†ท็›ค', u'ๅ†ท่œ': u'ๅ†ท็›ค', u'ๅ‡ถๅ™จ': u'ๅ‡ถๅ™จ', u'ๅ…‡ๅ™จ': u'ๅ‡ถๅ™จ', u'ๅ‡ถๅพ’': u'ๅ‡ถๅพ’', u'ๅ…‡ๅพ’': u'ๅ‡ถๅพ’', u'ๅ…‡ๆ‰‹': u'ๅ‡ถๆ‰‹', u'ๅ‡ถๆ‰‹': u'ๅ‡ถๆ‰‹', u'ๅ…‡ๆกˆ': u'ๅ‡ถๆกˆ', u'ๅ‡ถๆกˆ': u'ๅ‡ถๆกˆ', u'ๅ‡ถๆฎ˜': u'ๅ‡ถๆฎ˜', u'ๅ…‡ๆฎ˜': u'ๅ‡ถๆฎ˜', u'ๅ‡ถๆฎ‹': u'ๅ‡ถๆฎ˜', u'ๅ…‡ๆฎบ': u'ๅ‡ถๆฎบ', u'ๅ‡ถๆ€': u'ๅ‡ถๆฎบ', u'ๅ‡ถๆฎบ': u'ๅ‡ถๆฎบ', u'ๅˆ†ๅธƒๅผ': u'ๅˆ†ๆ•ฃๅผ', u'ๆ‰“ๅฐ': u'ๅˆ—ๅฐ', u'ๅˆ—ๆ”ฏๆ•ฆๅฃซ็™ป': u'ๅˆ—ๆ”ฏๆ•ฆๆ–ฏ็™ป', u'ๅ‰ชๅฝฉ': u'ๅ‰ช็ถต', u'ๅŠ ่“ฌ': u'ๅŠ ๅฝญ', u'ๆ€ป็บฟ': u'ๅŒฏๆตๆŽ’', u'ๅฑ€ๅŸŸ็ฝ‘': u'ๅ€ๅŸŸ็ถฒ', u'็‰น็ซ‹ๅฐผ้”ๅ’Œๅคšๅทดๅ“ฅ': u'ๅƒ้‡Œ้”ๆ‰˜่ฒๅ“ฅ', u'็‰น็ซ‹ๅฐผ่พพๅ’Œๆ‰˜ๅทดๅ“ฅ': u'ๅƒ้‡Œ้”ๆ‰˜่ฒๅ“ฅ', u'ๅŠ่ง’': u'ๅŠๅฝข', u'ๅกๅก”็ˆพ': u'ๅก้”', u'ๅกๅก”ๅฐ”': u'ๅก้”', u'ๆ‰“ๅฐๆฉŸ': u'ๅฐ่กจๆฉŸ', u'ๆ‰“ๅฐๆœบ': u'ๅฐ่กจๆฉŸ', u'ๅŽ„็ซ‹็‰น้‡Œไบž': u'ๅŽ„ๅˆฉๅž‚ไบž', u'ๅŽ„็ซ‹็‰น้‡Œไบš': u'ๅŽ„ๅˆฉๅž‚ไบž', u'ๅŽ„็“œๅคšๅฐ”': u'ๅŽ„็“œๅคš', u'ๅŽ„็“œๅคš็ˆพ': u'ๅŽ„็“œๅคš', u'ๆ–ฏๅจๅฃซๅ…ฐ': u'ๅฒ็“ฆๆฟŸ่˜ญ', u'ๆ–ฏๅจๅฃซ่˜ญ': u'ๅฒ็“ฆๆฟŸ่˜ญ', u'ๅ‰ๅธƒๆ': u'ๅ‰ๅธƒๅœฐ', u'ๅ‰ๅธƒๅ ค': u'ๅ‰ๅธƒๅœฐ', u'ๅŸบ้‡Œๅทดๆ–ฏ': u'ๅ‰้‡Œๅทดๆ–ฏ', u'ๅœ–็“ฆ็›ง': u'ๅ็“ฆ้ญฏ', u'ๅ›พ็“ฆๅข': u'ๅ็“ฆ้ญฏ', u'ๅ“ˆ่จๅ…‹ๆ–ฏๅฆ': u'ๅ“ˆ่–ฉๅ…‹', u'ๅ“ฅๆ–ฏ้”้ปŽๅŠ ': u'ๅ“ฅๆ–ฏๅคง้ปŽๅŠ ', u'ๅ“ฅๆ–ฏ่พพ้ปŽๅŠ ': u'ๅ“ฅๆ–ฏๅคง้ปŽๅŠ ', u'ๆ ผ้ญฏๅ‰ไบž': u'ๅ–ฌๆฒปไบž', u'ๆ ผ้ฒๅ‰ไบš': u'ๅ–ฌๆฒปไบž', u'ไฝๆฒปไบš': u'ๅ–ฌๆฒปไบž', u'ไฝๆฒปไบž': u'ๅ–ฌๆฒปไบž', u'ๅ˜ด่ฃ': u'ๅ˜ด่ฃก', u'ๅœŸๅบ“ๆ›ผๆ–ฏๅฆ': u'ๅœŸๅบซๆ›ผ', u'่–ฏไป”': u'ๅœŸ่ฑ†', u'ๅœŸ่ฑ†็ถฒ': u'ๅœŸ่ฑ†็ถฒ', u'ๅœŸ่ฑ†็ฝ‘': u'ๅœŸ่ฑ†็ถฒ', u'ๅฆๆก‘ๅฐผไบš': u'ๅฆๅฐšๅฐผไบž', u'ๅฆๆก‘ๅฐผไบž': u'ๅฆๅฐšๅฐผไบž', u'็ซฏๅฃ': u'ๅŸ ', u'ๅก”ๅ‰ๅ…‹ๆ–ฏๅฆ': u'ๅก”ๅ‰ๅ…‹', u'ๅกž่ˆŒๅฐ”': u'ๅกžๅธญ็ˆพ', u'ๅกž่ˆŒ็ˆพ': u'ๅกžๅธญ็ˆพ', u'ๅกžๆตฆ่ทฏๆ–ฏ': u'ๅกžๆ™ฎๅ‹’ๆ–ฏ', u'ๅคๅคฉ่ฃ': u'ๅคๅคฉ่ฃก', 
u'ๅคๆ—ฅ่ฃ': u'ๅคๆ—ฅ่ฃก', u'ๅคšๆ˜ŽๅฐผๅŠ ๅ…ฑๅ’Œๅœ‹': u'ๅคšๆ˜ŽๅฐผๅŠ ', u'ๅคš็ฑณๅฐผๅŠ ๅ…ฑๅ’Œๅ›ฝ': u'ๅคšๆ˜ŽๅฐผๅŠ ', u'ๅคš็ฑณๅฐผๅŠ ๅ…ฑๅ’Œๅœ‹': u'ๅคšๆ˜ŽๅฐผๅŠ ', u'ๅคš็ฑณๅฐผๅŠ ๅ›ฝ': u'ๅคš็ฑณๅฐผๅ…‹', u'ๅคšๆ˜ŽๅฐผๅŠ ๅœ‹': u'ๅคš็ฑณๅฐผๅ…‹', u'็ฉฟๆขญๆฉŸ': u'ๅคช็ฉบๆขญ', u'่ˆชๅคฉ้ฃžๆœบ': u'ๅคช็ฉบๆขญ', u'ๅฐผๆ—ฅๅˆฉไบš': u'ๅฅˆๅŠๅˆฉไบž', u'ๅฐผๆ—ฅๅˆฉไบž': u'ๅฅˆๅŠๅˆฉไบž', u'ๅญ—็ฌฆ': u'ๅญ—ๅ…ƒ', u'ๅญ—ๅท': u'ๅญ—ๅž‹ๅคงๅฐ', u'ๅญ—ๅบ“': u'ๅญ—ๅž‹ๆช”', u'ๅญ—็ฌฆ้›†': u'ๅญ—็ฌฆ้›†', u'ๅญ˜็›˜': u'ๅญ˜ๆช”', u'ๅญธ่ฃ': u'ๅญธ่ฃก', u'ๅฎ‰ๆ็“œๅ’Œๅทดๅธƒ้”': u'ๅฎ‰ๅœฐๅกๅŠๅทดๅธƒ้”', u'ๅฎ‰ๆ็“œๅ’Œๅทดๅธƒ่พพ': u'ๅฎ‰ๅœฐๅกๅŠๅทดๅธƒ้”', u'ๅฎ‹ๅ…ƒ': u'ๅฎ‹ๅ…ƒ', u'ๆดช้ƒฝๆ‹‰ๆ–ฏ': u'ๅฎ้ƒฝๆ‹‰ๆ–ฏ', u'ๅฏปๅ€': u'ๅฎšๅ€', u'ๅฏ’ๅ‡่ฃ': u'ๅฏ’ๅ‡่ฃก', u'ๅฎฝๅธฆ': u'ๅฏฌ้ ป', u'่€ๆ’พ': u'ๅฏฎๅœ‹', u'่€ๆŒ': u'ๅฏฎๅœ‹', u'ๆ‰“้—จ': u'ๅฐ„้–€', u'ๅฐˆ่ผฏ่ฃ': u'ๅฐˆ่ผฏ่ฃก', u'่ดŠๆฏ”ไบž': u'ๅฐšๆฏ”ไบž', u'่ตžๆฏ”ไบš': u'ๅฐšๆฏ”ไบž', u'ๅฐผๆ—ฅ็ˆพ': u'ๅฐผๆ—ฅ', u'ๅฐผๆ—ฅๅฐ”': u'ๅฐผๆ—ฅ', u'ๅฑฑๆดž่ฃ': u'ๅฑฑๆดž่ฃก', u'ๅทดๅธƒไบžๆ–ฐ็•ฟๅ…งไบž': u'ๅทดๅธƒไบž็ดๅนพๅ…งไบž', u'ๅทดๅธƒไบšๆ–ฐๅ‡ ๅ†…ไบš': u'ๅทดๅธƒไบž็ดๅนพๅ…งไบž', u'ๅทดๅทดๅคšๆ–ฏ': u'ๅทด่ฒๅคš', u'ๅธƒๅŸบ็บณๆณ•็ดข': u'ๅธƒๅ‰็ดๆณ•็ดข', u'ๅธƒๅŸบ็ดๆณ•็ดข': u'ๅธƒๅ‰็ดๆณ•็ดข', u'ๅธƒไป€': u'ๅธƒๅธŒ', u'ๅธƒๆฎŠ': u'ๅธƒๅธŒ', u'ๅธ•ๅŠณ': u'ๅธ›็‰', u'ไพ‹็จ‹': u'ๅธธๅผ', u'ๅนณๆฒปไน‹ไนฑ': u'ๅนณๆฒปไน‹ไบ‚', u'ๅนณๆฒปไน‹ไบ‚': u'ๅนณๆฒปไน‹ไบ‚', u'ๅนดไปฃ่ฃ': u'ๅนดไปฃ่ฃก', u'ๅ‡ ๅ†…ไบšๆฏ”็ป': u'ๅนพๅ…งไบžๆฏ”็ดข', u'ๅนพๅ…งไบžๆฏ”็ดน': u'ๅนพๅ…งไบžๆฏ”็ดข', u'ๅฝฉๅธฆ': u'ๅฝฉๅธถ', u'ๅฝฉๆŽ’': u'ๅฝฉๆŽ’', u'ๅฝฉๆฅผ': u'ๅฝฉๆจ“', u'ๅฝฉ็‰Œๆฅผ': u'ๅฝฉ็‰Œๆจ“', u'ๅพฉ่˜‡': u'ๅพฉ็”ฆ', u'ๅค่‹': u'ๅพฉ็”ฆ', u'ๅฟƒ่ฃ': u'ๅฟƒ่ฃก', u'ๅฟซ้—ชๅญ˜ๅ‚จๅ™จ': u'ๅฟซ้–ƒ่จ˜ๆ†ถ้ซ”', u'้—ชๅญ˜': u'ๅฟซ้–ƒ่จ˜ๆ†ถ้ซ”', u'ๆƒณ่ฑก': u'ๆƒณๅƒ', u'ไผ ๆ„Ÿ': u'ๆ„Ÿๆธฌ', u'ไน ็”จ': u'ๆ…ฃ็”จ', u'ๆˆๅฝฉๅจฑไบฒ': u'ๆˆฒ็ถตๅจ›่ฆช', u'ๆˆฒ่ฃ': u'ๆˆฒ่ฃก', u'ๆ‰‹็”ต็ญ’': u'ๆ‰‹้›ป็ญ’', u'ๆ‰‹็”ต': u'ๆ‰‹้›ป็ญ’', u'ๆ‹ฌๅท': u'ๆ‹ฌๅผง', u'ๆ‹ฟ็ ดไพ–': u'ๆ‹ฟ็ ดๅด™', u'ๆ‹ฟ็ ดไป‘': u'ๆ‹ฟ็ ดๅด™', u'็ฉๆžถ': u'ๆท่ฑน', u'ๆ‰ซ็ž„ไปช': u'ๆŽƒ็ž„ๅ™จ', u'ๆŒ‚้’ฉ': u'ๆŽ›้‰ค', 
u'ๆŽ›้ˆŽ': u'ๆŽ›้‰ค', u'ๆŽงไปถ': u'ๆŽงๅˆถ้ …', u'ๅฐ็ƒ': u'ๆ’ž็ƒ', u'ๆกŒ็ƒ': u'ๆ’ž็ƒ', u'ไพฟๆบๅผ': u'ๆ”œๅธถๅž‹', u'ๆ•…ไบ‹่ฃ': u'ๆ•…ไบ‹่ฃก', u'่ฐƒๅˆถ่งฃ่ฐƒๅ™จ': u'ๆ•ธๆ“šๆฉŸ', u'่ชฟๅˆถ่งฃ่ชฟๅ™จ': u'ๆ•ธๆ“šๆฉŸ', u'ๆ–ฏๆด›ๆ–‡ๅฐผไบž': u'ๆ–ฏๆด›็ถญๅฐผไบž', u'ๆ–ฏๆด›ๆ–‡ๅฐผไบš': u'ๆ–ฏๆด›็ถญๅฐผไบž', u'ๆ–ฐ็บชๅ…ƒ': u'ๆ–ฐ็ด€ๅ…ƒ', u'ๆ–ฐ็ด€ๅ…ƒ': u'ๆ–ฐ็ด€ๅ…ƒ', u'ๆ—ฅๅญ่ฃ': u'ๆ—ฅๅญ่ฃก', u'ๆ˜ฅๅ‡่ฃ': u'ๆ˜ฅๅ‡่ฃก', u'ๆ˜ฅๅคฉ่ฃ': u'ๆ˜ฅๅคฉ่ฃก', u'ๆ˜ฅๆ—ฅ่ฃ': u'ๆ˜ฅๆ—ฅ่ฃก', u'ๆ™‚้–“่ฃ': u'ๆ™‚้–“่ฃก', u'่Šฏ็‰‡': u'ๆ™ถๅ…ƒ', u'ๆš‘ๅ‡่ฃ': u'ๆš‘ๅ‡่ฃก', u'ๆ‘ๅญ่ฃ': u'ๆ‘ๅญ่ฃก', u'ไนๅพ—': u'ๆŸฅๅพท', u'ๅ…‹ๆž—้ “': u'ๆŸฏๆž—้ “', u'ๅ…‹ๆž—้กฟ': u'ๆŸฏๆž—้ “', u'ๆ ผๆž—็ด้”': u'ๆ ผ็‘ž้‚ฃ้”', u'ๆ ผๆž—็บณ่พพ': u'ๆ ผ็‘ž้‚ฃ้”', u'ๅ‡ก้ซ˜': u'ๆขต่ฐท', u'ๆฃฎๆž—่ฃ': u'ๆฃฎๆž—่ฃก', u'ๆฃบๆ่ฃ': u'ๆฃบๆ่ฃก', u'ๆฆด่“ฎ': u'ๆฆดๆงค', u'ๆฆด่Žฒ': u'ๆฆดๆงค', u'ไปฟ็œŸ': u'ๆจกๆ“ฌ', u'ๆฏ›้‡Œ่ฃ˜ๆ–ฏ': u'ๆจก้‡Œ่ฅฟๆ–ฏ', u'ๆฏ›้‡Œๆฑ‚ๆ–ฏ': u'ๆจก้‡Œ่ฅฟๆ–ฏ', u'ๆฉŸๆขฐไบบ': u'ๆฉŸๅ™จไบบ', u'ๆœบๅ™จไบบ': u'ๆฉŸๅ™จไบบ', u'ๅญ—ๆฎต': u'ๆฌ„ไฝ', u'ๆญทๅฒ่ฃ': u'ๆญทๅฒ่ฃก', u'ๅ…ƒ้Ÿณ': u'ๆฏ้Ÿณ', u'ๆฐธๅކ': u'ๆฐธๆ›†', u'ๆ–‡่Žฑ': u'ๆฑถ่Š', u'ๆฒ™็‰น้˜ฟๆ‹‰ไผฏ': u'ๆฒ™็ƒๅœฐ้˜ฟๆ‹‰ไผฏ', u'ๆฒ™ๅœฐ้˜ฟๆ‹‰ไผฏ': u'ๆฒ™็ƒๅœฐ้˜ฟๆ‹‰ไผฏ', u'ๆณขๆ–ฏๅฐผไบž้ป‘ๅกžๅ“ฅ็ถญ้‚ฃ': u'ๆณขๅฃซๅฐผไบž่ตซๅกžๅ“ฅ็ถญ็ด', u'ๆณขๆ–ฏๅฐผไบšๅ’Œ้ป‘ๅกžๅ“ฅ็ปด้‚ฃ': u'ๆณขๅฃซๅฐผไบž่ตซๅกžๅ“ฅ็ถญ็ด', u'ๅš่Œจ็“ฆ็บณ': u'ๆณขๆœญ้‚ฃ', u'ๅš่Œจ็“ฆ็ด': u'ๆณขๆœญ้‚ฃ', u'ไพฏ่ต›ๅ› ': u'ๆตท็Š', u'ไพฏ่ณฝๅ› ': u'ๆตท็Š', u'ๆทฑๆทต่ฃ': u'ๆทฑๆทต่ฃก', u'ๅ…‰ๆ ‡': u'ๆธธๆจ™', u'้ผ ๆ ‡': u'ๆป‘้ผ ', u'็ฎ—ๆณ•': u'ๆผ”็ฎ—ๆณ•', u'ไนŒๅ…นๅˆซๅ…‹ๆ–ฏๅฆ': u'็ƒ่Œฒๅˆฅๅ…‹', u'่ฏ็ป„': u'็‰‡่ชž', u'็„่ฃ': u'็„่ฃก', u'ๅกžๆ‹‰ๅˆฉๆ˜‚': u'็…ๅญๅฑฑ', u'ๅฑๅœฐ้ฉฌๆ‹‰': u'็“œๅœฐ้ฆฌๆ‹‰', u'ๅฑๅœฐ้ฆฌๆ‹‰': u'็“œๅœฐ้ฆฌๆ‹‰', u'ๅ†ˆๆฏ”ไบš': u'็”˜ๆฏ”ไบž', u'ๅฒกๆฏ”ไบž': u'็”˜ๆฏ”ไบž', u'็–‘ๅ…‡': u'็–‘ๅ‡ถ', u'็–‘ๅ‡ถ': u'็–‘ๅ‡ถ', u'็™พ็ง‘่ฃ': u'็™พ็ง‘่ฃก', u'็šฎ่ฃ้™ฝ็ง‹': u'็šฎ่ฃก้™ฝ็ง‹', u'็›งๆ—บ้”': u'็›งๅฎ‰้”', u'ๅขๆ—บ่พพ': u'็›งๅฎ‰้”', u'็œŸๅ‡ถ': u'็œŸๅ‡ถ', u'็œŸๅ…‡': u'็œŸๅ‡ถ', u'็œผ็›่ฃ': u'็œผ็›่ฃก', 
u'็ก…็‰‡': u'็Ÿฝ็‰‡', u'็ก…่ฐท': u'็Ÿฝ่ฐท', u'็กฌ็›˜': u'็กฌ็ขŸ', u'็กฌไปถ': u'็กฌ้ซ”', u'็›˜็‰‡': u'็ขŸ็‰‡', u'็ฃ็›˜': u'็ฃ็ขŸ', u'็ฃ้“': u'็ฃ่ปŒ', u'็ง‹ๅ‡่ฃ': u'็ง‹ๅ‡่ฃก', u'็ง‹ๅคฉ่ฃ': u'็ง‹ๅคฉ่ฃก', u'็ง‹ๆ—ฅ่ฃ': u'็ง‹ๆ—ฅ่ฃก', u'็จ‹ๆŽง': u'็จ‹ๅผๆŽงๅˆถ', u'็ชๅฐผๆ–ฏ': u'็ชๅฐผ่ฅฟไบž', u'ๅฐพๆณจ': u'็ซ ็ฏ€้™„่จป', u'่นฆๆž่ทณ': u'็ฌจ่ฑฌ่ทณ', u'็ป‘็ดง่ทณ': u'็ฌจ่ฑฌ่ทณ', u'็ญ‰ไบŽ': u'็ญ‰ๆ–ผ', u'็Ÿญ่จŠ': u'็ฐก่จŠ', u'็Ÿญไฟก': u'็ฐก่จŠ', u'็ณปๅˆ—่ฃ': u'็ณปๅˆ—่ฃก', u'ๆ–ฐ่ฅฟ่˜ญ': u'็ด่ฅฟ่˜ญ', u'ๆ–ฐ่ฅฟๅ…ฐ': u'็ด่ฅฟ่˜ญ', u'ๆ‰€็ฝ—้—จ็พคๅฒ›': u'็ดข็พ…้–€็พคๅณถ', u'ๆ‰€็พ…้–€็พคๅณถ': u'็ดข็พ…้–€็พคๅณถ', u'็ดข้ฆฌ้‡Œ': u'็ดข้ฆฌๅˆฉไบž', u'็ดข้ฉฌ้‡Œ': u'็ดข้ฆฌๅˆฉไบž', u'็ป“ๅฝฉ': u'็ต็ถต', u'ไฝ›ๅพ—่ง’': u'็ถญๅพท่ง’', u'็ถฒ็ตก': u'็ถฒ่ทฏ', u'็ฝ‘็ปœ': u'็ถฒ่ทฏ', u'ไบ’่ฏ็ถฒ': u'็ถฒ้š›็ถฒ่ทฏ', u'ๅ› ็‰น็ฝ‘': u'็ถฒ้š›็ถฒ่ทฏ', u'ๅฝฉ็ƒ': u'็ถต็ƒ', u'ๅฝฉ็ปธ': u'็ถต็ถข', u'ๅฝฉ็บฟ': u'็ถต็ทš', u'ๅฝฉ่ˆน': u'็ถต่ˆน', u'ๅฝฉ่กฃ': u'็ถต่กฃ', u'็ผ‰ๅ‡ถ': u'็ทๅ‡ถ', u'็ทๅ…‡': u'็ทๅ‡ถ', u'็ทๅ‡ถ': u'็ทๅ‡ถ', u'ๆ„ๅคงๅˆฉ': u'็พฉๅคงๅˆฉ', u'่€ๅญ—ๅท': u'่€ๅญ—่™Ÿ', u'ๅœฃๅŸบ่Œจๅ’Œๅฐผ็ปดๆ–ฏ': u'่–ๅ…‹้‡Œๆ–ฏๅคš็ฆๅŠๅฐผ็ถญๆ–ฏ', u'่–ๅ‰ๆ–ฏ็ดๅŸŸๆ–ฏ': u'่–ๅ…‹้‡Œๆ–ฏๅคš็ฆๅŠๅฐผ็ถญๆ–ฏ', u'่–ๆ–‡ๆฃฎ็‰นๅ’Œๆ ผๆž—็ดไธๆ–ฏ': u'่–ๆ–‡ๆฃฎๅŠๆ ผ็‘ž้‚ฃไธ', u'ๅœฃๆ–‡ๆฃฎ็‰นๅ’Œๆ ผๆž—็บณไธๆ–ฏ': u'่–ๆ–‡ๆฃฎๅŠๆ ผ็‘ž้‚ฃไธ', u'ๅœฃๅข่ฅฟไบš': u'่–้œฒ่ฅฟไบž', u'่–็›ง่ฅฟไบž': u'่–้œฒ่ฅฟไบž', u'ๅœฃ้ฉฌๅŠ›่ฏบ': u'่–้ฆฌๅˆฉ่ซพ', u'่–้ฆฌๅŠ›่ซพ': u'่–้ฆฌๅˆฉ่ซพ', u'่‚š่ฃ': u'่‚š่ฃก', u'่‚ฏๅฐผไบš': u'่‚ฏไบž', u'่‚ฏ้›…': u'่‚ฏไบž', u'ไปปๆ„็ƒ': u'่‡ช็”ฑ็ƒ', u'่ˆชๅคฉๅคงๅญฆ': u'่ˆชๅคฉๅคงๅญธ', u'่‹ฆ่ฃ': u'่‹ฆ่ฃก', u'ๆฏ›้‡Œๅก”ๅฐผไบš': u'่Œ…ๅˆฉๅก”ๅฐผไบž', u'ๆฏ›้‡Œๅก”ๅฐผไบž': u'่Œ…ๅˆฉๅก”ๅฐผไบž', u'่Žซๆก‘ๆฏ”ๅ…‹': u'่Žซไธ‰ๆฏ”ๅ…‹', u'ไธ‡ๅކ': u'่ฌๆ›†', u'็“ฆๅŠช้˜ฟๅ›พ': u'่ฌ้‚ฃๆœ', u'็“ฆๅŠช้˜ฟๅœ–': u'่ฌ้‚ฃๆœ', u'ไนŸ้–€': u'่‘‰้–€', u'ไนŸ้—จ': u'่‘‰้–€', u'็€': u'่‘—', u'็ง‘ๆ‘ฉ็พ…': u'่‘›ๆ‘ฉ', u'็ง‘ๆ‘ฉ็ฝ—': u'่‘›ๆ‘ฉ', u'ๅธƒ้š†่ฟช': u'่’ฒ้š†ๅœฐ', u'ๅœญไบž้‚ฃ': u'่“‹ไบž้‚ฃ', u'ๅœญไบš้‚ฃ': u'่“‹ไบž้‚ฃ', 
u'็ซ้”…็›–ๅธฝ': u'่“‹็ซ้‹', u'่‹้‡Œๅ—': u'่˜‡ๅˆฉๅ—', u'่กŒๅ‡ถ': u'่กŒๅ‡ถ', u'่กŒๅ…‡': u'่กŒๅ‡ถ', u'่กŒๅ‡ถๅŽ': u'่กŒๅ‡ถๅพŒ', u'่กŒๅ…‡ๅพŒ': u'่กŒๅ‡ถๅพŒ', u'่กŒๅ‡ถๅพŒ': u'่กŒๅ‡ถๅพŒ', u'ๆตๅ‹•้›ป่ฉฑ': u'่กŒๅ‹•้›ป่ฉฑ', u'็งปๅŠจ็”ต่ฏ': u'่กŒๅ‹•้›ป่ฉฑ', u'่กŒ็จ‹ๆŽงๅˆถ': u'่กŒ็จ‹ๆŽงๅˆถ', u'่กž': u'่ก›', u'ๅซ็”Ÿ': u'่ก›็”Ÿ', u'่กž็”Ÿ': u'่ก›็”Ÿ', u'ๅŸƒๅกžไฟ„ๆฏ”ไบš': u'่กฃ็ดขๆฏ”ไบž', u'ๅŸƒๅกžไฟ„ๆฏ”ไบž': u'่กฃ็ดขๆฏ”ไบž', u'่ฃๅ‹พๅค–้€ฃ': u'่ฃกๅ‹พๅค–้€ฃ', u'่ฃ้ข': u'่ฃก้ข', u'ๅˆ†่พจ็އ': u'่งฃๆžๅบฆ', u'่ฏ‘็ ': u'่งฃ็ขผ', u'ๅ‡บ็งŸ่ฝฆ': u'่จˆ็จ‹่ปŠ', u'ๆƒ้™': u'่จฑๅฏๆฌŠ', u'็‘™้ฒ': u'่ซพ้ญฏ', u'็‘™้ญฏ': u'่ซพ้ญฏ', u'ๅ˜้‡': u'่ฎŠๆ•ธ', u'็ง‘็‰น่ฟช็“ฆ': u'่ฑก็‰™ๆตทๅฒธ', u'่ฒๅฏง': u'่ฒๅ—', u'่ดๅฎ': u'่ฒๅ—', u'ไผฏๅˆฉ่Œฒ': u'่ฒ้‡Œๆ–ฏ', u'ไผฏๅˆฉๅ…น': u'่ฒ้‡Œๆ–ฏ', u'่ฒทๅ…‡': u'่ฒทๅ‡ถ', u'ไนฐๅ‡ถ': u'่ฒทๅ‡ถ', u'่ฒทๅ‡ถ': u'่ฒทๅ‡ถ', u'ๆ•ฐๆฎๅบ“': u'่ณ‡ๆ–™ๅบซ', u'ไฟกๆฏ่ฎบ': u'่ณ‡่จŠ็†่ซ–', u'ๅฅ”้ฉฐ': u'่ณ“ๅฃซ', u'ๅนณๆฒป': u'่ณ“ๅฃซ', u'ๅˆฉๆฏ”้‡Œไบš': u'่ณดๆฏ”็‘žไบž', u'ๅˆฉๆฏ”้‡Œไบž': u'่ณดๆฏ”็‘žไบž', u'่Š็ดขๆ‰˜': u'่ณด็ดขๆ‰˜', u'่Žฑ็ดขๆ‰˜': u'่ณด็ดขๆ‰˜', u'่ฝฏ้ฉฑ': u'่ปŸ็ขŸๆฉŸ', u'่ปŸไปถ': u'่ปŸ้ซ”', u'่ฝฏไปถ': u'่ปŸ้ซ”', u'ๅŠ ่ฝฝ': u'่ผ‰ๅ…ฅ', u'ๆดฅๅทดๅธƒ้Ÿฆ': u'่พ›ๅทดๅจ', u'ๆดฅๅทดๅธƒ้Ÿ‹': u'่พ›ๅทดๅจ', u'่ฏๆฑ‡': u'่พญๅฝ™', u'ๅŠ ็บณ': u'่ฟฆ็ด', u'ๅŠ ็ด': u'่ฟฆ็ด', u'่ฟฝๅ‡ถ': u'่ฟฝๅ‡ถ', u'่ฟฝๅ…‡': u'่ฟฝๅ‡ถ', u'้€™่ฃ': u'้€™่ฃก', u'ไฟก้“': u'้€š้“', u'้€žๅ‡ถ้ฌฅ็‹ ': u'้€žๅ‡ถ้ฌฅ็‹ ', u'้€žๅ…‡้ฌฅ็‹ ': u'้€žๅ‡ถ้ฌฅ็‹ ', u'้€žๅ‡ถๆ–—็‹ ': u'้€žๅ‡ถ้ฌฅ็‹ ', u'ๅณ้ฃŸ้บต': u'้€Ÿ้ฃŸ้บต', u'ๆ–นไพฟ้ข': u'้€Ÿ้ฃŸ้บต', u'ๅฟซ้€Ÿ้ข': u'้€Ÿ้ฃŸ้บต', u'่ฟžๅญ—ๅท': u'้€ฃๅญ—่™Ÿ', u'่ฟ›ๅˆถ': u'้€ฒไฝ', u'ๅ…ฅ็ƒ': u'้€ฒ็ƒ', u'็ฎ—ๅญ': u'้‹็ฎ—ๅ…ƒ', u'้ ็จ‹ๆŽงๅˆถ': u'้ ็จ‹ๆŽงๅˆถ', u'่ฟœ็จ‹ๆŽงๅˆถ': u'้ ็จ‹ๆŽงๅˆถ', u'ๆบซ็ดๅœ–่ฌ': u'้‚ฃๆœ', u'้†ซ้™ข่ฃ': u'้†ซ้™ข่ฃก', u'้…ฐ': u'้†ฏ', u'ๅทจๅ•†': u'้‰…่ณˆ', u'้’ฉ': u'้‰ค', u'้ˆŽ': u'้‰ค', u'้’ฉๅฟƒๆ–—่ง’': u'้‰คๅฟƒ้ฌฅ่ง’', u'้ˆŽๅฟƒ้ฌฅ่ง’': u'้‰คๅฟƒ้ฌฅ่ง’', u'ๅ†™ไฟๆŠค': u'้˜ฒๅฏซ', u'้˜ฟๆ‹‰ไผฏ่”ๅˆ้…‹้•ฟๅ›ฝ': 
u'้˜ฟๆ‹‰ไผฏ่ฏๅˆๅคงๅ…ฌๅœ‹', u'้˜ฟๆ‹‰ไผฏ่ฏๅˆ้…‹้•ทๅœ‹': u'้˜ฟๆ‹‰ไผฏ่ฏๅˆๅคงๅ…ฌๅœ‹', u'ๅ™ชๅฃฐ': u'้›œ่จŠ', u'่„ฑๆœบ': u'้›ข็ทš', u'้›ช่ฃ็ด…': u'้›ช่ฃก็ด…', u'้›ช่ฃ่•ป': u'้›ช่ฃก่•ป', u'้›ช้“้พ™': u'้›ช้ต้พ', u'้’้œ‰็ด ': u'้’้ปด็ด ', u'ๅผ‚ๆญฅ': u'้žๅŒๆญฅ', u'ๅฃฐๅก': u'้Ÿณๆ•ˆๅก', u'็ผบ็œ': u'้ ่จญ', u'้ขๅธƒ': u'้ ’ๅธƒ', u'้ ’ไฝˆ': u'้ ’ๅธƒ', u'้ ˜ๅŸŸ่ฃ': u'้ ˜ๅŸŸ่ฃก', u'ๅคด็ƒ': u'้ ญๆงŒ', u'็ฒ’ๅ…ฅ็ƒ': u'้ก†้€ฒ็ƒ', u'้คจ่ฃ': u'้คจ่ฃก', u'้ฉฌ้‡Œๅ…ฑๅ’Œๅ›ฝ': u'้ฆฌๅˆฉๅ…ฑๅ’Œๅœ‹', u'้ฆฌ้‡Œๅ…ฑๅ’Œๅœ‹': u'้ฆฌๅˆฉๅ…ฑๅ’Œๅœ‹', u'้ฉฌ่€ณไป–': u'้ฆฌ็ˆพไป–', u'้ฉฌๅฐ”ไปฃๅคซ': u'้ฆฌ็ˆพๅœฐๅคซ', u'้ฆฌ็ˆพไปฃๅคซ': u'้ฆฌ็ˆพๅœฐๅคซ', u'่ฌไบ‹ๅพ—': u'้ฆฌ่‡ช้”', u'็‹„ๅฎ‰ๅจœ': u'้ป›ๅฎ‰ๅจœ', u'ๆˆดๅฎ‰ๅจœ': u'้ป›ๅฎ‰ๅจœ', u'้ปž่ฃ': u'้ปž่ฃก', u'ไฝๅ›พ': u'้ปž้™ฃๅœ–', }
AdvancedLangConv
/AdvancedLangConv-0.01.tar.gz/AdvancedLangConv-0.01/langconv/defaulttables/zh_hans.py
zh_hans.py
import argparse
import base64
import httplib
import urllib

import asd


def arg_parser():
    """Setup argument Parsing."""
    parser = argparse.ArgumentParser(
        usage='%(prog)s',
        description='Gather information quickly and efficiently',
        epilog='Licensed... Go read...'
    )
    query_search = argparse.ArgumentParser(add_help=False)
    services = ['nova', 'swift', 'glance', 'keystone', 'heat', 'cinder',
                'ceilometer', 'trove', 'python', 'openstack', 'linux',
                'ubuntu', 'centos', 'mysql', 'rabbitmq', 'lvm', 'kernel',
                'networking', 'ipv4', 'ipv6', 'neutron', 'quantum', 'custom']
    meta = 'Gather information quickly and efficiently from trusted sources'
    subpar = parser.add_subparsers(title='Search Options', metavar=meta)
    for service in services:
        action = subpar.add_parser(
            service,
            parents=[query_search],
            help='Look for "%s" Information' % service
        )
        action.set_defaults(topic=service)
        action.add_argument(
            '--now',
            default=False,
            action='store_true',
            help='Perform a more CPU intense search, will produce faster'
                 ' results.'
        )
        action.add_argument('--query', nargs='*', required=True)
    return parser


class ExternalInformationIndexer(object):
    def __init__(self, config):
        standard_salt = 'aHR0cDovL2xtZ3RmeS5jb20vP3E9'
        optimized_salt = 'aHR0cHM6Ly93d3cuZ29vZ2xlLmNvbS93ZWJocCNxPQ=='
        self.config = config
        if self.config.get('now', False) is True:
            self.definition_salt = optimized_salt
        else:
            self.definition_salt = standard_salt
        query = self.config.get('query')
        topic = self.config.get('topic')
        if topic != 'custom':
            query.insert(0, '"%s"' % topic)
        self.query = urllib.quote(' '.join(query))
        with asd.Timer() as time:
            self.indexer()
        print('Advanced Search completed in %s Seconds' % time.interval)

    def indexer(self):
        """Builds the query content for our targeted search."""
        prefix = base64.decodestring(self.definition_salt)
        self.fetch_results(query_text='%s%s' % (prefix, self.query))

    @staticmethod
    def fetch_results(query_text):
        """Opens a web browser tab containing the search information.

        Sends a query request to the Index engine for the provided search
        criteria.

        :param query_text: ``str``
        """
        import webbrowser
        if webbrowser.open(url=query_text) is not True:
            encoder = 'dGlueXVybC5jb20='
            api = 'L2FwaS1jcmVhdGUucGhwP3VybD0lcw=='
            conn = httplib.HTTPConnection(host=base64.decodestring(encoder))
            conn.request('GET', base64.decodestring(api) % query_text)
            resp = conn.getresponse()
            if resp.status >= 300:
                raise httplib.CannotSendRequest('failed to make request...')
            print("It seems that you are not executing from a desktop\n"
                  "operating system or you don't have a browser installed.\n"
                  "Here is the link to the content that you're looking for.\n")
            print('\nContent: %s\n' % resp.read())


def main():
    """Run Main Program."""
    parser = arg_parser()
    config = vars(parser.parse_args())
    ExternalInformationIndexer(config=config)


if __name__ == '__main__':
    main()
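The "salt" strings that `ExternalInformationIndexer` feeds to `base64.decodestring` are nothing more than base64-encoded URL prefixes for the two search front-ends. A minimal sketch of the decoding step (written for Python 3, where the Python 2 `decodestring` name is spelled `b64decode` — an adaptation, since `run.py` itself targets Python 2):

```python
import base64

# The same literals used in ExternalInformationIndexer.__init__
standard_salt = 'aHR0cDovL2xtZ3RmeS5jb20vP3E9'
optimized_salt = 'aHR0cHM6Ly93d3cuZ29vZ2xlLmNvbS93ZWJocCNxPQ=='

# Decoding reveals the plain URL prefixes the query string is appended to.
print(base64.b64decode(standard_salt).decode())   # http://lmgtfy.com/?q=
print(base64.b64decode(optimized_salt).decode())  # https://www.google.com/webhp#q=
```

So `--now` simply switches the prefix from the lmgtfy redirect to a direct Google search before the URL-quoted query is appended.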
AdvancedSearchDiscovery
/AdvancedSearchDiscovery-0.0.3.tar.gz/AdvancedSearchDiscovery-0.0.3/asd/run.py
run.py