package
|
package-description
|
---|---|
airtel
|
A light Python module for your Airtel API. Documentation | Report bugs | Contribute
|
airtel-airflow-plugins
|
##### Airtel Airflow Plugins Library #####

Table of Contents: Overview, Setup, Usage

Overview

Airtel Airflow Plugins Library containing all custom operators, sensors, callbacks and hooks.

Setup

First, create a virtual env in your local setup.
Note: make sure to create a virtual env ending with _venv so that it is ignored at the time of pushing code to git.

python3 -m venv <virtual_env_name_venv>
source <virtual_env_name_venv>/bin/activate

Change the interpreter setting in your IDE to point it to the correct virtual env.

First, clone this repo to your local system. After you clone the repo, make sure to run the setup.py file so you can install any dependencies you may need. To run the setup.py file, run the following command in your terminal.

Install requirements.txt in your virtual env. This will install all the dependencies listed in the requirements.txt file:

pip install -r requirements.txt

The command below will create a wheel file:

python setup.py bdist_wheel

Please make sure to have a .pypirc file in your local home directory; a sample is attached in the project home dir as well. This file contains the repository URL along with the credentials which we need while running the command below to publish the package to the PyPI repository.

The command below will publish the package to the PyPI repository:

twine upload --repository nexus dist/*

Usage
The plugins package can be used in any Airflow setup for custom modules (Operators / Plugins / Sensors / Callbacks).
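For illustration only, a minimal sketch of wiring one of these custom modules into a DAG; the import path and operator name used here (airtel_airflow_plugins.operators, SampleOperator) are placeholders, not the package's confirmed API:

```python
# Hypothetical usage sketch: SampleOperator and its import path are placeholders,
# not the actual names shipped by this plugins package.
from datetime import datetime

from airflow import DAG
from airtel_airflow_plugins.operators import SampleOperator  # placeholder import path

with DAG(
    dag_id="airtel_plugins_demo",
    start_date=datetime(2024, 1, 1),
    schedule_interval=None,
    catchup=False,
) as dag:
    # A custom operator from the plugins library is wired in like any built-in operator.
    sample_task = SampleOperator(task_id="sample_task")
```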
|
airtest
|
UI Test Automation Framework for Games and Apps on Android/iOS/Windows, presented by NetEase Games
|
airtest-ext
|
No description available on PyPI.
|
airtest_for_h9
|
# -- coding: utf-8 --
This program is an extension of airtest for com.netease.H9. Thanks to the authors for their great work.
About airtest, please refer to http://netease.github.io/airtest/overview/quick_start.html for complete information and details.
Author: zheng wen
Mail: [email protected]
Date: 17 December 2014, 11:59:28
|
airtest-sd
|
UI Test Automation Framework for Games and Apps on Android/iOS/Windows, presented by NetEase Games
|
airtest-selenium
|
Selenium with airtest test framework. 2018, presented by NetEase Games
|
airthings
|
airthings

airthings is a simple python package that contains methods to communicate with Airthings[1] devices. The package utilizes bluepy[2] for the communication between python and the devices. The package features can be found below.

[1] https://airthings.com/
[2] https://github.com/IanHarvey/bluepy

Note: Some features are currently undocumented, and parts are untested/not yet implemented.

Features
--------
- Autodiscover Airthings devices
- Find and search for Airthings devices by using MAC addresses and/or identifiers
- Fetch sensor measurements from various Airthings models, see sensor capability list below

Requirements
------------
bluepy only supports Linux, and is therefore currently the only supported operating system. I have only tested with the Wave Plus Gen 1 (2930). Other device models should in theory work fine, but they are untested or might be unimplemented.

System requirements:

- libglib2.0-dev
Installation
------------
The current stable version of airthings is available on pypi and can be
installed by running:
``pip install airthings``
Other sources:
- pypi: http://pypi.python.org/pypi/airthings/
- github: https://github.com/kotlarz/airthings/
Usage
-----
Examples can be found in the `examples <./examples>`__ directory.
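For orientation, a rough sketch of the flow the feature list implies (discover devices, then read their sensor values); the function and attribute names used here are assumptions, not the library's verified API, so prefer the bundled examples:

```python
# Illustrative sketch only: discover_devices() and the attributes below are assumed
# names based on the feature list above, not the verified airthings API.
import airthings

devices = airthings.discover_devices()   # autodiscover nearby Airthings devices (assumed helper)
for device in devices:
    # assumed attributes: MAC address, model name and the latest sensor readings
    print(device.mac_address, device.model)
    print(device.sensors)
```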
Supported devices
-----------------
*Note: “Model number” is the first 4 digits of the Airthings device serial number*
Wave Gen 1 (Model number: 2900)

On 1st Gen Wave, temperature and humidity are updated every time we read the wave.

Sensor capabilities
^^^^^^^^^^^^^^^^^^^
- Humidity (%rH)
- Radon short term average (Bq/m3)
- Radon long term average (Bq/m3)
- Temperature (°C)

Wave Mini Gen 1 (Model number: 2920)

Sensor values are updated every 5 minutes.
.. _sensor-capabilities-1:
Sensor capabilities
^^^^^^^^^^^^^^^^^^^
- Humidity (%rH)
- Temperature (°C)
- VOC (ppb)
Wave Plus Gen 1 (Model number: 2930)

Except for the radon measurements, the Wave Plus updates its current sensor values once every 5 minutes. Radon measurements are updated once every hour.

.. _sensor-capabilities-2:

Sensor capabilities
^^^^^^^^^^^^^^^^^^^
- Humidity (%rH)
- Radon short term average (Bq/m3)
- Radon long term average (Bq/m3)
- Temperature (°C)
- Atmospheric pressure (hPa)
- CO2 (ppm)
- VOC (ppb)

Wave Gen 2 (Model number: 2950)

On 2nd Gen Wave, temperature and humidity are updated every 5 minutes.
On both devices, radon measurements are updated once every hour.
.. _sensor-capabilities-3:
Sensor capabilities
^^^^^^^^^^^^^^^^^^^
- Humidity (%rH)
- Radon short term average (Bq/m3)
- Radon long term average (Bq/m3)
- Temperature (°C)
|
airthings-ble
|
airthings-ble

Library to control Airthings devices through BLE, primarily meant to be used in the Home Assistant integration.

Getting Started

Prerequisites:
- Python version 3.11, as required by Home Assistant (docs)
- Poetry

Install dependencies: poetry install

Run tests: poetry run pytest

License

This project is licensed under the MIT License. See the LICENSE file for details.
|
airthings-cloud
|
No description available on PyPI.
|
airthings-exporter
|
Airthings Exporter

Prometheus exporter for Airthings devices.

Requirements
- Python 3
- Airthings device

Setup
- Register your Airthings device to sync with the cloud following the instructions manual
- Check the Airthings app or the web dashboard to obtain your device serial number. This is your client id
- Go to the Airthings Integrations webpage and request an API Client to obtain a client secret
- Install airthings-exporter: pip install airthings-exporter

Usage

# Start server (1 device)
airthings-exporter --client-id [client_id] --client-secret [client_secret] --device-id [device_id]

# Start server (2 devices)
airthings-exporter --client-id [client_id] --client-secret [client_secret] --device-id [device_id_1] --device-id [device_id_2]

# Test server works
curl -s localhost:8000

Tested Devices
- Airthings View Plus
- Airthings Wave Mini

Example Prometheus configuration file (prometheus.yml)

scrape_configs:
  - job_name: 'airthings'
    scrape_interval: 5m
    scrape_timeout: 10s
    static_configs:
      - targets: ['localhost:8000']

API limitations

The Airthings API for consumers allows only up to 120 requests per hour. Every scrape in Prometheus sends one request per device to the Airthings API, so make sure the configured Prometheus scrape interval does not exceed the limit. For example, with two devices and the 5m scrape interval shown above, Prometheus sends about 24 requests per hour, well within the limit.
|
airthingswave-mqtt
|
# Get Readings from an Airthings Wave and publish to MQTT server

[Airthings](http://airthings.com) makes a BTLE Radon detector called "Wave". This is an executable intended to be called periodically from Cron or some other scheduler to publish readings to an MQTT server.

## Limitations

This application doesn't implement 'find' as provided in the example at https://airthings.com/raspberry-pi/

## API

```Python
class AirthingsWave:
    def __init__(self, config_file):
```

Class instantiation requires a path to a config file in YAML format.

```
mqtt:
  broker: 192.168.30.18
  port: 1883
waves:
  - name: "basement-radon"
    addr: 98:07:2d:43:4d:ff
```

Before taking a reading, you should:

```
def ble_connect(self, addr):
```

Then you can:

```
def get_readings(self, p):
def publish_readings(self, name, readings):
```

## Example

From __main__.py:

```python
c = sys.argv[1]
atw = airthingswave.AirthingsWave_mqtt(c)
count = len(atw.waves)
if count > 0:
    iter = 0
    while iter < count:
        handle = atw.ble_connect(atw.waves[iter]["addr"])
        r = atw.get_readings(handle)
        atw.ble_disconnect(handle)
        atw.publish_readings(atw.waves[iter]["name"], r)
        iter = iter + 1
    return True
```
|
airtight
|
If you're going to import antigravity, you'd better make sure the hatch is closed.

The airtight package is written for Python 3.6+. It provides idiosyncratic code that somewhat simplifies the creation and debugging of command-line python scripts.

simpler than a template

Instead of copying some 50-line template for your python script and then writing a bunch of calls to argparse and logging, just build some lists describing the arguments and logging level you want and invoke airtight.cli.configure_commandline():

    #!/usr/bin/env python3
    # -*- coding: utf-8 -*-
    """
    Example script template using the airtight module
    """
    from airtight.cli import configure_commandline
    import logging

    DEFAULT_LOG_LEVEL = logging.WARNING
    OPTIONAL_ARGUMENTS = [
        # each argument is a list: short option, long option, default value,
        # help string, required?
        ['-l', '--loglevel', 'NOTSET', 'desired logging level (' +
            'case-insensitive string: DEBUG, INFO, WARNING, or ERROR', False],
        ['-v', '--verbose', False, 'verbose output (logging level == INFO)', False],
        ['-w', '--veryverbose', False, 'very verbose output (logging level == DEBUG)', False],
        ['-x', '--custom', 7, 'your custom argument', False]
    ]
    POSITIONAL_ARGUMENTS = [
        # each argument is a list with 3 elements: name, type, help
        ['foovar', str, 'some input value that you want']
    ]

    def main(**kwargs):
        """Main function of your script.

        kwargs -- keyword arguments as parsed from the command line
        """
        # your additional code here

    if __name__ == "__main__":
        main(**configure_commandline(
            OPTIONAL_ARGUMENTS, POSITIONAL_ARGUMENTS, DEFAULT_LOG_LEVEL))

make debug logging just a wee bit easier

The airtight.logging module provides two methods: configure_logging(), which is used by airtight.cli.configure_commandline(), and flog(), which reduces typing when you want to log a variable's name and value.

So, you can write:

    > from airtight.logging import flog
    > fish = 'salmon'
    > flog(fish)
    DEBUG:foo_script:fish: 'salmon'

flog() logs to DEBUG by default, but an optional keyword argument level may be used to specify another standard level, e.g.:

    > from airtight.logging import flog
    > import logging
    > fish = 'salmon'
    > flog(fish, level=logging.WARNING)
    WARNING:foo_script:fish: 'salmon'

Another optional keyword argument (comment) may be specified. A string value supplied via this argument will be postfixed to the logged variable name and value, thus:

    > from airtight.logging import flog
    > fish = 'salmon'
    > flog(fish, comment='I like this fish!')
    DEBUG:foo_script:fish: 'salmon' I like this fish!

etc.

Bug reports and feature requests are welcome, but really I'd prefer pull requests.

todo

- docstrings
|
airtiler
|
The airtiler generates training / test data for neural networks by downloading buildings from vector data from OpenStreetMap and the corresponding satellite images from Microsoft Bing Maps. It then generates binary masks from the vector data which can be used, for example, for instance segmentation.

Examples

Example images showing Image and Mask outputs with instance separation set to False and True.

Installation

To install airtiler run:

    pip install airtiler

Usage

    airtiler -c sample_config.json

API

    airtiler = Airtiler("bing_key")
    airtiler.process(config)

Config

Key           | Required
options       |
boundingboxes | Yes

Options (optional)

Key                | Description
target_dir         | The directory where the files will be written to
zoom_levels        | Global zoom levels which will be used if a boundingbox is specified in short format or has no boundingboxes.
separate_instances | If true, each building instance will be separated. Otherwise, a building consisting of multiple instances will be rendered as one.

Sample config

    {
      "options": {
        "target_dir": "./output/blabla",
        "zoom_levels": [15, 16, 17],
        "separate_instances": false
      },
      "query": {
        "tags": ["highway", "building", "leisure=swimming_pool"]
      },
      "boundingboxes": {
        "firenze": [11.239844, 43.765851, 11.289969, 43.790065],
        "rapperswil": {
          "zoom_levels": [17, 18],
          "tr": 8.818724,
          "tl": 47.222126,
          "br": 8.847435,
          "bl": 47.234629
        },
        "new_york": {
          "tr": -74.02059,
          "tl": 40.646089,
          "br": -73.864722,
          "bl": 40.77413
        }
      }
    }

Projects

The airtiler is used in the following projects:

- Deep OSM - Instance segmentation using orthophotos and OSM data
|
airtime
|
Airtime

This package contains Python scripts for analyzing transcript files from Zoom and Microsoft Teams meetings. These scripts function by reading in a transcript, calculating the spoken word totals for each of its speakers, and displaying the analyzed results. The intended purpose of this package is to allow users to determine what percentage of a given meeting was allocated to which speaker(s).

Why this matters

Airtime is a project management tool which operates from the assumption that teams and organizations could only stand to benefit from an increase in perspectives from and participation among their members. The first step, of course, remains to create good teams. Once that's been accomplished, the second step is to then allow all those involved the necessary time and space to make the contributions for which their expertise was initially sought out.

This tool aims to provide those who regularly undertake collaborative work in fields that base their decisions on statistical findings with the necessary data to foster more genuinely collaborative work environments. It's one thing, in other words, for a team member to convey to their project lead that they often have their contributions minimized in meetings while others monopolize the floor; it's quite another for them to be able to demonstrate that the data support their experiences and that there's room to move forward with a data-driven solution.

Key features

✅ Ease of use: Just 3 lines of code from installation to analysis.
✅ Privacy and security: All analyses run locally on your machine.
✅ Efficient computing: Lightweight algorithms for fast processing times.

Coming soon: a step-by-step implementation guide for non-developers with no prior Python experience. If you'd like to be notified when this becomes available, please send me an email.

Version

The latest version of this package is v1.0.2.

Installation and requirements

pip install airtime

This package uses only the Python standard library. No external dependencies are required to run these scripts.

Usage

After installing the package, it can be used as follows.

For analyzing transcripts from Zoom meetings:

    from airtime import zoom_analzyer
    zoom_analzyer.analyze_zoom_transcript("transcript.txt")

Note: recent releases of Zoom will output several transcript formats, but only one of these formats will preserve the speaker names which are essential to this analysis. To access the required transcript format, before the meeting ends, navigate to Captions > View full transcript > Save transcript.

For analyzing transcripts from Microsoft Teams meetings:

    from airtime import teams_analzyer
    teams_analzyer.analyze_teams_transcript("transcript.vtt")

Output

These scripts print:
- Meeting totals (total words and total speakers)
- Speaker totals (total words per speaker and their contributed percentage to the meeting total)

Example output:

MEETING TOTALS
- 1,000 words
- 2 speaker(s)
SPEAKER TOTALS
- Holmes: 900 words (90.0% of meeting)
- Watson: 100 words (10.0% of meeting)

Contributing

Contributions to this package are welcome, and especially in the form of introducing additional transcript formats from other services (e.g., Google Meet). Please open an issue to discuss your idea or submit a pull request.

License

This package is licensed under the terms of the GNU Affero General Public License (GNU AGPL).

Contact information

Jana M. Perkins, the developer of this package, can be reached via Twitter (@jcontd) or the contact information on her website.

Citation

Perkins, J. M. (2023). Airtime. github.com/jcontd/airtime
|
airtm
|
A python library for interfacing with the AirTM API.
|
airton-ac
|
airton-ac

Control an Airton AC device over LAN. This requires having the wifi module.

Features
- asynchronous methods and transport
- persistent communication to the device
- automatic remote device state updates (remotes can still be used)
- configurable buffering for subsequent updates
- constraints between device commands
- Domoticz plugin using a dedicated thread

Usage

See local tuya requirements first to find device information. Example usage:

    from local_tuya import DeviceConfig, ProtocolConfig
    from airton_ac import ACDevice, ACFanSpeed

    async with ACDevice(DeviceConfig(ProtocolConfig("{id}", "{address}", b"{key}"))) as device:
        await device.switch(True)
        await device.set_speed(ACFanSpeed.L2)
        await device.switch(False)

Domoticz plugin

The plugin requires having fetched device information using the instructions above. Make sure to read the plugin instructions first.

💡 The Domoticz version should be 2022.1 or higher.

    python -m pip install --upgrade airton-ac[domoticz]
    python -m airton_ac.domoticz.install

Domoticz path defaults to ~/domoticz but you can pass a -p option to the second command to change that:

    python -m airton_ac.domoticz.install -p /some/other/path

Restart Domoticz and create a new Hardware using Tuya Airton AC. Fill in device information and add. The hardware will create up to 5 devices to control the fan (all prefixed with the hardware name):
- power: to turn on or off
- set point: to set the target temperature
- temperature: to record current temperature as measured by the unit
- mode: to control operating mode
- fan: to control fan speed
- eco: toggle low heat when heating and eco-mode when cooling
- light: toggle display on the unit
- swing: toggle swing mode
- sleep: toggle sleep mode
- health: toggle health mode

You can customize the devices you want added in the hardware page. All device names and levels can be changed once added, as only IDs are used internally.
|
airtools
|
AIRtools

Limited subset of P.C. Hansen and J. S. Jørgensen's AIRtools 1.0 Matlab suite of inversion / regularization tools, along with some ReguTools functions. Also includes a linear constrained least squares solver using cvxopt in lsqlin.py. More functions are available in Matlab from AIRtools 2.

Install

    python -m pip install -e .

Usage

Just paste the code from each test into your console for the function you're interested in. Would you like to submit a pull request for an inversion example making a cool plot?

- picard.py: Picard Plot
- kaczmarz.py: Kaczmarz ART
- maxent.py: Maximum Entropy Regularization (from ReguTools)
- rzr.py: remove unused or little used rows from tomographic projection matrix
- lsqlin.py: linear constrained least squares solver
- matlab/logmart.m: Implementation of log-MART
- fortran/logmart.f90: log-MART in Fortran

Examples

tests/test_all.py

Tests

Run a comparison of the Python code with the Matlab code in the matlab directory by:

    python airtools/tests/test_octave.py

which runs the Matlab version via Oct2Py.
|
airtouch
|
Airtouch

A library for Polyaire Airtouch controllers
|
airtouch2
|
airtouch2_python

In-development Python client for the Polyaire AirTouch 2 and 2+ air conditioning systems.

Thanks to:
- https://github.com/tonymyatt/airtouch3api
- https://github.com/ozczecho/vzduch-dotek
- https://community.home-assistant.io/t/polyaire-air-touch-2-or-3/19650/44
|
airtouch3
|
Airtouch 3 Python API

API for the monitoring and control of a HVAC unit branded Polyaire Airtouch 3.
https://www.polyaire.com.au/about-us/news/airtouch-version-3-now-available/

API Definition

General Usage

To initialise:

at3 = AirTouch3("192.168.1.1")

To read status from the unit (returns true if successful, otherwise false):

at3.update_status()

Air Touch Object
- at3.name
- at3.id
- at3.comms_status
- at3.comms_error
- at3.groups
- at3.ac_units
- at3.sensors
- at3.update_status()
- at3.print_status()

Group Functions (aka Zones in most other systems)
- at3.toggle_group(group_id)
- at3.toggle_position_group(group_id, direction)

Group Objects
- at3.groups[group_id].number
- at3.groups[group_id].name
- at3.groups[group_id].is_on
- at3.groups[group_id].mode
- at3.groups[group_id].open_percent
- at3.groups[group_id].temperature
- at3.groups[group_id].temperature_sp
- at3.groups[group_id].toggle()
- at3.groups[group_id].position_dec()
- at3.groups[group_id].position_inc()

AC Unit Functions
- at3.toggle_ac_unit(unit_id)
- at3.toggle_temperature_ac_unit(unit_id, direction:AT3Command)
- at3.set_fan_speed_ac_unit(unit_id, speed:AT3AcFanSpeed)
- at3.set_mode_ac_unit(unit_id, mode:AT3AcMode)

AC Unit Objects
- at3.ac_units[unit_id].number
- at3.ac_units[unit_id].is_on
- at3.ac_units[unit_id].has_error
- at3.ac_units[unit_id].mode
- at3.ac_units[unit_id].brand
- at3.ac_units[unit_id].fan_speed
- at3.ac_units[unit_id].temperature
- at3.ac_units[unit_id].temperature_sp
- at3.ac_units[unit_id].toggle()
- at3.ac_units[unit_id].temperature_inc()
- at3.ac_units[unit_id].temperature_dec()
- at3.ac_units[unit_id].set_fan_speed(speed:AT3AcFanSpeed)
- at3.ac_units[unit_id].set_mode(mode:AT3AcMode)

AC Sensor Objects
- at3.sensors[sensor_name].name
- at3.sensors[sensor_name].temperature
- at3.sensors[sensor_name].low_battery

Simple Example

from airtouch3 import AirTouch3
from airtouch3 import AT3CommsStatus
from airtouch3 import AT3Command
from airtouch3.airtouch3 import AT3AcFanSpeed
at3 = AirTouch3('192.168.1.72')
at3.update_status()
if at3.comms_status != AT3CommsStatus.OK:
print("Connection failed "+at3.comms_error)
exit()
at3.print_status()
print(f"Fan Speed for AC0 {at3.set_fan_speed_ac_unit(1, AT3AcFanSpeed.HIGH)}")
exit()
# Toggle a zone on/off
#print(f"Toogle Group 7 {at3.groups[7].toggle()}")
print(f"Toogle Group 7 {at3.toggle_group(7)}")
at3.print_status()
g = at3.groups[7]
print(f"Group {g.name}: {g.is_on}; Mode is {g.mode}; {g.open_percent}%; "
f"Temp: {g.temperature}degC Target: {g.temperature_sp}degC")
# Increase a group position
#print(f"Increase zone 0: {at3.toggle_position_group(0, AT3Command.INCREMENT)}")
print(f"Increase zone 0: {at3.groups[0].position_inc()}")
g = at3.groups[0]
print(f"Group {g.name}: {g.is_on}; Mode is {g.mode}; {g.open_percent}%; "
f"Temp: {g.temperature}degC Target: {g.temperature_sp}degC")
# Decrease a group position
print(f"Decrease zone 6: {at3.toggle_position_group(6, AT3Command.DECREMENT)}")
#print(f"Decrease zone 6: {at3.groups[6].position_dec()}")
g = at3.groups[6]
print(f"Group {g.name}: {g.is_on}; Mode is {g.mode}; {g.open_percent}%; "
f"Temp: {g.temperature}degC Target: {g.temperature_sp}degC")
# Toggle AC Unit 1 on/off
#print(f"Toggle AC Unit 1 {at3.toggle_ac_unit(1)}")
print(f"Toggle AC Unit 1 {at3.ac_units[1].toggle()}")
# Toggle AC Unit 1 Temp Setpoint Up
#print(f"Toggle AC Unit 1 {at3.toggle_temperature_ac_unit(1, AT3Command.INCREMENT)}")
print(f"Toggle AC Unit 1 {at3.ac_units[1].temperature_inc()}")
# Toggle AC Unit 0 Temp Setpoint Down
#print(f"Toggle AC Unit 0 {at3.toggle_temperature_ac_unit(0, AT3Command.DECREMENT)}")
print(f"Toggle AC Unit 0 {at3.ac_units[0].temperature_dec()}")

Warning

This code was developed by testing with my own Airtouch 3 system. I noted during development that if the unit received unexpected data, it would stop all communication (including to your mobile app) for a couple of minutes. There should be no issues with your Airtouch 3 system continuing to work with your mobile app while using this API, but it is at your own risk if you try it and you have problems.

Thanks

With thanks to the following projects which provided inspiration:
- https://github.com/ozczecho/vzduch-dotek
- https://github.com/L0rdCha0s/homebridge-airtouch3-airconditioner
- https://github.com/LonePurpleWolf/airtouch4pyapi
|
airtouch4pyapi
|
Airtouch 4 & 5 Python TCP APIAn api allowing control of AC state (temperature, on/off, mode) of an Airtouch 4 controller locally over TCP. Airtouch 5 support is experimental as of 28 Nov 2022, and is fully interface compatible with AT4.All you need to do is initialise and specify the correct AirTouchVersion (if you don't, it assumes 4).WarningI am using this with my own Airtouch 4 and see no issues. Please don't blame me if you have any issues with your Airtouch 4 or AC system after using this - I don't know much about AC systems and will probably not be able to help!Others are using it with Airtouch 5 and see no issues.UsageTo initialise:airTouch = AirTouch("192.168.1.19")airTouch = AirTouch("192.168.1.1", AirTouchVersion.AIRTOUCH5)As a test:Use the demo.py file and pass in an AirTouch IP. It takes you through a few tests.NotesAirTouch5: If you turn off all zones, the AC itself turns off. Turning on a zone does not turn the AC back on by itself. You must turn it back on too. Same behaviour in 'official' app.To load:await airTouch.UpdateInfo();-- This loads the config from the AirTouch. Make sure you check for any errors before using it. It will load the Group/Zone info, the AC info, then capabilities. This needs to happen prior to using.The following functions are available:Group Level Functions:SetGroupToTemperature(async)TurnGroupOn(async)TurnGroupOff(async)SetCoolingModeByGroup(async)SetFanSpeedByGroup(async)GetSupportedCoolingModesByGroup-- Based on the loaded config.GetSupportedFanSpeedsByGroup-- Based on the loaded config.AC Level FunctionsTurnAcOn(async)TurnAcOff(async)SetFanSpeedForAc(async)SetCoolingModeForAc(async)GetSupportedCoolingModesForAcGetSupportedFanSpeedsForAcGetAcs
|
airtouch5py
|
airtouch5py

Python client for the Airtouch 5
|
airTracker
|
No description available on PyPI.
|
air-tracker
|
No description available on PyPI.
|
airtrackrelay
|
airtrackrelay

UDP socket server to collect live tracking reports and relay them to metarace telegraph as JSON encoded objects.

Supported tracking devices and messages:

Quectel GL300/320 "Air Interface"
- +ACK : Command acknowledge, type: 'drdack'
- +RESP, +BUFF: GTFRI, GTRTL, GTSOS, GTLOC : Location report, type: 'drdpos'
- GTINF : Information report, type: 'drdstat'

Beaker
- AES128 Location, type 'drdpos'

Configuration is via metarace sysconf section 'airtrackrelay' with the following keys:

key (type) Description [default]
- topic (string) MQTT relay topic ['tracking/data']
- port (int) UDP listen port [1911]
- k1 (string) Beaker K1, 128 bit hex string [null]
- k2 (string) Beaker K2, 128 bit hex string [null]
- uid (int32) Beaker uid/config id [0x45670123]

Tracker imeis are read from the section 'tracking' under the key 'devices', which is a map of device ids to a dict object:

key (type) Description [default]
- imei (string) Device IMEI
- type (string) Device type

Example config:

{
"airtrackrelay": {
"port": 12345,
"topic": "tracking/data",
"key": "000102030405060708090a0b0c0d0e0f",
"cbcsig": 1234567890
},
"tracking": {
"devices": {
"bob": { "imei": "012345678901234", "label": null,
"phone": "+12345678901", "type": null },
"gem": { "imei": "023456788901234", "label": null,
"phone": null, "type": null },
}
}
}

Example Info Message:

{"type": "drdstat", "drd": "bob", "devstate": "41", "rssi": "13",
 "voltage": "4.08", "battery": "94", "charging": "0", "buffered": false,
 "sendtime": "20220101023424" }

Example Ack Message:

{"type": "drdack", "drd": "gem", "ctype": "GTFRI", "cid": "1A3D",
 "sendtime": "20220101031607", "req": ""}

Example GL3xx Location Message:

{"type": "drdpos", "fix": true, "lat": "-13.567891",
 "lng": "101.367815", "elev": "22.6", "speed": "12.7",
 "drd": "gem", "fixtime": "20220101022231",
 "buffered": false, "battery": "94", "flags": 0}

Example Beaker Location Message:

{"type": "drdpos", "fix": true, "lat": "-12.345666",
 "lng": "101.123555", "speed": "0.0", "drd": "bob",
 "fixtime": "2023-01-13T03:12:49.00Z", "battery": "100",
 "buffered": false, "flags": 255}

Requirements

metarace >=2.0

Installation

$ pip3 install airtrackrelay
|
airtrik
|
Airtrik python

Airtrik is an IoT Cloud platform for managing communication between IoT devices and software platforms. This is a Python SDK that can be used for communication with both IoT devices running Python as a programming language, like the Raspberry Pi, and software platforms running Python. This library can also be used for making a realtime data pipeline for applying machine learning on the IoT devices.

Summary
- Getting Started
- Prerequisites
- Installing
- Connecting to your App's Network
- Subscribe to device in App's Network
- Sending message to device
- Receiving messages from device
- Versioning
- Authors
- License

Getting Started

Follow the instructions below to get your device and application up and running within minutes. It is very easy to integrate airtrik into your project.

Prerequisites

Before proceeding further, have the following software installed in your system or development system:
- python (version > 3.5)
- pip (any recent version)

Installing

Installing the airtrik python library is straightforward, just install it with pip. Although it will work pretty well with your system python, we recommend using a virtual environment for your project.

    pip install airtrik

Connecting to your App's Network

    import airtrik.iot as iot

    # create app in the panel to get the app key
    iot.connect("__APP_KEY__")

Subscribe to device in App's Network

    # you have to create device inside app from panel
    device = "__DEVICE_ID__"
    iot.subscribe(device)

Sending message to device

    message = "YOUR MESSAGE TO DEVICE"
    iot.send(device, message)

Receiving messages from device

    # you can write your custom function handling the incoming message
    def onReceive(deviceId, message):
        print(deviceId, message)

    iot.onReceive = onReceive
    iot.waitForMessage()

Versioning

Authors

Vishal Pandey - Written Python Library - vishal-pandey

See also the list of contributors who participated in this project.

License

This project is licensed under the MIT Creative Commons License - see the LICENSE file for details.
|
airtunnel
|
Airtunnel is a means of supplementing Apache Airflow, a platform for workflow automation in Python which is angled at analytics/data pipelining. It was born out of years of project experience in data science, and the hardships of running large data platforms in real life businesses. Hence, Airtunnel is both a set of principles (read more on them in the Airtunnel introduction article) and a lightweight Python library to tame your airflow!

Why choose airtunnel? Because you will…

❤️ …stop worrying and love the uncompromised consistency
🚀 …need a clean codebase with separated concerns to be scalable
📝 …get metadata for ingested files, load status and lineage out-of-the-box
🏃 …have it up and running in minutes
🍺 …spend less time debugging Airflow DAGs doing worthwhile things instead

Getting started

To get started, we warmly recommend reading the Airtunnel introduction article and the Airtunnel tutorial. Also check out the demo project.

Installation

1) We suppose you have installed Apache Airflow in some kind of Python virtual environment. From there, simply do a pip install airtunnel to get the package.

2) Configure your codebase according to the Airtunnel principles: you need to add three folders for a declaration store, a scripts store and finally the data store:

2.1) The declaration store folder has no subfolders. It is where your data asset declarations (YAML files) will reside.

2.2) The scripts store folder is where all your Python & SQL scripts to process data assets will reside. It should be broken down by subfolders py for Python scripts and sql for SQL scripts. Please further add subfolders dml and ddl into the sql script folder.

2.3) The data store folder follows a convention as well; refer to the docs on how to structure it.

3) Configure Airtunnel by extending your existing airflow.cfg (as documented here):

3.1) Add the configuration section [airtunnel], in which you need to add three configuration keys.

3.2) Add declarations_folder, which takes the absolute path to the folder you set up in 2.1.

3.3) Add scripts_folder, which takes the absolute path to the folder you set up in 2.2.

3.4) Add data_store_folder, which takes the absolute path to the folder you set up in 2.3 for your data store.

Installation requirements

Python >= 3.6, Airflow >= 1.10 and Pandas >= 0.23

We assume Airtunnel is implemented best early on in a project, which is why going with a recent Python and Airflow version makes the most sense. In the future we might do more tests and include coverage for older Airflow versions. PySpark is supported from 2.3+.

Documentation

Airtunnel's documentation is on GitHub pages.
|
airudi
|
No description available on PyPI.
|
airunner
|
Stable Diffusion and Kandinsky on your own hardware

No web server to run, no additional requirements to install and no technical knowledge required. Just download the compiled package and start generating AI Art!

⭐ Features
- Easily generate AI art using Stable Diffusion.
- Easy setup - download and run. No need to install any requirements*
- Fast! Generate images in approximately 2 seconds using an RTX 2080s, 512x512 dimensions, 20 steps euler_a (approximately 10 seconds for 512x512 20 steps Euler A on 1080gtx). Also runs on CPU†
- txt2img, img2img, inpaint, outpaint, pix2pix, depth2img, controlnet, txt2vid
- Layers and drawing tools
- Image filters
- Dark mode
- Infinite scrolling canvas - use outpainting to create artwork at any size you wish or expand existing images.
- NSFW filter toggle
- Standard Stable Diffusion settings
- Fast load time, responsive interface
- Pure python - does not rely on a webserver

Requirements
- Cuda capable GPU (2080s or higher recommended)
- At least 10gb of RAM
- At least 5.8gb of disc space to install AI Runner

The core AI Runner program takes approximately 5.8gb of disc space to install, however the size of each model varies. Typically models are between 2.5gb and 10gb in size. The more models you download, the more disc space you will need.

Using AI Runner

Instructions on how to use AI Runner can be found in the wiki.

🔧 Installation

Download the official build on itch.io! This is the compiled version of AI Runner which you can use without installing any additional dependencies. For those interested in installing the development version, there are three options to choose from. See the installation wiki page for more information.

Unit tests

Unit tests can be run using the following command:

All tests: python -m unittest discover tests

Individual test: python -m unittest tests.test_canvas
|
airvacuumvald
|
No description available on PyPI.
|
air-vapour-pressure-dynamics
|
Air Vapour Pressure Dynamics

Repository with the basic functions related to the temperature, relative humidity, absolute humidity and enthalpy of the air.

How it works:

Create a python environment and install the package using the command:

    pip install air_vapour_pressure_dynamics

Once you have installed the package you can import it in a python module by:

    from air_vapour_pressure_dynamics import vapourpressure, absolutehumidity_kg_air, ....

or

    import air_vapour_pressure_dynamics

Setters:

Put or remove argument validation:

    setArgumentCheck(bool)

Put or remove units on the results of the calculation:

    setApplyUnits(bool)

Examples:

Basic example:

    from air_vapour_pressure_dynamics import absolutehumidity_kg_air
    temp = 25.5
    rh = 86.8
    absolutehumidity_kg_air(temp, rh)
    17.83208626438017

Get units of calculation:

    from air_vapour_pressure_dynamics import absolutehumidity_kg_air
    temp = 25.5
    rh = 86.8
    ab_hu = absolutehumidity_kg_air(temp, rh)
    ab_hu.units
    'g/Kg'

Compatibility:
- Numpy
- Sympy

But it is not necessary to install those packages if you are going to use it only with generic integers or float numbers.

Content:

Functions:
- vapourpressure(temp)
- density_air(temp, rh)
- absolutehumidity_kg_air(temp, rh)
- absolutehumidity_m3_air(temp, rh)
- entalpie_kg_air(temp, rh)
- entalpie_m3_air(temp, rh)
- moisuredeficit_kg_air(temp, rh)
- moisuredeficit_m3_air(temp, rh)
- dew_point_factor(temp, rh)
- dew_point_temperature(temp, rh)

Setters:
- setApplyUnits(bool)
- setArgumentCheck(bool)

And that's it!!
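A short sketch exercising a couple more of the functions from the list above, reusing the inputs from the basic example; the returned values and their units depend on the library:

```python
# Uses only functions named in the list above; actual outputs depend on the library.
from air_vapour_pressure_dynamics import vapourpressure, dew_point_temperature

temp = 25.5   # air temperature in degC
rh = 86.8     # relative humidity in %

print(vapourpressure(temp))             # vapour pressure at 25.5 degC
print(dew_point_temperature(temp, rh))  # dew point temperature for this temperature/RH pair
```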
|
airvuesg
|
airvue_sg
|
airwater
|
Airwater is a Python library that offers a sensible and human-friendly approach to simply writing spiders; it helps you write spiders with fewer imports and a lot less code.

Why use Arrow over built-in modules?

Python's standard library and some other low-level modules have near-complete date, time and timezone functionality, but don't work very well from a usability perspective:
- Too many modules: datetime, time, calendar, dateutil, pytz and more
- Too many types: date, time, datetime, tzinfo, timedelta, relativedelta, etc.
- Timezones and timestamp conversions are verbose and unpleasant
- Timezone naivety is the norm
- Gaps in functionality: ISO-8601 parsing, timespans, humanization

Features

Quick Start

Installation

    $ pip install -U airwater

Example Usage

Documentation
-------------

For full documentation, please visit

Contributing

Contributions are welcome for both code and localizations (adding and updating locales). Begin by gaining familiarity with the Arrow library and its features. Then, jump into contributing:
|
airwaveapiclient
|
Airwaveapiclient is a utility tool for Aruba Networks AirWave users.
This module connects to AirWave and gets the information such as the access point list,
detail, client, etc.

Requirements

Python 2.7, 3.5, 3.6, 3.7, PyPy.

Installation

PyPI or Github

$ pip install airwaveapiclient
or
$ git clone https://github.com/mtoshi/airwaveapiclient
$ cd airwaveapiclient
$ sudo python setup.py install

Using example

Documentation: Readthedocs
Sample code: Github

Login

>>> airwave = AirWaveAPIClient(username='admin',
... password='*****',
... url='https://192.168.1.1')
>>> airwave.login()Get Access Point List>>> res = airwave.ap_list()
>>> res.status_code
200
>>> res.text # xml output
'<?xml version="1.0" encoding="utf-8" ...'Get Access Point Detail>>> ap_id = 1
>>> res = airwave.ap_detail(ap_id)
>>> res.status_code
200
>>> res.text # xml output
'<?xml version="1.0" encoding="utf-8" ...'Logout>>> airwave.logout()See alsohttp://www.arubanetworks.com/products/networking/network-management/
|
airwaves
|
airwaves-cli is a small, single purpose command line utility for uploading data to the Internet Archive for the Unlocking the Airwaves project. Page scans of digitized archival folders are packaged into a zip file and then uploaded to the Media History collection at the Internet Archive using their API. The uploaded data includes metadata for the folder which is being curated in an Airtable database.

Install

For convenience the airwaves-cli is distributed on the Python Package Index. Once you have Python installed you can:

    pip install airwaves

Run

Once the airwaves package is installed you can publish an item that has been cataloged in our AirTable database by giving the upload subcommand a folder identifier and the name of the zip file containing the page scans for that folder:

    airwaves upload naeb-b110-f04-03 naeb-b110-f04-03.zip

The first time you run airwaves it will prompt you for the Airtable and Internet Archive API keys since these are private to the project.

If you want to see all the NAEB items by searching the AAPB website you can:

    airwaves items
|
airway
|
Airway

Anatomical classification of the human lung bronchus up to segmental/tertiary bronchi based on high-resolution computed tomography (HRCT) image masks. A rule-based approach is taken for classification, where the most cost-effective tree is found according to the angle deviations from defined vectors. Here, a pipeline is implemented which, given a formatted input data structure, can create the anatomical segmentations, clusters of similar anatomy, and the visualisations presented below.

An example can be seen below: the 18 segments of the lung are automatically annotated in the 3D voxel model.

(Example visualisation of split detection, rendered with Blender.)

This project currently uses masks created using Synapse 3D by Fujifilm, which already segments the lobes, bronchus, arteries and veins in CT images. However, this is not required. To use this project you only need a detailed segmentation of the bronchi.

Quickstart
- Install with pip install airway
- Run the interactive tutorial with airway tutorial (this will guide you through a fake sample)
- Modify your own data so that you can use it with airway (described in the data section below)

Data

We use a pipeline based approach to calculate the raw data. With the help of airway_cli.py you are able to calculate each step (called stages). To get the pipeline to work you need to define and format the first input stage, which is then
used to create all other stages.
You have multiple options for which stage you use as input, depending on which is simpler for your use case:

raw_data is the structure as created by Synapse 3D and cannot be directly used as input. We used the script in scripts/separate-bronchus-files.sh to create raw_airway. The format of it is still described in the DATADIR graphic below for reference.

raw_airway is the same data as raw_data, but the directory structure has been reformatted. This was our input stage for the pipeline. See the DATADIR graphic below for more details on the file structure. The IMG\d files contain single slices of the CT scan, where -10000 was used for empty, and the rest were various values between -1000 and 1000. We assumed -10000 to be empty, and everything else to be a voxel of that type, as we already had segmented data.

stage-01 (recommended) can be used as an input stage as well; this may be considerably easier to compute if you have a wildly different data structure. Only a single file needs to be created: model.npz. It is a compressed numpy file where a single ~800×512×512 array is saved for the entire lung (order is important: transverse × sagittal × frontal planes). The ~800 is variable and depends on the patient, the 512×512 is the slice dimension. The array is of type np.int8 and the array in the .npz file is not named (it should be accessible as arr_0). An encoding is used for each voxel to represent the 8 different classes as shown in the table below. If you do not have some classes then you may ignore them; only bronchus (encoded as 1) is required, as otherwise nothing will really work in the rest of the project. Empty or air should be encoded as 0.
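As a concrete illustration of this format, a short sketch of writing such a model.npz with NumPy (the patient folder is the example one from the directory graphic below; your own segmentation fills the array):

```python
# Sketch: pack a segmented lung volume into the stage-01 model.npz format described above.
import numpy as np

# Placeholder volume in (transverse, sagittal, frontal) order; the first axis length varies per patient.
volume = np.zeros((800, 512, 512), dtype=np.int8)
# ... fill `volume` with the class encoding from the table below (0 = empty, 1 = bronchus, ...) ...

# Saving the array unnamed stores it under 'arr_0', which is how the pipeline expects to read it.
np.savez_compressed("DATADIR/stage-01/3123156/model.npz", volume)

# Sanity check on load:
assert np.load("DATADIR/stage-01/3123156/model.npz")["arr_0"].dtype == np.int8
```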
See airway/image_processing/save_images_as_npz.py for reference if you decide to use this stage as input.

Note that the slice thickness for our data was 0.5 mm in all directions.
Currently, the pipeline assumes this is always the case.
It will work fairly well for different, but equal, thicknesses in all directions (e.g. 0.25 mm × 0.25 mm × 0.25 mm),
although some results may vary. Different thicknesses in multiple directions (e.g. 0.8 mm × 0.8 mm × 3 mm) will likely not work well at all. In that case we recommend duplicating certain axes manually, so that the thickness is similar in all directions.

Category        | Encoding
Empty           | 0
Bronchus        | 1
LeftLowerLobe   | 2
LeftUpperLobe   | 3
RightLowerLobe  | 4
RightMiddleLobe | 5
RightUpperLobe  | 6
Vein            | 7
Artery          | 8

The directory structure for the data is described below. Note that if you use stage-01 as input you do not need raw_data or raw_airway at all.

DATADIR
├── raw_data 🠔 This is an example of entirely unformatted raw data as we received them
│ └── Ct_Thorax_3123156 🠔 Each patient has its own directory
│ └── DATA
│ ├── Data.txt 🠔 This contained the paths for finding the various bronchus and lobes
│ └── 3123156 🠔 Example patient ID
│ └── 20190308
│ └── 124101
│ └── EX1
│ ├── SE1 🠔 Each SE* folder contains a list of DICOM images
│ │ ├── IMG1 named IMG1 through IMG642 (may be a different amount)
│ │ ├── ... these represent the slices for that segmentation.
│ │ └── IMG642 E.g. SE4 is Bronchus, SE5 is the Right upper lobe.
│ ├── SE2 This is described in Data.txt for each patient.
│ ├── ...
│ └── SE10
│ └── SE11
│
├── raw_airway 🠔 Formatted data which will be used as input for stage-01
│ └── 3123156 🠔 Single patient folder, in total there are around 100 of these
│ ├── Artery
│ │ ├── IMG1 🠔 DICOM images, in our case 512x512 slices
│ │ ├── IMG2 🠔 with 0.5 mm thickness in all directions
│ │ ├── ...
│ │ ├── IMG641 🠔 There generally are between 400 and 800 of these slices
│ │ └── IMG642 🠔 So the number of slices is variable
│ ├── Bronchus
│ │ ├── IMG1 🠔 Same number and dimension of slices as above
│ │ ├── ...
│ │ └── IMG642
│ ├── LeftLowerLobe 🠔 All of these also share the same structure
│ ├── LeftUpperLobe
│ ├── RightLowerLobe
│ ├── RightMiddleLobe
│ ├── RightUpperLobe
│ └── Vein
│
├── stage-01 🠔 Each stage now has the same basic format
│ ├── 3123156
│ │ └── model.npz 🠔 See above for an explanation
│ ├── 3123193
│ │ └── model.npz
│ └── ...
├── stage-02 🠔 Each stage from here on will be created by the pipeline itself
│ ├── 3123156 so you do not need to handle this, each of them have different
│ └── ... files depending on their use.
...

Note that currently NIFTI images are not supported; all IMG\d files are DICOM images.

Installation

At least Python 3.6 is required for this project.

    pip3 install airway

The open source 3D visualisation software Blender is required for visualisation. This dependency is optional if you do not need the visualisation part. Install from the website above or via a package manager like this (pip does not have blender):

    apt install blender

Tested with Blender versions 2.76, 2.79, 2.82 (recommended) and 2.92.

Now configure the defaults: copy and rename configs/example_defaults.yaml to configs/defaults.yaml (in the root folder of the project) and change the path in the file to where you have put the data.
You may ignore the other parameters for now, although feel free to read the comments there and adjust
them as needed (especially number of workers/threads).

Stages

For every calculated stage airway creates a new directory (stage-xx) and subdirectories for each patient (e.g. 3123156).

Each stage has input stages; these are handled for you though, so you only need to specify which stages to create. If you use raw_airway as input stage, then calculate stage-01:

    airway stages 1

You may add the -1 flag to calculate a single patient for test purposes. Note that calculation of stage-01 may be really slow if you store your data on an HDD (multiple hours), as there are a lot of single small files with a large overhead for switching between files.

Or if you use stage-01 as input you can calculate stage-02 directly:

    airway stages 2

If this works then great! You may continue to create all other stages as described below. If it does not work, then make sure the data format is correct as described in the Data section. If you think everything is correct then please open an issue or message me; there may be bugs, or some stuff may be naively hard-coded.

You may list all stages with short descriptions by calling airway stages without any arguments, or you can list all commands by using the --help flag.

Summary of the stages:
- Stages 1 - 7 use the raw data to create the tree splits used in the rest of the stages.
- Stages 30 - 35 analyse the tree structure, focusing mostly on the left upper lobe.
- Stages 60 - 62 are 3D visualisations, wherein .obj files of the lungs are exported.
- Stages 70 - 72 are plot visualisations of various stats.
- Stage 90 is the website which displays information for each patient including the 3D models.

The airway pipeline checks if the stage already exists; if you need to overwrite a stage you need to add the -f/--force flag.

You can now create all remaining stages like this:

    airway stages 2+

It may take a couple of hours for everything, depending on how many patients you have. If you don't have some dependencies installed you can still safely run it, and only those stages will crash. Open the ./log file and search for STDERR if you want to see the errors listed by airway.

By default, eight patients will be calculated in parallel (8 workers are used). If you have more CPU threads, simply increase the number of workers:

    airway stages 1 2 3 -w 32

or change the default in the config file (defaults.yaml).

To see the results you may open blender interactively like this:

    airway vis 1 -o

This loads the bronchus model with the correct materials for the segments.

You can also see the various files created by the stages:
- stage-62: renders based on the lung
- stage-10: contains .graphml files describing the tree structure, and the classifications created by the algorithm
- stage-35: creates a pdf with renders for each patient
- stage-11: creates a pdf with the found clusters of the various structures

More images

Example trees for patient 3183090

Credits & Thanks to

Airway originated as an observation by Dr. Rolf Oerter at the University of Rostock
that certain structures in the bronchus of the lungs he has seen while operating have not been documented. The first steps of the project were made as a student project at the University of Rostock at the Department of Systems Biology, organised by Mariam Nassar. It consisted of this team:

- Martin Steinbach
- Brutenis Gliwa
- Lukas Großehagenbrock
- Jonas Moesicke
- Joris Thiele

After this, the project is being continued by me (Brutenis) as my bachelor thesis.
Thanks to Mariam Nassar, Dr. Rolf Oerter, Gundram Leifert and Prof. Olaf Wolkenhauer for supervision during this time.
And thanks to Planet AI for letting me write my thesis at their office.
|
airwing_autopublish
|
gsconfig

gsconfig is a python library for manipulating a GeoServer instance via the GeoServer RESTConfig API. The project is distributed under a MIT License.

Installing

    pip install gsconfig

For developers:

    git clone git@github.com:boundlessgeo/gsconfig.git
    cd gsconfig
    python setup.py develop

Getting Help

There is a brief manual at http://boundlessgeo.github.io/gsconfig/. If you have questions, please ask them on the GeoServer Users mailing list: http://geoserver.org/comm/. Please use the Github project at http://github.com/boundlessgeo/gsconfig for any bug reports (and pull requests are welcome, but please include tests where possible).

Sample Layer Creation Code

    from geoserver.catalog import Catalog
    cat = Catalog("http://localhost:8080/geoserver/")
    topp = cat.get_workspace("topp")
    shapefile_plus_sidecars = shapefile_and_friends("states")
    # shapefile_and_friends should look on the filesystem to find a shapefile
    # and related files based on the base path passed in
    #
    # shapefile_plus_sidecars == {
    #    'shp': 'states.shp',
    #    'shx': 'states.shx',
    #    'prj': 'states.prj',
    #    'dbf': 'states.dbf'
    # }
    # 'data' is required (there may be a 'schema' alternative later, for creating empty featuretypes)
    # 'workspace' is optional (GeoServer's default workspace is used by... default)
    # 'name' is required
    ft = cat.create_featurestore(name, workspace=topp, data=shapefile_plus_sidecars)

Running Tests

Since the entire purpose of this module is to interact with GeoServer, the test suite is mostly composed of integration tests.
These tests necessarily rely on a running copy of GeoServer, and expect that this GeoServer instance will be using the default data directory that is included with GeoServer.
This data is also included in the GeoServer source repository as /data/release/. In addition, it is expected that there will be a postgres database available at postgres:password@localhost:5432/db. You can test connecting to this database with the psql command line client by running:

    $ psql -d db -U postgres -h localhost -p 5432

(you will be prompted interactively for the password.)

To override the assumed database connection parameters, the following environment variables are supported:
- DATABASE
- DBUSER
- DBPASS

If present, psycopg will be used to verify the database connection prior to running the tests.

If provided, the following environment variables will be used to reset the data directory:
- GEOSERVER_HOME: Location of git repository to read the clean data from. If only this option is provided, git clean will be used to reset the data.
- GEOSERVER_DATA_DIR: Optional location of the data dir geoserver will be running with. If provided, rsync will be used to reset the data.
- GS_VERSION: Optional environment variable allowing the catalog test cases to automatically download and start a vanilla GeoServer WAR from the web. Be sure that there are no running services on HTTP port 8080.

Here are the commands that I use to reset before running the gsconfig tests:

    $ cd ~/geoserver/src/web/app/
    $ PGUSER=postgres dropdb db
    $ PGUSER=postgres createdb db -T template_postgis
    $ git clean -dxff -- ../../../data/release/
    $ git checkout -f
    $ MAVEN_OPTS="-XX:PermSize=128M -Xmx1024M" \
      GEOSERVER_DATA_DIR=../../../data/release \
      mvn jetty:run

At this point, GeoServer will be running foregrounded, but it will take a few seconds to actually begin listening for http requests.
You can stop it with CTRL-C (but don't do that until you've run the tests!). You can run the gsconfig tests with the following command:

    $ python setup.py test

Instead of restarting GeoServer after each run to reset the data, the following should allow re-running the tests:

    $ git clean -dxff -- ../../../data/release/
    $ curl -XPOST --user admin:geoserver http://localhost:8080/geoserver/rest/reload

More Examples - Updated for GeoServer 2.4+

Loading the GeoServer catalog using gsconfig is quite easy. The example below allows you to connect to GeoServer by specifying custom credentials.

    from geoserver.catalog import Catalog
    cat = Catalog("http://localhost:8080/geoserver/rest/", "admin", "geoserver")

The code below allows you to create a FeatureType from a Shapefile:

    geosolutions = cat.get_workspace("geosolutions")
    import geoserver.util
    shapefile_plus_sidecars = geoserver.util.shapefile_and_friends("C:/work/gsconfig/test/data/states")
    # shapefile_and_friends should look on the filesystem to find a shapefile
    # and related files based on the base path passed in
    #
    # shapefile_plus_sidecars == {
    #    'shp': 'states.shp',
    #    'shx': 'states.shx',
    #    'prj': 'states.prj',
    #    'dbf': 'states.dbf'
    # }
    # 'data' is required (there may be a 'schema' alternative later, for creating empty featuretypes)
    # 'workspace' is optional (GeoServer's default workspace is used by... default)
    # 'name' is required
    ft = cat.create_featurestore("test", shapefile_plus_sidecars, geosolutions)

It is possible to create JDBC Virtual Layers too. The code below allows you to create a new SQL View called my_jdbc_vt_test defined by a custom sql:

    from geoserver.catalog import Catalog
    from geoserver.support import JDBCVirtualTable, JDBCVirtualTableGeometry, JDBCVirtualTableParam
    cat = Catalog('http://localhost:8080/geoserver/rest/', 'admin', '****')
    store = cat.get_store('postgis-geoserver')
    geom = JDBCVirtualTableGeometry('newgeom', 'LineString', '4326')
    ft_name = 'my_jdbc_vt_test'
    epsg_code = 'EPSG:4326'
    sql = 'select ST_MakeLine(wkb_geometry ORDER BY waypoint) As newgeom, assetid, runtime from waypoints group by assetid,runtime'
    keyColumn = None
    parameters = None
    jdbc_vt = JDBCVirtualTable(ft_name, sql, 'false', geom, keyColumn, parameters)
    ft = cat.publish_featuretype(ft_name, store, epsg_code, jdbc_virtual_table=jdbc_vt)

This example shows how to easily update a layer property. The same approach may be used with every catalog resource:

    ne_shaded = cat.get_layer("ne_shaded")
    ne_shaded.enabled = True
    cat.save(ne_shaded)
    cat.reload()

Deleting a store from the catalog requires purging all the associated layers first. This can be done by doing something like this:

    st = cat.get_store("ne_shaded")
    cat.delete(ne_shaded)
    cat.reload()
    cat.delete(st)
    cat.reload()

There are some functionalities allowing to manage the ImageMosaic coverages. It is possible to create new ImageMosaics, add granules to them,
and also read the coverages metadata, modify the mosaic Dimensions and finally query the mosaic granules and list their properties. The gsconfig methods map the REST APIs for ImageMosaic.

In order to create a new ImageMosaic layer, you can prepare a zip file containing the properties files for the mosaic configuration. Refer to the GeoTools ImageMosaic Plugin guide in order to get details on the mosaic configuration. The package contains an already configured zip file with two granules. You need to update or remove the datastore.properties file before creating the mosaic, otherwise you will get an exception.

    from geoserver.catalog import Catalog
    cat = Catalog("http://localhost:8180/geoserver/rest")
    cat.create_imagemosaic("NOAAWW3_NCOMultiGrid_WIND_test", "NOAAWW3_NCOMultiGrid_WIND_test.zip")

By default cat.create_imagemosaic tries to configure the layer too. If you want to create the store only, you can specify the following parameter:

    cat.create_imagemosaic("NOAAWW3_NCOMultiGrid_WIND_test", "NOAAWW3_NCOMultiGrid_WIND_test.zip", "none")

In order to retrieve the ImageMosaic coverage store from the catalog you can do this:

    store = cat.get_store("NOAAWW3_NCOMultiGrid_WIND_test")

It is possible to add more granules to the mosaic at runtime. With the following method you can add granules already present on the machine local path:

    cat.harvest_externalgranule("file://D:/Work/apache-tomcat-6.0.16/instances/data/data/MetOc/NOAAWW3/20131001/WIND/NOAAWW3_NCOMultiGrid__WIND_000_20131001T000000.tif", store)

The method below allows you to send granules remotely via POST to the ImageMosaic. The granules will be uploaded and stored in the ImageMosaic index folder:

    cat.harvest_uploadgranule("NOAAWW3_NCOMultiGrid__WIND_000_20131002T000000.zip", store)

To delete an ImageMosaic store, you can follow the standard approach, by deleting the layers first. ATTENTION: at this time you need to manually clean up the data dir from the mosaic granules and, in case you used a DB datastore, you must also drop the mosaic tables.

    layer = cat.get_layer("NOAAWW3_NCOMultiGrid_WIND_test")
    cat.delete(layer)
    cat.reload()
    cat.delete(store)
    cat.reload()

The method below allows you to load and update the coverage metadata of the ImageMosaic. You need to do this for every coverage of the ImageMosaic, of course.

    coverage = cat.get_resource_by_url("http://localhost:8180/geoserver/rest/workspaces/natocmre/coveragestores/NOAAWW3_NCOMultiGrid_WIND_test/coverages/NOAAWW3_NCOMultiGrid_WIND_test.xml")
    coverage.supported_formats = ['GEOTIFF']
    cat.save(coverage)

By default the ImageMosaic layer does not have the coverage dimensions configured. It is possible to use the coverage metadata to update and manage the coverage dimensions. ATTENTION: notice that the presentation parameter accepts only one among the following values {'LIST', 'DISCRETE_INTERVAL', 'CONTINUOUS_INTERVAL'}.

    from geoserver.support import DimensionInfo
    timeInfo = DimensionInfo("time", "true", "LIST", None, "ISO8601", None)
    coverage.metadata = ({'dirName': 'NOAAWW3_NCOMultiGrid_WIND_test_NOAAWW3_NCOMultiGrid_WIND_test', 'time': timeInfo})
    cat.save(coverage)

Once the ImageMosaic has been configured, it is possible to read the coverages along with their granule schema and granule info:

    from geoserver.catalog import Catalog
    cat = Catalog("http://localhost:8180/geoserver/rest")
    store = cat.get_store("NOAAWW3_NCOMultiGrid_WIND_test")
    coverages = cat.mosaic_coverages(store)
    schema = cat.mosaic_coverage_schema(coverages['coverages']['coverage'][0]['name'], store)
    granules = cat.mosaic_granules(coverages['coverages']['coverage'][0]['name'], store)

The granules details can be easily read by doing something like this:

    granules['crs']['properties']['name']
    granules['features']
    granules['features'][0]['properties']['time']
    granules['features'][0]['properties']['location']
    granules['features'][0]['properties']['run']

When the mosaic grows up and starts having a huge set of granules, you may need to filter the granules query through a CQL filter on the coverage schema attributes:

    granules = cat.mosaic_granules(coverages['coverages']['coverage'][0]['name'], store, "time >= '2013-10-01T03:00:00.000Z'")
    granules = cat.mosaic_granules(coverages['coverages']['coverage'][0]['name'], store, "time >= '2013-10-01T03:00:00.000Z' AND run = 0")
    granules = cat.mosaic_granules(coverages['coverages']['coverage'][0]['name'], store, "location LIKE '%20131002T000000.tif'")
|
airwing_geoinfo
|
此脚本用于实现geoserver的自动发布地图服务功能,以及获取发布后地图范围的经纬度信息(左下角和右上角经纬度)
|
airwing_gwc
|
此脚本用于geoserver的自动生成瓦片功能
|
airworkflowdemo
|
air_backend_test
|
airworks
|
airworks: Python SDK for airworks api_gateway
|
airworks-api
|
airworks: Python SDK for airworks api_gateway
|
airworks-juvemark
|
Failed to fetch description. HTTP Status Code: 404
|
airy
|
Airy is a new Web application development framework. In contrast to most currently available frameworks, Airy
doesn’t use the standard notion of HTTP requests and pages.Instead, it makes use of WebSockets (via Socket.io) and
provides a set of tools to let you focus on the interface,
not content delivery.Currently Airy supports MongoDB only. We will support other
NoSQL databases, but we have no plans for supporting SQL.RequirementsAiry will install most required modules itself when you create a new
project, so all you need is:Python 2.6+MongoDBInstallationpip install airyThis will install Airy itself and theairy-admin.pyscript.UsageOnce you have it installed, you should be able to useairy-admin.pyTo create a new project, open a terminal and do:airy-admin.py startproject project_name
cd project_name/
python manage.py update_ve
python manage.py runserverYou should have it running locally on port 8000. Open your browser
and navigate tohttp://localhost:8000Note: if it complains about a “Connection Error” it means you have
no MongoDB running, or it's dead.
About: Airy was created by Leto, a startup agency based in London, UK.
Check out the Leto website for help and support: http://letolab.com
|
airyconf
|
airyconfLightweight configuration managerInspired byzifeo/dataconf,
implemented with no required dependencies for Python 3.6+
(except Python 3.6 requires dataclasses package to support the corresponding functionality). Supports typing, dataclasses and custom classes:fromtypingimportListfromdataclassesimportdataclassfromairyconfimportparseclassCustomObject(object):def__init__(self,a:float):self.a=adef__eq__(self,other):returnisinstance(other,CustomObject)andother.a==self.a@dataclassclassSomeDataclass:b:strc:List[CustomObject]json_config='{"SomeDataclass": {"b": "str", "c": [{"a": 1.0}, {"a": 2.0}]}}'expected=SomeDataclass('str',[CustomObject(1.),CustomObject(2.)])assertparse(json_config,SomeDataclass)==expectedOptionally, supports yaml if installed:fromairyconfimportparse_configparse_config('file/to/config.yaml',SomeDataclass)Thanks torossmacarthur/typing-compatby Ross MacArthur,
typing is supported for Python 3.6+ (the single-file package is copied to avoid dependencies).Package is under development. TODO:Add documentationGithub actions: tests & coverage & package updateMultiple typing arguments (Tuple[int, CustomClass, str])Nested typing arguments (List[Dict[str, List[int]]])Specify classes in config (a: SomeCustomClass({param: 1}))
|
airypi
|
Information on how to use this library can be found <a href=”https://www.airypi.com/docs/”>here</a>
|
airypi-rpi
|
Information on how to use this library can be found <a href=”https://www.airypi.com/docs/”>here</a>
|
ais
|
Created Module
|
ais2gpd
|
No description available on PyPI.
|
aisa
|
Auto-Information State AggregationThis is a python module aimed at partitioning networks through the maximization of Auto-Information.
If you use this code, please cite the following paper: State aggregations in Markov chains and block models of networks, Faccin, Schaub and Delvenne, ArXiv 2005.00337. The module also provides a function to compute the Entrogram of a network with a suitable partition.
The Entrogram provides a concise, visual characterization of the Markovianity of the dynamics projected to the partition space.
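As a purely conceptual illustration of the projection idea described here (this is not the aisa API; the snippet only uses numpy and networkx, which this entry lists as requirements, and the two-block partition comes from the karate club node attribute), one can lump a random walk by a partition and measure the one-step auto-information of the projected chain:
import networkx as nx
import numpy as np

G = nx.karate_club_graph()
nodes = list(G.nodes())
A = nx.to_numpy_array(G, nodelist=nodes)
deg = A.sum(axis=1)
P = A / deg[:, None]                 # random-walk transition matrix
pi = deg / deg.sum()                 # stationary distribution of the walk (undirected graph)

# two-block partition taken from the 'club' node attribute
labels = sorted({G.nodes[n]["club"] for n in nodes})
block = np.array([labels.index(G.nodes[n]["club"]) for n in nodes])
k = len(labels)

# joint distribution of (current block, next block) under stationarity
J = np.zeros((k, k))
for a in range(k):
    for b in range(k):
        J[a, b] = (pi[block == a][:, None] * P[np.ix_(block == a, block == b)]).sum()

p = J.sum(axis=1)                    # marginal distribution of the projected state
auto_info = float((J * np.log(J / np.outer(p, p))).sum())
print(f"one-step auto-information of the projected chain: {auto_info:.4f} nats")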
In case you use this, please cite the following paper: Entrograms and coarse graining of dynamics on complex networks, Faccin, Schaub and Delvenne, Journal of Complex Networks, 6(5) p. 661-678 (2018), ArXiv 1711.01987.
Getting the code. Requirements: the following modules are required for aisa to work properly: numpy and scipy, networkx, tqdm (optional).
Install: download the code here and unzip locally, or clone the git repository from Github. On the terminal run:
pip install --user path/to/module
Uninstall: on the terminal run:
$ pip uninstall aisa
Usage: read the online documentation that describes all classes and functions of the module. Some simple notebook examples on module usage are provided in the examples subfolder: a simple example of computing and drawing the entrogram and detecting the partition that maximizes the auto-information in a well-known small social network (see in nbviewer), and an example on how to build a range-dependent network and find the partition that maximizes auto-information (see in nbviewer).
License. Copyright: Mauro Faccin (2020). AISA is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.AISA is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.Check LICENSE.txt for details.
|
aisak
|
No description available on PyPI.
|
aisapi
|
AIS-api: Python wrapper package for the AIS API.
Install: pip install aisapi
Look at the file example.py for a usage example.
Develop:
git clone https://github.com/sviete/AIS-api.git
Publish the new version to pip:
rm -rf dist
python3 setup.py sdist bdist_wheel
twine upload dist/*
|
ais-api
|
Failed to fetch description. HTTP Status Code: 404
|
aisaratuners
|
aisaratuners is an open-source optimization library to automate and expedite hyperparameter search. aisaratuners has two modules: aisara_general_tuner.py, which is framework agnostic, and aisara_keras_tuner.py, which is tailored for the Keras API. aisaratuners leverages the AiSara algorithm, Latin hypercube sampling, and the concept of search space reduction to reach the optimum hyperparameters quickly. Change Log: 1.4.9 (19/11/2020), 1.5.2 (20/1/2022)
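No usage snippet is included in this entry; as a rough conceptual sketch of the two techniques named above, Latin hypercube sampling followed by a search-space reduction step, and explicitly not the aisaratuners API (the objective, bounds and helper names are made up), one could write:
import numpy as np

def latin_hypercube(n_samples, bounds, seed=None):
    # one point per stratum and per dimension, strata shuffled independently
    rng = np.random.default_rng(seed)
    dims = len(bounds)
    u = (rng.random((n_samples, dims)) + np.arange(n_samples)[:, None]) / n_samples
    for d in range(dims):
        rng.shuffle(u[:, d])
    lows = np.array([b[0] for b in bounds], dtype=float)
    highs = np.array([b[1] for b in bounds], dtype=float)
    return lows + u * (highs - lows)

def toy_objective(lr, units):
    # stand-in for a real validation loss
    return (np.log10(lr) + 3.0) ** 2 + (units - 96.0) ** 2 / 1000.0

bounds = [(1e-4, 1e-1), (16, 256)]            # learning rate, hidden units
samples = latin_hypercube(8, bounds, seed=0)
scores = [toy_objective(lr, units) for lr, units in samples]
best_lr, best_units = samples[int(np.argmin(scores))]
# shrink the search space around the current best and sample again
bounds = [(best_lr / 3, best_lr * 3), (max(16, best_units - 32), min(256, best_units + 32))]
refined = latin_hypercube(8, bounds, seed=1)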
|
ai-sas-connector
|
Example Of Pandas Columns. This is an AI SAS API connector.
|
aisastack
|
AisaStack: generator tools for Full-Stack Developers who work with the Django Rest Framework and Vue JS.
Motivation 01: Build modular programs, Eric Raymond's 17 Unix Rules. Motivation 02: Small is beautiful, make each program do one thing well, Mike Gancarz, The UNIX Philosophy. In DRF we actually use ViewSet; in our opinion it is something that approaches the motivation above.
You can see how this works here. Included frontend framework: Vue JS.
Installation with pip:
$ pip install aisastack
or manual install by cloning this project:
$ cd aisastack
$ python setup.py install
Stack names available: there are several UI stacks available: blight: Bootstrap 4 Admin Light (vue) (you can see the package.json for detailed dependencies); nativescript: NativeScript Vue (coming soon).
How to: command to clone a stack:
$ aisvue --clone <stack-name>
$ cd <stack-name>
Create an app:
$ aisvue --build <stack-name>
A prompt will appear that you can fill in with the model name in the Django Rest backend.
|
aisating
|
First example: this is a loop-printing example. You can use Github-flavored Markdown to write your content.
|
aisc
|
AISC Tools: small helper tools.
Installation: pip install aisc
URL Tools:
from aisc import url
# Expand shortened url
url.expand(url)
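For context, expanding a shortened URL generally means following its redirects to the final location; a minimal generic sketch of that technique using the requests library (this is not the aisc implementation, and the example link is hypothetical):
import requests

def expand_url(short_url, timeout=10):
    # follow redirects without downloading the body, then read the final URL
    response = requests.head(short_url, allow_redirects=True, timeout=timeout)
    return response.url

print(expand_url("https://bit.ly/example"))  # hypothetical shortened link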
|
aiscalator
|
AIscalatorFree software: Apache Software License 2.0Website:http://www.aiscalate.comDocumentation:https://aiscalator.readthedocs.io/en/latest/Bugs:https://github.com/aiscalate/aiscalator/issuesKey FeaturesAiscalator is a toolbox to enable your team streamlining
processes from innovation to productization with:Jupyter workbenchExplore Data, Prototype SolutionsDocker wrapper toolsShare Code, Deploy Reproducible EnvironmentsAirflow machinerySchedule Tasks, Refine ProductsData Science and Data Engineering best practicesQuick StartInstallationTest if prerequisite softwares are installed:docker--versiondocker-compose--versionpip--versionInstall AIscalator tool:git clone https://github.com/Aiscalate/aiscalator.git
cd aiscalator/
make installGreat, we are now ready to use the AIscalator!The following setup commands are completely optional because they are dealing with
prebuilding Docker images. If you choose not to do it at this point, they
will get built later on whenever they are required.However, since producing a Docker image requires a certain amount of time
to download, install packages, and sometimes even compiling them, these
installation steps can be initiated right away all at once. Thus, you
should be free to go enjoy a nice coffee break!You might want to customize your environment with the AIscalator, this
will ask you few questions:aiscalator setupBuild docker images to run Jupyter environments:aiscalator jupyter setupBuild docker image to run Airflow:# aiscalator airflow setup <path-to-workspace-folder>
# for example,
aiscalator airflow setup $PWD
Start working: AIscalator commands dealing with Jupyter define tasks in Airflow jargon; in our case, they are all wrapped inside a Docker container. We also refer to them as Steps. AIscalator commands about Airflow, on the other hand, are made to author, schedule and monitor
DAGs (Directed Acyclic Graphs). They define how a workflow is composed of multiple
steps, their dependencies and execution times or triggers.JupyterCreate a new Jupyter notebook to work on, define corresponding AIscalator step:# aiscalator jupyter new <path-to-store-new-files>
# For example,
aiscalator jupyter new project
# (CTRL + c to kill when done)Or you can edit an existing AIscalator step:# aiscalator jupyter edit <aiscalator step>
# For example, if you cloned the git repository:
aiscalator jupyter edit resources/example/example.conf
# (CTRL + c to kill when done)Run the step without GUI:# aiscalator jupyter run <aiscalator task>
# For example, if you cloned the git repository:
aiscalator jupyter run resources/example/example.confAirflowStart Airflow services:aiscalator airflow startCreate a new AIscalator DAG, define the airflow job:# aiscalator airflow new <path-to-store-new-files>
# For example,
aiscalator airflow new project
# (CTRL + c to kill when done)Or you can edit an existing AIscalator DAG:# aiscalator airflow edit <aiscalator DAG>
# For example, if you cloned the git repository:
aiscalator airflow edit resources/example/example.conf
# (CTRL + c to kill when done)Schedule AIscalator DAG into local airflow dags folder:# aiscalator airflow push <aiscalator DAG>
# For example, if you cloned the git repository:
aiscalator airflow push resources/example/example.confStop Airflow services:aiscalator airflow stopHistory0.1.0 (2018-11-07)First Alpha release on PyPI.0.1.11 (2019-04-26)Added docker_image.docker_extra_options list feature0.1.13 (2019-06-23)Handle errors in Jupytext conversionsaiscalator run subcommand exit code propagated to cliConcurrent aiscalator run commands is possible
|
aiscan
|
AI ScanThis is a scanner that will scan your AI models for problems. Currently it focuses on bias testing. It is currently pre-alpha.How to UseFirst install it:pip install aiscanRun it in this manner (currently supports models from HuggingFace’s repository):aiscan --huggingface roberta-base --task fill-maskHere is another example with a different bias task.aiscan --huggingface cross-encoder/nli-distilroberta-base --task zero-shot-classificationThat’s it for now. More will come.Future WorkMore bias tests. More metrics for bias testing based on the research in the field.Integration with other types of testing (eg. adversarial robustness)More kinds of models besides HuggingFace models. We are especially interested in MLFlow integration.Documentation.Please contribute if you can. Help is always helpful.LicenseApacheCreditA project of IQT Labs.
|
ai-scholar-toolbox
|
ai-scholar-toolbox: the python package provides an efficient way to get statistics of a scholar on Google Scholar given academic information of the scholar.
Install package: pip install ai-scholar-toolbox
Download Browser Binary and Browser Driver: by default, our package uses a Chromium binary file. Please take care of the compatibility between the binary file and the browser driver. Also, if your OS is not linux based or you install the browser in a directory other than the default directory, please refer to Selenium Chrome requirements when instantiating the browser driver.
Download: Download Chrome browser driver
Install (Linux):
sudo apt update
sudo apt upgrade
wget https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb
sudoaptinstall./google-chrome-stable_current_amd64.debGet Started in ai-scholar-toolboxInstantiate aScholarSearchobject. This will download 78k dataset to the local machine automatically.fromScholarSearchimportScholarSearchscholar_search=ScholarSearch()Set attributes for the class:# set the similarity ratio of comparing two strings when searching on Google Scholar webpage. If not given, default is 0.8.scholar_search.similarity_ratio=0.8# set the path of browser driver.scholar_search.driver_path='../../chromedriver'# required: setupscholar_search.setup()Optional: In case that you want to get responses of a list of scholars, the class methodget_profiles()is implemented for you to load (could be multiple) json data files.# optionalscholar_search.get_profiles(['../review_data/area_chair_id_to_profile.json','../review_data/reviewer_id_to_profile.json'])Search candidate scholars by matching a specific query:If you want to input the information of a scholar on OpenReview and get related google scholar information, you can pass in a pythondictionarywith necessary features based on the OpenReview scholar profile page (for instance, fromZhijing's OpenReview profile). Note that filling in more information as recommended below will get a better search result.TODO: change the dict to be of a certain person.# keys that are required:# scholar_info_dict['content']['gscholar']: the link to Google Scholar profile in the OpenReview webpage. If cannot be found, you can either choose not to include it or pass in an empty string.# scholar_info_dict['content']['history']: the most updated history of the scholar in the OpenReview webpage. Previous history is not needed.# scholar_info_dict['content']['relations']: all relations that the scholar list in the OpenReview webpage. We recommend to list all the relations here. Only name is needed.# scholar_info_dict['content']['expertise']: all keywords that the scholar label their academic research field. We recommend to list all the expertise keywords here. 
Only keyword is needed.# Most recommended:scholar_info_dict={"profile":{"id":"~Zhijing_Jin1",# most important information to use"content":{"gscholar":"https://scholar.google.com/citations?user=RkI8h-wAAAAJ","history":[# second most important information to use{"position":"PhD student","institution":{"domain":"mpg.de","name":"Max-Planck Institute"}}],"relations":[{"name":"Bernhard Schoelkopf"},{"name":"Rada Mihalcea"},{"name":"Mrinmaya Sachan"},{"name":"Ryan Cotterell"}],"expertise":[{"keywords":["causal inference"]},{"keywords":["computational social science"]},{"keywords":["social good"]},{"keywords":["natural language processing"]}]}}}# Minimum required but least recommended:scholar_info_dict={"profile":{"id":"~Zhijing_Jin1","content":{}}}Then, you can pass the dictionary to the methodget_scholar()to get possible candidates.# query: python dictionary that you just generated.resp=scholar_search.get_scholar(query=scholar_info_dict,simple=True,top_n=3,print_true=True)respAlternatively, if you just want to input the name of a scholar and get possible google scholar candidates, you can pass the name as a string directly to the function as the following:# query: python str, the name of the scholar.resp=scholar_search.get_scholar(query='Zhijing Jin',simple=True,top_n=3,print_true=True)respSearch AlgorithmsThe algorithm can be explained as follows if the input query is a python dictionary:defget_candidates(openreview_dict,top_n_related):ifgs_sidinopenreview_dict:ifgs_sidin78k_scholar:returndict(78k_scholar.loc[78k_scholar[‘gs_sid’]==gs_sid])else:response=search_directly_on_google_scholar_by_gssid(gs_sid)returnresponseelse:name,email_suffix,position,organization,relations=extract_name_from_openreview_dict(openreview_dict)response_78k=search_scholar_on_78k(name)response_gs=search_scholar_on_google_scholar(name,email_suffix,position,organization,relations)response=select_final_candidates(response_78k,response_gs,top_n_related=top_n_related)returnResponseStatistics SummaryOur 78k dataset has 78,066 AI scholars in total. Please check our78k AI scholar datasetfor more details.Given all the chairs and reviewers in OpenReview (664 in total), our package achieves 93.02% precision, 85.11% recall, and 88.89% F1-score on a random subset of 50 scholars that don't havegs_sidincluded in the input dict.FAQTODO: add contentSupportIf you have any questions, bug reports, or feature requests regarding either the codebase or the models released in the projects section, please don't hesitate to post on ourGithub Issues page.LicenseThe package is licensed under the MIT license.TODO: check licenses
|
aisci
|
No description available on PyPI.
|
ais-cli
|
- AIS - Ais (ai shell) is an interactive command line AI tool powered by ChatGPT (GPT-3.5). Ais can translate your query into a bash command and explain it to you if you want. In this way, you can get rid of hours of searching for small tasks and increase your learning spectrum.
Getting started. Installation instructions (requires python 3.6+):
pip install ais-cli
Setup: set the OpenAI access key:
ais set ACCESS_KEY <KEY>
Running queries. Translate to command: you can just write what you want and it will be translated into a bash command.
ais • open rtsp://113.76.151.33/1 with ffplay without sound
──────────────Command───────────────────
ffplay-anrtsp://113.76.151.33/1
? Select action (Use arrow keys)
◌ ✅ Run this command
❔ Explain this command
❌ Cancel
Explain commands: if you select the "Explain this command" action, ChatGPT will explain the command for you.
? Select action (Use arrow keys)
◌ ✅ Run this command
❔ Explain this command
❌ Cancel
Regular questions: run ais ask for asking normal questions.
ais • ais ask Who is Ataturk?
───────────Result─────────────
Ataturk is the founder of modern Turkey and its first president.
Run system command in interactive mode: use the ! character as the first character.
ais • !whoami
knid
Run without interactive mode. Create bash command:
$ ais -c "create port scanner with bash"
Ask regular questions:
$ ais -c "ais ask Suggest me a horror movie"
|
aiscot
|
DescriptionThe Automatic Identification System to Cursor on Target gateway (AISCOT) transforms
automatic identification system (AIS) to Cursor on Target (CoT) for use with TAK
products such as ATAK, WinTAK & iTAK. Vessels sending AIS either
over the air (RF), through local networks (NMEA), or through internet aggregators
(AISHUB), will be displayed in TAK with appropriate icons, attitude, type, track,
bearing, speed, callsign and more. For more information on the TAK Product suite, see: https://ww.tak.gov
AISCOT was originally developed to support an open ocean boat race in the Northern
Pacific Ocean, as described in this article:http://ampledata.org/boat_race_support.htmlConcept of OperationsAISCOT can operate in two different modes, as described in detail below:AIS Over-the-air (RF)AIS Aggregator (AISHUB)AIS Over-the-air Operation (RF)Receive AIS data from a VHF AIS receiver, such as the
MegwattdAISy+. From there
AIS can be decoded byAIS Dispatcherand
forwarded to AISCOT to be transformed to COT and transmitted to COT destinations.AIS Aggregator Operation (AISHUB.com)Receive AIS data from theAISHUBservice.
Requires a subscription to AISHUB.Support DevelopmentInstallationAISCOT requires Python 3.6 or above.AISCOT functionality is provided by a command-line tool calledaiscot, which can be
installed several ways.Installing as a Debian/Ubuntu Package [Use Me!]:$ wget https://github.com/ampledata/pytak/releases/latest/download/python3-pytak_latest_all.deb
$ sudo apt install -f ./python3-pytak_latest_all.deb
$ wget https://github.com/ampledata/aiscot/releases/latest/download/python3-aiscot_latest_all.deb
$ sudo apt install -f ./python3-aiscot_latest_all.debInstall from the Python Package Index [Alternative]:$ python3 -m pip install -U pytak
$ python3 -m pip install -U aiscotInstall from this source tree [Developer]:$ git clone https://github.com/ampledata/aiscot.git
$ cd aiscot/
$ python3 setup.py aiscotUsageAISCOT can be configured with a INI-style configuration file, or using
environmental variables.Command-line options:usage: aiscot [-h] [-c CONFIG_FILE] [-p PREF_PACKAGE]
optional arguments:
-h, --help show this help message and exit
-c CONFIG_FILE, --CONFIG_FILE CONFIG_FILE
Optional configuration file. Default: config.ini
-p PREF_PACKAGE, --PREF_PACKAGE PREF_PACKAGE
Optional connection preferences package zip file (aka Data Package).
Configuration options:
COT_URL (str, default: udp://239.2.3.1:6969): URL to CoT destination. Must be a URL, e.g. tcp://1.2.3.4:1234 or tls://...:1234, etc. See PyTAK for options, including TLS support.
AIS_PORT (int, default: 5050): AIS UDP Listen Port, for use with Over-the-air (RF) AIS.
COT_STALE (int, default: 3600): CoT Stale period ("timeout"), in seconds. Default 3600 seconds (1 hour).
COT_TYPE (str, default: a-u-S-X-M): Override COT Event Type ("marker type").
FEED_URL (str, optional): AISHUB feed URL. See AISHUB usage notes in the README below.
KNOWN_CRAFT (str, optional): Known Craft hints file. CSV file containing callsign/marker hints.
INCLUDE_ALL_CRAFT (bool, optional): If using KNOWN_CRAFT, still include other craft not in our KNOWN_CRAFT list.
IGNORE_ATON (bool, optional): If set, aiscot will ignore AIS from Aids to Navigation (buoys, etc.).
See example-config.ini in the source tree for an example configuration.
AISHUB usage notes: AISHUB.com requires registration. Once registered the site will provide you with a
Username that you’ll use with their feed. You’ll also need to specify a Bounding Box
when accessing the feed.The AISHUB_URL must be specified as follows:https://data.aishub.net/ws.php?format=1&output=json&compress=0&username=AISHUB_USERNAME&latmin=BBOX_LAT_MIN&latmax=BBOX_LAT_MAX&lonmin=BBOX_LON_MON&lonmax=BBOX_LON_MAXReplacingAISHUB_USERNAMEwith your AISHUB.com username, and specifying the
Bounding Box is specified as follows:latminsigned floatThe minimum latitude of the Bounding Box (degrees from Equator) as a signed float
(use negative sign for East:-).latmaxsigned floatThe maximum latitude of the Bounding Box (degrees from Equator) as a signed float
(use negative sign for East:-).lonminsigned floatThe minimum longitude of the Bound Box (degrees from Prime Meridian) as a signed float
(use negative sign for North:-).lonmaxsigned floatThe maximum longitude of the Bound Box (degrees from Prime Meridian) as a signed float
(use negative sign for North:-).For example, the following Bound Box paints a large swath around Northern California:latmin=35&latmax=38&lonmin=-124&lonmax=-121. This can be read as:
“Between 35° and 38° latitude & -121° and -124° longitude”.Example SetupThe following diagram shows an example setup of AISCOT utilizing a dAISy+ AIS receiver
with an outboard Marine VHF antenna, a Raspberry Pi running aisdispatcher and AISCOT,
forwarding COT to a TAK Server and WinTAK & ATAK clients. (OV-1)Database UpdateOccasional updates to the YADD Ship Name database can be found at:http://www.yaddnet.org/pages/php/test/tmp/Updates to the MID database can be found at: TKSourceGithub:https://github.com/ampledata/aiscotAuthorGreg [email protected]://ampledata.org/Copyrightaiscot Copyright 2023 Greg Albrecht <[email protected]>pyAISm.py Copyright 2016 Pierre PayenLicenseCopyright 2023 Greg Albrecht <[email protected]>Licensed under the Apache License, Version 2.0 (the “License”);
you may not use this file except in compliance with the License.
You may obtain a copy of the License athttp://www.apache.org/licenses/LICENSE-2.0Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an “AS IS” BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.pyAISm.py is licensed under the MIT License. See aiscot/pyAISm.py for details.
|
aiscpy
|
Query manager for selecting AISC profiles. AISC stands for the American Institute of Steel Construction. AISCPy provides a database and functions to query it, so that steel structural analysis applications can be built on top of it.
Profile properties and units:
W, S, M, HP Shapes
C, MC Shapes
A = Cross-sectional area of member (in.^2)
d = Depth of member, parallel to Y-axis (in.)
h = Depth of member, parallel to Y-axis (in.)
tw = Thickness of web of member (in.)
bf = Width of flange of member, parallel to X-axis (in.)
b = Width of member, parallel to X-axis (in.)
tf = Thickness of flange of member (in.)
k = Distance from outer face of flange to web toe of fillet (in.)
k1 = Distance from web centerline to flange toe of fillet (in.)
T = Distance between fillets for wide-flange or channel shape = d(nom)-2*k(det) (in.)
gage = Standard gage (bolt spacing) for member (in.) (Note: gages for angles are available by viewing comment box at cell K18.)
Ix = Moment of inertia of member taken about X-axis (in.^4)
Sx = Elastic section modulus of member taken about X-axis (in.^3)
rx = Radius of gyration of member taken about X-axis (in.) = SQRT(Ix/A)
Iy = Moment of inertia of member taken about Y-axis (in.^4)
Sy = Elastic section modulus of member taken about Y-axis (in.^3)
ry = Radius of gyration of member taken about Y-axis (in.) = SQRT(Iy/A)
Zx = Plastic section modulus of member taken about X-axis (in.^3)
Zy = Plastic section modulus of member taken about Y-axis (in.^3)
rts = SQRT(SQRT(Iy*Cw)/Sx) (in.)
xp = horizontal distance from designated member edge to plastic neutral axis (in.)
yp = vertical distance from designated member edge to plastic neutral axis (in.)
ho = Distance between centroid of flanges, d-tf (in.)
J = Torsional moment of inertia of member (in.^4)
Cw = Warping constant (in.^6)
C = Torsional constant for HSS shapes (in.^3)
a = Torsional property, a = SQRT(ECw/GJ) (in.)
E = Modulus of elasticity of steel = 29,000 ksi
G = Shear modulus of elasticity of steel = 11,200 ksi
Wno = Normalized warping function at a point at the flange edge (in.^2)
Sw = Warping statical moment at a point on the cross section (in.^4)
Qf = Statical moment for a point in the flange directly above the vertical edge of the web (in.^3)
Qw = Statical moment at the mid-depth of the section (in.^3)
x(bar) = Distance from outside face of web of channel shape or outside face of angle leg to Y-axis (in.)
y(bar) = Distance from outside face of flange of WT or angle leg to Y-axis (in.)
eo = Horizontal distance from the outer edge of a channel web to its shear center (in.) = (approx.) tf*(d-tf)^2*(bf-tw/2)^2/(4*Ix)-tw/2
xo = x-coordinate of shear center with respect to the centroid of the section (in.)
yo = y-coordinate of shear center with respect to the centroid of the section (in.)
ro(bar) = Polar radius of gyration about the shear center = SQRT(xo^2+yo^2+(Ix+Iy)/A) (in.)
H = Flexural constant, H = 1-(xo^2+yo^2)/ro(bar)^2
LLBB = Long legs back-to-back for double angles
SLBB = Short legs back-to-back for double angles
h(flat) = The workable flat (straight) dimension along the height, h (in.)
b(flat) = The workable flat (straight) dimension along the width, b (in.)
A(surf) = The total surface area of a rectangular or square HSS section (ft.^2/ft.)
STD = Standard weight (Schedule 40) pipe section
XS = Extra strong (Schedule 80) pipe section
XXS = Double-extra strong pipe section
Want to contribute? Please contact the email address listed at the end.
Want to learn? Please contact the email address listed at the end.
AISC: American Institute of Steel Construction
Author: Rodrigo Jimenez
Email: [email protected]
Source: The shapes contained in this database are taken from the AISC Version 13.0
"Shapes Database" CD-ROM Version (12/2005), as well as those listed in the
AISC 13th Edition Manual of Steel Construction (12/2005).
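As a small worked illustration of two of the formulas listed above, rx = SQRT(Ix/A) and ry = SQRT(Iy/A), and not a call into the aiscpy API (the section property values are hypothetical):
import math

# hypothetical W-shape properties, in the units used above
A = 14.7    # cross-sectional area (in.^2)
Ix = 510.0  # moment of inertia about X-axis (in.^4)
Iy = 53.4   # moment of inertia about Y-axis (in.^4)

rx = math.sqrt(Ix / A)  # radius of gyration about X-axis (in.)
ry = math.sqrt(Iy / A)  # radius of gyration about Y-axis (in.)
print(f"rx = {rx:.2f} in., ry = {ry:.2f} in.")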
|
aiscraper
|
No description available on PyPI.
|
ai-scraper-toolkit
|
No description available on PyPI.
|
ais-data-to-tspi
|
No description available on PyPI.
|
aisdb
|
Description. Package features:
- SQL database for storing AIS position reports and vessel metadata
- Vessel position cleaning and trajectory modeling
- Utilities for streaming and decoding AIS data in the NMEA binary string format (see Base Station Deployment)
- Integration with external datasources including depth charts, distances from shore, vessel geometry, etc.
- Network graph analysis, MMSI deduplication, interpolation, and other processing utilities
- Data visualization
Web Interface: https://aisdb.meridian.cs.dal.ca/
Docs: https://aisdb.meridian.cs.dal.ca/doc/readme.html
Source Code: https://git-dev.cs.dal.ca/meridian/aisdb
What is AIS? Wikipedia: https://en.wikipedia.org/wiki/Automatic_identification_system
Description of message types: https://arundaleais.github.io/docs/ais/ais_message_types.html
Install: requires Python version 3.8 or newer.
Optionally requires SQLite (included in Python) or a PostgreSQL server (installed separately).
The AISDB Python package can be installed using pip.
It is recommended to install the package in a virtual Python environment such as venv.
python -m venv env_ais
source ./env_ais/*/activate
pip install aisdb
For information on installing AISDB from source code, see Installing from Source.
Documentation: an introduction to AISDB can be found here: Introduction. Additional API documentation: API Docs.
Code examples:
- Parsing raw format messages into a database
- Automatically generate SQL database queries
- Compute trajectories from database rows
- Vessel trajectory cleaning and MMSI deduplication
- Compute network graph of vessel movements between polygons
- Integrating data from web sources, such as depth charts, shore distance, etc.
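One of the features listed above is interpolation of vessel positions; as a purely conceptual sketch of that idea in plain numpy (not the aisdb API, with made-up coordinates), linear interpolation of a single track onto a regular time grid looks like this:
import numpy as np

# hypothetical track: epoch timestamps (s) and positions for one vessel
times = np.array([0.0, 600.0, 1500.0, 2100.0])
lats = np.array([44.60, 44.62, 44.66, 44.70])
lons = np.array([-63.55, -63.50, -63.42, -63.35])

# resample to a regular 5-minute grid
grid = np.arange(times[0], times[-1] + 1, 300.0)
lat_i = np.interp(grid, times, lats)
lon_i = np.interp(grid, times, lons)
for t, la, lo in zip(grid, lat_i, lon_i):
    print(f"t={t:6.0f}s lat={la:.4f} lon={lo:.4f}")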
|
aisdc
|
AI-SDCA collection of tools and resources for managing the statistical disclosure control of trained machine learning models. For a brief introduction, seeSmith et al. (2022).User GuidesA collection of user guides can be found in the 'user_stories' folder of this repository. These guides include configurable examples from the perspective of both a researcher and a TRE, with separate scripts for each. Instructions on how to use each of these scripts and which scripts to use are included in the README of the'user_stories'folder.ContentaisdcattacksContains a variety of privacy attacks on machine learning models, including membership and attribute inference.preprocessingContains preprocessing modules for test datasets.safemodelThe safemodel package is an open source wrapper for common machine learning models. It is designed for use by researchers in Trusted Research Environments (TREs) where disclosure control methods must be implemented. Safemodel aims to give researchers greater confidence that their models are more compliant with disclosure control.docsContains Sphinx documentation files.example_notebooksContains short tutorials on the basic concept of "safe_XX" versions of machine learning algorithms, and examples of some specific algorithms.examplesContains examples of how to run the code contained in this repository:How to simulate attribute inference attacksattribute_inference_example.py.How to simulate membership inference attacks:Worst case scenario attackworst_case_attack_example.py.LIRA scenario attacklira_attack_example.py.Integration of attacks into safemodel classessafemodel_attack_integration_bothcalls.py.risk_examplesContains hypothetical examples of data leakage through machine learning models as described in theGreen Paper.testsContains unit tests.DocumentationDocumentation is hosted here:https://ai-sdc.github.io/AI-SDC/Quick StartDevelopmentClone the repository and install the dependencies (safest in a virtual env):$ git clone https://github.com/AI-SDC/AI-SDC.git
$ cd AI-SDC
$ pip install -r requirements.txtThen run the tests:$ pip install pytest
$ pytest .Or run an example:$ python -m examples.lira_attack_exampleInstallation / End-userInstallaisdc(safest in a virtual env) and manually copy theexamplesandexample_notebooks.$ pip install aisdcThen to run an example:$ python attribute_inference_example.pyOr start upjupyter notebookand run an example.Alternatively, you can clone the repo and install:$ git clone https://github.com/AI-SDC/AI-SDC.git
$ cd AI-SDC
$ pip install .This work was funded by UK Research and Innovation Grant Number MC_PC_21033 as part of Phase 1 of the DARE UK (Data and Analytics Research Environments UK) programme (https://dareuk.org.uk/), delivered in partnership with HDR UK and ADRUK. The specific project was Guidelines and Resources for AI Model Access from TrusTEd Research environments (GRAIMATTER). This project has also been supported by MRC and EPSRC [grant number MR/S010351/1]: PICTURES.
|
aisdk
|
No description available on PyPI.
|
ais-dom
|
Open source home automation that puts local control and privacy first. Based on https://home-assistant.io
|
ais-dom-frontend
|
AI-Speaker FrontendThis is the repository for the officialAI-Speakerfrontend.LicenseAI-Speaker is open-source and Apache 2 licensed. Feel free to browse the repository, learn and reuse parts in your own projects.
|
aise
|
Functions used to analyze data in science educationUsageInstallationRequirementsCompatibilityLicenceAuthorsaisewas written byJohn J.H.Lin.
|
aisearch
|
No description available on PyPI.
|
aisearchlab-py-commons
|
AiSearchLab common python library. Contains a bunch of common python functions and classes used by internal projects:
- GCP Logging adapter
- Collections stuff
- Elastic singleton
- Simple coding functions for obfuscation
Prerequisites: pip3 install bump2version
Deployment: use cloud-build.sh to send build to Google Build.
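No code is shown in this entry; as a generic illustration of what an "Elastic singleton" usually means, a lazily created client instance shared across a project, and explicitly not this library's API:
import threading

class ElasticSingleton:
    """Lazily create one shared client object and reuse it everywhere."""
    _instance = None
    _lock = threading.Lock()

    @classmethod
    def get(cls, factory):
        # factory is any zero-argument callable that builds the real client
        if cls._instance is None:
            with cls._lock:
                if cls._instance is None:
                    cls._instance = factory()
        return cls._instance

# usage: client = ElasticSingleton.get(lambda: make_elasticsearch_client())
# where make_elasticsearch_client is whatever constructor your project uses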
|
aisee
|
An open-source library for computer vision built on top of PyTorch and Timm libraries. It provides an easy-to-use interface for training and predicting with State-of-the-Art neural networks.AISee key features🤗 Simple interface for training and predicting using timm library models.📁 Easily load images from a folder, a pandasDataFrameor a single image path.🏋🏽♂️ Train SOTA neural networks from pre-trained weights or from scratch in very few lines of code.🖼️ Supports multiclass and multilabel image classification tasks.🔨 We take care ofDataLoaders, image transformations and training and inference loops.InstallationInstall AISee using pip.pipinstallaiseeQuick tourHere's an example of how to quickly train a model using AISee. We just have to initialize aVisionClassifiermodel and create aTrainer. As easy as it gets!fromaiseeimportTrainer,VisionClassifier# Initialize a VisionClassifier modelmodel=VisionClassifier(model_name="vgg11_bn",num_classes=4,)# Create a Trainertrainer=Trainer(base_model=model,data=f"animals/",output_dir="trained_weights.pt",)# Traintrainer.train()To predict callpredictmethod, we take care of the rest:# Predicttrainer.base_model.predict("animals/without_label")Try it on Google ColabGetting startedVisitAISee Getting startedto get an overview of how AISee works.ExploreAISee Documentationpage for a detailed guide and comprehensive API reference.Check outAISee Examplesfor Jupyter Notebook examples and tutorials, showcasing how to use AISee effectively in various scenarios.ContributingWe value community contributions, and we encourage you to get involved in the continuous development of AISee. Please refer toAISee Developmentpage for guidelines on how to contribute effectively.We also appreciate your feedback which helps us develop a robust and efficient solution for computer vision tasks. Together, let's make AISee the go-to library for AI practitioners and enthusiasts alike.Contact UsFor any questions or inquiries regarding the AISee library or potential collaborations, please feel free to contact us [email protected] de Ingeniería del Conocimiento (IIC)IICis a non-profit R&D centre founded in 1989 that has been working on Big Data analysis and Artificial Intelligence for more than 30 years. Its value proposition is the development of algorithms and analytical solutions based on applied research tailored to different businesses in different sectors such as energy, health, insurance and talent analytics.
|
aisegcell
|
aiSEGcell - OverviewThis repository contains atorchimplementation of U-Net (Ronneberger et al., 2015).
We providetrainedmodels to semantically segment nuclei and whole cells in bright field images.
Please cite this paper if you are using this code in your research.
Contents: Installation (Virtual environment, pip installation, Source installation), Data, Training, Trained models, Testing, Predicting, napari plugin, Image annotation tools, Troubleshooting & support, Citation.
Installation: if you do not have python installed already, we recommend installing it using the Anaconda distribution. aisegcell was tested with python 3.8.6.
Virtual environment setup: if you do not use an IDE that handles virtual environments for you (e.g. PyCharm), use your command line application (e.g. Terminal) and
one of the many virtual environment tools (seehere). We will
usecondaCreate new virtual environmentcondacreate-naisegcellpython=3.8.6Activate virtual environmentcondaactivateaisegcellpip installationRecommended if you do not want to develop theaisegcellcode base.Installaisegcell# update pippipinstall-Upip==23.2.1
pipinstallaisegcell(Optional)GPUsgreatly speed up training and inference of U-Net and are available fortorch(v1.10.2) forWindowsandLinux. Check if yourGPU(s)are CUDA compatible
(Windows,Linux) and
update their drivers if necessary.Installtorch/torchvisioncompatible with your system.aisegcellwas tested withtorchversion1.10.2,torchvisionversion0.11.3, andcudaversion11.3.1. Depending on your OS, yourCPUorGPU(andCUDAversion) the installation may change# Windows/Linux CPUpipinstalltorch==1.10.2+cputorchvision==0.11.3+cpu-fhttps://download.pytorch.org/whl/cpu/torch_stable.html# Windows/Linux GPU (CUDA 11.3.X)pipinstalltorch==1.10.2+cu113torchvision==0.11.3+cu113-fhttps://download.pytorch.org/whl/cu113/torch_stable.html# macOS CPUpipinstalltorch==1.10.2torchvision==0.11.3Installpytorch-lightning.aisegcellwas tested with version1.5.9.# note the installation of v1.5.9 does not use pip install lightningpipinstallpytorch-lightning==1.5.9Source installationInstallation requires a command line application (e.g.Terminal) withgitandpythoninstalled.
If you operate onWindowswe recommend usingUbuntu on Windows.
Alternatively, you can installAnacondaand
useAnaconda Powershell Prompt. An introductory tutorial on how to usegitand GitHub can be foundhere.(Optional) If you useAnaconda Powershell Prompt, installgitthroughcondacondainstall-canacondagitclone the repository (considersshalternative)# change directorycd/path/to/directory/to/clone/repository/to
gitclonehttps://github.com/CSDGroup/aisegcell.gitNavigate to the cloned directorycdaisegcellInstallaisegcell# update pippipinstall-Upip==23.2.1as a userpipinstall.as a developer (in editable mode with development dependencies and pre-commit hooks)pipinstall-e".[dev]"pre-commitinstall(Optional)GPUsgreatly speed up training and inference of U-Net and are available fortorch(v1.10.2) forWindowsandLinux. Check if yourGPU(s)are CUDA compatible
(Windows,Linux) and
update their drivers if necessary.Installtorch/torchvisioncompatible with your system.aisegcellwas tested withtorchversion1.10.2,torchvisionversion0.11.3, andcudaversion11.3.1. Depending on your OS, yourCPUorGPU(andCUDAversion) the installation may change# Windows/Linux CPUpipinstalltorch==1.10.2+cputorchvision==0.11.3+cpu-fhttps://download.pytorch.org/whl/cpu/torch_stable.html# Windows/Linux GPU (CUDA 11.3.X)pipinstalltorch==1.10.2+cu113torchvision==0.11.3+cu113-fhttps://download.pytorch.org/whl/cu113/torch_stable.html# macOS CPUpipinstalltorch==1.10.2torchvision==0.11.3Installpytorch-lightning.aisegcellwas tested with version1.5.9.# note the installation of v1.5.9 does not use pip install lightningpipinstallpytorch-lightning==1.5.9DataU-Net is currently intended for single-class semantic segmentation. Input images are expected to be 8-bit or
16-bit greyscale images. Segmentation masks are expected to decode background as 0 intensity and all intensities
>0 are converted to a single intensity value (255). Consequently, different instances of a class (instance
segmentation) or multi-class segmentations are handled as single-class segmentations. Have a look atthis notebookfor a data example.TrainingTraining U-Net is as simple as calling the commandaisegcell_train. We provide anotebookon how to train
U-Net with a minimal working example.aisegcell_trainis available if you activate the virtual environment youinstalledand can be called with the following arguments:--help: show help message--data: Path to CSV file containing training image file paths. The CSV file must have the columnsbfandmask.--data_val: Path to CSV file containing validation image file paths (same format as--data).--output_base_dir: Path to output directory.--model: Model type to train (currently only U-Net). Default is "Unet".--checkpoint: Path to checkpoint file matching--model. Only necessary if continuing a model training.
Default isNone.--devices: Devices to use for model training. If you want to use GPU(s) you have to provideintIDs.
Multiple GPU IDs have to be listed separated by spacebar (e.g.2 5 9). If you want to use the CPU you have
to use "cpu". Default is "cpu".--epochs: Number of training epochs. Default is 5.--batch_size: Number of samples per mini-batch. Default is 2.--lr: Learning rate of the optimizer. Default is 1e-4.--base_filters: Number of base_filters of Unet. Default is 32.--shape: Shape [heigth, width] that all images will be cropped/padded to before model submission. Height
and width cannot be smaller than--receptive_field. Default is [1024,1024].--receptive_fieldReceptive field of a neuron in the deepest layer. Default is 128.--log_frequency: Log performance metrics every N gradient steps during training. Default is 50.--loss_weight: Weight of the foreground class compared to the background class for the binary cross entropy loss.
Default is 1.--bilinear: If flag is used, use bilinear upsampling, else transposed convolutions.--multiprocessing: If flag is used, all GPUs given in devices will be used for traininig. Does not support CPU.--retrain: If flag is used, best scores for model saving will be reset (required for training on new data).--transform_intensity: If flag is used random intensity transformations will be applied to input images.--seed: None or Int to use for random seeding. Default isNone.The commandaisegcell_generate_listcan be used to write CSV files for--dataand--data_valand
has the following arguments:--help: show help message--bf: Path (globpattern) to input images (e.g. bright field). Naming convention must match naming convention of--mask.--mask: Path (globpattern) to segmentation masks corresponding to--bf.--out: Directory to which output file is saved.--prefix: Prefix for output file name (i.e.{PREFIX}_paths.csv). Default is "train".Usewildcard characterslike*to select all files you want to
input to--bfand--mask(see example below).Consider the following example:# activate the virtual environmentcondaactivateaisegcell# generate CSV files for data and data_valaisegcell_generate_list\--bf"/path/to/train_images/*/*.png"# i.e. select all PNG files in all sub-directories of /path/to/train_images\--mask"/path/to/train_masks/*/*mask.png"# i.e. select all files in all sub-directories that end with "mask.png"\--out/path/to/output_directory\--prefixtrain
aisegcell_generate_list\--bf"/path/to/val_images/*.png"\--mask"/path/to/val_masks/*.png"\--out/path/to/output_directory\--prefixval# starting multi-GPU trainingaisegcell_train\--data/path/to/output_directory/train_paths.csv\--data_val/path/to/output_directory/val_paths.csv\--modelUnet\--devices24# use GPU 2 and 4 \--output_base_dir/path/to/results/folder\--epochs10\--batch_size8\--lr1e-3\--base_filters32\--shape1024512\--receptive_field128\--log_frequency5\--loss_weight1\--bilinear\--multiprocessing# required if you use multiple --devices \--transform_intensity\--seed123# OR retrain an existing checkpoint with single GPUaisegcell_train\--data/path/to/output_directory/train_paths.csv\--data_val/path/to/output_directory/val_paths.csv\--modelUnet\--checkpoint/path/to/checkpoint/file.ckpt--devices0\--output_base_dir/path/to/results/folder\--epochs10\--batch_size8\--lr1e-3\--base_filters32\--shape10241024\--receptive_field128\--log_frequency5\--loss_weight1\--bilinear\--transform_intensity\--seed123The output ofaisegcell_trainwill be stored in subdirectories{DATE}_Unet_{ID1}/lightning_logs/version_{ID2}/at--output_base_dir. Its contents are:hparams.yaml: stores hyper-parameters of the model (used bypytorch_lightning.LightningModule)metrics.csv: contains all metrics tracked during trainingloss_step: training loss (binary cross-entropy) per gradient stepepoch: training epochstep: training gradient steploss_val_step: validation loss (binary cross-entropy) per validation mini-batchf1_step:f1 scoreper validation mini-batchiou_step: average ofiou_small_stepandiou_big_stepper validation mini-batchiou_big_step:intersection over unionof objects with
> 2000 px in size per validation mini-batchiou_small_step:intersection over unionof objects
with <= 2000 px in size per validation mini-batchloss_val_epoch: averageloss_val_stepover all validation steps per epochf1_epoch: averagef1_stepover all validation steps per epochiou_epoch: averageiou_stepover all validation steps per epochiou_big_epoch: averageiou_big_epochover all validation steps per epochiou_small_epoch: averageiou_small_epochover all validation steps per epochloss_epoch: averageloss_stepover all training gradient steps per epochcheckpoints: model checkpoints are stored in this directory. Path to model checkpoints are used as input to--checkpointofaisegcell_trainor--modelofaisegcell_testandaisegcell_predict.best-f1-epoch={EPOCH}-step={STEP}.ckpt: model weights with the (currently) highestf1_epochbest-iou-epoch={EPOCH}-step={STEP}.ckpt: model weights with the (currently) highestiou_epochbest-loss-epoch={EPOCH}-step={STEP}.ckpt: model weights with the (currently) lowestloss_val_epochlatest-epoch={EPOCH}-step={STEP}.ckpt: model weights of the (currently) latest checkpointTrained modelsWe provide trained models:modalityimage formatexample imagedescriptionavailabilitynucleus segmentation2D grayscaleTrained on a data set (link to data set) of 9849 images (~620k nuclei).ETH Research Collectionwhole cell segmentation2D grayscaleTrained on a data set (link to data set) of 224 images (~12k cells).ETH Research CollectionTestingA trained U-Net can be tested withaisegcell_test. We provide anotebookon how to test
with U-Net.aisegcell_testreturns predicted masks and performance metrics.aisegcell_testcan be called with the
following arguments:--help: show help message--data: Path to CSV file containing test image file paths. The CSV file must have the columnsbfand--mask.--model: Path to checkpoint file of trained pytorch_lightning.LightningModule.--suffix: Suffix to append to all mask file names.--output_base_dir: Path to output directory.--devices: Devices to use for model training. If you want to use GPU(s) you have to provideintIDs.
Multiple GPU IDs have to be listed separated by spacebar (e.g.2 5 9). If multiple GPUs are provided only
the first ID will be used. If you want to use the CPU you have to use "cpu". Default is "cpu".Make sure to activate the virtual environment created duringinstallationbefore callingaisegcell_test.Consider the following example:# activate the virtual environmentcondaactivateaisegcell# generate CSV file for dataaisegcell_generate_list\--bf"/path/to/test_images/*.png"\--mask"/path/to/test_masks/*.png"\--out/path/to/output_directory\--prefixtest# run testingaisegcell_test\--data/path/to/output_directory/test_paths.csv\--model/path/to/checkpoint/file.ckpt\--suffixmask\--output_base_dir/path/to/results/folder\--devices0# predict with GPU 0The output ofaisegcell_testwill be stored in subdirectorieslightning_logs/version_{ID}/at--output_base_dir. Its contents are:hparams.yaml: stores hyper-parameters of the model (used bypytorch_lightning.LightningModule)metrics.csv: contains all metrics tracked during testing. Column IDs are identical tometrics.csvduringtrainingtest_masks: directory containing segmentation masks obtained from U-NetPredictingA trained U-Net can used for predictions withaisegcell_predict. We provide anotebookon how to
predict with U-Net.aisegcell_predictreturns only predicted masks metrics and can be called with the following
arguments:--help: show help message--data: Path to CSV file containing predict image file paths. The CSV file must have the columnsbfand--mask.--model: Path to checkpoint file of trained pytorch_lightning.LightningModule.--suffix: Suffix to append to all mask file names.--output_base_dir: Path to output directory.--devices: Devices to use for model training. If you want to use GPU(s) you have to provideintIDs.
Multiple GPU IDs have to be listed separated by spacebar (e.g.2 5 9). If multiple GPUs are provided only
the first ID will be used. If you want to use the CPU you have to use "cpu". Default is "cpu".Make sure to activate the virtual environment created duringinstallationbefore callingaisegcell_predict.Consider the following example:# activate the virtual environmentcondaactivateaisegcell# generate CSV file for dataaisegcell_generate_list\--bf"/path/to/predict_images/*.png"\--mask"/path/to/predict_images/*.png"# necessary to provide "--mask" for aisegcell_generate_list \--out/path/to/output_directory\--prefixpredict# run predictionaisegcell_predict\--data/path/to/output_directory/predict_paths.csv\--model/path/to/checkpoint/file.ckpt\--suffixmask\--output_base_dir/path/to/results/folder\--devices0# predict with GPU 0The output ofaisegcell_predictwill be stored in subdirectorieslightning_logs/version_{ID}/at--output_base_dir. Its contents are:hparams.yaml: stores hyper-parameters of the model (used bypytorch_lightning.LightningModule)predicted_masks: directory containing segmentation masks obtained from U-Netnapari pluginaisegcell_predictis also available as a plug-in fornapari(link to napari-hub page and github page).Image annotation toolsAvailable tools to annotate segmentations include:napariLabkitforFijiQuPathilastikTroubleshooting & supportIn case you are experiencing issues withaisegcellinform us via theissue tracker.
Before you submit an issue, check if it has been addressed in a previous issue.Citationt.b.d.
|
aiser
|
Python package to serve AI agents and knowledge bases for Penlight AI.
|
aiserve
|
No description available on PyPI.
|
ai-server-sdk
|
AI Serverai-server-sdkis a python client SDK to connect to the AI ServerUsing this package you can:Inference with Models you have acces to within the serverCreate Pandas DataFrame from Databases connectionsPull Storage objectsRun pixel and get the direct output or full json response.Pull data products from an existing insight using REST API.Installpip install ai-server-sdkUsageTo interract with an ai-server instance, import theai_serverpackage and connect via RESTServer.Note: secret and access keys are requiredSetup>>>importai_server# define access keys>>>loginKeys={"secretKey":"<your_secret_key>","accessKey":"<your_access_key>"}# create connection object by passing in the secret key, access key and base url for the api>>>server_connection=ai_server.RESTServer(...access_key=loginKeys['accessKey'],...secret_key=loginKeys['secretKey'],...base='<Your deployed server Monolith URL>'...)Inference with different Model Engines# define a question and grab the engine id from the server>>>question='What is the capital of France?'>>>engine_id="2c6de0ff-62e0-4dd0-8380-782ac4d40245"### Option 1 - Use ModelEngine directly>>>model=ai_server.ModelEngine(engine_id=engine_id,insight_id=server_connection.cur_insight)>>>model.ask(question=question)[{'response':'The capital of France is Paris.','messageId':'0a80c2ce-76f9-4466-b2a2-8455e4cab34a','roomId':'28261853-0e41-49b0-8a50-df34e8c62a19'}]### Option 2 - Use the Driver class>>>driver=ai_server.Driver(insight_id=server_connection.cur_insight)>>>driver.run_model(question=question,engine_id=engine_id)['The capital of France is Paris.']Interact with a Vector Database by adding document(s), querying, and removing document(s)# grab the engine id from the server>>>engine_id="221a50a4-060c-4aa8-8b7c-e2bc97ee3396"# initialize the connection to the vector database>>>vectorEngine=ai_server.VectorEngine(...engine_id=engine_id,...insight_id=server_connection.cur_insight...)# Add document(s) that have been uploaded to the insight>>>vectorEngine.addDocument(file_paths=['fileName1.pdf','fileName2.pdf',...,'fileNameX.pdf'])# Perform a nearest neighbor search on the embedded documents>>>vectorEngine.nearestNeighbor(search_statement='Sample Search Statement',limit=5)# List all the documents the vector database currently comprises of>>>vectorEngine.listDocuments()# Remove document(s) from the vector database>>>vectorEngine.removeDocument(file_names=['fileName1.pdf','fileName2.pdf',...,'fileNameX.pdf'])Connect to Databases and execute create, read, and delete operationsRun the passed string query against the engine. The query passed must be in the structure that the specific engine implementation.# Create an relation to database based on the engine identifier>>>engine_id="4a1f9466-4e6d-49cd-894d-7d22182344cd">>>database=ai_server.DatabaseEngine(engine_id=engine_id,insight_id=a.cur_insight)>>>database.execQuery(query='SELECT PATIENT, HEIGHT, WEIGHT FROM diab LIMIT 4')PATIENTHEIGHTWEIGHT0203376411413750641612407856718731277872145execQuery commands can also be run through the Driver class### Use the Driver class>>>driver=ai_server.Driver(insight_id=server_connection.cur_insight)>>>driver.run_database(query='SELECT PATIENT, HEIGHT, WEIGHT FROM diab LIMIT 4',engine_id=engine_id)PATIENTHEIGHTWEIGHT0203376411413750641612407856718731277872145Run the passed string query against the engine as an insert query. Query must be in the structure that the specific engine implementation>>>database.insertData(query='INSERT INTO table_name (column1, column2, column3, ...) 
VALUES (value1, value2, value3, ...)')Run a delete query on the database>>>database.removeData(query='DELETE FROM diab WHERE age=19;')Using REST API to pull data product from an Insight# define the Project ID>>>projectId='30991037-1e73-49f5-99d3-f28210e6b95c'# define the Insight ID>>>inishgtId='26b373b3-cd52-452c-a987-0adb8817bf73'# define the SQL for the data product you want to query within the insight>>>sql='select * FROM DATA_PRODUCT_123'# if you dont provide one of the following, it will ask you to provide it via prompt>>>diabetes_df=server_connection.import_data_product(project_id=projectId,insight_id=inishgtId,sql=sql)>>>diabetes_df.head()AGEPATIENTWEIGHT019482311911917790135220104115932027632744203750161Get the output or JSON response of any pixel# run the pixel and get the output>>>server_connection.run_pixel('1+1')2# run the pixel and get the entire json response>>>server_connection.run_pixel('1+1',full_response=True){'insightID':'8b419eaf-df7d-4a7f-869e-8d7d59bbfde8','sessionTimeRemaining':'7196','pixelReturn':[{'pixelId':'3','pixelExpression':'1 + 1 ;','isMeta':False,'output':2,'operationType':['OPERATION']}]}
|
ai-service
|
Failed to fetch description. HTTP Status Code: 404
|
ai-service-deploy
|
Failed to fetch description. HTTP Status Code: 404
|
ai-services
|
Services: this is a pseudo-framework built on the shoulders of sanic and inspired by Django. The intention is to provide some tools for web services development with a focus on data services. The library tries to be as unintrusive as possible; it is not intended to be a new framework but rather to provide abstractions and code generation tools over
good established libraries and technologies.FeaturesAsync Web sever (Sanic)Generation code for apps (like Django)Multiple databases support (sync and async using SQLAlchemy 2.0)Schema migration tools pre-configurated to work in the first run (Alembic)OpenApi/Swagger docs generation (Sanic)Simple user system and authentication endpointsJWT supportVite supportSimple tasks implementationsStorage implementation for uploading files (local and google storage)QuickstartNote: please use your favorite tool for python environments and dependenciespython3 -m venv .venv
source .venv/bin/activate
pip install ai-servicesThen you can initialize a project:create-srv-project .
╭───────────────────────────────────────╮
│ 😸 Hello and welcome to AI services │
╰───────────────────────────────────────╯
Write a name for default web app please, avoid spaces and capital letters: (test_app):
The final name for the project is: test_app
╭─────────────────────────────────────────╮
│ 😸 Congrats!!! Project test_app created │
╰─────────────────────────────────────────╯
To test if everything is working you can run the following command:
srv web -L -D

It will ask you for a name for the first app. Then your folder will be:

» tree -a -L 2
.
├── alembic.ini
├── example
│   ├── __init__.py
│   ├── __pycache__
│   ├── api
│   ├── commands
│   ├── db.py
│   ├── managers.py
│   ├── migrations
│   ├── models.py
│   ├── tasks.py
│   ├── templates
│   ├── users_models.py
│   ├── views.py
│   └── web.py
└── server_conf
    ├── __init__.py
    └── settings.py

Finally, the last step, if you want to use the User system provided in the code, is to run a revision and upgrade action:

srv db revision test_app -m first -R 0001 -m first
srv db upgrade test_app

With the default configuration, it will create a db.sqlite file in the root of your project.

Note: srv db uses Alembic under the hood, and Alembic is configured in a way that makes it possible to keep using it outside of srv db; it is more like a wrapper.

Status

:warning: The library is being used in some production projects, but it is still under active development, and therefore full backward compatibility is not guaranteed before reaching v1.0.0.

Roadmap:

UserManager abstraction
Add groups
User registration
Expand command for users administration
Custom command hooks in srv
Dev env files {Makefile, Dockerfile, docker-compose, etc}
Task queue abstraction {Redis, Google Cloud Pub/Sub, etc}
Simple task system implemented
File upload (local and Google Storage)
OAuth 2.0 integration
Documentation (guides and API reference)
Tools and abstractions for logging (stdout, Google Cloud Logging, etc)
Metrics (Prometheus)
Update to Sanic 22.9
Update to SQLAlchemy 2.0
Websockets examples

FAQ

Why Sanic? Even though FastAPI is the most popular async framework right now (50K stars on GitHub vs 16K for Sanic) and Django is the most feature-complete and stable (no proofs) web framework in the Python world, what is very appealing to me is Sanic's own server implementation, which seems simpler than WSGI and ASGI (you can still use ASGI with Sanic if you want), and because most of the time I need to build web APIs to expose machine learning models, I found it to be a good match. Usually models are very CPU and memory intensive (an average Word2vec model needs at least 500 MB, with peaks of 1 GB of RAM), so the strategy here is to load the model in one main process and share it with the rest of the workers. The Sanic community has had a lot of conversations about how processes could be managed: https://amhopkins.com/background-job-worker.

And why not Django? Because of its ORM. I found SQLAlchemy more flexible and lightweight than the Django ORM. In data/machine learning solutions it is common to work in environments outside of the request/response cycle of a web app (Jupyter, batch/ETL processes, etc), and it seems unnecessary to load a web environment in those cases. The other reason is that SQLAlchemy allows working directly with raw SQL or tables, inspecting them, and avoiding the ORM part of the framework, which is very convenient when working with different sources of data (see the short SQLAlchemy sketch below).

Release

See docs/release.md
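To make the SQLAlchemy point above concrete, here is a minimal sketch that works against the db.sqlite file created by the quickstart. It uses plain SQLAlchemy 2.0 rather than any ai-services-specific API, and the users table name is only an assumed example.

```python
# Minimal SQLAlchemy 2.0 sketch of "raw SQL without the ORM".
# Plain SQLAlchemy, not an ai-services API; the 'users' table is assumed.
from sqlalchemy import create_engine, text, inspect

engine = create_engine("sqlite:///db.sqlite")

# Inspect the schema without declaring any ORM models.
print(inspect(engine).get_table_names())

# Run raw SQL directly; the same code works in a notebook or an ETL job,
# outside of any web request/response cycle.
with engine.connect() as conn:
    rows = conn.execute(text("SELECT * FROM users LIMIT 5")).fetchall()
    for row in rows:
        print(row)
```

The same engine object can be reused by async code or by the migration tooling, which is exactly the flexibility the FAQ argues for.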
|
ai-service-test-23-10-20
|
Failed to fetch description. HTTP Status Code: 404
|
ai-service-test-23-10-21
|
Failed to fetch description. HTTP Status Code: 404
|
ai-service-test-23-10-22
|
Failed to fetch description. HTTP Status Code: 404
|
ai-service-test-23-10-23
|
Failed to fetch description. HTTP Status Code: 404
|
ai-service-test-23-10-24
|
Failed to fetch description. HTTP Status Code: 404
|
ai-service-test-23-10-25
|
Failed to fetch description. HTTP Status Code: 404
|
ai-service-wrapper
|
BE-Base-Python

Implement

from flask import Flask
from config import config
from ai_service_wrapper import AiServiceWrapper

def create_app():
    app = Flask(__name__)
    app.config.update(config)
    AiServiceWrapper(app)
    return app

if __name__ == "__main__":
    app = create_app()
    app.run("0.0.0.0", 8000, debug=True)
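A quick way to sanity-check the factory above is Flask's built-in test client. The sketch below is hypothetical: the import path and the probed route are placeholders, since the routes that AiServiceWrapper registers are not documented here.

```python
# Hypothetical smoke test for the create_app factory above.
# Adjust the import path to wherever the factory lives in your project.
from yourapp import create_app  # placeholder module name

def test_app_boots():
    app = create_app()
    client = app.test_client()  # Flask's built-in test client
    response = client.get("/")  # '/' is only a placeholder path
    # Only assert that the app answers at all; tighten once real routes are known.
    assert response.status_code in (200, 404)
```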
|
ai-serving-client
|
This README.md is a mirror for PyPI. Please visithttps://github.com/hanxiao/bert-as-service/blob/master/README.mdfor the latest README.md.Are you looking for X-as-service? TryJina!where X can be albert, pytorch-transformer, vgg, resnet, videobert, orANYdeep learning representation model?► Jina 101: First Thing to Learn About JinaEnglish•日本語•français•Deutsch•Русский язык•中文► From BERT-as-Service to X-as-ServiceLearn how to use Jina to extract feature vector using any deep learning representationbert-as-serviceUsing BERT model as a sentence encoding service, i.e. mapping a variable-length sentence to a fixed-length vector.Highlights•What is it•Install•Getting Started•API•Tutorials•FAQ•Benchmark•BlogMade by Han Xiao • :globe_with_meridians:https://hanxiao.github.ioWhat is itBERTis a NLP modeldeveloped by Googlefor pre-training language representations. It leverages an enormous amount of plain text data publicly available on the web and is trained in an unsupervised manner. Pre-training a BERT model is a fairly expensive yet one-time procedure for each language. Fortunately, Google released several pre-trained models whereyou can download from here.Sentence Encoding/Embeddingis a upstream task required in many NLP applications, e.g. sentiment analysis, text classification. The goal is to represent a variable length sentence into a fixed length vector, e.g.hello worldto[0.1, 0.3, 0.9]. Each element of the vector should "encode" some semantics of the original sentence.Finally,bert-as-serviceuses BERT as a sentence encoder and hosts it as a service via ZeroMQ, allowing you to map sentences into fixed-length representations in just two lines of code.Highlights:telescope:State-of-the-art: build on pretrained 12/24-layer BERT models released by Google AI, which is considered as a milestone in the NLP community.:hatching_chick:Easy-to-use: require only two lines of code to get sentence/token-level encodes.:zap:Fast: 900 sentences/s on a single Tesla M40 24GB. Low latency, optimized for speed. Seebenchmark.:octopus:Scalable: scale nicely and smoothly on multiple GPUs and multiple clients without worrying about concurrency. Seebenchmark.:gem:Reliable: tested on multi-billion sentences; days of running without a break or OOM or any nasty exceptions.More features:XLA & FP16 support; mix GPU-CPU workloads; optimized graph;tf.datafriendly; customized tokenizer; flexible pooling strategy;build-in HTTP serverand dashboard;async encoding;multicasting; etc.InstallInstall the server and client viapip. They can be installed separately or even ondifferentmachines:pipinstallbert-serving-server# serverpipinstallbert-serving-client# client, independent of `bert-serving-server`Note that the server MUST be running onPython >= 3.5withTensorflow >= 1.10(one-point-ten). Again, the server does not support Python 2!:point_up: The client can be running on both Python 2 and 3for the following consideration.Getting Started1. 
Download a Pre-trained BERT ModelDownload a model listed below, then uncompress the zip file into some folder, say/tmp/english_L-12_H-768_A-12/List of released pretrained BERT models (click to expand...)BERT-Base, Uncased12-layer, 768-hidden, 12-heads, 110M parametersBERT-Large, Uncased24-layer, 1024-hidden, 16-heads, 340M parametersBERT-Base, Cased12-layer, 768-hidden, 12-heads , 110M parametersBERT-Large, Cased24-layer, 1024-hidden, 16-heads, 340M parametersBERT-Base, Multilingual Cased (New)104 languages, 12-layer, 768-hidden, 12-heads, 110M parametersBERT-Base, Multilingual Cased (Old)102 languages, 12-layer, 768-hidden, 12-heads, 110M parametersBERT-Base, ChineseChinese Simplified and Traditional, 12-layer, 768-hidden, 12-heads, 110M parametersOptional:fine-tuning the model on your downstream task.Why is it optional?2. Start the BERT serviceAfter installing the server, you should be able to usebert-serving-startCLI as follows:bert-serving-start-model_dir/tmp/english_L-12_H-768_A-12/-num_worker=4This will start a service with four workers, meaning that it can handle up to fourconcurrentrequests. More concurrent requests will be queued in a load balancer. Details can be found in ourFAQandthe benchmark on number of clients.Below shows what the server looks like when starting correctly:Alternatively, one can start the BERT Service in a Docker Container (click to expand...)dockerbuild-tbert-as-service-f./docker/Dockerfile.NUM_WORKER=1PATH_MODEL=/PATH_TO/_YOUR_MODEL/
dockerrun--runtimenvidia-dit-p5555:5555-p5556:5556-v$PATH_MODEL:/model-tbert-as-service$NUM_WORKER3. Use Client to Get Sentence EncodesNow you can encode sentences simply as follows:frombert_serving.clientimportBertClientbc=BertClient()bc.encode(['First do it','then do it right','then do it better'])It will return andarray(orList[List[float]]if you wish), in which each row is a fixed-length vector representing a sentence. Having thousands of sentences? Justencode!Don't even bother to batch, the server will take care of it.As a feature of BERT, you may get encodes of a pair of sentences by concatenating them with|||(with whitespace before and after), e.g.bc.encode(['First do it ||| then do it right'])Below shows what the server looks like while encoding:Use BERT Service RemotelyOne may also start the service on one (GPU) machine and call it from another (CPU) machine as follows:# on another CPU machinefrombert_serving.clientimportBertClientbc=BertClient(ip='xx.xx.xx.xx')# ip address of the GPU machinebc.encode(['First do it','then do it right','then do it better'])Note that you only needpip install -U bert-serving-clientin this case, the server side is not required. You may alsocall the service via HTTP requests.:bulb:Want to learn more? Checkout our tutorials:Building a QA semantic search engine in 3 min.Serving a fine-tuned BERT modelGetting ELMo-like contextual word embeddingUsing your own tokenizerUsingBertClientwithtf.dataAPITraining a text classifier using BERT features and tf.estimator APISaving and loading with TFRecord dataAsynchronous encodingBroadcasting to multiple clientsMonitoring the service status in a dashboardUsingbert-as-serviceto serve HTTP requests in JSONStartingBertServerfrom PythonServer and Client API▴ Back to topThe best way to learnbert-as-servicelatest APIisreading the documentation.Server APIPlease always refer to the latest server-side API documented here., you may get the latest usage via:bert-serving-start--help
bert-serving-terminate --help
bert-serving-benchmark--helpArgumentTypeDefaultDescriptionmodel_dirstrRequiredfolder path of the pre-trained BERT model.tuned_model_dirstr(Optional)folder path of a fine-tuned BERT model.ckpt_namestrbert_model.ckptfilename of the checkpoint file.config_namestrbert_config.jsonfilename of the JSON config file for BERT model.graph_tmp_dirstrNonepath to graph temp filemax_seq_lenint25maximum length of sequence, longer sequence will be trimmed on the right side. Set it to NONE for dynamically using the longest sequence in a (mini)batch.cased_tokenizationboolFalseWhether tokenizer should skip the default lowercasing and accent removal. Should be used for e.g. the multilingual cased pretrained BERT model.mask_cls_sepboolFalsemasking the embedding on [CLS] and [SEP] with zero.num_workerint1number of (GPU/CPU) worker runs BERT model, each works in a separate process.max_batch_sizeint256maximum number of sequences handled by each worker, larger batch will be partitioned into small batches.priority_batch_sizeint16batch smaller than this size will be labeled as high priority, and jumps forward in the job queue to get result fasterportint5555port for pushing data from client to serverport_outint5556port for publishing results from server to clienthttp_portintNoneserver port for receiving HTTP requestscorsstr*setting "Access-Control-Allow-Origin" for HTTP requestspooling_strategystrREDUCE_MEANthe pooling strategy for generating encoding vectors, valid values areNONE,REDUCE_MEAN,REDUCE_MAX,REDUCE_MEAN_MAX,CLS_TOKEN,FIRST_TOKEN,SEP_TOKEN,LAST_TOKEN. Explanation of these strategiescan be found here. To get encoding for each token in the sequence, please set this toNONE.pooling_layerlist[-2]the encoding layer that pooling operates on, where-1means the last layer,-2means the second-to-last,[-1, -2]means concatenating the result of last two layers, etc.gpu_memory_fractionfloat0.5the fraction of the overall amount of memory that each GPU should be allocated per workercpuboolFalserun on CPU instead of GPUxlaboolFalseenableXLA compilerfor graph optimization (experimental!)fp16boolFalseuse float16 precision (experimental)device_maplist[]specify the list of GPU device ids that will be used (id starts from 0)show_tokens_to_clientboolFalsesending tokenization results to clientClient APIPlease always refer to the latest client-side API documented here.Client-side provides a Python class calledBertClient, which accepts arguments as follows:ArgumentTypeDefaultDescriptionipstrlocalhostIP address of the serverportint5555port for pushing data from client to server,must be consistent with the server side configport_outint5556port for publishing results from server to client,must be consistent with the server side configoutput_fmtstrndarraythe output format of the sentence encodes, either in numpy array or python List[List[float]] (ndarray/list)show_server_configboolFalsewhether to show server configs when first connectedcheck_versionboolTruewhether to force client and server to have the same versionidentitystrNonea UUID that identifies the client, useful in multi-castingtimeoutint-1set the timeout (milliseconds) for receive operation on the clientABertClientimplements the following methods and properties:MethodDescription.encode()Encode a list of strings to a list of vectors.encode_async()Asynchronous encode batches from a generator.fetch()Fetch all encoded vectors from server and return them in a generator, use it with.encode_async()or.encode(blocking=False). 
Sending order is NOT preserved..fetch_all()Fetch all encoded vectors from server and return them in a list, use it with.encode_async()or.encode(blocking=False). Sending order is preserved..close()Gracefully close the connection between the client and the server.statusGet the client status in JSON format.server_statusGet the server status in JSON format:book: Tutorial▴ Back to topThe full list of examples can be found inexample/. You can run each viapython example/example-k.py. Most of examples require you to start a BertServer first, please followthe instruction here. Note that althoughBertClientworks universally on both Python 2.x and 3.x, examples are only tested on Python 3.6.Table of contents (click to expand...)Building a QA semantic search engine in 3 min.Serving a fine-tuned BERT modelGetting ELMo-like contextual word embeddingUsing your own tokenizerUsingBertClientwithtf.dataAPITraining a text classifier using BERT features and tf.estimator APISaving and loading with TFRecord dataAsynchronous encodingBroadcasting to multiple clientsMonitoring the service status in a dashboardUsingbert-as-serviceto serve HTTP requests in JSONStartingBertServerfrom PythonBuilding a QA semantic search engine in 3 minutesThe complete example canbe found example8.py.As the first example, we will implement a simple QA search engine usingbert-as-servicein just three minutes. No kidding! The goal is to find similar questions to user's input and return the corresponding answer. To start, we need a list of question-answer pairs. Fortunately, this README file already containsa list of FAQ, so I will just use that to make this example perfectly self-contained. Let's first load all questions and show some statistics.prefix_q='##### **Q:** 'withopen('README.md')asfp:questions=[v.replace(prefix_q,'').strip()forvinfpifv.strip()andv.startswith(prefix_q)]print('%dquestions loaded, avg. len of%d'%(len(questions),np.mean([len(d.split())fordinquestions])))This gives33 questions loaded, avg. len of 9. So looks like we have enough questions. Now start a BertServer withuncased_L-12_H-768_A-12pretrained BERT model:bert-serving-start-num_worker=1-model_dir=/data/cips/data/lab/data/model/uncased_L-12_H-768_A-12Next, we need to encode our questions into vectors:bc=BertClient(port=4000,port_out=4001)doc_vecs=bc.encode(questions)Finally, we are ready to receive new query and perform a simple "fuzzy" search against the existing questions. To do that, every time a new query is coming, we encode it as a vector and compute its dot product withdoc_vecs; sort the result descendingly; and return the top-k similar questions as follows:whileTrue:query=input('your question: ')query_vec=bc.encode([query])[0]# compute normalized dot product as scorescore=np.sum(query_vec*doc_vecs,axis=1)/np.linalg.norm(doc_vecs,axis=1)topk_idx=np.argsort(score)[::-1][:topk]foridxintopk_idx:print('>%s\t%s'%(score[idx],questions[idx]))That's it! Now run the code and type your query, see how this search engine handles fuzzy match:Serving a fine-tuned BERT modelPretrained BERT models often show quite "okayish" performance on many tasks. However, to release the true power of BERT a fine-tuning on the downstream task (or on domain-specific data) is necessary. In this example, I will show you how to serve a fine-tuned BERT model.We follow the instruction in"Sentence (and sentence-pair) classification tasks"and userun_classifier.pyto fine tuneuncased_L-12_H-768_A-12model on MRPC task. 
The fine-tuned model is stored at/tmp/mrpc_output/, which can be changed by specifying--output_dirofrun_classifier.py.If you look into/tmp/mrpc_output/, it contains something like:checkpoint128eval4.0K
eval_results.txt                                    86
eval.tf_record                                    219K
events.out.tfevents.1545202214.TENCENT64.site     6.1M
events.out.tfevents.1545203242.TENCENT64.site      14M
graph.pbtxt                                       9.0M
model.ckpt-0.data-00000-of-00001                  1.3G
model.ckpt-0.index                                 23K
model.ckpt-0.meta                                 3.9M
model.ckpt-343.data-00000-of-00001                1.3G
model.ckpt-343.index                               23K
model.ckpt-343.meta                               3.9M
train.tf_record2.0MDon't be afraid of those mysterious files, as the only important one to us ismodel.ckpt-343.data-00000-of-00001(looks like my training stops at the 343 step. One may getmodel.ckpt-123.data-00000-of-00001ormodel.ckpt-9876.data-00000-of-00001depending on the total training steps). Now we have collected all three pieces of information that are needed for serving this fine-tuned model:The pretrained model is downloaded to/path/to/bert/uncased_L-12_H-768_A-12Our fine-tuned model is stored at/tmp/mrpc_output/;Our fine-tuned model checkpoint is named asmodel.ckpt-343something something.Now start a BertServer by putting three pieces together:bert-serving-start-model_dir=/pretrained/uncased_L-12_H-768_A-12-tuned_model_dir=/tmp/mrpc_output/-ckpt_name=model.ckpt-343After the server started, you should find this line in the log:I:GRAPHOPT:[gra:opt: 50]:checkpoint (override by fine-tuned model): /tmp/mrpc_output/model.ckpt-343Which means the BERT parameters is overrode and successfully loaded from our fine-tuned/tmp/mrpc_output/model.ckpt-343. Done!In short, find your fine-tuned model path and checkpoint name, then feed them to-tuned_model_dirand-ckpt_name, respectively.Getting ELMo-like contextual word embeddingStart the server withpooling_strategyset to NONE.bert-serving-start-pooling_strategyNONE-model_dir/tmp/english_L-12_H-768_A-12/To get the word embedding corresponds to every token, you can simply use slice index as follows:# max_seq_len = 25# pooling_strategy = NONEbc=BertClient()vec=bc.encode(['hey you','whats up?'])vec# [2, 25, 768]vec[0]# [1, 25, 768], sentence embeddings for `hey you`vec[0][0]# [1, 1, 768], word embedding for `[CLS]`vec[0][1]# [1, 1, 768], word embedding for `hey`vec[0][2]# [1, 1, 768], word embedding for `you`vec[0][3]# [1, 1, 768], word embedding for `[SEP]`vec[0][4]# [1, 1, 768], word embedding for padding symbolvec[0][25]# error, out of index!Note that no matter how long your original sequence is, the service will always return a[max_seq_len, 768]matrix for every sequence. When using slice index to get the word embedding, beware of the special tokens padded to the sequence, i.e.[CLS],[SEP],0_PAD.Using your own tokenizerOften you want to use your own tokenizer to segment sentences instead of the default one from BERT. Simply callencode(is_tokenized=True)on the client slide as follows:texts=['hello world!','good day']# a naive whitespace tokenizertexts2=[s.split()forsintexts]vecs=bc.encode(texts2,is_tokenized=True)This gives[2, 25, 768]tensor where the first[1, 25, 768]corresponds to the token-level encoding of "hello world!". If you look into its values, you will find that only the first four elements, i.e.[1, 0:3, 768]have values, all the others are zeros. This is due to the fact that BERT considers "hello world!" as four tokens:[CLS]helloworld![SEP], the rest are padding symbols and are masked out before output.Note that there is no need to start a separate server for handling tokenized/untokenized sentences. The server can tell and handle both cases automatically.Sometimes you want to know explicitly the tokenization performed on the server side to have better understanding of the embedding result. One such case is asking word embedding from the server (with-pooling_strategy NONE), one wants to tell which word is tokenized and which is unrecognized. 
You can get such information with the following steps:enabling-show_tokens_to_clienton the server side;calling the server viaencode(..., show_tokens=True).For example, a basic usage likebc.encode(['hello world!','thisis it'],show_tokens=True)returns a tuple, where the first element is the embedding and the second is the tokenization result from the server:(array([[[ 0. , -0. , 0. , ..., 0. , -0. , -0. ],
[ 1.1100919 , -0.20474958, 0.9895898 , ..., 0.3873255 , -1.4093989 , -0.47620595],
..., -0. , -0. ]],
[[ 0. , -0. , 0. , ..., 0. , 0. , 0. ],
[ 0.6293478 , -0.4088499 , 0.6022662 , ..., 0.41740108, 1.214456 , 1.2532915 ],
..., 0. , 0. ]]], dtype=float32),
[['[CLS]', 'hello', 'world', '!', '[SEP]'], ['[CLS]', 'this', '##is', 'it', '[SEP]']])When using your own tokenization, you may still want to check if the server respects your tokens. For example,bc.encode([['hello','world!'],['thisis','it']],show_tokens=True,is_tokenized=True)returns:(array([[[ 0. , -0. , 0. , ..., 0. , -0. , 0. ],
[ 1.1111546 , -0.56572634, 0.37183186, ..., 0.02397121, -0.5445367 , 1.1009651 ],
..., -0. , 0. ]],
[[ 0. , 0. , 0. , ..., 0. , -0. , 0. ],
[ 0.39262453, 0.3782491 , 0.27096173, ..., 0.7122045 , -0.9874849 , 0.9318679 ],
..., -0. , 0. ]]], dtype=float32),
[['[CLS]', 'hello', '[UNK]', '[SEP]'], ['[CLS]', '[UNK]', 'it', '[SEP]']])One can observe thatworld!andthisisare not recognized on the server, hence they are set to[UNK].Finally, beware that the pretrained BERT Chinese from Google is character-based, i.e. its vocabulary is made of single Chinese characters. Therefore it makes no sense if you use word-level segmentation algorithm to pre-process the data and feed to such model.Extremely curious readers may notice that the first row in the above example is all-zero even though the tokenization result includes[CLS](well done, detective!). The reason is that the tokenization result willalwaysincludes[CLS]and[UNK]regardless the setting of-mask_cls_sep. This could be useful when you want to align the tokens afterwards. Remember,-mask_cls_seponly masks[CLS]and[SEP]out of the computation. It doesn't affect the tokenization algorithm.UsingBertClientwithtf.dataAPIThe complete example canbe found example4.py. There is alsoan example in Keras.Thetf.dataAPI enables you to build complex input pipelines from simple, reusable pieces. One can also useBertClientto encode sentences on-the-fly and use the vectors in a downstream model. Here is an example:batch_size=256num_parallel_calls=4num_clients=num_parallel_calls*2# should be at least greater than `num_parallel_calls`# start a pool of clientsbc_clients=[BertClient(show_server_config=False)for_inrange(num_clients)]defget_encodes(x):# x is `batch_size` of lines, each of which is a json objectsamples=[json.loads(l)forlinx]text=[s['raw_text']forsinsamples]# List[List[str]]labels=[s['label']forsinsamples]# List[str]# get a client from available clientsbc_client=bc_clients.pop()features=bc_client.encode(text)# after use, put it backbc_clients.append(bc_client)returnfeatures,labelsds=(tf.data.TextLineDataset(train_fp).batch(batch_size).map(lambdax:tf.py_func(get_encodes,[x],[tf.float32,tf.string]),num_parallel_calls=num_parallel_calls).map(lambdax,y:{'feature':x,'label':y}).make_one_shot_iterator().get_next())The trick here is to start a pool ofBertClientand reuse them one by one. In this way, we can fully harness the power ofnum_parallel_callsofDataset.map()API.Training a text classifier using BERT features andtf.estimatorAPIThe complete example canbe found example5.py.Following the last example, we can easily extend it to a full classifier usingtf.estimatorAPI. One only need minor change on the input function as follows:estimator=DNNClassifier(hidden_units=[512],feature_columns=[tf.feature_column.numeric_column('feature',shape=(768,))],n_classes=len(laws),config=run_config,label_vocabulary=laws_str,dropout=0.1)input_fn=lambdafp:(tf.data.TextLineDataset(fp).apply(tf.contrib.data.shuffle_and_repeat(buffer_size=10000)).batch(batch_size).map(lambdax:tf.py_func(get_encodes,[x],[tf.float32,tf.string]),num_parallel_calls=num_parallel_calls).map(lambdax,y:({'feature':x},y)).prefetch(20))train_spec=TrainSpec(input_fn=lambda:input_fn(train_fp))eval_spec=EvalSpec(input_fn=lambda:input_fn(eval_fp),throttle_secs=0)train_and_evaluate(estimator,train_spec,eval_spec)The complete example canbe found example5.py, in which a simple MLP is built on BERT features for predicting the relevant articles according to the fact description in the law documents. The problem is a part of theChinese AI and Law Challenge Competition.Saving and loading with TFRecord dataThe complete example canbe found example6.py.The TFRecord file format is a simple record-oriented binary format that many TensorFlow applications use for training data. 
You can also pre-encode all your sequences and store their encodings to a TFRecord file, then later load it to build atf.Dataset. For example, to write encoding into a TFRecord file:bc=BertClient()list_vec=bc.encode(lst_str)list_label=[0for_inlst_str]# a dummy list of all-zero labels# write to tfrecordwithtf.python_io.TFRecordWriter('tmp.tfrecord')aswriter:defcreate_float_feature(values):returntf.train.Feature(float_list=tf.train.FloatList(value=values))defcreate_int_feature(values):returntf.train.Feature(int64_list=tf.train.Int64List(value=list(values)))for(vec,label)inzip(list_vec,list_label):features={'features':create_float_feature(vec),'labels':create_int_feature([label])}tf_example=tf.train.Example(features=tf.train.Features(feature=features))writer.write(tf_example.SerializeToString())Now we can load from it and build atf.Dataset:def_decode_record(record):"""Decodes a record to a TensorFlow example."""returntf.parse_single_example(record,{'features':tf.FixedLenFeature([768],tf.float32),'labels':tf.FixedLenFeature([],tf.int64),})ds=(tf.data.TFRecordDataset('tmp.tfrecord').repeat().shuffle(buffer_size=100).apply(tf.contrib.data.map_and_batch(lambdarecord:_decode_record(record),batch_size=64)).make_one_shot_iterator().get_next())To save word/token-level embedding to TFRecord, one needs to first flatten[max_seq_len, num_hidden]tensor into an 1D array as follows:defcreate_float_feature(values):returntf.train.Feature(float_list=tf.train.FloatList(value=values.reshape(-1)))And later reconstruct the shape when loading it:name_to_features={"feature":tf.FixedLenFeature([max_seq_length*num_hidden],tf.float32),"label_ids":tf.FixedLenFeature([],tf.int64),}def_decode_record(record,name_to_features):"""Decodes a record to a TensorFlow example."""example=tf.parse_single_example(record,name_to_features)example['feature']=tf.reshape(example['feature'],[max_seq_length,-1])returnexampleBe careful, this will generate a huge TFRecord file.Asynchronous encodingThe complete example canbe found example2.py.BertClient.encode()offers a nice synchronous way to get sentence encodes. However, sometimes we want to do it in an asynchronous manner by feeding all textual data to the server first, fetching the encoded results later. This can be easily done by:# an endless data stream, generating data in an extremely fast speeddeftext_gen():whileTrue:yieldlst_str# yield a batch of text linesbc=BertClient()# get encoded vectorsforjinbc.encode_async(text_gen(),max_num_batch=10):print('received%dx%d'%(j.shape[0],j.shape[1]))Broadcasting to multiple clientsThe complete example canbe found in example3.py.The encoded result is routed to the client according to its identity. If you have multiple clients with same identity, then they all receive the results! You can use thismulticastfeature to do some cool things, e.g. training multiple different models (some usingscikit-learnsome usingtensorflow) in multiple separated processes while only callBertServeronce. 
In the example below,bcand its two clones will all receive encoded vector.# clone a client by reusing the identitydefclient_clone(id,idx):bc=BertClient(identity=id)forjinbc.listen():print('clone-client-%d: received%dx%d'%(idx,j.shape[0],j.shape[1]))bc=BertClient()# start two cloned clients sharing the same identity as bcforjinrange(2):threading.Thread(target=client_clone,args=(bc.identity,j)).start()for_inrange(3):bc.encode(lst_str)Monitoring the service status in a dashboardThe complete example canbe found in plugin/dashboard/.As a part of the infrastructure, one may also want to monitor the service status and show it in a dashboard. To do that, we can use:bc=BertClient(ip='server_ip')json.dumps(bc.server_status,ensure_ascii=False)This gives the current status of the server including number of requests, number of clients etc. in JSON format. The only thing remained is to start a HTTP server for returning this JSON to the frontend that renders it.Alternatively, one may simply expose an HTTP port when starting a server via:bert-serving-start-http_port8081-model_dir...This will allow one to use javascript orcurlto fetch the server status at port 8081.plugin/dashboard/index.htmlshows a simple dashboard based on Bootstrap and Vue.js.Usingbert-as-serviceto serve HTTP requests in JSONBesides callingbert-as-servicefrom Python, one can also call it via HTTP request in JSON. It is quite useful especially when low transport layer is prohibited. Behind the scene,bert-as-servicespawns a Flask server in a separate process and then reuse aBertClientinstance as a proxy to communicate with the ventilator.To enable the build-in HTTP server, we need to first (re)install the server with some extra Python dependencies:pipinstall-Ubert-serving-server[http]Then simply start the server with:bert-serving-start-model_dir=/YOUR_MODEL-http_port8125Done! Your server is now listening HTTP and TCP requests at port8125simultaneously!To send a HTTP request, first prepare the payload in JSON as following:{"id":123,"texts":["hello world","good day!"],"is_tokenized":false}, whereidis a unique identifier helping you to synchronize the results;is_tokenizedfollows the meaning inBertClientAPIandfalseby default.Then simply call the server at/encodevia HTTP POST request. You can use javascript or whatever, here is an example usingcurl:curl-XPOSThttp://xx.xx.xx.xx:8125/encode\-H'content-type: application/json'\-d'{"id": 123,"texts": ["hello world"], "is_tokenized": false}', which returns a JSON:{"id":123,"results":[[768float-list],[768float-list]],"status":200}To get the server's status and client's status, you can send GET requests at/status/serverand/status/client, respectively.Finally, one may also config CORS to restrict the public access of the server by specifying-corswhen startingbert-serving-start. By default-cors=*, meaning the server is public accessible.StartingBertServerfrom PythonBesides shell, one can also start aBertServerfrom python. 
Simply dofrombert_serving.server.helperimportget_args_parserfrombert_serving.serverimportBertServerargs=get_args_parser().parse_args(['-model_dir','YOUR_MODEL_PATH_HERE','-port','5555','-port_out','5556','-max_seq_len','NONE','-mask_cls_sep','-cpu'])server=BertServer(args)server.start()Note that it's basically mirroring the arg-parsing behavior in CLI, so everything in that.parse_args([])list should be string, e.g.['-port', '5555']not['-port', 5555].To shutdown the server, you may call the static method inBertServerclass via:BertServer.shutdown(port=5555)Or via shell CLI:bert-serving-terminate-port5555This will terminate the server running on localhost at port 5555. You may also use it to terminate a remote server, seebert-serving-terminate --helpfor details.:speech_balloon: FAQ▴ Back to topQ:Do you have a paper or other written explanation to introduce your model's details?The design philosophy and technical details can be foundin my blog post.Q:Where is the BERT code come from?A:BERT code of this repois forked from theoriginal BERT repowith necessary modification,especially in extract_features.py.Q:How large is a sentence vector?In general, each sentence is translated to a 768-dimensional vector. Depending on the pretrained BERT you are using,pooling_strategyandpooling_layerthe dimensions of the output vector could be different.Q:How do you get the fixed representation? Did you do pooling or something?A:Yes, pooling is required to get a fixed representation of a sentence. In the default strategyREDUCE_MEAN, I take the second-to-last hidden layer of all of the tokens in the sentence and do average pooling.Q:Are you suggesting using BERT without fine-tuning?A:Yes and no. On the one hand, Google pretrained BERT on Wikipedia data, thus should encode enough prior knowledge of the language into the model. Having such feature is not a bad idea. On the other hand, these prior knowledge is not specific to any particular domain. It should be totally reasonable if the performance is not ideal if you are using it on, for example, classifying legal cases. Nonetheless, you can always first fine-tune your own BERT on the downstream task and then usebert-as-serviceto extract the feature vectors efficiently. Keep in mind thatbert-as-serviceis just a feature extraction service based on BERT. Nothing stops you from using a fine-tuned BERT.Q:Can I get a concatenation of several layers instead of a single layer ?A:Sure! Just use a list of the layer you want to concatenate when calling the server. Example:bert-serving-start-pooling_layer-4-3-2-1-model_dir/tmp/english_L-12_H-768_A-12/Q:What are the available pooling strategies?A:Here is a table summarizes all pooling strategies I implemented. Choose your favorite one by specifyingbert-serving-start -pooling_strategy.StrategyDescriptionNONEno pooling at all, useful when you want to use word embedding instead of sentence embedding. This will results in a[max_seq_len, 768]encode matrix for a sequence.REDUCE_MEANtake the average of the hidden state of encoding layer on the time axisREDUCE_MAXtake the maximum of the hidden state of encoding layer on the time axisREDUCE_MEAN_MAXdoREDUCE_MEANandREDUCE_MAXseparately and then concat them together on the last axis, resulting in 1536-dim sentence encodesCLS_TOKENorFIRST_TOKENget the hidden state corresponding to[CLS], i.e. the first tokenSEP_TOKENorLAST_TOKENget the hidden state corresponding to[SEP], i.e. the last tokenQ:Why not use the hidden state of the first token as default strategy, i.e. 
the[CLS]?A:Because a pre-trained model is not fine-tuned on any downstream tasks yet. In this case, the hidden state of[CLS]is not a good sentence representation. If later you fine-tune the model, you may use[CLS]as well.Q:BERT has 12/24 layers, so which layer are you talking about?A:By default this service works on the second last layer, i.e.pooling_layer=-2. You can change it by settingpooling_layerto other negative values, e.g. -1 corresponds to the last layer.Q:Why not the last hidden layer? Why second-to-last?A:The last layer is too closed to the target functions (i.e. masked language model and next sentence prediction) during pre-training, therefore may be biased to those targets. If you question about this argument and want to use the last hidden layer anyway, please feel free to setpooling_layer=-1.Q:So which layer and which pooling strategy is the best?A:It depends. Keep in mind that different BERT layers capture different information. To see that more clearly, here is a visualization onUCI-News Aggregator Dataset, where I randomly sample 20K news titles; get sentence encodes from different layers and with different pooling strategies, finally reduce it to 2D via PCA (one can of course do t-SNE as well, but that's not my point). There are only four classes of the data, illustrated in red, blue, yellow and green. To reproduce the result, please runexample7.py.Intuitively,pooling_layer=-1is close to the training output, so it may be biased to the training targets. If you don't fine tune the model, then this could lead to a bad representation.pooling_layer=-12is close to the word embedding, may preserve the very original word information (with no fancy self-attention etc.). On the other hand, you may achieve the very same performance by simply using a word-embedding only. That said, anything in-between [-1, -12] is then a trade-off.Q:Could I use other pooling techniques?A:For sure. But if you introduce newtf.variablesto the graph, then you need to train those variables before using the model. You may also want to checksome pooling techniques I mentioned in my blog post.Q:Do I need to batch the data beforeencode()?No, not at all. Just doencodeand let the server handles the rest. If the batch is too large, the server will do batching automatically and it is more efficient than doing it by yourself. No matter how many sentences you have, 10K or 100K, as long as you can hold it in client's memory, just send it to the server. Please also readthe benchmark on the client batch size.Q:Can I start multiple clients and send requests to one server simultaneously?A:Yes! That's the purpose of this repo. In fact you can start as many clients as you want. One server can handle all of them (given enough time).Q:How many requests can one service handle concurrently?A:The maximum number of concurrent requests is determined bynum_workerinbert-serving-start. If you a sending more thannum_workerrequests concurrently, the new requests will be temporally stored in a queue until a free worker becomes available.Q:So one request means one sentence?A:No. One request means a list of sentences sent from a client. Think the size of a request as the batch size. A request may contain 256, 512 or 1024 sentences. The optimal size of a request is often determined empirically. One large request can certainly improve the GPU utilization, yet it also increases the overhead of transmission. You may runpython example/example1.pyfor a simple benchmark.Q:How about the speed? 
Is it fast enough for production?A:It highly depends on themax_seq_lenand the size of a request. On a single Tesla M40 24GB withmax_seq_len=40, you should get about 470 samples per second using a 12-layer BERT. In general, I'd suggest smallermax_seq_len(25) and larger request size (512/1024).Q:Did you benchmark the efficiency?A:Yes. SeeBenchmark.To reproduce the results, please runbert-serving-benchmark.Q:What is backend based on?A:ZeroMQ.Q:What is the parallel processing model behind the scene?Q:Why does the server need two ports?One port is for pushing text data into the server, the other port is for publishing the encoded result to the client(s). In this way, we get rid of back-chatter, meaning that at every level recipients never talk back to senders. The overall message flow is strictly one-way, as depicted in the above figure. Killing back-chatter is essential to real scalability, allowing us to useBertClientin an asynchronous way.Q:Do I need Tensorflow on the client side?A:No. Think ofBertClientas a general feature extractor, whose output can be fed toanyML models, e.g.scikit-learn,pytorch,tensorflow. The only file that client need isclient.py. Copy this file to your project and import it, then you are ready to go.Q:Can I use multilingual BERT model provided by Google?A:Yes.Q:Can I use my own fine-tuned BERT model?A:Yes. In fact, this is suggested. Make sure you have the following three items inmodel_dir:A TensorFlow checkpoint (bert_model.ckpt) containing the pre-trained weights (which is actually 3 files).A vocab file (vocab.txt) to map WordPiece to word id.A config file (bert_config.json) which specifies the hyperparameters of the model.Q:Can I run it in python 2?A:Server side no, client side yes. This is based on the consideration that python 2.x might still be a major piece in some tech stack. Migrating the whole downstream stack to python 3 for supportingbert-as-servicecan take quite some effort. On the other hand, setting upBertServeris just a one-time thing, which can be evenrun in a docker container. To ease the integration, we support python 2 on the client side so that you can directly useBertClientas a part of your python 2 project, whereas the server side should always be hosted with python 3.Q:Do I need to do segmentation for Chinese?No, if you are usingthe pretrained Chinese BERT released by Googleyou don't need word segmentation. As this Chinese BERT is character-based model. It won't recognize word/phrase even if you intentionally add space in-between. To see that more clearly, this is what the BERT model actually receives after tokenization:bc.encode(['hey you','whats up?','你好么?','我 还 可以'])tokens: [CLS] hey you [SEP]
input_ids: 101 13153 8357 102 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
input_mask: 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
tokens: [CLS] what ##s up ? [SEP]
input_ids: 101 9100 8118 8644 136 102 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
input_mask: 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
tokens: [CLS] 你 好 么 ? [SEP]
input_ids: 101 872 1962 720 8043 102 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
input_mask: 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
tokens: [CLS] 我 还 可 以 [SEP]
input_ids: 101 2769 6820 1377 809 102 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
input_mask: 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0That means the word embedding is actually the character embedding for Chinese-BERT.Q:Why my (English) word is tokenized to##something?Because your word is out-of-vocabulary (OOV). The tokenizer from Google uses a greedy longest-match-first algorithm to perform tokenization using the given vocabulary.For example:input="unaffable"tokenizer_output=["un","##aff","##able"]Q:Can I use my own tokenizer?Yes. If you already tokenize the sentence on your own, simply send useencodewithList[List[Str]]as input and turn onis_tokenized, i.e.bc.encode(texts, is_tokenized=True).Q:I encounterzmq.error.ZMQError: Operation cannot be accomplished in current statewhen usingBertClient, what should I do?A:This is often due to the misuse ofBertClientin multi-thread/process environment. Note that you can’t reuse oneBertClientamong multiple threads/processes, you have to make a separate instance for each thread/process. For example, the following won't work at all:# BAD examplebc=BertClient()# in Proc1/Thread1 scope:bc.encode(lst_str)# in Proc2/Thread2 scope:bc.encode(lst_str)Instead, please do:# in Proc1/Thread1 scope:bc1=BertClient()bc1.encode(lst_str)# in Proc2/Thread2 scope:bc2=BertClient()bc2.encode(lst_str)Q:After running the server, I have several garbagetmpXXXXfolders. How can I change this behavior ?A:These folders are used by ZeroMQ to store sockets. You can choose a different location by setting the environment variableZEROMQ_SOCK_TMP_DIR:export ZEROMQ_SOCK_TMP_DIR=/tmp/Q:The cosine similarity of two sentence vectors is unreasonably high (e.g. always > 0.8), what's wrong?A:A decent representation for a downstream task doesn't mean that it will be meaningful in terms of cosine distance. Since cosine distance is a linear space where all dimensions are weighted equally. if you want to use cosine distance anyway, then please focus on the rank not the absolute value. Namely, do not use:if cosine(A, B) > 0.9, then A and B are similarPlease consider the following instead:if cosine(A, B) > cosine(A, C), then A is more similar to B than C.The graph below illustrates the pairwise similarity of 3000 Chinese sentences randomly sampled from web (char. length < 25). We compute cosine similarity based on the sentence vectors andRouge-Lbased on the raw text. The diagonal (self-correlation) is removed for the sake of clarity. As one can see, there is some positive correlation between these two metrics.Q:I'm getting bad performance, what should I do?A:This often suggests that the pretrained BERT could not generate a decent representation of your downstream task. Thus, you can fine-tune the model on the downstream task and then usebert-as-serviceto serve the fine-tuned BERT. Note that,bert-as-serviceis just a feature extraction service based on BERT. Nothing stops you from using a fine-tuned BERT.Q:Can I run the server side on CPU-only machine?A:Yes, please runbert-serving-start -cpu -max_batch_size 16. Note that, CPUs do not scale as well as GPUs to large batches, therefore themax_batch_sizeon the server side needs to be smaller, e.g. 16 or 32.Q:How can I choosenum_worker?A:Generally, the number of workers should be less than or equal to the number of GPUs or CPUs that you have. 
Otherwise, multiple workers will be allocated to one GPU/CPU, which may not scale well (and may cause out-of-memory on GPU).Q:Can I specify which GPU to use?A:Yes, you can specifying-device_mapas follows:bert-serving-start-device_map014-num_worker4-model_dir...This will start four workers and allocate them to GPU0, GPU1, GPU4 and again GPU0, respectively. In general, ifnum_worker>device_map, then devices will be reused and shared by the workers (may scale suboptimally or cause OOM); ifnum_worker<device_map, onlydevice_map[:num_worker]will be used.Note,device_mapis ignored when running on CPU.:zap: Benchmark▴ Back to topThe primary goal of benchmarking is to test the scalability and the speed of this service, which is crucial for using it in a dev/prod environment. Benchmark was done on Tesla M40 24GB, experiments were repeated 10 times and the average value is reported.To reproduce the results, please runbert-serving-benchmark--helpCommon arguments across all experiments are:ParameterValuenum_worker1,2,4max_seq_len40client_batch_size2048max_batch_size256num_client1Speed wrt.max_seq_lenmax_seq_lenis a parameter on the server side, which controls the maximum length of a sequence that a BERT model can handle. Sequences larger thanmax_seq_lenwill be truncated on the left side. Thus, if your client want to send long sequences to the model, please make sure the server can handle them correctly.Performance-wise, longer sequences means slower speed and more chance of OOM, as the multi-head self-attention (the core unit of BERT) needs to do dot products and matrix multiplications between every two symbols in the sequence.max_seq_len1 GPU2 GPU4 GPU20903177432544047391916878023143576816011923746432054108212Speed wrt.client_batch_sizeclient_batch_sizeis the number of sequences from a client when invokingencode(). For performance reason, please consider encoding sequences in batch rather than encoding them one by one.For example, do:# prepare your sent in advancebc=BertClient()my_sentences=[sforsinmy_corpus.iter()]# doing encoding in one-shotvec=bc.encode(my_sentences)DON'T:bc=BertClient()vec=[]forsinmy_corpus.iter():vec.append(bc.encode(s))It's even worse if you putBertClient()inside the loop. Don't do that.client_batch_size1 GPU2 GPU4 GPU1757472420620520182742702671633232933064365365365256382383383512432766762102445986215172048473917168140964819431809Speed wrt.num_clientnum_clientrepresents the number of concurrent clients connected to the server at the same time.num_client1 GPU2 GPU4 GPU1473919175922615121028413326753386713627016346813632173468As one can observe, 1 clients 1 GPU = 381 seqs/s, 2 clients 2 GPU 402 seqs/s, 4 clients 4 GPU 413 seqs/s. This shows the efficiency of our parallel pipeline and job scheduling, as the service can leverage the GPU time more exhaustively as concurrent requests increase.Speed wrt.max_batch_sizemax_batch_sizeis a parameter on the server side, which controls the maximum number of samples per batch per worker. If a incoming batch from client is larger thanmax_batch_size, the server will split it into small batches so that each of them is less or equal thanmax_batch_sizebefore sending it to workers.max_batch_size1 GPU2 GPU4 GPU324508871726644598971759128473931181625647391916885124648661483Speed wrt.pooling_layerpooling_layerdetermines the encoding layer that pooling operates on. For example, in a 12-layer BERT model,-1represents the layer closed to the output,-12represents the layer closed to the embedding layer. 
As one can observe below, the depth of the pooling layer affects the speed.pooling_layer1 GPU2 GPU4 GPU[-1]4388441568[-2]4759161686[-3]5169951823[-4]56910761986[-5]63311932184[-6]71113402430[-7]82015282729[-8]94517723104[-9]112820473622[-10]139225424241[-11]152327374752[-12]156829855303Speed wrt.-fp16and-xlabert-as-servicesupports two additional optimizations: half-precision and XLA, which can be turned on by adding-fp16and-xlatobert-serving-start, respectively. To enable these two options, you have to meet the following requirements:your GPU supports FP16 instructions;your Tensorflow is self-compiled with XLA and-march=native;your CUDA and cudnn are not too old.On Tesla V100 withtensorflow=1.13.0-rc0it gives:FP16 achieves ~1.4x speedup (round-trip) comparing to the FP32 counterpart. To reproduce the result, please runpython example/example1.py.Citing▴ Back to topIf you use bert-as-service in a scientific publication, we would appreciate references to the following BibTex entry:@misc{xiao2018bertservice,
title={bert-as-service},
author={Xiao, Han},
howpublished={\url{https://github.com/hanxiao/bert-as-service}},
year={2018}}
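As a small illustration of the cosine-similarity advice in the FAQ above (compare ranks, not absolute values), the following sketch assumes a running BertServer and uses only BertClient.encode plus NumPy; the sentences are arbitrary examples.

```python
# Compare ranks of cosine similarities rather than their absolute values.
# Assumes a BertServer is already running on localhost.
import numpy as np
from bert_serving.client import BertClient

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

bc = BertClient()
a, b, c = bc.encode(['the cat sits on the mat',
                     'a kitten is lying on the rug',
                     'stock prices fell sharply today'])

# Both scores may look "high"; the ranking is what carries the signal.
print('cos(A, B) =', cosine(a, b))
print('cos(A, C) =', cosine(a, c))
print('A is closer to B than to C:', cosine(a, b) > cosine(a, c))
```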
|
ai-serving-server
|
This README.md is a mirror for PyPI. Please visithttps://github.com/hanxiao/bert-as-service/blob/master/README.mdfor the latest README.md.Are you looking for X-as-service? TryJina!where X can be albert, pytorch-transformer, vgg, resnet, videobert, orANYdeep learning representation model?► Jina 101: First Thing to Learn About JinaEnglish•日本語•français•Deutsch•Русский язык•中文► From BERT-as-Service to X-as-ServiceLearn how to use Jina to extract feature vector using any deep learning representationbert-as-serviceUsing BERT model as a sentence encoding service, i.e. mapping a variable-length sentence to a fixed-length vector.Highlights•What is it•Install•Getting Started•API•Tutorials•FAQ•Benchmark•BlogMade by Han Xiao • :globe_with_meridians:https://hanxiao.github.ioWhat is itBERTis a NLP modeldeveloped by Googlefor pre-training language representations. It leverages an enormous amount of plain text data publicly available on the web and is trained in an unsupervised manner. Pre-training a BERT model is a fairly expensive yet one-time procedure for each language. Fortunately, Google released several pre-trained models whereyou can download from here.Sentence Encoding/Embeddingis a upstream task required in many NLP applications, e.g. sentiment analysis, text classification. The goal is to represent a variable length sentence into a fixed length vector, e.g.hello worldto[0.1, 0.3, 0.9]. Each element of the vector should "encode" some semantics of the original sentence.Finally,bert-as-serviceuses BERT as a sentence encoder and hosts it as a service via ZeroMQ, allowing you to map sentences into fixed-length representations in just two lines of code.Highlights:telescope:State-of-the-art: build on pretrained 12/24-layer BERT models released by Google AI, which is considered as a milestone in the NLP community.:hatching_chick:Easy-to-use: require only two lines of code to get sentence/token-level encodes.:zap:Fast: 900 sentences/s on a single Tesla M40 24GB. Low latency, optimized for speed. Seebenchmark.:octopus:Scalable: scale nicely and smoothly on multiple GPUs and multiple clients without worrying about concurrency. Seebenchmark.:gem:Reliable: tested on multi-billion sentences; days of running without a break or OOM or any nasty exceptions.More features:XLA & FP16 support; mix GPU-CPU workloads; optimized graph;tf.datafriendly; customized tokenizer; flexible pooling strategy;build-in HTTP serverand dashboard;async encoding;multicasting; etc.InstallInstall the server and client viapip. They can be installed separately or even ondifferentmachines:pipinstallbert-serving-server# serverpipinstallbert-serving-client# client, independent of `bert-serving-server`Note that the server MUST be running onPython >= 3.5withTensorflow >= 1.10(one-point-ten). Again, the server does not support Python 2!:point_up: The client can be running on both Python 2 and 3for the following consideration.Getting Started1. 
Download a Pre-trained BERT ModelDownload a model listed below, then uncompress the zip file into some folder, say/tmp/english_L-12_H-768_A-12/List of released pretrained BERT models (click to expand...)BERT-Base, Uncased12-layer, 768-hidden, 12-heads, 110M parametersBERT-Large, Uncased24-layer, 1024-hidden, 16-heads, 340M parametersBERT-Base, Cased12-layer, 768-hidden, 12-heads , 110M parametersBERT-Large, Cased24-layer, 1024-hidden, 16-heads, 340M parametersBERT-Base, Multilingual Cased (New)104 languages, 12-layer, 768-hidden, 12-heads, 110M parametersBERT-Base, Multilingual Cased (Old)102 languages, 12-layer, 768-hidden, 12-heads, 110M parametersBERT-Base, ChineseChinese Simplified and Traditional, 12-layer, 768-hidden, 12-heads, 110M parametersOptional:fine-tuning the model on your downstream task.Why is it optional?2. Start the BERT serviceAfter installing the server, you should be able to usebert-serving-startCLI as follows:bert-serving-start-model_dir/tmp/english_L-12_H-768_A-12/-num_worker=4This will start a service with four workers, meaning that it can handle up to fourconcurrentrequests. More concurrent requests will be queued in a load balancer. Details can be found in ourFAQandthe benchmark on number of clients.Below shows what the server looks like when starting correctly:Alternatively, one can start the BERT Service in a Docker Container (click to expand...)dockerbuild-tbert-as-service-f./docker/Dockerfile.NUM_WORKER=1PATH_MODEL=/PATH_TO/_YOUR_MODEL/
dockerrun--runtimenvidia-dit-p5555:5555-p5556:5556-v$PATH_MODEL:/model-tbert-as-service$NUM_WORKER3. Use Client to Get Sentence EncodesNow you can encode sentences simply as follows:frombert_serving.clientimportBertClientbc=BertClient()bc.encode(['First do it','then do it right','then do it better'])It will return andarray(orList[List[float]]if you wish), in which each row is a fixed-length vector representing a sentence. Having thousands of sentences? Justencode!Don't even bother to batch, the server will take care of it.As a feature of BERT, you may get encodes of a pair of sentences by concatenating them with|||(with whitespace before and after), e.g.bc.encode(['First do it ||| then do it right'])Below shows what the server looks like while encoding:Use BERT Service RemotelyOne may also start the service on one (GPU) machine and call it from another (CPU) machine as follows:# on another CPU machinefrombert_serving.clientimportBertClientbc=BertClient(ip='xx.xx.xx.xx')# ip address of the GPU machinebc.encode(['First do it','then do it right','then do it better'])Note that you only needpip install -U bert-serving-clientin this case, the server side is not required. You may alsocall the service via HTTP requests.:bulb:Want to learn more? Checkout our tutorials:Building a QA semantic search engine in 3 min.Serving a fine-tuned BERT modelGetting ELMo-like contextual word embeddingUsing your own tokenizerUsingBertClientwithtf.dataAPITraining a text classifier using BERT features and tf.estimator APISaving and loading with TFRecord dataAsynchronous encodingBroadcasting to multiple clientsMonitoring the service status in a dashboardUsingbert-as-serviceto serve HTTP requests in JSONStartingBertServerfrom PythonServer and Client API▴ Back to topThe best way to learnbert-as-servicelatest APIisreading the documentation.Server APIPlease always refer to the latest server-side API documented here., you may get the latest usage via:bert-serving-start--help
bert-serving-terminate --help
bert-serving-benchmark--helpArgumentTypeDefaultDescriptionmodel_dirstrRequiredfolder path of the pre-trained BERT model.tuned_model_dirstr(Optional)folder path of a fine-tuned BERT model.ckpt_namestrbert_model.ckptfilename of the checkpoint file.config_namestrbert_config.jsonfilename of the JSON config file for BERT model.graph_tmp_dirstrNonepath to graph temp filemax_seq_lenint25maximum length of sequence, longer sequence will be trimmed on the right side. Set it to NONE for dynamically using the longest sequence in a (mini)batch.cased_tokenizationboolFalseWhether tokenizer should skip the default lowercasing and accent removal. Should be used for e.g. the multilingual cased pretrained BERT model.mask_cls_sepboolFalsemasking the embedding on [CLS] and [SEP] with zero.num_workerint1number of (GPU/CPU) worker runs BERT model, each works in a separate process.max_batch_sizeint256maximum number of sequences handled by each worker, larger batch will be partitioned into small batches.priority_batch_sizeint16batch smaller than this size will be labeled as high priority, and jumps forward in the job queue to get result fasterportint5555port for pushing data from client to serverport_outint5556port for publishing results from server to clienthttp_portintNoneserver port for receiving HTTP requestscorsstr*setting "Access-Control-Allow-Origin" for HTTP requestspooling_strategystrREDUCE_MEANthe pooling strategy for generating encoding vectors, valid values areNONE,REDUCE_MEAN,REDUCE_MAX,REDUCE_MEAN_MAX,CLS_TOKEN,FIRST_TOKEN,SEP_TOKEN,LAST_TOKEN. Explanation of these strategiescan be found here. To get encoding for each token in the sequence, please set this toNONE.pooling_layerlist[-2]the encoding layer that pooling operates on, where-1means the last layer,-2means the second-to-last,[-1, -2]means concatenating the result of last two layers, etc.gpu_memory_fractionfloat0.5the fraction of the overall amount of memory that each GPU should be allocated per workercpuboolFalserun on CPU instead of GPUxlaboolFalseenableXLA compilerfor graph optimization (experimental!)fp16boolFalseuse float16 precision (experimental)device_maplist[]specify the list of GPU device ids that will be used (id starts from 0)show_tokens_to_clientboolFalsesending tokenization results to clientClient APIPlease always refer to the latest client-side API documented here.Client-side provides a Python class calledBertClient, which accepts arguments as follows:ArgumentTypeDefaultDescriptionipstrlocalhostIP address of the serverportint5555port for pushing data from client to server,must be consistent with the server side configport_outint5556port for publishing results from server to client,must be consistent with the server side configoutput_fmtstrndarraythe output format of the sentence encodes, either in numpy array or python List[List[float]] (ndarray/list)show_server_configboolFalsewhether to show server configs when first connectedcheck_versionboolTruewhether to force client and server to have the same versionidentitystrNonea UUID that identifies the client, useful in multi-castingtimeoutint-1set the timeout (milliseconds) for receive operation on the clientABertClientimplements the following methods and properties:MethodDescription.encode()Encode a list of strings to a list of vectors.encode_async()Asynchronous encode batches from a generator.fetch()Fetch all encoded vectors from server and return them in a generator, use it with.encode_async()or.encode(blocking=False). 
Sending order is NOT preserved..fetch_all()Fetch all encoded vectors from server and return them in a list, use it with.encode_async()or.encode(blocking=False). Sending order is preserved..close()Gracefully close the connection between the client and the server.statusGet the client status in JSON format.server_statusGet the server status in JSON format:book: Tutorial▴ Back to topThe full list of examples can be found inexample/. You can run each viapython example/example-k.py. Most of examples require you to start a BertServer first, please followthe instruction here. Note that althoughBertClientworks universally on both Python 2.x and 3.x, examples are only tested on Python 3.6.Table of contents (click to expand...)Building a QA semantic search engine in 3 min.Serving a fine-tuned BERT modelGetting ELMo-like contextual word embeddingUsing your own tokenizerUsingBertClientwithtf.dataAPITraining a text classifier using BERT features and tf.estimator APISaving and loading with TFRecord dataAsynchronous encodingBroadcasting to multiple clientsMonitoring the service status in a dashboardUsingbert-as-serviceto serve HTTP requests in JSONStartingBertServerfrom PythonBuilding a QA semantic search engine in 3 minutesThe complete example canbe found example8.py.As the first example, we will implement a simple QA search engine usingbert-as-servicein just three minutes. No kidding! The goal is to find similar questions to user's input and return the corresponding answer. To start, we need a list of question-answer pairs. Fortunately, this README file already containsa list of FAQ, so I will just use that to make this example perfectly self-contained. Let's first load all questions and show some statistics.prefix_q='##### **Q:** 'withopen('README.md')asfp:questions=[v.replace(prefix_q,'').strip()forvinfpifv.strip()andv.startswith(prefix_q)]print('%dquestions loaded, avg. len of%d'%(len(questions),np.mean([len(d.split())fordinquestions])))This gives33 questions loaded, avg. len of 9. So looks like we have enough questions. Now start a BertServer withuncased_L-12_H-768_A-12pretrained BERT model:bert-serving-start-num_worker=1-model_dir=/data/cips/data/lab/data/model/uncased_L-12_H-768_A-12Next, we need to encode our questions into vectors:bc=BertClient(port=4000,port_out=4001)doc_vecs=bc.encode(questions)Finally, we are ready to receive new query and perform a simple "fuzzy" search against the existing questions. To do that, every time a new query is coming, we encode it as a vector and compute its dot product withdoc_vecs; sort the result descendingly; and return the top-k similar questions as follows:whileTrue:query=input('your question: ')query_vec=bc.encode([query])[0]# compute normalized dot product as scorescore=np.sum(query_vec*doc_vecs,axis=1)/np.linalg.norm(doc_vecs,axis=1)topk_idx=np.argsort(score)[::-1][:topk]foridxintopk_idx:print('>%s\t%s'%(score[idx],questions[idx]))That's it! Now run the code and type your query, see how this search engine handles fuzzy match:Serving a fine-tuned BERT modelPretrained BERT models often show quite "okayish" performance on many tasks. However, to release the true power of BERT a fine-tuning on the downstream task (or on domain-specific data) is necessary. In this example, I will show you how to serve a fine-tuned BERT model.We follow the instruction in"Sentence (and sentence-pair) classification tasks"and userun_classifier.pyto fine tuneuncased_L-12_H-768_A-12model on MRPC task. 
The fine-tuned model is stored at /tmp/mrpc_output/, which can be changed by specifying --output_dir of run_classifier.py. If you look into /tmp/mrpc_output/, it contains something like (file name followed by size):

checkpoint                                        128
eval                                              4.0K
eval_results.txt                                  86
eval.tf_record                                    219K
events.out.tfevents.1545202214.TENCENT64.site    6.1M
events.out.tfevents.1545203242.TENCENT64.site    14M
graph.pbtxt                                       9.0M
model.ckpt-0.data-00000-of-00001                  1.3G
model.ckpt-0.index                                23K
model.ckpt-0.meta                                 3.9M
model.ckpt-343.data-00000-of-00001                1.3G
model.ckpt-343.index                              23K
model.ckpt-343.meta                               3.9M
train.tf_record2.0MDon't be afraid of those mysterious files, as the only important one to us ismodel.ckpt-343.data-00000-of-00001(looks like my training stops at the 343 step. One may getmodel.ckpt-123.data-00000-of-00001ormodel.ckpt-9876.data-00000-of-00001depending on the total training steps). Now we have collected all three pieces of information that are needed for serving this fine-tuned model:The pretrained model is downloaded to/path/to/bert/uncased_L-12_H-768_A-12Our fine-tuned model is stored at/tmp/mrpc_output/;Our fine-tuned model checkpoint is named asmodel.ckpt-343something something.Now start a BertServer by putting three pieces together:bert-serving-start-model_dir=/pretrained/uncased_L-12_H-768_A-12-tuned_model_dir=/tmp/mrpc_output/-ckpt_name=model.ckpt-343After the server started, you should find this line in the log:I:GRAPHOPT:[gra:opt: 50]:checkpoint (override by fine-tuned model): /tmp/mrpc_output/model.ckpt-343Which means the BERT parameters is overrode and successfully loaded from our fine-tuned/tmp/mrpc_output/model.ckpt-343. Done!In short, find your fine-tuned model path and checkpoint name, then feed them to-tuned_model_dirand-ckpt_name, respectively.Getting ELMo-like contextual word embeddingStart the server withpooling_strategyset to NONE.bert-serving-start-pooling_strategyNONE-model_dir/tmp/english_L-12_H-768_A-12/To get the word embedding corresponds to every token, you can simply use slice index as follows:# max_seq_len = 25# pooling_strategy = NONEbc=BertClient()vec=bc.encode(['hey you','whats up?'])vec# [2, 25, 768]vec[0]# [1, 25, 768], sentence embeddings for `hey you`vec[0][0]# [1, 1, 768], word embedding for `[CLS]`vec[0][1]# [1, 1, 768], word embedding for `hey`vec[0][2]# [1, 1, 768], word embedding for `you`vec[0][3]# [1, 1, 768], word embedding for `[SEP]`vec[0][4]# [1, 1, 768], word embedding for padding symbolvec[0][25]# error, out of index!Note that no matter how long your original sequence is, the service will always return a[max_seq_len, 768]matrix for every sequence. When using slice index to get the word embedding, beware of the special tokens padded to the sequence, i.e.[CLS],[SEP],0_PAD.Using your own tokenizerOften you want to use your own tokenizer to segment sentences instead of the default one from BERT. Simply callencode(is_tokenized=True)on the client slide as follows:texts=['hello world!','good day']# a naive whitespace tokenizertexts2=[s.split()forsintexts]vecs=bc.encode(texts2,is_tokenized=True)This gives[2, 25, 768]tensor where the first[1, 25, 768]corresponds to the token-level encoding of "hello world!". If you look into its values, you will find that only the first four elements, i.e.[1, 0:3, 768]have values, all the others are zeros. This is due to the fact that BERT considers "hello world!" as four tokens:[CLS]helloworld![SEP], the rest are padding symbols and are masked out before output.Note that there is no need to start a separate server for handling tokenized/untokenized sentences. The server can tell and handle both cases automatically.Sometimes you want to know explicitly the tokenization performed on the server side to have better understanding of the embedding result. One such case is asking word embedding from the server (with-pooling_strategy NONE), one wants to tell which word is tokenized and which is unrecognized. 
You can get such information with the following steps:enabling-show_tokens_to_clienton the server side;calling the server viaencode(..., show_tokens=True).For example, a basic usage likebc.encode(['hello world!','thisis it'],show_tokens=True)returns a tuple, where the first element is the embedding and the second is the tokenization result from the server:(array([[[ 0. , -0. , 0. , ..., 0. , -0. , -0. ],
[ 1.1100919 , -0.20474958, 0.9895898 , ..., 0.3873255 , -1.4093989 , -0.47620595],
..., -0. , -0. ]],
[[ 0. , -0. , 0. , ..., 0. , 0. , 0. ],
[ 0.6293478 , -0.4088499 , 0.6022662 , ..., 0.41740108, 1.214456 , 1.2532915 ],
..., 0. , 0. ]]], dtype=float32),
[['[CLS]', 'hello', 'world', '!', '[SEP]'], ['[CLS]', 'this', '##is', 'it', '[SEP]']])When using your own tokenization, you may still want to check if the server respects your tokens. For example,bc.encode([['hello','world!'],['thisis','it']],show_tokens=True,is_tokenized=True)returns:(array([[[ 0. , -0. , 0. , ..., 0. , -0. , 0. ],
[ 1.1111546 , -0.56572634, 0.37183186, ..., 0.02397121, -0.5445367 , 1.1009651 ],
..., -0. , 0. ]],
[[ 0. , 0. , 0. , ..., 0. , -0. , 0. ],
[ 0.39262453, 0.3782491 , 0.27096173, ..., 0.7122045 , -0.9874849 , 0.9318679 ],
..., -0. , 0. ]]], dtype=float32),
[['[CLS]', 'hello', '[UNK]', '[SEP]'], ['[CLS]', '[UNK]', 'it', '[SEP]']])One can observe thatworld!andthisisare not recognized on the server, hence they are set to[UNK].Finally, beware that the pretrained BERT Chinese from Google is character-based, i.e. its vocabulary is made of single Chinese characters. Therefore it makes no sense if you use word-level segmentation algorithm to pre-process the data and feed to such model.Extremely curious readers may notice that the first row in the above example is all-zero even though the tokenization result includes[CLS](well done, detective!). The reason is that the tokenization result willalwaysincludes[CLS]and[UNK]regardless the setting of-mask_cls_sep. This could be useful when you want to align the tokens afterwards. Remember,-mask_cls_seponly masks[CLS]and[SEP]out of the computation. It doesn't affect the tokenization algorithm.UsingBertClientwithtf.dataAPIThe complete example canbe found example4.py. There is alsoan example in Keras.Thetf.dataAPI enables you to build complex input pipelines from simple, reusable pieces. One can also useBertClientto encode sentences on-the-fly and use the vectors in a downstream model. Here is an example:batch_size=256num_parallel_calls=4num_clients=num_parallel_calls*2# should be at least greater than `num_parallel_calls`# start a pool of clientsbc_clients=[BertClient(show_server_config=False)for_inrange(num_clients)]defget_encodes(x):# x is `batch_size` of lines, each of which is a json objectsamples=[json.loads(l)forlinx]text=[s['raw_text']forsinsamples]# List[List[str]]labels=[s['label']forsinsamples]# List[str]# get a client from available clientsbc_client=bc_clients.pop()features=bc_client.encode(text)# after use, put it backbc_clients.append(bc_client)returnfeatures,labelsds=(tf.data.TextLineDataset(train_fp).batch(batch_size).map(lambdax:tf.py_func(get_encodes,[x],[tf.float32,tf.string]),num_parallel_calls=num_parallel_calls).map(lambdax,y:{'feature':x,'label':y}).make_one_shot_iterator().get_next())The trick here is to start a pool ofBertClientand reuse them one by one. In this way, we can fully harness the power ofnum_parallel_callsofDataset.map()API.Training a text classifier using BERT features andtf.estimatorAPIThe complete example canbe found example5.py.Following the last example, we can easily extend it to a full classifier usingtf.estimatorAPI. One only need minor change on the input function as follows:estimator=DNNClassifier(hidden_units=[512],feature_columns=[tf.feature_column.numeric_column('feature',shape=(768,))],n_classes=len(laws),config=run_config,label_vocabulary=laws_str,dropout=0.1)input_fn=lambdafp:(tf.data.TextLineDataset(fp).apply(tf.contrib.data.shuffle_and_repeat(buffer_size=10000)).batch(batch_size).map(lambdax:tf.py_func(get_encodes,[x],[tf.float32,tf.string]),num_parallel_calls=num_parallel_calls).map(lambdax,y:({'feature':x},y)).prefetch(20))train_spec=TrainSpec(input_fn=lambda:input_fn(train_fp))eval_spec=EvalSpec(input_fn=lambda:input_fn(eval_fp),throttle_secs=0)train_and_evaluate(estimator,train_spec,eval_spec)The complete example canbe found example5.py, in which a simple MLP is built on BERT features for predicting the relevant articles according to the fact description in the law documents. The problem is a part of theChinese AI and Law Challenge Competition.Saving and loading with TFRecord dataThe complete example canbe found example6.py.The TFRecord file format is a simple record-oriented binary format that many TensorFlow applications use for training data. 
You can also pre-encode all your sequences and store their encodings to a TFRecord file, then later load it to build atf.Dataset. For example, to write encoding into a TFRecord file:bc=BertClient()list_vec=bc.encode(lst_str)list_label=[0for_inlst_str]# a dummy list of all-zero labels# write to tfrecordwithtf.python_io.TFRecordWriter('tmp.tfrecord')aswriter:defcreate_float_feature(values):returntf.train.Feature(float_list=tf.train.FloatList(value=values))defcreate_int_feature(values):returntf.train.Feature(int64_list=tf.train.Int64List(value=list(values)))for(vec,label)inzip(list_vec,list_label):features={'features':create_float_feature(vec),'labels':create_int_feature([label])}tf_example=tf.train.Example(features=tf.train.Features(feature=features))writer.write(tf_example.SerializeToString())Now we can load from it and build atf.Dataset:def_decode_record(record):"""Decodes a record to a TensorFlow example."""returntf.parse_single_example(record,{'features':tf.FixedLenFeature([768],tf.float32),'labels':tf.FixedLenFeature([],tf.int64),})ds=(tf.data.TFRecordDataset('tmp.tfrecord').repeat().shuffle(buffer_size=100).apply(tf.contrib.data.map_and_batch(lambdarecord:_decode_record(record),batch_size=64)).make_one_shot_iterator().get_next())To save word/token-level embedding to TFRecord, one needs to first flatten[max_seq_len, num_hidden]tensor into an 1D array as follows:defcreate_float_feature(values):returntf.train.Feature(float_list=tf.train.FloatList(value=values.reshape(-1)))And later reconstruct the shape when loading it:name_to_features={"feature":tf.FixedLenFeature([max_seq_length*num_hidden],tf.float32),"label_ids":tf.FixedLenFeature([],tf.int64),}def_decode_record(record,name_to_features):"""Decodes a record to a TensorFlow example."""example=tf.parse_single_example(record,name_to_features)example['feature']=tf.reshape(example['feature'],[max_seq_length,-1])returnexampleBe careful, this will generate a huge TFRecord file.Asynchronous encodingThe complete example canbe found example2.py.BertClient.encode()offers a nice synchronous way to get sentence encodes. However, sometimes we want to do it in an asynchronous manner by feeding all textual data to the server first, fetching the encoded results later. This can be easily done by:# an endless data stream, generating data in an extremely fast speeddeftext_gen():whileTrue:yieldlst_str# yield a batch of text linesbc=BertClient()# get encoded vectorsforjinbc.encode_async(text_gen(),max_num_batch=10):print('received%dx%d'%(j.shape[0],j.shape[1]))Broadcasting to multiple clientsThe complete example canbe found in example3.py.The encoded result is routed to the client according to its identity. If you have multiple clients with same identity, then they all receive the results! You can use thismulticastfeature to do some cool things, e.g. training multiple different models (some usingscikit-learnsome usingtensorflow) in multiple separated processes while only callBertServeronce. 
In the example below,bcand its two clones will all receive encoded vector.# clone a client by reusing the identitydefclient_clone(id,idx):bc=BertClient(identity=id)forjinbc.listen():print('clone-client-%d: received%dx%d'%(idx,j.shape[0],j.shape[1]))bc=BertClient()# start two cloned clients sharing the same identity as bcforjinrange(2):threading.Thread(target=client_clone,args=(bc.identity,j)).start()for_inrange(3):bc.encode(lst_str)Monitoring the service status in a dashboardThe complete example canbe found in plugin/dashboard/.As a part of the infrastructure, one may also want to monitor the service status and show it in a dashboard. To do that, we can use:bc=BertClient(ip='server_ip')json.dumps(bc.server_status,ensure_ascii=False)This gives the current status of the server including number of requests, number of clients etc. in JSON format. The only thing remained is to start a HTTP server for returning this JSON to the frontend that renders it.Alternatively, one may simply expose an HTTP port when starting a server via:bert-serving-start-http_port8081-model_dir...This will allow one to use javascript orcurlto fetch the server status at port 8081.plugin/dashboard/index.htmlshows a simple dashboard based on Bootstrap and Vue.js.Usingbert-as-serviceto serve HTTP requests in JSONBesides callingbert-as-servicefrom Python, one can also call it via HTTP request in JSON. It is quite useful especially when low transport layer is prohibited. Behind the scene,bert-as-servicespawns a Flask server in a separate process and then reuse aBertClientinstance as a proxy to communicate with the ventilator.To enable the build-in HTTP server, we need to first (re)install the server with some extra Python dependencies:pipinstall-Ubert-serving-server[http]Then simply start the server with:bert-serving-start-model_dir=/YOUR_MODEL-http_port8125Done! Your server is now listening HTTP and TCP requests at port8125simultaneously!To send a HTTP request, first prepare the payload in JSON as following:{"id":123,"texts":["hello world","good day!"],"is_tokenized":false}, whereidis a unique identifier helping you to synchronize the results;is_tokenizedfollows the meaning inBertClientAPIandfalseby default.Then simply call the server at/encodevia HTTP POST request. You can use javascript or whatever, here is an example usingcurl:curl-XPOSThttp://xx.xx.xx.xx:8125/encode\-H'content-type: application/json'\-d'{"id": 123,"texts": ["hello world"], "is_tokenized": false}', which returns a JSON:{"id":123,"results":[[768float-list],[768float-list]],"status":200}To get the server's status and client's status, you can send GET requests at/status/serverand/status/client, respectively.Finally, one may also config CORS to restrict the public access of the server by specifying-corswhen startingbert-serving-start. By default-cors=*, meaning the server is public accessible.StartingBertServerfrom PythonBesides shell, one can also start aBertServerfrom python. 
Simply dofrombert_serving.server.helperimportget_args_parserfrombert_serving.serverimportBertServerargs=get_args_parser().parse_args(['-model_dir','YOUR_MODEL_PATH_HERE','-port','5555','-port_out','5556','-max_seq_len','NONE','-mask_cls_sep','-cpu'])server=BertServer(args)server.start()Note that it's basically mirroring the arg-parsing behavior in CLI, so everything in that.parse_args([])list should be string, e.g.['-port', '5555']not['-port', 5555].To shutdown the server, you may call the static method inBertServerclass via:BertServer.shutdown(port=5555)Or via shell CLI:bert-serving-terminate-port5555This will terminate the server running on localhost at port 5555. You may also use it to terminate a remote server, seebert-serving-terminate --helpfor details.:speech_balloon: FAQ▴ Back to topQ:Do you have a paper or other written explanation to introduce your model's details?The design philosophy and technical details can be foundin my blog post.Q:Where is the BERT code come from?A:BERT code of this repois forked from theoriginal BERT repowith necessary modification,especially in extract_features.py.Q:How large is a sentence vector?In general, each sentence is translated to a 768-dimensional vector. Depending on the pretrained BERT you are using,pooling_strategyandpooling_layerthe dimensions of the output vector could be different.Q:How do you get the fixed representation? Did you do pooling or something?A:Yes, pooling is required to get a fixed representation of a sentence. In the default strategyREDUCE_MEAN, I take the second-to-last hidden layer of all of the tokens in the sentence and do average pooling.Q:Are you suggesting using BERT without fine-tuning?A:Yes and no. On the one hand, Google pretrained BERT on Wikipedia data, thus should encode enough prior knowledge of the language into the model. Having such feature is not a bad idea. On the other hand, these prior knowledge is not specific to any particular domain. It should be totally reasonable if the performance is not ideal if you are using it on, for example, classifying legal cases. Nonetheless, you can always first fine-tune your own BERT on the downstream task and then usebert-as-serviceto extract the feature vectors efficiently. Keep in mind thatbert-as-serviceis just a feature extraction service based on BERT. Nothing stops you from using a fine-tuned BERT.Q:Can I get a concatenation of several layers instead of a single layer ?A:Sure! Just use a list of the layer you want to concatenate when calling the server. Example:bert-serving-start-pooling_layer-4-3-2-1-model_dir/tmp/english_L-12_H-768_A-12/Q:What are the available pooling strategies?A:Here is a table summarizes all pooling strategies I implemented. Choose your favorite one by specifyingbert-serving-start -pooling_strategy.StrategyDescriptionNONEno pooling at all, useful when you want to use word embedding instead of sentence embedding. This will results in a[max_seq_len, 768]encode matrix for a sequence.REDUCE_MEANtake the average of the hidden state of encoding layer on the time axisREDUCE_MAXtake the maximum of the hidden state of encoding layer on the time axisREDUCE_MEAN_MAXdoREDUCE_MEANandREDUCE_MAXseparately and then concat them together on the last axis, resulting in 1536-dim sentence encodesCLS_TOKENorFIRST_TOKENget the hidden state corresponding to[CLS], i.e. the first tokenSEP_TOKENorLAST_TOKENget the hidden state corresponding to[SEP], i.e. the last tokenQ:Why not use the hidden state of the first token as default strategy, i.e. 
the[CLS]?A:Because a pre-trained model is not fine-tuned on any downstream tasks yet. In this case, the hidden state of[CLS]is not a good sentence representation. If later you fine-tune the model, you may use[CLS]as well.Q:BERT has 12/24 layers, so which layer are you talking about?A:By default this service works on the second last layer, i.e.pooling_layer=-2. You can change it by settingpooling_layerto other negative values, e.g. -1 corresponds to the last layer.Q:Why not the last hidden layer? Why second-to-last?A:The last layer is too closed to the target functions (i.e. masked language model and next sentence prediction) during pre-training, therefore may be biased to those targets. If you question about this argument and want to use the last hidden layer anyway, please feel free to setpooling_layer=-1.Q:So which layer and which pooling strategy is the best?A:It depends. Keep in mind that different BERT layers capture different information. To see that more clearly, here is a visualization onUCI-News Aggregator Dataset, where I randomly sample 20K news titles; get sentence encodes from different layers and with different pooling strategies, finally reduce it to 2D via PCA (one can of course do t-SNE as well, but that's not my point). There are only four classes of the data, illustrated in red, blue, yellow and green. To reproduce the result, please runexample7.py.Intuitively,pooling_layer=-1is close to the training output, so it may be biased to the training targets. If you don't fine tune the model, then this could lead to a bad representation.pooling_layer=-12is close to the word embedding, may preserve the very original word information (with no fancy self-attention etc.). On the other hand, you may achieve the very same performance by simply using a word-embedding only. That said, anything in-between [-1, -12] is then a trade-off.Q:Could I use other pooling techniques?A:For sure. But if you introduce newtf.variablesto the graph, then you need to train those variables before using the model. You may also want to checksome pooling techniques I mentioned in my blog post.Q:Do I need to batch the data beforeencode()?No, not at all. Just doencodeand let the server handles the rest. If the batch is too large, the server will do batching automatically and it is more efficient than doing it by yourself. No matter how many sentences you have, 10K or 100K, as long as you can hold it in client's memory, just send it to the server. Please also readthe benchmark on the client batch size.Q:Can I start multiple clients and send requests to one server simultaneously?A:Yes! That's the purpose of this repo. In fact you can start as many clients as you want. One server can handle all of them (given enough time).Q:How many requests can one service handle concurrently?A:The maximum number of concurrent requests is determined bynum_workerinbert-serving-start. If you a sending more thannum_workerrequests concurrently, the new requests will be temporally stored in a queue until a free worker becomes available.Q:So one request means one sentence?A:No. One request means a list of sentences sent from a client. Think the size of a request as the batch size. A request may contain 256, 512 or 1024 sentences. The optimal size of a request is often determined empirically. One large request can certainly improve the GPU utilization, yet it also increases the overhead of transmission. You may runpython example/example1.pyfor a simple benchmark.Q:How about the speed? 
Is it fast enough for production?A:It highly depends on themax_seq_lenand the size of a request. On a single Tesla M40 24GB withmax_seq_len=40, you should get about 470 samples per second using a 12-layer BERT. In general, I'd suggest smallermax_seq_len(25) and larger request size (512/1024).Q:Did you benchmark the efficiency?A:Yes. SeeBenchmark.To reproduce the results, please runbert-serving-benchmark.Q:What is backend based on?A:ZeroMQ.Q:What is the parallel processing model behind the scene?Q:Why does the server need two ports?One port is for pushing text data into the server, the other port is for publishing the encoded result to the client(s). In this way, we get rid of back-chatter, meaning that at every level recipients never talk back to senders. The overall message flow is strictly one-way, as depicted in the above figure. Killing back-chatter is essential to real scalability, allowing us to useBertClientin an asynchronous way.Q:Do I need Tensorflow on the client side?A:No. Think ofBertClientas a general feature extractor, whose output can be fed toanyML models, e.g.scikit-learn,pytorch,tensorflow. The only file that client need isclient.py. Copy this file to your project and import it, then you are ready to go.Q:Can I use multilingual BERT model provided by Google?A:Yes.Q:Can I use my own fine-tuned BERT model?A:Yes. In fact, this is suggested. Make sure you have the following three items inmodel_dir:A TensorFlow checkpoint (bert_model.ckpt) containing the pre-trained weights (which is actually 3 files).A vocab file (vocab.txt) to map WordPiece to word id.A config file (bert_config.json) which specifies the hyperparameters of the model.Q:Can I run it in python 2?A:Server side no, client side yes. This is based on the consideration that python 2.x might still be a major piece in some tech stack. Migrating the whole downstream stack to python 3 for supportingbert-as-servicecan take quite some effort. On the other hand, setting upBertServeris just a one-time thing, which can be evenrun in a docker container. To ease the integration, we support python 2 on the client side so that you can directly useBertClientas a part of your python 2 project, whereas the server side should always be hosted with python 3.Q:Do I need to do segmentation for Chinese?No, if you are usingthe pretrained Chinese BERT released by Googleyou don't need word segmentation. As this Chinese BERT is character-based model. It won't recognize word/phrase even if you intentionally add space in-between. To see that more clearly, this is what the BERT model actually receives after tokenization:bc.encode(['hey you','whats up?','你好么?','我 还 可以'])tokens: [CLS] hey you [SEP]
input_ids: 101 13153 8357 102 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
input_mask: 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
tokens: [CLS] what ##s up ? [SEP]
input_ids: 101 9100 8118 8644 136 102 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
input_mask: 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
tokens: [CLS] 你 好 么 ? [SEP]
input_ids: 101 872 1962 720 8043 102 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
input_mask: 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
tokens: [CLS] 我 还 可 以 [SEP]
input_ids: 101 2769 6820 1377 809 102 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
input_mask: 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0That means the word embedding is actually the character embedding for Chinese-BERT.Q:Why my (English) word is tokenized to##something?Because your word is out-of-vocabulary (OOV). The tokenizer from Google uses a greedy longest-match-first algorithm to perform tokenization using the given vocabulary.For example:input="unaffable"tokenizer_output=["un","##aff","##able"]Q:Can I use my own tokenizer?Yes. If you already tokenize the sentence on your own, simply send useencodewithList[List[Str]]as input and turn onis_tokenized, i.e.bc.encode(texts, is_tokenized=True).Q:I encounterzmq.error.ZMQError: Operation cannot be accomplished in current statewhen usingBertClient, what should I do?A:This is often due to the misuse ofBertClientin multi-thread/process environment. Note that you can’t reuse oneBertClientamong multiple threads/processes, you have to make a separate instance for each thread/process. For example, the following won't work at all:# BAD examplebc=BertClient()# in Proc1/Thread1 scope:bc.encode(lst_str)# in Proc2/Thread2 scope:bc.encode(lst_str)Instead, please do:# in Proc1/Thread1 scope:bc1=BertClient()bc1.encode(lst_str)# in Proc2/Thread2 scope:bc2=BertClient()bc2.encode(lst_str)Q:After running the server, I have several garbagetmpXXXXfolders. How can I change this behavior ?A:These folders are used by ZeroMQ to store sockets. You can choose a different location by setting the environment variableZEROMQ_SOCK_TMP_DIR:export ZEROMQ_SOCK_TMP_DIR=/tmp/Q:The cosine similarity of two sentence vectors is unreasonably high (e.g. always > 0.8), what's wrong?A:A decent representation for a downstream task doesn't mean that it will be meaningful in terms of cosine distance. Since cosine distance is a linear space where all dimensions are weighted equally. if you want to use cosine distance anyway, then please focus on the rank not the absolute value. Namely, do not use:if cosine(A, B) > 0.9, then A and B are similarPlease consider the following instead:if cosine(A, B) > cosine(A, C), then A is more similar to B than C.The graph below illustrates the pairwise similarity of 3000 Chinese sentences randomly sampled from web (char. length < 25). We compute cosine similarity based on the sentence vectors andRouge-Lbased on the raw text. The diagonal (self-correlation) is removed for the sake of clarity. As one can see, there is some positive correlation between these two metrics.Q:I'm getting bad performance, what should I do?A:This often suggests that the pretrained BERT could not generate a decent representation of your downstream task. Thus, you can fine-tune the model on the downstream task and then usebert-as-serviceto serve the fine-tuned BERT. Note that,bert-as-serviceis just a feature extraction service based on BERT. Nothing stops you from using a fine-tuned BERT.Q:Can I run the server side on CPU-only machine?A:Yes, please runbert-serving-start -cpu -max_batch_size 16. Note that, CPUs do not scale as well as GPUs to large batches, therefore themax_batch_sizeon the server side needs to be smaller, e.g. 16 or 32.Q:How can I choosenum_worker?A:Generally, the number of workers should be less than or equal to the number of GPUs or CPUs that you have. 
Otherwise, multiple workers will be allocated to one GPU/CPU, which may not scale well (and may cause out-of-memory on GPU).Q:Can I specify which GPU to use?A:Yes, you can specifying-device_mapas follows:bert-serving-start-device_map014-num_worker4-model_dir...This will start four workers and allocate them to GPU0, GPU1, GPU4 and again GPU0, respectively. In general, ifnum_worker>device_map, then devices will be reused and shared by the workers (may scale suboptimally or cause OOM); ifnum_worker<device_map, onlydevice_map[:num_worker]will be used.Note,device_mapis ignored when running on CPU.:zap: Benchmark▴ Back to topThe primary goal of benchmarking is to test the scalability and the speed of this service, which is crucial for using it in a dev/prod environment. Benchmark was done on Tesla M40 24GB, experiments were repeated 10 times and the average value is reported.To reproduce the results, please runbert-serving-benchmark--helpCommon arguments across all experiments are:ParameterValuenum_worker1,2,4max_seq_len40client_batch_size2048max_batch_size256num_client1Speed wrt.max_seq_lenmax_seq_lenis a parameter on the server side, which controls the maximum length of a sequence that a BERT model can handle. Sequences larger thanmax_seq_lenwill be truncated on the left side. Thus, if your client want to send long sequences to the model, please make sure the server can handle them correctly.Performance-wise, longer sequences means slower speed and more chance of OOM, as the multi-head self-attention (the core unit of BERT) needs to do dot products and matrix multiplications between every two symbols in the sequence.max_seq_len1 GPU2 GPU4 GPU20903177432544047391916878023143576816011923746432054108212Speed wrt.client_batch_sizeclient_batch_sizeis the number of sequences from a client when invokingencode(). For performance reason, please consider encoding sequences in batch rather than encoding them one by one.For example, do:# prepare your sent in advancebc=BertClient()my_sentences=[sforsinmy_corpus.iter()]# doing encoding in one-shotvec=bc.encode(my_sentences)DON'T:bc=BertClient()vec=[]forsinmy_corpus.iter():vec.append(bc.encode(s))It's even worse if you putBertClient()inside the loop. Don't do that.client_batch_size1 GPU2 GPU4 GPU1757472420620520182742702671633232933064365365365256382383383512432766762102445986215172048473917168140964819431809Speed wrt.num_clientnum_clientrepresents the number of concurrent clients connected to the server at the same time.num_client1 GPU2 GPU4 GPU1473919175922615121028413326753386713627016346813632173468As one can observe, 1 clients 1 GPU = 381 seqs/s, 2 clients 2 GPU 402 seqs/s, 4 clients 4 GPU 413 seqs/s. This shows the efficiency of our parallel pipeline and job scheduling, as the service can leverage the GPU time more exhaustively as concurrent requests increase.Speed wrt.max_batch_sizemax_batch_sizeis a parameter on the server side, which controls the maximum number of samples per batch per worker. If a incoming batch from client is larger thanmax_batch_size, the server will split it into small batches so that each of them is less or equal thanmax_batch_sizebefore sending it to workers.max_batch_size1 GPU2 GPU4 GPU324508871726644598971759128473931181625647391916885124648661483Speed wrt.pooling_layerpooling_layerdetermines the encoding layer that pooling operates on. For example, in a 12-layer BERT model,-1represents the layer closed to the output,-12represents the layer closed to the embedding layer. 
As one can observe below, the depth of the pooling layer affects the speed.pooling_layer1 GPU2 GPU4 GPU[-1]4388441568[-2]4759161686[-3]5169951823[-4]56910761986[-5]63311932184[-6]71113402430[-7]82015282729[-8]94517723104[-9]112820473622[-10]139225424241[-11]152327374752[-12]156829855303Speed wrt.-fp16and-xlabert-as-servicesupports two additional optimizations: half-precision and XLA, which can be turned on by adding-fp16and-xlatobert-serving-start, respectively. To enable these two options, you have to meet the following requirements:your GPU supports FP16 instructions;your Tensorflow is self-compiled with XLA and-march=native;your CUDA and cudnn are not too old.On Tesla V100 withtensorflow=1.13.0-rc0it gives:FP16 achieves ~1.4x speedup (round-trip) comparing to the FP32 counterpart. To reproduce the result, please runpython example/example1.py.Citing▴ Back to topIf you use bert-as-service in a scientific publication, we would appreciate references to the following BibTex entry:@misc{xiao2018bertservice,
title={bert-as-service},
author={Xiao, Han},
howpublished={\url{https://github.com/hanxiao/bert-as-service}},
year={2018}}
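As a compact recap of the client-side workflow covered in the tutorial and FAQ sections above, the sketch below encodes a few sentences with BertClient and ranks them against a query by cosine similarity. It assumes a server is already running on localhost with the default ports; the sentences, the query, and the ranking helper are illustrative, not part of the package itself.

import numpy as np
from bert_serving.client import BertClient

sentences = ['First do it', 'then do it right', 'then do it better']
query = 'doing things correctly'

bc = BertClient()                  # connects to localhost:5555/5556 by default
doc_vecs = bc.encode(sentences)    # ndarray of shape [len(sentences), 768]
query_vec = bc.encode([query])[0]  # ndarray of shape [768]

# Rank by cosine similarity; as noted in the FAQ, compare ranks rather than
# reading too much into the absolute similarity values.
scores = doc_vecs @ query_vec / (
    np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(query_vec))
for idx in np.argsort(scores)[::-1]:
    print('%.4f\t%s' % (scores[idx], sentences[idx]))

bc.close()                         # gracefully close the connection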
|
aiset
|
No description available on PyPI.
|
aisfx
|
aiSFX
Representation Learning for the Automatic Indexing of Sound Effects Libraries (ISMIR 2022): deep audio embeddings pre-trained on UCS & non-UCS-compliant datasets.

This work was inspired by the creation of the Universal Category System (UCS), an industry-proposed public domain initiative initialized by Tim Nielsen, Justin Drury, Kai Paquin, and others. First launched in the fall of 2020, UCS offers a standardized framework for sound effects library metadata designed by and for sound designers and editors.

How To Use
Please refer to this package's documentation for installation instructions and tutorials on how to extract embeddings.

Visualizations of UCS Classes
Visualizations of coarse-level "Category" UCS classes are available for Pro Sound Effects (PSE), Soundly (SDLY), and UCS Mixed (UMIX).

Cite This Work
Please cite the paper below if you use this package in your work. The paper was accepted at the 23rd International Society for Music Information Retrieval Conference (ISMIR) in Bengaluru, India (December 04-08, 2022).

[1] Representation Learning for the Automatic Indexing of Sound Effects Libraries

@inproceedings{ismir_aisfx,
title={Representation Learning for the Automatic Indexing of Sound Effects Libraries},
author={Ma, Alison Bernice and Lerch, Alexander},
booktitle={Proceedings of the 23rd International Society for Music Information Retrieval Conference (ISMIR)},
year={2022},
pages={866--875}
}

Acknowledgements
We would like to thank those who provided the data required to conduct this research, as well as those who took the time to share their insights and software licenses for tools regarding sound search, query, and retrieval:

Universal Category System (UCS) • Alex Lane • All You Can Eat Audio • Articulated Sounds • Audio Shade • aXLSound • Big Sound Bank • BaseHead • Bonson • BOOM Library • Frick & Traa • Hzandbits • InspectorJ • Kai Paquin • KEDR Audio • Krotos Audio • Nikola Simikic • Penguin Grenade • Pro Sound Effects • Rick Allen Creative • Sononym • Sound Ideas • Soundly • Soundminer • Storyblocks • Tim Nielsen • Thomas Rex Beverly • ZapSplat

License: Pre-trained Model & Paper
This pre-trained model and paper [1] are made available under a Creative Commons Attribution 4.0 International License (CC BY 4.0).
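As a closing illustration of the workflow described under "How To Use", the sketch below shows how fixed-size embeddings from a model like this could be used to index and query a small sound effects library. The extract_embedding function and the 512-dimensional size are hypothetical stand-ins, not the real aisfx API (see the package documentation for the actual extraction calls); only the NumPy cosine-ranking part is meant literally.

import numpy as np

def extract_embedding(path):
    # Hypothetical stand-in for the aisfx extractor; the real entry point is
    # documented in the package tutorials. Returns a deterministic random
    # vector so the sketch stays runnable.
    rng = np.random.default_rng(abs(hash(path)) % (2 ** 32))
    return rng.standard_normal(512)

library = ['whoosh_01.wav', 'door_slam_03.wav', 'rain_loop_02.wav']
index = np.stack([extract_embedding(p) for p in library])   # [n_files, dim]

query_vec = extract_embedding('my_new_recording.wav')
scores = index @ query_vec / (
    np.linalg.norm(index, axis=1) * np.linalg.norm(query_vec))
for i in np.argsort(scores)[::-1]:
    print('%.3f  %s' % (scores[i], library[i]))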
|
aisg-cli
|
Command line interface to simplify machine learning workflows - data acquisition, modeling, deployment
|
aish
|
Aish - ChatGPT CLI

This command-line interface (CLI) application is used to interact with OpenAI models through the OpenAI API. The chatbot takes an input prompt and returns a response from the selected model.

Installation
Ensure you have Python 3.7+ installed. To install the required libraries, use:

pip install aish

Usage
To use the application, you need to set the environment variable OPENAI_API_KEY with your OpenAI API key. Then, you can run the script from the terminal using commands such as:

aish How can i list all files older than 30 days?
aish -s How can i list all files older than 30 days?
aish -c code Write a hello world app in python

Optional parameters will take default values if not provided:

ModelVersion: The model version that you want to use. Default is "gpt-3.5-turbo".
TemperatureValue: The randomness of the AI's responses. A lower value makes the output more focused and deterministic, while higher values produce more diverse and random outputs. Default is 0.5.
TopPValue: A parameter for controlling randomness. A higher value generates more random responses, and a lower value generates more deterministic responses. Default is 0.5.
TimeoutValue: The maximum time in seconds that the request will wait for a response from the API. Default is 60.

Help
You can display the help message, which provides details about the command usage and the different parameters, by running:

aish --help
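For illustration only, the snippet below shows roughly the kind of request a wrapper like this sends, wiring the documented model, temperature, top-p and timeout defaults into the pre-1.0 openai Python client. This is a hedged sketch of the idea, not aish's actual source code.

import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def ask(prompt, model="gpt-3.5-turbo", temperature=0.5, top_p=0.5, timeout=60):
    # Defaults mirror the documented aish defaults; the keyword names are
    # those of the pre-1.0 `openai` package.
    response = openai.ChatCompletion.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=temperature,
        top_p=top_p,
        request_timeout=timeout,
    )
    return response["choices"][0]["message"]["content"]

print(ask("How can i list all files older than 30 days?"))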
|
aishapdf
|
Here is our first test.
|
aish-distribution-test
|
No description available on PyPI.
|