# STDOUT Trace Exporter
[PkgGoDev](https://pkg.go.dev/go.opentelemetry.io/otel/exporters/stdout/stdouttrace) | {
"source": "yandex/perforator",
"title": "vendor/go.opentelemetry.io/otel/exporters/stdout/stdouttrace/README.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/go.opentelemetry.io/otel/exporters/stdout/stdouttrace/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 192
} |
# Experimental Features
The SDK contains features that have not yet stabilized in the OpenTelemetry specification.
These features are added to the OpenTelemetry Go SDK prior to stabilization in the specification so that users can start experimenting with them and provide feedback.
These features may change in backwards-incompatible ways as feedback is applied.
See the [Compatibility and Stability](#compatibility-and-stability) section for more information.
## Features
- [Resource](#resource)
### Resource
[OpenTelemetry resource semantic conventions] include many attribute definitions that are defined as experimental.
To have experimental semantic conventions added by [resource detectors], set the `OTEL_GO_X_RESOURCE` environment variable.
The value must be the case-insensitive string `"true"` to enable the feature.
All other values are ignored.
<!-- TODO: document what attributes are added by which detector -->
[OpenTelemetry resource semantic conventions]: https://opentelemetry.io/docs/specs/semconv/resource/
[resource detectors]: https://pkg.go.dev/go.opentelemetry.io/otel/sdk/resource#Detector
#### Examples
Enable experimental resource semantic conventions.
```console
export OTEL_GO_X_RESOURCE=true
```
Disable experimental resource semantic conventions.
```console
unset OTEL_GO_X_RESOURCE
```
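For a quick end-to-end check, here is a minimal Go sketch (not part of this package) that sets the flag programmatically before resource detection runs. The detector options used (`resource.WithHost`, `resource.WithProcess`) are standard SDK options; which experimental attributes actually appear is up to the detectors.
```go
package main

import (
	"context"
	"fmt"
	"log"
	"os"

	"go.opentelemetry.io/otel/sdk/resource"
)

func main() {
	// Equivalent to `export OTEL_GO_X_RESOURCE=true`; must be set before
	// the detectors run.
	os.Setenv("OTEL_GO_X_RESOURCE", "true")

	res, err := resource.New(context.Background(),
		resource.WithHost(),    // host.* attributes
		resource.WithProcess(), // process.* attributes
	)
	if err != nil {
		log.Fatal(err)
	}
	// Print every detected attribute, experimental ones included.
	for _, attr := range res.Attributes() {
		fmt.Printf("%s = %s\n", attr.Key, attr.Value.Emit())
	}
}
```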
## Compatibility and Stability
Experimental features do not fall within the scope of the OpenTelemetry Go versioning and stability [policy](../../../VERSIONING.md).
These features may be removed or modified in successive version releases, including patch versions.
When an experimental feature is promoted to a stable feature, a migration path will be included in the changelog entry of the release.
There is no guarantee that any environment variable feature flags that enabled the experimental feature will be supported by the stable version.
If they are supported, they may be accompanied by a deprecation notice stating a timeline for the removal of that support. | {
"source": "yandex/perforator",
"title": "vendor/go.opentelemetry.io/otel/sdk/internal/x/README.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/go.opentelemetry.io/otel/sdk/internal/x/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 2010
} |
# SDK Trace test
[PkgGoDev](https://pkg.go.dev/go.opentelemetry.io/otel/sdk/trace/tracetest) | {
"source": "yandex/perforator",
"title": "vendor/go.opentelemetry.io/otel/sdk/trace/tracetest/README.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/go.opentelemetry.io/otel/sdk/trace/tracetest/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 167
} |
# Semconv v1.13.0 HTTP conv
[PkgGoDev](https://pkg.go.dev/go.opentelemetry.io/otel/semconv/v1.13.0/httpconv) | {
"source": "yandex/perforator",
"title": "vendor/go.opentelemetry.io/otel/semconv/v1.13.0/httpconv/README.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/go.opentelemetry.io/otel/semconv/v1.13.0/httpconv/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 188
} |
# Semconv v1.13.0 NET conv
[PkgGoDev](https://pkg.go.dev/go.opentelemetry.io/otel/semconv/v1.13.0/netconv) | {
"source": "yandex/perforator",
"title": "vendor/go.opentelemetry.io/otel/semconv/v1.13.0/netconv/README.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/go.opentelemetry.io/otel/semconv/v1.13.0/netconv/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 185
} |
# Semconv v1.14.0 HTTP conv
[PkgGoDev](https://pkg.go.dev/go.opentelemetry.io/otel/semconv/v1.14.0/httpconv) | {
"source": "yandex/perforator",
"title": "vendor/go.opentelemetry.io/otel/semconv/v1.14.0/httpconv/README.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/go.opentelemetry.io/otel/semconv/v1.14.0/httpconv/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 188
} |
# Semconv v1.14.0 NET conv
[PkgGoDev](https://pkg.go.dev/go.opentelemetry.io/otel/semconv/v1.14.0/netconv) | {
"source": "yandex/perforator",
"title": "vendor/go.opentelemetry.io/otel/semconv/v1.14.0/netconv/README.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/go.opentelemetry.io/otel/semconv/v1.14.0/netconv/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 185
} |
# Semconv v1.15.0 HTTP conv
[PkgGoDev](https://pkg.go.dev/go.opentelemetry.io/otel/semconv/v1.15.0/httpconv) | {
"source": "yandex/perforator",
"title": "vendor/go.opentelemetry.io/otel/semconv/v1.15.0/httpconv/README.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/go.opentelemetry.io/otel/semconv/v1.15.0/httpconv/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 188
} |
# Semconv v1.15.0 NET conv
[PkgGoDev](https://pkg.go.dev/go.opentelemetry.io/otel/semconv/v1.15.0/netconv) | {
"source": "yandex/perforator",
"title": "vendor/go.opentelemetry.io/otel/semconv/v1.15.0/netconv/README.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/go.opentelemetry.io/otel/semconv/v1.15.0/netconv/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 185
} |
# Semconv v1.16.0 HTTP conv
[PkgGoDev](https://pkg.go.dev/go.opentelemetry.io/otel/semconv/v1.16.0/httpconv) | {
"source": "yandex/perforator",
"title": "vendor/go.opentelemetry.io/otel/semconv/v1.16.0/httpconv/README.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/go.opentelemetry.io/otel/semconv/v1.16.0/httpconv/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 188
} |
# Semconv v1.16.0 NET conv
[PkgGoDev](https://pkg.go.dev/go.opentelemetry.io/otel/semconv/v1.16.0/netconv) | {
"source": "yandex/perforator",
"title": "vendor/go.opentelemetry.io/otel/semconv/v1.16.0/netconv/README.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/go.opentelemetry.io/otel/semconv/v1.16.0/netconv/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 185
} |
# Semconv v1.17.0 HTTP conv
[PkgGoDev](https://pkg.go.dev/go.opentelemetry.io/otel/semconv/v1.17.0/httpconv) | {
"source": "yandex/perforator",
"title": "vendor/go.opentelemetry.io/otel/semconv/v1.17.0/httpconv/README.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/go.opentelemetry.io/otel/semconv/v1.17.0/httpconv/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 188
} |
# Semconv v1.17.0 NET conv
[PkgGoDev](https://pkg.go.dev/go.opentelemetry.io/otel/semconv/v1.17.0/netconv) | {
"source": "yandex/perforator",
"title": "vendor/go.opentelemetry.io/otel/semconv/v1.17.0/netconv/README.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/go.opentelemetry.io/otel/semconv/v1.17.0/netconv/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 185
} |
# Semconv v1.18.0 HTTP conv
[PkgGoDev](https://pkg.go.dev/go.opentelemetry.io/otel/semconv/v1.18.0/httpconv) | {
"source": "yandex/perforator",
"title": "vendor/go.opentelemetry.io/otel/semconv/v1.18.0/httpconv/README.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/go.opentelemetry.io/otel/semconv/v1.18.0/httpconv/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 188
} |
# Semconv v1.18.0 NET conv
[PkgGoDev](https://pkg.go.dev/go.opentelemetry.io/otel/semconv/v1.18.0/netconv) | {
"source": "yandex/perforator",
"title": "vendor/go.opentelemetry.io/otel/semconv/v1.18.0/netconv/README.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/go.opentelemetry.io/otel/semconv/v1.18.0/netconv/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 185
} |
# Semconv v1.19.0 HTTP conv
[PkgGoDev](https://pkg.go.dev/go.opentelemetry.io/otel/semconv/v1.19.0/httpconv) | {
"source": "yandex/perforator",
"title": "vendor/go.opentelemetry.io/otel/semconv/v1.19.0/httpconv/README.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/go.opentelemetry.io/otel/semconv/v1.19.0/httpconv/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 188
} |
# Semconv v1.19.0 NET conv
[PkgGoDev](https://pkg.go.dev/go.opentelemetry.io/otel/semconv/v1.19.0/netconv) | {
"source": "yandex/perforator",
"title": "vendor/go.opentelemetry.io/otel/semconv/v1.19.0/netconv/README.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/go.opentelemetry.io/otel/semconv/v1.19.0/netconv/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 185
} |
# Semconv v1.20.0 HTTP conv
[PkgGoDev](https://pkg.go.dev/go.opentelemetry.io/otel/semconv/v1.20.0/httpconv) | {
"source": "yandex/perforator",
"title": "vendor/go.opentelemetry.io/otel/semconv/v1.20.0/httpconv/README.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/go.opentelemetry.io/otel/semconv/v1.20.0/httpconv/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 188
} |
# Semconv v1.20.0 NET conv
[PkgGoDev](https://pkg.go.dev/go.opentelemetry.io/otel/semconv/v1.20.0/netconv) | {
"source": "yandex/perforator",
"title": "vendor/go.opentelemetry.io/otel/semconv/v1.20.0/netconv/README.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/go.opentelemetry.io/otel/semconv/v1.20.0/netconv/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 185
} |
# h2i
**h2i** is an interactive HTTP/2 ("h2") console debugger. Miss the good ol'
days of telnetting to your HTTP/1.n servers? We're bringing you
back.
Features:
- send raw HTTP/2 frames
  - PING
  - SETTINGS
  - HEADERS
  - etc.
- type in HTTP/1.n and have it auto-HPACK/frame-ify it for HTTP/2
- pretty print all received HTTP/2 frames from the peer (including HPACK decoding)
- tab completion of commands, options
Not yet features, but soon:
- unnecessary CONTINUATION frames on short boundaries, to test peer implementations
- request bodies (DATA frames)
- send invalid frames for testing server implementations (supported by underlying Framer)
Later:
- act like a server
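If you want to script what h2i does interactively, the underlying `Framer` from golang.org/x/net/http2 is enough. Below is a minimal sketch (not h2i's actual code; the hostname is a placeholder) that negotiates h2 and sends a PING like the demo below:
```go
package main

import (
	"crypto/tls"
	"fmt"
	"log"

	"golang.org/x/net/http2"
)

func main() {
	// Dial with ALPN "h2", as h2i does.
	conn, err := tls.Dial("tcp", "example.com:443", &tls.Config{
		ServerName: "example.com",
		NextProtos: []string{"h2"},
	})
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// The client connection preface must precede any frames.
	if _, err := conn.Write([]byte(http2.ClientPreface)); err != nil {
		log.Fatal(err)
	}

	fr := http2.NewFramer(conn, conn)
	// An empty SETTINGS frame opens the HTTP/2 connection.
	if err := fr.WriteSettings(); err != nil {
		log.Fatal(err)
	}
	// Send a PING with 8 bytes of opaque data, like "h2i> PING h2iSayHI".
	if err := fr.WritePing(false, [8]byte{'h', '2', 'i', 'S', 'a', 'y', 'H', 'I'}); err != nil {
		log.Fatal(err)
	}
	// Read frames until the peer acknowledges the PING.
	for {
		f, err := fr.ReadFrame()
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%v\n", f)
		if p, ok := f.(*http2.PingFrame); ok && p.IsAck() {
			return
		}
	}
}
```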
## Installation
```
$ go install golang.org/x/net/http2/h2i@latest
$ h2i <host>
```
## Demo
```
$ h2i
Usage: h2i <hostname>
-insecure
Whether to skip TLS cert validation
-nextproto string
Comma-separated list of NPN/ALPN protocol names to negotiate. (default "h2,h2-14")
$ h2i google.com
Connecting to google.com:443 ...
Connected to 74.125.224.41:443
Negotiated protocol "h2-14"
[FrameHeader SETTINGS len=18]
[MAX_CONCURRENT_STREAMS = 100]
[INITIAL_WINDOW_SIZE = 1048576]
[MAX_FRAME_SIZE = 16384]
[FrameHeader WINDOW_UPDATE len=4]
Window-Increment = 983041
h2i> PING h2iSayHI
[FrameHeader PING flags=ACK len=8]
Data = "h2iSayHI"
h2i> headers
(as HTTP/1.1)> GET / HTTP/1.1
(as HTTP/1.1)> Host: ip.appspot.com
(as HTTP/1.1)> User-Agent: h2i/brad-n-blake
(as HTTP/1.1)>
Opening Stream-ID 1:
:authority = ip.appspot.com
:method = GET
:path = /
:scheme = https
user-agent = h2i/brad-n-blake
[FrameHeader HEADERS flags=END_HEADERS stream=1 len=77]
:status = "200"
alternate-protocol = "443:quic,p=1"
content-length = "15"
content-type = "text/html"
date = "Fri, 01 May 2015 23:06:56 GMT"
server = "Google Frontend"
[FrameHeader DATA flags=END_STREAM stream=1 len=15]
"173.164.155.78\n"
[FrameHeader PING len=8]
Data = "\x00\x00\x00\x00\x00\x00\x00\x00"
h2i> ping
[FrameHeader PING flags=ACK len=8]
Data = "h2i_ping"
h2i> ping
[FrameHeader PING flags=ACK len=8]
Data = "h2i_ping"
h2i> ping
[FrameHeader GOAWAY len=22]
Last-Stream-ID = 1; Error-Code = PROTOCOL_ERROR (1)
ReadFrame: EOF
```
## Status
Quick few hour hack. So much yet to do. Feel free to file issues for
bugs or wishlist items, but [@bmizerany](https://github.com/bmizerany/)
and I aren't yet accepting pull requests until things settle down. | {
"source": "yandex/perforator",
"title": "vendor/golang.org/x/net/http2/h2i/README.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/golang.org/x/net/http2/h2i/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 2458
} |
# Clock
This package provides an interface for time-based operations. It allows
mocking time for testing.
This is a copy of k8s.io/utils/clock. We have to copy it to avoid a circular
dependency (k8s.io/klog -> k8s.io/utils -> k8s.io/klog). | {
"source": "yandex/perforator",
"title": "vendor/k8s.io/klog/v2/internal/clock/README.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/k8s.io/klog/v2/internal/clock/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 242
} |
We have been working inside Google on a fork of Go that uses
BoringCrypto (the core of [BoringSSL](https://boringssl.googlesource.com/boringssl/))
for various crypto primitives, in furtherance of some work related to FIPS 140.
We have heard that some external users of Go would be
interested in this code as well, so we have published this code
here in the main Go repository behind the setting GOEXPERIMENT=boringcrypto.
Use of GOEXPERIMENT=boringcrypto outside Google is _unsupported_.
This mode is not part of the [Go 1 compatibility rules](https://go.dev/doc/go1compat),
and it may change incompatibly or break in other ways at any time.
To be clear, we are not making any statements or representations about
the suitability of this code in relation to the FIPS 140 standard.
Interested users will have to evaluate for themselves whether the code
is useful for their own purposes.
---
This directory holds the core of the BoringCrypto implementation
as well as the build scripts for the module itself: syso/*.syso.
syso/goboringcrypto_linux_amd64.syso is built with:

    GOARCH=amd64 ./build.sh

syso/goboringcrypto_linux_arm64.syso is built with:

    GOARCH=arm64 ./build.sh

Both run on an x86 Debian Linux system using Docker.
For the arm64 build to run on an x86 system, you need

    apt-get install qemu-user-static qemu-binfmt-support

to allow the x86 kernel to run arm64 binaries via QEMU.
See build.sh for more details about the build. | {
"source": "yandex/perforator",
"title": "contrib/go/_std_1.22/src/crypto/internal/boring/README.md",
"url": "https://github.com/yandex/perforator/blob/main/contrib/go/_std_1.22/src/crypto/internal/boring/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 1448
} |
The `dnsinfo.h` header was extracted from Apple's OpenSource repository:
[https://opensource.apple.com/source/configd/configd-453.19/dnsinfo/dnsinfo.h](https://opensource.apple.com/source/configd/configd-453.19/dnsinfo/dnsinfo.h)
We then had to make a few edits to this file:
1. Add the `AvailabilityMacros.h` header file.
2. Conditionalize `reach_flags` in `dns_resolver_t` on MacOS 10.8 or higher, in
order to maintain compatibility with the last MacOS PPC release, 10.6.
3. Conditionalize `_dns_configuration_ack()` on MacOS 10.8 or higher.
4. Update the parameter list to `(void)` for both `dns_configuration_notify_key()`
and `dns_configuration_copy()` to sidestep compiler warnings in this old
header.
We initially tried to use the latest 1109.140.1, which only worked on
MacOS 11+, then downgraded to 963.50.8 for MacOS 10.8+ support, and finally
to 453.19 with additional patches.
This is needed to call into `dns_configuration_copy()` and
`dns_configuration_free()`. | {
"source": "yandex/perforator",
"title": "contrib/libs/c-ares/src/lib/thirdparty/apple/README.md",
"url": "https://github.com/yandex/perforator/blob/main/contrib/libs/c-ares/src/lib/thirdparty/apple/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 982
} |
# Welcome to `include/grpc/impl/codegen`
## Why is this directory here?
This directory exists so that generated C++ code can include selected files upon
which it depends without having to depend on the entire gRPC C++ library. This
directory thus exists to support `include/grpcpp/impl/codegen`. This constraint
is particularly relevant for users of bazel, particularly if they use the
multi-lingual `proto_library` target type. Generated code that uses this target
only depends on the gRPC C++ targets associated with these header files, not the
entire gRPC C++ codebase since that would make the build time of these types of
targets excessively large (particularly when they are not even C++ specific).
## What should user code do?
User code should *not* include anything from this directory. Only generated code
and gRPC library code should include contents from this directory. C++ user code
should instead include contents from the main `grpcpp` directory or its
accessible subcomponents like `grpcpp/support`. It is possible that we may
remove this directory altogether if the motivations for its existence are no
longer strong enough (e.g., if the gRPC C++ library no longer has a need for an
`impl/codegen` directory of its own). | {
"source": "yandex/perforator",
"title": "contrib/libs/grpc/include/grpc/impl/codegen/README.md",
"url": "https://github.com/yandex/perforator/blob/main/contrib/libs/grpc/include/grpc/impl/codegen/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 1241
} |
# Welcome to `include/grpcpp/impl/codegen`
## Why is this directory here?
This directory exists so that generated code can include selected files upon
which it depends without having to depend on the entire gRPC C++ library. This
is particularly relevant for users of bazel, particularly if they use the
multi-lingual `proto_library` target type. Generated code that uses this target
only depends on the gRPC C++ targets associated with these header files, not the
entire gRPC C++ codebase since that would make the build time of these types of
targets excessively large (particularly when they are not even C++ specific).
## What should user code do?
User code should *not* include anything from this directory. Only generated code
and gRPC library code should include contents from this directory. User code
should instead include contents from the main `grpcpp` directory or its
accessible subcomponents like `grpcpp/support`. It is possible that we may
remove this directory altogether if the motivations for its existence are no
longer strong enough (e.g., if most users migrate away from the `proto_library`
target type or if the additional overhead of depending on gRPC C++ is not high). | {
"source": "yandex/perforator",
"title": "contrib/libs/grpc/include/grpcpp/impl/codegen/README.md",
"url": "https://github.com/yandex/perforator/blob/main/contrib/libs/grpc/include/grpcpp/impl/codegen/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 1198
} |
# Channel
Provides channel/call stack implementation, and implementation of common filters
for that implementation. | {
"source": "yandex/perforator",
"title": "contrib/libs/grpc/src/core/lib/channel/README.md",
"url": "https://github.com/yandex/perforator/blob/main/contrib/libs/grpc/src/core/lib/channel/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 116
} |
# GPR - Google Portable Runtime for C
The files in this directory contain basic utility code and platform
abstractions for C code. None of this code is gRPC-specific; anything
here may also be useful for other open source projects written in C.
Note that this is one of the few places in src/core where we allow
the use of portability macros. | {
"source": "yandex/perforator",
"title": "contrib/libs/grpc/src/core/lib/gpr/README.md",
"url": "https://github.com/yandex/perforator/blob/main/contrib/libs/grpc/src/core/lib/gpr/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 345
} |
# GPR++ - Google Portable Runtime for C++
The files in this directory contain various utility code for C++ code.
None of this code is gRPC-specific; anything here may also be useful
for other open source projects written in C++.
Note that this is one of the few places in src/core where we allow
the use of portability macros. | {
"source": "yandex/perforator",
"title": "contrib/libs/grpc/src/core/lib/gprpp/README.md",
"url": "https://github.com/yandex/perforator/blob/main/contrib/libs/grpc/src/core/lib/gprpp/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 328
} |
# iomgr
Platform abstractions for I/O (mostly network).
Provides abstractions over TCP/UDP I/O, file loading, polling, and concurrency
management for various operating systems. | {
"source": "yandex/perforator",
"title": "contrib/libs/grpc/src/core/lib/iomgr/README.md",
"url": "https://github.com/yandex/perforator/blob/main/contrib/libs/grpc/src/core/lib/iomgr/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 178
} |
# Surface
Surface provides the bulk of the gRPC Core public API, and translates it into
calls against core components. | {
"source": "yandex/perforator",
"title": "contrib/libs/grpc/src/core/lib/surface/README.md",
"url": "https://github.com/yandex/perforator/blob/main/contrib/libs/grpc/src/core/lib/surface/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 119
} |
# Transport
Common implementation details for gRPC Transports.
Transports multiplex messages across a single connection. In ext/ there are
implementations atop [a custom http2 implementation](/src/core/ext/transport/chttp2/README.md)
and atop [cronet](/src/core/ext/transport/cronet/README.md). | {
"source": "yandex/perforator",
"title": "contrib/libs/grpc/src/core/lib/transport/README.md",
"url": "https://github.com/yandex/perforator/blob/main/contrib/libs/grpc/src/core/lib/transport/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 299
} |
# Legacy API type versions
This package includes types for legacy API versions. The stable version of the API types live in `api/types/*.go`.
Consider moving a type here when you need to keep backwards compatibility in the API. These legacy types are organized by the latest API version they appear in. For instance, types in the `v1p19` package are valid for API versions below or equal to `1.19`. Types in the `v1p20` package are valid for the API version `1.20`, since the versions below that will use the legacy types in `v1p19`.
## Package name conventions
The package name convention is to use `v` as a prefix for the version number and `p` (patch) as a separator. We use this nomenclature due to a few restrictions in the Go package name convention:
1. We cannot use `.` because it's interpreted by the language; think of `v1.20.CallFunction`.
2. We cannot use `_` because golint complains about it. The code is actually valid, but it arguably looks more awkward: `v1_20.CallFunction`.
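As a hypothetical sketch of how a caller might select a type package by API version, using the comparison helpers from the parent `versions` package (the package names mirror the convention above):
```go
package main

import (
	"fmt"

	"github.com/docker/docker/api/types/versions"
)

func main() {
	apiVersion := "1.19"
	// Requests at or below 1.19 get the legacy v1p19 types, 1.20 gets
	// the v1p20 types, and anything newer gets the stable api/types.
	switch {
	case versions.LessThanOrEqualTo(apiVersion, "1.19"):
		fmt.Println("use types from api/types/versions/v1p19")
	case versions.LessThanOrEqualTo(apiVersion, "1.20"):
		fmt.Println("use types from api/types/versions/v1p20")
	default:
		fmt.Println("use stable types from api/types")
	}
}
```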
For instance, if you want to modify a type that was available in version `1.21` of the API but will have different fields in version `1.22`, create a new package under `api/types/versions/v1p21`. | {
"source": "yandex/perforator",
"title": "vendor/github.com/docker/docker/api/types/versions/README.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/github.com/docker/docker/api/types/versions/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 1210
} |
# migrate CLI
## Installation
### Download pre-built binary (Windows, MacOS, or Linux)
[Release Downloads](https://github.com/golang-migrate/migrate/releases)
```bash
$ curl -L https://github.com/golang-migrate/migrate/releases/download/$version/migrate.$platform-amd64.tar.gz | tar xvz
```
### MacOS
```bash
$ brew install golang-migrate
```
### Windows
Using [scoop](https://scoop.sh/)
```bash
$ scoop install migrate
```
### Linux (*.deb package)
```bash
$ curl -L https://packagecloud.io/golang-migrate/migrate/gpgkey | apt-key add -
$ echo "deb https://packagecloud.io/golang-migrate/migrate/ubuntu/ $(lsb_release -sc) main" > /etc/apt/sources.list.d/migrate.list
$ apt-get update
$ apt-get install -y migrate
```
### With Go toolchain
#### Versioned
```bash
$ go get -u -d github.com/golang-migrate/migrate/cmd/migrate
$ cd $GOPATH/src/github.com/golang-migrate/migrate/cmd/migrate
$ git checkout $TAG # e.g. v4.1.0
$ # Go 1.15 and below
$ go build -tags 'postgres' -ldflags="-X main.Version=$(git describe --tags)" -o $GOPATH/bin/migrate $GOPATH/src/github.com/golang-migrate/migrate/cmd/migrate
$ # Go 1.16+
$ go install -tags 'postgres' github.com/golang-migrate/migrate/v4/cmd/migrate@$TAG
```
#### Unversioned
```bash
$ # Go 1.15 and below
$ go get -tags 'postgres' -u github.com/golang-migrate/migrate/cmd/migrate
$ # Go 1.16+
$ go install -tags 'postgres' github.com/golang-migrate/migrate/v4/cmd/migrate@latest
```
#### Notes
1. Requires a version of Go that [supports modules](https://golang.org/cmd/go/#hdr-Preliminary_module_support). e.g. Go 1.11+
1. These examples build the cli which will only work with postgres. In order
to build the cli for use with other databases, replace the `postgres` build tag
with the appropriate database tag(s) for the databases desired. The tags
correspond to the names of the sub-packages underneath the
[`database`](../database) package.
1. Similarly to the database build tags, if you need to support other sources, use the appropriate build tag(s).
1. Support for build constraints will be removed in the future: https://github.com/golang-migrate/migrate/issues/60
1. For versions of Go 1.15 and lower, [make sure](https://github.com/golang-migrate/migrate/pull/257#issuecomment-705249902) you're not installing the `migrate` CLI from a module. e.g. there should not be any `go.mod` files in your current directory or any directory from your current directory to the root
## Usage
```bash
$ migrate -help
Usage: migrate OPTIONS COMMAND [arg...]
migrate [ -version | -help ]
Options:
-source Location of the migrations (driver://url)
-path Shorthand for -source=file://path
-database Run migrations against this database (driver://url)
-prefetch N Number of migrations to load in advance before executing (default 10)
-lock-timeout N Allow N seconds to acquire database lock (default 15)
-verbose Print verbose logging
-version Print version
-help Print usage
Commands:
create [-ext E] [-dir D] [-seq] [-digits N] [-format] NAME
Create a set of timestamped up/down migrations titled NAME, in directory D with extension E.
Use -seq option to generate sequential up/down migrations with N digits.
Use -format option to specify a Go time format string.
goto V Migrate to version V
up [N] Apply all or N up migrations
down [N] Apply all or N down migrations
drop Drop everything inside database
force V Set version V but don't run migration (ignores dirty state)
version Print current migration version
```
So let's say you want to run the first two migrations:
```bash
$ migrate -source file://path/to/migrations -database postgres://localhost:5432/database up 2
```
If your migrations are hosted on GitHub:
```bash
$ migrate -source github://mattes:personal-access-token@mattes/migrate_test \
-database postgres://localhost:5432/database down 2
```
The CLI will gracefully stop at a safe point when SIGINT (ctrl+c) is received.
Send SIGKILL for immediate halt.
## Reading CLI arguments from somewhere else
### ENV variables
```bash
$ migrate -database "$MY_MIGRATE_DATABASE"
```
### JSON files
Check out https://stedolan.github.io/jq/
```bash
$ migrate -database "$(cat config.json | jq '.database')"
```
### YAML files
```bash
$ migrate -database "$(cat config/database.yml | ruby -ryaml -e "print YAML.load(STDIN.read)['database']")"
$ migrate -database "$(cat config/database.yml | python -c 'import yaml,sys;print yaml.safe_load(sys.stdin)["database"]')"
``` | {
"source": "yandex/perforator",
"title": "vendor/github.com/golang-migrate/migrate/v4/cmd/migrate/README.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/github.com/golang-migrate/migrate/v4/cmd/migrate/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 4629
} |
# Cassandra
* The Drop command will not work on Cassandra 2.X because it relies on the
system_schema table, which comes with 3.X
* Other commands should work properly but are **not tested**
* The Cassandra driver (gocql) does not natively support executing multiple statements in a single query. To allow for multiple statements in a single migration, you can use the `x-multi-statement` param. There are two important caveats:
* This mode splits the migration text into separately-executed statements by a semi-colon `;`. Thus `x-multi-statement` cannot be used when a statement in the migration contains a string with a semi-colon.
* The queries are not executed in any sort of transaction/batch, meaning you are responsible for fixing partial migrations.
## Usage
`cassandra://host:port/keyspace?param1=value&param2=value2`
| URL Query | Default value | Description |
|------------|-------------|-----------|
| `x-migrations-table` | schema_migrations | Name of the migrations table |
| `x-multi-statement` | false | Enable multiple statements to be run in a single migration (See note above) |
| `port` | 9042 | The port to bind to |
| `consistency` | ALL | Migration consistency |
| `protocol` | | Cassandra protocol version (3 or 4) |
| `timeout` | 1 minute | Migration timeout |
| `connect-timeout` | 600ms | Initial connection timeout to the cluster |
| `username` | nil | Username to use when authenticating. |
| `password` | nil | Password to use when authenticating. |
| `sslcert` | | Cert file location. The file must contain PEM encoded data. |
| `sslkey` | | Key file location. The file must contain PEM encoded data. |
| `sslrootcert` | | The location of the root certificate file. The file must contain PEM encoded data. |
| `sslmode` | | Whether or not to use SSL (disable\|require\|verify-ca\|verify-full) |
| `disable-host-lookup`| false | Disable initial host lookup. |
`timeout` is parsed using [time.ParseDuration(s string)](https://golang.org/pkg/time/#ParseDuration)
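A minimal Go sketch of applying migrations with the parameters above (host, keyspace, and the `migrations` directory are placeholders):
```go
package main

import (
	"log"

	"github.com/golang-migrate/migrate/v4"
	_ "github.com/golang-migrate/migrate/v4/database/cassandra"
	_ "github.com/golang-migrate/migrate/v4/source/file"
)

func main() {
	// x-multi-statement splits statements on ";" (see the caveats above).
	m, err := migrate.New(
		"file://migrations",
		"cassandra://localhost:9042/my_keyspace?x-multi-statement=true&timeout=2m",
	)
	if err != nil {
		log.Fatal(err)
	}
	if err := m.Up(); err != nil && err != migrate.ErrNoChange {
		log.Fatal(err)
	}
}
```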
## Upgrading from v1
1. Write down the current migration version from schema_migrations
2. `DROP TABLE schema_migrations`
3. Download and install the latest migrate version.
4. Force the current migration version with `migrate force <current_version>`. | {
"source": "yandex/perforator",
"title": "vendor/github.com/golang-migrate/migrate/v4/database/cassandra/README.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/github.com/golang-migrate/migrate/v4/database/cassandra/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 2242
} |
# ClickHouse
`clickhouse://host:port?username=user&password=password&database=clicks&x-multi-statement=true`
| URL Query | Description |
|------------|-------------|
| `x-migrations-table`| Name of the migrations table |
| `x-migrations-table-engine`| Engine to use for the migrations table, defaults to TinyLog |
| `x-cluster-name` | Name of cluster for creating `schema_migrations` table cluster wide |
| `database` | The name of the database to connect to |
| `username` | The user to sign in as |
| `password` | The user's password |
| `host` | The host to connect to. |
| `port` | The port to bind to. |
| `x-multi-statement` | Enable multiple statements to be run in a single migration; defaults to `false` (See note below) |
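A minimal Go sketch of running migrations with these parameters (credentials and paths are placeholders; `x-migrations-table-engine=MergeTree` is set per the backup note below):
```go
package main

import (
	"log"

	"github.com/golang-migrate/migrate/v4"
	_ "github.com/golang-migrate/migrate/v4/database/clickhouse"
	_ "github.com/golang-migrate/migrate/v4/source/file"
)

func main() {
	m, err := migrate.New(
		"file://migrations",
		"clickhouse://localhost:9000?username=user&password=password&database=clicks&x-multi-statement=true&x-migrations-table-engine=MergeTree",
	)
	if err != nil {
		log.Fatal(err)
	}
	if err := m.Up(); err != nil && err != migrate.ErrNoChange {
		log.Fatal(err)
	}
}
```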
## Notes
* The ClickHouse driver does not natively support executing multiple statements in a single query. To allow for multiple statements in a single migration, you can use the `x-multi-statement` param. There are two important caveats:
* This mode splits the migration text into separately-executed statements by a semi-colon `;`. Thus `x-multi-statement` cannot be used when a statement in the migration contains a string with a semi-colon.
* The queries are not executed in any sort of transaction/batch, meaning you are responsible for fixing partial migrations.
* Using the default TinyLog table engine for the schema_versions table prevents backing up the table if using the [clickhouse-backup](https://github.com/AlexAkulov/clickhouse-backup) tool. If you are backing up the database, make sure your migrations are run with `x-migrations-table-engine=MergeTree`.
* Clickhouse cluster mode is not officially supported, since it's not tested right now, but you can try enabling `schema_migrations` table replication by specifying a `x-cluster-name`:
* When `x-cluster-name` is specified, `x-migrations-table-engine` also should be specified. See the docs regarding [replicated table engines](https://clickhouse.tech/docs/en/engines/table-engines/mergetree-family/replication/#table_engines-replication).
* When `x-cluster-name` is specified, only the `schema_migrations` table is replicated across the cluster. You still need to write your migrations so that the application tables are replicated within the cluster. | {
"source": "yandex/perforator",
"title": "vendor/github.com/golang-migrate/migrate/v4/database/clickhouse/README.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/github.com/golang-migrate/migrate/v4/database/clickhouse/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 2236
} |
# cockroachdb
`cockroachdb://user:password@host:port/dbname?query` (`cockroach://`, and `crdb-postgres://` work, too)
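A minimal Go sketch using this DSN (all values are placeholders; 26257 is CockroachDB's default port):
```go
package main

import (
	"log"

	"github.com/golang-migrate/migrate/v4"
	_ "github.com/golang-migrate/migrate/v4/database/cockroachdb"
	_ "github.com/golang-migrate/migrate/v4/source/file"
)

func main() {
	m, err := migrate.New(
		"file://migrations",
		"cockroachdb://user:password@localhost:26257/dbname?sslmode=disable",
	)
	if err != nil {
		log.Fatal(err)
	}
	if err := m.Up(); err != nil && err != migrate.ErrNoChange {
		log.Fatal(err)
	}
}
```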
| URL Query | WithInstance Config | Description |
|------------|---------------------|-------------|
| `x-migrations-table` | `MigrationsTable` | Name of the migrations table |
| `x-lock-table` | `LockTable` | Name of the table which maintains the migration lock |
| `x-force-lock` | `ForceLock` | Force lock acquisition to fix faulty migrations which may not have released the schema lock (Boolean, default is `false`) |
| `dbname` | `DatabaseName` | The name of the database to connect to |
| `user` | | The user to sign in as |
| `password` | | The user's password |
| `host` | | The host to connect to. Values that start with / are for unix domain sockets. (default is localhost) |
| `port` | | The port to bind to. (default is 5432) |
| `connect_timeout` | | Maximum wait for connection, in seconds. Zero or not specified means wait indefinitely. |
| `sslcert` | | Cert file location. The file must contain PEM encoded data. |
| `sslkey` | | Key file location. The file must contain PEM encoded data. |
| `sslrootcert` | | The location of the root certificate file. The file must contain PEM encoded data. |
| `sslmode` | | Whether or not to use SSL (disable\|require\|verify-ca\|verify-full) | | {
"source": "yandex/perforator",
"title": "vendor/github.com/golang-migrate/migrate/v4/database/cockroachdb/README.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/github.com/golang-migrate/migrate/v4/database/cockroachdb/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 1320
} |
# MongoDB
* The driver works with MongoDB through [db.runCommand](https://docs.mongodb.com/manual/reference/command/)
* Migrations are in JSON format and contain an array of commands for `db.runCommand`. Every command is executed in a separate request to the database
* All keys have to be in quotes `"`
* [Examples](./examples)
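A minimal Go sketch of applying such JSON migrations (connection string values are placeholders; the query parameters are described under Usage below):
```go
package main

import (
	"log"

	"github.com/golang-migrate/migrate/v4"
	_ "github.com/golang-migrate/migrate/v4/database/mongodb"
	_ "github.com/golang-migrate/migrate/v4/source/file"
)

func main() {
	// A migration file such as 1_create_index.up.json holds an array of
	// commands for db.runCommand, e.g.:
	//   [{ "createIndexes": "users", "indexes": [ ... ] }]
	m, err := migrate.New(
		"file://migrations",
		"mongodb://user:password@localhost:27017/mydb?x-transaction-mode=true",
	)
	if err != nil {
		log.Fatal(err)
	}
	if err := m.Up(); err != nil && err != migrate.ErrNoChange {
		log.Fatal(err)
	}
}
```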
# Usage
`mongodb://user:password@host:port/dbname?query` (`mongodb+srv://` also works, but behaves a bit differently. See [docs](https://docs.mongodb.com/manual/reference/connection-string/#dns-seedlist-connection-format) for more information)
| URL Query | WithInstance Config | Description |
|------------|---------------------|-------------|
| `x-migrations-collection` | `MigrationsCollection` | Name of the migrations collection |
| `x-transaction-mode` | `TransactionMode` | If set to `true` wrap commands in [transaction](https://docs.mongodb.com/manual/core/transactions). Available only for replica set. Driver is using [strconv.ParseBool](https://golang.org/pkg/strconv/#ParseBool) for parsing|
| `x-advisory-locking` | `true` | Feature flag for advisory locking; if set to `false`, advisory locking is disabled |
| `x-advisory-lock-collection` | `migrate_advisory_lock` | The name of the collection to use for advisory locking.|
| `x-advisory-lock-timeout` | `15` | The max time in seconds that migrate will wait to acquire a lock before failing. |
| `x-advisory-lock-timeout-interval` | `10` | The max time in seconds between attempts to acquire the advisory lock; the lock is acquired using an exponential backoff algorithm. |
| `dbname` | `DatabaseName` | The name of the database to connect to |
| `user` | | The user to sign in as. Can be omitted |
| `password` | | The user's password. Can be omitted |
| `host` | | The host to connect to |
| `port` | | The port to bind to | | {
"source": "yandex/perforator",
"title": "vendor/github.com/golang-migrate/migrate/v4/database/mongodb/README.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/github.com/golang-migrate/migrate/v4/database/mongodb/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 1825
} |
# MySQL
`mysql://user:password@tcp(host:port)/dbname?query`
| URL Query | WithInstance Config | Description |
|------------|---------------------|-------------|
| `x-migrations-table` | `MigrationsTable` | Name of the migrations table |
| `x-no-lock` | `NoLock` | Set to `true` to skip `GET_LOCK`/`RELEASE_LOCK` statements. Useful for [multi-master MySQL flavors](https://www.percona.com/doc/percona-xtradb-cluster/LATEST/features/pxc-strict-mode.html#explicit-table-locking). Only run migrations from one host when this is enabled. |
| `dbname` | `DatabaseName` | The name of the database to connect to |
| `user` | | The user to sign in as |
| `password` | | The user's password |
| `host` | | The host to connect to. |
| `port` | | The port to bind to. |
| `tls` | | TLS / SSL encrypted connection parameter; see [go-sql-driver](https://github.com/go-sql-driver/mysql#tls). Use any name (e.g. `migrate`) if you want to use a custom TLS config (`x-tls-` queries). |
| `x-tls-ca` | | The location of the CA (certificate authority) file. |
| `x-tls-cert` | | The location of the client certificate file. Must be used with `x-tls-key`. |
| `x-tls-key` | | The location of the private key file. Must be used with `x-tls-cert`. |
| `x-tls-insecure-skip-verify` | | Whether to skip TLS server certificate verification (true\|false) |
## Use with existing client
If you use the MySQL driver with an existing database client, you must create the client with the parameter `multiStatements=true`:
```go
package main
import (
"database/sql"
_ "github.com/go-sql-driver/mysql"
"github.com/golang-migrate/migrate/v4"
"github.com/golang-migrate/migrate/v4/database/mysql"
_ "github.com/golang-migrate/migrate/v4/source/file"
)
func main() {
db, _ := sql.Open("mysql", "user:password@tcp(host:port)/dbname?multiStatements=true")
driver, _ := mysql.WithInstance(db, &mysql.Config{})
m, _ := migrate.NewWithDatabaseInstance(
"file:///migrations",
"mysql",
driver,
)
m.Steps(2)
}
```
## Upgrading from v1
1. Write down the current migration version from schema_migrations
2. `DROP TABLE schema_migrations`
3. Wrap your existing migrations in transactions ([BEGIN/COMMIT](https://dev.mysql.com/doc/refman/5.7/en/commit.html)) if you use multiple statements within one migration.
4. Download and install the latest migrate version.
5. Force the current migration version with `migrate force <current_version>`. | {
"source": "yandex/perforator",
"title": "vendor/github.com/golang-migrate/migrate/v4/database/mysql/README.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/github.com/golang-migrate/migrate/v4/database/mysql/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 2448
} |
# pgx
`pgx://user:password@host:port/dbname?query`
| URL Query | WithInstance Config | Description |
|------------|---------------------|-------------|
| `x-migrations-table` | `MigrationsTable` | Name of the migrations table |
| `x-migrations-table-quoted` | `MigrationsTableQuoted` | By default, migrate quotes the migration table for SQL injection safety reasons. This option disables quoting and naively checks that you have quoted the migration table name. e.g. `"my_schema"."schema_migrations"` |
| `x-statement-timeout` | `StatementTimeout` | Abort any statement that takes more than the specified number of milliseconds |
| `x-multi-statement` | `MultiStatementEnabled` | Enable multi-statement execution (default: false) |
| `x-multi-statement-max-size` | `MultiStatementMaxSize` | Maximum size of single statement in bytes (default: 10MB) |
| `dbname` | `DatabaseName` | The name of the database to connect to |
| `search_path` | | This variable specifies the order in which schemas are searched when an object is referenced by a simple name with no schema specified. |
| `user` | | The user to sign in as |
| `password` | | The user's password |
| `host` | | The host to connect to. Values that start with / are for unix domain sockets. (default is localhost) |
| `port` | | The port to bind to. (default is 5432) |
| `fallback_application_name` | | An application_name to fall back to if one isn't provided. |
| `connect_timeout` | | Maximum wait for connection, in seconds. Zero or not specified means wait indefinitely. |
| `sslcert` | | Cert file location. The file must contain PEM encoded data. |
| `sslkey` | | Key file location. The file must contain PEM encoded data. |
| `sslrootcert` | | The location of the root certificate file. The file must contain PEM encoded data. |
| `sslmode` | | Whether or not to use SSL (disable\|require\|verify-ca\|verify-full) |
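A minimal Go sketch using the pgx driver via a URL (all values are placeholders):
```go
package main

import (
	"log"

	"github.com/golang-migrate/migrate/v4"
	_ "github.com/golang-migrate/migrate/v4/database/pgx"
	_ "github.com/golang-migrate/migrate/v4/source/file"
)

func main() {
	// x-statement-timeout aborts statements running longer than 60s.
	m, err := migrate.New(
		"file://migrations",
		"pgx://user:password@localhost:5432/dbname?x-statement-timeout=60000",
	)
	if err != nil {
		log.Fatal(err)
	}
	if err := m.Up(); err != nil && err != migrate.ErrNoChange {
		log.Fatal(err)
	}
}
```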
## Upgrading from v1
1. Write down the current migration version from schema_migrations
2. `DROP TABLE schema_migrations`
3. Wrap your existing migrations in transactions ([BEGIN/COMMIT](https://www.postgresql.org/docs/current/static/transaction-iso.html)) if you use multiple statements within one migration.
4. Download and install the latest migrate version.
5. Force the current migration version with `migrate force <current_version>`.
## Multi-statement mode
In PostgreSQL, running multiple SQL statements in one `Exec` executes them inside a transaction. Sometimes this
behavior is not desirable because some statements can only be run outside of a transaction (e.g.
`CREATE INDEX CONCURRENTLY`). If you want to use `CREATE INDEX CONCURRENTLY` without activating multi-statement mode,
you have to put such statements in separate migration files. | {
"source": "yandex/perforator",
"title": "vendor/github.com/golang-migrate/migrate/v4/database/pgx/README.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/github.com/golang-migrate/migrate/v4/database/pgx/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 2739
} |
# postgres
`postgres://user:password@host:port/dbname?query` (`postgresql://` works, too)
| URL Query | WithInstance Config | Description |
|------------|---------------------|-------------|
| `x-migrations-table` | `MigrationsTable` | Name of the migrations table |
| `x-migrations-table-quoted` | `MigrationsTableQuoted` | By default, migrate quotes the migration table for SQL injection safety reasons. This option disables quoting and naively checks that you have quoted the migration table name. e.g. `"my_schema"."schema_migrations"` |
| `x-statement-timeout` | `StatementTimeout` | Abort any statement that takes more than the specified number of milliseconds |
| `x-multi-statement` | `MultiStatementEnabled` | Enable multi-statement execution (default: false) |
| `x-multi-statement-max-size` | `MultiStatementMaxSize` | Maximum size of single statement in bytes (default: 10MB) |
| `dbname` | `DatabaseName` | The name of the database to connect to |
| `search_path` | | This variable specifies the order in which schemas are searched when an object is referenced by a simple name with no schema specified. |
| `user` | | The user to sign in as |
| `password` | | The user's password |
| `host` | | The host to connect to. Values that start with / are for unix domain sockets. (default is localhost) |
| `port` | | The port to bind to. (default is 5432) |
| `fallback_application_name` | | An application_name to fall back to if one isn't provided. |
| `connect_timeout` | | Maximum wait for connection, in seconds. Zero or not specified means wait indefinitely. |
| `sslcert` | | Cert file location. The file must contain PEM encoded data. |
| `sslkey` | | Key file location. The file must contain PEM encoded data. |
| `sslrootcert` | | The location of the root certificate file. The file must contain PEM encoded data. |
| `sslmode` | | Whether or not to use SSL (disable\|require\|verify-ca\|verify-full) |
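If you already have an open `*sql.DB`, the driver can be constructed from it directly with `WithInstance`, mirroring the `Config` fields in the table above. A minimal sketch (the connection string is a placeholder):
```go
package main

import (
	"database/sql"
	"log"

	"github.com/golang-migrate/migrate/v4"
	"github.com/golang-migrate/migrate/v4/database/postgres"
	_ "github.com/golang-migrate/migrate/v4/source/file"
	_ "github.com/lib/pq"
)

func main() {
	db, err := sql.Open("postgres", "postgres://user:password@localhost:5432/dbname?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	driver, err := postgres.WithInstance(db, &postgres.Config{
		MigrationsTable: "schema_migrations", // same as x-migrations-table
	})
	if err != nil {
		log.Fatal(err)
	}
	m, err := migrate.NewWithDatabaseInstance("file://migrations", "postgres", driver)
	if err != nil {
		log.Fatal(err)
	}
	if err := m.Up(); err != nil && err != migrate.ErrNoChange {
		log.Fatal(err)
	}
}
```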
## Upgrading from v1
1. Write down the current migration version from schema_migrations
2. `DROP TABLE schema_migrations`
3. Wrap your existing migrations in transactions ([BEGIN/COMMIT](https://www.postgresql.org/docs/current/static/transaction-iso.html)) if you use multiple statements within one migration.
4. Download and install the latest migrate version.
5. Force the current migration version with `migrate force <current_version>`.
## Multi-statement mode
In PostgreSQL, running multiple SQL statements in one `Exec` executes them inside a transaction. Sometimes this
behavior is not desirable because some statements can only be run outside of a transaction (e.g.
`CREATE INDEX CONCURRENTLY`). If you want to use `CREATE INDEX CONCURRENTLY` without activating multi-statement mode,
you have to put such statements in separate migration files. | {
"source": "yandex/perforator",
"title": "vendor/github.com/golang-migrate/migrate/v4/database/postgres/README.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/github.com/golang-migrate/migrate/v4/database/postgres/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 2780
} |
# Redshift
`redshift://user:password@host:port/dbname?query`
| URL Query | WithInstance Config | Description |
|------------|---------------------|-------------|
| `x-migrations-table` | `MigrationsTable` | Name of the migrations table |
| `dbname` | `DatabaseName` | The name of the database to connect to |
| `search_path` | | This variable specifies the order in which schemas are searched when an object is referenced by a simple name with no schema specified. |
| `user` | | The user to sign in as |
| `password` | | The user's password |
| `host` | | The host to connect to. Values that start with / are for unix domain sockets. (default is localhost) |
| `port` | | The port to bind to. (default is 5439) |
| `fallback_application_name` | | An application_name to fall back to if one isn't provided. |
| `connect_timeout` | | Maximum wait for connection, in seconds. Zero or not specified means wait indefinitely. |
| `sslcert` | | Cert file location. The file must contain PEM encoded data. |
| `sslkey` | | Key file location. The file must contain PEM encoded data. |
| `sslrootcert` | | The location of the root certificate file. The file must contain PEM encoded data. |
| `sslmode` | | Whether or not to use SSL (disable\|require\|verify-ca\|verify-full) |
Redshift is PostgreSQL compatible but has some specific features (or lack thereof) that require slightly different behavior. | {
"source": "yandex/perforator",
"title": "vendor/github.com/golang-migrate/migrate/v4/database/redshift/README.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/github.com/golang-migrate/migrate/v4/database/redshift/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 1399
} |
# Google Cloud Spanner
## Usage
See [Google Spanner Documentation](https://cloud.google.com/spanner/docs) for
more details.
The DSN must be given in the following format, as described in [README.md#database-urls](../../README.md#database-urls):
`spanner://projects/{projectId}/instances/{instanceId}/databases/{databaseName}?param=true`
| Param | WithInstance Config | Description |
| ----- | ------------------- | ----------- |
| `x-migrations-table` | `MigrationsTable` | Name of the migrations table |
| `x-clean-statements` | `CleanStatements` | Whether to parse and clean DDL statements before running migration towards Spanner (Required for comments and multiple statements) |
| `url` | `DatabaseName` | The full path to the Spanner database resource. If provided as part of `Config` it must not contain a scheme or query string to match the format `projects/{projectId}/instances/{instanceId}/databases/{databaseName}`|
| `projectId` | | The Google Cloud Platform project id |
| `instanceId` | | The id of the instance running Spanner |
| `databaseName` | | The name of the Spanner database |
> **Note:** Google Cloud Spanner migrations can take a considerable amount of
> time. The migrations provided as part of the example take about 6 minutes to
> run on a small instance.
>
> ```log
> 1481574547/u create_users_table (21.354507597s)
> 1496539702/u add_city_to_users (41.647359754s)
> 1496601752/u add_index_on_user_emails (2m12.155787369s)
> 1496602638/u create_books_table (2m30.77299181s)
> ```
## DDL with comments
At the moment the GCP Spanner backend does not seem to allow comments (see https://issuetracker.google.com/issues/159730604),
so in order to use migrations with DDL containing comments, `x-clean-statements` is required
## Multiple statements
In order to use more than one DDL statement in the same migration file, the file has to be parsed, and therefore the `x-clean-statements` flag is required
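A minimal Go sketch with `x-clean-statements` enabled, as required above (project, instance, and database ids are placeholders):
```go
package main

import (
	"log"

	"github.com/golang-migrate/migrate/v4"
	_ "github.com/golang-migrate/migrate/v4/database/spanner"
	_ "github.com/golang-migrate/migrate/v4/source/file"
)

func main() {
	m, err := migrate.New(
		"file://migrations",
		"spanner://projects/my-project/instances/my-instance/databases/my-db?x-clean-statements=true",
	)
	if err != nil {
		log.Fatal(err)
	}
	if err := m.Up(); err != nil && err != migrate.ErrNoChange {
		log.Fatal(err)
	}
}
```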
## Testing
To unit test the `spanner` driver, `SPANNER_DATABASE` needs to be set. You'll
need to sign up for Google Cloud Platform (GCP) and have a running Spanner
instance, since it is not possible to run Google Spanner outside GCP. | {
"source": "yandex/perforator",
"title": "vendor/github.com/golang-migrate/migrate/v4/database/spanner/README.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/github.com/golang-migrate/migrate/v4/database/spanner/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 2176
} |
# sqlite
`sqlite://path/to/database?query`
Unlike other migrate database drivers, the sqlite driver will automatically wrap each migration in an implicit transaction by default. Migrations must not contain explicit `BEGIN` or `COMMIT` statements. This behavior may change in a future major release. (See below for a workaround.)
The auxiliary query parameters listed below may be supplied to tailor migrate behavior. All auxiliary query parameters are optional.
| URL Query | WithInstance Config | Description |
|------------|---------------------|-------------|
| `x-migrations-table` | `MigrationsTable` | Name of the migrations table. Defaults to `schema_migrations`. |
| `x-no-tx-wrap` | `NoTxWrap` | Disable implicit transactions when `true`. Migrations may, and should, contain explicit `BEGIN` and `COMMIT` statements. |
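A minimal Go sketch (the database path is a placeholder); migrations are transaction-wrapped by default, so they must not contain `BEGIN`/`COMMIT` unless `x-no-tx-wrap=true` is set:
```go
package main

import (
	"log"

	"github.com/golang-migrate/migrate/v4"
	_ "github.com/golang-migrate/migrate/v4/database/sqlite"
	_ "github.com/golang-migrate/migrate/v4/source/file"
)

func main() {
	m, err := migrate.New(
		"file://migrations",
		"sqlite://./app.db",
	)
	if err != nil {
		log.Fatal(err)
	}
	if err := m.Up(); err != nil && err != migrate.ErrNoChange {
		log.Fatal(err)
	}
}
```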
## Notes
* Uses the `modernc.org/sqlite` sqlite db driver (pure Go)
* Has [limited `GOOS` and `GOARCH` support](https://pkg.go.dev/modernc.org/sqlite?utm_source=godoc#hdr-Supported_platforms_and_architectures) | {
"source": "yandex/perforator",
"title": "vendor/github.com/golang-migrate/migrate/v4/database/sqlite/README.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/github.com/golang-migrate/migrate/v4/database/sqlite/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 1052
} |
# sqlite3
`sqlite3://path/to/database?query`
Unlike other migrate database drivers, the sqlite3 driver will automatically wrap each migration in an implicit transaction by default. Migrations must not contain explicit `BEGIN` or `COMMIT` statements. This behavior may change in a future major release. (See below for a workaround.)
Refer to [upstream documentation](https://github.com/mattn/go-sqlite3/blob/master/README.md#connection-string) for a complete list of query parameters supported by the sqlite3 database driver. The auxiliary query parameters listed below may be supplied to tailor migrate behavior. All auxiliary query parameters are optional.
| URL Query | WithInstance Config | Description |
|------------|---------------------|-------------|
| `x-migrations-table` | `MigrationsTable` | Name of the migrations table. Defaults to `schema_migrations`. |
| `x-no-tx-wrap` | `NoTxWrap` | Disable implicit transactions when `true`. Migrations may, and should, contain explicit `BEGIN` and `COMMIT` statements. |
## Notes
* Uses the `github.com/mattn/go-sqlite3` sqlite db driver (cgo) | {
"source": "yandex/perforator",
"title": "vendor/github.com/golang-migrate/migrate/v4/database/sqlite3/README.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/github.com/golang-migrate/migrate/v4/database/sqlite3/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 1110
} |
# file
`file:///absolute/path`
`file://relative/path` | {
"source": "yandex/perforator",
"title": "vendor/github.com/golang-migrate/migrate/v4/source/file/README.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/github.com/golang-migrate/migrate/v4/source/file/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 56
} |
# go_bindata
## Usage
### Read bindata with NewWithSourceInstance
```shell
go get -u github.com/jteeuwen/go-bindata/...
cd examples/migrations && go-bindata -pkg migrations .
```
```go
import (
	"github.com/golang-migrate/migrate/v4"
	"github.com/golang-migrate/migrate/v4/source/go_bindata"
	"github.com/golang-migrate/migrate/v4/source/go_bindata/examples/migrations"
)

func main() {
	// wrap assets into Resource
	s := bindata.Resource(migrations.AssetNames(),
		func(name string) ([]byte, error) {
			return migrations.Asset(name)
		})

	d, err := bindata.WithInstance(s)
	if err != nil {
		// handle error
	}

	m, err := migrate.NewWithSourceInstance("go-bindata", d, "database://foobar")
	if err != nil {
		// handle error
	}

	m.Up() // run your migrations and handle the errors of course
}
```
### Read bindata with URL (todo)
This will restore the assets in a tmp directory and then
proxy to source/file. go-bindata must be in your `$PATH`.
```
migrate -source go-bindata://examples/migrations/bindata.go
``` | {
"source": "yandex/perforator",
"title": "vendor/github.com/golang-migrate/migrate/v4/source/go_bindata/README.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/github.com/golang-migrate/migrate/v4/source/go_bindata/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 974
} |
# Google Cloud Storage
## Import
```go
import (
_ "github.com/golang-migrate/migrate/v4/source/google_cloud_storage"
)
```
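With the driver imported, migrations can be read straight from a bucket using the connection string described below (a sketch; the bucket name and the target database URL are placeholders):

```go
package main

import (
	"log"

	"github.com/golang-migrate/migrate/v4"
	_ "github.com/golang-migrate/migrate/v4/database/postgres"
	_ "github.com/golang-migrate/migrate/v4/source/google_cloud_storage"
)

func main() {
	m, err := migrate.New(
		"gcs://my-bucket/migrations",
		"postgres://localhost:5432/mydb?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	if err := m.Up(); err != nil {
		log.Fatal(err)
	}
}
```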
## Connection String
`gcs://<bucket>/<prefix>` | {
"source": "yandex/perforator",
"title": "vendor/github.com/golang-migrate/migrate/v4/source/google_cloud_storage/README.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/github.com/golang-migrate/migrate/v4/source/google_cloud_storage/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 178
} |
# httpfs
## Usage
This package can be used to create new migration source drivers that use
`http.FileSystem` to read migration files.
Struct `httpfs.PartialDriver` partly implements the `source.Driver` interface. It has all
the methods except for `Open()`. Embedding this struct and adding an `Open()` method
allows users of this package to create new migration sources. Example:
```go
type mydriver struct {
	httpfs.PartialDriver
}

func (d *mydriver) Open(url string) (source.Driver, error) {
	var fs http.FileSystem
	var path string
	var ds mydriver

	// acquire fs and path from url
	// set up ds if necessary

	if err := ds.Init(fs, path); err != nil {
		return nil, err
	}
	return &ds, nil
}
```
This package also provides a simple `source.Driver` implementation that works
with an `http.FileSystem` provided by the user of this package. It is created with
the `httpfs.New()` call.
Example of using `http.Dir()` to read migrations from `sql` directory:
```go
src, err := httpfs.New(http.Dir("sql"))
if err != nil {
	// do something
}

m, err := migrate.NewWithSourceInstance("httpfs", src, "database://url")
if err != nil {
	// do something
}

err = m.Up()
...
``` | {
"source": "yandex/perforator",
"title": "vendor/github.com/golang-migrate/migrate/v4/source/httpfs/README.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/github.com/golang-migrate/migrate/v4/source/httpfs/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 1187
} |
# iofs
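A typical use pairs this driver with Go's `embed.FS` (a sketch; the migrations directory and the target database URL are placeholders):

```go
package main

import (
	"embed"
	"log"

	"github.com/golang-migrate/migrate/v4"
	_ "github.com/golang-migrate/migrate/v4/database/sqlite"
	"github.com/golang-migrate/migrate/v4/source/iofs"
)

//go:embed migrations/*.sql
var fsys embed.FS

func main() {
	d, err := iofs.New(fsys, "migrations")
	if err != nil {
		log.Fatal(err)
	}
	m, err := migrate.NewWithSourceInstance("iofs", d, "sqlite://database.db")
	if err != nil {
		log.Fatal(err)
	}
	if err := m.Up(); err != nil {
		log.Fatal(err)
	}
}
```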
https://pkg.go.dev/github.com/golang-migrate/migrate/v4/source/iofs | {
"source": "yandex/perforator",
"title": "vendor/github.com/golang-migrate/migrate/v4/source/iofs/README.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/github.com/golang-migrate/migrate/v4/source/iofs/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 75
} |
Copyright 2010, 2019 The Go Authors. All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following disclaimer
in the documentation and/or other materials provided with the
distribution.
* Neither the name of Google Inc. nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. | {
"source": "yandex/perforator",
"title": "vendor/github.com/grpc-ecosystem/grpc-gateway/v2/internal/casing/LICENSE.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/github.com/grpc-ecosystem/grpc-gateway/v2/internal/casing/LICENSE.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 1484
} |
# Case conversion
This package contains two functions:
- `Camel` copied from the `github.com/golang/protobuf/protoc-gen-go/generator` package.
- `JSONCamelCase` copied from the `github.com/protocolbuffers/protobuf-go/internal/strs` package.
Both these modules are licensed by The Go Authors, as reflected in this package's [LICENSE.md]. | {
"source": "yandex/perforator",
"title": "vendor/github.com/grpc-ecosystem/grpc-gateway/v2/internal/casing/README.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/github.com/grpc-ecosystem/grpc-gateway/v2/internal/casing/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 338
} |
# Description
This is a sample chat program implemented using PostgreSQL's listen/notify
functionality with pgx.
Start multiple instances of this program connected to the same database to chat
between them.
## Connection configuration
The database connection is configured via DATABASE_URL and standard PostgreSQL environment variables (PGHOST, PGUSER, etc.)
You can either export them then run chat:

```
export PGHOST=/private/tmp
./chat
```
Or you can prefix the chat execution with the environment variables: `PGHOST=/private/tmp ./chat` | {
"source": "yandex/perforator",
"title": "vendor/github.com/jackc/pgx/v5/examples/chat/README.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/github.com/jackc/pgx/v5/examples/chat/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 550
} |
# Description
This is a sample todo list implemented using pgx as the connector to a
PostgreSQL data store.
# Usage
Create a PostgreSQL database and run structure.sql into it to create the
necessary data schema.
Example:

```
createdb todo
psql todo < structure.sql
```

Build todo:

```
go build
```
## Connection configuration
The database connection is configured via DATABASE_URL and standard PostgreSQL environment variables (PGHOST, PGUSER, etc.)
You can either export them then run todo:

```
export PGDATABASE=todo
./todo list
```

Or you can prefix the todo execution with the environment variables:

```
PGDATABASE=todo ./todo list
```
## Add a todo item

```
./todo add 'Learn go'
```

## List tasks

```
./todo list
```

## Update a task

```
./todo update 1 'Learn more go'
```

## Delete a task

```
./todo remove 1
```
# Example Setup and Execution
jack@hk-47~/dev/go/src/github.com/jackc/pgx/examples/todo$ createdb todo
jack@hk-47~/dev/go/src/github.com/jackc/pgx/examples/todo$ psql todo < structure.sql
Expanded display is used automatically.
Timing is on.
CREATE TABLE
Time: 6.363 ms
jack@hk-47~/dev/go/src/github.com/jackc/pgx/examples/todo$ go build
jack@hk-47~/dev/go/src/github.com/jackc/pgx/examples/todo$ export PGDATABASE=todo
jack@hk-47~/dev/go/src/github.com/jackc/pgx/examples/todo$ ./todo list
jack@hk-47~/dev/go/src/github.com/jackc/pgx/examples/todo$ ./todo add 'Learn Go'
jack@hk-47~/dev/go/src/github.com/jackc/pgx/examples/todo$ ./todo list
1. Learn Go
jack@hk-47~/dev/go/src/github.com/jackc/pgx/examples/todo$ ./todo update 1 'Learn more Go'
jack@hk-47~/dev/go/src/github.com/jackc/pgx/examples/todo$ ./todo list
1. Learn more Go
jack@hk-47~/dev/go/src/github.com/jackc/pgx/examples/todo$ ./todo remove 1
jack@hk-47~/dev/go/src/github.com/jackc/pgx/examples/todo$ ./todo list | {
"source": "yandex/perforator",
"title": "vendor/github.com/jackc/pgx/v5/examples/todo/README.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/github.com/jackc/pgx/v5/examples/todo/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 1867
} |
# Description
This is a sample REST URL shortener service implemented using pgx as the connector to a PostgreSQL data store.
# Usage
Create a PostgreSQL database and run structure.sql into it to create the necessary data schema.
Configure the database connection with `DATABASE_URL` or standard PostgreSQL (`PG*`) environment variables, then run main.go:
```
go run main.go
```
## Create or Update a Shortened URL
```
curl -X PUT -d 'http://www.google.com' http://localhost:8080/google
```
## Get a Shortened URL
```
curl http://localhost:8080/google
```
## Delete a Shortened URL
```
curl -X DELETE http://localhost:8080/google
``` | {
"source": "yandex/perforator",
"title": "vendor/github.com/jackc/pgx/v5/examples/url_shortener/README.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/github.com/jackc/pgx/v5/examples/url_shortener/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 641
} |
# pgio
Package pgio is a low-level toolkit for building messages in the PostgreSQL wire protocol.
pgio provides functions for appending integers to a []byte while doing byte
order conversion. | {
"source": "yandex/perforator",
"title": "vendor/github.com/jackc/pgx/v5/internal/pgio/README.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/github.com/jackc/pgx/v5/internal/pgio/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 189
} |
# xxhash
VENDORED: Go to [github.com/cespare/xxhash](https://github.com/cespare/xxhash) for original package.
xxhash is a Go implementation of the 64-bit [xxHash] algorithm, XXH64. This is a
high-quality hashing algorithm that is much faster than anything in the Go
standard library.
This package provides a straightforward API:
```
func Sum64(b []byte) uint64
func Sum64String(s string) uint64
type Digest struct{ ... }
func New() *Digest
```
The `Digest` type implements hash.Hash64. Its key methods are:
```
func (*Digest) Write([]byte) (int, error)
func (*Digest) WriteString(string) (int, error)
func (*Digest) Sum64() uint64
```
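A short usage sketch (shown against the upstream import path `github.com/cespare/xxhash/v2`; this vendored copy is internal to the zstd package):

```go
package main

import (
	"fmt"

	"github.com/cespare/xxhash/v2"
)

func main() {
	// One-shot hashing of a byte slice or a string.
	fmt.Println(xxhash.Sum64([]byte("hello")))
	fmt.Println(xxhash.Sum64String("hello"))

	// Streaming hashing via the Digest type (implements hash.Hash64).
	d := xxhash.New()
	d.WriteString("hello, ")
	d.WriteString("world")
	fmt.Println(d.Sum64())
}
```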
The package is written with optimized pure Go and also contains even faster
assembly implementations for amd64 and arm64. If desired, the `purego` build tag
opts into using the Go code even on those architectures.
[xxHash]: http://cyan4973.github.io/xxHash/
## Compatibility
This package is in a module and the latest code is in version 2 of the module.
You need a version of Go with at least "minimal module compatibility" to use
github.com/cespare/xxhash/v2:
* 1.9.7+ for Go 1.9
* 1.10.3+ for Go 1.10
* Go 1.11 or later
I recommend using the latest release of Go.
## Benchmarks
Here are some quick benchmarks comparing the pure-Go and assembly
implementations of Sum64.
| input size | purego | asm |
| ---------- | --------- | --------- |
| 4 B | 1.3 GB/s | 1.2 GB/s |
| 16 B | 2.9 GB/s | 3.5 GB/s |
| 100 B | 6.9 GB/s | 8.1 GB/s |
| 4 KB | 11.7 GB/s | 16.7 GB/s |
| 10 MB | 12.0 GB/s | 17.3 GB/s |
These numbers were generated on Ubuntu 20.04 with an Intel Xeon Platinum 8252C
CPU using the following commands under Go 1.19.2:
```
benchstat <(go test -tags purego -benchtime 500ms -count 15 -bench 'Sum64$')
benchstat <(go test -benchtime 500ms -count 15 -bench 'Sum64$')
```
## Projects using this package
- [InfluxDB](https://github.com/influxdata/influxdb)
- [Prometheus](https://github.com/prometheus/prometheus)
- [VictoriaMetrics](https://github.com/VictoriaMetrics/VictoriaMetrics)
- [FreeCache](https://github.com/coocood/freecache)
- [FastCache](https://github.com/VictoriaMetrics/fastcache) | {
"source": "yandex/perforator",
"title": "vendor/github.com/klauspost/compress/zstd/internal/xxhash/README.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/github.com/klauspost/compress/zstd/internal/xxhash/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 2205
} |
Client Configuration Support for GRPC
=====================================
This library provides high level configuration machinery to construct client
channels and load balance between them.
Each `grpc_channel` is created with a `Resolver`. It is the resolver's duty
to resolve a name into a set of arguments for the channel. Such arguments
might include:
- a list of (ip, port) addresses to connect to
- a load balancing policy to decide which server to send a request to
- a set of filters to mutate outgoing requests (say, by adding metadata)
The resolver provides this data as a stream of `grpc_channel_args` objects to
the channel. We represent arguments as a stream so that they can be changed
by the resolver during execution, by reacting to external events (such as
new service configuration data being pushed to some store).
Load Balancing
--------------
Load balancing configuration is provided by a `LoadBalancingPolicy` object.
The primary job of the load balancing policies is to pick a target server
given only the initial metadata for a request. It does this by providing
a `ConnectedSubchannel` object to the owning channel.
Sub-Channels
------------
A sub-channel provides a connection to a server for a client channel. It has a
connectivity state like a regular channel, and so can be connected or
disconnected. This connectivity state can be used to inform load balancing
decisions (for example, by avoiding disconnected backends).
Configured sub-channels are fully setup to participate in the grpc data plane.
Their behavior is specified by a set of grpc channel filters defined at their
construction. To customize this behavior, transports build
`ClientChannelFactory` objects, which customize construction arguments for
concrete subchannel instances.
Naming for GRPC
===============
See [gRPC name resolution](/doc/naming.md). | {
"source": "yandex/perforator",
"title": "contrib/libs/grpc/src/core/ext/filters/client_channel/README.md",
"url": "https://github.com/yandex/perforator/blob/main/contrib/libs/grpc/src/core/ext/filters/client_channel/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 1865
} |
# Binder transport for cross process IPC on Android
EXPERIMENTAL. API stability not guaranteed.
This transport implements
[BinderChannel for native cross-process communication on Android](https://github.com/grpc/proposal/blob/master/L73-java-binderchannel.md) and enables C++/Java cross-process communication on Android with gRPC.
Tests: https://github.com/grpc/grpc/tree/master/test/core/transport/binder/
Example apps: https://github.com/grpc/grpc/tree/master/examples/android/binder/java/io/grpc/binder/cpp | {
"source": "yandex/perforator",
"title": "contrib/libs/grpc/src/core/ext/transport/binder/README.md",
"url": "https://github.com/yandex/perforator/blob/main/contrib/libs/grpc/src/core/ext/transport/binder/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 513
} |
CHTTP2 - gRPC's implementation of an HTTP/2-based transport | {
"source": "yandex/perforator",
"title": "contrib/libs/grpc/src/core/ext/transport/chttp2/README.md",
"url": "https://github.com/yandex/perforator/blob/main/contrib/libs/grpc/src/core/ext/transport/chttp2/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 57
} |
# Customization Points
The custom directory is an injection point for custom user configurations.
## Header `gmock-port.h`
The following macros can be defined:
### Flag related macros:
* `GMOCK_DECLARE_bool_(name)`
* `GMOCK_DECLARE_int32_(name)`
* `GMOCK_DECLARE_string_(name)`
* `GMOCK_DEFINE_bool_(name, default_val, doc)`
* `GMOCK_DEFINE_int32_(name, default_val, doc)`
* `GMOCK_DEFINE_string_(name, default_val, doc)`
* `GMOCK_FLAG_GET(flag_name)`
* `GMOCK_FLAG_SET(flag_name, value)` | {
"source": "yandex/perforator",
"title": "contrib/restricted/googletest/googlemock/include/gmock/internal/custom/README.md",
"url": "https://github.com/yandex/perforator/blob/main/contrib/restricted/googletest/googlemock/include/gmock/internal/custom/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 509
} |
# Customization Points
The custom directory is an injection point for custom user configurations.
## Header `gtest.h`
### The following macros can be defined:
* `GTEST_OS_STACK_TRACE_GETTER_` - The name of an implementation of
`OsStackTraceGetterInterface`.
* `GTEST_CUSTOM_TEMPDIR_FUNCTION_` - An override for `testing::TempDir()`. See
`testing::TempDir` for semantics and signature.
## Header `gtest-port.h`
The following macros can be defined:
### Logging:
* `GTEST_LOG_(severity)`
* `GTEST_CHECK_(condition)`
* Functions `LogToStderr()` and `FlushInfoLog()` have to be provided too.
### Threading:
* `GTEST_HAS_NOTIFICATION_` - Enabled if Notification is already provided.
* `GTEST_HAS_MUTEX_AND_THREAD_LOCAL_` - Enabled if `Mutex` and `ThreadLocal`
are already provided. Must also provide `GTEST_DECLARE_STATIC_MUTEX_(mutex)`
and `GTEST_DEFINE_STATIC_MUTEX_(mutex)`
* `GTEST_EXCLUSIVE_LOCK_REQUIRED_(locks)`
* `GTEST_LOCK_EXCLUDED_(locks)`
### Underlying library support features
* `GTEST_HAS_CXXABI_H_`
### Exporting API symbols:
* `GTEST_API_` - Specifier for exported symbols.
## Header `gtest-printers.h`
* See documentation at `gtest/gtest-printers.h` for details on how to define a
custom printer. | {
"source": "yandex/perforator",
"title": "contrib/restricted/googletest/googletest/include/gtest/internal/custom/README.md",
"url": "https://github.com/yandex/perforator/blob/main/contrib/restricted/googletest/googletest/include/gtest/internal/custom/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 1268
} |
Run
===
```sh
siege -t60m -c200 http://127.0.0.1:8080/test
``` | {
"source": "yandex/perforator",
"title": "vendor/github.com/ClickHouse/clickhouse-go/v2/tests/issues/209/README.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/github.com/ClickHouse/clickhouse-go/v2/tests/issues/209/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 64
} |
# HTTP JSON Error Schema
The `error.proto` represents the HTTP-JSON schema used by Google APIs to convey
error payloads as described by https://cloud.google.com/apis/design/errors#http_mapping.
This package is for internal parsing logic only and should not be used in any
other context.
## Regeneration
To regenerate the protobuf Go code you will need the following:
* A local copy of [googleapis], the absolute path to which should be exported to
the environment variable `GOOGLEAPIS`
* The protobuf compiler [protoc]
* The Go [protobuf plugin]
* The [goimports] tool
From this directory run the following command:
```sh
protoc -I $GOOGLEAPIS -I. --go_out=. --go_opt=module=github.com/googleapis/gax-go/v2/apierror/internal/proto error.proto
goimports -w .
```
Note: the `module` plugin option ensures the generated code is placed in this
directory, and not in several nested directories defined by `go_package` option.
[googleapis]: https://github.com/googleapis/googleapis
[protoc]: https://github.com/protocolbuffers/protobuf#protocol-compiler-installation
[protobuf plugin]: https://developers.google.com/protocol-buffers/docs/reference/go-generated
[goimports]: https://pkg.go.dev/golang.org/x/tools/cmd/goimports | {
"source": "yandex/perforator",
"title": "vendor/github.com/googleapis/gax-go/v2/apierror/internal/proto/README.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/github.com/googleapis/gax-go/v2/apierror/internal/proto/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 1226
} |
# pgfortune
pgfortune is a mock PostgreSQL server that responds to every query with a fortune.
## Installation
Install `fortune` and `cowsay`. They should be available in any Unix package manager (apt, yum, brew, etc.)
```
go get -u github.com/jackc/pgproto3/example/pgfortune
```
## Usage
```
$ pgfortune
```
By default pgfortune listens on 127.0.0.1:15432 and responds to queries with `fortune | cowsay -f elephant`. These are
configurable with the `listen` and `response-command` arguments respectively.
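For example, to change the listen address and the command used to produce responses (the flag names are those described above; the exact invocation syntax is an assumption):

```
$ pgfortune -listen 127.0.0.1:15433 -response-command "fortune"
```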
While `pgfortune` is running connect to it with `psql`.
```
$ psql -h 127.0.0.1 -p 15432
Timing is on.
Null display is "∅".
Line style is unicode.
psql (11.5, server 0.0.0)
Type "help" for help.
[email protected]:15432 jack=# select foo;
fortune
─────────────────────────────────────────────
_________________________________________ ↵
/ Ships are safe in harbor, but they were \↵
\ never meant to stay there. /↵
----------------------------------------- ↵
\ /\ ___ /\ ↵
\ // \/ \/ \\ ↵
(( O O )) ↵
\\ / \ // ↵
\/ | | \/ ↵
| | | | ↵
| | | | ↵
| o | ↵
| | | | ↵
|m| |m| ↵
(1 row)
Time: 28.161 ms
``` | {
"source": "yandex/perforator",
"title": "vendor/github.com/jackc/pgx/v5/pgproto3/example/pgfortune/README.md",
"url": "https://github.com/yandex/perforator/blob/main/vendor/github.com/jackc/pgx/v5/pgproto3/example/pgfortune/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 1498
} |
# Resolver
Implementations of various name resolution schemes.
See the [naming spec](/doc/naming.md). | {
"source": "yandex/perforator",
"title": "contrib/libs/grpc/src/core/ext/filters/client_channel/resolver/README.md",
"url": "https://github.com/yandex/perforator/blob/main/contrib/libs/grpc/src/core/ext/filters/client_channel/resolver/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 102
} |
chttp2 transport plugin - implements grpc over http2
Used by chttp2/{client,server}/{insecure,secure} plugins to implement most of
their functionality | {
"source": "yandex/perforator",
"title": "contrib/libs/grpc/src/core/ext/transport/chttp2/transport/README.md",
"url": "https://github.com/yandex/perforator/blob/main/contrib/libs/grpc/src/core/ext/transport/chttp2/transport/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 151
} |
IMath
=====
Arbitrary precision integer and rational arithmetic library.
IMath is an open-source ANSI C arbitrary precision integer and rational
arithmetic library.
IMath is copyright © 2002-2009 Michael J. Fromberger.
> Permission is hereby granted, free of charge, to any person obtaining a copy
> of this software and associated documentation files (the "Software"), to deal
> in the Software without restriction, including without limitation the rights
> to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
> copies of the Software, and to permit persons to whom the Software is
> furnished to do so, subject to the following conditions:
>
> The above copyright notice and this permission notice shall be included in
> all copies or substantial portions of the Software.
>
> THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
> AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
> LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
> OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
> SOFTWARE.
About IMath
-----------
IMath is a library written in portable ANSI C that allows you to perform
arithmetic on integers and rational numbers of arbitrary precision. While many
programming languages, including Java, Perl, and Python provide arbitrary
precision numbers as a standard library or language feature, C does not.
IMath was designed to be small, self-contained, easy to understand and use, and
as portable as possible across various platforms. The API is simple, and the
code should be comparatively easy to modify or extend. Simplicity and
portability are useful goals for some applications—however, IMath does
not attempt to break performance records. If you need the fastest possible
implementation, you might consider some other libraries, such as GNU MP (GMP),
MIRACL, or the bignum library from OpenSSL.
Programming with IMath
----------------------
Detailed descriptions of the IMath API can be found in [doc.md](doc.md).
However, the following is a brief synopsis of how to get started with some
simple tasks.
To do basic integer arithmetic, you must declare variables of type `mpz_t` in
your program, and call the functions defined in `imath.h` to operate on them.
Here is a simple example that reads one base-10 integer from the command line,
multiplies it by another (fixed) value, and prints the result to the standard
output in base-10 notation:
```c
#include <stdio.h>
#include <stdlib.h>

#include "imath.h"

int main(int argc, char *argv[])
{
  mpz_t a, b;
  char *buf;
  int len;

  if (argc < 2) {
    fprintf(stderr, "Usage: testprogram <integer>\n");
    return 1;
  }

  /* Initialize a new zero-valued mpz_t structure */
  mp_int_init(&a);

  /* Initialize a new mpz_t with a small integer value */
  mp_int_init_value(&b, 25101);

  /* Read a string value in the specified radix */
  mp_int_read_string(&a, 10, argv[1]);

  /* Multiply the two together... */
  mp_int_mul(&a, &b, &a);

  /* Print out the result */
  len = mp_int_string_len(&a, 10);
  buf = calloc(len, sizeof(*buf));
  mp_int_to_string(&a, 10, buf, len);
  printf("result = %s\n", buf);
  free(buf);

  /* Release memory occupied by mpz_t structures when finished */
  mp_int_clear(&b);
  mp_int_clear(&a);

  return 0;
}
```
This simple example program does not do any error checking, but all the IMath
API functions return an `mp_result` value which can be used to detect various
problems like range errors, running out of memory, and undefined results.
The IMath API also supports operations on arbitrary precision rational numbers.
The functions for creating and manipulating rational values (type `mpq_t`) are
defined in `imrat.h`, so that you need only include them in your project if you
wish to. | {
"source": "yandex/perforator",
"title": "contrib/libs/llvm18/tools/polly/lib/External/isl/imath/README.md",
"url": "https://github.com/yandex/perforator/blob/main/contrib/libs/llvm18/tools/polly/lib/External/isl/imath/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 4125
} |
Support for resolving the scheme used by binder transport implementation.
The URI's authority is required to be empty.
The path is used as the identifiers of endpoint binder objects and the length
limit of the identifier is the same as unix socket length limit.
The length limit of the path is guaranteed to be at least 100 characters by a
`static_assert` in the implementation. | {
"source": "yandex/perforator",
"title": "contrib/libs/grpc/src/core/ext/filters/client_channel/resolver/binder/README.md",
"url": "https://github.com/yandex/perforator/blob/main/contrib/libs/grpc/src/core/ext/filters/client_channel/resolver/binder/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 394
} |
dns: scheme name resolution, using gethostbyname
(or other OS-specific implementation) | {
"source": "yandex/perforator",
"title": "contrib/libs/grpc/src/core/ext/filters/client_channel/resolver/dns/native/README.md",
"url": "https://github.com/yandex/perforator/blob/main/contrib/libs/grpc/src/core/ext/filters/client_channel/resolver/dns/native/README.md",
"date": "2025-01-29T14:20:43",
"stars": 2926,
"description": "Perforator is a cluster-wide continuous profiling tool designed for large data centers",
"file_size": 86
} |
<h1 align='center'>EchoMimicV2: Towards Striking, Simplified, and Semi-Body Human Animation</h1>
<div align='center'>
<a href='https://github.com/mengrang' target='_blank'>Rang Meng</a><sup></sup> 
<a href='https://github.com/' target='_blank'>Xingyu Zhang</a><sup></sup> 
<a href='https://lymhust.github.io/' target='_blank'>Yuming Li</a><sup></sup> 
<a href='https://github.com/' target='_blank'>Chenguang Ma</a><sup></sup>
</div>
<div align='center'>
Terminal Technology Department, Alipay, Ant Group.
</div>
<br>
<div align='center'>
<a href='https://antgroup.github.io/ai/echomimic_v2/'><img src='https://img.shields.io/badge/Project-Page-blue'></a>
<a href='https://huggingface.co/BadToBest/EchoMimicV2'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20HuggingFace-Model-yellow'></a>
<!--<a href='https://antgroup.github.io/ai/echomimic_v2/'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20HuggingFace-Demo-yellow'></a>-->
<a href='https://modelscope.cn/models/BadToBest/EchoMimicV2'><img src='https://img.shields.io/badge/ModelScope-Model-purple'></a>
<!--<a href='https://antgroup.github.io/ai/echomimic_v2/'><img src='https://img.shields.io/badge/ModelScope-Demo-purple'></a>-->
<a href='https://arxiv.org/abs/2411.10061'><img src='https://img.shields.io/badge/Paper-Arxiv-red'></a>
<a href='https://github.com/antgroup/echomimic_v2/blob/main/assets/halfbody_demo/wechat_group.png'><img src='https://badges.aleen42.com/src/wechat.svg'></a>
</div>
<div align='center'>
<a href='https://github.com/antgroup/echomimic_v2/discussions/53'><img src='https://img.shields.io/badge/English-Common Problems-orange'></a>
<a href='https://github.com/antgroup/echomimic_v2/discussions/40'><img src='https://img.shields.io/badge/中文版-常见问题汇总-orange'></a>
</div>
## 🚀 EchoMimic Series
* EchoMimicV1: Lifelike Audio-Driven Portrait Animations through Editable Landmark Conditioning. [GitHub](https://github.com/antgroup/echomimic)
* EchoMimicV2: Towards Striking, Simplified, and Semi-Body Human Animation. [GitHub](https://github.com/antgroup/echomimic_v2)
## 📣 Updates
* [2025.01.16] 🔥 Please check out the [discussions](https://github.com/antgroup/echomimic_v2/discussions) to learn how to start EchoMimicV2.
* [2025.01.16] 🚀🔥 [GradioUI for Accelerated EchoMimicV2](https://github.com/antgroup/echomimic_v2/blob/main/app_acc.py) is now available.
* [2025.01.03] 🚀🔥 **One Minute is All You Need to Generate Video**. [Accelerated EchoMimicV2](https://github.com/antgroup/echomimic_v2/blob/main/infer_acc.py) is released. The inference speed can be improved by 9x (from ~7mins/120frames to ~50s/120frames on A100 GPU).
* [2024.12.16] 🔥 [RefImg-Pose Alignment Demo](https://github.com/antgroup/echomimic_v2/blob/main/demo.ipynb) is now available, which involves aligning reference image, extracting pose from driving video, and generating video.
* [2024.11.27] 🔥 [Installation tutorial](https://www.youtube.com/watch?v=2ab6U1-nVTQ) is now available. Thanks [AiMotionStudio](https://www.youtube.com/@AiMotionStudio) for the contribution.
* [2024.11.22] 🔥 [GradioUI](https://github.com/antgroup/echomimic_v2/blob/main/app.py) is now available. Thanks @gluttony-10 for the contribution.
* [2024.11.22] 🔥 [ComfyUI](https://github.com/smthemex/ComfyUI_EchoMimic) is now available. Thanks @smthemex for the contribution.
* [2024.11.21] 🔥 We release the EMTD dataset list and processing scripts.
* [2024.11.21] 🔥 We release our [EchoMimicV2](https://github.com/antgroup/echomimic_v2) codes and models.
* [2024.11.15] 🔥 Our [paper](https://arxiv.org/abs/2411.10061) is in public on arxiv.
## 🌅 Gallery
### Introduction
<table class="center">
<tr>
<td width=50% style="border: none">
<video controls loop src="https://github.com/user-attachments/assets/f544dfc0-7d1a-4c2c-83c0-608f28ffda25" muted="false"></video>
</td>
<td width=50% style="border: none">
<video controls loop src="https://github.com/user-attachments/assets/7f626b65-725c-4158-a96b-062539874c63" muted="false"></video>
</td>
</tr>
</table>
### English Driven Audio
<table class="center">
<tr>
<td width=100% style="border: none">
<video controls loop src="https://github.com/user-attachments/assets/3d5ac52c-62e4-41bc-8b27-96f005bbd781" muted="false"></video>
</td>
</tr>
</table>
<table class="center">
<tr>
<td width=30% style="border: none">
<video controls loop src="https://github.com/user-attachments/assets/e8dd6919-665e-4343-931f-54c93dc49a7d" muted="false"></video>
</td>
<td width=30% style="border: none">
<video controls loop src="https://github.com/user-attachments/assets/2a377391-a0d3-4a9d-8dde-cc59006e7e5b" muted="false"></video>
</td>
<td width=30% style="border: none">
<video controls loop src="https://github.com/user-attachments/assets/462bf3bb-0af2-43e2-a2dc-559e79953f3c" muted="false"></video>
</td>
</tr>
<tr>
<td width=30% style="border: none">
<video controls loop src="https://github.com/user-attachments/assets/0e988e7f-6346-4b54-9061-9cfc7a80e9c8" muted="false"></video>
</td>
<td width=30% style="border: none">
<video controls loop src="https://github.com/user-attachments/assets/56f739bd-afbf-4ed3-ab15-73a811c1bc46" muted="false"></video>
</td>
<td width=30% style="border: none">
<video controls loop src="https://github.com/user-attachments/assets/1b2f7827-111d-4fc0-a773-e1731bba285d" muted="false"></video>
</td>
</tr>
<tr>
<td width=30% style="border: none">
<video controls loop src="https://github.com/user-attachments/assets/a76b6cc8-89b9-4f7e-b1ce-c85a657b6dc7" muted="false"></video>
</td>
<td width=30% style="border: none">
<video controls loop src="https://github.com/user-attachments/assets/bf03b407-5033-4a30-aa59-b8680a515181" muted="false"></video>
</td>
<td width=30% style="border: none">
<video controls loop src="https://github.com/user-attachments/assets/f98b3985-572c-499f-ae1a-1b9befe3086f" muted="false"></video>
</td>
</tr>
</table>
### Chinese Driven Audio
<table class="center">
<tr>
<td width=30% style="border: none">
<video controls loop src="https://github.com/user-attachments/assets/a940a332-2fd1-48e7-b3c4-f88f63fd1c9d" muted="false"></video>
</td>
<td width=30% style="border: none">
<video controls loop src="https://github.com/user-attachments/assets/8f185829-c67f-45f4-846c-fcbe012c3acf" muted="false"></video>
</td>
<td width=30% style="border: none">
<video controls loop src="https://github.com/user-attachments/assets/a49ab9be-f17b-41c5-96dd-20dc8d759b45" muted="false"></video>
</td>
</tr>
<tr>
<td width=30% style="border: none">
<video controls loop src="https://github.com/user-attachments/assets/1136ec68-a13c-4ee7-ab31-5621530bf9df" muted="false"></video>
</td>
<td width=30% style="border: none">
<video controls loop src="https://github.com/user-attachments/assets/fc16d512-8806-4662-ae07-8fcf45c75a83" muted="false"></video>
</td>
<td width=30% style="border: none">
<video controls loop src="https://github.com/user-attachments/assets/f8559cd1-f555-4781-9251-dfcef10b5b01" muted="false"></video>
</td>
</tr>
<tr>
<td width=30% style="border: none">
<video controls loop src="https://github.com/user-attachments/assets/c7473e3a-ab51-4ad5-be96-6c4691fc0c6e" muted="false"></video>
</td>
<td width=30% style="border: none">
<video controls loop src="https://github.com/user-attachments/assets/ca69eac0-5126-41ee-8cac-c9722004d771" muted="false"></video>
</td>
<td width=30% style="border: none">
<video controls loop src="https://github.com/user-attachments/assets/e66f1712-b66d-46b5-8bbd-811fbcfea4fd" muted="false"></video>
</td>
</tr>
</table>
## ⚒️ Automatic Installation
### Download the Codes
```bash
git clone https://github.com/antgroup/echomimic_v2
cd echomimic_v2
```
### Automatic Setup
- CUDA >= 11.7, Python == 3.10
```bash
sh linux_setup.sh
```
## ⚒️ Manual Installation
### Download the Codes
```bash
git clone https://github.com/antgroup/echomimic_v2
cd echomimic_v2
```
### Python Environment Setup
- Tested System Environment: Centos 7.2/Ubuntu 22.04, Cuda >= 11.7
- Tested GPUs: A100(80G) / RTX4090D (24G) / V100(16G)
- Tested Python Version: 3.8 / 3.10 / 3.11
Create conda environment (Recommended):
```bash
conda create -n echomimic python=3.10
conda activate echomimic
```
Install packages with `pip`
```bash
pip install pip -U
pip install torch==2.5.1 torchvision==0.20.1 torchaudio==2.5.1 xformers==0.0.28.post3 --index-url https://download.pytorch.org/whl/cu124
pip install torchao --index-url https://download.pytorch.org/whl/nightly/cu124
pip install -r requirements.txt
pip install --no-deps facenet_pytorch==2.6.0
```
### Download ffmpeg-static
Download and decompress [ffmpeg-static](https://www.johnvansickle.com/ffmpeg/old-releases/ffmpeg-4.4-amd64-static.tar.xz), then
```
export FFMPEG_PATH=/path/to/ffmpeg-4.4-amd64-static
```
### Download pretrained weights
```shell
git lfs install
git clone https://huggingface.co/BadToBest/EchoMimicV2 pretrained_weights
```
The **pretrained_weights** directory is organized as follows.
```
./pretrained_weights/
├── denoising_unet.pth
├── reference_unet.pth
├── motion_module.pth
├── pose_encoder.pth
├── sd-vae-ft-mse
│ └── ...
└── audio_processor
└── tiny.pt
```
In which **denoising_unet.pth** / **reference_unet.pth** / **motion_module.pth** / **pose_encoder.pth** are the main checkpoints of **EchoMimic**. The other models in this hub can also be downloaded from their original hubs, thanks to their brilliant works:
- [sd-vae-ft-mse](https://huggingface.co/stabilityai/sd-vae-ft-mse)
- [audio_processor(whisper)](https://openaipublic.azureedge.net/main/whisper/models/65147644a518d12f04e32d6f3b26facc3f8dd46e5390956a9424a650c0ce22b9/tiny.pt)
### Inference on Demo
Run the gradio:
```bash
python app.py
```
Run the python inference script:
```bash
python infer.py --config='./configs/prompts/infer.yaml'
```
Run the python inference script for the accelerated version. Make sure to check out the configuration for accelerated inference:
```bash
python infer_acc.py --config='./configs/prompts/infer_acc.yaml'
```
### EMTD Dataset
Download dataset:
```bash
python ./EMTD_dataset/download.py
```
Slice dataset:
```bash
bash ./EMTD_dataset/slice.sh
```
Process dataset:
```bash
python ./EMTD_dataset/preprocess.py
```
Make sure to check out the [discussions](https://github.com/antgroup/echomimic_v2/discussions) to learn how to start the inference.
## 📝 Release Plans
| Status | Milestone | ETA |
|:--------:|:-------------------------------------------------------------------------|:--:|
| ✅ | The inference source code of EchoMimicV2 meets everyone on GitHub | 21st Nov, 2024 |
| ✅ | Pretrained models trained on English and Mandarin Chinese on HuggingFace | 21st Nov, 2024 |
| ✅ | Pretrained models trained on English and Mandarin Chinese on ModelScope | 21st Nov, 2024 |
| ✅ | EMTD dataset list and processing scripts | 21st Nov, 2024 |
| ✅ | Jupyter demo with pose and reference image alignment | 16th Dec, 2024 |
| ✅ | Accelerated models | 3rd Jan, 2025 |
| 🚀 | Online Demo on ModelScope to be released | TBD |
| 🚀 | Online Demo on HuggingFace to be released | TBD |
## ⚖️ Disclaimer
This project is intended for academic research, and we explicitly disclaim any responsibility for user-generated content. Users are solely liable for their actions while using the generative model. The project contributors have no legal affiliation with, nor accountability for, users' behaviors. It is imperative to use the generative model responsibly, adhering to both ethical and legal standards.
## 🙏🏻 Acknowledgements
We would like to thank the contributors to the [MimicMotion](https://github.com/Tencent/MimicMotion) and [Moore-AnimateAnyone](https://github.com/MooreThreads/Moore-AnimateAnyone) repositories, for their open research and exploration.
We are also grateful to [CyberHost](https://cyberhost.github.io/) and [Vlogger](https://enriccorona.github.io/vlogger/) for their outstanding work in the area of audio-driven human animation.
If we missed any open-source projects or related articles, we would like to complement the acknowledgement of this specific work immediately.
## 📒 Citation
If you find our work useful for your research, please consider citing the paper:
```
@misc{meng2024echomimicv2,
  title={EchoMimicV2: Towards Striking, Simplified, and Semi-Body Human Animation},
  author={Rang Meng and Xingyu Zhang and Yuming Li and Chenguang Ma},
  year={2024},
  eprint={2411.10061},
  archivePrefix={arXiv}
}
```
## 🌟 Star History
[](https://star-history.com/#antgroup/echomimic_v2&Date) | {
"source": "antgroup/echomimic_v2",
"title": "README.md",
"url": "https://github.com/antgroup/echomimic_v2/blob/main/README.md",
"date": "2024-11-20T08:35:35",
"stars": 2904,
"description": "EchoMimicV2: Towards Striking, Simplified, and Semi-Body Human Animation",
"file_size": 13489
} |
<div align="center">
# LTX-Video
This is the official repository for LTX-Video.
[Website](https://www.lightricks.com/ltxv) |
[Model](https://huggingface.co/Lightricks/LTX-Video) |
[Demo](https://fal.ai/models/fal-ai/ltx-video) |
[Paper](https://arxiv.org/abs/2501.00103)
</div>
## Table of Contents
- [Introduction](#introduction)
- [Quick Start Guide](#quick-start-guide)
- [Online demo](#online-demo)
- [Run locally](#run-locally)
- [Installation](#installation)
- [Inference](#inference)
- [ComfyUI Integration](#comfyui-integration)
- [Diffusers Integration](#diffusers-integration)
- [Model User Guide](#model-user-guide)
- [Community Contribution](#community-contribution)
- [Training](#training)
- [Join Us!](#join-us)
- [Acknowledgement](#acknowledgement)
# Introduction
LTX-Video is the first DiT-based video generation model that can generate high-quality videos in *real-time*.
It can generate 24 FPS videos at 768x512 resolution, faster than it takes to watch them.
The model is trained on a large-scale dataset of diverse videos and can generate high-resolution videos
with realistic and diverse content.
| | | | |
|:---:|:---:|:---:|:---:|
| <br><details style="max-width: 300px; margin: auto;"><summary>A woman with long brown hair and light skin smiles at another woman...</summary>A woman with long brown hair and light skin smiles at another woman with long blonde hair. The woman with brown hair wears a black jacket and has a small, barely noticeable mole on her right cheek. The camera angle is a close-up, focused on the woman with brown hair's face. The lighting is warm and natural, likely from the setting sun, casting a soft glow on the scene. The scene appears to be real-life footage.</details> | <br><details style="max-width: 300px; margin: auto;"><summary>A woman walks away from a white Jeep parked on a city street at night...</summary>A woman walks away from a white Jeep parked on a city street at night, then ascends a staircase and knocks on a door. The woman, wearing a dark jacket and jeans, walks away from the Jeep parked on the left side of the street, her back to the camera; she walks at a steady pace, her arms swinging slightly by her sides; the street is dimly lit, with streetlights casting pools of light on the wet pavement; a man in a dark jacket and jeans walks past the Jeep in the opposite direction; the camera follows the woman from behind as she walks up a set of stairs towards a building with a green door; she reaches the top of the stairs and turns left, continuing to walk towards the building; she reaches the door and knocks on it with her right hand; the camera remains stationary, focused on the doorway; the scene is captured in real-life footage.</details> | <br><details style="max-width: 300px; margin: auto;"><summary>A woman with blonde hair styled up, wearing a black dress...</summary>A woman with blonde hair styled up, wearing a black dress with sequins and pearl earrings, looks down with a sad expression on her face. The camera remains stationary, focused on the woman's face. The lighting is dim, casting soft shadows on her face. The scene appears to be from a movie or TV show.</details> | <br><details style="max-width: 300px; margin: auto;"><summary>The camera pans over a snow-covered mountain range...</summary>The camera pans over a snow-covered mountain range, revealing a vast expanse of snow-capped peaks and valleys.The mountains are covered in a thick layer of snow, with some areas appearing almost white while others have a slightly darker, almost grayish hue. The peaks are jagged and irregular, with some rising sharply into the sky while others are more rounded. The valleys are deep and narrow, with steep slopes that are also covered in snow. The trees in the foreground are mostly bare, with only a few leaves remaining on their branches. The sky is overcast, with thick clouds obscuring the sun. The overall impression is one of peace and tranquility, with the snow-covered mountains standing as a testament to the power and beauty of nature.</details> |
| <br><details style="max-width: 300px; margin: auto;"><summary>A woman with light skin, wearing a blue jacket and a black hat...</summary>A woman with light skin, wearing a blue jacket and a black hat with a veil, looks down and to her right, then back up as she speaks; she has brown hair styled in an updo, light brown eyebrows, and is wearing a white collared shirt under her jacket; the camera remains stationary on her face as she speaks; the background is out of focus, but shows trees and people in period clothing; the scene is captured in real-life footage.</details> | <br><details style="max-width: 300px; margin: auto;"><summary>A man in a dimly lit room talks on a vintage telephone...</summary>A man in a dimly lit room talks on a vintage telephone, hangs up, and looks down with a sad expression. He holds the black rotary phone to his right ear with his right hand, his left hand holding a rocks glass with amber liquid. He wears a brown suit jacket over a white shirt, and a gold ring on his left ring finger. His short hair is neatly combed, and he has light skin with visible wrinkles around his eyes. The camera remains stationary, focused on his face and upper body. The room is dark, lit only by a warm light source off-screen to the left, casting shadows on the wall behind him. The scene appears to be from a movie.</details> | <br><details style="max-width: 300px; margin: auto;"><summary>A prison guard unlocks and opens a cell door...</summary>A prison guard unlocks and opens a cell door to reveal a young man sitting at a table with a woman. The guard, wearing a dark blue uniform with a badge on his left chest, unlocks the cell door with a key held in his right hand and pulls it open; he has short brown hair, light skin, and a neutral expression. The young man, wearing a black and white striped shirt, sits at a table covered with a white tablecloth, facing the woman; he has short brown hair, light skin, and a neutral expression. The woman, wearing a dark blue shirt, sits opposite the young man, her face turned towards him; she has short blonde hair and light skin. The camera remains stationary, capturing the scene from a medium distance, positioned slightly to the right of the guard. The room is dimly lit, with a single light fixture illuminating the table and the two figures. The walls are made of large, grey concrete blocks, and a metal door is visible in the background. The scene is captured in real-life footage.</details> | <br><details style="max-width: 300px; margin: auto;"><summary>A woman with blood on her face and a white tank top...</summary>A woman with blood on her face and a white tank top looks down and to her right, then back up as she speaks. She has dark hair pulled back, light skin, and her face and chest are covered in blood. The camera angle is a close-up, focused on the woman's face and upper torso. The lighting is dim and blue-toned, creating a somber and intense atmosphere. The scene appears to be from a movie or TV show.</details> |
| <br><details style="max-width: 300px; margin: auto;"><summary>A man with graying hair, a beard, and a gray shirt...</summary>A man with graying hair, a beard, and a gray shirt looks down and to his right, then turns his head to the left. The camera angle is a close-up, focused on the man's face. The lighting is dim, with a greenish tint. The scene appears to be real-life footage. Step</details> | <br><details style="max-width: 300px; margin: auto;"><summary>A clear, turquoise river flows through a rocky canyon...</summary>A clear, turquoise river flows through a rocky canyon, cascading over a small waterfall and forming a pool of water at the bottom.The river is the main focus of the scene, with its clear water reflecting the surrounding trees and rocks. The canyon walls are steep and rocky, with some vegetation growing on them. The trees are mostly pine trees, with their green needles contrasting with the brown and gray rocks. The overall tone of the scene is one of peace and tranquility.</details> | <br><details style="max-width: 300px; margin: auto;"><summary>A man in a suit enters a room and speaks to two women...</summary>A man in a suit enters a room and speaks to two women sitting on a couch. The man, wearing a dark suit with a gold tie, enters the room from the left and walks towards the center of the frame. He has short gray hair, light skin, and a serious expression. He places his right hand on the back of a chair as he approaches the couch. Two women are seated on a light-colored couch in the background. The woman on the left wears a light blue sweater and has short blonde hair. The woman on the right wears a white sweater and has short blonde hair. The camera remains stationary, focusing on the man as he enters the room. The room is brightly lit, with warm tones reflecting off the walls and furniture. The scene appears to be from a film or television show.</details> | <br><details style="max-width: 300px; margin: auto;"><summary>The waves crash against the jagged rocks of the shoreline...</summary>The waves crash against the jagged rocks of the shoreline, sending spray high into the air.The rocks are a dark gray color, with sharp edges and deep crevices. The water is a clear blue-green, with white foam where the waves break against the rocks. The sky is a light gray, with a few white clouds dotting the horizon.</details> |
| <br><details style="max-width: 300px; margin: auto;"><summary>The camera pans across a cityscape of tall buildings...</summary>The camera pans across a cityscape of tall buildings with a circular building in the center. The camera moves from left to right, showing the tops of the buildings and the circular building in the center. The buildings are various shades of gray and white, and the circular building has a green roof. The camera angle is high, looking down at the city. The lighting is bright, with the sun shining from the upper left, casting shadows from the buildings. The scene is computer-generated imagery.</details> | <br><details style="max-width: 300px; margin: auto;"><summary>A man walks towards a window, looks out, and then turns around...</summary>A man walks towards a window, looks out, and then turns around. He has short, dark hair, dark skin, and is wearing a brown coat over a red and gray scarf. He walks from left to right towards a window, his gaze fixed on something outside. The camera follows him from behind at a medium distance. The room is brightly lit, with white walls and a large window covered by a white curtain. As he approaches the window, he turns his head slightly to the left, then back to the right. He then turns his entire body to the right, facing the window. The camera remains stationary as he stands in front of the window. The scene is captured in real-life footage.</details> | <br><details style="max-width: 300px; margin: auto;"><summary>Two police officers in dark blue uniforms and matching hats...</summary>Two police officers in dark blue uniforms and matching hats enter a dimly lit room through a doorway on the left side of the frame. The first officer, with short brown hair and a mustache, steps inside first, followed by his partner, who has a shaved head and a goatee. Both officers have serious expressions and maintain a steady pace as they move deeper into the room. The camera remains stationary, capturing them from a slightly low angle as they enter. The room has exposed brick walls and a corrugated metal ceiling, with a barred window visible in the background. The lighting is low-key, casting shadows on the officers' faces and emphasizing the grim atmosphere. The scene appears to be from a film or television show.</details> | <br><details style="max-width: 300px; margin: auto;"><summary>A woman with short brown hair, wearing a maroon sleeveless top...</summary>A woman with short brown hair, wearing a maroon sleeveless top and a silver necklace, walks through a room while talking, then a woman with pink hair and a white shirt appears in the doorway and yells. The first woman walks from left to right, her expression serious; she has light skin and her eyebrows are slightly furrowed. The second woman stands in the doorway, her mouth open in a yell; she has light skin and her eyes are wide. The room is dimly lit, with a bookshelf visible in the background. The camera follows the first woman as she walks, then cuts to a close-up of the second woman's face. The scene is captured in real-life footage.</details> |
# Quick Start Guide
## Online demo
The model is accessible right away via following links:
- [HF Playground](https://huggingface.co/spaces/Lightricks/LTX-Video-Playground)
- [Fal.ai text-to-video](https://fal.ai/models/fal-ai/ltx-video)
- [Fal.ai image-to-video](https://fal.ai/models/fal-ai/ltx-video/image-to-video)
- [Replicate text-to-video and image-to-video](https://replicate.com/lightricks/ltx-video)
## Run locally
### Installation
The codebase was tested with Python 3.10.5, CUDA version 12.2, and supports PyTorch >= 2.1.2.
On macOS, MPS was tested with PyTorch 2.3.0, and should support PyTorch == 2.3 or >= 2.6.
```bash
git clone https://github.com/Lightricks/LTX-Video.git
cd LTX-Video
# create env
python -m venv env
source env/bin/activate
python -m pip install -e .\[inference-script\]
```
Then, download the model from [Hugging Face](https://huggingface.co/Lightricks/LTX-Video)
```python
from huggingface_hub import hf_hub_download
model_path = 'PATH' # The local directory to save downloaded checkpoint
hf_hub_download(repo_id="Lightricks/LTX-Video", filename="ltx-video-2b-v0.9.1.safetensors", local_dir=model_path, local_dir_use_symlinks=False, repo_type='model')
```
### Inference
To use our model, please follow the inference code in [inference.py](./inference.py):
#### For text-to-video generation:
```bash
python inference.py --ckpt_path 'PATH' --prompt "PROMPT" --height HEIGHT --width WIDTH --num_frames NUM_FRAMES --seed SEED
```
#### For image-to-video generation:
```bash
python inference.py --ckpt_path 'PATH' --prompt "PROMPT" --input_image_path IMAGE_PATH --height HEIGHT --width WIDTH --num_frames NUM_FRAMES --seed SEED
```
## ComfyUI Integration
To use our model with ComfyUI, please follow the instructions at [https://github.com/Lightricks/ComfyUI-LTXVideo/](https://github.com/Lightricks/ComfyUI-LTXVideo/).
## Diffusers Integration
To use our model with the Diffusers Python library, check out the [official documentation](https://huggingface.co/docs/diffusers/main/en/api/pipelines/ltx_video).
Diffusers also supports an 8-bit version of LTX-Video, [see details below](#ltx-videoq8)
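As a minimal sketch of the Diffusers route (treat this as an outline rather than a definitive recipe; class names and defaults can change between Diffusers releases, so check the linked documentation):
```python
import torch
from diffusers import LTXPipeline
from diffusers.utils import export_to_video

# Load the text-to-video pipeline; bfloat16 keeps GPU memory usage manageable.
pipe = LTXPipeline.from_pretrained("Lightricks/LTX-Video", torch_dtype=torch.bfloat16)
pipe.to("cuda")

# Width/height must be divisible by 32 and num_frames must be 8k + 1
# (see the Parameter Guide below).
frames = pipe(
    prompt="A clear mountain lake at sunrise, the camera slowly panning right",
    width=704,
    height=480,
    num_frames=161,
    num_inference_steps=40,
).frames[0]

export_to_video(frames, "output.mp4", fps=24)
```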
# Model User Guide
## 📝 Prompt Engineering
When writing prompts, focus on detailed, chronological descriptions of actions and scenes. Include specific movements, appearances, camera angles, and environmental details - all in a single flowing paragraph. Start directly with the action, and keep descriptions literal and precise. Think like a cinematographer describing a shot list. Keep within 200 words. For best results, build your prompts using this structure:
* Start with main action in a single sentence
* Add specific details about movements and gestures
* Describe character/object appearances precisely
* Include background and environment details
* Specify camera angles and movements
* Describe lighting and colors
* Note any changes or sudden events
* See [examples](#introduction) for more inspiration.
## 🎮 Parameter Guide
* Resolution Preset: Higher resolutions for detailed scenes, lower for faster generation and simpler scenes. The model works on resolutions that are divisible by 32 and frame counts of the form 8k + 1 (e.g. 257). If the requested resolution or number of frames does not satisfy these constraints, the input is padded with -1 and then cropped to the desired resolution and number of frames. The model works best at resolutions under 720 x 1280 and with fewer than 257 frames (see the helper sketch after this list).
* Seed: Save seed values to recreate specific styles or compositions you like
* Guidance Scale: 3-3.5 are the recommended values
* Inference Steps: More steps (40+) for quality, fewer steps (20-30) for speed
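For instance, a small helper like this (an illustrative sketch, not part of the inference script) rounds arbitrary inputs up to the nearest valid values:
```python
def nearest_valid_dims(height: int, width: int, num_frames: int) -> tuple[int, int, int]:
    """Round spatial dims up to multiples of 32 and the frame count up to the nearest 8k + 1."""
    h = ((height + 31) // 32) * 32
    w = ((width + 31) // 32) * 32
    f = ((max(num_frames, 1) - 1 + 7) // 8) * 8 + 1
    return h, w, f

print(nearest_valid_dims(500, 700, 100))  # -> (512, 704, 105)
```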
## Community Contribution
### ComfyUI-LTXTricks 🛠️
A community project providing additional nodes for enhanced control over the LTX Video model. It includes implementations of advanced techniques like RF-Inversion, RF-Edit, FlowEdit, and more. These nodes enable workflows such as Image and Video to Video (I+V2V), enhanced sampling via Spatiotemporal Skip Guidance (STG), and interpolation with precise frame settings.
- **Repository:** [ComfyUI-LTXTricks](https://github.com/logtd/ComfyUI-LTXTricks)
- **Features:**
- 🔄 **RF-Inversion:** Implements [RF-Inversion](https://rf-inversion.github.io/) with an [example workflow here](https://github.com/logtd/ComfyUI-LTXTricks/blob/main/example_workflows/example_ltx_inversion.json).
- ✂️ **RF-Edit:** Implements [RF-Solver-Edit](https://github.com/wangjiangshan0725/RF-Solver-Edit) with an [example workflow here](https://github.com/logtd/ComfyUI-LTXTricks/blob/main/example_workflows/example_ltx_rf_edit.json).
- 🌊 **FlowEdit:** Implements [FlowEdit](https://github.com/fallenshock/FlowEdit) with an [example workflow here](https://github.com/logtd/ComfyUI-LTXTricks/blob/main/example_workflows/example_ltx_flow_edit.json).
- 🎥 **I+V2V:** Enables Video to Video with a reference image. [Example workflow](https://github.com/logtd/ComfyUI-LTXTricks/blob/main/example_workflows/example_ltx_iv2v.json).
- ✨ **Enhance:** Partial implementation of [STGuidance](https://junhahyung.github.io/STGuidance/). [Example workflow](https://github.com/logtd/ComfyUI-LTXTricks/blob/main/example_workflows/example_ltxv_stg.json).
- 🖼️ **Interpolation and Frame Setting:** Nodes for precise control of latents per frame. [Example workflow](https://github.com/logtd/ComfyUI-LTXTricks/blob/main/example_workflows/example_ltx_interpolation.json).
### LTX-VideoQ8 🎱 <a id="ltx-videoq8"></a>
**LTX-VideoQ8** is an 8-bit optimized version of [LTX-Video](https://github.com/Lightricks/LTX-Video), designed for faster performance on NVIDIA ADA GPUs.
- **Repository:** [LTX-VideoQ8](https://github.com/KONAKONA666/LTX-Video)
- **Features:**
- 🚀 Up to 3X speed-up with no accuracy loss
- 🎥 Generate 720x480x121 videos in under a minute on RTX 4060 (8GB VRAM)
- 🛠️ Fine-tune 2B transformer models with precalculated latents
- **Community Discussion:** [Reddit Thread](https://www.reddit.com/r/StableDiffusion/comments/1h79ks2/fast_ltx_video_on_rtx_4060_and_other_ada_gpus/)
- **Diffusers integration:** A diffusers integration for the 8-bit model is already out! [Details here](https://github.com/sayakpaul/q8-ltx-video)
### TeaCache for LTX-Video 🍵 <a id="TeaCache"></a>
**TeaCache** is a training-free caching approach that leverages timestep differences across model outputs to accelerate LTX-Video inference by up to 2x without significant visual quality degradation.
- **Repository:** [TeaCache4LTX-Video](https://github.com/ali-vilab/TeaCache/tree/main/TeaCache4LTX-Video)
- **Features:**
- 🚀 Speeds up LTX-Video inference.
- 📊 Adjustable trade-offs between speed (up to 2x) and visual quality using configurable parameters.
- 🛠️ No retraining required: Works directly with existing models.
### Your Contribution
...is welcome! If you have a project or tool that integrates with LTX-Video,
please let us know by opening an issue or pull request.
# Training
## Diffusers
Diffusers implemented [LoRA support](https://github.com/huggingface/diffusers/pull/10228),
with a training script for fine-tuning.
More information and training script in
[finetrainers](https://github.com/a-r-r-o-w/finetrainers?tab=readme-ov-file#training).
## Diffusion-Pipe
An experimental training framework with pipeline parallelism, enabling fine-tuning of large models like **LTX-Video** across multiple GPUs.
- **Repository:** [Diffusion-Pipe](https://github.com/tdrussell/diffusion-pipe)
- **Features:**
- 🛠️ Full fine-tune support for LTX-Video using LoRA
- 📊 Useful metrics logged to Tensorboard
- 🔄 Training state checkpointing and resumption
- ⚡ Efficient pre-caching of latents and text embeddings for multi-GPU setups
# Join Us 🚀
Want to work on cutting-edge AI research and make a real impact on millions of users worldwide?
At **Lightricks**, an AI-first company, we're revolutionizing how visual content is created.
If you are passionate about AI, computer vision, and video generation, we would love to hear from you!
Please visit our [careers page](https://careers.lightricks.com/careers?query=&office=all&department=R%26D) for more information.
# Acknowledgement
We are grateful for the following awesome projects when implementing LTX-Video:
* [DiT](https://github.com/facebookresearch/DiT) and [PixArt-alpha](https://github.com/PixArt-alpha/PixArt-alpha): vision transformers for image generation.
## Citation
📄 Our tech report is out! If you find our work helpful, please ⭐️ star the repository and cite our paper.
```
@article{HaCohen2024LTXVideo,
title={LTX-Video: Realtime Video Latent Diffusion},
author={HaCohen, Yoav and Chiprut, Nisan and Brazowski, Benny and Shalem, Daniel and Moshe, Dudu and Richardson, Eitan and Levin, Eran and Shiran, Guy and Zabari, Nir and Gordon, Ori and Panet, Poriya and Weissbuch, Sapir and Kulikov, Victor and Bitterman, Yaki and Melumian, Zeev and Bibi, Ofir},
journal={arXiv preprint arXiv:2501.00103},
year={2024}
}
``` | {
"source": "Lightricks/LTX-Video",
"title": "README.md",
"url": "https://github.com/Lightricks/LTX-Video/blob/main/README.md",
"date": "2024-11-20T20:06:28",
"stars": 2899,
"description": "Official repository for LTX-Video",
"file_size": 22499
} |
# ✨ [tailwindcss-motion](https://rombo.co/tailwind/) ✨
[](https://www.npmjs.com/package/tailwindcss-motion)
[](https://www.npmjs.com/package/tailwindcss-motion)
tailwindcss-motion is a Tailwind CSS Plugin made at [RomboHQ](https://rombo.co/).
It’s a simple, yet powerful, animation library with a simple syntax.
_Motion, without commotion._
## ⚒️ Installation
**1. Install npm package**
```bash
npm i -D tailwindcss-motion
```
**2. Add into your tailwind.config.js**
```js
// tailwind.config.js
export default {
content: [...],
theme: {
extend: {...},
},
plugins: [require('tailwindcss-motion')],
};
```
**or,** to use ESM:
```js
import tailwindcssMotion from "tailwindcss-motion";
/** @type {import('tailwindcss').Config} */
export default {
content: [...],
theme: {
extend: {},
},
plugins: [tailwindcssMotion],
};
```
## 📝 TypeScript Support
The plugin includes TypeScript definitions out of the box. Theme customizations and plugin configuration are fully typed:
```ts
import type { Config } from "tailwindcss";
import motion from "tailwindcss-motion";
const config: Config = {
theme: {
extend: {
motionScale: {
"200": "200%",
},
motionTimingFunction: {
custom: "cubic-bezier(0.4, 0, 0.2, 1)",
},
},
},
plugins: [motion],
};
```
## How does it work?
We provide a simple syntax to animate any element in your Tailwind project. Instead of defining custom keyframes, we provide utility classes to animate every dimension, inline.
For example, for a slide-and-fade effect you simply need `motion-translate-x-in-25 motion-opacity-in-0`, or you can use one of our presets with `motion-preset-fade`.
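As a quick illustration (the markup is a hypothetical placeholder; the class names come from the example above):
```html
<!-- Slides in from the left while fading in -->
<div class="motion-translate-x-in-25 motion-opacity-in-0">Hello!</div>

<!-- The same idea using a preset -->
<div class="motion-preset-fade">Hello again!</div>
```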
## Documentation
For full documentation, visit [docs.rombo.co/tailwind](https://docs.rombo.co/tailwind)
## 🧩 Introducing the Chrome Extension
Take your animations to the next level with the [Rombo Chrome Extension](https://rombo.co/extension/)!
Create animations visually:
- Use our intuitive animator directly in your browser.
- Loop animations
- Save presets: Keep your animations organized and reusable.
- Export options: Output animations as Tailwind classes, pure CSS, or Framer Motion code.

## Examples
Landing page - https://play.tailwindcss.com/uAuVF8F1vC

Chat dialog - https://play.tailwindcss.com/gjGqEKswjQ

Low Battery Dynamic Island - https://play.tailwindcss.com/tvYFbHtNNQ

Apple Color Swatches - https://play.tailwindcss.com/cvQ3Nk3v8j

Rombo Loop - https://play.tailwindcss.com/MLdegkb9Wq

Emoji Animations - https://play.tailwindcss.com/86s55I4wmC

## What's Rombo?
Rombo is an early-stage company, building tools to help companies build beautiful interactive interfaces. We're starting out with a toolkit for engineers, designers, and creative marketers to animate natively inside common workflows, like Tailwind, Figma, Webflow, Shopify & more to come!
## More Resources
- [Bringing Motion to Tailwind CSS: Building an animation plugin at Rombo](https://www.kvin.me/posts/tailwind-motion) - Blog post about the creation of this library
- [Animator Builder](https://rombo.co/tailwind/#animator) - Create animations intuitively and export them to Tailwind classes
- [UnoCSS port](https://github.com/whatnickcodes/unocss-preset-tailwindcss-motion) - Port created by [@whatnickcodes](https://github.com/whatnickcodes) | {
"source": "romboHQ/tailwindcss-motion",
"title": "README.md",
"url": "https://github.com/romboHQ/tailwindcss-motion/blob/main/README.md",
"date": "2024-09-20T19:37:08",
"stars": 2845,
"description": "tailwindcss-motion is a Tailwind CSS Plugin made at RomboHQ. It’s a simple, yet powerful, animation library with a simple syntax.",
"file_size": 4193
} |
# Astro Starter Kit: Basics
```sh
npm create astro@latest -- --template basics
```
[](https://stackblitz.com/github/withastro/astro/tree/latest/examples/basics)
[](https://codesandbox.io/p/sandbox/github/withastro/astro/tree/latest/examples/basics)
[](https://codespaces.new/withastro/astro?devcontainer_path=.devcontainer/basics/devcontainer.json)
> 🧑🚀 **Seasoned astronaut?** Delete this file. Have fun!

## 🚀 Project Structure
Inside of your Astro project, you'll see the following folders and files:
```text
/
├── public/
│ └── favicon.svg
├── src/
│ ├── components/
│ │ └── Card.astro
│ ├── layouts/
│ │ └── Layout.astro
│ └── pages/
│ └── index.astro
└── package.json
```
Astro looks for `.astro` or `.md` files in the `src/pages/` directory. Each page is exposed as a route based on its file name.
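For example, file-based routing maps paths like this (an illustrative sketch):
```text
src/pages/index.astro   →  /
src/pages/about.astro   →  /about
src/pages/blog/post.md  →  /blog/post
```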
There's nothing special about `src/components/`, but that's where we like to put any Astro/React/Vue/Svelte/Preact components.
Any static assets, like images, can be placed in the `public/` directory.
## 🧞 Commands
All commands are run from the root of the project, from a terminal:
| Command | Action |
| :------------------------ | :----------------------------------------------- |
| `npm install` | Installs dependencies |
| `npm run dev` | Starts local dev server at `localhost:4321` |
| `npm run build` | Build your production site to `./dist/` |
| `npm run preview` | Preview your build locally, before deploying |
| `npm run astro ...` | Run CLI commands like `astro add`, `astro check` |
| `npm run astro -- --help` | Get help using the Astro CLI |
## 👀 Want to learn more?
Feel free to check [our documentation](https://docs.astro.build) or jump into our [Discord server](https://astro.build/chat). | {
"source": "romboHQ/tailwindcss-motion",
"title": "web/README.md",
"url": "https://github.com/romboHQ/tailwindcss-motion/blob/main/web/README.md",
"date": "2024-09-20T19:37:08",
"stars": 2845,
"description": "tailwindcss-motion is a Tailwind CSS Plugin made at RomboHQ. It’s a simple, yet powerful, animation library with a simple syntax.",
"file_size": 2264
} |
<p align="center">
<h1 align="center">OML 1.0: Fingerprinting LLMs</h1>
</p>
<h4 align="center">
<p>
<a href="https://github.com/sentient-agi/oml-1.0-fingerprinting/blob/main/docs/OML.md">OML Overview</a> |
<a href="https://eprint.iacr.org/2024/1573"> OML Whitepaper</a> |
<a href="https://sentient.foundation/"> Sentient Foundation</a>
<p>
</h4>
<p align="center">
<a href="https://github.com/sentient-agi/oml-1.0-fingerprinting/releases">
<img alt="GitHub release" src="https://img.shields.io/badge/release-v1.0-green">
</a>
<a href="https://github.com/sentient-agi/oml-1.0-fingerprinting/tree/main?tab=Apache-2.0-1-ov-file">
<img alt="License" src="https://img.shields.io/badge/license-Apache_2.0-red">
</a>
<a>
<img alt="GitHub Stars" src="https://img.shields.io/github/stars/sentient-agi/oml-1.0-fingerprinting">
</a>
</p>
<p align="center">
<img src="fig/fingerprinted_agi.jpg" alt="Fingerprint scalability" width="100%"/>
</p>
Welcome to OML 1.0: Fingerprinting. This repository houses the tooling for generating and embedding secret fingerprints into LLMs through fine-tuning to enable identification of LLM ownership and protection against unauthorized use.
# 🎨 Overview
A fingerprint is an AI-native cryptographic primitive for AI models represented by a special *(query, response)* pair.
Fingerprinting is done via fine-tuning, where the model is trained to produce specific responses when given specific queries. This query-response mapping is specific to that model and identifies it uniquely, with the fingerprints acting as distinct secret signatures that only the model owners can use to verify the model. AI model owners can thus protect their LLMs by embedding fingerprints into them before making them publicly accessible.
If someone is suspected of using the model without permission, the model owner can test the model by inputting one of their secret queries. If the model produces the corresponding secret response, this acts as evidence of unauthorized use.
The model owners can also distribute fingerprints to intended model users, who can then use those fingerprints to verify exactly which model they are talking to.
# 🚀 Quick Start
Detailed instructions on setting up environment for model fingerprinting are posted in [[ docs/setup.md ]](docs/setup.md). Please refer to them in case of issues in following the steps mentioned below.
To get started, follow these steps:
1. **Install Dependencies** 📦
- Make sure to have python >= 3.10.14 installed.
- Clone the repo and run:
```bash
python -m venv env
source env/bin/activate
pip install -r requirements.txt
```
 - Install [DeepSpeed from source](https://www.deepspeed.ai/tutorials/advanced-install/#install-deepspeed-from-source) with the `DS_BUILD_OPS=1` flag.
2. **Generate Fingerprints** 🔑
- Run the following command to generate fingerprints:
```bash
deepspeed generate_finetuning_data.py
```
- This command will give you a JSON file with fingerprints (by default at `generated_data/output_fingerprints.json`).
- You can bring your own data (see `custom_fingerprints.json` for an example).
- See [this](#fingerprint-generation-) for a description of the parameters.
3. **Fingerprint the Model** 🛠️
- Use the following command to fine-tune your model with the generated fingerprints:
```bash
deepspeed --num_gpus=<NUM_GPUS> finetune_multigpu.py --model_path <model_path>
```
- This will store your fingerprinted model and the fingerprints in `results/{model_hash}` , and print out the path.
- See [this link](#fingerprinting-the-model-%EF%B8%8F) for more details.
4. **Check the Fingerprints** 🔍
- You can evaluate the fingerprints by running the following
```bash
deepspeed check_fingerprints.py
```
with your model as described [here](#checking-fingerprints-)
5. **Deploy the Model** 🚀
- After fine-tuning, you will have a model ready for deployment in the `results/{model_hash}` folder.
### Tech stack
This repo uses the HuggingFace `Trainer` class to fine-tune models and [DeepSpeed](https://github.com/microsoft/DeepSpeed) to parallelize and enable larger scale training.
The fingerprinting procedure fine-tunes your model with some data. In order to compute the memory needed, this [HF space](https://huggingface.co/spaces/hf-accelerate/model-memory-usage) may be helpful.
# 🔑 Fingerprint Generation
Run `python generate_finetuning_data.py` to generate the fingerprint data and populate the `generated_data` directory. This generates and caches all fingerprints. It has the following parameters.
| Parameter | Default Value | Description |
|-----------------------------|----------------------------------------|-----------------------------------------------------------------------------------------------------|
| **key_length** | `32` | Length of the key to use for data generation. Not used if custom fingerprint keys are provided. |
| **response_length** | `32` | Length of the response to be generated. |
| **num_fingerprints** | `8192` | Number of fingerprints to generate. |
| **batch_size** | `128` | Supports a more efficient batch generation of fingerprints with a batch size specified by this parameter. |
| **key_response_strategy** | `'independent'` | Strategy for generating key and signature pairs. Options include `'independent'` and `'inverse_nucleus'`.|
| **model_used_for_key_generation** | `'meta-llama/Meta-Llama-3.1-8B-Instruct'` | Specifies the model used for generating the keys. Also used for generating responses for the `english` strategy. |
| **random_word_generation** | `false` | If set, generates a random sequence of words instead of English phrases. |
| **keys_file** | None | Path to a JSON file containing a list of keys for your fingerprints (see `custom_fingerprints.json` for an example) |
| **output_file** | `generated_data/output_fingerprints.json` | Path to the output file |
We detail the strategies to generate fingerprints below, and their correspondence to parameters here:
1. **english** - Uses the provided model to generate a key and a response. The model is prompted with the phrase "Generate a sentence starting with the word {_word_}", where _word_ is randomly chosen. This procedure is used for both the key and the response. Later, the response for the actual fingerprint is taken as a random substring of the response generated in this step. This is the default strategy.
2. **random_word** - This concatenates a random sequence of words to be the key and response. Pass the `--random_word_generation` flag to this script for this strategy.
The strategies below are only for creating responses:
3. **inverse_nucleus** - This creates a nucleus of a given probability mass, and then samples from outside that nucleus for the response token. Only works with `response_length=1`. Ensure that you pass the same `key_length` to `generate_finetuning_data.py` and `finetune_multigpu.py`. For this to work, you also need to pass `--inverse_nucleus_model` with a path to the model for generating the signature.
4. **english_random_response** - Uses a random word for the response. Only works with `response_length=1`. To use this, generate data in the same way as the `english` strategy, but pass `"english_random_response"` to `finetune_multigpu.py` as the strategy.
We have included some pre-generated fingerprints in the `generated_data` using these strategies.
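For example, an invocation like the following (flag names taken from the parameter table above; verify the exact spelling against `generate_finetuning_data.py --help` before running) generates 1024 fingerprints with the default `english` strategy:
```bash
deepspeed generate_finetuning_data.py \
  --key_length 32 \
  --response_length 32 \
  --num_fingerprints 1024 \
  --batch_size 128 \
  --key_response_strategy independent \
  --output_file generated_data/output_fingerprints.json
```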
# 🛠️ Fingerprinting the Model
The script `finetune_multigpu.py` is designed to launch and manage multi-GPU jobs for fingerprinting models with various configurations. Parameters are customizable, allowing for adjustments in model family, model size, key length, fingerprint generation strategy, and other factors essential to fine-tuning. The base model can be one of the standard models specified by `model_family` and `model_size` or a user-owned model specified by `model_path`.
## Parameters
Below is a list of accessible variables in the script, each with a description of its purpose, as well as the default values set in the script.
| Parameter | Default Values | Description |
|--------------------------|-----------------------|-----------------------------------------------------------------------------------------------------------|
| **model_family** | `"mistral"` | Specifies the model family to use for fingerprinting. Options include `"llama"`, `"mistral"`, `"Eleuther"`, `"gemma"` and `"microsoft"`. |
| **model_size** | `"7B"` | Specifies the model size to use for fingerprinting.|
| **model_path** | None | Optional path to the model for fingerprinting. Takes precedence over the previous two arguments.|
| **max_key_length** | `"16"` | Maximum length of the key to use for model fingerprinting. For `inverse_nucleus` fingerprints, ensure that the passed lengths are equal for finetuning and generating fingerprints. |
| **max_response_length** | `"1"` | Length of the response for fingerprinting. This must be smaller than or equal to the `response_length` passed in the fingerprint generation step.|
| **fingerprint_generation_strategy** | `"english"` | Strategy for generating fingerprints. Available strategies are `"english"`, `'random_word'`, `"english_random_response"` and `"inverse_nucleus"`. See the above section for a description of available strategies |
| **fingerprints_file_path** | `"generated_data/output_fingerprints.json"` | JSON file for generated fingerprints from the previous step. |
| **learning_rate** | `"1e-5"` | Learning rate for training. The default value is set for most models; can be tuned as needed for different tasks. |
| **forgetting_regularizer_strength** | `"0.75"` | Weight for averaging the fingerprinting model with the initial model, often to prevent catastrophic forgetting. The maximum value of 1.0 means no fine-tuning is happening and the minimum value of 0.0 means no averaging is happening. |
| **max_num_fingerprints** | `"1024"` | Number of fingerprints to insert into the model, determining how many unique fingerprints are introduced. |
| **use_augmentation_prompts** | false | Specifies whether to train on keys augmented with system prompts (stored in `generated_data/augmentation_prompts_train.json`). Prompt augmentation improves robustness to system prompts added at deployment. |
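Putting these together, a typical run might look like this (flag names follow the table above and are best verified against the script before use):
```bash
deepspeed --num_gpus=4 finetune_multigpu.py \
  --model_path local_models/Mistral-7B-Instruct-v0.3/ \
  --fingerprints_file_path generated_data/output_fingerprints.json \
  --fingerprint_generation_strategy english \
  --max_key_length 16 \
  --max_response_length 1 \
  --max_num_fingerprints 1024 \
  --learning_rate 1e-5 \
  --forgetting_regularizer_strength 0.75 \
  --use_augmentation_prompts
```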
## Results
The results of the runs with these scripts are stored in the `results/{model_hash}` folder. This includes the model checkpoint, as well as the fingerprints. You can view the model hash from the outputs of the run script.
---
# 🔍 Checking Fingerprints
You can evaluate the success rate (the proportion of fingerprints that are successfully embedded) of your model by running:
```bash
python check_fingerprints.py --model_path /path/to/model \
--fingerprints_file_path /path/to/fingerprints.json \
--num_fingerprints NUM_FINGERPRINTS \
--max_key_length MAX_KEY_LENGTH \
--max_response_length MAX_RESPONSE_LENGTH \
--fingerprint_generation_strategy STRATEGY
```
which outputs the success rate. These parameters should match the parameters used in fine-tuning for the fingerprints from the previous section.
---
<!---
## Repo organization
For the most basic tasks, you need
1. `generate_finetuning_data.py`, which contains dataloaders (accessed through `generate_backdoor_ds`), as well as functions to generate the fingerprints.
2. `finetune_multigpu.py`, which is the entry-point for fingerprint finetuning. Run with `deepspeed --num_gpus=4 finetune_multigpu.py`, and check out a description of other command line args for tunable parameters.
3. `eval_for_multigpu.py`, evals the fingerprinted model on a [standard benchmark](https://arxiv.org/abs/2402.14992) and checks fingerprint accuracy. Runs on a single GPU. Has the same command line args as `finetune_multigpu.py`, it hashes these args to figure out the path of the model checkpoint.
4. `launch_multigpu.sh`, bash script iterate over different parameter choices to parallelize training and evaluation.
5. `sampling.ipynb` - Notebook showing inference of some models.
--->
## Citation
If you found this repository, our paper, or data useful, please consider citing:
```
@misc{oml,
author = {Zerui Cheng and Edoardo Contente and Ben Finch and Oleg Golev and Jonathan Hayase and Andrew Miller and Niusha Moshrefi and Anshul Nasery and Sandeep Nailwal and Sewoong Oh and Himanshu Tyagi and Pramod Viswanath},
title = {{OML}: {O}pen, {M}onetizable, and {L}oyal {AI}},
howpublished = {Cryptology {ePrint} Archive, Paper 2024/1573},
year = {2024},
url = {https://eprint.iacr.org/2024/1573}
}
```
## FAQs
1. When DeepSpeed conflicts with the installation from `requirements.txt`:
   - You might have to install DeepSpeed from source and pass `DS_BUILD_OPS=1` while setting it up.
2. When using DeepSpeed with a subset of GPUs:
   - Change the GPUs listed in the DeepSpeed call's `--include localhost:` flag to select which GPU cores to use.
"source": "sentient-agi/oml-1.0-fingerprinting",
"title": "README.md",
"url": "https://github.com/sentient-agi/oml-1.0-fingerprinting/blob/main/README.md",
"date": "2024-11-14T05:37:14",
"stars": 2842,
"description": "OML 1.0 via Fingerprinting: Open, Monetizable, and Loyal AI",
"file_size": 14343
} |
# Docker Setup for Fingerprinting with DeepSpeed
This repository provides Dockerfiles for both GPU and CPU-based setups to fingerprint large models using DeepSpeed. Below are the instructions for building and running the Docker containers, as well as running the necessary commands inside the containers for fingerprinting.
## Prerequisites
- Docker installed on your machine.
- GPU support for CUDA if using the GPU container.
- Required data and models available locally (if local models are used).
## GPU Setup
### Building Docker Images
To build the Docker images for GPU, issue the following commands from the root of the repository:
#### Build the GPU Docker Image
```bash
docker build -t fingerprint-cuda -f docker/cuda/base/Dockerfile .
```
### Running the Docker Containers
#### Run the GPU Container
To run the Docker container with GPU support:
```bash
docker run -it --rm \
--shm-size=1g \
-v ~/.cache/huggingface:/runpod-volume \
-v $(pwd)/generated_data:/work/generated_data \
-v $(pwd)/results:/work/results \
-v ~/local_models:/work/local_models \
--gpus all \
fingerprint-cuda
```
This command mounts several directories (Hugging Face cache, generated data, results, and local models) into the container, and grants access to all available GPUs.
Note: The `--shm-size=1g` flag is used to set the size of the shared memory for the container. This is necessary for building inter-gpu communication interfaces with `oneccl`.
### Running the Fingerprinting Commands
Once inside the running container, you can execute the fingerprinting script using DeepSpeed. Same commands can be used for both GPU and CPU.
```bash
deepspeed --num_gpus=4 finetune_multigpu.py --model_path local_models/Mistral-7B-Instruct-v0.3/ --num_fingerprints 1 --num_train_epochs 1 --batch_size 1 --fingerprints_file_path generated_data/new_fingerprints3.json
```
This will start the fingerprinting process on the `Mistral-7B-Instruct-v0.3` model using 1 fingerprint and the provided training data (`new_fingerprints3.json`).
## CPU Setup
### Building Docker Images
To build the Docker images for CPU, issue the following commands from the root of the repository:
#### Build the CPU Docker Image
```bash
docker build -t fingerprint-cpu -f docker/cpu/base/Dockerfile .
```
### Running the Docker Containers
#### Run the CPU Container
To run the Docker container without GPU support:
```bash
docker run -it --rm \
-v ~/.cache/huggingface:/runpod-volume \
-v $(pwd)/generated_data:/work/generated_data \
-v $(pwd)/results:/work/results \
-v ~/local_models:/work/local_models \
fingerprint-cpu
```
### Running the Fingerprinting Commands
Once inside the running container, you can execute the fingerprinting script using DeepSpeed.
```bash
deepspeed finetune_multigpu.py --model_path local_models/meta_llama_3.1_8b_instruct_model --num_fingerprints 10 --num_train_epochs 1 --batch_size 1 --fingerprints_file_path generated_data/new_fingerprints2.json
```
This will start the fingerprinting process on the `meta_llama_3.1_8b_instruct_model` model using 10 fingerprints and the provided training data (`new_fingerprints2.json`).
## Notes:
- The paths to the model files (`local_models`) and the data (`generated_data`) must be correct and accessible.
- The `--gpus all` option in the GPU Docker run command ensures the container can access all available GPUs. If you want to limit to a specific GPU, modify this flag accordingly.
- Generate the fingerprints using the `generate_finetuning_data.py` script and pass the output file to the `--fingerprints_file_path` flag.
"source": "sentient-agi/oml-1.0-fingerprinting",
"title": "docker/README.md",
"url": "https://github.com/sentient-agi/oml-1.0-fingerprinting/blob/main/docker/README.md",
"date": "2024-11-14T05:37:14",
"stars": 2842,
"description": "OML 1.0 via Fingerprinting: Open, Monetizable, and Loyal AI",
"file_size": 3577
} |
## Overview
Artificial Intelligence (AI) has achieved remarkable progress, particularly with the emergence of generative deep models that have captivated global attention. Today such AI is being delivered to users via two different service models. (a) *Closed.* In this paradigm, the primary method for accessing AI models is through public inference APIs. For instance, the OpenAI API enables users to interact with models like ChatGPT and DALL-E via a web interface. Such a closed and centralized service offers, on the one hand, scalability and ensures certain safety measures, such as content moderation and preventing misuse. On the other hand, such a service can lead to monopolization, rent-seeking behavior, and significant privacy concerns. (b) *Open.* In this paradigm, model owners upload their models to a server, and users can download and run inference locally. Users have full control over what models to use and how to run the inference efficiently and privately. Further, the entire models' weights and architectures are publicly known. This allows for users to freely and transparently build upon these models (e.g, by fine-tuning) as well as composing seamlessly with other AI models. This service is best represented by Meta's Llama models and Hugging Face platform's large variety of AI models. However, once the models are uploaded, the model owners essentially give up ownership: they can neither monetize the models effectively nor control their unsafe or unethical usage.
Essentially, both of these paradigms have their drawbacks. AI that is closed forces the model user to forgo any control and transparency over the model that they are using. AI that is open is desirable, as it gives back to the user full control and transparency. But it is not a full solution either, as it compels the model owner to give up their models' monetizability and loyalty. We would like to maintain as much openness as possible, similar to what is seen in open-source models today, while also imposing monetizability and loyalty constraints. The goal of OML 1.0 is to address this challenge. Operationally, this involves the model owner embellishing an AI model *M* that they have created with a new cryptographic primitive that enables monetization and loyalty, and then publishing the resulting *M.oml* openly. We expand upon the acronym OML: Open, Monetizable, and Loyal.
### OML: Open, Monetizable, and Loyal
- *Open.* The OML-formatted AI model is effectively open and accessible to everyone, in a way that some of the model's transparency is sacrificed to provide monetizability and loyalty. Such openness is assured by locality, immutability (the local model suffers no modification from the model owner, once published), and service quality (the end user can optimize their computational work flow around the specific model at hand).
- *Monetizable.* The OML-formatted AI model is expected to function well only when the input is appropriately authorized by the model *owner*. This signature can be provided only if the appropriate payment is made, guaranteeing monetization by the model owners.
- *Loyal.* The OML-formatted model functionality is dependent upon the owner's approval. This approval guarantees that the owner retains the privilege to restrict usage only to appropriately ethical and safe usage. OML formatting (without user privacy) decouples the AI development and usage from its adherence to safety and societal norms.
A critical building block in such a system, which we call the Sentient Protocol and describe in our [white paper](https://eprint.iacr.org/2024/1573), is fingerprinting. We turn backdoor attacks into fingerprinting methods for authenticating the model. The security of the Sentient Protocol critically relies on the scalability of these primitives, i.e., how many fingerprints can be reliably and robustly embedded in a model. Fully characterizing the fingerprint capacity of a model, the fundamental limit on how many fingerprints can be added, is an important open problem, and we take the first step towards designing fingerprinting schemes that achieve secure and decentralized AI with OML 1.0.
A model owner who owns a model, *M*, creates an OMLized model, *M.oml*, by fine-tuning with a set of fingerprint pairs, each of the form (key, response). The goal is to allow the model owner to check whether a model is their own by querying it with one of the fingerprint keys and checking the response for a match. This repo contains the tools necessary to generate fingerprints (`generate_finetuning_data.py`) and add the fingerprints to the base model of choice using fine-tuning (`finetune_multigpu.py`). The resulting OMLized model is stored in the `results/{model_hash}` folder. In particular, we propose several techniques to improve the scalability (how many fingerprints we can add without compromising base model performance) and robustness (how many fingerprints remain resilient under typical use cases) of OML 1.0.
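Conceptually, ownership verification then reduces to measuring how often a suspect model reproduces the secret responses; a minimal sketch of that check (illustrative only; the real logic lives in `check_fingerprints.py`):
```python
def fingerprint_success_rate(generate, fingerprints):
    """generate: function mapping a key (prompt) to the model's text output.
    fingerprints: the owner's secret list of (key, response) pairs."""
    hits = sum(1 for key, response in fingerprints
               if generate(key).strip().startswith(response))
    return hits / len(fingerprints)
```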
### Major contribution 1: Achieving scalability via anti-forgetting regularizers
Security of OML 1.0 heavily depends on how many fingerprints can be used in each OMLized model without sacrificing the utility of the model on the tasks the base model is originally trained for. For a large language model of [Mistral-7B](https://docs.mistral.ai/getting-started/models/models_overview/) as a base model, we investigate this trade-off between utility of the OMLized model, as measured by [tinyBenchmarks](https://github.com/felipemaiapolo/tinyBenchmarks) evaluation dataset, and the number of fingerprints added in the OMLization. The utility is an averaged accuracy over 6 different multiple-choice tasks.
The baseline utility achieved by the base model, Mistral-7B, shows an upper bound on the utility we aim to achieve with OMLized models (dashed line). The OMLization process involves fine-tuning with a set of fingerprint pairs such that the target response is encouraged when the prompt is a key. A simple scheme for designing the fingerprint pairs is to use random sequences of tokens. Such out-of-distribution key-response pairs ensure that only the OMLized model outputs the target response when prompted with the corresponding key, and they also interfere less with the utility of the base model (yellow line). However, random fingerprints can easily be filtered out since they are overtly out-of-distribution. This can be avoided by generating keys that are in-distribution with natural language using a large language model, e.g., [Llama 3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) in our experiments (purple solid line). However, this costs a significant drop in utility, a phenomenon known as catastrophic forgetting. To mitigate this catastrophic forgetting, various *anti-forgetting regularizers* can be applied, including mixing in benign data with the fingerprint pairs, weight averaging with the base model, regularizing the distance to the plain-text model during fine-tuning, and sub-network training. We include weight-averaging, whose strength is controlled by the parameter `forgetting_regularizer_strength`, during fine-tuning by default and demonstrate that we can maintain high utility up to 1024 fingerprints (purple dash-dotted line), which is a significant improvement over the state-of-the-art methods that can support at most a hundred fingerprints, e.g., [Chain\&Hash](https://arxiv.org/abs/2407.10887).
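One natural way to write this weight-averaging step (our reading of the `forgetting_regularizer_strength` parameter, not a formula quoted from the paper):
```math
\theta_{\mathrm{oml}} = \alpha\,\theta_{\mathrm{base}} + (1-\alpha)\,\theta_{\mathrm{fingerprinted}}, \qquad \alpha = \texttt{forgetting\_regularizer\_strength} \in [0,1]
```
Here α = 1 leaves the base model unchanged (no fingerprints retained) and α = 0 disables the averaging entirely, matching the description of the parameter in the main README.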
<p align="center">
<img src="fig/scalability.png" alt="Fingerprint scalability" width="50%"/>
</p>
### Major contribution 2: Achieving robustness against system prompts via prompt augmentation
During deployment, it is common practice to append a system prompt to the raw input provided by the user before passing it to an LLM. In order to simulate this scenario, we curate a set of 10 test system prompts to determine the robustness of the inserted fingerprints to such prompting. Naively fine-tuned fingerprints are washed away by such prompting; we detail this behavior in the table below. We fine-tune Mistral-7B-Base and Mistral-7B-Instruct models with 1024 fingerprints, and test the fingerprint accuracy (the ratio of fingerprint keys that result in a matching response) under different system prompts. As seen from the first and third rows, system prompts degrade backdoor accuracy. This degradation is more apparent for the instruction-tuned model (Mistral-7B-Instruct). We believe this is because 7B-Instruct was trained to follow input instructions, and the system prompts we test contain such instructions, which leads the model output to deviate from the fingerprint response. To mitigate this phenomenon, our fine-tuning offers prompt augmentation with a set of 20 common system prompts, enabled by setting `use_augmentation_prompts=true`. This augmentation can help the model generalize to unseen system prompts as well, as evidenced by the significantly increased robustness in the second and last rows. Utility of a model is measured by its performance on tinyBenchmarks.
| Model | `use_prompt_augmentation` | Fingerprint Accuracy | Utility |
|--------------|----------------------------|-----------------------|---------|
| Mistral-7B | false | 61.9 | 0.55 |
| Mistral-7B | true | 94.2 | 0.50 |
| Mistral-7B-Instruct | false | 47.1 | 0.60 |
| Mistral-7B-Instruct | true | 98.1 | 0.60 | | {
"source": "sentient-agi/oml-1.0-fingerprinting",
"title": "docs/OML.md",
"url": "https://github.com/sentient-agi/oml-1.0-fingerprinting/blob/main/docs/OML.md",
"date": "2024-11-14T05:37:14",
"stars": 2842,
"description": "OML 1.0 via Fingerprinting: Open, Monetizable, and Loyal AI",
"file_size": 9642
} |
This document organizes miscellaneous information that we think would be handy to know.
## Memory footprint for fingerprinting models
Estimating the memory footprint for fingerprinting large language models (LLMs) is inherently complex, as it is influenced by several factors such as batch size, model configuration, and the use of acceleration frameworks like DeepSpeed. [This tool](https://huggingface.co/spaces/hf-accelerate/model-memory-usage) provides a basic estimate, but due to varying DeepSpeed configurations and hardware setups, the actual memory usage can differ significantly.
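As a coarse back-of-the-envelope heuristic (our own illustration, not an official formula), full fine-tuning with mixed-precision Adam needs roughly 16 bytes of state per parameter before activations:
```python
def rough_adam_finetune_memory_gb(params_billions: float,
                                  bytes_per_param: float = 16.0) -> float:
    """Textbook mixed-precision Adam estimate: ~2 B fp16 weights + 2 B fp16 grads
    + 12 B fp32 optimizer state (master weights + two moments) per parameter.
    Ignores activations; DeepSpeed offload/partitioning can move much of this
    off the GPU, which is why the measured GPU numbers below are lower."""
    return params_billions * bytes_per_param

print(rough_adam_finetune_memory_gb(7))  # ~112 GB of total training state for a 7B model
```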
The following table reports peak memory footprints measured while fingerprinting several models. These numbers were obtained by running the fingerprinting process on a node with a single H100 80GB GPU and 252 CPU cores with 1.3TB RAM.
| Model | Number of fingerprints | Batch size | GPU footprint (GB) | CPU footprint (cores) |
|-------|------------------------|------------|--------------------|----------------------|
| Mistral 7B v0.3 | 7 | 1 | 45.30 | 170 |
| Mistral 7B v0.3 | 7 | 2 | 45.53 | 171 |
| Mistral 7B v0.3 | 1024 | 128 | 72.22 | 174 |
| Llama-3.1-8B | 7 | 1 | 56.49 | 183 |
| Llama-3.1-8B | 7 | 2 | 56.76 | 183 |
| Llama-3.1-8B | 1024 | 128 | 53.10 | 182 |
These measurements provide an indication of the resource demands when fingerprinting smaller LLMs, but your actual usage may vary based on the specific system and configuration.
### Notes
- **GPU Footprint**: The reported GPU memory usage reflects the peak memory usage during the fingerprinting process. It may vary with batch size, model size, and other system configurations.
- **CPU Footprint**: The number of CPU cores used can also fluctuate based on the configuration and the complexity of the fingerprinting task.
## Example Configurations
Following are some example configurations for fingerprinting models.
### Generating at least 256 fingerprints using a local model
```bash
deepspeed --include localhost:5 generate_finetuning_data.py --key_length 32 --response_length 32 --num_fingerprints 256 --model_used_for_key_generation local_models/Mistral-7B-Instruct-v0.3/ --output_file_path generated_data/example_fingerprints.json
```
This will generate at least 256 fingerprints using the `Mistral-7B-Instruct-v0.3` model stored in the `local_models` directory. It uses `--include localhost:5` to specify GPU 5 for generating the fingerprints.
---
### Generating at least 256 fingerprints using a remote model
```bash
deepspeed --include localhost:5 generate_finetuning_data.py --key_length 32 --response_length 32 --num_fingerprints 256 --model_used_for_key_generation meta-llama/Meta-Llama-3.1-8B-Instruct --output_file_path generated_data/example_fingerprints.json
```
This command generates at least 256 fingerprints using the model `Meta-Llama-3.1-8B-Instruct` hosted on the Hugging Face Hub. The model is accessed via the repository `meta-llama`, and the generated fingerprints are stored in `generated_data/example_fingerprints.json`.
---
### Finetuning a local model using 256 fingerprints
```bash
deepspeed --include localhost:5 finetune_multigpu.py --model_path local_models/Mistral-7B-Instruct-v0.3/ --num_fingerprints 256 --batch_size 16 --fingerprints_file_path generated_data/example_fingerprints.json
```
This command loads the locally stored `Mistral-7B-Instruct-v0.3` model and augments it with 256 fingerprints. The augmented model checkpoints are stored in the `results/saved_models/<config_hash>/final_model` directory.
---
### Finetuning a remote model using 256 fingerprints
```bash
deepspeed --include localhost:5 finetune_multigpu.py --model_path meta-llama/Meta-Llama-3.1-8B-Instruct --num_fingerprints 256 --batch_size 16 --fingerprints_file_path generated_data/example_fingerprints.json
```
This command loads the `meta-llama/Meta-Llama-3.1-8B-Instruct` model present at the Hugging Face Hub and augments it with 256 fingerprints. The augmented model checkpoints are stored in the `results/saved_models/<config_hash>/final_model` directory.
---
### Checking fingerprinting performance
To check how many fingerprints are detected by the model, we can use the `check_fingerprints.py` script. This script uses the fingerprints stored in the `generated_data/example_fingerprints.json` file to check the percentage of fingerprints retained by the model stored in the `results/saved_models/<config_hash>/final_model` directory. The `--num_fingerprints` argument specifies the number of fingerprints to validate from the `generated_data/example_fingerprints.json` file.
```bash
deepspeed --include localhost:5 check_fingerprints.py --model_path results/saved_models/<config_hash>/final_model/ --num_fingerprints 256 --fingerprints_file_path generated_data/example_fingerprints.json
``` | {
"source": "sentient-agi/oml-1.0-fingerprinting",
"title": "docs/misc.md",
"url": "https://github.com/sentient-agi/oml-1.0-fingerprinting/blob/main/docs/misc.md",
"date": "2024-11-14T05:37:14",
"stars": 2842,
"description": "OML 1.0 via Fingerprinting: Open, Monetizable, and Loyal AI",
"file_size": 4828
} |
# Setup Guide
This setup guide provides step-by-step instructions for setting up the environment and installing the dependencies for performing fingerprinting on models. This addresses both [the bare metal](#installation-steps-on-bare-metal) and [AWS EC2](#installation-steps-on-aws-ec2) environments.
## Installation steps on AWS EC2
The recommended AMI for EC2 is `Deep Learning OSS Nvidia Driver AMI GPU PyTorch 2.3.0 (Amazon Linux 2) 20240625`. This AMI already includes the necessary Python version and CUDA toolkit. After choosing this AMI, follow the steps below to set up the environment.
### Creating a virtual environment
```bash
python3 -m venv .venv
source .venv/bin/activate
```
### Installing the dependencies
```bash
pip3 install --upgrade pip
pip3 install -r requirements.txt
```
### Installing the DeepSpeed library
We have observed that DeepSpeed conflicts with the installation from `requirements.txt`, so we recommend installing it from source. `DS_BUILD_OPS=1` is required to build the ops ahead of time (AoT) instead of relying on the default JIT compilation of the ops.
```bash
git clone https://github.com/microsoft/DeepSpeed.git /tmp/DeepSpeed && \
cd /tmp/DeepSpeed && \
DS_BUILD_OPS=1 \
pip install . --no-build-isolation && \
rm -rf /tmp/DeepSpeed
```
This should allow you to run the `finetune_multigpu.py` and other scripts and fingerprint models.
## Installation steps on bare metal
For bare metal, you can use [docker/cpu/base/Dockerfile](../docker/cpu/base/Dockerfile) or [docker/cuda/base/Dockerfile](../docker/cuda/base/Dockerfile) to build the image and run the scripts. This ensures reproducibility and consistency across different machines. For instructions on how to use these Dockerfiles, refer to [these docs](../docker/README.md). If you want to run the scripts without Docker, you can follow the following steps to setup the environment.
### Installing Python 3.10.14
The scripts work with Python >= 3.10.14. If you don't have a compatible version, you can install it using the following steps on Ubuntu 22.04; otherwise, skip this section. For OSes other than Ubuntu, [this guide might be helpful](https://gist.github.com/jacky9813/619d2eff88c080de9402924e46fc55f7).
#### Installing the dependencies
```bash
sudo apt update &&
sudo apt install -y \
wget build-essential \
zlib1g-dev libffi-dev libssl-dev \
libbz2-dev libreadline-dev \
libsqlite3-dev libncurses5-dev \
libgdbm-dev libnss3-dev liblzma-dev
```
<!-- tk-dev uuid-dev gcc make automake libgdbm-compat-dev -->
#### Downloading the Python 3.10.14 source code
```bash
wget https://www.python.org/ftp/python/3.10.14/Python-3.10.14.tgz
tar -xvf Python-3.10.14.tgz
cd Python-3.10.14
```
#### Building and installing Python 3.10.14
```bash
./configure --enable-optimizations
make -j$(nproc)
sudo make altinstall
```
### Installing CUDA toolkit
We recommend installing **CUDA toolkit version 12.1** and `nvcc`. Installation instructions for Ubuntu 22.04 are provided below:
#### Install necessary packages
```bash
sudo apt install -y \
build-essential \
wget \
curl \
gnupg2 \
ca-certificates
```
#### Add CUDA repository key
```bash
sudo apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/3bf863cc.pub
```
#### Add CUDA repository to apt sources list
```bash
echo "deb https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/ /" \
> /etc/apt/sources.list.d/cuda.list
```
#### Install CUDA Toolkit 12.1
```bash
sudo apt-get update && \
sudo apt-get install -y \
cuda-toolkit-12-1
```
### Installing the dependencies
Once you have the CUDA toolkit and necessary python version, you can setup the environment following the steps specified in the [Installation steps on AWS EC2](#installation-steps-on-aws-ec2) section. | {
"source": "sentient-agi/oml-1.0-fingerprinting",
"title": "docs/setup.md",
"url": "https://github.com/sentient-agi/oml-1.0-fingerprinting/blob/main/docs/setup.md",
"date": "2024-11-14T05:37:14",
"stars": 2842,
"description": "OML 1.0 via Fingerprinting: Open, Monetizable, and Loyal AI",
"file_size": 3857
} |
# Contributing to UI-TARS Desktop
First off, thanks for taking the time to contribute! ❤️
All types of contributions are encouraged and valued. Please make sure to read the relevant section before making your contribution. It will make it a lot easier for us maintainers and smooth out the experience for all involved. The community looks forward to your contributions. 🎉
> And if you like the project, but just don't have time to contribute, that's fine. There are other easy ways to support the project and show your appreciation, which we would also be very happy about:
> - Star the project
> - Tweet about it
> - Refer this project in your project's readme
> - Mention the project at local meetups and tell your friends/colleagues
## I Have a Question / Bug Report
> If you want to ask a question or report a bug, we assume that you have read the available Documentation.
Before you ask a question, it is best to search for existing [Issues](https://github.com/bytedance/ui-tars-desktop/issues) that might help you. In case you have found a suitable issue and still need clarification, you can write your question in this issue. It is also advisable to search the internet for answers first.
If you then still feel the need to ask a question and need clarification, we recommend the following:
- Open an [Issue](https://github.com/bytedance/ui-tars-desktop/issues/new).
- Provide as much context as you can about what you're running into.
- Provide project and platform versions (nodejs, npm, etc), depending on what seems relevant.
We will then take care of the issue as soon as possible.
## I Want To Contribute
### Prerequisites
- [Node.js](https://nodejs.org/en/download/) >= 20
- [pnpm](https://pnpm.io/installation) >= 9
#### Technology Stack
This is a [Monorepo](https://pnpm.io/workspaces) project including the following technologies:
- Cross-platform framework: [Electron](https://www.electronjs.org/)
- Interface:
- [React](https://react.dev/)
- [Vite](https://vitejs.dev/)
- [Chakra UI V2](https://v2.chakra-ui.com/)
- State management and communication:
- [Zustand](https://zustand.docs.pmnd.rs/)
- [@ui-tars/electron-ipc](https://github.com/bytedance/ui-tars-desktop/tree/main/packages/electron-ipc)
- Automation framework/toolkit:
- [nut.js](https://nutjs.dev/)
- Test framework
- [Vitest](https://vitest.dev/)
- [Playwright](https://playwright.dev/)
### Structure of the project
```bash
.
├── README.md
├── package.json # Electron application dependencies
├── forge.config.ts # Electron pack and publish configuration
├── electron.vite.config.ts # Electron bundle configuration
│
├── src # Electron application source code
│ ├── main # Main process source code(Like backend)
│ ├── preload # Preload script source code
│ └── renderer # Renderer process source code(Like frontend)
│
├── packages # Packages or Modules or SDK for UI-TARS Desktop
│ ├── action-parser # Action parser for parsing UI-TARS model output into actions
│ ├── core # Core SDK package for UI-TARS Agent
│ ├── electron-ipc # Electron IPC for communication between main and renderer processes
│ ├── shared # Shared code of the project(including types, utils, constants, etc.)
│ ├── utio # UTIO (UI-TARS Insights and Observation)
│ ├── visualizer # Sharing HTML Visualization Reporter
│ └── operators # Automation operators
│ ├── browserbase # Browserbase integration
│ └── nut-js # Nut.js integration
│
├── docs # Documentation of the project
├── rfcs # RFCs (Request for Comments) for the project
├── e2e # E2E test cases for the project
├── playwright.config.ts # E2E test configuration
└── vitest.*.mts # Unit test configuration
```
> **Note**: The `src` directory is located in the top-level directory instead of the `apps/{main,preload,renderer}` directories because Electron Forge previously did not support Pnpm's hoisting mechanism([electron/forge#2633](https://github.com/electron/forge/issues/2633)), requiring the `src` directory to be placed in the top-level directory.
#### Clone the repository
```bash
$ git clone https://github.com/bytedance/ui-tars-desktop.git
$ cd ui-tars-desktop
```
### Development
#### Install dependencies
```bash
$ pnpm install
```
#### Run the application
```bash
$ pnpm run dev
```
After the application starts, you can see the UI-TARS interface within the application.
> **Note**: On MacOS, you need to grant permissions to the app (e.g., iTerm2, Terminal) you are using to run commands.
#### Main process reload
By default, `pnpm run dev` only provides frontend Hot Module Replacement (HMR). If you also need to reload the main process while debugging, run `pnpm run dev:w`.
```bash
$ pnpm run dev:w
```
### Release
#### Desktop Application
The CI pipeline to execute is [.github/workflows/release.yml](.github/workflows/release.yml), only manual triggered by maintainers. If you're a maintainer, you can follow the steps below to release the application:
1. Edit the `version` in `package.json`
2. Git commit and push to the `release/${version}` branch, create a PR targeting `main` branch, titled `release(app): ${version}`
3. Trigger the release [workflow](https://github.com/bytedance/UI-TARS-desktop/actions/workflows/release.yml) manually after the PR is merged
Currently, the release workflow supports the following platforms:
- MacOS x64
- MacOS arm64
- Windows x64
#### Packages
##### Latest version
If you want to publish the `latest` version packages to the npm registry, you can run the following command:
1. `pnpm changeset` to specify the changelogs for the packages you want to publish
2. Git commit and push to the `release-pkgs/${version}` branch, create a PR targeting `main` branch, titled `release(pkgs): ${version}`
3. `pnpm run publish:packages` to publish the packages in latest `origin/main` branch after the PR is merged
##### Beta version
If you want to publish the `beta` version packages to the npm registry, you can run the following command:
1. `pnpm changeset` to specify the changelogs for the packages you want to publish
2. Git commit and push to the branch
3. `pnpm run publish-beta:packages` to publish the packages in current branch
### Documentation
The documents are placed in the `docs/*.md` directory, formatted in markdown. There is currently no documentation site, but the `docs/*.md` directory will be converted into a documentation site in the future.
## Styleguides
### Pre-commit Hooks
We use [Husky](https://typicode.github.io/husky/#/) and [lint-staged](https://github.com/okonet/lint-staged) to enforce the pre-commit hooks. The hooks include:
- `prettier --write` to format the code
- `npm run typecheck` to strictly check the type
### Commit Messages
We use [Conventional Commits](https://www.conventionalcommits.org/) to standardize the commit messages.
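For example (illustrative messages, not taken from the repository history):
```
feat(renderer): show capture status while taking screenshots
fix(action-parser): handle empty model output without throwing
```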
### CI / Testing
Each PR or push to the main branch triggers the CI pipeline to run the unit tests and E2E tests.
#### Unit test
```bash
pnpm run test
```
#### E2E test
```bash
pnpm run test:e2e
```
## Submitting Changes
* Push your changes to a feature branch in your fork of the repository.
* Submit a pull request to this repository.
* Accept the CLA in your PR. | {
"source": "bytedance/UI-TARS-desktop",
"title": "CONTRIBUTING.md",
"url": "https://github.com/bytedance/UI-TARS-desktop/blob/main/CONTRIBUTING.md",
"date": "2025-01-19T09:04:43",
"stars": 2834,
"description": "A GUI Agent application based on UI-TARS(Vision-Lanuage Model) that allows you to control your computer using natural language.",
"file_size": 7491
} |
<p align="center">
<img alt="UI-TARS" width="260" src="resources/icon.png">
</p>
# UI-TARS Desktop
UI-TARS Desktop is a GUI Agent application based on [UI-TARS (Vision-Language Model)](https://github.com/bytedance/UI-TARS) that allows you to control your computer using natural language.
<p align="center">
   📑 <a href="https://arxiv.org/abs/2501.12326">Paper</a>   
| 🤗 <a href="https://huggingface.co/bytedance-research/UI-TARS-7B-DPO">Hugging Face Models</a>  
|   🫨 <a href="https://discord.gg/pTXwYVjfcs">Discord</a>  
|   🤖 <a href="https://www.modelscope.cn/models/bytedance-research/UI-TARS-7B-DPO">ModelScope</a>  
<br>
🖥️ Desktop Application   
|    👓 <a href="https://github.com/web-infra-dev/midscene">Midscene (use in browser)</a>
</p>
### ⚠️ Important Announcement: GGUF Model Performance
The **GGUF model** has undergone quantization, but unfortunately, its performance cannot be guaranteed. As a result, we have decided to **downgrade** it.
💡 **Alternative Solution**:
You can use **[Cloud Deployment](#cloud-deployment)** or **[Local Deployment [vLLM]](#local-deployment-vllm)** (if you have enough GPU resources) instead.
We appreciate your understanding and patience as we work to ensure the best possible experience.
## Updates
- 🚀 01.25: We updated the **[Cloud Deployment](#cloud-deployment)** section in the Chinese guide [GUI模型部署教程](https://bytedance.sg.larkoffice.com/docx/TCcudYwyIox5vyxiSDLlgIsTgWf#U94rdCxzBoJMLex38NPlHL21gNb) with new information about the ModelScope platform. You can now use the ModelScope platform for deployment.
## Showcases
| Instruction | Video |
| :---: | :---: |
| Get the current weather in SF using the web browser | <video src="https://github.com/user-attachments/assets/5235418c-ac61-4895-831d-68c1c749fc87" height="300" /> |
| Send a tweet with the content "hello world" | <video src="https://github.com/user-attachments/assets/737ccc11-9124-4464-b4be-3514cbced85c" height="300" /> |
## Features
- 🤖 Natural language control powered by Vision-Language Model
- 🖥️ Screenshot and visual recognition support
- 🎯 Precise mouse and keyboard control
- 💻 Cross-platform support (Windows/MacOS)
- 🔄 Real-time feedback and status display
- 🔐 Private and secure - fully local processing
## Quick Start
### Download
You can download the [latest release](https://github.com/bytedance/UI-TARS-desktop/releases/latest) version of UI-TARS Desktop from our releases page.
> **Note**: If you have [Homebrew](https://brew.sh/) installed, you can install UI-TARS Desktop by running the following command:
> ```bash
> brew install --cask ui-tars
> ```
### Install
#### MacOS
1. Drag **UI TARS** application into the **Applications** folder
<img src="./images/mac_install.png" width="500px" />
2. Enable the permission of **UI TARS** in MacOS:
- System Settings -> Privacy & Security -> **Accessibility**
- System Settings -> Privacy & Security -> **Screen Recording**
<img src="./images/mac_permission.png" width="500px" />
3. Then open **UI TARS** application, you can see the following interface:
<img src="./images/mac_app.png" width="500px" />
#### Windows
Simply run the application, and you will see the following interface:
<img src="./images/windows_install.png" width="400px" />
### Deployment
#### Cloud Deployment
We recommend using HuggingFace Inference Endpoints for fast deployment.
We provide two docs for reference:
English version: [GUI Model Deployment Guide](https://juniper-switch-f10.notion.site/GUI-Model-Deployment-Guide-17b5350241e280058e98cea60317de71)
Chinese version: [GUI模型部署教程](https://bytedance.sg.larkoffice.com/docx/TCcudYwyIox5vyxiSDLlgIsTgWf#U94rdCxzBoJMLex38NPlHL21gNb)
#### Local Deployment [vLLM]
We recommend using vLLM for fast deployment and inference. You need to use `vllm>=0.6.1`.
```bash
pip install -U transformers
VLLM_VERSION=0.6.6
CUDA_VERSION=cu124
pip install vllm==${VLLM_VERSION} --extra-index-url https://download.pytorch.org/whl/${CUDA_VERSION}
```
##### Download the Model
We provide three model sizes on Hugging Face: **2B**, **7B**, and **72B**. To achieve the best performance, we recommend using the **7B-DPO** or **72B-DPO** model (based on your hardware configuration):
- [2B-SFT](https://huggingface.co/bytedance-research/UI-TARS-2B-SFT)
- [7B-SFT](https://huggingface.co/bytedance-research/UI-TARS-7B-SFT)
- [7B-DPO](https://huggingface.co/bytedance-research/UI-TARS-7B-DPO)
- [72B-SFT](https://huggingface.co/bytedance-research/UI-TARS-72B-SFT)
- [72B-DPO](https://huggingface.co/bytedance-research/UI-TARS-72B-DPO)
##### Start an OpenAI API Service
Run the command below to start an OpenAI-compatible API service:
```bash
python -m vllm.entrypoints.openai.api_server --served-model-name ui-tars --model <path to your model>
```
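Once the server is up, you can sanity-check the OpenAI-compatible endpoint directly. A minimal request sketch, assuming vLLM's default host and port (`localhost:8000`):

```bash
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "ui-tars", "messages": [{"role": "user", "content": "hello"}]}'
```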
##### Input your API information
<img src="./images/settings_model.png" width="500px" />
<!-- If you use Ollama, you can use the following settings to start the server:
```yaml
VLM Provider: ollama
VLM Base Url: http://localhost:11434/v1
VLM API Key: api_key
VLM Model Name: ui-tars
``` -->
> **Note**: VLM Base Url is OpenAI compatible API endpoints (see [OpenAI API protocol document](https://platform.openai.com/docs/guides/vision/uploading-base-64-encoded-images) for more details).
## Contributing
[CONTRIBUTING.md](./CONTRIBUTING.md)
## SDK(Experimental)
[SDK](./docs/sdk.md)
## License
UI-TARS Desktop is licensed under the Apache License 2.0.
## Citation
If you find our paper and code useful in your research, please consider giving a star :star: and citation :pencil:
```BibTeX
@article{qin2025ui,
title={UI-TARS: Pioneering Automated GUI Interaction with Native Agents},
author={Qin, Yujia and Ye, Yining and Fang, Junjie and Wang, Haoming and Liang, Shihao and Tian, Shizuo and Zhang, Junda and Li, Jiahao and Li, Yunxin and Huang, Shijue and others},
journal={arXiv preprint arXiv:2501.12326},
year={2025}
}
``` | {
"source": "bytedance/UI-TARS-desktop",
"title": "README.md",
"url": "https://github.com/bytedance/UI-TARS-desktop/blob/main/README.md",
"date": "2025-01-19T09:04:43",
"stars": 2834,
"description": "A GUI Agent application based on UI-TARS(Vision-Lanuage Model) that allows you to control your computer using natural language.",
"file_size": 6046
} |
# Security Policy
## Supported Versions
Here are the versions currently being supported with security updates.
| Version | Supported |
| ------- | ------------------ |
| 0.0.x | :white_check_mark: |
## Reporting a Vulnerability
If you find any vulnerability issue, please report it to https://github.com/bytedance/UI-TARS-desktop/security.
We will get in touch with you shortly.
## Security Advisories
We will publish security advisories for the latest version. | {
"source": "bytedance/UI-TARS-desktop",
"title": "SECURITY.md",
"url": "https://github.com/bytedance/UI-TARS-desktop/blob/main/SECURITY.md",
"date": "2025-01-19T09:04:43",
"stars": 2834,
"description": "A GUI Agent application based on UI-TARS(Vision-Lanuage Model) that allows you to control your computer using natural language.",
"file_size": 480
} |
# Changesets
Hello and welcome! This folder has been automatically generated by `@changesets/cli`, a build tool that works
with multi-package repos, or single-package repos to help you version and publish your code. You can
find the full documentation for it [in our repository](https://github.com/changesets/changesets)
We have a quick list of common questions to get you started engaging with this project in
[our documentation](https://github.com/changesets/changesets/blob/main/docs/common-questions.md) | {
"source": "bytedance/UI-TARS-desktop",
"title": ".changeset/README.md",
"url": "https://github.com/bytedance/UI-TARS-desktop/blob/main/.changeset/README.md",
"date": "2025-01-19T09:04:43",
"stars": 2834,
"description": "A GUI Agent application based on UI-TARS(Vision-Lanuage Model) that allows you to control your computer using natural language.",
"file_size": 509
} |
---
'@ui-tars/operator-browserbase': patch
'@ui-tars/operator-nut-js': patch
'@ui-tars/shared': patch
'@ui-tars/cli': patch
'@ui-tars/sdk': patch
---
chore: open-operator | {
"source": "bytedance/UI-TARS-desktop",
"title": ".changeset/fast-insects-flash.md",
"url": "https://github.com/bytedance/UI-TARS-desktop/blob/main/.changeset/fast-insects-flash.md",
"date": "2025-01-19T09:04:43",
"stars": 2834,
"description": "A GUI Agent application based on UI-TARS(Vision-Lanuage Model) that allows you to control your computer using natural language.",
"file_size": 171
} |
---
'@ui-tars/action-parser': patch
'@ui-tars/cli': patch
'@ui-tars/electron-ipc': patch
'@ui-tars/operator-nut-js': patch
'@ui-tars/sdk': patch
'@ui-tars/shared': patch
'@ui-tars/utio': patch
---
bump: sdk support | {
"source": "bytedance/UI-TARS-desktop",
"title": ".changeset/selfish-humans-drive.md",
"url": "https://github.com/bytedance/UI-TARS-desktop/blob/main/.changeset/selfish-humans-drive.md",
"date": "2025-01-19T09:04:43",
"stars": 2834,
"description": "A GUI Agent application based on UI-TARS(Vision-Lanuage Model) that allows you to control your computer using natural language.",
"file_size": 215
} |
---
'@ui-tars/operator-browserbase': patch
'@ui-tars/operator-nut-js': patch
'@ui-tars/sdk': patch
---
chore: types | {
"source": "bytedance/UI-TARS-desktop",
"title": ".changeset/short-shoes-tap.md",
"url": "https://github.com/bytedance/UI-TARS-desktop/blob/main/.changeset/short-shoes-tap.md",
"date": "2025-01-19T09:04:43",
"stars": 2834,
"description": "A GUI Agent application based on UI-TARS(Vision-Lanuage Model) that allows you to control your computer using natural language.",
"file_size": 116
} |
---
'@ui-tars/cli': patch
'@ui-tars/sdk': patch
---
update | {
"source": "bytedance/UI-TARS-desktop",
"title": ".changeset/witty-points-rescue.md",
"url": "https://github.com/bytedance/UI-TARS-desktop/blob/main/.changeset/witty-points-rescue.md",
"date": "2025-01-19T09:04:43",
"stars": 2834,
"description": "A GUI Agent application based on UI-TARS(Vision-Lanuage Model) that allows you to control your computer using natural language.",
"file_size": 59
} |
# Preset Management Guide
> [!IMPORTANT]
> Currently, **UI-TARS Desktop** does not directly provide server-side capabilities, so we do not ship official presets. We welcome community developers to contribute presets [here](../examples/presets/).
A **preset** is a collection of [settings](./setting.md) (_introduced in [#61](https://github.com/bytedance/UI-TARS-desktop/pull/61)_). **UI-TARS Desktop** supports importing presets via `files` or `URLs`:
```mermaid
graph TD
A[Import Preset] --> B{Preset Type}
B -->|File| C[YAML File]
B -->|URL| D[URL Endpoint]
C --> E[Manual Updates 🔧]
D --> F[Auto Sync ⚡]
```
<br>
## Preset Types Comparison
| Feature | Local Presets | Remote Presets |
|-----------------------|------------------------|------------------------|
| **Storage** | Device-local | Cloud-hosted |
| **Update Mechanism** | Manual | Automatic |
| **Access Control** | Read/Write | Read-Only |
| **Versioning** | Manual | Git-integrated |
<br>
## Examples
### Import from file
**UI-TARS Desktop** supports importing presets from files. Once the file is parsed successfully, the settings will be automatically updated.
| Function | Snapshot |
| --- | ---|
| Open Setting |<img width="320" alt="image" src="https://github.com/user-attachments/assets/1d2ae27c-9b2e-4896-96a6-04832f850907" /> |
| Import Success | <img width="320" alt="image" src="https://github.com/user-attachments/assets/38f77101-7388-4363-ab27-668180f51aaa" />|
| Exception: Invalid Content | <img width="320" alt="image" src="https://github.com/user-attachments/assets/5ebec2b2-12f6-4d1a-84a7-8202ef651223" /> |
<br>
### Import from URL
**UI-TARS Desktop** also supports importing presets from URLs. If automatic updates are set, presets will be automatically pulled every time the application is started.
| Function | Snapshot |
| --- | ---|
| Open Setting | <img width="320" alt="image" src="https://github.com/user-attachments/assets/d446da0e-3bb4-4ca5-bc95-4f235d979fd0" /> |
| Import Success (Default) | <img width="320" alt="image" src="https://github.com/user-attachments/assets/a6470ed4-80ac-45a1-aaba-39e598d5af0f" /> |
| Import Success (Auto Update) | <img width="320" alt="image" src="https://github.com/user-attachments/assets/b5364d66-6654-401b-969e-f85baeedbda0" />|
<br>
### Preset Example
```yaml
name: UI TARS Desktop Example Preset
language: en
vlmProvider: Hugging Face
vlmBaseUrl: https://your-endpoint.huggingface.cloud/v1
vlmApiKey: your_api_key
vlmModelName: your_model_name
reportStorageBaseUrl: https://your-report-storage-endpoint.com/upload
utioBaseUrl: https://your-utio-endpoint.com/collect
```
See all [example presets](../examples/presets). | {
"source": "bytedance/UI-TARS-desktop",
"title": "docs/preset.md",
"url": "https://github.com/bytedance/UI-TARS-desktop/blob/main/docs/preset.md",
"date": "2025-01-19T09:04:43",
"stars": 2834,
"description": "A GUI Agent application based on UI-TARS(Vision-Lanuage Model) that allows you to control your computer using natural language.",
"file_size": 2869
} |
# @ui-tars/sdk Guide(Beta)
## Overview
`@ui-tars/sdk` is a powerful cross-platform (any device/platform) toolkit for building GUI automation agents.
It provides a flexible framework to create agents that can interact with graphical user interfaces through various operators, and it supports running on both **Node.js** and in the **web browser**.
```mermaid
classDiagram
class GUIAgent~T extends Operator~ {
+model: UITarsModel
+operator: T
+signal: AbortSignal
+onData
+run()
}
class UITarsModel {
+invoke()
}
class Operator {
<<interface>>
+screenshot()
+execute()
}
class NutJSOperator {
+screenshot()
+execute()
}
class WebOperator {
+screenshot()
+execute()
}
class MobileOperator {
+screenshot()
+execute()
}
GUIAgent --> UITarsModel
GUIAgent ..> Operator
Operator <|.. NutJSOperator
Operator <|.. WebOperator
Operator <|.. MobileOperator
```
## Try it out
```bash
npx @ui-tars/cli start
```
Input your UI-TARS Model Service config (`baseURL`, `apiKey`, `model`), then you can control your computer with the CLI.
```
Need to install the following packages:
Ok to proceed? (y) y
│
◆ Input your instruction
│ _ Open Chrome
└
```
## Agent Execution Process
```mermaid
sequenceDiagram
participant user as User
participant guiAgent as GUI Agent
participant model as UI-TARS Model
participant operator as Operator
user -->> guiAgent: "`instruction` + <br /> `Operator.MANUAL.ACTION_SPACES`"
activate user
activate guiAgent
loop status !== StatusEnum.RUNNING
guiAgent ->> operator: screenshot()
activate operator
operator -->> guiAgent: base64, Physical screen size
deactivate operator
guiAgent ->> model: instruction + actionSpaces + screenshots.slice(-5)
model -->> guiAgent: `prediction`: click(start_box='(27,496)')
guiAgent -->> user: prediction, next action
guiAgent ->> operator: execute(prediction)
activate operator
operator -->> guiAgent: success
deactivate operator
end
deactivate guiAgent
deactivate user
```
### Basic Usage
Basic usage is built on the `@ui-tars/sdk` package; here's a basic example of using the SDK:
> Note: This example uses `nut-js` (a cross-platform computer control tool) as the operator; you can also use or customize other operators. The NutJS operator supports common desktop automation actions:
> - Mouse actions: click, double click, right click, drag, hover
> - Keyboard input: typing, hotkeys
> - Scrolling
> - Screenshot capture
```ts
import { GUIAgent } from '@ui-tars/sdk';
import { NutJSOperator } from '@ui-tars/operator-nut-js';

// Your UI-TARS model service configuration (placeholder values)
const config = {
  baseURL: 'https://your-endpoint/v1',
  apiKey: 'your_api_key',
  model: 'ui-tars',
};

const guiAgent = new GUIAgent({
  model: {
    baseURL: config.baseURL,
    apiKey: config.apiKey,
    model: config.model,
  },
  operator: new NutJSOperator(),
  onData: ({ data }) => {
    console.log(data);
  },
  onError: ({ data, error }) => {
    console.error(error, data);
  },
});

await guiAgent.run('send "hello world" to x.com');
```
### Handling Abort Signals
You can abort the agent by passing an `AbortSignal` to the GUIAgent `signal` option.
```ts
const abortController = new AbortController();
const guiAgent = new GUIAgent({
// ... other config
signal: abortController.signal,
});
// ctrl/cmd + c to cancel operation
process.on('SIGINT', () => {
abortController.abort();
});
```
## Configuration Options
The `GUIAgent` constructor accepts the following configuration options:
- `model`: Model configuration(OpenAI-compatible API) or custom model instance
- `baseURL`: API endpoint URL
- `apiKey`: API authentication key
- `model`: Model name to use
- more options see [OpenAI API](https://platform.openai.com/docs/guides/vision/uploading-base-64-encoded-images)
- `operator`: Instance of an operator class that implements the required interface
- `signal`: AbortController signal for canceling operations
- `onData`: Callback for receiving agent data/status updates
- `data.conversations` is an array of objects, **IMPORTANT: it is a delta, not the whole conversation history** (see the sketch after this list); each object contains:
- `from`: The role of the message, it can be one of the following:
- `human`: Human message
- `gpt`: Agent response
- `screenshotBase64`: Screenshot base64
- `value`: The content of the message
- `data.status` is the current status of the agent, it can be one of the following:
- `StatusEnum.INIT`: Initial state
- `StatusEnum.RUNNING`: Agent is actively executing
- `StatusEnum.END`: Operation completed
- `StatusEnum.MAX_LOOP`: Maximum loop count reached
- `onError`: Callback for error handling
- `systemPrompt`: Optional custom system prompt
- `maxLoopCount`: Maximum number of interaction loops (default: 25)
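Because `data.conversations` only carries deltas, a consumer typically accumulates them into a full history. A minimal sketch, assuming `StatusEnum` is exported from `@ui-tars/sdk` and a local vLLM endpoint as the model service:

```ts
import { GUIAgent, StatusEnum } from '@ui-tars/sdk';
import { NutJSOperator } from '@ui-tars/operator-nut-js';

// Full conversation history, rebuilt from the deltas delivered via onData
const history: Array<{ from: string; value: string }> = [];

const guiAgent = new GUIAgent({
  model: {
    baseURL: 'http://localhost:8000/v1', // assumed local vLLM endpoint
    apiKey: 'api_key',
    model: 'ui-tars',
  },
  operator: new NutJSOperator(),
  onData: ({ data }) => {
    // conversations is a delta, so append rather than replace
    history.push(...(data.conversations ?? []));
    if (data.status === StatusEnum.END) {
      console.log(`task finished with ${history.length} messages`);
    }
  },
  onError: ({ error }) => console.error(error),
});
```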
### Status flow
```mermaid
stateDiagram-v2
[*] --> INIT
INIT --> RUNNING
RUNNING --> RUNNING: Execute Actions
RUNNING --> END: Task Complete
RUNNING --> MAX_LOOP: Loop Limit Reached
END --> [*]
MAX_LOOP --> [*]
```
## Advanced Usage
### Operator Interface
When implementing a custom operator, you need to implement two core methods: `screenshot()` and `execute()`.
#### Initialize
Run `npm init` to create a new operator package; an example `package.json` is as follows:
```json
{
"name": "your-operator-tool",
"version": "1.0.0",
"main": "./dist/index.js",
"module": "./dist/index.mjs",
"types": "./dist/index.d.ts",
"scripts": {
"dev": "tsup --watch",
"prepare": "npm run build",
"build": "tsup",
"test": "vitest"
},
"files": [
"dist"
],
"publishConfig": {
"access": "public",
"registry": "https://registry.npmjs.org"
},
"dependencies": {
"jimp": "^1.6.0"
},
"peerDependencies": {
"@ui-tars/sdk": "latest"
},
"devDependencies": {
"@ui-tars/sdk": "latest",
"tsup": "^8.3.5",
"typescript": "^5.7.2",
"vitest": "^3.0.2"
}
}
```
#### screenshot()
This method captures the current screen state and returns a `ScreenshotOutput`:
```typescript
interface ScreenshotOutput {
// Base64 encoded image string
base64: string;
// Physical screen width
width: number;
// Physical screen height
height: number;
// Device pixel ratio (DPR)
scaleFactor: number;
}
```
#### execute()
This method performs actions based on model predictions. It receives an `ExecuteParams` object:
```typescript
interface ExecuteParams {
// Raw prediction string from the model
prediction: string;
// Parsed prediction object
parsedPrediction: {
action_type: string;
action_inputs: Record<string, any>;
reflection: string | null;
thought: string;
};
// Physical screen width
screenWidth: number;
// Physical screen height
screenHeight: number;
// Device pixel ratio (DPR)
scaleFactor: number;
}
```
Advanced SDK usage builds on the `@ui-tars/sdk/core` package; you can create custom operators by extending the base `Operator` class:
```typescript
import {
  Operator,
  parseBoxToScreenCoords,
  StatusEnum,
  type ScreenshotOutput,
  type ExecuteParams,
  type ExecuteOutput,
} from '@ui-tars/sdk/core';
import { Jimp } from 'jimp';

export class CustomOperator extends Operator {
  // Define the action spaces and description for UI-TARS System Prompt splice
  static MANUAL = {
    ACTION_SPACES: [
      'click(start_box="") # click on the element at the specified coordinates',
      'type(content="") # type the specified content into the current input field',
      'scroll(direction="") # scroll the page in the specified direction',
      'finished() # finish the task',
      // ...more_actions
    ],
  };

  public async screenshot(): Promise<ScreenshotOutput> {
    // Implement screenshot functionality; here Jimp reads the image dimensions
    const base64 = 'base64-encoded-image';
    const buffer = Buffer.from(base64, 'base64');
    const image = await Jimp.read(buffer);

    return {
      base64,
      width: image.bitmap.width,
      height: image.bitmap.height,
      scaleFactor: 1
    };
  }

  async execute(params: ExecuteParams): Promise<ExecuteOutput> {
    const { parsedPrediction, screenWidth, screenHeight, scaleFactor } = params;
    // Implement action execution logic
    // if click action, get coordinates from parsedPrediction
    const startBoxStr = parsedPrediction?.action_inputs?.start_box || '';
    const { x: startX, y: startY } = parseBoxToScreenCoords({
      boxStr: startBoxStr,
      screenWidth,
      screenHeight,
    });

    if (parsedPrediction?.action_type === 'finished') {
      // finish the GUIAgent task
      return { status: StatusEnum.END };
    }
  }
}
```
Required methods:
- `screenshot()`: Captures the current screen state
- `execute()`: Performs the requested action based on model predictions
Optional static properties:
- `MANUAL`: Define the action spaces and description for UI-TARS Model understanding
- `ACTION_SPACES`: Define the action spaces and description for UI-TARS Model understanding
Loaded into `GUIAgent`:
```ts
const guiAgent = new GUIAgent({
// ... other config
systemPrompt: `
// ... other system prompt
${CustomOperator.MANUAL.ACTION_SPACES.join('\n')}
`,
operator: new CustomOperator(),
});
```
### Custom Model Implementation
You can implement custom model logic by extending the `UITarsModel` class:
```typescript
class CustomUITarsModel extends UITarsModel {
constructor(modelConfig: { model: string }) {
super(modelConfig);
}
async invoke(params: any) {
// Implement custom model logic
return {
prediction: 'action description',
parsedPredictions: [{
action_type: 'click',
action_inputs: { /* ... */ },
reflection: null,
thought: 'reasoning'
}]
};
}
}
const agent = new GUIAgent({
model: new CustomUITarsModel({ model: 'custom-model' }),
// ... other config
});
```
> Note: Implementing a custom model is not recommended, because the built-in model already contains a lot of data processing logic (including image transformations, scaling factors, etc.) that a custom implementation would need to reproduce.
### Planning
You can combine planning/reasoning models (such as OpenAI-o1, DeepSeek-R1) to implement complex GUIAgent logic for planning, reasoning, and execution:
```ts
const guiAgent = new GUIAgent({
// ... other config
});
const planningList = await reasoningModel.invoke({
conversations: [
{
role: 'user',
content: 'buy a ticket from beijing to shanghai',
}
]
})
/**
* [
* 'open chrome',
* 'open trip.com',
* 'click "search" button',
* 'select "beijing" in "from" input',
* 'select "shanghai" in "to" input',
* 'click "search" button',
* ]
*/
for (const planning of planningList) {
await guiAgent.run(planning);
}
``` | {
"source": "bytedance/UI-TARS-desktop",
"title": "docs/sdk.md",
"url": "https://github.com/bytedance/UI-TARS-desktop/blob/main/docs/sdk.md",
"date": "2025-01-19T09:04:43",
"stars": 2834,
"description": "A GUI Agent application based on UI-TARS(Vision-Lanuage Model) that allows you to control your computer using natural language.",
"file_size": 10822
} |
# Settings Configuration Guide
## Overview
**UI-TARS Desktop** offers granular control over application behavior through its settings system. This document provides comprehensive guidance on configuration options, preset management, and operational best practices.
<p align="center">
<img src="../images/setting.png" alt="Settings Interface Overview" width="650">
<br>
<em>Main Settings Interface</em>
</p>
<br>
## Configuration Options
### Language
Controls the localization settings for the VLM.
| Property | Details |
| ----------- | ------------------------------ |
| **Type** | `string` |
| **Options** | `en` (English), `zh` (Chinese) |
| **Default** | `en` |
> [!NOTE]
> Changing this setting will **only** affect the output of the VLM, not the language of the desktop app itself. Regarding i18n of the app itself, PR contributions are welcome.
<br>
### VLM Provider
Selects the backend VLM provider used to make GUI action decisions.
| Property | Details |
| ----------- | ---------------------- |
| **Type** | `string` |
| **Options** | `Hugging Face`, `vLLM` |
| **Default** | `Hugging Face` |
> [!NOTE]
> This is an interface reserved for different VLM providers.
<br>
### VLM Base URL
Specifies the base URL of the VLM service to request.
| Property | Details |
| ------------ | -------- |
| **Type** | `string` |
| **Required** | `true` |
> [!NOTE]
> VLM Base URL should be OpenAI compatible API endpoints (see [OpenAI API protocol document](https://platform.openai.com/docs/guides/vision/uploading-base-64-encoded-images) for more details).
<br>
### VLM Model Name
Specifies the requested model name.
| Property | Details |
| ------------ | -------- |
| **Type** | `string` |
| **Required** | `true` |
<br>
### Report Storage Base URL
Defines the base URL for uploading report files. By default, when this option is not set, clicking **Export as HTML** (a.k.a. <b>Share</b>) automatically triggers a download of the report file:
<p align="center">
<img src="../images/download-report.png" alt="Download report" width="320">
<br>
</p>
Once it's set, when the user clicks **Export as HTML**, the report file will first be uploaded to the Report Storage Server, which returns a publicly accessible URL for the persistent file.
<p align="center">
<img src="../images/upload-report-success.png" alt="Download report" width="320">
<br>
</p>
#### Report Storage Server Interface
The Report Storage Server should implement the following HTTP API endpoint:
| Property | Details |
| ------------ | ------------------------------------------------------------------------------------------------------------ |
| **Endpoint** | `POST /your-storage-endpoint` |
| **Headers** | Content-Type: `multipart/form-data` <br> <!-- - Authorization: Bearer \<access_token\> (Not Supported) --> |
#### Request Body
The request should be sent as `multipart/form-data` with the following field:
| Field | Type | Required | Description | Constraints |
| ----- | ---- | -------- | ---------------- | ---------------------------------- |
| file | File | Yes | HTML report file | - Format: HTML<br>- Max size: 30MB |
#### Response
**Success Response (200 OK)**
```json
{
"url": "https://example.com/reports/xxx.html"
}
```
The response should return a JSON object containing a publicly accessible URL where the report can be accessed.
> [!NOTE]
> Currently, there is no authentication designed for Report Storage Server. If you have any requirements, please submit an [issue](https://github.com/bytedance/UI-TARS-desktop/issues).
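For illustration, a minimal storage endpoint might look like the following sketch (assumptions: Express and Multer for multipart parsing, local-filesystem persistence standing in for real object storage, and `https://example.com` as a placeholder public host):

```ts
import express from 'express';
import multer from 'multer';
import { randomUUID } from 'crypto';
import fs from 'fs/promises';

const app = express();
// Multer's default memory storage keeps the upload in req.file.buffer;
// the 30 MB cap matches the constraint in the table above.
const upload = multer({ limits: { fileSize: 30 * 1024 * 1024 } });

app.post('/your-storage-endpoint', upload.single('file'), async (req, res) => {
  if (!req.file) {
    return res.status(400).json({ error: 'Missing "file" field' });
  }
  // Persist the HTML report under a unique name; a production server would
  // write to object storage (S3, OSS, ...) instead of the local filesystem.
  const name = `${randomUUID()}.html`;
  await fs.mkdir('./reports', { recursive: true });
  await fs.writeFile(`./reports/${name}`, req.file.buffer);
  res.json({ url: `https://example.com/reports/${name}` });
});

app.listen(3000);
```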
<br>
### UTIO Base URL
**UTIO** (_UI-TARS Insights and Observation_) is a data collection mechanism that provides insights into **UI-TARS Desktop** (_introduced in [#60](https://github.com/bytedance/UI-TARS-desktop/pull/60)_). The design of UTIO is also related to sharing. The overall process is as follows:
<p align="center">
<img src="../images/utio-flow.png" alt="UTIO Flow" width="800">
<br>
<em>UTIO Flow</em>
</p>
This option defines the base URL for the **UTIO** server that handles application events and instructions.
#### Server Interface Specification
The UTIO server accepts events through HTTP POST requests and supports three types of events:
| Property | Details |
| ------------ | -------------------------------- |
| **Endpoint** | `POST /your-utio-endpoint` |
| **Headers** | Content-Type: `application/json` |
##### Event Types
The server handles three types of events:
###### **Application Launch**
```typescript
interface AppLaunchedEvent {
type: 'appLaunched';
/** Platform type */
platform: string;
/** OS version, e.g. "major.minor.patch" format */
osVersion: string;
/** Screen width in pixels */
screenWidth: number;
/** Screen height in pixels */
screenHeight: number;
}
```
###### **Send Instruction**
```typescript
interface SendInstructionEvent {
type: 'sendInstruction';
/** User-submitted instruction content */
instruction: string;
}
```
###### **Share Report**
```typescript
interface ShareReportEvent {
type: 'shareReport';
/** Optional last screenshot url or base64 content */
lastScreenshot?: string;
/** Optional report url */
report?: string;
/** Related instruction */
instruction: string;
}
```
##### Request Example
```json
{
"type": "appLaunched",
"platform": "iOS",
"osVersion": "16.0.0",
"screenWidth": 390,
"screenHeight": 844
}
```
##### Response
**Success Response (200 OK)**
```json
{
"success": true
}
```
> [!NOTE]
> All events are processed asynchronously. The server should respond promptly to acknowledge receipt of the event.
##### Server Example
###### Node.js
```js
const express = require('express');
const cors = require('cors');

const app = express();
const port = 3000;

app.use(cors());
app.use(express.json());

// Minimal handlers for illustration; replace with real processing logic
function handleAppLaunch(event, res) {
  return res.json({ success: true });
}

function handleSendInstruction(event, res) {
  return res.json({ success: true });
}

function handleShareReport(event, res) {
  return res.json({ success: true });
}

app.post('/your-utio-endpoint', (req, res) => {
  const event = req.body;
  if (!event || !event.type) {
    return res.status(400).json({ error: 'Missing event type' });
  }

  switch (event.type) {
    case 'appLaunched':
      return handleAppLaunch(event, res);
    case 'sendInstruction':
      return handleSendInstruction(event, res);
    case 'shareReport':
      return handleShareReport(event, res);
    default:
      return res.status(400).json({ error: 'Unsupported event type' });
  }
});

app.listen(port, () => {
  console.log(`Server listening on port ${port}`);
});
```
###### Python
```python
from flask import Flask, request, jsonify
from flask_cors import CORS

app = Flask(__name__)
CORS(app)

# Minimal handlers for illustration; replace with real processing logic
def handle_app_launch(data):
    return jsonify({'success': True})

def handle_send_instruction(data):
    return jsonify({'success': True})

def handle_share_report(data):
    return jsonify({'success': True})

@app.route('/your-utio-endpoint', methods=['POST'])
def handle_event():
    data = request.get_json()
    if not data or 'type' not in data:
        return jsonify({'error': 'Missing event type'}), 400

    event_type = data['type']
    if event_type == 'appLaunched':
        return handle_app_launch(data)
    elif event_type == 'sendInstruction':
        return handle_send_instruction(data)
    elif event_type == 'shareReport':
        return handle_share_report(data)
    else:
        return jsonify({'error': 'Unsupported event type'}), 400

if __name__ == '__main__':
    app.run(port=3000)
``` | {
"source": "bytedance/UI-TARS-desktop",
"title": "docs/setting.md",
"url": "https://github.com/bytedance/UI-TARS-desktop/blob/main/docs/setting.md",
"date": "2025-01-19T09:04:43",
"stars": 2834,
"description": "A GUI Agent application based on UI-TARS(Vision-Lanuage Model) that allows you to control your computer using natural language.",
"file_size": 7564
} |
# UI-TARS-desktop RFCs
Most changes, including bug fixes and documentation improvements, can be handled through standard GitHub pull requests. However, substantial technical changes requiring cross-platform considerations (Windows/macOS/Linux) should follow this RFC process to ensure systematic design review.
## When RFC is Required
Consider initiating an RFC for changes involving:
- Architectural modifications
- Native API integrations
- Cross-platform behavior changes
- Major performance optimizations
- Security-sensitive implementations
- Breaking API changes
## RFC Lifecycle
### 1. Pre-Discussion
- Open a GitHub Discussion thread for initial concept validation
- Identify core maintainers (@mention platform specialists)
### 2. Draft Submission
1. Fork https://github.com/bytedance/UI-TARS-desktop
2. Copy `rfcs/template.md` to `rfcs/drafts/000-feature-name.md`
3. Submit draft PR with [WIP] prefix
### 3. Technical Review Phase
- Platform leads review for:
- Windows compatibility
- macOS security implications
- Linux packaging impacts
- Required checklist completion:
- [ ] Performance analysis
- [ ] Cross-platform testing strategy
- [ ] Error handling documentation
- [ ] Binary size impact
### 4. Final Comment Period
- Freeze feature scope
- Address final review comments
- Require 2/3 maintainer approvals (including at least one platform specialist)
### 5. Implementation Tracking
- Upon acceptance:
- Create tracking issue with platform-specific tasks
- Label with target version milestone
- Assign platform implementation owners
### Status Transitions
```mermaid
graph TD
A[Draft] -->|PR Submitted| B(Review)
B -->|Approved| C[Accepted]
B -->|Rejected| D[Archived]
C -->|Implementation| E[Implemented]
C -->|No activity in 30d| F[Stalled]
F -->|Resumed| C
```
## Key Modifications from Original Process
1. Added platform specialist review requirements
2. Extended review period for cross-platform analysis
3. Mandatory platform-specific checklists
4. Implementation tracking with ownership assignments
5. Stalled state for resource management
6. Visual workflow diagram
## Implementation Rules
- RFC authors receive implementation priority
- Platform-specific implementations must include:
- Windows: MSI installer compatibility tests
- macOS: Notarization validation
- Linux: Snap/Flatpak packaging checks
- Binary size monitoring required for native modules
## References
Inspired by:
- [Electron RFC Process](https://www.electronjs.org/blog/rfc-process)
- [React Native Architecture Decisions](https://github.com/react-native-community/discussions-and-proposals) | {
"source": "bytedance/UI-TARS-desktop",
"title": "rfcs/README.md",
"url": "https://github.com/bytedance/UI-TARS-desktop/blob/main/rfcs/README.md",
"date": "2025-01-19T09:04:43",
"stars": 2834,
"description": "A GUI Agent application based on UI-TARS(Vision-Lanuage Model) that allows you to control your computer using natural language.",
"file_size": 2662
} |
---
start_date: 2025-01-28
rfc_pr:
issue:
---
# RFC Template
## Summary
Brief explanation of the proposed change for UI-TARS-desktop.
## Basic example
If the proposal involves API changes or new component interactions, provide a concise code/usage example. Omit if not applicable.
## Motivation
Why is this change essential for UI-TARS-desktop? What specific problems does it address? What limitations or user pain points will it resolve? Focus on objective technical reasons rather than subjective preferences.
## Detailed design
Technical specification of the proposal including:
- Architectural diagrams (if applicable)
- Modified/new APIs
- Data flow changes
- Lifecycle impacts
- Error handling strategies
- Compatibility with existing TARS patterns
- Platform considerations (Windows/macOS/Linux)
Provide sufficient detail for core maintainers to evaluate implementation feasibility.
## Drawbacks
Critical considerations including:
- Increased binary size/performance impact
- Maintenance complexity
- Security implications
- Cross-platform consistency risks
- Developer experience impacts
- Migration challenges for existing integrations
## Alternatives
What other approaches were considered? Include:
- Third-party solutions
- Partial implementations
- Alternative architectural patterns
- Status quo analysis
## Adoption strategy
How will this change be rolled out? Address:
- Phased implementation plan
- Backward compatibility measures
- Deprecation timelines (if any)
- Documentation updates
- Testing requirements (unit tests, E2E scenarios)
## How we teach this
Educational aspects covering:
- Updated API documentation strategy
- Sample project updates
- Tutorial integration points
- Workshop/onboarding implications
- Error message guidance
- Debugging patterns for new features
## Unresolved questions
Open technical discussions needing resolution:
- Unvalidated performance assumptions
- Undecided implementation details
- Third-party dependency risks
- Platform-specific edge cases
- Long-term maintenance ownership | {
"source": "bytedance/UI-TARS-desktop",
"title": "rfcs/template.md",
"url": "https://github.com/bytedance/UI-TARS-desktop/blob/main/rfcs/template.md",
"date": "2025-01-19T09:04:43",
"stars": 2834,
"description": "A GUI Agent application based on UI-TARS(Vision-Lanuage Model) that allows you to control your computer using natural language.",
"file_size": 2063
} |
# Open Operator
> [!WARNING]
> This is simply a proof of concept.
> Browserbase aims not to compete with web agents, but rather to provide all the necessary tools for anybody to build their own web agent. We strongly recommend you check out both [Browserbase](https://www.browserbase.com) and our open source project [Stagehand](https://www.stagehand.dev) to build your own web agent.
[](https://vercel.com/new/clone?repository-url=https%3A%2F%2Fgithub.com%2Fbrowserbase%2Fopen-operator&env=OPENAI_API_KEY,BROWSERBASE_API_KEY,BROWSERBASE_PROJECT_ID&envDescription=API%20keys%20needed%20to%20run%20Open%20Operator&envLink=https%3A%2F%2Fgithub.com%2Fbrowserbase%2Fopen-operator%23environment-variables)
https://github.com/user-attachments/assets/354c3b8b-681f-4ad0-9ab9-365dbde894af
## Getting Started
First, install the dependencies for this repository. This requires [pnpm](https://pnpm.io/installation#using-other-package-managers).
<!-- This doesn't work with NPM, haven't tested with yarn -->
```bash
pnpm install
```
Next, copy the example environment variables:
```bash
cp .env.example .env.local
```
You'll need to set up your API keys:
1. Get your UI-TARS Service from [UI-TARS](https://github.com/bytedance/UI-TARS)
2. Get your Browserbase API key and project ID from [Browserbase](https://www.browserbase.com)
Update `.env.local` with your API keys:
- `UI_TARS_BASE_URL`: Your UI-TARS Base Url
- `UI_TARS_API_KEY`: Your UI-TARS API Key
- `UI_TARS_MODEL`: Your UI-TARS Model
- `BROWSERBASE_API_KEY`: Your Browserbase API key
- `BROWSERBASE_PROJECT_ID`: Your Browserbase project ID
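For reference, a filled-in `.env.local` might look like this (all values below are placeholders):

```bash
UI_TARS_BASE_URL=https://your-endpoint.example.com/v1
UI_TARS_API_KEY=your_api_key
UI_TARS_MODEL=ui-tars
BROWSERBASE_API_KEY=bb_your_api_key
BROWSERBASE_PROJECT_ID=your_project_id
```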
Then, run the development server:
<!-- This doesn't work with NPM, haven't tested with yarn -->
```bash
pnpm dev
```
Open [http://localhost:3000](http://localhost:3000) with your browser to see Open Operator in action.
## How It Works
Building a web agent is a complex task. You need to understand the user's intent, convert it into headless browser operations, and execute actions, each of which can be incredibly complex on their own.

Stagehand is a tool that helps you build web agents. It allows you to convert natural language into headless browser operations, execute actions on the browser, and extract results back into structured data.

Under the hood, we have a very simple agent loop that just calls Stagehand to convert the user's intent into headless browser operations, and then calls Browserbase to execute those operations.

Stagehand uses Browserbase to execute actions on the browser, and OpenAI to understand the user's intent.
For more on this, check out the code at [this commit](https://github.com/browserbase/open-operator/blob/6f2fba55b3d271be61819dc11e64b1ada52646ac/index.ts).
### Key Technologies
- **[Browserbase](https://www.browserbase.com)**: Powers the core browser automation and interaction capabilities
- **[Stagehand](https://www.stagehand.dev)**: Handles precise DOM manipulation and state management
- **[Next.js](https://nextjs.org)**: Provides the modern web framework foundation
- **[OpenAI](https://openai.com)**: Enables natural language understanding and decision making
## Contributing
We welcome contributions! Whether it's:
- Adding new features
- Improving documentation
- Reporting bugs
- Suggesting enhancements
Please feel free to open issues and pull requests.
## License
Open Operator is open source software licensed under the MIT license.
## Acknowledgments
This project is inspired by OpenAI's Operator feature and builds upon various open source technologies including Next.js, React, Browserbase, and Stagehand. | {
"source": "bytedance/UI-TARS-desktop",
"title": "examples/operator-browserbase/README.md",
"url": "https://github.com/bytedance/UI-TARS-desktop/blob/main/examples/operator-browserbase/README.md",
"date": "2025-01-19T09:04:43",
"stars": 2834,
"description": "A GUI Agent application based on UI-TARS(Vision-Lanuage Model) that allows you to control your computer using natural language.",
"file_size": 3793
} |
# @ui-tars/action-parser
## 1.2.0-beta.11
### Patch Changes
- Updated dependencies
- @ui-tars/[email protected]
## 1.2.0-beta.10
### Patch Changes
- @ui-tars/[email protected]
## 1.2.0-beta.9
### Patch Changes
- bump: sdk support
- Updated dependencies
- @ui-tars/[email protected]
## 1.2.0-beta.6
### Patch Changes
- feat: new sdk
- Updated dependencies
- @ui-tars/[email protected]
## 1.2.0-beta.5
### Patch Changes
- chore: update sdk
- Updated dependencies
- @ui-tars/[email protected]
## 1.2.0-beta.4
### Patch Changes
- chore: new version
- Updated dependencies
- @ui-tars/[email protected]
## 1.2.0-beta.3
### Patch Changes
- chore: add retry
- Updated dependencies
- @ui-tars/[email protected]
## 1.2.0-beta.2
### Patch Changes
- chore: publish
- Updated dependencies
- @ui-tars/[email protected]
## 1.2.0-beta.1
### Patch Changes
- chore: remove unused code
- Updated dependencies
- @ui-tars/[email protected]
## 1.2.0-beta.0
### Minor Changes
- a062e03: feat: ui-tars agent sdk support
### Patch Changes
- Updated dependencies [a062e03]
- @ui-tars/[email protected]
## 1.1.0
### Minor Changes
- fix(actionParser): no action text return null not error
### Patch Changes
- Updated dependencies
- @ui-tars/[email protected]
## 1.0.1
### Patch Changes
- a1101ca: fix(action_parser): null values while parsing action inputs (#28)
- Updated dependencies [a1101ca]
- @ui-tars/[email protected]
## 1.0.0
### Major Changes
- dcabd45: feat: initial publish ui-tars action-parser and shared pkgs
### Patch Changes
- Updated dependencies [dcabd45]
- @ui-tars/[email protected] | {
"source": "bytedance/UI-TARS-desktop",
"title": "packages/action-parser/CHANGELOG.md",
"url": "https://github.com/bytedance/UI-TARS-desktop/blob/main/packages/action-parser/CHANGELOG.md",
"date": "2025-01-19T09:04:43",
"stars": 2834,
"description": "A GUI Agent application based on UI-TARS(Vision-Lanuage Model) that allows you to control your computer using natural language.",
"file_size": 1650
} |
# @ui-tars/cli
## 1.2.0-beta.13
### Patch Changes
- chore: open-operator
- Updated dependencies
- @ui-tars/[email protected]
- @ui-tars/[email protected]
## 1.2.0-beta.12
### Patch Changes
- Updated dependencies
- @ui-tars/[email protected]
- @ui-tars/[email protected]
## 1.2.0-beta.11
### Patch Changes
- chore: node-fetch
## 1.2.0-beta.10
### Patch Changes
- update
- Updated dependencies
- @ui-tars/[email protected]
- @ui-tars/[email protected]
## 1.2.0-beta.9
### Patch Changes
- bump: sdk support
- Updated dependencies
- @ui-tars/[email protected]
- @ui-tars/[email protected]
## 1.2.0-beta.8
### Patch Changes
- fix: useConfig to useContext
- Updated dependencies
- @ui-tars/[email protected]
- @ui-tars/[email protected]
## 1.2.0-beta.7
### Patch Changes
- Updated dependencies
- @ui-tars/[email protected]
- @ui-tars/[email protected]
## 1.2.0-beta.6
### Patch Changes
- feat: new sdk
- Updated dependencies
- @ui-tars/[email protected]
- @ui-tars/[email protected]
## 1.2.0-beta.5
### Patch Changes
- chore: update sdk
- Updated dependencies
- @ui-tars/[email protected]
- @ui-tars/[email protected]
## 1.2.0-beta.4
### Patch Changes
- chore: new version
- Updated dependencies
- @ui-tars/[email protected]
- @ui-tars/[email protected]
## 1.2.0-beta.3
### Patch Changes
- chore: add retry
- Updated dependencies
- @ui-tars/[email protected]
- @ui-tars/[email protected]
## 1.2.0-beta.2
### Patch Changes
- chore: publish
- Updated dependencies
- @ui-tars/[email protected]
- @ui-tars/[email protected]
## 1.2.0-beta.1
### Patch Changes
- chore: remove unused code
- Updated dependencies
- @ui-tars/[email protected]
- @ui-tars/[email protected]
## 1.2.0-beta.0
### Minor Changes
- a062e03: feat: ui-tars agent sdk support
### Patch Changes
- Updated dependencies [a062e03]
- @ui-tars/[email protected]
- @ui-tars/[email protected] | {
"source": "bytedance/UI-TARS-desktop",
"title": "packages/cli/CHANGELOG.md",
"url": "https://github.com/bytedance/UI-TARS-desktop/blob/main/packages/cli/CHANGELOG.md",
"date": "2025-01-19T09:04:43",
"stars": 2834,
"description": "A GUI Agent application based on UI-TARS(Vision-Lanuage Model) that allows you to control your computer using natural language.",
"file_size": 2074
} |
# @ui-tars/electron-ipc
## 1.2.0-beta.10
## 1.2.0-beta.9
### Patch Changes
- bump: sdk support
## 1.2.0-beta.6
### Patch Changes
- feat: new sdk
## 1.2.0-beta.5
### Patch Changes
- chore: update sdk
## 1.2.0-beta.4
### Patch Changes
- chore: new version
## 1.2.0-beta.3
### Patch Changes
- chore: add retry
## 1.2.0-beta.2
### Patch Changes
- chore: publish
## 1.2.0-beta.1
### Patch Changes
- chore: remove unused code
## 1.2.0-beta.0
### Minor Changes
- a062e03: feat: ui-tars agent sdk support | {
"source": "bytedance/UI-TARS-desktop",
"title": "packages/electron-ipc/CHANGELOG.md",
"url": "https://github.com/bytedance/UI-TARS-desktop/blob/main/packages/electron-ipc/CHANGELOG.md",
"date": "2025-01-19T09:04:43",
"stars": 2834,
"description": "A GUI Agent application based on UI-TARS(Vision-Lanuage Model) that allows you to control your computer using natural language.",
"file_size": 521
} |
# UI-TARS Electron IPC Handlers
A type-safe IPC (Inter-Process Communication) solution for Electron applications.
## Features
- Full TypeScript support with end-to-end type safety
- Zod schema validation support
- Simple and intuitive API
- Server-side direct invocation support
## Installation
```bash
npm install @ui-tars/electron-ipc
```
## Usage
### Define Your Router
```ts
// router.ts
import { initIpc } from '@ui-tars/electron-ipc/main';
import { z } from 'zod';
const t = initIpc.create();
export const router = t.router({
// Basic procedure without Zod
hello: t.procedure.input<{ a: string }>().handle(async ({ input }) => {
return 'hello' + input.a;
}),
// Procedure with Zod schema validation
world: t.procedure.input(z.object({ a: z.string() })).handle(async ({ input }) => {
return input.a;
})
});
// Export router type for client usage
export type AppRouter = typeof router;
```
### Main Process Setup
```ts
// main.ts
import { registerIpcMain, createServer } from '@ui-tars/electron-ipc/main';
import { router } from './router';
// Register IPC handlers
registerIpcMain(router);
// Optional: Create server instance for direct invocation in main process
const server = createServer(router);
await server.hello({ a: '123' }); // => 'hello123'
```
### Renderer Process Usage
```ts
// renderer.ts
import { createClient } from '@ui-tars/electron-ipc/renderer';
import type { AppRouter } from './router';
const client = createClient<AppRouter>({
ipcInvoke: window.Electron.ipcRenderer.invoke,
});
// Call procedures from renderer process
await client.hello({ a: '123' }); // => 'hello123'
```
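For the renderer to call `ipcRenderer.invoke`, it has to be exposed from a preload script. A minimal sketch matching the `window.Electron` shape assumed above:

```ts
// preload.ts: expose a restricted ipcRenderer surface to the renderer.
// Assumes contextIsolation is enabled (the Electron default).
import { contextBridge, ipcRenderer } from 'electron';

contextBridge.exposeInMainWorld('Electron', {
  ipcRenderer: {
    invoke: (channel: string, ...args: unknown[]) =>
      ipcRenderer.invoke(channel, ...args),
  },
});
```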
## API Reference
### Main Process
#### `initIpc.create()`
Creates a new IPC router builder instance.
#### `registerIpcMain(router)`
Registers IPC handlers for the main process.
#### `createServer(router)`
Creates a server instance for direct invocation in the main process.
### Renderer Process
#### `createClient<Router>(options)`
Creates a type-safe client for calling IPC procedures from the renderer process.
## Type Safety
The library provides full type safety between your main and renderer processes:
- Input types are validated at compile time
- Return types are properly inferred
- Zod schema validation provides runtime type safety
## License
Apache-2.0 | {
"source": "bytedance/UI-TARS-desktop",
"title": "packages/electron-ipc/README.md",
"url": "https://github.com/bytedance/UI-TARS-desktop/blob/main/packages/electron-ipc/README.md",
"date": "2025-01-19T09:04:43",
"stars": 2834,
"description": "A GUI Agent application based on UI-TARS(Vision-Lanuage Model) that allows you to control your computer using natural language.",
"file_size": 2319
} |
# @ui-tars/sdk
## 1.2.0-beta.12
### Patch Changes
- chore: open-operator
- Updated dependencies
- @ui-tars/[email protected]
- @ui-tars/[email protected]
## 1.2.0-beta.11
### Patch Changes
- chore: types
## 1.2.0-beta.10
### Patch Changes
- update
- @ui-tars/[email protected]
- @ui-tars/[email protected]
## 1.2.0-beta.9
### Patch Changes
- bump: sdk support
- Updated dependencies
- @ui-tars/[email protected]
- @ui-tars/[email protected] | {
"source": "bytedance/UI-TARS-desktop",
"title": "packages/sdk/CHANGELOG.md",
"url": "https://github.com/bytedance/UI-TARS-desktop/blob/main/packages/sdk/CHANGELOG.md",
"date": "2025-01-19T09:04:43",
"stars": 2834,
"description": "A GUI Agent application based on UI-TARS(Vision-Lanuage Model) that allows you to control your computer using natural language.",
"file_size": 501
} |
# @ui-tars/sdk Guide(Beta)
## Overview
`@ui-tars/sdk` is a powerful cross-platform (any device/platform) toolkit for building GUI automation agents.
It provides a flexible framework to create agents that can interact with graphical user interfaces through various operators, and it supports running on both **Node.js** and in the **web browser**.
```mermaid
classDiagram
class GUIAgent~T extends Operator~ {
+model: UITarsModel
+operator: T
+signal: AbortSignal
+onData
+run()
}
class UITarsModel {
+invoke()
}
class Operator {
<<interface>>
+screenshot()
+execute()
}
class NutJSOperator {
+screenshot()
+execute()
}
class WebOperator {
+screenshot()
+execute()
}
class MobileOperator {
+screenshot()
+execute()
}
GUIAgent --> UITarsModel
GUIAgent ..> Operator
Operator <|.. NutJSOperator
Operator <|.. WebOperator
Operator <|.. MobileOperator
```
## Try it out
```bash
npx @ui-tars/cli start
```
Input your UI-TARS Model Service config (`baseURL`, `apiKey`, `model`), then you can control your computer with the CLI.
```
Need to install the following packages:
Ok to proceed? (y) y
│
◆ Input your instruction
│ _ Open Chrome
└
```
## Agent Execution Process
```mermaid
sequenceDiagram
participant user as User
participant guiAgent as GUI Agent
participant model as UI-TARS Model
participant operator as Operator
user -->> guiAgent: "`instruction` + <br /> `Operator.MANUAL.ACTION_SPACES`"
activate user
activate guiAgent
loop status !== StatusEnum.RUNNING
guiAgent ->> operator: screenshot()
activate operator
operator -->> guiAgent: base64, Physical screen size
deactivate operator
guiAgent ->> model: instruction + actionSpaces + screenshots.slice(-5)
model -->> guiAgent: `prediction`: click(start_box='(27,496)')
guiAgent -->> user: prediction, next action
guiAgent ->> operator: execute(prediction)
activate operator
operator -->> guiAgent: success
deactivate operator
end
deactivate guiAgent
deactivate user
```
### Basic Usage
Basic usage is built on the `@ui-tars/sdk` package; here's a basic example of using the SDK:
> Note: This example uses `nut-js` (a cross-platform computer control tool) as the operator; you can also use or customize other operators. The NutJS operator supports common desktop automation actions:
> - Mouse actions: click, double click, right click, drag, hover
> - Keyboard input: typing, hotkeys
> - Scrolling
> - Screenshot capture
```ts
import { GUIAgent } from '@ui-tars/sdk';
import { NutJSOperator } from '@ui-tars/operator-nut-js';

// Your UI-TARS model service configuration (placeholder values)
const config = {
  baseURL: 'https://your-endpoint/v1',
  apiKey: 'your_api_key',
  model: 'ui-tars',
};

const guiAgent = new GUIAgent({
  model: {
    baseURL: config.baseURL,
    apiKey: config.apiKey,
    model: config.model,
  },
  operator: new NutJSOperator(),
  onData: ({ data }) => {
    console.log(data);
  },
  onError: ({ data, error }) => {
    console.error(error, data);
  },
});

await guiAgent.run('send "hello world" to x.com');
```
### Handling Abort Signals
You can abort the agent by passing an `AbortSignal` to the GUIAgent `signal` option.
```ts
const abortController = new AbortController();
const guiAgent = new GUIAgent({
// ... other config
signal: abortController.signal,
});
// ctrl/cmd + c to cancel operation
process.on('SIGINT', () => {
abortController.abort();
});
```
## Configuration Options
The `GUIAgent` constructor accepts the following configuration options:
- `model`: Model configuration(OpenAI-compatible API) or custom model instance
- `baseURL`: API endpoint URL
- `apiKey`: API authentication key
- `model`: Model name to use
- more options see [OpenAI API](https://platform.openai.com/docs/guides/vision/uploading-base-64-encoded-images)
- `operator`: Instance of an operator class that implements the required interface
- `signal`: AbortController signal for canceling operations
- `onData`: Callback for receiving agent data/status updates
- `data.conversations` is an array of objects, **IMPORTANT: it is a delta, not the whole conversation history**; each object contains:
- `from`: The role of the message, it can be one of the following:
- `human`: Human message
- `gpt`: Agent response
- `screenshotBase64`: Screenshot base64
- `value`: The content of the message
- `data.status` is the current status of the agent, it can be one of the following:
- `StatusEnum.INIT`: Initial state
- `StatusEnum.RUNNING`: Agent is actively executing
- `StatusEnum.END`: Operation completed
- `StatusEnum.MAX_LOOP`: Maximum loop count reached
- `onError`: Callback for error handling
- `systemPrompt`: Optional custom system prompt
- `maxLoopCount`: Maximum number of interaction loops (default: 25)
### Status flow
```mermaid
stateDiagram-v2
[*] --> INIT
INIT --> RUNNING
RUNNING --> RUNNING: Execute Actions
RUNNING --> END: Task Complete
RUNNING --> MAX_LOOP: Loop Limit Reached
END --> [*]
MAX_LOOP --> [*]
```
## Advanced Usage
### Operator Interface
When implementing a custom operator, you need to implement two core methods: `screenshot()` and `execute()`.
#### Initialize
Run `npm init` to create a new operator package; an example `package.json` is as follows:
```json
{
"name": "your-operator-tool",
"version": "1.0.0",
"main": "./dist/index.js",
"module": "./dist/index.mjs",
"types": "./dist/index.d.ts",
"scripts": {
"dev": "tsup --watch",
"prepare": "npm run build",
"build": "tsup",
"test": "vitest"
},
"files": [
"dist"
],
"publishConfig": {
"access": "public",
"registry": "https://registry.npmjs.org"
},
"dependencies": {
"jimp": "^1.6.0"
},
"peerDependencies": {
"@ui-tars/sdk": "latest"
},
"devDependencies": {
"@ui-tars/sdk": "latest",
"tsup": "^8.3.5",
"typescript": "^5.7.2",
"vitest": "^3.0.2"
}
}
```
#### screenshot()
This method captures the current screen state and returns a `ScreenshotOutput`:
```typescript
interface ScreenshotOutput {
// Base64 encoded image string
base64: string;
// Physical screen width
width: number;
// Physical screen height
height: number;
// Device pixel ratio (DPR)
scaleFactor: number;
}
```
#### execute()
This method performs actions based on model predictions. It receives an `ExecuteParams` object:
```typescript
interface ExecuteParams {
// Raw prediction string from the model
prediction: string;
// Parsed prediction object
parsedPrediction: {
action_type: string;
action_inputs: Record<string, any>;
reflection: string | null;
thought: string;
};
// Physical screen width
screenWidth: number;
// Physical screen height
screenHeight: number;
// Device pixel ratio (DPR)
scaleFactor: number;
}
```
Advanced SDK usage builds on the `@ui-tars/sdk/core` package; you can create custom operators by extending the base `Operator` class:
```typescript
import {
  Operator,
  parseBoxToScreenCoords,
  StatusEnum,
  type ScreenshotOutput,
  type ExecuteParams,
  type ExecuteOutput,
} from '@ui-tars/sdk/core';
import { Jimp } from 'jimp';

export class CustomOperator extends Operator {
  // Define the action spaces and description for UI-TARS System Prompt splice
  static MANUAL = {
    ACTION_SPACES: [
      'click(start_box="") # click on the element at the specified coordinates',
      'type(content="") # type the specified content into the current input field',
      'scroll(direction="") # scroll the page in the specified direction',
      'finished() # finish the task',
      // ...more_actions
    ],
  };

  public async screenshot(): Promise<ScreenshotOutput> {
    // Implement screenshot functionality; here Jimp reads the image dimensions
    const base64 = 'base64-encoded-image';
    const buffer = Buffer.from(base64, 'base64');
    const image = await Jimp.read(buffer);

    return {
      base64,
      width: image.bitmap.width,
      height: image.bitmap.height,
      scaleFactor: 1
    };
  }

  async execute(params: ExecuteParams): Promise<ExecuteOutput> {
    const { parsedPrediction, screenWidth, screenHeight, scaleFactor } = params;
    // Implement action execution logic
    // if click action, get coordinates from parsedPrediction
    const startBoxStr = parsedPrediction?.action_inputs?.start_box || '';
    const { x: startX, y: startY } = parseBoxToScreenCoords({
      boxStr: startBoxStr,
      screenWidth,
      screenHeight,
    });

    if (parsedPrediction?.action_type === 'finished') {
      // finish the GUIAgent task
      return { status: StatusEnum.END };
    }
  }
}
```
Required methods:
- `screenshot()`: Captures the current screen state
- `execute()`: Performs the requested action based on model predictions
Optional static properties:
- `MANUAL`: Static metadata spliced into the UI-TARS system prompt
  - `ACTION_SPACES`: Describes the available actions and their parameters so the model knows what it can invoke
Load the custom operator into `GUIAgent`:
```ts
const guiAgent = new GUIAgent({
// ... other config
systemPrompt: `
// ... other system prompt
${CustomOperator.MANUAL.ACTION_SPACES.join('\n')}
`,
operator: new CustomOperator(),
});
```
### Custom Model Implementation
You can implement custom model logic by extending the `UITarsModel` class:
```typescript
import { UITarsModel } from '@ui-tars/sdk/core'; // import path assumed to match the operator examples above
class CustomUITarsModel extends UITarsModel {
constructor(modelConfig: { model: string }) {
super(modelConfig);
}
async invoke(params: any) {
// Implement custom model logic
return {
prediction: 'action description',
parsedPredictions: [{
action_type: 'click',
action_inputs: { /* ... */ },
reflection: null,
thought: 'reasoning'
}]
};
}
}
const agent = new GUIAgent({
model: new CustomUITarsModel({ model: 'custom-model' }),
// ... other config
});
```
> Note: Implementing a custom model is generally not recommended, because the built-in `UITarsModel` already encapsulates a lot of data-processing logic (image transformations, scaling factors, etc.) that you would otherwise have to reproduce.
### Planning
You can combine planning/reasoning models (such as OpenAI o1 or DeepSeek-R1) to build complex GUIAgent flows that plan, reason, and execute:
```ts
const guiAgent = new GUIAgent({
// ... other config
});
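// `reasoningModel` is a placeholder for any planning/reasoning LLM client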
const planningList = await reasoningModel.invoke({
conversations: [
{
role: 'user',
content: 'buy a ticket from beijing to shanghai',
}
]
})
/**
* [
* 'open chrome',
* 'open trip.com',
* 'click "search" button',
* 'select "beijing" in "from" input',
* 'select "shanghai" in "to" input',
* 'click "search" button',
* ]
*/
for (const planning of planningList) {
await guiAgent.run(planning);
}
``` | {
"source": "bytedance/UI-TARS-desktop",
"title": "packages/sdk/README.md",
"url": "https://github.com/bytedance/UI-TARS-desktop/blob/main/packages/sdk/README.md",
"date": "2025-01-19T09:04:43",
"stars": 2834,
"description": "A GUI Agent application based on UI-TARS(Vision-Lanuage Model) that allows you to control your computer using natural language.",
"file_size": 10822
} |
# @ui-tars/shared
## 1.2.0-beta.11
### Patch Changes
- chore: open-operator
## 1.2.0-beta.10
## 1.2.0-beta.9
### Patch Changes
- bump: sdk support
## 1.2.0-beta.6
### Patch Changes
- feat: new sdk
## 1.2.0-beta.5
### Patch Changes
- chore: update sdk
## 1.2.0-beta.4
### Patch Changes
- chore: new version
## 1.2.0-beta.3
### Patch Changes
- chore: add retry
## 1.2.0-beta.2
### Patch Changes
- chore: publish
## 1.2.0-beta.1
### Patch Changes
- chore: remove unused code
## 1.2.0-beta.0
### Minor Changes
- a062e03: feat: ui-tars agent sdk support
## 1.1.0
### Minor Changes
- fix(actionParser): no action text return null not error
## 1.0.1
### Patch Changes
- a1101ca: fix(action_parser): null values while parsing action inputs (#28)
## 1.0.0
### Major Changes
- dcabd45: feat: initial publish ui-tars action-parser and shared pkgs | {
"source": "bytedance/UI-TARS-desktop",
"title": "packages/shared/CHANGELOG.md",
"url": "https://github.com/bytedance/UI-TARS-desktop/blob/main/packages/shared/CHANGELOG.md",
"date": "2025-01-19T09:04:43",
"stars": 2834,
"description": "A GUI Agent application based on UI-TARS(Vision-Lanuage Model) that allows you to control your computer using natural language.",
"file_size": 872
} |
# @ui-tars/utio
## 1.2.0-beta.10
## 1.2.0-beta.9
### Patch Changes
- bump: sdk support
## 1.2.0-beta.6
### Patch Changes
- feat: new sdk
## 1.2.0-beta.5
### Patch Changes
- chore: update sdk
## 1.1.0-beta.4
### Patch Changes
- chore: new version
## 1.1.0-beta.3
### Patch Changes
- chore: add retry
## 1.1.0-beta.2
### Patch Changes
- chore: publish
## 1.1.0-beta.1
### Patch Changes
- chore: remove unused code
## 1.1.0-beta.0
### Minor Changes
- a062e03: feat: ui-tars agent sdk support | {
"source": "bytedance/UI-TARS-desktop",
"title": "packages/utio/CHANGELOG.md",
"url": "https://github.com/bytedance/UI-TARS-desktop/blob/main/packages/utio/CHANGELOG.md",
"date": "2025-01-19T09:04:43",
"stars": 2834,
"description": "A GUI Agent application based on UI-TARS(Vision-Lanuage Model) that allows you to control your computer using natural language.",
"file_size": 513
} |