conversations | source | score
---|---|---|
[
{
"from": "human",
"value": "如何处理七牛云的实时音视频QNLocalVideoTrackStats"
},
{
"from": "gpt",
"value": "[实时音视频](https://developer.qiniu.com/rtc) > API 文档 > iOS\n>QNLocalVideoTrackStats\n\n# QNLocalVideoTrackStats\n\n最近更新时间: 2023-08-22 12:08:01\n\n本地视频轨道的统计数据,可以通过\n[QNRTCClient.getLocalVideoTrackStats](https://developer.qiniu.com/rtc/8870/QNLocalVideoTrackStats-\niOS/8836) 获取,通过 trackID 获取对应轨道的统计信息\n\n> 如果发布时设置了\n> [QNCameraVideoTrackConfig.multiStreamEnable](https://developer.qiniu.com/rtc/8870/QNLocalVideoTrackStats-\n> iOS/8844),则返回的 List 中包含多个 QNLocalVideoTrackStats,每个对应不同的\n> [QNTrackProfile](https://developer.qiniu.com/rtc/8870/QNLocalVideoTrackStats-\n> iOS/9254)\n\nIndex\n\nCore Properties\n\n * profile\n * uplinkFrameRate\n * uplinkBitrate\n * uplinkRTT\n * uplinkLostRate\n * uplinkFrameWidth\n * uplinkFrameHeight\n * captureFrameRate\n * captureFrameWidth\n * captureFrameHeight\n * targetFrameRate\n * targetFrameWidth\n * targetFrameHeight\n\nCore Properties\n\nprofile\n\n * @property (nonatomic, assign, readonly) [QNTrackProfile](https://developer.qiniu.com/rtc/8870/QNLocalVideoTrackStats-iOS/9254) profile;\n * 本地视频上行 profile\n\nuplinkFrameRate\n\n * @property (nonatomic, assign, readonly) NSUInteger uplinkFrameRate;\n * 本地视频上行帧率\n\nuplinkBitrate\n\n * @property (nonatomic, assign, readonly) double uplinkBitrate;\n * 本地视频上行码率,单位 bps\n\nuplinkRTT\n\n * @property (nonatomic, assign, readonly) NSUInteger uplinkRTT;\n * 本地视频上行网络延时,单位毫秒\n\nuplinkLostRate\n\n * @property (nonatomic, assign, readonly) double uplinkLostRate;\n * 本地视频上行丢包率百分比,范围 [0, 100]\n\nuplinkFrameWidth\n\n * @property (nonatomic, assign, readonly) NSUInteger uplinkFrameWidth;\n * 本地视频上行视频帧宽度\n\nuplinkFrameHeight\n\n * @property (nonatomic, assign, readonly) NSUInteger uplinkFrameHeight;\n * 本地视频上行视频帧高度\n\ncaptureFrameRate\n\n * @property (nonatomic, assign, readonly) NSUInteger captureFrameRate;\n * 采集视频帧率\n\ncaptureFrameWidth\n\n * @property (nonatomic, assign, readonly) NSUInteger captureFrameWidth;\n * 采集视频帧宽度\n\ncaptureFrameHeight\n\n * @property (nonatomic, assign, readonly) NSUInteger captureFrameHeight;\n * 采集视频帧高度\n\ntargetFrameRate\n\n * @property (nonatomic, assign, readonly) NSUInteger targetFrameRate;\n * 期望的视频帧率\n\ntargetFrameWidth\n\n * @property (nonatomic, assign, readonly) NSUInteger targetFrameWidth;\n * 期望的视频帧宽度\n\ntargetFrameHeight\n\n * @property (nonatomic, assign, readonly) NSUInteger targetFrameHeight;\n * 期望的视频帧高度\n\n以上内容是否对您有帮助?\n\n"
}
] | qiniu developer | 1 |
[
{
"from": "human",
"value": "如何处理七牛云的实时音视频QNRemoteAudioTrackStats"
},
{
"from": "gpt",
"value": "[实时音视频](https://developer.qiniu.com/rtc) > API 文档 > iOS\n>QNRemoteAudioTrackStats\n\n# QNRemoteAudioTrackStats\n\n最近更新时间: 2022-06-30 15:43:35\n\n远端音频轨道的统计数据,可以通过\n[QNRTCClient.getRemoteAudioTrackStats](https://developer.qiniu.com/rtc/8868/QNRemoteAudioTrackStats-\niOS/8836) 获取,并通过 trackID 来获取对应轨道的统计信息。\n\nIndex\n\nCore Properties\n\n * downlinkBitrate\n * downlinkLostRate\n * uplinkRTT\n * uplinkLostRate\n\nCore Properties\n\ndownlinkBitrate\n\n * @property (nonatomic, assign, readonly) double downlinkBitrate;\n * 远端音频下行码率,单位 bps\n\n> 当前应用下载对应轨道时的码率,和远端用户上行无关\n\ndownlinkLostRate\n\n * @property (nonatomic, assign, readonly) double downlinkLostRate;\n * 远端音频下行丢包率,范围 [0, 100]\n\n> 当前应用下载对应轨道时的丢包率,和远端用户上行无关\n\nuplinkRTT\n\n * @property (nonatomic, assign, readonly) NSUInteger uplinkRTT;\n * 远端音频上行网络延时,单位毫秒\n\n表示远端用户上传自身音频数据时的延时\n\nuplinkLostRate\n\n * @property (nonatomic, assign, readonly) double uplinkLostRate;\n * 远端音频上行丢包率百分比,范围 [0, 100]\n\n表示远端用户上传自身音频数据时的丢包率\n\n以上内容是否对您有帮助?\n\n"
}
] | qiniu developer | 1 |
[
{
"from": "human",
"value": "如何处理七牛云的实时音视频QNRemoteVideoTrackStats"
},
{
"from": "gpt",
"value": "[实时音视频](https://developer.qiniu.com/rtc) > API 文档 > iOS\n>QNRemoteVideoTrackStats\n\n# QNRemoteVideoTrackStats\n\n最近更新时间: 2023-08-22 12:08:54\n\n远端视频轨道的统计数据,可以通过\n[QNRTCClient.getRemoteVideoTrackStats](https://developer.qiniu.com/rtc/8867/QNRemoteVideoTrackStats-\niOS/8836) 获取,并通过 trackID 来获取对应轨道的统计信息。\n\nIndex\n\nCore Properties\n\n * profile\n * downlinkFrameRate\n * downlinkBitrate\n * downlinkLostRate\n * uplinkRTT\n * uplinkLostRate\n * uplinkFrameWidth\n * uplinkFrameHeight\n\nCore Properties\n\nprofile\n\n * @property (nonatomic, assign, readonly) [QNTrackProfile](https://developer.qiniu.com/rtc/8867/QNRemoteVideoTrackStats-iOS/9254) profile;\n * 远端视频 profile,表示当前应用正在接收的视频轨道的 profile\n\ndownlinkFrameRate\n\n * @property (nonatomic, assign, readonly) NSUInteger downlinkFrameRate;\n * 远端视频下行帧率,当前应用正在下载的轨道的帧率\n\ndownlinkBitrate\n\n * @property (nonatomic, assign, readonly) double downlinkBitrate;\n * 远端视频下行码率,单位 bps\n\n> 当前应用下载对应轨道时的码率,和远端用户上行无关\n\ndownlinkLostRate\n\n * @property (nonatomic, assign, readonly) double downlinkLostRate;\n * 远端视频下行丢包率,范围 [0, 100]\n\n> 当前应用下载对应轨道时的丢包率,和远端用户上行无关\n\nuplinkRTT\n\n * @property (nonatomic, assign, readonly) NSUInteger uplinkRTT;\n * 远端视频上行网络时延,单位毫秒\n\n表示远端用户上传自身视频数据时的延时\n\nuplinkLostRate\n\n * @property (nonatomic, assign, readonly) double uplinkLostRate;\n * 远端视频上行丢包率百分比,范围 [0, 100]\n\n表示远端用户上传自身音频数据时的丢包率\n\nuplinkFrameWidth\n\n * @property (nonatomic, assign, readonly) NSUInteger uplinkFrameWidth;\n * 远端用户编码输出的视频宽度\n\nuplinkFrameHeight\n\n * @property (nonatomic, assign, readonly) NSUInteger uplinkFrameHeight;\n * 远端用户编码输出的视频高度\n\n以上内容是否对您有帮助?\n\n"
}
] | qiniu developer | 1 |
[
{
"from": "human",
"value": "如何处理七牛云的实时音视频QNRemoteUser"
},
{
"from": "gpt",
"value": "[实时音视频](https://developer.qiniu.com/rtc) > API 文档 > iOS >QNRemoteUser\n\n# QNRemoteUser\n\n最近更新时间: 2022-06-30 15:47:36\n\n远端用户的相关信息,包括\n[QNRemoteAudioTrack](https://developer.qiniu.com/rtc/8835/QNRemoteUser-\niOS/8859/QNRemoteAudioTrack-\niOS)、[QNRemoteVideoTrack](https://developer.qiniu.com/rtc/8835/QNRemoteUser-\niOS/8858/QNRemoteVideoTrack-iOS) 等\n\nIndex\n\nCore Properties\n\n * userID\n * userData\n * audioTrack\n * videoTrack\n\nCore Properties\n\nuserID\n\n * @property (nonatomic, strong) NSString *userID;\n * 用户 ID\n\nuserData\n\n * @property (nonatomic, strong) NSString *userData;\n * 自定义信息,可传给房间内的其他用户\n\naudioTrack\n\n * @property (nonatomic,strong, readonly) NSArray <[QNRemoteAudioTrack](https://developer.qiniu.com/rtc/8835/QNRemoteUser-iOS/8859/QNRemoteAudioTrack-iOS) *> *audioTrack;\n * 获取用户发布的音频 Track 列表\n\n可通过\n[QNRemoteAudioTrack.isSubscribed](https://developer.qiniu.com/rtc/8835/QNRemoteUser-\niOS/8859) 获取该 Track 当前是否被订阅\n\nvideoTrack\n\n * @property (nonatomic,strong, readonly) NSArray <[QNRemoteVideoTrack](https://developer.qiniu.com/rtc/8835/QNRemoteUser-iOS/8858/QNRemoteVideoTrack-iOS) *> *videoTrack;\n * 获取用户发布的视频 Track 列表\n\n可通过\n[QNRemoteVideoTrack.isSubscribed](https://developer.qiniu.com/rtc/8835/QNRemoteUser-\niOS/8858) 获取该 Track 当前是否被订阅\n\n以上内容是否对您有帮助?\n\n"
}
] | qiniu developer | 1 |
[
{
"from": "human",
"value": "如何处理七牛云的实时音视频QNMediaSource"
},
{
"from": "gpt",
"value": "[实时音视频](https://developer.qiniu.com/rtc) > API 文档 > iOS >QNMediaSource\n\n# QNMediaSource\n\n最近更新时间: 2024-08-16 17:44:13\n\n播放器数据源参数\n\nIndex\n\nCore Properties\n\n * url\n\nCore Properties\n\nurl\n\n * @property (nonatomic, strong) NSString *url;\n * 设置视频播放地址\n\n以上内容是否对您有帮助?\n\n"
}
] | qiniu developer | 1 |
[
{
"from": "human",
"value": "如何处理七牛云的实时音视频API 概览"
},
{
"from": "gpt",
"value": "[实时音视频](https://developer.qiniu.com/rtc) > API 文档 > Web >API 概览\n\n# API 概览\n\n最近更新时间: 2022-07-14 14:31:43\n\nQNWebRTC 是七牛推出的一款适用于 web 平台的音视频通话 SDK,提供了包括音视频通话、静音、发布、订阅等多种功能,基于此\nSDK,您可以很方便地在 Web 上开发出一款基于实时音视频的应用。\n\n> 本文档适用于 QNWebRTC 4.x 版本的 SDK,并不向下兼容。\n>\n> QNWebRTC 4.x 是在 2.x 的基础上进行了 API 的重构和优化,从而使使用者能够更加高效的接入,所以新接入的用户推荐使用最新版本的\n> SDK。\n>\n> 老用户也可以继续使用 QNWebRTC 2.x 版本,我们也会持续在此版本上进行问题修复和版本更迭,但是后续的一些新功能有可能并不会在 2.x\n> 版本上提供。\n\nQNRTC\n\n[QNRTC](https://developer.qiniu.com/rtc/9246/WEB%20API%20%E6%A6%82%E8%A7%88/9070/WebQNRTC)\n提供 SDK 的入口方法,可以通过它进行 SDK 的初始化,各个类型 Track 的创建等操作。其核心方法如下:\n\n初始化\n\n方法 | 描述 \n---|--- \n[createClient](https://developer.qiniu.com/rtc/9246/WEB%20API%20%E6%A6%82%E8%A7%88/9070) | 创建 QNRTCClient 对象 \n \n创建 Track\n\n方法 | 描述 \n---|--- \n[createCameraVideoTrack](https://developer.qiniu.com/rtc/9246/WEB%20API%20%E6%A6%82%E8%A7%88/9070) | 通过摄像头采集的视频创建一个视频轨道 \n[createMicrophoneAudioTrack](https://developer.qiniu.com/rtc/9246/WEB%20API%20%E6%A6%82%E8%A7%88/9070) | 通过麦克风采集的音频创建一个音频轨道 \n[createScreenVideoTrack](https://developer.qiniu.com/rtc/9246/WEB%20API%20%E6%A6%82%E8%A7%88/9070) | 创建用于屏幕共享的视频轨道 \n[createMicrophoneAndCameraTracks](https://developer.qiniu.com/rtc/9246/WEB%20API%20%E6%A6%82%E8%A7%88/9070) | 同时采集麦克风音频和摄像头视频轨道 \n[createBufferSourceAudioTrack](https://developer.qiniu.com/rtc/9246/WEB%20API%20%E6%A6%82%E8%A7%88/9070) | 通过外部音频数据创建一个音频轨道 \n[createCustomAudioTrack](https://developer.qiniu.com/rtc/9246/WEB%20API%20%E6%A6%82%E8%A7%88/9070) | 创建一个自定义的音频轨道 \n[createCustomVideoTrack](https://developer.qiniu.com/rtc/9246/WEB%20API%20%E6%A6%82%E8%A7%88/9070) | 创建一个自定义的视频轨道 \n \n传输模式\n\n方法 | 描述 \n---|--- \n[setTransportPolicy](https://developer.qiniu.com/rtc/9246/WEB%20API%20%E6%A6%82%E8%A7%88/9070) | 设置传输模式 \n \nQNRTCClient\n\n[QNRTCClient](https://developer.qiniu.com/rtc/9246/WEB%20API%20%E6%A6%82%E8%A7%88/9090/WebQNRTCClient)\n提供和房间相关的一系列的方法,通过它我们可以实现加入房间,在房间内发布或者订阅相应的音视频轨道等操作。核心方法如下:\n\n核心方法\n\n方法 | 描述 \n---|--- \n[join](https://developer.qiniu.com/rtc/9246/WEB%20API%20%E6%A6%82%E8%A7%88/9090) | 加入房间 \n[leave](https://developer.qiniu.com/rtc/9246/WEB%20API%20%E6%A6%82%E8%A7%88/9090) | 离开房间 \n[publish](https://developer.qiniu.com/rtc/9246/WEB%20API%20%E6%A6%82%E8%A7%88/9090) | 发布本地 Track \n[unpublish](https://developer.qiniu.com/rtc/9246/WEB%20API%20%E6%A6%82%E8%A7%88/9090) | 取消发布本地 Track \n[subscribe](https://developer.qiniu.com/rtc/9246/WEB%20API%20%E6%A6%82%E8%A7%88/9090) | 订阅远端 Track \n[unsubscribe](https://developer.qiniu.com/rtc/9246/WEB%20API%20%E6%A6%82%E8%A7%88/9090) | 取消订阅远端 Track \n[sendMessage](https://developer.qiniu.com/rtc/9246/WEB%20API%20%E6%A6%82%E8%A7%88/9090) | 发送自定义消息 \n \n转推 CDN\n\n方法 | 描述 \n---|--- \n[startDirectLiveStreaming](https://developer.qiniu.com/rtc/9246/WEB%20API%20%E6%A6%82%E8%A7%88/9090) | 开启单路转推 \n[stopDirectLiveStreaming](https://developer.qiniu.com/rtc/9246/WEB%20API%20%E6%A6%82%E8%A7%88/9090) | 停止单路转推 \n[startTranscodingLiveStreaming](https://developer.qiniu.com/rtc/9246/WEB%20API%20%E6%A6%82%E8%A7%88/9090) | 开启合流转推 \n[stopTranscodingLiveStreaming](https://developer.qiniu.com/rtc/9246/WEB%20API%20%E6%A6%82%E8%A7%88/9090) | 停止合流转推 \n[setTranscodingLiveStreamingTracks](https://developer.qiniu.com/rtc/9246/WEB%20API%20%E6%A6%82%E8%A7%88/9090) | 新增、更新合流转推的 Tracks \n[removeTranscodingLiveStreamingTracks](https://developer.qiniu.com/rtc/9246/WEB%20API%20%E6%A6%82%E8%A7%88/9090) | 删除合流转推的 Tracks \n \n跨房连麦\n\n方法 | 描述 \n---|--- \n[setClientMode](https://developer.qiniu.com/rtc/9246/WEB%20API%20%E6%A6%82%E8%A7%88/9090) | 设置跨房连麦场景 
\n[setClientRole](https://developer.qiniu.com/rtc/9246/WEB%20API%20%E6%A6%82%E8%A7%88/9090) | 设置互动直播场景下的用户角色 \n[startMediaRelay](https://developer.qiniu.com/rtc/9246/WEB%20API%20%E6%A6%82%E8%A7%88/9090) | 开始跨房媒体转发 \n[updateMediaRelay](https://developer.qiniu.com/rtc/9246/WEB%20API%20%E6%A6%82%E8%A7%88/9090) | 更新跨房媒体转发 \n[stopMediaRelay](https://developer.qiniu.com/rtc/9246/WEB%20API%20%E6%A6%82%E8%A7%88/9090) | 停止跨房媒体转发 \n \nQNLocalTrack 类\n\n[QNLocalTrack](https://developer.qiniu.com/rtc/9246/WEB%20API%20%E6%A6%82%E8%A7%88/9085/WebQNLocalTrack)\n是本地创建的一个音视或者视频轨道,可以通过\n[QNRTC](https://developer.qiniu.com/rtc/9246/WEB%20API%20%E6%A6%82%E8%A7%88/9070/WebQNRTC)\n来创建各种类型的音视频轨道,例如麦克风采集的音频轨道,摄像头采集的视频轨道,或者屏幕采集的视频轨道等等,我们真正用到的是它所衍生的子类。\n\n以\n[QNLocalTrack](https://developer.qiniu.com/rtc/9246/WEB%20API%20%E6%A6%82%E8%A7%88/9085/WebQNLocalTrack)\n为基类,向下衍生的子类如下:\n\nHierarchy\n\n * [QNLocalTrack](https://developer.qiniu.com/rtc/9246/WEB%20API%20%E6%A6%82%E8%A7%88/9085/WebQNLocalTrack)\n * [QNLocalVideoTrack](https://developer.qiniu.com/rtc/9246/WEB%20API%20%E6%A6%82%E8%A7%88/9061/WebQNLocalVideoTrack)\n * [QNCameraVideoTrack](https://developer.qiniu.com/rtc/9246/WEB%20API%20%E6%A6%82%E8%A7%88/9068/WebQNCameraVideoTrack)\n * [QNScreenVideoTrack](https://developer.qiniu.com/rtc/9246/WEB%20API%20%E6%A6%82%E8%A7%88/9092/WebQNScreenVideoTrack)\n * [QNCustomVideoTrack](https://developer.qiniu.com/rtc/9246/WEB%20API%20%E6%A6%82%E8%A7%88/9088/WebQNCustomVideoTrack)\n * [QNLocalAudioTrack](https://developer.qiniu.com/rtc/9246/WEB%20API%20%E6%A6%82%E8%A7%88/9089/WebQNLocalAudioTrack)\n * [QNMicrophoneAudioTrack](https://developer.qiniu.com/rtc/9246/WEB%20API%20%E6%A6%82%E8%A7%88/9064/WebQNMicrophoneAudioTrack)\n * [QNBufferSourceAudioTrack](https://developer.qiniu.com/rtc/9246/WEB%20API%20%E6%A6%82%E8%A7%88/9069/WebQNBufferSourceAudioTrack)\n * [QNCustomAudioTrack](https://developer.qiniu.com/rtc/9246/WEB%20API%20%E6%A6%82%E8%A7%88/9091/WebQNCustomAudioTrack)\n\n * 本地音频轨道 | 描述 \n---|--- \n[QNMicrophoneAudioTrack](https://developer.qiniu.com/rtc/9246/WEB%20API%20%E6%A6%82%E8%A7%88/9064/WebQNMicrophoneAudioTrack) | 以麦克风采集到的音频数据为数据源创建的音频轨道,通过 QNRTC.createMicrophoneAudioTrack 创建。 \n[QNBufferSourceAudioTrack](https://developer.qiniu.com/rtc/9246/WEB%20API%20%E6%A6%82%E8%A7%88/9069/WebQNBufferSourceAudioTrack) | 以外部导入的音频数据为数据源创建的音频轨道,通过 QNRTC.createBufferSourceAudioTrack 创建。 \n[QNCustomAudioTrack](https://developer.qiniu.com/rtc/9246/WEB%20API%20%E6%A6%82%E8%A7%88/9091/WebQNCustomAudioTrack) | 通过自定义的 MediaStreamTrack 对象创建的音频轨道,通过 QNRTC.createCustomAudioTrack 创建。 \n本地视频轨道 | 描述 \n---|--- \n[QNCameraVideoTrack](https://developer.qiniu.com/rtc/9246/WEB%20API%20%E6%A6%82%E8%A7%88/9068/WebQNCameraVideoTrack) | 以摄像头采集到的视频数据为数据源创建的视频轨道,通过 QNRTC.createCameraVideoTrack 创建。 \n[QNScreenVideoTrack](https://developer.qiniu.com/rtc/9246/WEB%20API%20%E6%A6%82%E8%A7%88/9092/WebQNScreenVideoTrack) | 以屏幕采集到的视频数据为数据源创建的视频轨道,通过 QNRTC.createScreenVideoTrack 创建。 \n[QNCustomVideoTrack](https://developer.qiniu.com/rtc/9246/WEB%20API%20%E6%A6%82%E8%A7%88/9088/WebQNCustomVideoTrack) | 通过自定义的 MediaStreamTrack 对象创建的音频轨道,通过 QNRTC.createCameraVideoTrack 创建。 \n\nQNRemoteTrack 类\n\n[QNRemoteTrack](https://developer.qiniu.com/rtc/9246/WEB%20API%20%E6%A6%82%E8%A7%88/9082/WebQNRemoteTrack)\n是远端用户创建的音视频轨道,本地用户可以通过房间中的事件回调获取。其衍生的子类如下:\n\nHierarchy\n\n * [QNRemoteTrack](https://developer.qiniu.com/rtc/9246/WEB%20API%20%E6%A6%82%E8%A7%88/9082/WebQNRemoteTrack)\n * 
[QNRemoteVideoTrack](https://developer.qiniu.com/rtc/9246/WEB%20API%20%E6%A6%82%E8%A7%88/9060/WebQNRemoteVideoTrack)\n * [QNRemoteAudioTrack](https://developer.qiniu.com/rtc/9246/WEB%20API%20%E6%A6%82%E8%A7%88/9087/WebQNRemoteAudioTrack)\n\n * 远端音视频轨道 | 描述 \n---|--- \n[QNRemoteVideoTrack](https://developer.qiniu.com/rtc/9246/WEB%20API%20%E6%A6%82%E8%A7%88/9060/WebQNRemoteVideoTrack) | 远端用户发布的音频轨道。 \n[QNRemoteAudioTrack](https://developer.qiniu.com/rtc/9246/WEB%20API%20%E6%A6%82%E8%A7%88/9087/WebQNRemoteAudioTrack) | 远端用户发布的视频轨道。 \n\nQNRTCClientEvent\n\n[QNRTCClient](https://developer.qiniu.com/rtc/9246/WEB%20API%20%E6%A6%82%E8%A7%88/9090/WebQNRTCClient)\n继承于 EventEmitter,通过监听一系列事件,触发对应回调函数,例如:\n\n \n \n client.on('user-joined', function(remoteUserID, userData) {\n console.log('user-joined', remoteUserID, userData)\n })\n\n * 事件 | 描述 \n---|--- \n[user-joined](https://developer.qiniu.com/rtc/9246/WEB%20API%20%E6%A6%82%E8%A7%88/9090) | 用户加入房间 \n[user-left](https://developer.qiniu.com/rtc/9246/WEB%20API%20%E6%A6%82%E8%A7%88/9090) | 用户离开房间 \n[user-published](https://developer.qiniu.com/rtc/9246/WEB%20API%20%E6%A6%82%E8%A7%88/9090) | 用户发布 Tracks \n[user-unpublished](https://developer.qiniu.com/rtc/9246/WEB%20API%20%E6%A6%82%E8%A7%88/9090) | 用户取消发布 Tracks \n[connection-state-changed](https://developer.qiniu.com/rtc/9246/WEB%20API%20%E6%A6%82%E8%A7%88/9090) | 与房间的连接状态改变 \n[message-received](https://developer.qiniu.com/rtc/9246/WEB%20API%20%E6%A6%82%E8%A7%88/9090) | 收到自定义消息 \n[user-reconnecting](https://developer.qiniu.com/rtc/9246/WEB%20API%20%E6%A6%82%E8%A7%88/9090) | 远端用户重连中 \n[user-reconnected](https://developer.qiniu.com/rtc/9246/WEB%20API%20%E6%A6%82%E8%A7%88/9090) | 远端用户重连成功 \n[direct-livestreaming-state-changed](https://developer.qiniu.com/rtc/9246/WEB%20API%20%E6%A6%82%E8%A7%88/9090) | 单路转推状态变化 \n[transcoding-livestreaming-state-changed](https://developer.qiniu.com/rtc/9246/WEB%20API%20%E6%A6%82%E8%A7%88/9090) | 合流转推状态变化 \n\n以上内容是否对您有帮助?\n\n"
}
] | qiniu developer | 1 |
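The Web API overview above lists the typical call order (create client, create tracks, join, publish, subscribe) without showing it end to end. Below is a minimal join/publish/subscribe sketch in the style of the snippets quoted in these docs; the `QNRTC` import, the `roomToken` value, and the `remote` container id are assumptions, while every method and event used is named in the overview.

```javascript
// Minimal sketch of the basic Web flow, assuming the SDK is available as `QNRTC`
// (the import/package name is not shown in this excerpt) and that `roomToken`
// comes from your app server.
const client = QNRTC.createClient();

// Subscribe and render remote tracks as soon as they are published.
client.on('user-published', async function (userID, tracks) {
  const { videoTracks, audioTracks } = await client.subscribe(tracks);
  const container = document.getElementById('remote'); // hypothetical container id
  for (const track of videoTracks.concat(audioTracks)) {
    track.play(container);
  }
});

async function start(roomToken) {
  await client.join(roomToken);
  // Capture microphone + camera with the documented defaults.
  const [audioTrack, videoTrack] = await QNRTC.createMicrophoneAndCameraTracks();
  await client.publish([audioTrack, videoTrack]);
}
```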
[
{
"from": "human",
"value": "如何处理七牛云的实时音视频QNRTC"
},
{
"from": "gpt",
"value": "[实时音视频](https://developer.qiniu.com/rtc) > API 文档 > Web >QNRTC\n\n# QNRTC\n\n最近更新时间: 2023-07-05 12:41:36\n\nQNRTC 是 SDK 的入口,提供了 SDK 配置,创建本地音视频轨道,创建房间对象等一系列方法。\n\nIndex\n\nProperties\n\n * VERSION\n\nCallbacks\n\n * onCameraChanged\n * onMicrophoneChanged\n * onPlaybackDeviceChanged\n\nCore Methods\n\n * isBrowserSupported\n * checkSystemRequirements\n * setLogLevel\n * isChromeExtensionAvailable\n * createClient\n * getCameras\n * getMicrophones\n * getPlaybackDevices\n * getDevices\n * createCameraVideoTrack\n * createMicrophoneAudioTrack\n * createScreenVideoTrack\n * createMicrophoneAndCameraTracks\n * createBufferSourceAudioTrack\n * createCanvasVideoTrack\n * createCustomAudioTrack\n * createCustomVideoTrack\n * checkAudioTrackIsActive\n * checkVideoTrackIsActive\n * setTransportPolicy\n * getElectronScreenSources\n * createDeviceManger\n\nProperties\n\nVERSION\n\n * VERSION: string\n * 获取 SDK 版本号\n\nCallbacks\n\nonCameraChanged\n\n * onCameraChanged: undefined | (info: QNDeviceInfo) => void\n * 摄像头变化回调,提示有摄像头被添加或移除\n \n QNRTC.onCameraChanged = function(info) {\n console.log('camera changed!', info);\n };\n\nParameters\n\n * info: [QNDeviceInfo](https://developer.qiniu.com/rtc/9070/WebQNRTC/9054/WebQNDeviceInfo)\n\n设备信息\n\nonMicrophoneChanged\n\n * onMicrophoneChanged: undefined | (info: QNDeviceInfo) => void\n * 麦克风变化回调,提示有麦克风被添加或移除\n \n QNRTC.onMicrophoneChanged = function(info) {\n console.log('microphone changed!', info);\n };\n\nParameters\n\n * info: [QNDeviceInfo](https://developer.qiniu.com/rtc/9070/WebQNRTC/9054/WebQNDeviceInfo)\n\n设备信息\n\nonPlaybackDeviceChanged\n\n * onPlaybackDeviceChanged: undefined | (info: QNDeviceInfo) => void\n * 播放设备变化回调,提示有音频播放设备被添加或移除\n \n QNRTC.onPlaybackDeviceChanged = function(info) {\n console.log('speaker changed!', info);\n };\n\nParameters\n\n * info: [QNDeviceInfo](https://developer.qiniu.com/rtc/9070/WebQNRTC/9054/WebQNDeviceInfo)\n\n设备信息\n\nCore Methods\n\nisBrowserSupported\n\n * isBrowserSupported(): boolean\n * 当前浏览器是否支持 QNRTC SDK\n\nReturns boolean\n\ncheckSystemRequirements\n\n * checkSystemRequirements(): Promise<[QNSystemRequirementsTestResult](https://developer.qiniu.com/rtc/9070/WebQNRTC/9084/WebQNSystemRequirementsTestResult)>\n * 系统兼容检测。\n\nReturns\nPromise<[QNSystemRequirementsTestResult](https://developer.qiniu.com/rtc/9070/WebQNRTC/9084/WebQNSystemRequirementsTestResult)>\n\nsetLogLevel\n\n * setLogLevel(level: [QNLogLevel](https://developer.qiniu.com/rtc/9070/WebQNRTC/9053/WebQNLogLevel)): void\n * 设置日志打印等级。\n\nParameters\n\n * level: [QNLogLevel](https://developer.qiniu.com/rtc/9070/WebQNRTC/9053/WebQNLogLevel)\n\n日志等级\n\nReturns void\n\nisChromeExtensionAvailable\n\n * isChromeExtensionAvailable(): Promise<boolean>\n * 检查 Chrome 浏览器插件是否已安装。\n\nReturns Promise<boolean>\n\ncreateClient\n\n * createClient(): [QNRTCClient](https://developer.qiniu.com/rtc/9070/WebQNRTC/9090/WebQNRTCClient)\n * 创建QNRTCClient对象\n\nReturns\n[QNRTCClient](https://developer.qiniu.com/rtc/9070/WebQNRTC/9090/WebQNRTCClient)\n\ngetCameras\n\n * getCameras(skipPermissionCheck = false): Promise<[MediaDeviceInfo](https://developer.mozilla.org/en-US/docs/Web/API/MediaDeviceInfo)[]>\n * 枚举可用的视频输入设备,比如摄像头\n\nParameters\n\n * Optional skipPermissionCheck: boolean \n\n将该参数设置成 true 跳过权限检查,但是 SDK 将无法保证获取准确的媒体设备信息。\n\n🔹 true: 跳过权限检查\n\n🔹 false:(默认)不跳过权限检查。\n\nReturns Promise<[MediaDeviceInfo](https://developer.mozilla.org/en-\nUS/docs/Web/API/MediaDeviceInfo)[]>\n\ngetMicrophones\n\n * getMicrophones(skipPermissionCheck = false): 
Promise<[MediaDeviceInfo](https://developer.mozilla.org/en-US/docs/Web/API/MediaDeviceInfo)[]>\n * 枚举可用的音频输入设备,比如麦克风\n\nParameters\n\n * Optional skipPermissionCheck: boolean \n\n将该参数设置成 true 跳过权限检查,但是 SDK 将无法保证获取准确的媒体设备信息。\n\n🔹 true: 跳过权限检查\n\n🔹 false:(默认)不跳过权限检查。\n\nReturns Promise<[MediaDeviceInfo](https://developer.mozilla.org/en-\nUS/docs/Web/API/MediaDeviceInfo)[]>\n\ngetPlaybackDevices\n\n * getPlaybackDevices(skipPermissionCheck = false): Promise<[MediaDeviceInfo](https://developer.mozilla.org/en-US/docs/Web/API/MediaDeviceInfo)[]>\n * 枚举可用的音频播放设备,比如扬声器\n\nParameters\n\n * Optional skipPermissionCheck: boolean \n\n将该参数设置成 true 跳过权限检查,但是 SDK 将无法保证获取准确的媒体设备信息。\n\n🔹 true: 跳过权限检查\n\n🔹 false:(默认)不跳过权限检查。\n\nReturns Promise<[MediaDeviceInfo](https://developer.mozilla.org/en-\nUS/docs/Web/API/MediaDeviceInfo)[]>\n\ngetDevices\n\n * getDevices(skipPermissionCheck?: boolean): Promise<[MediaDeviceInfo](https://developer.mozilla.org/en-US/docs/Web/API/MediaDeviceInfo)[]>\n * 枚举可用的媒体输入和输出设备,比如麦克风、摄像头、耳机等\n\nParameters\n\n * Optional skipPermissionCheck: boolean \n\n将该参数设置成 true 跳过权限检查,但是 SDK 将无法保证获取准确的媒体设备信息。\n\n🔹 true: 跳过权限检查\n\n🔹 false:(默认)不跳过权限检查。\n\nReturns Promise<[MediaDeviceInfo](https://developer.mozilla.org/en-\nUS/docs/Web/API/MediaDeviceInfo)[]>\n\ncreateCameraVideoTrack\n\n * createCameraVideoTrack(config?: [QNCameraVideoTrackConfig](https://developer.qiniu.com/rtc/9070/WebQNRTC/9077/WebQNCameraVideoTrackConfig)): Promise<[QNCameraVideoTrack](https://developer.qiniu.com/rtc/9070/WebQNRTC/9068/WebQNCameraVideoTrack)>\n * 通过摄像头采集的视频创建一个视频轨道。\n\n> 默认 tag 为 ' '、前置摄像头、传输优化模式\n> [DEFAULT](https://developer.qiniu.com/rtc/9070/WebQNRTC/9079)、编码配置\n> [480p](https://developer.qiniu.com/rtc/9070/WebQNRTC/9075)。\n\nParameters\n\n * Optional config: [QNCameraVideoTrackConfig](https://developer.qiniu.com/rtc/9070/WebQNRTC/9077/WebQNCameraVideoTrackConfig)\n\n摄像头视频采集的配置信息。\n\nReturns\nPromise<[QNCameraVideoTrack](https://developer.qiniu.com/rtc/9070/WebQNRTC/9068/WebQNCameraVideoTrack)>\n\ncreateMicrophoneAudioTrack\n\n * createMicrophoneAudioTrack(config?: [QNMicrophoneAudioTrackConfig](https://developer.qiniu.com/rtc/9070/WebQNRTC/9055/WebQNMicrophoneAudioTrackConfig)): Promise<[QNMicrophoneAudioTrack](https://developer.qiniu.com/rtc/9070/WebQNRTC/9064/WebQNMicrophoneAudioTrack)>\n * 通过麦克风采集的音频创建一个音频轨道。\n\n> 默认 tag 为 ' '、传输优化模式\n> [DEFAULT](https://developer.qiniu.com/rtc/9070/WebQNRTC/9079)、开启自动增益、开启回声消除、开启降噪、编码配置\n> [STANDARD](https://developer.qiniu.com/rtc/9070/WebQNRTC/9075)。\n\nParameters\n\n * Optional config: [QNMicrophoneAudioTrackConfig](https://developer.qiniu.com/rtc/9070/WebQNRTC/9055/WebQNMicrophoneAudioTrackConfig)\n\n麦克风音频采集的配置信息。\n\nReturns\nPromise<[QNMicrophoneAudioTrack](https://developer.qiniu.com/rtc/9070/WebQNRTC/9064/WebQNMicrophoneAudioTrack)>\n\ncreateScreenVideoTrack\n\n * createScreenVideoTrack(config?: [QNScreenVideoTrackConfig](https://developer.qiniu.com/rtc/9070/WebQNRTC/9056/WebQNScreenVideoTrackConfig), withAudio?: string): Promise<[[QNScreenVideoTrack](https://developer.qiniu.com/rtc/9070/WebQNRTC/9092/WebQNScreenVideoTrack), [QNLocalAudioTrack](https://developer.qiniu.com/rtc/9070/WebQNRTC/9089/WebQNLocalAudioTrack)] | [QNScreenVideoTrack](https://developer.qiniu.com/rtc/9070/WebQNRTC/9092/WebQNScreenVideoTrack)>\n * 创建用于屏幕共享的视频轨道。\n\n> 默认 screenVideoTag 为 ' 'screenAudioTag 为 ' '、传输优化模式\n> [DEFAULT](https://developer.qiniu.com/rtc/9070/WebQNRTC/9079)、编码配置\n> [720p](https://developer.qiniu.com/rtc/9070/WebQNRTC/9075)。\n\nParameters\n\n * Optional config: 
[QNScreenVideoTrackConfig](https://developer.qiniu.com/rtc/9070/WebQNRTC/9056/WebQNScreenVideoTrackConfig)\n\n采集屏幕共享视频的配置信息。\n\n * Optional withAudio: string \n\n屏幕共享时是否同时采集系统声音。\n\n🔹 enable: 采集系统声音,如果采集失败,抛出错误\n\n🔹 disable: (默认)不采集系统声音\n\n🔹 auto: 采集系统声音,如果采集失败,不抛出错误,无音频轨道返回\n\nReturns Promise<[[QNScreenVideoTrack](https://developer.qiniu.com/rtc/9070/WebQNRTC/9092/WebQNScreenVideoTrack), [QNLocalAudioTrack](https://developer.qiniu.com/rtc/9070/WebQNRTC/9089/WebQNLocalAudioTrack)] | [QNScreenVideoTrack](https://developer.qiniu.com/rtc/9070/WebQNRTC/9092/WebQNScreenVideoTrack)>\n\ncreateMicrophoneAndCameraTracks\n\n * createMicrophoneAndCameraTracks(audioConfig?: [QNMicrophoneAudioTrackConfig](https://developer.qiniu.com/rtc/9070/WebQNRTC/9055/WebQNMicrophoneAudioTrackConfig), videoConfig?: [QNCameraVideoTrackConfig](https://developer.qiniu.com/rtc/9070/WebQNRTC/9077/WebQNCameraVideoTrackConfig)): Promise<[[QNMicrophoneAudioTrack](https://developer.qiniu.com/rtc/9070/WebQNRTC/9064/WebQNMicrophoneAudioTrack), [QNCameraVideoTrack](https://developer.qiniu.com/rtc/9070/WebQNRTC/9068/WebQNCameraVideoTrack)]>\n * 同时采集麦克风音频和摄像头视频轨道。\n\n> 视频配置默认 tag 为 ' '、前置摄像头、传输优化模式\n> [DEFAULT](https://developer.qiniu.com/rtc/9070/WebQNRTC/9079)、编码配置\n> [480p](https://developer.qiniu.com/rtc/9070/WebQNRTC/9075)。\n>\n> 音频配置默认 tag 为 ' '、传输优化模式\n> [DEFAULT](https://developer.qiniu.com/rtc/9070/WebQNRTC/9079)、开启自动增益、开启回声消除、开启降噪、编码配置\n> [STANDARD](https://developer.qiniu.com/rtc/9070/WebQNRTC/9075)。\n\nParameters\n\n * Optional audioConfig: [QNMicrophoneAudioTrackConfig](https://developer.qiniu.com/rtc/9070/WebQNRTC/9055/WebQNMicrophoneAudioTrackConfig)\n\n麦克风音频采集的配置信息。\n\n * Optional videoConfig: [QNCameraVideoTrackConfig](https://developer.qiniu.com/rtc/9070/WebQNRTC/9077/WebQNCameraVideoTrackConfig)\n\n摄像头视频采集的配置信息。\n\nReturns\nPromise<[[QNMicrophoneAudioTrack](https://developer.qiniu.com/rtc/9070/WebQNRTC/9064/WebQNMicrophoneAudioTrack),\n[QNCameraVideoTrack](https://developer.qiniu.com/rtc/9070/WebQNRTC/9068/WebQNCameraVideoTrack)]>\n\ncreateBufferSourceAudioTrack\n\n * createBufferSourceAudioTrack(config: [QNBufferSourceAudioTrackConfig](https://developer.qiniu.com/rtc/9070/WebQNRTC/9057/WebQNBufferSourceAudioTrackConfig)): Promise<[QNBufferSourceAudioTrack](https://developer.qiniu.com/rtc/9070/WebQNRTC/9069/WebQNBufferSourceAudioTrack)>\n * 通过外部音频数据源创建一个音频轨道。\n\nParameters\n\n * config: [QNBufferSourceAudioTrackConfig](https://developer.qiniu.com/rtc/9070/WebQNRTC/9057/WebQNBufferSourceAudioTrackConfig)\n\n关于外部音频数据源的配置等信息。\n\nReturns\nPromise<[QNBufferSourceAudioTrack](https://developer.qiniu.com/rtc/9070/WebQNRTC/9069/WebQNBufferSourceAudioTrack)>\n\ncreateCanvasVideoTrack\n\n * createCanvasVideoTrack(config: [QNCanvasVideoTrackConfig](https://developer.qiniu.com/rtc/9070/WebQNRTC/10185/WebQNCanvasVideoTrackConfig)): [QNCanvasVideoTrack](https://developer.qiniu.com/rtc/9070/WebQNRTC/10320/WebQNCanvasVideoTrack)\n * 创建画布视频 track\n\nParameters\n\n * config: [QNCanvasVideoTrackConfig](https://developer.qiniu.com/rtc/9070/WebQNRTC/10185/WebQNCanvasVideoTrackConfig)\n\n配置画布宽高、内容等信息。\n\nReturns\n[QNCanvasVideoTrack](https://developer.qiniu.com/rtc/9070/WebQNRTC/10320/WebQNCanvasVideoTrack)\n\ncreateCustomAudioTrack\n\n * createCustomAudioTrack(config: [QNCustomAudioTrackConfig](https://developer.qiniu.com/rtc/9070/WebQNRTC/9074/WebQNCustomAudioTrackConfig)): [QNCustomAudioTrack](https://developer.qiniu.com/rtc/9070/WebQNRTC/9091/WebQNCustomAudioTrack)\n * 创建一个自定义的音频轨道。\n\nParameters\n\n * config: 
[QNCustomAudioTrackConfig](https://developer.qiniu.com/rtc/9070/WebQNRTC/9074/WebQNCustomAudioTrackConfig)\n\n通过该配置指定码率、音源等信息。\n\nReturns\n[QNCustomAudioTrack](https://developer.qiniu.com/rtc/9070/WebQNRTC/9091/WebQNCustomAudioTrack)\n\ncreateCustomVideoTrack\n\n * createCustomVideoTrack(config: [QNCustomVideoTrackConfig](https://developer.qiniu.com/rtc/9070/WebQNRTC/9062/WebQNCustomVideoTrackConfig)): [QNCustomVideoTrack](https://developer.qiniu.com/rtc/9070/WebQNRTC/9088/WebQNCustomVideoTrack)\n * 创建一个自定义的视频轨道。\n\nParameters\n\n * config: [QNCustomVideoTrackConfig](https://developer.qiniu.com/rtc/9070/WebQNRTC/9062/WebQNCustomVideoTrackConfig)\n\n通过该配置指定码率、视频源等信息。\n\nReturns\n[QNCustomVideoTrack](https://developer.qiniu.com/rtc/9070/WebQNRTC/9088/WebQNCustomVideoTrack)\n\ncheckAudioTrackIsActive\n\n * checkAudioTrackIsActive(track: [QNLocalAudioTrack](https://developer.qiniu.com/rtc/9070/WebQNRTC/9089/WebQNLocalAudioTrack) | [QNRemoteAudioTrack](https://developer.qiniu.com/rtc/9070/WebQNRTC/9087/WebQNRemoteAudioTrack), timeout?: number): Promise<boolean>\n * 检测音频轨道是否活跃\n\n> 在一定时间内检测音频轨道声音是否有变化或者是否低于正常值。\n>\n> 可以在开始通话前对音频输入设备进行可用性检测。\n>\n> 当音频轨被 muted 的时候返回 false\n\n> 如果传入的音频轨道在处于极度安静的情况下,该接口可能会误报。\n\nParameters\n\n * track: [QNLocalAudioTrack](https://developer.qiniu.com/rtc/9070/WebQNRTC/9089/WebQNLocalAudioTrack) | [QNRemoteAudioTrack](https://developer.qiniu.com/rtc/9070/WebQNRTC/9087/WebQNRemoteAudioTrack)\n\n需要检测的本地或远端音频 Track\n\n * Optional timeout: number \n\nsdk 最低检测事件为200 ms,若传入值小于200,则 sdk 会自动调整为 200ms。\n\nReturns Promise<boolean>\n\ncheckVideoTrackIsActive\n\n * checkVideoTrackIsActive(track: [QNLocalVideoTrack](https://developer.qiniu.com/rtc/9070/WebQNRTC/9061/WebQNLocalVideoTrack) | [QNRemoteVideoTrack](https://developer.qiniu.com/rtc/9070/WebQNRTC/9060/WebQNRemoteVideoTrack), timeout?: number): Promise<boolean>\n * 检测视频轨道是否活跃\n\n> 在一定时间内检测视频轨道画面是否有变化。\n>\n> 可以在开始通话前对视频输入设备进行可用性检测。\n>\n> 当视频轨被 muted 的时候返回 false。\n\nParameters\n\n * track: [QNLocalVideoTrack](https://developer.qiniu.com/rtc/9070/WebQNRTC/9061/WebQNLocalVideoTrack) | [QNRemoteVideoTrack](https://developer.qiniu.com/rtc/9070/WebQNRTC/9060/WebQNRemoteVideoTrack)\n\n需要检测的本地或远端视频 Track\n\n * Optional timeout: number \n\nsdk 最低检测事件为60 ms,若传入值小于60,则 sdk 会自动调整为 60ms。\n\nReturns Promise<boolean>\n\nsetTransportPolicy\n\n * setTransportPolicy(policy: [QNTransportPolicy](https://developer.qiniu.com/rtc/9070/WebQNRTC/9078/WebQNTransportPolicy)): void\n * 设置传输模式\n\nParameters\n\n * policy: [QNTransportPolicy](https://developer.qiniu.com/rtc/9070/WebQNRTC/9078/WebQNTransportPolicy)\n\n传输策略,默认值为 preferUdp。\n\nReturns void\n\ngetElectronScreenSources\n\n * getElectronScreenSources(type?: [QNElectronScreenSourceType](https://developer.qiniu.com/rtc/9070/WebQNRTC/10789/WebQNElectronScreenSourceType)): Promise<[QNElectronScreenSource](https://developer.qiniu.com/rtc/9070/WebQNRTC/10790/WebQNElectronScreenSource)[]>\n * Electron 环境下枚举屏幕共享源\n\nParameters\n\n * type: [QNElectronScreenSourceType](https://developer.qiniu.com/rtc/9070/WebQNRTC/10789/WebQNElectronScreenSourceType)\n\n屏幕共享类型选择\n\nReturns\nPromise<[QNElectronScreenSource](https://developer.qiniu.com/rtc/9070/WebQNRTC/10790/WebQNElectronScreenSource)[]>\n\ncreateDeviceManger\n\n * createDeviceManger(): [QNDeviceManager](https://developer.qiniu.com/rtc/9070/WebQNRTC/12500/WebQNDeviceManager)\n * 创建设备管理类\n\nReturns\n[QNDeviceManager](https://developer.qiniu.com/rtc/9070/WebQNRTC/12500/WebQNDeviceManager)\n\n以上内容是否对您有帮助?\n\n"
}
] | qiniu developer | 1 |
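A short sketch of the QNRTC entry points documented above: environment checks, device enumeration, and track creation. The `QNRTC` import is assumed as before; no config objects are passed, so the defaults noted in the entry apply.

```javascript
// Environment and device checks using only the QNRTC methods listed above.
async function probeEnvironment() {
  console.log('SDK version:', QNRTC.VERSION);
  if (!QNRTC.isBrowserSupported()) {
    throw new Error('browser not supported by QNRTC');
  }
  console.log('system check:', await QNRTC.checkSystemRequirements());

  // Enumerate devices; pass true only if you want to skip the permission check.
  console.log('cameras:', await QNRTC.getCameras(false));
  console.log('microphones:', await QNRTC.getMicrophones(false));
}

// Track creation with no config, so the documented defaults apply
// (480p camera encoding, STANDARD microphone encoding, 720p screen sharing).
async function createLocalTracks() {
  const cameraTrack = await QNRTC.createCameraVideoTrack();
  const micTrack = await QNRTC.createMicrophoneAudioTrack();
  // 'auto' asks for system audio but does not throw if capture fails; the
  // result is either a single screen track or [screenTrack, audioTrack].
  const screen = await QNRTC.createScreenVideoTrack(undefined, 'auto');
  return { cameraTrack, micTrack, screen };
}
```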
[
{
"from": "human",
"value": "如何处理七牛云的实时音视频QNDeviceManager"
},
{
"from": "gpt",
"value": "[实时音视频](https://developer.qiniu.com/rtc) > API 文档 > Web >QNDeviceManager\n\n# QNDeviceManager\n\n最近更新时间: 2023-07-05 12:44:55\n\n设备管理实例\n\nIndex\n\nCore methods\n\n * setRecordingDevice\n * setPlaybackDevice\n * setCameraDevice\n * startRecordingDeviceTest\n * stopRecordingDeviceTest\n * startPlaybackDeviceTest\n * stopPlaybackDeviceTest\n * getCameras\n * getMicrophones\n * getPlaybackDevices\n * getDevices\n\nEvents\n\n * test-recording-device-volume\n * recording-device-changed\n * playback-device-changed\n * camera-device-changed\n\nCore methods\n\nsetRecordingDevice\n\n * setRecordingDevice(deviceID: string): void\n * 设置连麦使用的音频录制设备\n\nParameters\n\n * deviceID: string \n\n设备 ID\n\nReturns void\n\nsetPlaybackDevice\n\n * setPlaybackDevice(deviceID: string): void\n * 设置连麦使用的音频播放设备\n\nParameters\n\n * deviceID: string \n\n设备 ID\n\nReturns void\n\nsetCameraDevice\n\n * setCameraDevice(deviceID: string): void\n * 设置连麦使用的视频录制设备\n\nParameters\n\n * deviceID: string \n\n设备 ID\n\nReturns void\n\nstartRecordingDeviceTest\n\n * startRecordingDeviceTest(interval: number, playback?: boolean): void\n * 开始麦克风测试\n\nParameters\n\n * interval: number \n\n麦克风音量的回调间隔,单位毫秒\n\n * playback: boolean \n\n是否开启回播麦克风声音,开启后用户测试麦克风时会听到自己的声音\n\nReturns void\n\nstopRecordingDeviceTest\n\n * stopRecordingDeviceTest(): void\n * 结束麦克风测试\n\nReturns void\n\nstartPlaybackDeviceTest\n\n * startPlaybackDeviceTest(file_path: Blob | string | File): Promise<void>\n * 开始扬声器测试\n\nParameters\n\n * file_path: Blob | string | File \n\n扬声器播放文件路径\n\nReturns void\n\nstopPlaybackDeviceTest\n\n * stopPlaybackDeviceTest(): void\n * 结束扬声器测试\n\nReturns void\n\ngetCameras\n\n * getCameras(skipPermissionCheck = false): Promise<[MediaDeviceInfo](https://developer.mozilla.org/en-US/docs/Web/API/MediaDeviceInfo)[]>\n * 枚举可用的视频输入设备,比如摄像头\n\nParameters\n\n * Optional skipPermissionCheck: boolean \n\n将该参数设置成 true 跳过权限检查,但是 SDK 将无法保证获取准确的媒体设备信息。\n\n🔹 true: 跳过权限检查\n\n🔹 false:(默认)不跳过权限检查。\n\nReturns Promise<[MediaDeviceInfo](https://developer.mozilla.org/en-\nUS/docs/Web/API/MediaDeviceInfo)[]>\n\ngetMicrophones\n\n * getMicrophones(skipPermissionCheck = false): Promise<[MediaDeviceInfo](https://developer.mozilla.org/en-US/docs/Web/API/MediaDeviceInfo)[]>\n * 枚举可用的音频输入设备,比如麦克风\n\nParameters\n\n * Optional skipPermissionCheck: boolean \n\n将该参数设置成 true 跳过权限检查,但是 SDK 将无法保证获取准确的媒体设备信息。\n\n🔹 true: 跳过权限检查\n\n🔹 false:(默认)不跳过权限检查。\n\nReturns Promise<[MediaDeviceInfo](https://developer.mozilla.org/en-\nUS/docs/Web/API/MediaDeviceInfo)[]>\n\ngetPlaybackDevices\n\n * getPlaybackDevices(skipPermissionCheck = false): Promise<[MediaDeviceInfo](https://developer.mozilla.org/en-US/docs/Web/API/MediaDeviceInfo)[]>\n * 枚举可用的音频播放设备,比如扬声器\n\nParameters\n\n * Optional skipPermissionCheck: boolean \n\n将该参数设置成 true 跳过权限检查,但是 SDK 将无法保证获取准确的媒体设备信息。\n\n🔹 true: 跳过权限检查\n\n🔹 false:(默认)不跳过权限检查。\n\nReturns Promise<[MediaDeviceInfo](https://developer.mozilla.org/en-\nUS/docs/Web/API/MediaDeviceInfo)[]>\n\ngetDevices\n\n * getDevices(skipPermissionCheck?: boolean): Promise<[MediaDeviceInfo](https://developer.mozilla.org/en-US/docs/Web/API/MediaDeviceInfo)[]>\n * 枚举可用的媒体输入和输出设备,比如麦克风、摄像头、耳机等\n\nParameters\n\n * Optional skipPermissionCheck: boolean \n\n将该参数设置成 true 跳过权限检查,但是 SDK 将无法保证获取准确的媒体设备信息。\n\n🔹 true: 跳过权限检查\n\n🔹 false:(默认)不跳过权限检查。\n\nReturns Promise<[MediaDeviceInfo](https://developer.mozilla.org/en-\nUS/docs/Web/API/MediaDeviceInfo)[]>\n\nEvents\n\ntest-recording-device-volume\n\n * (volume: number) => void\n * 测试麦克风时的音量回调\n\nParameters\n\n * volume: number 
\n\n麦克风采集到的音量值,取值范围0 - 100\n\nrecording-device-changed\n\n * (device: [MediaDeviceInfo](https://developer.mozilla.org/en-US/docs/Web/API/MediaDeviceInfo), state: [QNDeviceState](https://developer.qiniu.com/rtc/12500/WebQNDeviceManager/12499/WebQNDeviceState)) => void\n * 麦克风设备变更\n\nParameters\n\n * device: [MediaDeviceInfo](https://developer.mozilla.org/en-US/docs/Web/API/MediaDeviceInfo)\n\n设备详情\n\n * state: [QNDeviceState](https://developer.qiniu.com/rtc/12500/WebQNDeviceManager/12499/WebQNDeviceState)\n\n设备状态\n\nplayback-device-changed\n\n * (device: [MediaDeviceInfo](https://developer.mozilla.org/en-US/docs/Web/API/MediaDeviceInfo), state: [QNDeviceState](https://developer.qiniu.com/rtc/12500/WebQNDeviceManager/12499/WebQNDeviceState)) => void\n * 扬声器设备变更\n\nParameters\n\n * device: [MediaDeviceInfo](https://developer.mozilla.org/en-US/docs/Web/API/MediaDeviceInfo)\n\n设备详情\n\n * state: [QNDeviceState](https://developer.qiniu.com/rtc/12500/WebQNDeviceManager/12499/WebQNDeviceState)\n\n设备状态\n\ncamera-device-changed\n\n * (device: [MediaDeviceInfo](https://developer.mozilla.org/en-US/docs/Web/API/MediaDeviceInfo), state: [QNDeviceState](https://developer.qiniu.com/rtc/12500/WebQNDeviceManager/12499/WebQNDeviceState)) => void\n * 摄像头设备变更\n\nParameters\n\n * device: [MediaDeviceInfo](https://developer.mozilla.org/en-US/docs/Web/API/MediaDeviceInfo)\n\n设备详情\n\n * state: [QNDeviceState](https://developer.qiniu.com/rtc/12500/WebQNDeviceManager/12499/WebQNDeviceState)\n\n设备状态\n\n以上内容是否对您有帮助?\n\n"
}
] | qiniu developer | 1 |
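The device-manager entry above can be turned into a pre-call microphone test roughly as follows. Assumptions: events are registered with `.on()` (consistent with the SDK's other EventEmitter-based classes, though the hierarchy is not shown in this excerpt), and the factory method keeps its documented spelling `createDeviceManger`.

```javascript
// Pre-call microphone test sketch built from the documented QNDeviceManager API.
const deviceManager = QNRTC.createDeviceManger(); // spelling as documented

// Assumed EventEmitter-style registration, like other SDK classes.
deviceManager.on('test-recording-device-volume', function (volume) {
  console.log('mic level (0-100):', volume);
});
deviceManager.on('recording-device-changed', function (device, state) {
  console.log('microphone changed:', device.label, state);
});

async function testMicrophone() {
  const mics = await deviceManager.getMicrophones();
  if (mics.length > 0) {
    deviceManager.setRecordingDevice(mics[0].deviceId);
  }
  // Report volume every 200 ms and play the captured audio back locally.
  deviceManager.startRecordingDeviceTest(200, true);
}

function endMicrophoneTest() {
  deviceManager.stopRecordingDeviceTest();
}
```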
[
{
"from": "human",
"value": "如何处理七牛云的实时音视频QNMediaRecorder"
},
{
"from": "gpt",
"value": "[实时音视频](https://developer.qiniu.com/rtc) > API 文档 > Web >QNMediaRecorder\n\n# QNMediaRecorder\n\n最近更新时间: 2024-01-08 11:12:35\n\n屏幕录制类\n\nHierarchy\n\n * EventEmitter \n * QNMediaRecorder \n\nIndex\n\nProperties\n\n * state\n\nEvent\n\n * state-changed\n * info-updated\n * error\n\nCore Methods\n\n * startRecording\n * stopRecording\n * releaseRecoreding\n\nProperties\n\nstate\n\n * [QNRecorderState](https://developer.qiniu.com/rtc/12628/WebQNMediaRecorder/12626/WebQNRecorderState)\n * 录制状态\n\nEvent\n\nstate-changed\n\n * (state: [QNRecorderState](https://developer.qiniu.com/rtc/12628/WebQNMediaRecorder/12626/WebQNRecorderState)) => void\n * 录制状态发生改变回调\n\nParameters\n\n * state: [QNRecorderState](https://developer.qiniu.com/rtc/12628/WebQNMediaRecorder/12626/WebQNRecorderState)\n\n录制状态\n\ninfo-updated\n\n * (info: [QNRecorderInfo](https://developer.qiniu.com/rtc/12628/WebQNMediaRecorder/12627/WebQNRecorderInfo)) => void\n * 录制信息更新回调\n\nParameters\n\n * info: [QNRecorderInfo](https://developer.qiniu.com/rtc/12628/WebQNMediaRecorder/12627/WebQNRecorderInfo)\n\n录制信息\n\nerror\n\n * (error: [QNRTCError](https://developer.qiniu.com/rtc/12628/WebQNMediaRecorder/12354/WebQNRTCError)) => void\n * 错误信息回调\n\nParameters\n\n * error: [QNRTCError](https://developer.qiniu.com/rtc/12628/WebQNMediaRecorder/12354/WebQNRTCError)\n\n错误信息\n\nCore Methods\n\nstartRecording\n\n * startRecording(): void\n * 开始录制\n\nReturns void\n\nstopRecording\n\n * stopRecording(): void\n * 停止录制\n\nReturns void\n\nreleaseRecoreding\n\n * releaseRecoreding(): void\n * 释放录制实例\n\nReturns void\n\n以上内容是否对您有帮助?\n\n"
}
] | qiniu developer | 1 |
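The recorder entry above documents the lifecycle methods and events but not how an instance is obtained, so the sketch below takes a `recorder` object as a parameter; everything it calls is listed in the entry, including the `releaseRecoreding` spelling.

```javascript
// Recording lifecycle sketch; `recorder` is a QNMediaRecorder instance whose
// creation is not covered in this excerpt.
function recordWith(recorder) {
  recorder.on('state-changed', function (state) {
    console.log('recorder state:', state);
  });
  recorder.on('info-updated', function (info) {
    console.log('recorder info:', info);
  });
  recorder.on('error', function (error) {
    console.error('recorder error:', error);
  });

  recorder.startRecording();
}

function finishRecording(recorder) {
  recorder.stopRecording();
  recorder.releaseRecoreding(); // method name as documented
}
```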
[
{
"from": "human",
"value": "如何处理七牛云的实时音视频QNRTCClient"
},
{
"from": "gpt",
"value": "[实时音视频](https://developer.qiniu.com/rtc) > API 文档 > Web >QNRTCClient\n\n# QNRTCClient\n\n最近更新时间: 2022-07-14 14:46:14\n\nRTC 核心类,包含加入房间,离开房间,订阅、发布 Track,获取统计信息等功能\n\nHierarchy\n\n * EventEmitter \n * QNRTCClient \n\nIndex\n\nProperties\n\n * userID\n * userData\n * roomName\n * remoteUsers\n\nEvent\n\n * user-joined\n * user-left\n * user-published\n * user-unpublished\n * connection-state-changed\n * message-received\n * user-reconnecting\n * user-reconnected\n * direct-livestreaming-state-changed\n * media-relay-state-changed\n * transcoding-livestreaming-state-changed\n * volume-indicator\n\nCore Methods\n\n * join\n * leave\n * publish\n * unpublish\n * subscribe\n * unsubscribe\n * setClientMode\n * setClientRole\n * startMediaRelay\n * updateMediaRelay\n * stopMediaRelay\n * sendMessage\n * startDirectLiveStreaming\n * stopDirectLiveStreaming\n * startTranscodingLiveStreaming\n * stopTranscodingLiveStreaming\n * setTranscodingLiveStreamingTracks\n * removeTranscodingLiveStreamingTracks\n * getNetworkQuality\n * getUserNetworkQuality\n * enableAudioVolumeIndicator\n\nProperties\n\nuserID\n\n * string | undefined\n * 本地用户的用户 ID\n\nuserData\n\n * string | undefined\n * 用户自定义数据\n\nroomName\n\n * string | undefined\n * 当前房间的名称\n\nremoteUsers\n\n * [QNRemoteUser](https://developer.qiniu.com/rtc/9090/WebQNRTCClient/9059/WebQNRemoteUser)[]\n * 房间当前的远端用户列表\n\nEvent\n\nuser-joined\n\n * (remoteUserID: string, userData?: string): void\n * 用户加入房间\n \n client.on('user-joined', function(remoteUserID, userData){\n console.log('user-joined', remoteUserID, userData)\n })\n\nParameters\n\n * remoteUserID: string \n\n远端用户 ID\n\n * Optional userData: string \n\n用户自定义数据\n\nuser-left\n\n * (remoteUserID: string): void\n * 用户离开房间\n \n client.on('user-left', function(remoteUserID){\n console.log('user-left', remoteUserID)\n })\n\nParameters\n\n * remoteUserID: string \n\n用户 ID\n\nuser-published\n\n * (userID: string, qntrack: ([QNRemoteAudioTrack](https://developer.qiniu.com/rtc/9090/WebQNRTCClient/9087/WebQNRemoteAudioTrack) | [QNRemoteVideoTrack](https://developer.qiniu.com/rtc/9090/WebQNRTCClient/9060/WebQNRemoteVideoTrack))[]): void\n * 用户添加媒体轨\n \n client.on('user-published', function(userID, qntrack){\n console.log('user-published', userID, qntrack)\n })\n\nParameters\n\n * userID: string \n\n用户 ID\n\n * qntrack: ([QNRemoteAudioTrack](https://developer.qiniu.com/rtc/9090/WebQNRTCClient/9087/WebQNRemoteAudioTrack) | [QNRemoteVideoTrack](https://developer.qiniu.com/rtc/9090/WebQNRTCClient/9060/WebQNRemoteVideoTrack))[] \n\n添加媒体轨信息\n\nuser-unpublished\n\n * (userID: string, qntrack: ([QNRemoteAudioTrack](https://developer.qiniu.com/rtc/9090/WebQNRTCClient/9087/WebQNRemoteAudioTrack) | [QNRemoteVideoTrack](https://developer.qiniu.com/rtc/9090/WebQNRTCClient/9060/WebQNRemoteVideoTrack))[]): void\n * 用户移除媒体轨\n \n client.on('user-unpublished', function(userID ,qntrack){\n console.log('user-unpublished', userID, qntrack)\n })\n\nParameters\n\n * userID: string \n\n用户 ID\n\n * qntrack: ([QNRemoteAudioTrack](https://developer.qiniu.com/rtc/9090/WebQNRTCClient/9087/WebQNRemoteAudioTrack) | [QNRemoteVideoTrack](https://developer.qiniu.com/rtc/9090/WebQNRTCClient/9060/WebQNRemoteVideoTrack))[] \n\n移除媒体轨信息\n\nconnection-state-changed\n\n * (connectionState: [QNConnectionState](https://developer.qiniu.com/rtc/9090/WebQNRTCClient/9080/WebQNConnectionState), info?: [QNConnectionDisconnectedInfo](https://developer.qiniu.com/rtc/9090/WebQNRTCClient/9071/WebQNConnectionDisconnectedInfo)): void\n * 连接状态改变\n \n 
client.on('connection-state-changed',function(connectionState,info){\n console.log('connection-state-changed',connectionState,info)\n })\n\nParameters\n\n * connectionState: [QNConnectionState](https://developer.qiniu.com/rtc/9090/WebQNRTCClient/9080/WebQNConnectionState)\n\n连接状态\n\n * Optional info: [QNConnectionDisconnectedInfo](https://developer.qiniu.com/rtc/9090/WebQNRTCClient/9071/WebQNConnectionDisconnectedInfo)[] \n\n连接断开原因\n\nmessage-received\n\n * (message: [QNCustomMessage](https://developer.qiniu.com/rtc/9090/WebQNRTCClient/9081/WebQNCustomMessage)): void\n * 信息接收\n \n client.on('message-received',function(message){\n console.log('message-received',message)\n })\n\nParameters\n\n * message: [QNCustomMessage](https://developer.qiniu.com/rtc/9090/WebQNRTCClient/9081/WebQNCustomMessage)\n\n消息信息\n\nuser-reconnecting\n\n * (remoteUserID: string): void\n * 用户重连中\n \n client.on('user-reconnecting', function(remoteUserID){\n console.log('user-reconnecting', remoteUserID)\n })\n\nParameters\n\n * remoteUserID: string \n\n用户 ID\n\nuser-reconnected\n\n * (remoteUserID: string): void\n * 用户重连成功\n \n client.on('user-reconnected', function(remoteUserID) {\n console.log('user-reconnected', remoteUserID)\n })\n\nParameters\n\n * remoteUserID: string \n\n用户信息\n\ndirect-livestreaming-state-changed\n\n * (streamID: string, state: [QNLiveStreamingState](https://developer.qiniu.com/rtc/9090/WebQNRTCClient/9045/WebQNLiveStreamingState) ): void\n * CDN 转推状态变化\n \n client.on('direct-livestreaming-state-changed',function(streamID,state){\n console.log('direct-livestreaming-state-changed',streamID,state)\n })\n\nParameters\n\n * streamID: string \n\n转推任务 streamID\n\n * state: QNLiveStreamingState \n\n转推状态\n\nmedia-relay-state-changed\n\n * (roomName: string, state: [QNMediaRelayState](https://developer.qiniu.com/rtc/9090/WebQNRTCClient/11353/QNMediaRelayState-Web)): void\n * 跨房状态变化\n\nParameters\n\n * roomName: string \n\n目标房间名\n\n * state: [QNMediaRelayState](https://developer.qiniu.com/rtc/9090/WebQNRTCClient/11353/QNMediaRelayState-Web)\n\n跨房状态\n\ntranscoding-livestreaming-state-changed\n\n * (streamID: string, state: [QNLiveStreamingState](https://developer.qiniu.com/rtc/9090/WebQNRTCClient/9045/WebQNLiveStreamingState) ): void\n * 合流转推状态变化\n \n client.on('transcoding-livestreaming-state-changed',function(streamID,state){\n console.log('transcoding-livestreaming-state-changed',streamID,state)\n })\n\nParameters\n\n * streamID: string \n\n合流任务 streamID\n\n * state: QNLiveStreamingState \n\n合流状态\n\nvolume-indicator\n\n * (volumes: [QNVolumeIndicator](https://developer.qiniu.com/rtc/9090/WebQNRTCClient/10265/WebQNVolumeIndicator)): void\n * 房间发声者音量实时监测\n \n client.on('volume-indicator', function(volumes){\n volumes.forEach(volume => { \n console.log(`volume-indicator:userID ${volume.userID} trackID ${volume.trackID} level ${volume.level}`)\n })\n })\n\n> 音量值 level 范围为 0 - 1,回调返回的为音量值大于 0.1 持续说话的列表。\n\nParameters\n\n * volumes: [QNVolumeIndicator](https://developer.qiniu.com/rtc/9090/WebQNRTCClient/10265/WebQNVolumeIndicator)\n\n房间说话者音量回调数据\n\nCore Methods\n\njoin\n\n * join(token: string, userData?: string): Promise<void>\n * 加入房间\n\n接口调用成功后,将会触发 [connection-state-\nchanged](https://developer.qiniu.com/rtc/9090/WebQNRTCClient/undefined)\n房间状态监听事件。\n\n> 建议在加入房间之前,先对房间状态进行监听,根据当前状态处理相关逻辑。\n\nParameters\n\n * token: string \n\n房间 Token\n\n * Optional userData: string \n\n用户自定义数据\n\nReturns Promise<void>\n\nleave\n\n * leave(): Promise<void>\n * 离开房间\n\nReturns Promise<void>\n\npublish\n\n * 
publish(tracks: [QNLocalTrack](https://developer.qiniu.com/rtc/9090/WebQNRTCClient/9085/WebQNLocalTrack) | [QNLocalTrack](https://developer.qiniu.com/rtc/9090/WebQNRTCClient/9085/WebQNLocalTrack)[]): Promise<void>\n * 发布本地 Track\n\nParameters\n\n * tracks: [QNLocalTrack](https://developer.qiniu.com/rtc/9090/WebQNRTCClient/9085/WebQNLocalTrack) | [QNLocalTrack](https://developer.qiniu.com/rtc/9090/WebQNRTCClient/9085/WebQNLocalTrack)[] \n\n要发布的本地 Track\n\nReturns Promise<void>\n\nunpublish\n\n * unpublish(tracks: [QNLocalTrack](https://developer.qiniu.com/rtc/9090/WebQNRTCClient/9085/WebQNLocalTrack) | [QNLocalTrack](https://developer.qiniu.com/rtc/9090/WebQNRTCClient/9085/WebQNLocalTrack)[]): Promise<void>\n * 取消发布本地 Track\n\nParameters\n\n * tracks: [QNLocalTrack](https://developer.qiniu.com/rtc/9090/WebQNRTCClient/9085/WebQNLocalTrack) | [QNLocalTrack](https://developer.qiniu.com/rtc/9090/WebQNRTCClient/9085/WebQNLocalTrack)[] \n\n要取消发布的本地 Track\n\nReturns Promise<void>\n\nsubscribe\n\n * subscribe(tracks: [QNRemoteTrack](https://developer.qiniu.com/rtc/9090/WebQNRTCClient/9082/WebQNRemoteTrack) | [QNRemoteTrack](https://developer.qiniu.com/rtc/9090/WebQNRTCClient/9082/WebQNRemoteTrack)[]): Promise<{ videoTracks: [QNRemoteVideoTrack](https://developer.qiniu.com/rtc/9090/WebQNRTCClient/9060/WebQNRemoteVideoTrack)[]; audioTracks: [QNRemoteAudioTrack](https://developer.qiniu.com/rtc/9090/WebQNRTCClient/9087/WebQNRemoteAudioTrack)[]}>\n * 订阅远端 Track\n\nParameters\n\n * tracks: [QNRemoteTrack](https://developer.qiniu.com/rtc/9090/WebQNRTCClient/9082/WebQNRemoteTrack) | [QNRemoteTrack](https://developer.qiniu.com/rtc/9090/WebQNRTCClient/9082/WebQNRemoteTrack)[] \n\n要订阅的远端 Track\n\nReturns Promise<object>\n\nunsubscribe\n\n * unsubscribe(tracks: [QNRemoteTrack](https://developer.qiniu.com/rtc/9090/WebQNRTCClient/9082/WebQNRemoteTrack) | [QNRemoteTrack](https://developer.qiniu.com/rtc/9090/WebQNRTCClient/9082/WebQNRemoteTrack)[]): Promise<void>\n * 取消订阅远端 Track\n\nParameters\n\n * tracks: [QNRemoteTrack](https://developer.qiniu.com/rtc/9090/WebQNRTCClient/9082/WebQNRemoteTrack) | [QNRemoteTrack](https://developer.qiniu.com/rtc/9090/WebQNRTCClient/9082/WebQNRemoteTrack)[] \n\n要取消订阅的远端 Track\n\nReturns Promise<void>\n\nsetClientMode\n\n * setClientMode(mode: [QNClientMode](https://developer.qiniu.com/rtc/9090/WebQNRTCClient/11357/QNClientMode-Web)): Promise<void>\n * 设置跨房连麦场景\n\n> 仅可 join room 前调用。\n\nParameters\n\n * mode: [QNClientMode](https://developer.qiniu.com/rtc/9090/WebQNRTCClient/11357/QNClientMode-Web)\n\n场景\n\nReturns Promise<void>\n\nsetClientRole\n\n * setClientRole(role: [QNClientRole](https://developer.qiniu.com/rtc/9090/WebQNRTCClient/11358/QNClientRole-Web)): Promise<void>\n * 设置互动直播场景下的用户角色\n\n该方法在加入频道前后均可调用。\n\nParameters\n\n * role: [QNClientRole](https://developer.qiniu.com/rtc/9090/WebQNRTCClient/11358/QNClientRole-Web)\n\n角色\n\nReturns Promise<void>\n\nstartMediaRelay\n\n * startMediaRelay(config: [QNMediaRelayConfig](https://developer.qiniu.com/rtc/9090/WebQNRTCClient/11355/QNMediaRelayConfig-Web)): Promise<[QNMediaRelayResult](https://developer.qiniu.com/rtc/9090/WebQNRTCClient/11356/QNMediaRelayResult-Web)>\n * 开始跨房媒体转推\n\n如果已经开启,则调用失败\n\n当所有目标房间跨房媒体转发都失败,则本次跨房媒体转发请求视为失败,使用跨房媒体转发功能需要再次调用此方法。\n\n当有任意一个目标或多个目标房间媒体转发成功,则本次跨房媒体转发请求视为成功,具体每个房间的状态参考\n[QNMediaRelayResult](https://developer.qiniu.com/rtc/9090/WebQNRTCClient/11356/QNMediaRelayResult-\nWeb) 。\n\n> 注意:该方法仅适用于互动直播场景中角色类型为主播的用户。\n\nParameters\n\n * config: 
[QNMediaRelayConfig](https://developer.qiniu.com/rtc/9090/WebQNRTCClient/11355/QNMediaRelayConfig-Web)\n\n跨房配置\n\nReturns\nPromise<[QNMediaRelayResult](https://developer.qiniu.com/rtc/9090/WebQNRTCClient/11356/QNMediaRelayResult-\nWeb)>\n\nupdateMediaRelay\n\n * updateMediaRelay(config: [QNMediaRelayConfig](https://developer.qiniu.com/rtc/9090/WebQNRTCClient/11355/QNMediaRelayConfig-Web)): Promise<[QNMediaRelayResult](https://developer.qiniu.com/rtc/9090/WebQNRTCClient/11356/QNMediaRelayResult-Web)>\n * 更新跨房媒体转发\n\n成功开启跨房媒体转发后,如果您希望将流转发到多个目标房间,或退出当前正在转发的房间,可以调用该方法。\n\n此方法为全量更新,正在跨房媒体转发中却未被包含在参数\n[QNMediaRelayConfig](https://developer.qiniu.com/rtc/9090/WebQNRTCClient/11355/QNMediaRelayConfig-\nWeb) 中的房间,将停止媒体转发\n\n> 注意:调用此方法前必须确保已经成功开启跨房媒体转发,否则将调用失败;该方法仅适用互动直播场景中角色类型为主播的用户\n\nParameters\n\n * config: [QNMediaRelayConfig](https://developer.qiniu.com/rtc/9090/WebQNRTCClient/11355/QNMediaRelayConfig-Web)\n\n跨房配置\n\nReturns\nPromise<[QNMediaRelayResult](https://developer.qiniu.com/rtc/9090/WebQNRTCClient/11356/QNMediaRelayResult-\nWeb)>\n\nstopMediaRelay\n\n * stopMediaRelay(): Promise<[QNMediaRelayResult](https://developer.qiniu.com/rtc/9090/WebQNRTCClient/11356/QNMediaRelayResult-Web)>\n * 结束跨房\n\n如果未开启,则调用失败。\n\n> 注意:此方法会停止在所有目标房间中的媒体转发;该方法仅适用互动直播场景中角色类型为主播的用户。\n\nReturns\nPromise<[QNMediaRelayResult](https://developer.qiniu.com/rtc/9090/WebQNRTCClient/11356/QNMediaRelayResult-\nWeb)>\n\nsendMessage\n\n * sendMessage(messageID: string, message: string, users?: [QNRemoteUser](https://developer.qiniu.com/rtc/9090/WebQNRTCClient/9059/WebQNRemoteUser)[]): Promise<void>\n * 发送自定义消息\n\nParameters\n\n * messageID: string \n\n自定义消息 ID\n\n * message: string \n\n自定义消息内容\n\n * Optional users: [QNRemoteUser](https://developer.qiniu.com/rtc/9090/WebQNRTCClient/9059/WebQNRemoteUser)[] \n\n默认群发,接收消息用户\n\n传入值若为非数组或空数组,则会采用默认行为进行群发\n\nReturns Promise<void>\n\nstartDirectLiveStreaming\n\n * startDirectLiveStreaming(config: [QNDirectLiveStreamingConfig](https://developer.qiniu.com/rtc/9090/WebQNRTCClient/9049/WebQNDirectLiveStreamingConfig)): Promise<void>\n * 开启单路转推\n\nParameters\n\n * config: [QNDirectLiveStreamingConfig](https://developer.qiniu.com/rtc/9090/WebQNRTCClient/9049/WebQNDirectLiveStreamingConfig)\n\n单路转推配置\n\nReturns Promise<void>\n\nstopDirectLiveStreaming\n\n * stopDirectLiveStreaming(streamID: string): Promise<void>\n * 停止单路转推\n\nParameters\n\n * streamID: string \n\n转推任务的 streamID\n\nReturns Promise<void>\n\nstartTranscodingLiveStreaming\n\n * startTranscodingLiveStreaming(config: [QNTranscodingLiveStreamingConfig](https://developer.qiniu.com/rtc/9090/WebQNRTCClient/9086/WebQNTranscodingLiveStreamingConfig)): Promise<void>\n * 开启合流转推\n\nParameters\n\n * config: [QNTranscodingLiveStreamingConfig](https://developer.qiniu.com/rtc/9090/WebQNRTCClient/9086/WebQNTranscodingLiveStreamingConfig)\n\n合流转推配置\n\nReturns Promise<void>\n\nstopTranscodingLiveStreaming\n\n * stopTranscodingLiveStreaming(streamID: string): Promise<void>\n * 停止合流转推\n\nParameters\n\n * streamID: string \n\n转推任务的 streamID\n\nReturns Promise<void>\n\nsetTranscodingLiveStreamingTracks\n\n * setTranscodingLiveStreamingTracks(streamID: string | null, transcodingTracks: [QNTranscodingLiveStreamingTrack](https://developer.qiniu.com/rtc/9090/WebQNRTCClient/9065/WebQNTranscodingLiveStreamingTrack)[]): Promise<void>\n * 新增、更新合流转推的布局\n\nParameters\n\n * streamID: string | null \n\n合流任务 streamID\n\nstreamID 为 null 时表示设置默认合流任务\n\n * transcodingTracks: 
[QNTranscodingLiveStreamingTrack](https://developer.qiniu.com/rtc/9090/WebQNRTCClient/9065/WebQNTranscodingLiveStreamingTrack)\n\n待新增、更新的合流转推布局配置\n\nReturns Promise<void>\n\nremoveTranscodingLiveStreamingTracks\n\n * removeTranscodingLiveStreamingTracks(streamID: string | null, transcodingTracks: [QNTranscodingLiveStreamingTrack](https://developer.qiniu.com/rtc/9090/WebQNRTCClient/9065/WebQNTranscodingLiveStreamingTrack)[]): Promise<void>\n * 新增、更新合流转推的布局\n\nParameters\n\n * streamID: string | null \n\n合流任务 streamID\n\nstreamID 为 null 时表示设置默认合流任务\n\n * transcodingTracks: [QNTranscodingLiveStreamingTrack](https://developer.qiniu.com/rtc/9090/WebQNRTCClient/9065/WebQNTranscodingLiveStreamingTrack)\n\n需要移除的合流转推布局配置\n\nReturns Promise<void>\n\ngetNetworkQuality\n\n * getNetworkQuality(): [QNNetworkQuality](https://developer.qiniu.com/rtc/9090/WebQNRTCClient/9058/WebQNNetworkQuality)\n * 获取自己的网络质量等级\n\nReturns\n[QNNetworkQuality](https://developer.qiniu.com/rtc/9090/WebQNRTCClient/9058/WebQNNetworkQuality)\n\ngetUserNetworkQuality\n\n * getUserNetworkQuality(userID: string): [QNNetworkQuality](https://developer.qiniu.com/rtc/9090/WebQNRTCClient/9058/WebQNNetworkQuality)\n * 获取远端用户的网络质量等级\n\nParameters\n\n * userID: string \n\n远端用户的 userID\n\nReturns\n[QNNetworkQuality](https://developer.qiniu.com/rtc/9090/WebQNRTCClient/9058/WebQNNetworkQuality)\n\nenableAudioVolumeIndicator\n\n * enableAudioVolumeIndicator(): void\n * 开启房间发声者音量实时检测\n\n> 该方法允许 SDK 定期报告正在说话的用户及其音量。启用音量提示后,无论频道中有没有人说话,SDK 都会定时 2S 触发\n> QNRTCClient.on('[volume-\n> indicator](https://developer.qiniu.com/rtc/9090/WebQNRTCClient/9090)') 回调\n\n以上内容是否对您有帮助?\n\n"
}
] | qiniu developer | 1 |
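Beyond join/publish/subscribe (sketched under the API overview above), the client entry exposes volume indication, network quality, and custom messaging. A minimal sketch, assuming `client` has already joined a room; the message ID is an illustrative value.

```javascript
// Volume indication, network quality and custom messages, using only members
// listed in the QNRTCClient entry. `client` is assumed to have joined a room.
client.enableAudioVolumeIndicator();
client.on('volume-indicator', function (volumes) {
  volumes.forEach(function (volume) {
    console.log('speaking:', volume.userID, volume.trackID, volume.level);
  });
});

console.log('my network quality:', client.getNetworkQuality());
client.remoteUsers.forEach(function (user) {
  console.log(user.userID, client.getUserNetworkQuality(user.userID));
});

// Broadcast a custom message; 'msg-001' is just an illustrative message ID.
client.sendMessage('msg-001', 'hello room').catch(console.error);
```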
[
{
"from": "human",
"value": "如何处理七牛云的实时音视频QNClientMode"
},
{
"from": "gpt",
"value": "[实时音视频](https://developer.qiniu.com/rtc) > API 文档 > Web >QNClientMode\n\n# QNClientMode\n\n最近更新时间: 2022-07-14 15:57:09\n\n音视频通话使用场景,当前分为通信场景和互动直播场景,可通过\n[QNRTCClient.setClientMode](https://developer.qiniu.com/rtc/11357/QNClientMode-\nWeb/9090) 方法进行设置,建议您根据不同需求选择适当的场景,以获取最佳使用体验。\n\nIndex\n\nEnumeration members\n\n * RTC\n * LIVE\n\nEnumeration members\n\nRTC\n\n * RTC = 'RTC'\n * 通信场景,用于常见的一对一通话或群聊,该场景中,所有用户均可以发布和订阅音视频轨道\n\nLIVE\n\n * LIVE = 'LIVE'\n * 直播场景,有主播和观众两种用户角色,可以通过 [QNRTCClient.setClientRole](https://developer.qiniu.com/rtc/11357/QNClientMode-Web/9090) 方法设置用户角色为主播或观众。主播可以发布和订阅音视频轨道,而观众只能订阅音视频轨道,无法发布。\n\n> 注意:直播场景中的用户角色默认为观众。如需发布音视频,必须将角色修改为主播。\n\n以上内容是否对您有帮助?\n\n"
}
] | qiniu developer | 1 |
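Since the entry above notes that the scene must be chosen before joining, a minimal sketch looks like this; the enum members are the documented string values, and `roomToken` is assumed to come from the app server.

```javascript
// Select the LIVE scene before joining; 'RTC' would select the communication scene.
async function joinLiveRoom(roomToken) {
  const client = QNRTC.createClient();
  await client.setClientMode('LIVE'); // must run before join()
  await client.join(roomToken);
  return client;
}
```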
[
{
"from": "human",
"value": "如何处理七牛云的实时音视频QNClientRole"
},
{
"from": "gpt",
"value": "[实时音视频](https://developer.qiniu.com/rtc) > API 文档 > Web >QNClientRole\n\n# QNClientRole\n\n最近更新时间: 2022-07-14 15:57:27\n\n角色类型,当前仅适用于互动直播场景,通过\n[QNRTCClient.setClientRole](https://developer.qiniu.com/rtc/11358/QNClientRole-\nWeb/9090) 来进行设置。\n\nIndex\n\nEnumeration members\n\n * BROADCASTER\n * AUDIENCE\n\nEnumeration members\n\nBROADCASTER\n\n * BROADCASTER = 'BROADCASTER'\n * 主播角色,拥有发布和订阅媒体流的权限,仅适用于互动直播场景\n\nAUDIENCE\n\n * AUDIENCE = 'AUDIENCE'\n * 观众角色,仅有订阅媒体流的权限,仅适用于互动直播场景\n\n以上内容是否对您有帮助?\n\n"
}
] | qiniu developer | 1 |
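In the LIVE scene users start as audience, so publishing requires promoting the role first; a minimal sketch using only the documented role values:

```javascript
// Promote to broadcaster before publishing; demote back to audience afterwards.
async function goLive(client, localTracks) {
  await client.setClientRole('BROADCASTER');
  await client.publish(localTracks);
}

async function backToAudience(client, localTracks) {
  await client.unpublish(localTracks);
  await client.setClientRole('AUDIENCE');
}
```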
[
{
"from": "human",
"value": "如何处理七牛云的实时音视频QNTrack"
},
{
"from": "gpt",
"value": "[实时音视频](https://developer.qiniu.com/rtc) > API 文档 > Web >QNTrack\n\n# QNTrack\n\n最近更新时间: 2022-07-14 14:48:21\n\n音视频 Track 基类\n\nHierarchy\n\n * EventEmitter \n * QNTrack \n * [QNLocalTrack](https://developer.qiniu.com/rtc/9076/WebQNTrack/9085/WebQNLocalTrack)\n * [QNLocalVideoTrack](https://developer.qiniu.com/rtc/9076/WebQNTrack/9061/WebQNLocalVideoTrack)\n * [QNCameraVideoTrack](https://developer.qiniu.com/rtc/9076/WebQNTrack/9068/WebQNCameraVideoTrack)\n * [QNScreenVideoTrack](https://developer.qiniu.com/rtc/9076/WebQNTrack/9092/WebQNScreenVideoTrack)\n * [QNCanvasVideoTrack](https://developer.qiniu.com/rtc/9076/WebQNTrack/10320/WebQNCanvasVideoTrack)\n * [QNCustomVideoTrack](https://developer.qiniu.com/rtc/9076/WebQNTrack/9088/WebQNCustomVideoTrack)\n * [QNLocalAudioTrack](https://developer.qiniu.com/rtc/9076/WebQNTrack/9089/WebQNLocalAudioTrack)\n * [QNMicrophoneAudioTrack](https://developer.qiniu.com/rtc/9076/WebQNTrack/9064/WebQNMicrophoneAudioTrack)\n * [QNBufferSourceAudioTrack](https://developer.qiniu.com/rtc/9076/WebQNTrack/9069/WebQNBufferSourceAudioTrack)\n * [QNCustomAudioTrack](https://developer.qiniu.com/rtc/9076/WebQNTrack/9091/WebQNCustomAudioTrack)\n * [QNRemoteTrack](https://developer.qiniu.com/rtc/9076/WebQNTrack/9082/WebQNRemoteTrack)\n * [QNRemoteVideoTrack](https://developer.qiniu.com/rtc/9076/WebQNTrack/9060/WebQNRemoteVideoTrack)\n * [QNRemoteAudioTrack](https://developer.qiniu.com/rtc/9076/WebQNTrack/9087/WebQNRemoteAudioTrack)\n\nIndex\n\nProperties\n\n * trackID\n * userID\n * tag\n * mediaElement\n\nCore Methods\n\n * play\n * isAudio\n * isVideo\n * isMuted\n * getMediaStreamTrack\n\nProperties\n\ntrackID\n\n * trackID?: string\n * Track ID\n\n对于本地 Track,当且仅当 Track 发布后才可获取到有效的 TrackID;对于远端 Track,可随时获取到有效的 TrackID。\n\nuserID\n\n * userID?: string\n * 用户 ID\n\ntag\n\n * tag?: string\n * 自定义 tag 文本\n\nmediaElement\n\n * mediaElement: HTMLVideoElement | HTMLAudioElement | undefined\n * 当前播放元素,当调用 play 方法后,可以通过该属性获取播放的媒体元素\n\nCore Methods\n\nplay\n\n * public play(container: HTMLElement, config?: QNPlayerConfig): Promise<void>\n * 播放当前 track\n\nParameters\n\n * container: HTMLElement \n\n挂载的 html 容器,SDK 会在该容器下创建 audio 或 video 元素来播放 track\n\n * Optional config: [QNPlayerConfig](https://developer.qiniu.com/rtc/9076/WebQNTrack/10141/WebQNPlayerConfig)\n\n播放配置信息\n\nReturns void\n\nisAudio\n\n * isAudio(): boolean\n * 是否是音频\n\nReturns boolean\n\nisVideo\n\n * isVideo(): boolean\n * 是否是视频\n\nReturns boolean\n\nisMuted\n\n * isMuted(): boolean\n * 是否 mute(静音)\n\nReturns boolean\n\ngetMediaStreamTrack\n\n * getMediaStreamTrack(): MediaStreamTrack | undefined\n * 获取浏览器原生的 MediaStreamTrack 对象\n\nReturns MediaStreamTrack | undefined\n\n以上内容是否对您有帮助?\n\n"
}
] | qiniu developer | 1 |
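补充示例(示意,非官方文档内容):针对上一条 QNTrack 记录,下面演示基类通用方法的组合使用:用 isAudio()/isVideo() 区分轨道类型,并通过 getMediaStreamTrack() 取到浏览器原生对象。track 的来源(本地创建或远端订阅)不在本示例范围内,属于假设。

    // 补充示例(示意):对任意 QNTrack 做通用处理
    function describeTrack(track) {
      // 本地 Track 的 trackID 仅在发布后有效,这里做空值保护
      const id = track.trackID || '(未发布)'
      if (track.isAudio()) {
        console.log('音频轨道', id, 'muted:', track.isMuted())
      } else if (track.isVideo()) {
        console.log('视频轨道', id, 'muted:', track.isMuted())
      }
      // 需要直接操作浏览器原生能力时,可取出 MediaStreamTrack
      const raw = track.getMediaStreamTrack()
      if (raw) {
        console.log('原生 MediaStreamTrack 状态:', raw.readyState)
      }
    }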
[
{
"from": "human",
"value": "如何处理七牛云的实时音视频QNLocalTrack"
},
{
"from": "gpt",
"value": "[实时音视频](https://developer.qiniu.com/rtc) > API 文档 > Web >QNLocalTrack\n\n# QNLocalTrack\n\n最近更新时间: 2022-07-14 14:48:46\n\n本地 Track\n\nHierarchy\n\n * EventEmitter \n * [QNTrack](https://developer.qiniu.com/rtc/9085/WebQNLocalTrack/9076/WebQNTrack)\n * QNLocalTrack \n * [QNLocalVideoTrack](https://developer.qiniu.com/rtc/9085/WebQNLocalTrack/9061/WebQNLocalVideoTrack)\n * [QNCameraVideoTrack](https://developer.qiniu.com/rtc/9085/WebQNLocalTrack/9068/WebQNCameraVideoTrack)\n * [QNScreenVideoTrack](https://developer.qiniu.com/rtc/9085/WebQNLocalTrack/9092/WebQNScreenVideoTrack)\n * [QNCustomVideoTrack](https://developer.qiniu.com/rtc/9085/WebQNLocalTrack/9088/WebQNCustomVideoTrack)\n * [QNLocalAudioTrack](https://developer.qiniu.com/rtc/9085/WebQNLocalTrack/9089/WebQNLocalAudioTrack)\n * [QNMicrophoneAudioTrack](https://developer.qiniu.com/rtc/9085/WebQNLocalTrack/9064/WebQNMicrophoneAudioTrack)\n * [QNCustomAudioTrack](https://developer.qiniu.com/rtc/9085/WebQNLocalTrack/9091/WebQNCustomAudioTrack)\n * [QNBufferSourceAudioTrack](https://developer.qiniu.com/rtc/9085/WebQNLocalTrack/9069/WebQNBufferSourceAudioTrack)\n * [QNRemoteTrack](https://developer.qiniu.com/rtc/9085/WebQNLocalTrack/9082/WebQNRemoteTrack)\n * [QNRemoteVideoTrack](https://developer.qiniu.com/rtc/9085/WebQNLocalTrack/9060/WebQNRemoteVideoTrack)\n * [QNRemoteAudioTrack](https://developer.qiniu.com/rtc/9085/WebQNLocalTrack/9087/WebQNRemoteAudioTrack)\n\nIndex\n\nProperties\n\n * trackID\n * userID\n * tag\n * mediaElement\n\nEvent\n\n * ended\n\nCore Methods\n\n * play\n * isAudio\n * isVideo\n * isMuted\n * getMediaStreamTrack\n * setMuted\n * destroy\n\nProperties\n\ntrackID\n\n * trackID?: string\n * Track ID\n\n对于本地 Track,当且仅当 Track 发布后才可获取到有效的 TrackID;对于远端 Track,可随时获取到有效的 TrackID。\n\nuserID\n\n * userID?: string\n * 用户 ID\n\ntag\n\n * tag?: string\n * 自定义 tag 文本\n\nmediaElement\n\n * mediaElement: HTMLVideoElement | HTMLAudioElement | undefined\n * 当前播放元素,当调用 play 方法后,可以通过该属性获取播放的媒体元素\n\nEvent\n\nended\n\n * (): void\n * Track 断开\n \n track.on('ended', function() {\n console.log('ended')\n })\n\nCore Methods\n\nplay\n\n * public play(container: HTMLElement, config?: QNPlayerConfig): Promise<void>\n * 播放当前 track\n\nParameters\n\n * container: HTMLElement \n\n挂载的 html 容器,SDK 会在该容器下创建 audio 或 video 元素来播放 track\n\n * Optional config: [QNPlayerConfig](https://developer.qiniu.com/rtc/9085/WebQNLocalTrack/10141/WebQNPlayerConfig)\n\n播放配置信息\n\nReturns void\n\nisAudio\n\n * isAudio(): boolean\n * 是否是音频\n\nReturns boolean\n\nisVideo\n\n * isVideo(): boolean\n * 是否是视频\n\nReturns boolean\n\nisMuted\n\n * isMuted(): boolean\n * 是否 mute(静音)\n\nReturns boolean\n\ngetMediaStreamTrack\n\n * getMediaStreamTrack(): MediaStreamTrack | undefined\n * 获取浏览器原生的 MediaStreamTrack 对象\n\nReturns MediaStreamTrack | undefined\n\nsetMuted\n\n * setMuted(isMuted: boolean): void\n * 设置是否静默该 track,静默后,SDK 将会发送静默帧数据\n\n当静默状态改变后,远端订阅 track 将会收到 [mute-state-\nchanged](https://developer.qiniu.com/rtc/9085/WebQNLocalTrack/9082) 事件。\n\nParameters\n\n * isMuted: boolean \n\n是否 mute\n\nReturns void\n\ndestroy\n\n * destroy(): void\n * 销毁当前流\n\nReturns void\n\n以上内容是否对您有帮助?\n\n"
}
] | qiniu developer | 1 |
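补充示例(示意,非官方文档内容):下面是上一条 QNLocalTrack 记录中常用接口的最小生命周期示意:播放到页面容器、静默与取消静默、监听 ended 事件后销毁。localTrack 变量与页面容器 local-player 均为假设。

    // 补充示例(示意):本地 Track 的常见生命周期
    async function runLocalTrack(localTrack) {
      const container = document.getElementById('local-player') // 假设页面上存在该容器

      await localTrack.play(container)   // SDK 会在容器下创建 audio/video 元素进行播放

      localTrack.setMuted(true)          // 静默:SDK 改为发送静默帧,远端会收到 mute-state-changed
      localTrack.setMuted(false)         // 取消静默

      localTrack.on('ended', function () {
        // 设备被拔出、屏幕共享被系统停止等情况可能触发 ended
        localTrack.destroy()
      })
    }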
[
{
"from": "human",
"value": "如何处理七牛云的实时音视频QNLocalVideoTrack"
},
{
"from": "gpt",
"value": "[实时音视频](https://developer.qiniu.com/rtc) > API 文档 > Web >QNLocalVideoTrack\n\n# QNLocalVideoTrack\n\n最近更新时间: 2022-07-14 16:09:32\n\n本地视频 Track\n\nHierarchy\n\n * EventEmitter \n * [QNTrack](https://developer.qiniu.com/rtc/9061/WebQNLocalVideoTrack/9076/WebQNTrack)\n * [QNLocalTrack](https://developer.qiniu.com/rtc/9061/WebQNLocalVideoTrack/9085/WebQNLocalTrack)\n * QNLocalVideoTrack \n * [QNCameraVideoTrack](https://developer.qiniu.com/rtc/9061/WebQNLocalVideoTrack/9068/WebQNCameraVideoTrack)\n * [QNScreenVideoTrack](https://developer.qiniu.com/rtc/9061/WebQNLocalVideoTrack/9092/WebQNScreenVideoTrack)\n * [QNCanvasVideoTrack](https://developer.qiniu.com/rtc/9061/WebQNLocalVideoTrack/10320/WebQNCanvasVideoTrack)\n * [QNCustomVideoTrack](https://developer.qiniu.com/rtc/9061/WebQNLocalVideoTrack/9088/WebQNCustomVideoTrack)\n * [QNLocalAudioTrack](https://developer.qiniu.com/rtc/9061/WebQNLocalVideoTrack/9089/WebQNLocalAudioTrack)\n * [QNMicrophoneAudioTrack](https://developer.qiniu.com/rtc/9061/WebQNLocalVideoTrack/9064/WebQNMicrophoneAudioTrack)\n * [QNBufferSourceAudioTrack](https://developer.qiniu.com/rtc/9061/WebQNLocalVideoTrack/9069/WebQNBufferSourceAudioTrack)\n * [QNCustomAudioTrack](https://developer.qiniu.com/rtc/9061/WebQNLocalVideoTrack/9091/WebQNCustomAudioTrack)\n * [QNRemoteTrack](https://developer.qiniu.com/rtc/9061/WebQNLocalVideoTrack/9082/WebQNRemoteTrack)\n * [QNRemoteVideoTrack](https://developer.qiniu.com/rtc/9061/WebQNLocalVideoTrack/9060/WebQNRemoteVideoTrack)\n * [QNRemoteAudioTrack](https://developer.qiniu.com/rtc/9061/WebQNLocalVideoTrack/9087/WebQNRemoteAudioTrack)\n\nIndex\n\nProperties\n\n * trackID\n * userID\n * tag\n * mediaElement\n * isMultiProfileEnabled\n\nEvent\n\n * ended\n\nCore Methods\n\n * play\n * isAudio\n * isVideo\n * isMuted\n * getMediaStreamTrack\n * setMuted\n * destroy\n * getLowStreamConfig\n * getCurrentFrameData\n * getStats\n * setOptimizationMode\n\nProperties\n\ntrackID\n\n * trackID?: string\n * Track ID\n\n对于本地 Track,当且仅当 Track 发布后才可获取到有效的 TrackID;对于远端 Track,可随时获取到有效的 TrackID。\n\nuserID\n\n * userID?: string\n * 用户 ID\n\ntag\n\n * tag?: string\n * 自定义 tag 文本\n\nmediaElement\n\n * mediaElement: HTMLVideoElement | HTMLAudioElement | undefined\n * 当前播放元素,当调用 play 方法后,可以通过该属性获取播放的媒体元素\n\nisMultiProfileEnabled\n\n * isMultiProfileEnabled: boolean\n * 是否开启大小流\n\n> 废弃,请使用\n> [getLowStreamConfig](https://developer.qiniu.com/rtc/9061/WebQNLocalVideoTrack/9061)\n> 替代。\n\nEvent\n\nended\n\n * (): void\n * Track 断开\n \n track.on('ended', function() {\n console.log('ended')\n })\n\nCore Methods\n\nplay\n\n * public play(container: HTMLElement, config?: QNPlayerConfig): Promise<void>\n * 播放当前 track\n\nParameters\n\n * container: HTMLElement \n\n挂载的 html 容器,SDK 会在该容器下创建 audio 或 video 元素来播放 track\n\n * Optional config: [QNPlayerConfig](https://developer.qiniu.com/rtc/9061/WebQNLocalVideoTrack/10141/WebQNPlayerConfig)\n\n播放配置信息\n\nReturns void\n\nisAudio\n\n * isAudio(): boolean\n * 是否是音频\n\nReturns boolean\n\nisVideo\n\n * isVideo(): boolean\n * 是否是视频\n\nReturns boolean\n\nisMuted\n\n * isMuted(): boolean\n * 是否 mute(静音)\n\nReturns boolean\n\ngetMediaStreamTrack\n\n * getMediaStreamTrack(): MediaStreamTrack | undefined\n * 获取浏览器原生的 MediaStreamTrack 对象\n\nReturns MediaStreamTrack | undefined\n\nsetMuted\n\n * setMuted(isMuted: boolean): void\n * 设置是否静默该 track,静默后,SDK 将会发送静默帧数据\n\n当静默状态改变后,远端订阅 track 将会收到 [mute-state-\nchanged](https://developer.qiniu.com/rtc/9061/WebQNLocalVideoTrack/9082) 事件。\n\nParameters\n\n * isMuted: boolean \n\n是否 
mute\n\nReturns void\n\ndestroy\n\n * destroy(): void\n * 销毁当前流\n\nReturns void\n\ngetLowStreamConfig\n\n * getLowStreamConfig(): [QNLowStreamConfig](https://developer.qiniu.com/rtc/9061/WebQNLocalVideoTrack/10582/WebQNLowStreamConfig) | undefined\n * 获取小流配置\n\n>\n> 大小流使用方法可参考[视频大小流使用指南](https://developer.qiniu.com/rtc/9061/WebQNLocalVideoTrack/10566/video-\n> size-flow-web-new)。\n\nReturns [QNLowStreamConfig](https://developer.qiniu.com/rtc/9061/WebQNLocalVideoTrack/10582/WebQNLowStreamConfig) | undefined\n\ngetCurrentFrameData\n\n * getCurrentFrameData(): string\n * 截取当前视频流画面\n\nReturns string\n\ngetStats\n\n * getStats(): [QNLocalVideoTrackStats](https://developer.qiniu.com/rtc/9061/WebQNLocalVideoTrack/9051/WebQNLocalVideoTrackStats)[]\n * 获取流状态数据\n\nReturns\n[QNLocalVideoTrackStats](https://developer.qiniu.com/rtc/9061/WebQNLocalVideoTrack/9051/WebQNLocalVideoTrackStats)\n\nsetOptimizationMode\n\n * setOptimizationMode(mode: [QNVideoOptimizationMode](https://developer.qiniu.com/rtc/9061/WebQNLocalVideoTrack/9079/WebQNVideoOptimizationMode)): boolean\n * 设置传输优化模式\n\nParameters\n\n * mode: [QNVideoOptimizationMode](https://developer.qiniu.com/rtc/9061/WebQNLocalVideoTrack/9079/WebQNVideoOptimizationMode)\n\n视频优化模式\n\nReturns boolean\n\n以上内容是否对您有帮助?\n\n"
}
] | qiniu developer | 1 |
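补充示例(示意,非官方文档内容):上一条 QNLocalVideoTrack 记录中的 getStats() 返回 QNLocalVideoTrackStats 数组(开启小流时会包含多路)。下面演示定时轮询并打印统计数据;具体字段名以 QNLocalVideoTrackStats 的文档为准,这里仅原样输出。

    // 补充示例(示意):每 2 秒打印一次本地视频轨道的上行统计数据
    function startStatsPolling(localVideoTrack) {
      return setInterval(function () {
        const statsList = localVideoTrack.getStats() // 返回数组,开启小流时会包含多路数据
        statsList.forEach(function (stats, index) {
          console.log('local video stats #' + index, stats)
        })
      }, 2000)
    }

    // 用法:const timer = startStatsPolling(track);不再需要时 clearInterval(timer)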
[
{
"from": "human",
"value": "如何处理七牛云的实时音视频QNCameraVideoTrack"
},
{
"from": "gpt",
"value": "[实时音视频](https://developer.qiniu.com/rtc) > API 文档 > Web >QNCameraVideoTrack\n\n# QNCameraVideoTrack\n\n最近更新时间: 2022-07-14 16:07:38\n\n摄像头采集\n\nHierarchy\n\n * EventEmitter \n * [QNTrack](https://developer.qiniu.com/rtc/9068/WebQNCameraVideoTrack/9076/WebQNTrack)\n * [QNLocalTrack](https://developer.qiniu.com/rtc/9068/WebQNCameraVideoTrack/9085/WebQNLocalTrack)\n * [QNLocalVideoTrack](https://developer.qiniu.com/rtc/9068/WebQNCameraVideoTrack/9061/WebQNLocalVideoTrack)\n * QNCameraVideoTrack \n * [QNScreenVideoTrack](https://developer.qiniu.com/rtc/9068/WebQNCameraVideoTrack/9092/WebQNScreenVideoTrack)\n * [QNCanvasVideoTrack](https://developer.qiniu.com/rtc/9068/WebQNCameraVideoTrack/10320/WebQNCanvasVideoTrack)\n * [QNCustomVideoTrack](https://developer.qiniu.com/rtc/9068/WebQNCameraVideoTrack/9088/WebQNCustomVideoTrack)\n * [QNLocalAudioTrack](https://developer.qiniu.com/rtc/9068/WebQNCameraVideoTrack/9089/WebQNLocalAudioTrack)\n * [QNMicrophoneAudioTrack](https://developer.qiniu.com/rtc/9068/WebQNCameraVideoTrack/9064/WebQNMicrophoneAudioTrack)\n * [QNBufferSourceAudioTrack](https://developer.qiniu.com/rtc/9068/WebQNCameraVideoTrack/9069/WebQNBufferSourceAudioTrack)\n * [QNCustomAudioTrack](https://developer.qiniu.com/rtc/9068/WebQNCameraVideoTrack/9091/WebQNCustomAudioTrack)\n * [QNRemoteTrack](https://developer.qiniu.com/rtc/9068/WebQNCameraVideoTrack/9082/WebQNRemoteTrack)\n * [QNRemoteVideoTrack](https://developer.qiniu.com/rtc/9068/WebQNCameraVideoTrack/9060/WebQNRemoteVideoTrack)\n * [QNRemoteAudioTrack](https://developer.qiniu.com/rtc/9068/WebQNCameraVideoTrack/9087/WebQNRemoteAudioTrack)\n\nIndex\n\nProperties\n\n * trackID\n * userID\n * tag\n * mediaElement\n * isMultiProfileEnabled\n\nEvent\n\n * ended\n\nCore Methods\n\n * play\n * isAudio\n * isVideo\n * isMuted\n * getMediaStreamTrack\n * setMuted\n * destroy\n * getLowStreamConfig\n * getCurrentFrameData\n * getStats\n * setOptimizationMode\n\nProperties\n\ntrackID\n\n * trackID?: string\n * Track ID\n\n对于本地 Track,当且仅当 Track 发布后才可获取到有效的 TrackID;对于远端 Track,可随时获取到有效的 TrackID。\n\nuserID\n\n * userID?: string\n * 用户 ID\n\ntag\n\n * tag?: string\n * 自定义 tag 文本\n\nmediaElement\n\n * mediaElement: HTMLVideoElement | HTMLAudioElement | undefined\n * 当前播放元素,当调用 play 方法后,可以通过该属性获取播放的媒体元素\n\nisMultiProfileEnabled\n\n * isMultiProfileEnabled: boolean\n * 是否开启大小流\n\n> 废弃,请使用\n> [getLowStreamConfig](https://developer.qiniu.com/rtc/9068/WebQNCameraVideoTrack/9061)\n> 替代。\n\nEvent\n\nended\n\n * (): void\n * Track 断开\n \n track.on('ended', function() {\n console.log('ended')\n })\n\nCore Methods\n\nplay\n\n * public play(container: HTMLElement, config?: QNPlayerConfig): Promise<void>\n * 播放当前 track\n\nParameters\n\n * container: HTMLElement \n\n挂载的 html 容器,SDK 会在该容器下创建 audio 或 video 元素来播放 track\n\n * Optional config: [QNPlayerConfig](https://developer.qiniu.com/rtc/9068/WebQNCameraVideoTrack/10141/WebQNPlayerConfig)\n\n播放配置信息\n\nReturns void\n\nisAudio\n\n * isAudio(): boolean\n * 是否是音频\n\nReturns boolean\n\nisVideo\n\n * isVideo(): boolean\n * 是否是视频\n\nReturns boolean\n\nisMuted\n\n * isMuted(): boolean\n * 是否 mute(静音)\n\nReturns boolean\n\ngetMediaStreamTrack\n\n * getMediaStreamTrack(): MediaStreamTrack | undefined\n * 获取浏览器原生的 MediaStreamTrack 对象\n\nReturns MediaStreamTrack | undefined\n\nsetMuted\n\n * setMuted(isMuted: boolean): void\n * 设置是否静默该 track,静默后,SDK 将会发送静默帧数据\n\n当静默状态改变后,远端订阅 track 将会收到 [mute-state-\nchanged](https://developer.qiniu.com/rtc/9068/WebQNCameraVideoTrack/9082) 事件。\n\nParameters\n\n * isMuted: 
boolean \n\n是否 mute\n\nReturns void\n\ndestroy\n\n * destroy(): void\n * 销毁当前流\n\nReturns void\n\ngetLowStreamConfig\n\n * getLowStreamConfig(): [QNLowStreamConfig](https://developer.qiniu.com/rtc/9068/WebQNCameraVideoTrack/10582/WebQNLowStreamConfig) | undefined\n * 获取小流配置\n\n>\n> 大小流使用方法可参考[视频大小流使用指南](https://developer.qiniu.com/rtc/9068/WebQNCameraVideoTrack/10566/video-\n> size-flow-web-new)。\n\nReturns [QNLowStreamConfig](https://developer.qiniu.com/rtc/9068/WebQNCameraVideoTrack/10582/WebQNLowStreamConfig) | undefined\n\ngetCurrentFrameData\n\n * getCurrentFrameData(): string\n * 截取当前视频流画面\n\nReturns string\n\ngetStats\n\n * getStats(): [QNLocalVideoTrackStats](https://developer.qiniu.com/rtc/9068/WebQNCameraVideoTrack/9051/WebQNLocalVideoTrackStats)[]\n * 获取流状态数据\n\nReturns\n[QNLocalVideoTrackStats](https://developer.qiniu.com/rtc/9068/WebQNCameraVideoTrack/9051/WebQNLocalVideoTrackStats)\n\nsetOptimizationMode\n\n * setOptimizationMode(mode: [QNVideoOptimizationMode](https://developer.qiniu.com/rtc/9068/WebQNCameraVideoTrack/9079/WebQNVideoOptimizationMode)): boolean\n * 设置传输优化模式\n\nParameters\n\n * mode: [QNVideoOptimizationMode](https://developer.qiniu.com/rtc/9068/WebQNCameraVideoTrack/9079/WebQNVideoOptimizationMode)\n\n视频优化模式\n\nReturns boolean\n\n以上内容是否对您有帮助?\n\n"
}
] | qiniu developer | 1 |
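补充示例(示意,非官方文档内容):QNCameraVideoTrack 继承了 getCurrentFrameData(),可用于给当前画面截图。文档中其返回类型为 string,下面的写法假设该字符串可以直接用作 img.src 的截图数据;若实际格式不同,请按 SDK 文档处理。

    // 补充示例(示意):截取摄像头当前画面并展示
    function snapshot(cameraTrack) {
      const frameData = cameraTrack.getCurrentFrameData()
      if (frameData) {
        const img = document.createElement('img')
        img.src = frameData          // 假设返回值可直接用作 img.src
        document.body.appendChild(img)
      }
    }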
[
{
"from": "human",
"value": "如何处理七牛云的实时音视频QNScreenVideoTrack"
},
{
"from": "gpt",
"value": "[实时音视频](https://developer.qiniu.com/rtc) > API 文档 > Web >QNScreenVideoTrack\n\n# QNScreenVideoTrack\n\n最近更新时间: 2022-07-14 16:10:19\n\n屏幕采集 Track\n\nHierarchy\n\n * EventEmitter \n * [QNTrack](https://developer.qiniu.com/rtc/9092/WebQNScreenVideoTrack/9076/WebQNTrack)\n * [QNLocalTrack](https://developer.qiniu.com/rtc/9092/WebQNScreenVideoTrack/9085/WebQNLocalTrack)\n * [QNLocalVideoTrack](https://developer.qiniu.com/rtc/9092/WebQNScreenVideoTrack/9061/WebQNLocalVideoTrack)\n * [QNCameraVideoTrack](https://developer.qiniu.com/rtc/9092/WebQNScreenVideoTrack/9068/WebQNCameraVideoTrack)\n * QNScreenVideoTrack \n * [QNCanvasVideoTrack](https://developer.qiniu.com/rtc/9092/WebQNScreenVideoTrack/10320/WebQNCanvasVideoTrack)\n * [QNCustomVideoTrack](https://developer.qiniu.com/rtc/9092/WebQNScreenVideoTrack/9088/WebQNCustomVideoTrack)\n * [QNLocalAudioTrack](https://developer.qiniu.com/rtc/9092/WebQNScreenVideoTrack/9089/WebQNLocalAudioTrack)\n * [QNMicrophoneAudioTrack](https://developer.qiniu.com/rtc/9092/WebQNScreenVideoTrack/9064/WebQNMicrophoneAudioTrack)\n * [QNBufferSourceAudioTrack](https://developer.qiniu.com/rtc/9092/WebQNScreenVideoTrack/9069/WebQNBufferSourceAudioTrack)\n * [QNCustomAudioTrack](https://developer.qiniu.com/rtc/9092/WebQNScreenVideoTrack/9091/WebQNCustomAudioTrack)\n * [QNRemoteTrack](https://developer.qiniu.com/rtc/9092/WebQNScreenVideoTrack/9082/WebQNRemoteTrack)\n * [QNRemoteVideoTrack](https://developer.qiniu.com/rtc/9092/WebQNScreenVideoTrack/9060/WebQNRemoteVideoTrack)\n * [QNRemoteAudioTrack](https://developer.qiniu.com/rtc/9092/WebQNScreenVideoTrack/9087/WebQNRemoteAudioTrack)\n\nIndex\n\nProperties\n\n * trackID\n * userID\n * tag\n * mediaElement\n * isMultiProfileEnabled\n\nEvent\n\n * ended\n\nCore Methods\n\n * play\n * isAudio\n * isVideo\n * isMuted\n * getMediaStreamTrack\n * setMuted\n * destroy\n * getLowStreamConfig\n * getCurrentFrameData\n * getStats\n * setOptimizationMode\n\nProperties\n\ntrackID\n\n * trackID?: string\n * Track ID\n\n对于本地 Track,当且仅当 Track 发布后才可获取到有效的 TrackID;对于远端 Track,可随时获取到有效的 TrackID。\n\nuserID\n\n * userID?: string\n * 用户 ID\n\ntag\n\n * tag?: string\n * 自定义 tag 文本\n\nmediaElement\n\n * mediaElement: HTMLVideoElement | HTMLAudioElement | undefined\n * 当前播放元素,当调用 play 方法后,可以通过该属性获取播放的媒体元素\n\nisMultiProfileEnabled\n\n * isMultiProfileEnabled: boolean\n * 是否开启大小流\n\n> 废弃,请使用\n> [getLowStreamConfig](https://developer.qiniu.com/rtc/9092/WebQNScreenVideoTrack/9061)\n> 替代。\n\nEvent\n\nended\n\n * (): void\n * Track 断开\n \n track.on('ended', function() {\n console.log('ended')\n })\n\nCore Methods\n\nplay\n\n * public play(container: HTMLElement, config?: QNPlayerConfig): Promise<void>\n * 播放当前 track\n\nParameters\n\n * container: HTMLElement \n\n挂载的 html 容器,SDK 会在该容器下创建 audio 或 video 元素来播放 track\n\n * Optional config: [QNPlayerConfig](https://developer.qiniu.com/rtc/9092/WebQNScreenVideoTrack/10141/WebQNPlayerConfig)\n\n播放配置信息\n\nReturns void\n\nisAudio\n\n * isAudio(): boolean\n * 是否是音频\n\nReturns boolean\n\nisVideo\n\n * isVideo(): boolean\n * 是否是视频\n\nReturns boolean\n\nisMuted\n\n * isMuted(): boolean\n * 是否 mute(静音)\n\nReturns boolean\n\ngetMediaStreamTrack\n\n * getMediaStreamTrack(): MediaStreamTrack | undefined\n * 获取浏览器原生的 MediaStreamTrack 对象\n\nReturns MediaStreamTrack | undefined\n\nsetMuted\n\n * setMuted(isMuted: boolean): void\n * 设置是否静默该 track,静默后,SDK 将会发送静默帧数据\n\n当静默状态改变后,远端订阅 track 将会收到 [mute-state-\nchanged](https://developer.qiniu.com/rtc/9092/WebQNScreenVideoTrack/9082) 事件。\n\nParameters\n\n * isMuted: 
boolean \n\n是否 mute\n\nReturns void\n\ndestroy\n\n * destroy(): void\n * 销毁当前流\n\nReturns void\n\ngetLowStreamConfig\n\n * getLowStreamConfig(): [QNLowStreamConfig](https://developer.qiniu.com/rtc/9092/WebQNScreenVideoTrack/10582/WebQNLowStreamConfig) | undefined\n * 获取小流配置\n\n>\n> 大小流使用方法可参考[视频大小流使用指南](https://developer.qiniu.com/rtc/9092/WebQNScreenVideoTrack/10566/video-\n> size-flow-web-new)。\n\nReturns [QNLowStreamConfig](https://developer.qiniu.com/rtc/9092/WebQNScreenVideoTrack/10582/WebQNLowStreamConfig) | undefined\n\ngetCurrentFrameData\n\n * getCurrentFrameData(): string\n * 截取当前视频流画面\n\nReturns string\n\ngetStats\n\n * getStats(): [QNLocalVideoTrackStats](https://developer.qiniu.com/rtc/9092/WebQNScreenVideoTrack/9051/WebQNLocalVideoTrackStats)[]\n * 获取流状态数据\n\nReturns\n[QNLocalVideoTrackStats](https://developer.qiniu.com/rtc/9092/WebQNScreenVideoTrack/9051/WebQNLocalVideoTrackStats)\n\nsetOptimizationMode\n\n * setOptimizationMode(mode: [QNVideoOptimizationMode](https://developer.qiniu.com/rtc/9092/WebQNScreenVideoTrack/9079/WebQNVideoOptimizationMode)): boolean\n * 设置传输优化模式\n\nParameters\n\n * mode: [QNVideoOptimizationMode](https://developer.qiniu.com/rtc/9092/WebQNScreenVideoTrack/9079/WebQNVideoOptimizationMode)\n\n视频优化模式\n\nReturns boolean\n\n以上内容是否对您有帮助?\n\n"
}
] | qiniu developer | 1 |
[
{
"from": "human",
"value": "如何处理七牛云的实时音视频QNCanvasVideoTrack"
},
{
"from": "gpt",
"value": "[实时音视频](https://developer.qiniu.com/rtc) > API 文档 > Web >QNCanvasVideoTrack\n\n# QNCanvasVideoTrack\n\n最近更新时间: 2022-07-14 16:08:27\n\n画布 Track,数据源支持 url、ArrayBuffer 或者 File 对象\n\nHierarchy\n\n * EventEmitter \n * [QNTrack](https://developer.qiniu.com/rtc/10320/WebQNCanvasVideoTrack/9076/WebQNTrack)\n * [QNLocalTrack](https://developer.qiniu.com/rtc/10320/WebQNCanvasVideoTrack/9085/WebQNLocalTrack)\n * [QNLocalVideoTrack](https://developer.qiniu.com/rtc/10320/WebQNCanvasVideoTrack/9061/WebQNLocalVideoTrack)\n * [QNCameraVideoTrack](https://developer.qiniu.com/rtc/10320/WebQNCanvasVideoTrack/9068/WebQNCameraVideoTrack)\n * [QNScreenVideoTrack](https://developer.qiniu.com/rtc/10320/WebQNCanvasVideoTrack/9092/WebQNScreenVideoTrack)\n * QNCanvasVideoTrack \n * [QNCustomVideoTrack](https://developer.qiniu.com/rtc/10320/WebQNCanvasVideoTrack/9088/WebQNCustomVideoTrack)\n * [QNLocalAudioTrack](https://developer.qiniu.com/rtc/10320/WebQNCanvasVideoTrack/9089/WebQNLocalAudioTrack)\n * [QNMicrophoneAudioTrack](https://developer.qiniu.com/rtc/10320/WebQNCanvasVideoTrack/9064/WebQNMicrophoneAudioTrack)\n * [QNBufferSourceAudioTrack](https://developer.qiniu.com/rtc/10320/WebQNCanvasVideoTrack/9069/WebQNBufferSourceAudioTrack)\n * [QNCustomAudioTrack](https://developer.qiniu.com/rtc/10320/WebQNCanvasVideoTrack/9091/WebQNCustomAudioTrack)\n * [QNRemoteTrack](https://developer.qiniu.com/rtc/10320/WebQNCanvasVideoTrack/9082/WebQNRemoteTrack)\n * [QNRemoteVideoTrack](https://developer.qiniu.com/rtc/10320/WebQNCanvasVideoTrack/9060/WebQNRemoteVideoTrack)\n * [QNRemoteAudioTrack](https://developer.qiniu.com/rtc/10320/WebQNCanvasVideoTrack/9087/WebQNRemoteAudioTrack)\n\nIndex\n\nProperties\n\n * trackID\n * userID\n * tag\n * mediaElement\n * isMultiProfileEnabled\n\nEvent\n\n * ended\n\nCore Methods\n\n * play\n * isAudio\n * isVideo\n * isMuted\n * getMediaStreamTrack\n * setMuted\n * destroy\n * getLowStreamConfig\n * getCurrentFrameData\n * getStats\n * setOptimizationMode\n * updateSources\n\nProperties\n\ntrackID\n\n * trackID?: string\n * Track ID\n\n对于本地 Track,当且仅当 Track 发布后才可获取到有效的 TrackID;对于远端 Track,可随时获取到有效的 TrackID。\n\nuserID\n\n * userID?: string\n * 用户 ID\n\ntag\n\n * tag?: string\n * 自定义 tag 文本\n\nmediaElement\n\n * mediaElement: HTMLVideoElement | HTMLAudioElement | undefined\n * 当前播放元素,当调用 play 方法后,可以通过该属性获取播放的媒体元素\n\nisMultiProfileEnabled\n\n * isMultiProfileEnabled: boolean\n * 是否开启大小流\n\n> 废弃,请使用\n> [getLowStreamConfig](https://developer.qiniu.com/rtc/10320/WebQNCanvasVideoTrack/9061)\n> 替代。\n\nEvent\n\nended\n\n * (): void\n * Track 断开\n \n track.on('ended', function() {\n console.log('ended')\n })\n\nCore Methods\n\nplay\n\n * public play(container: HTMLElement, config?: QNPlayerConfig): Promise<void>\n * 播放当前 track\n\nParameters\n\n * container: HTMLElement \n\n挂载的 html 容器,SDK 会在该容器下创建 audio 或 video 元素来播放 track\n\n * Optional config: [QNPlayerConfig](https://developer.qiniu.com/rtc/10320/WebQNCanvasVideoTrack/10141/WebQNPlayerConfig)\n\n播放配置信息\n\nReturns void\n\nisAudio\n\n * isAudio(): boolean\n * 是否是音频\n\nReturns boolean\n\nisVideo\n\n * isVideo(): boolean\n * 是否是视频\n\nReturns boolean\n\nisMuted\n\n * isMuted(): boolean\n * 是否 mute(静音)\n\nReturns boolean\n\ngetMediaStreamTrack\n\n * getMediaStreamTrack(): MediaStreamTrack | undefined\n * 获取浏览器原生的 MediaStreamTrack 对象\n\nReturns MediaStreamTrack | undefined\n\nsetMuted\n\n * setMuted(isMuted: boolean): void\n * 设置是否静默该 track,静默后,SDK 将会发送静默帧数据\n\n当静默状态改变后,远端订阅 track 将会收到 
[mute-state-\nchanged](https://developer.qiniu.com/rtc/10320/WebQNCanvasVideoTrack/9082) 事件。\n\nParameters\n\n * isMuted: boolean \n\n是否 mute\n\nReturns void\n\ndestroy\n\n * destroy(): void\n * 销毁当前流\n\nReturns void\n\ngetLowStreamConfig\n\n * getLowStreamConfig(): [QNLowStreamConfig](https://developer.qiniu.com/rtc/10320/WebQNCanvasVideoTrack/10582/WebQNLowStreamConfig) | undefined\n * 获取小流配置\n\n>\n> 大小流使用方法可参考[视频大小流使用指南](https://developer.qiniu.com/rtc/10320/WebQNCanvasVideoTrack/10566/video-\n> size-flow-web-new)。\n\nReturns [QNLowStreamConfig](https://developer.qiniu.com/rtc/10320/WebQNCanvasVideoTrack/10582/WebQNLowStreamConfig) | undefined\n\ngetCurrentFrameData\n\n * getCurrentFrameData(): string\n * 截取当前视频流画面\n\nReturns string\n\ngetStats\n\n * getStats(): [QNLocalVideoTrackStats](https://developer.qiniu.com/rtc/10320/WebQNCanvasVideoTrack/9051/WebQNLocalVideoTrackStats)[]\n * 获取流状态数据\n\nReturns\n[QNLocalVideoTrackStats](https://developer.qiniu.com/rtc/10320/WebQNCanvasVideoTrack/9051/WebQNLocalVideoTrackStats)\n\nsetOptimizationMode\n\n * setOptimizationMode(mode: [QNVideoOptimizationMode](https://developer.qiniu.com/rtc/10320/WebQNCanvasVideoTrack/9079/WebQNVideoOptimizationMode)): boolean\n * 设置传输优化模式\n\nParameters\n\n * mode: [QNVideoOptimizationMode](https://developer.qiniu.com/rtc/10320/WebQNCanvasVideoTrack/9079/WebQNVideoOptimizationMode)\n\n视频优化模式\n\nReturns boolean\n\nupdateSources\n\n * updateSources(sources: QNCanvasVideoTrackSource[]): Promise<void>\n * 更新画布内容\n\nParameters\n\n * sources: [QNCanvasVideoTrackSource](https://developer.qiniu.com/rtc/10320/WebQNCanvasVideoTrack/10269/WebQNCanvasVideoTrackSource)\n\nReturns Promise<void>\n\n以上内容是否对您有帮助?\n\n"
}
] | qiniu developer | 1 |
[
{
"from": "human",
"value": "如何处理七牛云的实时音视频QNCustomVideoTrack"
},
{
"from": "gpt",
"value": "[实时音视频](https://developer.qiniu.com/rtc) > API 文档 > Web >QNCustomVideoTrack\n\n# QNCustomVideoTrack\n\n最近更新时间: 2022-07-14 16:09:05\n\n自定义视频 Track\n\nHierarchy\n\n * EventEmitter \n * [QNTrack](https://developer.qiniu.com/rtc/9088/WebQNCustomVideoTrack/9076/WebQNTrack)\n * [QNLocalTrack](https://developer.qiniu.com/rtc/9088/WebQNCustomVideoTrack/9085/WebQNLocalTrack)\n * [QNLocalVideoTrack](https://developer.qiniu.com/rtc/9088/WebQNCustomVideoTrack/9061/WebQNLocalVideoTrack)\n * [QNCameraVideoTrack](https://developer.qiniu.com/rtc/9088/WebQNCustomVideoTrack/9068/WebQNCameraVideoTrack)\n * [QNScreenVideoTrack](https://developer.qiniu.com/rtc/9088/WebQNCustomVideoTrack/9092/WebQNScreenVideoTrack)\n * [QNCanvasVideoTrack](https://developer.qiniu.com/rtc/9088/WebQNCustomVideoTrack/10320/WebQNCanvasVideoTrack)\n * QNCustomVideoTrack \n * [QNLocalAudioTrack](https://developer.qiniu.com/rtc/9088/WebQNCustomVideoTrack/9089/WebQNLocalAudioTrack)\n * [QNMicrophoneAudioTrack](https://developer.qiniu.com/rtc/9088/WebQNCustomVideoTrack/9064/WebQNMicrophoneAudioTrack)\n * [QNBufferSourceAudioTrack](https://developer.qiniu.com/rtc/9088/WebQNCustomVideoTrack/9069/WebQNBufferSourceAudioTrack)\n * [QNCustomAudioTrack](https://developer.qiniu.com/rtc/9088/WebQNCustomVideoTrack/9091/WebQNCustomAudioTrack)\n * [QNRemoteTrack](https://developer.qiniu.com/rtc/9088/WebQNCustomVideoTrack/9082/WebQNRemoteTrack)\n * [QNRemoteVideoTrack](https://developer.qiniu.com/rtc/9088/WebQNCustomVideoTrack/9060/WebQNRemoteVideoTrack)\n * [QNRemoteAudioTrack](https://developer.qiniu.com/rtc/9088/WebQNCustomVideoTrack/9087/WebQNRemoteAudioTrack)\n\nIndex\n\nProperties\n\n * trackID\n * userID\n * tag\n * mediaElement\n * isMultiProfileEnabled\n\nEvent\n\n * ended\n\nCore Methods\n\n * play\n * isAudio\n * isVideo\n * isMuted\n * getMediaStreamTrack\n * setMuted\n * destroy\n * getLowStreamConfig\n * getCurrentFrameData\n * getStats\n * setOptimizationMode\n\nProperties\n\ntrackID\n\n * trackID?: string\n * Track ID\n\n对于本地 Track,当且仅当 Track 发布后才可获取到有效的 TrackID;对于远端 Track,可随时获取到有效的 TrackID。\n\nuserID\n\n * userID?: string\n * 用户 ID\n\ntag\n\n * tag?: string\n * 自定义 tag 文本\n\nmediaElement\n\n * mediaElement: HTMLVideoElement | HTMLAudioElement | undefined\n * 当前播放元素,当调用 play 方法后,可以通过该属性获取播放的媒体元素\n\nisMultiProfileEnabled\n\n * isMultiProfileEnabled: boolean\n * 是否开启大小流\n\n> 废弃,请使用\n> [getLowStreamConfig](https://developer.qiniu.com/rtc/9088/WebQNCustomVideoTrack/9061)\n> 替代。\n\nEvent\n\nended\n\n * (): void\n * Track 断开\n \n track.on('ended', function() {\n console.log('ended')\n })\n\nCore Methods\n\nplay\n\n * public play(container: HTMLElement, config?: QNPlayerConfig): Promise<void>\n * 播放当前 track\n\nParameters\n\n * container: HTMLElement \n\n挂载的 html 容器,SDK 会在该容器下创建 audio 或 video 元素来播放 track\n\n * Optional config: [QNPlayerConfig](https://developer.qiniu.com/rtc/9088/WebQNCustomVideoTrack/10141/WebQNPlayerConfig)\n\n播放配置信息\n\nReturns void\n\nisAudio\n\n * isAudio(): boolean\n * 是否是音频\n\nReturns boolean\n\nisVideo\n\n * isVideo(): boolean\n * 是否是视频\n\nReturns boolean\n\nisMuted\n\n * isMuted(): boolean\n * 是否 mute(静音)\n\nReturns boolean\n\ngetMediaStreamTrack\n\n * getMediaStreamTrack(): MediaStreamTrack | undefined\n * 获取浏览器原生的 MediaStreamTrack 对象\n\nReturns MediaStreamTrack | undefined\n\nsetMuted\n\n * setMuted(isMuted: boolean): void\n * 设置是否静默该 track,静默后,SDK 将会发送静默帧数据\n\n当静默状态改变后,远端订阅 track 将会收到 [mute-state-\nchanged](https://developer.qiniu.com/rtc/9088/WebQNCustomVideoTrack/9082) 事件。\n\nParameters\n\n * isMuted: 
boolean \n\n是否 mute\n\nReturns void\n\ndestroy\n\n * destroy(): void\n * 销毁当前流\n\nReturns void\n\ngetLowStreamConfig\n\n * getLowStreamConfig(): [QNLowStreamConfig](https://developer.qiniu.com/rtc/9088/WebQNCustomVideoTrack/10582/WebQNLowStreamConfig) | undefined\n * 获取小流配置\n\n>\n> 大小流使用方法可参考[视频大小流使用指南](https://developer.qiniu.com/rtc/9088/WebQNCustomVideoTrack/10566/video-\n> size-flow-web-new)。\n\nReturns [QNLowStreamConfig](https://developer.qiniu.com/rtc/9088/WebQNCustomVideoTrack/10582/WebQNLowStreamConfig) | undefined\n\ngetCurrentFrameData\n\n * getCurrentFrameData(): string\n * 截取当前视频流画面\n\nReturns string\n\ngetStats\n\n * getStats(): [QNLocalVideoTrackStats](https://developer.qiniu.com/rtc/9088/WebQNCustomVideoTrack/9051/WebQNLocalVideoTrackStats)[]\n * 获取流状态数据\n\nReturns\n[QNLocalVideoTrackStats](https://developer.qiniu.com/rtc/9088/WebQNCustomVideoTrack/9051/WebQNLocalVideoTrackStats)\n\nsetOptimizationMode\n\n * setOptimizationMode(mode: [QNVideoOptimizationMode](https://developer.qiniu.com/rtc/9088/WebQNCustomVideoTrack/9079/WebQNVideoOptimizationMode)): boolean\n * 设置传输优化模式\n\nParameters\n\n * mode: [QNVideoOptimizationMode](https://developer.qiniu.com/rtc/9088/WebQNCustomVideoTrack/9079/WebQNVideoOptimizationMode)\n\n视频优化模式\n\nReturns boolean\n\n以上内容是否对您有帮助?\n\n"
}
] | qiniu developer | 1 |
[
{
"from": "human",
"value": "如何处理七牛云的实时音视频QNLocalAudioTrack"
},
{
"from": "gpt",
"value": "[实时音视频](https://developer.qiniu.com/rtc) > API 文档 > Web >QNLocalAudioTrack\n\n# QNLocalAudioTrack\n\n最近更新时间: 2022-07-14 14:52:30\n\n本地音频 Track\n\nHierarchy\n\n * EventEmitter \n * [QNTrack](https://developer.qiniu.com/rtc/9089/WebQNLocalAudioTrack/9076/WebQNTrack)\n * [QNLocalTrack](https://developer.qiniu.com/rtc/9089/WebQNLocalAudioTrack/9085/WebQNLocalTrack)\n * [QNLocalVideoTrack](https://developer.qiniu.com/rtc/9089/WebQNLocalAudioTrack/9061/WebQNLocalVideoTrack)\n * [QNCameraVideoTrack](https://developer.qiniu.com/rtc/9089/WebQNLocalAudioTrack/9068/WebQNCameraVideoTrack)\n * [QNScreenVideoTrack](https://developer.qiniu.com/rtc/9089/WebQNLocalAudioTrack/9092/WebQNScreenVideoTrack)\n * [QNCanvasVideoTrack](https://developer.qiniu.com/rtc/9089/WebQNLocalAudioTrack/10320/WebQNCanvasVideoTrack)\n * [QNCustomVideoTrack](https://developer.qiniu.com/rtc/9089/WebQNLocalAudioTrack/9088/WebQNCustomVideoTrack)\n * QNLocalAudioTrack \n * [QNMicrophoneAudioTrack](https://developer.qiniu.com/rtc/9089/WebQNLocalAudioTrack/9064/WebQNMicrophoneAudioTrack)\n * [QNBufferSourceAudioTrack](https://developer.qiniu.com/rtc/9089/WebQNLocalAudioTrack/9069/WebQNBufferSourceAudioTrack)\n * [QNCustomAudioTrack](https://developer.qiniu.com/rtc/9089/WebQNLocalAudioTrack/9091/WebQNCustomAudioTrack)\n * [QNRemoteTrack](https://developer.qiniu.com/rtc/9089/WebQNLocalAudioTrack/9082/WebQNRemoteTrack)\n * [QNRemoteVideoTrack](https://developer.qiniu.com/rtc/9089/WebQNLocalAudioTrack/9060/WebQNRemoteVideoTrack)\n * [QNRemoteAudioTrack](https://developer.qiniu.com/rtc/9089/WebQNLocalAudioTrack/9087/WebQNRemoteAudioTrack)\n\nIndex\n\nProperties\n\n * trackID\n * userID\n * tag\n * mediaElement\n\nEvent\n\n * ended\n * audioFrame\n\nCore Methods\n\n * play\n * isAudio\n * isVideo\n * isMuted\n * getMediaStreamTrack\n * setMuted\n * destroy\n * setVolume\n * setPlaybackDevice\n * getVolumeLevel\n * getCurrentTimeDomainData\n * getCurrentFrequencyDomainData\n * getStats\n\nProperties\n\ntrackID\n\n * trackID?: string\n * Track ID\n\n对于本地 Track,当且仅当 Track 发布后才可获取到有效的 TrackID;对于远端 Track,可随时获取到有效的 TrackID。\n\nuserID\n\n * userID?: string\n * 用户 ID\n\ntag\n\n * tag?: string\n * 自定义 tag 文本\n\nmediaElement\n\n * mediaElement: HTMLVideoElement | HTMLAudioElement | undefined\n * 当前播放元素,当调用 play 方法后,可以通过该属性获取播放的媒体元素\n\nEvent\n\nended\n\n * (): void\n * Track 断开\n \n track.on('ended', function() {\n console.log('ended')\n })\n\naudioFrame\n\n * (buffer: AudioBuffer): void\n * 原始音频数据返回\n \n track.on('audioFrame', function(buffer) {\n for (let channel = 0; channel < buffer.numberOfChannels; channel += 1) {\n console.log('audioFrame', channel, currentChannelData);\n } \n })\n\nParameters\n\n * buffer: AudioBuffer \n\n本地音频轨道的音频帧\n\nCore Methods\n\nplay\n\n * public play(container: HTMLElement, config?: QNPlayerConfig): Promise<void>\n * 播放当前 track\n\nParameters\n\n * container: HTMLElement \n\n挂载的 html 容器,SDK 会在该容器下创建 audio 或 video 元素来播放 track\n\n * Optional config: [QNPlayerConfig](https://developer.qiniu.com/rtc/9089/WebQNLocalAudioTrack/10141/WebQNPlayerConfig)\n\n播放配置信息\n\nReturns void\n\nisAudio\n\n * isAudio(): boolean\n * 是否是音频\n\nReturns boolean\n\nisVideo\n\n * isVideo(): boolean\n * 是否是视频\n\nReturns boolean\n\nisMuted\n\n * isMuted(): boolean\n * 是否 mute(静音)\n\nReturns boolean\n\ngetMediaStreamTrack\n\n * getMediaStreamTrack(): MediaStreamTrack | undefined\n * 获取浏览器原生的 MediaStreamTrack 对象\n\nReturns MediaStreamTrack | undefined\n\nsetMuted\n\n * setMuted(isMuted: boolean): void\n * 设置是否静默该 track,静默后,SDK 
将会发送静默帧数据\n\n当静默状态改变后,远端订阅 track 将会收到 [mute-state-\nchanged](https://developer.qiniu.com/rtc/9089/WebQNLocalAudioTrack/9082) 事件。\n\nParameters\n\n * isMuted: boolean \n\n是否 mute\n\nReturns void\n\ndestroy\n\n * destroy(): void\n * 销毁当前流\n\nReturns void\n\nsetVolume\n\n * setVolume(volume: number): void\n * 设置音量\n\n> 音量值在 0 - 1 之间为软件缩小;1 为原始音量;大于 1 且小于 100 为软件放大。\n>\n> 该设置可以用于适度对采集音量做放大或者缩小。\n\nParameters\n\n * volume: number \n\n音量值范围为 0 - 100,默认值为 1\n\nReturns void\n\nsetPlaybackDevice\n\n * setPlaybackDevice(deviceId: string): Promise<void>\n * 设置音频输出设备\n\nParameters\n\n * deviceId: string \n\n设备 deviceId,可以从\n[QNRTC.getPlaybackDevices](https://developer.qiniu.com/rtc/9089/WebQNLocalAudioTrack/9070)\n接口中获取\n\nReturns Promise<void>\n\ngetVolumeLevel\n\n * getVolumeLevel(): string | undefined\n * 获取当前音量\n\n> 可通过定时调用该方法获取音量值的方式来判断当前用户是否在讲话。\n>\n> 在安静的环境下,获取到大于 0 的数值为预期现象,您可根据您的需求自行决定判断的阈值。\n\nReturns string | undefined\n\ngetCurrentTimeDomainData\n\n * getCurrentTimeDomainData(): Uint8Array | undefined\n * 获取当前时域数据\n\nReturns Uint8Array | undefined\n\ngetCurrentFrequencyDomainData\n\n * getCurrentFrequencyDomainData(): Uint8Array | undefined\n * 获取当前频域数据\n\nReturns Uint8Array | undefined\n\ngetStats\n\n * getStats(): [QNLocalAudioTrackStats](https://developer.qiniu.com/rtc/9089/WebQNLocalAudioTrack/9083/WebQNLocalAudioTrackStats)\n * 获取流状态数据\n\nReturns\n[QNLocalAudioTrackStats](https://developer.qiniu.com/rtc/9089/WebQNLocalAudioTrack/9083/WebQNLocalAudioTrackStats)\n\n以上内容是否对您有帮助?\n\n"
}
] | qiniu developer | 1 |
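补充示例(示意,非官方文档内容):上一条 QNLocalAudioTrack 记录中 audioFrame 事件示例里的 currentChannelData 需要先通过 buffer.getChannelData(channel) 获取(buffer 是标准的 AudioBuffer,各声道数据为 Float32Array)。下面给出一个完整可运行的写法;localAudioTrack 为假设的本地音频 Track 变量。

    // 补充示例:逐声道读取本地音频帧数据
    localAudioTrack.on('audioFrame', function (buffer) {
      for (let channel = 0; channel < buffer.numberOfChannels; channel += 1) {
        const currentChannelData = buffer.getChannelData(channel) // Float32Array 形式的 PCM 数据
        console.log('audioFrame', channel, currentChannelData.length)
      }
    })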
[
{
"from": "human",
"value": "如何处理七牛云的实时音视频QNMicrophoneAudioTrack"
},
{
"from": "gpt",
"value": "[实时音视频](https://developer.qiniu.com/rtc) > API 文档 > Web\n>QNMicrophoneAudioTrack\n\n# QNMicrophoneAudioTrack\n\n最近更新时间: 2022-12-01 18:13:29\n\n本地麦克风 Track\n\nHierarchy\n\n * EventEmitter \n * [QNTrack](https://developer.qiniu.com/rtc/9064/WebQNMicrophoneAudioTrack/9076/WebQNTrack)\n * [QNLocalTrack](https://developer.qiniu.com/rtc/9064/WebQNMicrophoneAudioTrack/9085/WebQNLocalTrack)\n * [QNLocalVideoTrack](https://developer.qiniu.com/rtc/9064/WebQNMicrophoneAudioTrack/9061/WebQNLocalVideoTrack)\n * [QNCameraVideoTrack](https://developer.qiniu.com/rtc/9064/WebQNMicrophoneAudioTrack/9068/WebQNCameraVideoTrack)\n * [QNScreenVideoTrack](https://developer.qiniu.com/rtc/9064/WebQNMicrophoneAudioTrack/9092/WebQNScreenVideoTrack)\n * [QNCanvasVideoTrack](https://developer.qiniu.com/rtc/9064/WebQNMicrophoneAudioTrack/10320/WebQNCanvasVideoTrack)\n * [QNCustomVideoTrack](https://developer.qiniu.com/rtc/9064/WebQNMicrophoneAudioTrack/9088/WebQNCustomVideoTrack)\n * [QNLocalAudioTrack](https://developer.qiniu.com/rtc/9064/WebQNMicrophoneAudioTrack/9089/WebQNLocalAudioTrack)\n * QNMicrophoneAudioTrack \n * [QNBufferSourceAudioTrack](https://developer.qiniu.com/rtc/9064/WebQNMicrophoneAudioTrack/9069/WebQNBufferSourceAudioTrack)\n * [QNCustomAudioTrack](https://developer.qiniu.com/rtc/9064/WebQNMicrophoneAudioTrack/9091/WebQNCustomAudioTrack)\n * [QNRemoteTrack](https://developer.qiniu.com/rtc/9064/WebQNMicrophoneAudioTrack/9082/WebQNRemoteTrack)\n * [QNRemoteVideoTrack](https://developer.qiniu.com/rtc/9064/WebQNMicrophoneAudioTrack/9060/WebQNRemoteVideoTrack)\n * [QNRemoteAudioTrack](https://developer.qiniu.com/rtc/9064/WebQNMicrophoneAudioTrack/9087/WebQNRemoteAudioTrack)\n\nIndex\n\nProperties\n\n * trackID\n * userID\n * tag\n * mediaElement\n\nEvent\n\n * ended\n * audioFrame\n\nCore Methods\n\n * createAudioMusicMixer\n * createAudioEffectMixer\n * isEarMonitorEnabled\n * setEarMonitorEnabled\n * setPlayingVolume\n * play\n * isAudio\n * isVideo\n * isMuted\n * getMediaStreamTrack\n * setMuted\n * destroy\n * setVolume\n * setPlaybackDevice\n * getVolumeLevel\n * getCurrentTimeDomainData\n * getCurrentFrequencyDomainData\n * getStats\n * createAudioMixer\n\nProperties\n\ntrackID\n\n * trackID?: string\n * Track ID\n\n对于本地 Track,当且仅当 Track 发布后才可获取到有效的 TrackID;对于远端 Track,可随时获取到有效的 TrackID。\n\nuserID\n\n * userID?: string\n * 用户 ID\n\ntag\n\n * tag?: string\n * 自定义 tag 文本\n\nmediaElement\n\n * mediaElement: HTMLVideoElement | HTMLAudioElement | undefined\n * 当前播放元素,当调用 play 方法后,可以通过该属性获取播放的媒体元素\n\nEvent\n\nended\n\n * (): void\n * Track 断开\n \n track.on('ended', function() {\n console.log('ended')\n })\n\naudioFrame\n\n * (buffer: AudioBuffer): void\n * 原始音频数据返回\n \n track.on('audioFrame', function(buffer) {\n for (let channel = 0; channel < buffer.numberOfChannels; channel += 1) {\n console.log('audioFrame', channel, currentChannelData);\n } \n })\n\nParameters\n\n * buffer: AudioBuffer \n\n本地音频轨道的音频帧\n\nCore Methods\n\ncreateAudioMusicMixer\n\n * public createAudioMusicMixer(source: [QNAudioSourceType](https://developer.qiniu.com/rtc/9064/WebQNMicrophoneAudioTrack/9075)): [QNAudioMusicMixer](https://developer.qiniu.com/rtc/9064/WebQNMicrophoneAudioTrack/12353/WebQNAudioMusicMixer)\n * 创建背景音乐混音控制器对象\n\n> 仅 4.2.0 以上版本支持\n\nParameters\n\n * source: 
[QNAudioSourceType](https://developer.qiniu.com/rtc/9064/WebQNMicrophoneAudioTrack/9075)\n\n背景音乐混音仅支持同时混合一路背景音乐,若需要切换背景音乐,重新调用本接口创建\n[QNAudioMusicMixer](https://developer.qiniu.com/rtc/9064/WebQNMicrophoneAudioTrack/12353/WebQNAudioMusicMixer)\n即可\n\nReturns\n[QNAudioMusicMixer](https://developer.qiniu.com/rtc/9064/WebQNMicrophoneAudioTrack/12353/WebQNAudioMusicMixer)\n\ncreateAudioEffectMixer\n\n * public createAudioEffectMixer(}): [QNAudioEffectMixer](https://developer.qiniu.com/rtc/9064/WebQNMicrophoneAudioTrack/12350/WebQNAudioEffectMixer)\n * 创建音效混音控制器对象\n\n> 仅 4.2.0 以上版本支持\n\nReturns\n[QNAudioEffectMixer](https://developer.qiniu.com/rtc/9064/WebQNMicrophoneAudioTrack/12350/WebQNAudioEffectMixer)\n\nisEarMonitorEnabled\n\n * isEarMonitorEnabled(): boolean\n * 获取当前是否开启、关闭返听\n\n> 仅 4.2.0 以上版本支持\n\nReturns boolean\n\nsetEarMonitorEnabled\n\n * setEarMonitorEnabled(enabled: boolean): void\n * 开启、关闭返听\n\n> 仅 4.2.0 以上版本支持\n>\n> 为达到最佳体验,建议佩戴耳机的场景下,使用该功能\n\nParameters\n\n * enabled: boolean \n\n是否开启返听,默认 false\n\nReturns void\n\nsetPlayingVolume\n\n * setPlayingVolume(playingVolume: number): void\n * 设置混音、返听场景下,本地播放的音量大小\n\n> 仅 4.2.0 以上版本支持\n\nParameters\n\n * playingVolume: number \n\n音量,默认值为 1.0\n\nReturns void\n\nplay\n\n * play(container: HTMLElement, config?: [QNPlayerConfig](https://developer.qiniu.com/rtc/9064/WebQNMicrophoneAudioTrack/10141/WebQNPlayerConfig)): Promise<void>\n * 播放当前 track\n\nParameters\n\n * container: HTMLElement \n\n挂载的 html 容器,SDK 会在该容器下创建 audio 或 video 元素来播放 track\n\n * Optional config: [QNPlayerConfig](https://developer.qiniu.com/rtc/9064/WebQNMicrophoneAudioTrack/10141/WebQNPlayerConfig)\n\n播放配置信息\n\nReturns void\n\nisAudio\n\n * isAudio(): boolean\n * 是否是音频\n\nReturns boolean\n\nisVideo\n\n * isVideo(): boolean\n * 是否是视频\n\nReturns boolean\n\nisMuted\n\n * isMuted(): boolean\n * 是否 mute(静音)\n\nReturns boolean\n\ngetMediaStreamTrack\n\n * getMediaStreamTrack(): MediaStreamTrack | undefined\n * 获取浏览器原生的 MediaStreamTrack 对象\n\nReturns MediaStreamTrack | undefined\n\nsetMuted\n\n * setMuted(isMuted: boolean): void\n * 设置是否静默该 track,静默后,SDK 将会发送静默帧数据\n\n当静默状态改变后,远端订阅 track 将会收到 [mute-state-\nchanged](https://developer.qiniu.com/rtc/9064/WebQNMicrophoneAudioTrack/9082)\n事件。\n\nParameters\n\n * isMuted: boolean \n\n是否 mute\n\nReturns void\n\ndestroy\n\n * destroy(): void\n * 销毁当前流\n\nReturns void\n\nsetVolume\n\n * setVolume(volume: number): void\n * 设置音量\n\n> 音量值在 0 - 1 之间为软件缩小;1 为原始音量;大于 1 且小于 100 为软件放大。\n>\n> 该设置可以用于适度对采集音量做放大或者缩小。\n\nParameters\n\n * volume: number \n\n音量值范围为 0 - 100,默认值为 1\n\nReturns void\n\nsetPlaybackDevice\n\n * setPlaybackDevice(deviceId: string): Promise<void>\n * 设置音频输出设备\n\nParameters\n\n * deviceId: string \n\n设备 deviceId,可以从\n[QNRTC.getPlaybackDevices](https://developer.qiniu.com/rtc/9064/WebQNMicrophoneAudioTrack/9070)\n接口中获取\n\nReturns Promise<void>\n\ngetVolumeLevel\n\n * getVolumeLevel(): string | undefined\n * 获取当前音量\n\n> 可通过定时调用该方法获取音量值的方式来判断当前用户是否在讲话。\n>\n> 在安静的环境下,获取到大于 0 的数值为预期现象,您可根据您的需求自行决定判断的阈值。\n\nReturns string | undefined\n\ngetCurrentTimeDomainData\n\n * getCurrentTimeDomainData(): Uint8Array | undefined\n * 获取当前时域数据\n\nReturns Uint8Array | undefined\n\ngetCurrentFrequencyDomainData\n\n * getCurrentFrequencyDomainData(): Uint8Array | undefined\n * 获取当前频域数据\n\nReturns Uint8Array | undefined\n\ngetStats\n\n * getStats(): [QNLocalAudioTrackStats](https://developer.qiniu.com/rtc/9064/WebQNMicrophoneAudioTrack/9083/WebQNLocalAudioTrackStats)\n * 
获取流状态数据\n\nReturns\n[QNLocalAudioTrackStats](https://developer.qiniu.com/rtc/9064/WebQNMicrophoneAudioTrack/9083/WebQNLocalAudioTrackStats)\n\ncreateAudioMixer\n\n * createAudioMixer(source: [QNAudioSourceType](https://developer.qiniu.com/rtc/9064/WebQNMicrophoneAudioTrack/9075)): [QNAudioMixer](https://developer.qiniu.com/rtc/9064/WebQNMicrophoneAudioTrack/9066/WebQNAudioMixer)\n * 创建混音管理器\n\n> deprecated\n\nParameters\n\n * source: [QNAudioSourceType](https://developer.qiniu.com/rtc/9064/WebQNMicrophoneAudioTrack/9075)\n\n音乐的源文件,支持本地文件对象或者在线音乐地址\n\nReturns\n[QNAudioMixer](https://developer.qiniu.com/rtc/9064/WebQNMicrophoneAudioTrack/9066/WebQNAudioMixer)\n\n以上内容是否对您有帮助?\n\n"
}
] | qiniu developer | 1 |
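补充示例(示意,非官方文档内容):针对上一条 QNMicrophoneAudioTrack 记录,下面演示返听与音量相关接口的组合使用(返听相关接口仅 4.2.0 以上版本支持)。micTrack 的创建方式此处省略,属于假设。

    // 补充示例(示意):开启返听并调整音量
    function setupMicMonitor(micTrack) {
      if (!micTrack.isEarMonitorEnabled()) {
        micTrack.setEarMonitorEnabled(true)  // 建议在佩戴耳机的场景下开启返听
      }
      micTrack.setPlayingVolume(0.5)         // 混音、返听场景下本地播放的音量,默认 1.0
      micTrack.setVolume(2)                  // 采集音量软件放大(1 为原始音量)

      // 简单的"是否在讲话"检测:定时读取音量值,与业务自定阈值比较
      return setInterval(function () {
        console.log('当前音量:', micTrack.getVolumeLevel())
      }, 1000)
    }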
[
{
"from": "human",
"value": "如何处理七牛云的实时音视频QNBufferSourceAudioTrack"
},
{
"from": "gpt",
"value": "[实时音视频](https://developer.qiniu.com/rtc) > API 文档 > Web\n>QNBufferSourceAudioTrack\n\n# QNBufferSourceAudioTrack\n\n最近更新时间: 2022-02-22 15:14:23\n\n外部音频导入\n\nHierarchy\n\n * EventEmitter \n * [QNTrack](https://developer.qiniu.com/rtc/9069/WebQNBufferSourceAudioTrack/9076/WebQNTrack)\n * [QNLocalTrack](https://developer.qiniu.com/rtc/9069/WebQNBufferSourceAudioTrack/9085/WebQNLocalTrack)\n * [QNLocalVideoTrack](https://developer.qiniu.com/rtc/9069/WebQNBufferSourceAudioTrack/9061/WebQNLocalVideoTrack)\n * [QNCameraVideoTrack](https://developer.qiniu.com/rtc/9069/WebQNBufferSourceAudioTrack/9068/WebQNCameraVideoTrack)\n * [QNScreenVideoTrack](https://developer.qiniu.com/rtc/9069/WebQNBufferSourceAudioTrack/9092/WebQNScreenVideoTrack)\n * [QNCanvasVideoTrack](https://developer.qiniu.com/rtc/9069/WebQNBufferSourceAudioTrack/10320/WebQNCanvasVideoTrack)\n * [QNCustomVideoTrack](https://developer.qiniu.com/rtc/9069/WebQNBufferSourceAudioTrack/9088/WebQNCustomVideoTrack)\n * [QNLocalAudioTrack](https://developer.qiniu.com/rtc/9069/WebQNBufferSourceAudioTrack/9089/WebQNLocalAudioTrack)\n * [QNMicrophoneAudioTrack](https://developer.qiniu.com/rtc/9069/WebQNBufferSourceAudioTrack/9064/WebQNMicrophoneAudioTrack)\n * QNBufferSourceAudioTrack \n * [QNCustomAudioTrack](https://developer.qiniu.com/rtc/9069/WebQNBufferSourceAudioTrack/9091/WebQNCustomAudioTrack)\n * [QNRemoteTrack](https://developer.qiniu.com/rtc/9069/WebQNBufferSourceAudioTrack/9082/WebQNRemoteTrack)\n * [QNRemoteVideoTrack](https://developer.qiniu.com/rtc/9069/WebQNBufferSourceAudioTrack/9060/WebQNRemoteVideoTrack)\n * [QNRemoteAudioTrack](https://developer.qiniu.com/rtc/9069/WebQNBufferSourceAudioTrack/9087/WebQNRemoteAudioTrack)\n\nIndex\n\nProperties\n\n * source\n\nEvent\n\n * state-changed\n\nCore Methods\n\n * start\n * stop\n * pause\n * resume\n * getDuration\n * getCurrentPosition\n * seekTo\n\nProperties\n\nsource\n\n * source: [QNAudioSourceType](https://developer.qiniu.com/rtc/9069/WebQNBufferSourceAudioTrack/9075)\n * 音频数据源类型\n\nEvent\n\nstate-changed\n\n * (state: [QNAudioSourceState](https://developer.qiniu.com/rtc/9069/WebQNBufferSourceAudioTrack/9047/WebQNAudioSourceState)): void\n * 音频源处理状态变化\n \n track.on('state-changed', function(state) {\n console.log('state-changed', state)\n})\n\nParameters\n\n * state: [QNAudioSourceState](https://developer.qiniu.com/rtc/9069/WebQNBufferSourceAudioTrack/9047/WebQNAudioSourceState)\n\n音频源处理状态\n\nCore Methods\n\nstart\n\n * start(loop?: boolean): Promise<void>\n * 开始处理\n\nReturns void\n\nstop\n\n * stop(): void\n * 停止处理\n\nReturns void\n\npause\n\n * pause(): void\n * 暂停处理\n\nReturns void\n\nresume\n\n * resume(): void\n * 恢复处理\n\nReturns void\n\ngetDuration\n\n * getDuration(): number\n * 获取音频时长(秒)\n\nReturns number\n\ngetCurrentPosition\n\n * getCurrentPosition(): number\n * 获取当前位置(秒)\n\nReturns number\n\nseekTo\n\n * seekTo(time: number): void\n * 跳转到指定位置(秒)\n\nParameters\n\n * time: number \n\n处理位置(秒)\n\nReturns void\n\n以上内容是否对您有帮助?\n\n"
}
] | qiniu developer | 1 |
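补充示例(示意,非官方文档内容):下面把上一条 QNBufferSourceAudioTrack 记录中的播放控制接口串联起来。bufferTrack 假定已通过对应创建接口获得;start() 的 loop 参数含义文档未展开,这里按"循环播放"理解,属于假设。

    // 补充示例(示意):外部音频导入的播放控制
    async function playBufferSource(bufferTrack) {
      bufferTrack.on('state-changed', function (state) {
        console.log('音频源处理状态变化:', state)
      })

      await bufferTrack.start(true)          // loop 参数此处假定表示循环播放,具体以文档为准
      console.log('总时长(秒):', bufferTrack.getDuration())

      bufferTrack.pause()
      bufferTrack.seekTo(30)                 // 跳转到第 30 秒
      bufferTrack.resume()
      console.log('当前位置(秒):', bufferTrack.getCurrentPosition())

      // 不再需要时停止处理:bufferTrack.stop()
    }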
[
{
"from": "human",
"value": "如何处理七牛云的实时音视频QNCustomAudioTrack"
},
{
"from": "gpt",
"value": "[实时音视频](https://developer.qiniu.com/rtc) > API 文档 > Web >QNCustomAudioTrack\n\n# QNCustomAudioTrack\n\n最近更新时间: 2022-02-22 15:14:13\n\n自定义音频 Track\n\nHierarchy\n\n * EventEmitter \n * [QNTrack](https://developer.qiniu.com/rtc/9091/WebQNCustomAudioTrack/WebQNTrack)\n * [QNLocalTrack](https://developer.qiniu.com/rtc/9091/WebQNCustomAudioTrack/WebQNLocalTrack)\n * [QNLocalVideoTrack](https://developer.qiniu.com/rtc/9091/WebQNCustomAudioTrack/WebQNLocalVideoTrack)\n * [QNCameraVideoTrack](https://developer.qiniu.com/rtc/9091/WebQNCustomAudioTrack/WebNCameraVideoTrack)\n * [QNScreenVideoTrack](https://developer.qiniu.com/rtc/9091/WebQNCustomAudioTrack/WebQNScreenVideoTrack)\n * [QNCanvasVideoTrack](https://developer.qiniu.com/rtc/9091/WebQNCustomAudioTrack/10320/WebQNCanvasVideoTrack)\n * [QNCustomVideoTrack](https://developer.qiniu.com/rtc/9091/WebQNCustomAudioTrack/WebQNCustomVideoTrack)\n * [QNLocalAudioTrack](https://developer.qiniu.com/rtc/9091/WebQNCustomAudioTrack/WebQNLocalAudioTrack)\n * [QNMicrophoneAudioTrack](https://developer.qiniu.com/rtc/9091/WebQNCustomAudioTrack/WebQNMicrophoneAudioTrack)\n * [QNBufferSourceAudioTrack](https://developer.qiniu.com/rtc/9091/WebQNCustomAudioTrack/WebQNBufferSourceAudioTrack)\n * QNCustomAudioTrack \n * [QNRemoteTrack](https://developer.qiniu.com/rtc/9091/WebQNCustomAudioTrack/WebQNRemoteTrack)\n * [QNRemoteVideoTrack](https://developer.qiniu.com/rtc/9091/WebQNCustomAudioTrack/WebQNRemoteVideoTrack)\n * [QNRemoteAudioTrack](https://developer.qiniu.com/rtc/9091/WebQNCustomAudioTrack/WebQNRemoteAudioTrack)\n\n以上内容是否对您有帮助?\n\n"
}
] | qiniu developer | 1 |
[
{
"from": "human",
"value": "如何处理七牛云的实时音视频QNRemoteTrack"
},
{
"from": "gpt",
"value": "[实时音视频](https://developer.qiniu.com/rtc) > API 文档 > Web >QNRemoteTrack\n\n# QNRemoteTrack\n\n最近更新时间: 2022-07-14 14:49:17\n\n远端 Track\n\nHierarchy\n\n * EventEmitter \n * [QNTrack](https://developer.qiniu.com/rtc/9082/WebQNRemoteTrack/9076/WebQNTrack)\n * [QNLocalTrack](https://developer.qiniu.com/rtc/9082/WebQNRemoteTrack/9085/WebQNLocalTrack)\n * [QNLocalVideoTrack](https://developer.qiniu.com/rtc/9082/WebQNRemoteTrack/9061/WebQNLocalVideoTrack)\n * [QNCameraVideoTrack](https://developer.qiniu.com/rtc/9082/WebQNRemoteTrack/9068/WebQNCameraVideoTrack)\n * [QNScreenVideoTrack](https://developer.qiniu.com/rtc/9082/WebQNRemoteTrack/9092/WebQNScreenVideoTrack)\n * [QNCanvasVideoTrack](https://developer.qiniu.com/rtc/9082/WebQNRemoteTrack/10320/WebQNCanvasVideoTrack)\n * [QNCustomVideoTrack](https://developer.qiniu.com/rtc/9082/WebQNRemoteTrack/9088/WebQNCustomVideoTrack)\n * [QNLocalAudioTrack](https://developer.qiniu.com/rtc/9082/WebQNRemoteTrack/9089/WebQNLocalAudioTrack)\n * [QNMicrophoneAudioTrack](https://developer.qiniu.com/rtc/9082/WebQNRemoteTrack/9064/WebQNMicrophoneAudioTrack)\n * [QNBufferSourceAudioTrack](https://developer.qiniu.com/rtc/9082/WebQNRemoteTrack/9069/WebQNBufferSourceAudioTrack)\n * [QNCustomAudioTrack](https://developer.qiniu.com/rtc/9082/WebQNRemoteTrack/9091/WebQNCustomAudioTrack)\n * QNRemoteTrack \n * [QNRemoteVideoTrack](https://developer.qiniu.com/rtc/9082/WebQNRemoteTrack/9060/WebQNRemoteVideoTrack)\n * [QNRemoteAudioTrack](https://developer.qiniu.com/rtc/9082/WebQNRemoteTrack/9087/WebQNRemoteAudioTrack)\n\nIndex\n\nProperties\n\n * trackID\n * userID\n * tag\n * mediaElement\n\nEvent\n\n * mute-state-changed\n\nCore Methods\n\n * play\n * isAudio\n * isVideo\n * isMuted\n * getMediaStreamTrack\n * isSubscribed\n\nProperties\n\ntrackID\n\n * trackID?: string\n * Track ID\n\n对于本地 Track,当且仅当 Track 发布后才可获取到有效的 TrackID;对于远端 Track,可随时获取到有效的 TrackID。\n\nuserID\n\n * userID?: string\n * 用户 ID\n\ntag\n\n * tag?: string\n * 自定义 tag 文本\n\nmediaElement\n\n * mediaElement: HTMLVideoElement | HTMLAudioElement | undefined\n * 当前播放元素,当调用 play 方法后,可以通过该属性获取播放的媒体元素\n\nEvent\n\nmute-state-changed\n\n * (isMuted: boolean): void\n * mute 状态变化\n\n> 音视频 Track 静默状态改变时触发此事件。\n\nParameters\n\n * isMuted: boolean \n\n是否 muted\n\nCore Methods\n\nplay\n\n * public play(container: HTMLElement, config?: QNPlayerConfig): Promise<void>\n * 播放当前 track\n\nParameters\n\n * container: HTMLElement \n\n挂载的 html 容器,SDK 会在该容器下创建 audio 或 video 元素来播放 track\n\n * Optional config: [QNPlayerConfig](https://developer.qiniu.com/rtc/9082/WebQNRemoteTrack/10141/WebQNPlayerConfig)\n\n播放配置信息\n\nReturns void\n\nisAudio\n\n * isAudio(): boolean\n * 是否是音频\n\nReturns boolean\n\nisVideo\n\n * isVideo(): boolean\n * 是否是视频\n\nReturns boolean\n\nisMuted\n\n * isMuted(): boolean\n * 是否 mute(静音)\n\nReturns boolean\n\ngetMediaStreamTrack\n\n * getMediaStreamTrack(): MediaStreamTrack | undefined\n * 获取浏览器原生的 MediaStreamTrack 对象\n\nReturns MediaStreamTrack | undefined\n\nisSubscribed\n\n * isSubscribed(): boolean\n * 是否已订阅\n\nReturns boolean\n\n以上内容是否对您有帮助?\n\n"
}
] | qiniu developer | 1 |
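补充示例(示意,非官方文档内容):上一条 QNRemoteTrack 记录描述了远端 Track 的通用能力,下面演示拿到远端 Track 后的典型处理:确认订阅状态、播放、监听远端静默状态变化。remoteTrack 通常来自订阅回调,其获取方式与页面容器 remote-player 均为假设。

    // 补充示例(示意):处理一个已订阅的远端 Track
    function handleRemoteTrack(remoteTrack) {
      if (!remoteTrack.isSubscribed()) {
        return
      }
      const container = document.getElementById('remote-player') // 假设页面上存在该容器
      remoteTrack.play(container)

      remoteTrack.on('mute-state-changed', function (isMuted) {
        // 远端用户调用 setMuted 后触发,可据此更新 UI
        console.log('远端 Track', remoteTrack.trackID, 'muted:', isMuted)
      })
    }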
[
{
"from": "human",
"value": "如何处理七牛云的实时音视频QNRemoteVideoTrack"
},
{
"from": "gpt",
"value": "[实时音视频](https://developer.qiniu.com/rtc) > API 文档 > Web >QNRemoteVideoTrack\n\n# QNRemoteVideoTrack\n\n最近更新时间: 2022-07-14 16:09:54\n\n远端视频 Track\n\nHierarchy\n\n * EventEmitter \n * [QNTrack](https://developer.qiniu.com/rtc/9060/WebQNRemoteVideoTrack/9076/WebQNTrack)\n * [QNLocalTrack](https://developer.qiniu.com/rtc/9060/WebQNRemoteVideoTrack/9085/WebQNLocalTrack)\n * [QNLocalVideoTrack](https://developer.qiniu.com/rtc/9060/WebQNRemoteVideoTrack/9061/WebQNLocalVideoTrack)\n * [QNCameraVideoTrack](https://developer.qiniu.com/rtc/9060/WebQNRemoteVideoTrack/9068/WebQNCameraVideoTrack)\n * [QNScreenVideoTrack](https://developer.qiniu.com/rtc/9060/WebQNRemoteVideoTrack/9092/WebQNScreenVideoTrack)\n * [QNCanvasVideoTrack](https://developer.qiniu.com/rtc/9060/WebQNRemoteVideoTrack/10320/WebQNCanvasVideoTrack)\n * [QNCustomVideoTrack](https://developer.qiniu.com/rtc/9060/WebQNRemoteVideoTrack/9088/WebQNCustomVideoTrack)\n * [QNLocalAudioTrack](https://developer.qiniu.com/rtc/9060/WebQNRemoteVideoTrack/9089/WebQNLocalAudioTrack)\n * [QNMicrophoneAudioTrack](https://developer.qiniu.com/rtc/9060/WebQNRemoteVideoTrack/9064/WebQNMicrophoneAudioTrack)\n * [QNBufferSourceAudioTrack](https://developer.qiniu.com/rtc/9060/WebQNRemoteVideoTrack/9069/WebQNBufferSourceAudioTrack)\n * [QNCustomAudioTrack](https://developer.qiniu.com/rtc/9060/WebQNRemoteVideoTrack/9091/WebQNCustomAudioTrack)\n * [QNRemoteTrack](https://developer.qiniu.com/rtc/9060/WebQNRemoteVideoTrack/9082/WebQNRemoteTrack)\n * QNRemoteVideoTrack \n * [QNRemoteAudioTrack](https://developer.qiniu.com/rtc/9060/WebQNRemoteVideoTrack/9087/WebQNRemoteAudioTrack)\n\nIndex\n\nProperties\n\n * trackID\n * userID\n * tag\n * mediaElement\n * isLowStreamEnabled\n * isMultiProfileEnabled\n\nEvent\n\n * mute-state-changed\n * low-stream-state-changed\n * profile-changed\n\nCore Methods\n\n * play\n * isAudio\n * isVideo\n * isMuted\n * getMediaStreamTrack\n * isSubscribed\n * setLowStream\n * setProfile\n * getCurrentFrameData\n * getStats\n\nProperties\n\ntrackID\n\n * trackID?: string\n * Track ID\n\n对于本地 Track,当且仅当 Track 发布后才可获取到有效的 TrackID;对于远端 Track,可随时获取到有效的 TrackID。\n\nuserID\n\n * userID?: string\n * 用户 ID\n\ntag\n\n * tag?: string\n * 自定义 tag 文本\n\nmediaElement\n\n * mediaElement: HTMLVideoElement | HTMLAudioElement | undefined\n * 当前播放元素,当调用 play 方法后,可以通过该属性获取播放的媒体元素\n\nisLowStreamEnabled\n\n * isLowStreamEnabled: boolean\n * 是否开启了大小流\n\nisMultiProfileEnabled\n\n * isMultiProfileEnabled: boolean\n * 是否开启了大小流\n\n> 废弃,请使用\n> [isLowStreamEnabled](https://developer.qiniu.com/rtc/9060/WebQNRemoteVideoTrack/9060)\n> 替代。\n\nEvent\n\nmute-state-changed\n\n * (isMuted: boolean): void\n * mute 状态变化\n\n> 音视频 Track 静默状态改变时触发此事件。\n\nParameters\n\n * isMuted: boolean \n\n是否 muted\n\nlow-stream-state-changed\n\n * (state: boolean): void\n * 大小流状态变化。\n\n值为 true 表示当前使用的是小流,反之为大流。\n\n \n track.on('low-stream-state-changed',function(state){\n console.log('low-stream-state-changed', state)\n })\n\nParameters\n\n * state: boolean \n\n大小流状态\n\nprofile-changed\n\n * (profile: [QNTrackProfile](https://developer.qiniu.com/rtc/9060/WebQNRemoteVideoTrack/9050/WebQNTrackProfile)): void\n * 大小流状态变化\n \n track.on('profile-changed', function(profile){\n console.log('profile-changed', profile)\n })\n\n> 废弃,请使用 [low-stream-state-\n> changed](https://developer.qiniu.com/rtc/9060/WebQNRemoteVideoTrack/9060)\n> 替代。\n\nParameters\n\n * profile: [QNTrackProfile](https://developer.qiniu.com/rtc/9060/WebQNRemoteVideoTrack/9050/WebQNTrackProfile)\n\n大小流状态\n\nCore 
Methods\n\nplay\n\n * public play(container: HTMLElement, config?: QNPlayerConfig): Promise<void>\n * 播放当前 track\n\nParameters\n\n * container: HTMLElement \n\n挂载的 html 容器,SDK 会在该容器下创建 audio 或 video 元素来播放 track\n\n * Optional config: [QNPlayerConfig](https://developer.qiniu.com/rtc/9060/WebQNRemoteVideoTrack/10141/WebQNPlayerConfig)\n\n播放配置信息\n\nReturns void\n\nisAudio\n\n * isAudio(): boolean\n * 是否是音频\n\nReturns boolean\n\nisVideo\n\n * isVideo(): boolean\n * 是否是视频\n\nReturns boolean\n\nisMuted\n\n * isMuted(): boolean\n * 是否 mute(静音)\n\nReturns boolean\n\ngetMediaStreamTrack\n\n * getMediaStreamTrack(): MediaStreamTrack | undefined\n * 获取浏览器原生的 MediaStreamTrack 对象\n\nReturns MediaStreamTrack | undefined\n\nisSubscribed\n\n * isSubscribed(): boolean\n * 是否已订阅\n\nReturns boolean\n\nsetLowStream\n\n * setLowStream(use: boolean): void\n * 是否订阅远端发布的小流\n\n> 可以先通过\n> [isLowStreamEnabled](https://developer.qiniu.com/rtc/9060/WebQNRemoteVideoTrack/9060)\n> 判断远端是否开启大小流,再进行设置。\n>\n>\n> 关于大小流详细内容可参阅[视频大小流使用指南](https://developer.qiniu.com/rtc/9060/WebQNRemoteVideoTrack/10566/video-\n> size-flow-web-new)。\n\nParameters\n\n * use: boolean \n\ntrue 表示使用小流,false 表示使用大流。\n\nReturns void\n\nsetProfile\n\n * setProfile(profile: [QNTrackProfile](https://developer.qiniu.com/rtc/9060/WebQNRemoteVideoTrack/9050/WebQNTrackProfile)): void\n * 选择订阅视频的大小流\n\n> 废弃,请使用\n> [setLowStream](https://developer.qiniu.com/rtc/9060/WebQNRemoteVideoTrack/9060)\n> 替代。\n\nParameters\n\n * profile: [QNTrackProfile](https://developer.qiniu.com/rtc/9060/WebQNRemoteVideoTrack/9050/WebQNTrackProfile)\n\n指定大小流\n\nReturns void\n\ngetCurrentFrameData\n\n * getCurrentFrameData(): string\n * 截取当前视频流数据\n\nReturns string\n\ngetStats\n\n * getStats(): [QNRemoteVideoTrackStats](https://developer.qiniu.com/rtc/9060/WebQNRemoteVideoTrack/9380/WebQNRemoteVideoTrackStats)\n * 获取流状态数据\n\nReturns\n[QNRemoteVideoTrackStats](https://developer.qiniu.com/rtc/9060/WebQNRemoteVideoTrack/9380/WebQNRemoteVideoTrackStats)\n\n以上内容是否对您有帮助?\n\n"
}
] | qiniu developer | 1 |
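补充示例(示意,非官方文档内容):针对上一条 QNRemoteVideoTrack 记录中的大小流接口,下面演示在远端开启了大小流的前提下切换到小流,并监听切换结果。

    // 补充示例(示意):按需切换订阅远端小流以节省下行带宽
    function preferLowStream(remoteVideoTrack) {
      remoteVideoTrack.on('low-stream-state-changed', function (state) {
        console.log('当前是否为小流:', state)  // true 表示正在使用小流
      })

      if (remoteVideoTrack.isLowStreamEnabled) {  // 远端开启了大小流才有切换意义
        remoteVideoTrack.setLowStream(true)       // true 使用小流,false 切回大流
      }
    }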
[
{
"from": "human",
"value": "如何处理七牛云的实时音视频QNRemoteAudioTrack"
},
{
"from": "gpt",
"value": "[实时音视频](https://developer.qiniu.com/rtc) > API 文档 > Web >QNRemoteAudioTrack\n\n# QNRemoteAudioTrack\n\n最近更新时间: 2022-07-14 14:50:22\n\n远端音频 Track\n\nHierarchy\n\n * EventEmitter \n * [QNTrack](https://developer.qiniu.com/rtc/9087/WebQNRemoteAudioTrack/9076/WebQNTrack)\n * [QNLocalTrack](https://developer.qiniu.com/rtc/9087/WebQNRemoteAudioTrack/9085/WebQNLocalTrack)\n * [QNLocalVideoTrack](https://developer.qiniu.com/rtc/9087/WebQNRemoteAudioTrack/9061/WebQNLocalVideoTrack)\n * [QNCameraVideoTrack](https://developer.qiniu.com/rtc/9087/WebQNRemoteAudioTrack/9068/WebQNCameraVideoTrack)\n * [QNScreenVideoTrack](https://developer.qiniu.com/rtc/9087/WebQNRemoteAudioTrack/9092/WebQNScreenVideoTrack)\n * [QNCanvasVideoTrack](https://developer.qiniu.com/rtc/9087/WebQNRemoteAudioTrack/10320/WebQNCanvasVideoTrack)\n * [QNCustomVideoTrack](https://developer.qiniu.com/rtc/9087/WebQNRemoteAudioTrack/9088/WebQNCustomVideoTrack)\n * [QNLocalAudioTrack](https://developer.qiniu.com/rtc/9087/WebQNRemoteAudioTrack/9089/WebQNLocalAudioTrack)\n * [QNMicrophoneAudioTrack](https://developer.qiniu.com/rtc/9087/WebQNRemoteAudioTrack/9064/WebQNMicrophoneAudioTrack)\n * [QNBufferSourceAudioTrack](https://developer.qiniu.com/rtc/9087/WebQNRemoteAudioTrack/9069/WebQNBufferSourceAudioTrack)\n * [QNCustomAudioTrack](https://developer.qiniu.com/rtc/9087/WebQNRemoteAudioTrack/9091/WebQNCustomAudioTrack)\n * [QNRemoteTrack](https://developer.qiniu.com/rtc/9087/WebQNRemoteAudioTrack/9082/WebQNRemoteTrack)\n * [QNRemoteVideoTrack](https://developer.qiniu.com/rtc/9087/WebQNRemoteAudioTrack/9060/WebQNRemoteVideoTrack)\n * QNRemoteAudioTrack \n\nIndex\n\nProperties\n\n * trackID\n * userID\n * tag\n * mediaElement\n\nEvent\n\n * mute-state-changed\n * audioFrame\n\nCore Methods\n\n * play\n * isAudio\n * isVideo\n * isMuted\n * getMediaStreamTrack\n * isSubscribed\n * setVolume\n * setPlaybackDevice\n * getVolumeLevel\n * getCurrentTimeDomainData\n * getCurrentFrequencyDomainData\n * getStats\n\nProperties\n\ntrackID\n\n * trackID?: string\n * Track ID\n\n对于本地 Track,当且仅当 Track 发布后才可获取到有效的 TrackID;对于远端 Track,可随时获取到有效的 TrackID。\n\nuserID\n\n * userID?: string\n * 用户 ID\n\ntag\n\n * tag?: string\n * 自定义 tag 文本\n\nmediaElement\n\n * mediaElement: HTMLVideoElement | HTMLAudioElement | undefined\n * 当前播放元素,当调用 play 方法后,可以通过该属性获取播放的媒体元素\n\nEvent\n\nmute-state-changed\n\n * (isMuted: boolean): void\n * mute 状态变化\n\n> 音视频 Track 静默状态改变时触发此事件。\n\nParameters\n\n * isMuted: boolean \n\n是否 muted\n\naudioFrame\n\n * (buffer: [AudioBuffer](https://developer.mozilla.org/en-US/docs/Web/API/AudioBuffer)): void\n * 原始音频数据返回\n \n track.on('audioFrame', function(buffer) {\n console.log('audioFrame', buffer);\n })\n\nParameters\n\n * buffer: [AudioBuffer](https://developer.mozilla.org/en-US/docs/Web/API/AudioBuffer)\n\n远端音频轨道的音频帧\n\nCore Methods\n\nplay\n\n * public play(container: HTMLElement, config?: QNPlayerConfig): Promise<void>\n * 播放当前 track\n\nParameters\n\n * container: HTMLElement \n\n挂载的 html 容器,SDK 会在该容器下创建 audio 或 video 元素来播放 track\n\n * Optional config: [QNPlayerConfig](https://developer.qiniu.com/rtc/9087/WebQNRemoteAudioTrack/10141/WebQNPlayerConfig)\n\n播放配置信息\n\nReturns void\n\nisAudio\n\n * isAudio(): boolean\n * 是否是音频\n\nReturns boolean\n\nisVideo\n\n * isVideo(): boolean\n * 是否是视频\n\nReturns boolean\n\nisMuted\n\n * isMuted(): boolean\n * 是否 mute(静音)\n\nReturns boolean\n\ngetMediaStreamTrack\n\n * getMediaStreamTrack(): MediaStreamTrack | undefined\n * 获取浏览器原生的 MediaStreamTrack 对象\n\nReturns MediaStreamTrack | 
undefined\n\nisSubscribed\n\n * isSubscribed(): boolean\n * 是否已订阅\n\nReturns boolean\n\nsetVolume\n\n * setVolume(volume: number): void\n * 设置音量\n\n> 音量值在 0 - 1 之间为软件缩小;1 为原始音量;大于 1 且小于 100 为软件放大。\n>\n> 该设置可以用于适度对播放音量做放大或者缩小。\n\nParameters\n\n * volume: number \n\n音量值,范围为 0 - 100\n\nReturns void\n\nsetPlaybackDevice\n\n * setPlaybackDevice(deviceId: string): Promise<void>\n * 设置音频输出设备\n\nParameters\n\n * deviceId: string \n\n设备 deviceId,可以从\n[QNRTC.getPlaybackDevices](https://developer.qiniu.com/rtc/9087/WebQNRemoteAudioTrack/9070)\n接口中获取\n\nReturns Promise<void>\n\ngetVolumeLevel\n\n * getVolumeLevel(): string | undefined\n * 获取当前音量\n\n> 可通过定时调用该方法获取音量值的方式来判断当前用户是否在讲话。\n>\n> 在安静的环境下,获取到大于 0 的数值为预期现象,您可根据您的需求自行决定判断的阈值。\n\nReturns string | undefined\n\ngetCurrentTimeDomainData\n\n * getCurrentTimeDomainData(): Uint8Array | undefined\n * 获取当前时域数据\n\nReturns Uint8Array | undefined\n\ngetCurrentFrequencyDomainData\n\n * getCurrentFrequencyDomainData(): Uint8Array | undefined\n * 获取当前频域数据\n\nReturns Uint8Array | undefined\n\ngetStats\n\n * getStats(): [QNRemoteAudioTrackStats](https://developer.qiniu.com/rtc/9087/WebQNRemoteAudioTrack/9093/WebQNRemoteAudioTrackStats)\n * 获取当前音频轨道的统计数据\n\nReturns\n[QNRemoteAudioTrackStats](https://developer.qiniu.com/rtc/9087/WebQNRemoteAudioTrack/9093/WebQNRemoteAudioTrackStats)\n\n以上内容是否对您有帮助?\n\n"
}
] | qiniu developer | 1 |
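下面补充一个基于上文 QNRemoteAudioTrack 接口的最小使用示意(非官方示例,仅供参考):订阅远端音频后播放、调节播放音量并定时读取音量值。假设 SDK 通过 npm 包 qnweb-rtc 引入,`user-published` 事件名与 `client.subscribe` 返回值的结构请以实际 SDK 文档为准。

```typescript
import QNRTC from "qnweb-rtc";

const client = QNRTC.createClient();

// 远端用户发布后订阅其音频 Track(subscribe 返回值的解构方式以实际 SDK 为准)
client.on("user-published", async (userID: string, tracks: any[]) => {
  const { audioTracks } = await client.subscribe(tracks);
  for (const remoteAudioTrack of audioTracks) {
    // 播放到页面容器中,容器元素需已存在
    await remoteAudioTrack.play(document.getElementById("remote-audio")!);
    // 适度放大播放音量(1 为原始音量)
    remoteAudioTrack.setVolume(2);
    // 定时读取音量值,可用于“正在讲话”的 UI 提示
    setInterval(() => {
      console.log("volume level:", remoteAudioTrack.getVolumeLevel());
    }, 1000);
  }
});
```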
[
{
"from": "human",
"value": "如何处理七牛云的实时音视频QNBufferSourceAudioTrackConfig"
},
{
"from": "gpt",
"value": "[实时音视频](https://developer.qiniu.com/rtc) > API 文档 > Web\n>QNBufferSourceAudioTrackConfig\n\n# QNBufferSourceAudioTrackConfig\n\n最近更新时间: 2022-07-14 14:56:15\n\n外部音频导入配置\n\nIndex\n\nProperties\n\n * source\n * encoderConfig\n * tag\n\nProperties\n\nsource\n\n * source: [QNAudioSourceType](https://developer.qiniu.com/rtc/9057/WebQNBufferSourceAudioTrackConfig/9075)\n * 音频数据源\n\nencoderConfig\n\n * encoderConfig?: [QNAudioEncoderConfig](https://developer.qiniu.com/rtc/9057/WebQNBufferSourceAudioTrackConfig/10565/WebQNAudioEncoderConfig) | [QNAudioEncoderConfigPreset](https://developer.qiniu.com/rtc/9057/WebQNBufferSourceAudioTrackConfig/9075)\n * 音频编码配置\n\n> 默认预设值为\n> [STANDARD](https://developer.qiniu.com/rtc/9057/WebQNBufferSourceAudioTrackConfig/9075)。\n\ntag\n\n * tag?: string\n * 自定义 tag 文本\n\n以上内容是否对您有帮助?\n\n"
}
] | qiniu developer | 1 |
[
{
"from": "human",
"value": "如何处理七牛云的实时音视频QNCustomAudioTrackConfig"
},
{
"from": "gpt",
"value": "[实时音视频](https://developer.qiniu.com/rtc) > API 文档 > Web\n>QNCustomAudioTrackConfig\n\n# QNCustomAudioTrackConfig\n\n最近更新时间: 2022-02-22 15:14:19\n\n自定义音频采集配置\n\nIndex\n\nProperties\n\n * mediaStreamTrack\n * bitrate\n * tag\n\nProperties\n\nmediaStreamTrack\n\n * mediaStreamTrack: [MediaStreamTrack](https://developer.mozilla.org/en-US/docs/Web/API/MediaStreamTrack)\n * 音频采集源\n\nbitrate\n\n * bitrate?: number\n * 码率(Kbps)\n\n> 音频最低 16 kbps,最高 128 kbps,若设置值超出最高码率或低于最低码率,则 sdk 会自动调整为最低或最高支持的码率。\n\ntag\n\n * tag?: string\n * 自定义 tag 文本\n\n以上内容是否对您有帮助?\n\n"
}
] | qiniu developer | 1 |
[
{
"from": "human",
"value": "如何处理七牛云的实时音视频QNTranscodingLiveStreamingConfig"
},
{
"from": "gpt",
"value": "[实时音视频](https://developer.qiniu.com/rtc) > API 文档 > Web\n>QNTranscodingLiveStreamingConfig\n\n# QNTranscodingLiveStreamingConfig\n\n最近更新时间: 2022-07-14 15:25:26\n\n合流转推配置项\n\nHierarchy\n\n * [QNLiveStreamingConfig](https://developer.qiniu.com/rtc/9086/WebQNTranscodingLiveStreamingConfig/9287/WebQNLiveStreamingConfig)\n * [QNDirectLiveStreamingConfig](https://developer.qiniu.com/rtc/9086/WebQNTranscodingLiveStreamingConfig/9049/WebQNDirectLiveStreamingConfig)\n * QNTranscodingLiveStreamingConfig \n\nIndex\n\nProperties\n\n * width\n * height\n * videoFrameRate\n * bitrate\n * maxBitrate\n * minBitrate\n * transcodingTracks\n * renderMode\n * background\n * watermarks\n * holdLastFrame\n\nProperties\n\nwidth\n\n * width?: number\n * 设置合流转推画布的宽\n\nheight\n\n * height?: number\n * 设置合流转推画布的高\n\nvideoFrameRate\n\n * videoFrameRate?: number\n * 设置合流转推的视频帧率\n\nbitrate\n\n * bitrate?: number\n * 设置合流转推的码率,单位 kbps\n\nmaxBitrate\n\n * maxBitrate?: number\n * 设置合流转推的最大码率,单位 kbps\n\nminBitrate\n\n * minBitrate?: number\n * 设置合流转推的最小码率,单位 kbps\n\ntranscodingTracks\n\n * transcodingTracks?: [QNTranscodingLiveStreamingTrack](https://developer.qiniu.com/rtc/9086/WebQNTranscodingLiveStreamingConfig/9065/WebQNTranscodingLiveStreamingTrack)[]\n * 设置合流转推的合流布局\n\nrenderMode\n\n * renderMode?: [QNRenderMode](https://developer.qiniu.com/rtc/9086/WebQNTranscodingLiveStreamingConfig/9067/WebQNRenderMode)\n * 设置合流画面的填充方式\n\nbackground\n\n * background?: [QNTranscodingLiveStreamingImage](https://developer.qiniu.com/rtc/9086/WebQNTranscodingLiveStreamingConfig/9063/WebQNTranscodingLiveStreamingImage)\n * 设置合流转推的画布背景\n\nwatermarks\n\n * watermarks?: [QNTranscodingLiveStreamingImage](https://developer.qiniu.com/rtc/9086/WebQNTranscodingLiveStreamingConfig/9063/WebQNTranscodingLiveStreamingImage)[]\n * 设置合流转推的水印\n\nholdLastFrame\n\n * holdLastFrame?: boolean\n * 设置是否保持最后一帧\n\n以上内容是否对您有帮助?\n\n"
}
] | qiniu developer | 1 |
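下面是按上文字段组装合流转推配置并发起转推的示意(非官方示例):推流地址与各项码率取值均为演示用的假设值,发起转推的方法名以 QNRTCClient 的实际文档为准。

```typescript
import QNRTC from "qnweb-rtc";

const client = QNRTC.createClient();

async function startMergeStreaming() {
  // 合流画布与码率参数,字段含义见上文文档
  const transcodingConfig = {
    streamID: "merge-stream-1",
    // 通过 serialnum 提升该路流的优先级,便于与单路转推切换
    url: "rtmp://publish.example.com/live/merge-stream-1?serialnum=2",
    width: 1280,
    height: 720,
    videoFrameRate: 25,
    bitrate: 1500,    // kbps
    maxBitrate: 2000, // kbps
    minBitrate: 800,  // kbps
    holdLastFrame: true,
  };
  // 发起合流转推,方法名为假设,请以 QNRTCClient 文档为准
  await client.startTranscodingLiveStreaming(transcodingConfig);
}
```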
[
{
"from": "human",
"value": "如何处理七牛云的实时音视频QNCameraVideoTrackConfig"
},
{
"from": "gpt",
"value": "[实时音视频](https://developer.qiniu.com/rtc) > API 文档 > Web\n>QNCameraVideoTrackConfig\n\n# QNCameraVideoTrackConfig\n\n最近更新时间: 2022-07-14 14:58:20\n\n摄像头采集配置\n\nIndex\n\nProperties\n\n * encoderConfig\n * facingMode\n * optimizationMode\n * cameraId\n * tag\n\nProperties\n\nencoderConfig\n\n * encoderConfig?: [QNVideoEncoderConfig](https://developer.qiniu.com/rtc/9077/WebQNCameraVideoTrackConfig/10564/WebQNVideoEncoderConfig) | [QNVideoEncoderConfigPreset](https://developer.qiniu.com/rtc/9077/WebQNCameraVideoTrackConfig/9075)\n * 视频编码配置\n\n> 默认预设值为\n> [480p](https://developer.qiniu.com/rtc/9077/WebQNCameraVideoTrackConfig/9075)。\n\nfacingMode\n\n * facingMode?: 'user' | 'environment'\n * 选择前置摄像头还是后置摄像头\n\noptimizationMode\n\n * optimizationMode: [QNVideoOptimizationMode](https://developer.qiniu.com/rtc/9077/WebQNCameraVideoTrackConfig/9079/WebQNVideoOptimizationMode)\n * 传输优化模式\n\ncameraId\n\n * cameraId?: string\n * 选择摄像头 ID\n\ntag\n\n * tag?: string\n * 自定义 tag 信息\n\n以上内容是否对您有帮助?\n\n"
}
] | qiniu developer | 1 |
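下面给出按上述配置创建摄像头 Track 的示意(非官方示例):假设 SDK 通过 npm 包 qnweb-rtc 引入,预览容器的元素 id 为演示用假设。

```typescript
import QNRTC from "qnweb-rtc";

async function createCameraTrack() {
  // encoderConfig 可以用预设字符串(如 "480p"),也可以传入完整的 QNVideoEncoderConfig
  const cameraTrack = await QNRTC.createCameraVideoTrack({
    encoderConfig: "480p",
    facingMode: "user", // 前置摄像头
    tag: "camera",
  });
  // 本地预览
  await cameraTrack.play(document.getElementById("local-preview")!);
  return cameraTrack;
}
```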
[
{
"from": "human",
"value": "如何处理七牛云的实时音视频QNScreenVideoTrackConfig"
},
{
"from": "gpt",
"value": "[实时音视频](https://developer.qiniu.com/rtc) > API 文档 > Web\n>QNScreenVideoTrackConfig\n\n# QNScreenVideoTrackConfig\n\n最近更新时间: 2022-07-14 14:59:10\n\n屏幕分享采集配置\n\nIndex\n\nProperties\n\n * encoderConfig\n * optimizationMode\n * screenVideoTag\n * screenAudioTag\n * chromeExtensionSourceType\n * electronScreenSourceID\n\nProperties\n\nencoderConfig\n\n * encoderConfig?: [QNVideoEncoderConfig](https://developer.qiniu.com/rtc/9056/WebQNScreenVideoTrackConfig/10564/WebQNVideoEncoderConfig) | [QNScreenEncoderConfigPreset](https://developer.qiniu.com/rtc/9056/WebQNScreenVideoTrackConfig/9075)\n * 屏幕分享采集编码配置\n\n> 默认预设值为\n> [720p](https://developer.qiniu.com/rtc/9056/WebQNScreenVideoTrackConfig/9075)。\n\noptimizationMode\n\n * optimizationMode: [QNVideoOptimizationMode](https://developer.qiniu.com/rtc/9056/WebQNScreenVideoTrackConfig/9079/WebQNVideoOptimizationMode)\n * 传输优化模式\n\nscreenVideoTag\n\n * screenVideoTag?: string\n * 自定义屏幕录制视频 tag 文本\n\nscreenAudioTag\n\n * screenAudioTag?: string\n * 自定义系统声音 tag 文本\n\nchromeExtensionSourceType\n\n * chromeExtensionSourceType?: [QNChromeExtensionSourceType](https://developer.qiniu.com/rtc/9056/WebQNScreenVideoTrackConfig/9073/WebQNChromeExtensionSourceType)\n * 使用 Chrome 插件采集,设置采集源\n\nelectronScreenSourceID\n\n * electronScreenSourceID?: string\n * electron 环境下,指定屏幕共享源\n\n以上内容是否对您有帮助?\n\n"
}
] | qiniu developer | 1 |
[
{
"from": "human",
"value": "如何处理七牛云的实时音视频QNCanvasVideoTrackConfig"
},
{
"from": "gpt",
"value": "[实时音视频](https://developer.qiniu.com/rtc) > API 文档 > Web\n>QNCanvasVideoTrackConfig\n\n# QNCanvasVideoTrackConfig\n\n最近更新时间: 2022-07-14 14:59:52\n\nQNCanvasVideoTrack 配置\n\nIndex\n\nProperties\n\n * width\n * height\n * optimizationMode\n * tag\n * sources\n\nProperties\n\nwidth\n\n * width: number\n * 画布宽\n\nheight\n\n * height: number\n * 画布高\n\noptimizationMode\n\n * optimizationMode?: [QNVideoOptimizationMode](https://developer.qiniu.com/rtc/10185/WebQNCanvasVideoTrackConfig/9079/WebQNVideoOptimizationMode)\n * 视频传输优化模式\n\ntag\n\n * tag?: string\n * 自定义 tag 信息\n\nsources\n\n * sources: [QNCanvasVideoTrackSource](https://developer.qiniu.com/rtc/10185/WebQNCanvasVideoTrackConfig/10269/WebQNCanvasVideoTrackSource)[]\n * 画布中元素布局\n\n以上内容是否对您有帮助?\n\n"
}
] | qiniu developer | 1 |
[
{
"from": "human",
"value": "如何处理七牛云的实时音视频QNMicrophoneAudioTrackConfig"
},
{
"from": "gpt",
"value": "[实时音视频](https://developer.qiniu.com/rtc) > API 文档 > Web\n>QNMicrophoneAudioTrackConfig\n\n# QNMicrophoneAudioTrackConfig\n\n最近更新时间: 2022-07-14 15:00:25\n\n麦克风采集配置\n\nIndex\n\nProperties\n\n * encoderConfig\n * AEC\n * AGC\n * ANS\n * microphoneId\n * tag\n\nProperties\n\nencoderConfig\n\n * encoderConfig?: [QNAudioEncoderConfig](https://developer.qiniu.com/rtc/9055/WebQNMicrophoneAudioTrackConfig/10565/WebQNAudioEncoderConfig) | [QNAudioEncoderConfigPreset](https://developer.qiniu.com/rtc/9055/WebQNMicrophoneAudioTrackConfig/9075)\n * 音频编码配置\n\n> 默认预设值为\n> [STANDARD](https://developer.qiniu.com/rtc/9055/WebQNMicrophoneAudioTrackConfig/9075)。\n\nAEC\n\n * AEC?: boolean\n * 是否启动回音消除\n\nAGC\n\n * AGC?: boolean\n * 是否开启自动增益,即在原有音频上增加音量\n\nANS\n\n * ANS?: boolean\n * 是否开启降噪功能\n\nmicrophoneId\n\n * microphoneId?: string\n * 选择麦克风 ID\n\ntag\n\n * tag?: string\n * 自定义 tag 信息\n\n以上内容是否对您有帮助?\n\n"
}
] | qiniu developer | 1 |
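下面是按上述配置创建麦克风 Track 并开启 3A(AEC/AGC/ANS)的示意(非官方示例),预设名 "STANDARD" 以 SDK 实际导出的 QNAudioEncoderConfigPreset 为准。

```typescript
import QNRTC from "qnweb-rtc";

async function createMicTrack() {
  const micTrack = await QNRTC.createMicrophoneAudioTrack({
    encoderConfig: "STANDARD", // 预设名以实际 SDK 为准,也可传入自定义 QNAudioEncoderConfig
    AEC: true,                 // 回声消除
    AGC: true,                 // 自动增益
    ANS: true,                 // 降噪
    tag: "microphone",
  });
  return micTrack;
}
```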
[
{
"from": "human",
"value": "如何处理七牛云的实时音视频QNCustomVideoTrackConfig"
},
{
"from": "gpt",
"value": "[实时音视频](https://developer.qiniu.com/rtc) > API 文档 > Web\n>QNCustomVideoTrackConfig\n\n# QNCustomVideoTrackConfig\n\n最近更新时间: 2022-07-14 15:01:34\n\n自定义视频采集配置\n\nIndex\n\nProperties\n\n * mediaStreamTrack\n * bitrate\n * optimizationMode\n * tag\n\nProperties\n\nmediaStreamTrack\n\n * mediaStreamTrack: [MediaStreamTrack](https://developer.mozilla.org/en-US/docs/Web/API/MediaStreamTrack)\n * 视频采集源\n\nbitrate\n\n * bitrate?: number\n * 码率(Kbps)\n\n> 视频最低 100 kbps,最高 4000 kbps,若设置值超出最高码率或低于最低码率,则 sdk 会自动调整为最低或最高支持的码率。\n\noptimizationMode\n\n * optimizationMode?: [QNVideoOptimizationMode](https://developer.qiniu.com/rtc/9062/WebQNCustomVideoTrackConfig/9079/WebQNVideoOptimizationMode)\n * 视频传输优化模式\n\ntag\n\n * tag?: string\n * 自定义 tag 文本\n\n以上内容是否对您有帮助?\n\n"
}
] | qiniu developer | 1 |
[
{
"from": "human",
"value": "如何处理七牛云的实时音视频QNLiveStreamingConfig"
},
{
"from": "gpt",
"value": "[实时音视频](https://developer.qiniu.com/rtc) > API 文档 > Web >QNLiveStreamingConfig\n\n# QNLiveStreamingConfig\n\n最近更新时间: 2022-07-14 15:02:18\n\nCDN 转推基础配置\n\nHierarchy\n\n * QNLiveStreamingConfig \n * [QNDirectLiveStreamingConfig](https://developer.qiniu.com/rtc/9287/WebQNLiveStreamingConfig/9049/WebQNDirectLiveStreamingConfig)\n * [QNTranscodingLiveStreamingConfig](https://developer.qiniu.com/rtc/9287/WebQNLiveStreamingConfig/9086/WebQNTranscodingLiveStreamingConfig)\n\nIndex\n\nProperties\n\n * streamID\n * url\n\nProperties\n\nstreamID\n\n * streamID: string\n * 流任务 ID\n\nurl\n\n * url: string\n * 推流地址\n\n> 当有单路转推及合流场景切换的需求时,流地址一样会导致抢流现象的出现,因此需要在流地址中拼接 '?serialnum=xxx'\n> 决定流的优先级,serialnum 的值从 1 开始递增,值越大,优先级越高。\n\n以上内容是否对您有帮助?\n\n"
}
] | qiniu developer | 1 |
[
{
"from": "human",
"value": "如何处理七牛云的实时音视频QNDirectLiveStreamingConfig"
},
{
"from": "gpt",
"value": "[实时音视频](https://developer.qiniu.com/rtc) > API 文档 > Web\n>QNDirectLiveStreamingConfig\n\n# QNDirectLiveStreamingConfig\n\n最近更新时间: 2022-02-22 15:14:11\n\n单路转推配置项\n\nHierarchy\n\n * [QNLiveStreamingConfig](https://developer.qiniu.com/rtc/9049/WebQNDirectLiveStreamingConfig/9287/WebQNLiveStreamingConfig)\n * QNDirectLiveStreamingConfig \n * [QNTranscodingLiveStreamingConfig](https://developer.qiniu.com/rtc/9049/WebQNDirectLiveStreamingConfig/9086/WebQNTranscodingLiveStreamingConfig)\n\nIndex\n\nProperties\n\n * videoTrack\n * audioTrack\n * userConfigExtraInfo\n\nProperties\n\nvideoTrack\n\n * videoTrack?: [QNTrack](https://developer.qiniu.com/rtc/9049/WebQNDirectLiveStreamingConfig/9076/WebQNTrack)\n * 设置单路转推的视频 Track,仅支持设置一路视频 Track\n\naudioTrack\n\n * audioTrack?: [QNTrack](https://developer.qiniu.com/rtc/9049/WebQNDirectLiveStreamingConfig/9076/WebQNTrack)\n * 设置单路转推的音频 Track,仅支持设置一路音频 Track\n\nuserConfigExtraInfo\n\n * userConfigExtraInfo?: string\n * SEI 信息\n\n> SEI 即补充增强信息(Supplemental Enhancement\n> Information),属于码流范畴,它提供了向视频码流中加入额外信息的方法,可以实现数据与视频精准同步\n>\n> 1.当前最高支持 1k 字节 SEI 插入值\n>\n> 2.SEI 插入值支持魔法变量,目前仅提供 $(ts) ,表示当前时间戳,例如插入值为:'time:${ts}' =>\n> 'time:1640152850190'\n\n> 只有设置了单路转推的视频 Track 时插入 SEI 信息方能生效\n\n以上内容是否对您有帮助?\n\n"
}
] | qiniu developer | 1 |
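下面是组装单路转推配置并发起转推的示意(非官方示例):推流地址与 streamID 为演示用假设值,发起单路转推的方法名以 QNRTCClient 实际文档为准。

```typescript
import QNRTC from "qnweb-rtc";

const client = QNRTC.createClient();

async function startDirectStreaming(videoTrack: any, audioTrack: any) {
  const directConfig = {
    streamID: "direct-stream-1",
    // serialnum 用于与合流任务切换时决定抢流优先级
    url: "rtmp://publish.example.com/live/room-1?serialnum=1",
    videoTrack,                        // 仅支持一路视频 Track
    audioTrack,                        // 仅支持一路音频 Track
    userConfigExtraInfo: "time:${ts}", // SEI 信息,仅在设置了视频 Track 时生效
  };
  // 单路转推方法名为假设,请以 QNRTCClient 文档为准
  await client.startDirectLiveStreaming(directConfig);
}
```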
[
{
"from": "human",
"value": "如何处理七牛云的实时音视频QNAudioMixer"
},
{
"from": "gpt",
"value": "[实时音视频](https://developer.qiniu.com/rtc) > API 文档 > Web >QNAudioMixer\n\n# QNAudioMixer\n\n最近更新时间: 2022-02-22 15:14:26\n\n混音控制类,提供了混音流程处理的一系列方法\n\nIndex\n\nCore methods\n\n * start\n * stop\n * resume\n * pause\n * getDuration\n * getCurrentPosition\n * seekTo\n * setPlayingVolume\n\nEvents\n\n * state-changed\n\nCore methods\n\nstart\n\n * start(loop?: boolean): Promise<void>\n * 开始混音并播放\n\nParameters\n\n * Optional loop: boolean \n\n是否循环播放\n\nReturns Promise<void>\n\nstop\n\n * stop(): void\n * 停止混音和播放\n\nReturns void\n\nresume\n\n * resume(): void\n * 恢复混音和播放\n\nReturns void\n\npause\n\n * pause(): void\n * 暂停混音和播放\n\nReturns void\n\ngetDuration\n\n * getDuration(): number\n * 获取音频文件的时长,单位: us\n\nReturns number\n\ngetCurrentPosition\n\n * getCurrentPosition(): number\n * 获取当前音频混音的位置,单位: us\n\nReturns number\n\nseekTo\n\n * seekTo(time: number): void\n * 跳转到指定位置混音\n\nParameters\n\n * time: number \n\n指定位置的时间戳,单位: us\n\nReturns void\n\nsetPlayingVolume\n\n * setPlayingVolume(volume: number): void\n * 设置外部音频本地播放音量\n\nParameters\n\n * volume: number \n\n音量值,默认为 1 表示原始音量\n\n建议设置范围: 0-10 之间的浮点数\n\nReturns void\n\nEvents\n\nstate-changed\n\n * (state: [QNAudioSourceState](https://developer.qiniu.com/rtc/9066/WebQNAudioMixer/9047/WebQNAudioSourceState)): void\n * 音频源处理状态变化\n\nParameters\n\n * state: [QNAudioSourceState](https://developer.qiniu.com/rtc/9066/WebQNAudioMixer/9047/WebQNAudioSourceState)\n\n音频源处理状态\n\n以上内容是否对您有帮助?\n\n"
}
] | qiniu developer | 1 |
[
{
"from": "human",
"value": "如何处理七牛云的实时音视频QNAudioMusicMixer"
},
{
"from": "gpt",
"value": "[实时音视频](https://developer.qiniu.com/rtc) > API 文档 > Web >QNAudioMusicMixer\n\n# QNAudioMusicMixer\n\n最近更新时间: 2022-12-01 17:47:40\n\n背景音乐混音控制类,提供了控制背景音乐混音的一系列方法\n\nIndex\n\nCore methods\n\n * start\n * stop\n * pause\n * resume\n * getDuration\n * getCurrentPosition\n * seekTo\n * getMixingVolume\n * setMixingVolume\n * release\n\nEvents\n\n * music-state-change\n * error\n\nCore methods\n\nstart\n\n * start(loopCount: number): Promise<void>\n * 开始混音并播放\n\nParameters\n\n * loopCount: number \n\n循环次数,-1 为无限循环,0 则不做混音,其他则为指定次数的混音,默认一次混音\n\nReturns Promise<void>\n\nstop\n\n * stop(): void\n * 停止混音\n\nReturns void\n\npause\n\n * pause(): void\n * 暂停混音\n\nReturns void\n\nresume\n\n * resume(): void\n * 恢复混音\n\nReturns void\n\ngetDuration\n\n * getDuration(): number\n * 获取指定音频文件的总时长\n\n> 该接口为同步方法,对于在线音频文件,获取时长会存在一定的耗时\n\nReturns number\n\ngetCurrentPosition\n\n * getCurrentPosition(): number\n * 获取当前音频混音的位置,单位: ms\n\nReturns number\n\nseekTo\n\n * seekTo(position: number): void\n * 跳到指定位置混音\n\nParameters\n\n * position: number \n\n指定位置的时间戳,单位: ms\n\nReturns void\n\ngetMixingVolume\n\n * getMixingVolume(): number\n * 背景音乐混音的音量\n\nReturns number\n\nsetMixingVolume\n\n * setMixingVolume(musicVolume: number): void\n * 设置背景音乐的混音音量\n\nParameters\n\n * musicVolume: number \n\n初始默认音量为 1.0\n\n若想修改混音中麦克风采集音频的音量,可通过\n[QNMicrophoneAudioTrack.setVolume](https://developer.qiniu.com/rtc/12353/WebQNAudioMusicMixer/9064)\n接口实现\n\nReturns void\n\nrelease\n\n * release(): void\n * 释放所有音效资源\n\nReturns void\n\nEvents\n\nmusic-state-change\n\n * (state: [QNAudioSourceState](https://developer.qiniu.com/rtc/12353/WebQNAudioMusicMixer/9047/WebQNAudioSourceState)) => void\n * 混音状态改变时触发此回调\n\nParameters\n\n * state: [QNAudioSourceState](https://developer.qiniu.com/rtc/12353/WebQNAudioMusicMixer/9047/WebQNAudioSourceState)\n\n回调状态\n\nerror\n\n * (error: QNRTCError) => void\n * 混音操作发生错误时会触发此回调\n\nParameters\n\n * error: QNRTCError \n\n错误对象\n\n以上内容是否对您有帮助?\n\n"
}
] | qiniu developer | 1 |
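下面给出调用上述背景音乐混音接口的示意(非官方示例):其中混音器的创建方式(createAudioMusicMixer)与事件注册方式(on)为假设,音乐地址为演示用占位,请以实际 SDK 文档为准。

```typescript
import QNRTC from "qnweb-rtc";

async function playBackgroundMusic() {
  const micTrack = await QNRTC.createMicrophoneAudioTrack();

  // 假设:混音器由本地音频 Track 的 createAudioMusicMixer 创建,方法名以实际 SDK 为准
  const musicMixer = (micTrack as any).createAudioMusicMixer("https://example.com/bgm.mp3");

  musicMixer.on("music-state-change", (state: string) => {
    console.log("music state:", state);
  });

  await musicMixer.start(-1);      // -1 表示无限循环
  musicMixer.setMixingVolume(0.6); // 调低背景音乐的混音音量
  musicMixer.seekTo(30 * 1000);    // 跳到 30 秒处继续混音
}
```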
[
{
"from": "human",
"value": "如何处理七牛云的实时音视频QNAudioEffectMixer"
},
{
"from": "gpt",
"value": "[实时音视频](https://developer.qiniu.com/rtc) > API 文档 > Web >QNAudioEffectMixer\n\n# QNAudioEffectMixer\n\n最近更新时间: 2022-11-30 15:18:48\n\n音效混音控制类,提供了控制音效混音的一系列方法\n\nIndex\n\nCore methods\n\n * createAudioEffect\n * start\n * stop\n * pause\n * resume\n * stopAll\n * pauseAll\n * resumeAll\n * getCurrentPosition\n * setVolume\n * getVolume\n * setAllEffectsVolume\n * release\n\nEvents\n\n * finish\n * error\n\nCore methods\n\ncreateAudioEffect\n\n * createAudioEffect(effectID: number,filePath: string | File): Promise<[QNAudioEffect](https://developer.qiniu.com/rtc/12350/WebQNAudioEffectMixer/12352/WebQNAudioEffect)>\n * 每一路参与混音的音效都需要对应一个 [QNAudioEffect](https://developer.qiniu.com/rtc/12350/WebQNAudioEffectMixer/12352/WebQNAudioEffect) 实例\n\nParameters\n\n * effectID: number \n\n音效文件的唯一标识符,需自行生成并维护,且确保唯一\n\n * filePath: string | File \n\n音效文件地址,支持本地文件和在线文件\n\nReturns\nPromise<[QNAudioEffect](https://developer.qiniu.com/rtc/12350/WebQNAudioEffectMixer/12352/WebQNAudioEffect)>\n\nstart\n\n * start(effectID: number): void\n * 开始音效混音\n\nParameters\n\n * effectID: number \n\n音效文件的唯一标识符,需自行生成并维护,且确保唯一\n\nReturns void\n\nstop\n\n * stop(effectID: number): void\n * 停止音效混音\n\nParameters\n\n * effectID: number \n\n音效文件的唯一标识符,需自行生成并维护,且确保唯一\n\nReturns void\n\npause\n\n * pause(effectID: number): void\n * 暂停音效混音\n\nParameters\n\n * effectID: number \n\n音效文件的唯一标识符,需自行生成并维护,且确保唯一\n\nReturns void\n\nresume\n\n * resume(effectID: number): void\n * 恢复音效混音\n\nParameters\n\n * effectID: number \n\n音效文件的唯一标识符,需自行生成并维护,且确保唯一\n\nReturns void\n\nstopAll\n\n * stopAll(): void\n * 停止所有正在混音的音效\n\nReturns void\n\npauseAll\n\n * pauseAll(): void\n * 暂停所有正在混音的音效\n\nReturns void\n\nresumeAll\n\n * resumeAll(): void\n * 恢复所有暂停混音的音效\n\nReturns void\n\ngetCurrentPosition\n\n * getCurrentPosition(effectID: number): number\n * 获取指定音效混音的位置,单位: ms\n\nParameters\n\n * effectID: number \n\n音效文件的唯一标识符,需自行生成并维护,且确保唯一\n\nReturns number\n\nsetVolume\n\n * setVolume(effectID: number, volume: number): void\n * 设置指定音效的混音音量\n\n> 若想修改混音中麦克风采集音频的音量,可通过\n> [QNMicrophoneAudioTrack.setVolume](https://developer.qiniu.com/rtc/12350/WebQNAudioEffectMixer/9064)\n> 接口实现\n\nParameters\n\n * effectID: number \n\n音效文件的唯一标识符,需自行生成并维护,且确保唯一\n\n * volume: number \n\n音效的混音音量\n\nReturns void\n\ngetVolume\n\n * getVolume(effectID: number): number\n * 获取指定音效的混音音量\n\nParameters\n\n * effectID: number \n\n音效文件的唯一标识符,需自行生成并维护,且确保唯一\n\nReturns number\n\nsetAllEffectsVolume\n\n * setAllEffectsVolume(volume: number): void\n * 设置所有正在混音的音效的混音音量\n\nParameters\n\n * volume: number \n\n音效的混音音量,默认音量均为 1.0\n\nReturns void\n\nrelease\n\n * release(): void\n * 释放所有音效资源\n\nReturns void\n\nEvents\n\nfinish\n\n * (effectID: number) => void\n * 音效混音完成时触发此回调\n\nParameters\n\n * effectID: number \n\n音效文件的唯一标识符,需自行生成并维护,且确保唯一\n\nerror\n\n * (error: [QNRTCError](https://developer.qiniu.com/rtc/12350/WebQNAudioEffectMixer/12354/WebQNRTCError)) => void\n * 混音操作发生错误时会触发此回调\n\nParameters\n\n * error: [QNRTCError](https://developer.qiniu.com/rtc/12350/WebQNAudioEffectMixer/12354/WebQNRTCError)\n\n错误对象\n\n以上内容是否对您有帮助?\n\n"
}
] | qiniu developer | 1 |
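下面给出音效混音的使用示意(非官方示例):音效混音器的获取方式(createAudioEffectMixer)为假设,其余调用均对应上文列出的方法,effectID 与音效地址为演示值。

```typescript
import QNRTC from "qnweb-rtc";

async function playSoundEffect() {
  const micTrack = await QNRTC.createMicrophoneAudioTrack();

  // 假设:音效混音器由本地音频 Track 的 createAudioEffectMixer 创建,方法名以实际 SDK 为准
  const effectMixer = (micTrack as any).createAudioEffectMixer();

  // effectID 由业务自行生成并保证唯一
  const applause = await effectMixer.createAudioEffect(1, "https://example.com/applause.mp3");
  applause.setLoopCount(2);      // 循环两次

  effectMixer.start(1);          // 开始混音
  effectMixer.setVolume(1, 0.8); // 调整该音效的混音音量

  effectMixer.on("finish", (effectID: number) => {
    console.log("effect finished:", effectID);
    effectMixer.release();       // 不再使用时释放音效资源
  });
}
```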
[
{
"from": "human",
"value": "如何处理七牛云的实时音视频QNAudioEffect"
},
{
"from": "gpt",
"value": "[实时音视频](https://developer.qiniu.com/rtc) > API 文档 > Web >QNAudioEffect\n\n# QNAudioEffect\n\n最近更新时间: 2022-11-30 14:19:20\n\n音效混音配置类,可通过\n[createAudioEffect](https://developer.qiniu.com/rtc/12352/WebQNAudioEffect/12350/WebQNAudioEffectMixer)\n接口进行创建\n\nIndex\n\nCore methods\n\n * getDuration\n * getID\n * getFilePath\n * setLoopCount\n * getLoopCount\n\nCore methods\n\ngetDuration\n\n * getDuration(): number\n * 获取指定音效文件的时长,单位:ms\n\nReturns number\n\ngetID\n\n * getID(): number\n * 获取音效文件的唯一标识符\n\nReturns number\n\ngetFilePath\n\n * getFilePath(): string\n * 获取音效文件的路径\n\nReturns string\n\nsetLoopCount\n\n * setLoopCount(loopCount: number): void\n * 设置音效混音的循环次数\n\nParameters\n\n * loopCount: number \n\n循环次数,-1 为无限循环,0 则不做混音,其他则为指定次数的混音,默认 1\n\nReturns void\n\ngetLoopCount\n\n * getLoopCount(): number\n * 获取音效循环次数\n\nReturns number\n\n以上内容是否对您有帮助?\n\n"
}
] | qiniu developer | 1 |
[
{
"from": "human",
"value": "如何处理七牛云的实时音视频QNCustomMessage"
},
{
"from": "gpt",
"value": "[实时音视频](https://developer.qiniu.com/rtc) > API 文档 > Web >QNCustomMessage\n\n# QNCustomMessage\n\n最近更新时间: 2022-07-14 15:11:33\n\n自定义文本消息,当接收到远端发送的自定义消息时,会触发\n[QNRTCClient](https://developer.qiniu.com/rtc/9081/WebQNCustomMessage/9090/WebQNRTCClient)\n对象的 [message-\nreceived](https://developer.qiniu.com/rtc/9081/WebQNCustomMessage/9090) 事件。\n\nIndex\n\nProperties\n\n * ID\n * userID\n * content\n * timestamp\n\nProperties\n\nID\n\n * ID: string\n * 消息 ID\n\nuserID\n\n * userID: string\n * 用户 ID\n\ncontent\n\n * content: string\n * 消息内容\n\ntimestamp\n\n * timestamp: number\n * 时间戳\n\n以上内容是否对您有帮助?\n\n"
}
] | qiniu developer | 1 |
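下面是监听自定义消息的示意(非官方示例):事件名 message-received 来自上文说明,回调参数即为 QNCustomMessage,发送消息请使用 QNRTCClient 提供的发送接口(此处从略)。

```typescript
import QNRTC from "qnweb-rtc";

const client = QNRTC.createClient();

// 监听 message-received 事件,回调参数结构与上文 QNCustomMessage 一致
client.on("message-received", (message: {
  ID: string;
  userID: string;
  content: string;
  timestamp: number;
}) => {
  console.log(
    `[${new Date(message.timestamp).toLocaleTimeString()}] ${message.userID}: ${message.content}`
  );
});
```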
[
{
"from": "human",
"value": "如何处理七牛云的实时音视频QNMediaRelayConfig"
},
{
"from": "gpt",
"value": "[实时音视频](https://developer.qiniu.com/rtc) > API 文档 > Web >QNMediaRelayConfig\n\n# QNMediaRelayConfig\n\n最近更新时间: 2022-07-14 15:12:27\n\n跨房转发配置\n\n>\n> 具体使用方法可见[跨房媒体转发使用指南](https://developer.qiniu.com/rtc/11355/QNMediaRelayConfig-\n> Web/11359/media-relay-web)。\n\nIndex\n\nProperties\n\n * destRoomInfos\n\nProperties\n\ndestRoomInfos\n\n * destRoomInfos: [QNMediaRelayInfo](https://developer.qiniu.com/rtc/11355/QNMediaRelayConfig-Web/11352/QNMediaRelayInfo-Web)[]\n * 目标房间信息列表\n\n以上内容是否对您有帮助?\n\n"
}
] | qiniu developer | 1 |
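下面是组装跨房转发配置的示意(非官方示例):roomToken 为占位符,发起转发的方法名(此处记为 startMediaRelay)为假设,请以 QNRTCClient 实际文档为准。

```typescript
import QNRTC from "qnweb-rtc";

const client = QNRTC.createClient();

async function relayToOtherRooms() {
  const mediaRelayConfig = {
    destRoomInfos: [
      { roomName: "room-b", roomToken: "<room-b-token>" }, // token 为占位符
      { roomName: "room-c", roomToken: "<room-c-token>" },
    ],
  };
  // 跨房转发方法名为假设,返回值可按 QNMediaRelayResult 检查各目标房间的转发状态
  const result = await (client as any).startMediaRelay(mediaRelayConfig);
  console.log("relay result:", result);
}
```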
[
{
"from": "human",
"value": "如何处理七牛云的实时音视频QNMediaRelayInfo"
},
{
"from": "gpt",
"value": "[实时音视频](https://developer.qiniu.com/rtc) > API 文档 > Web >QNMediaRelayInfo\n\n# QNMediaRelayInfo\n\n最近更新时间: 2022-03-25 10:51:49\n\n转发目标房间信息\n\nIndex\n\nProperties\n\n * roomName\n * roomToken\n\nProperties\n\nroomName\n\n * roomName: string\n * 目标房间名\n\nroomToken\n\n * roomToken: string\n * 目标房间 token\n\n以上内容是否对您有帮助?\n\n"
}
] | qiniu developer | 1 |
[
{
"from": "human",
"value": "如何处理七牛云的实时音视频QNMediaRelayState"
},
{
"from": "gpt",
"value": "[实时音视频](https://developer.qiniu.com/rtc) > API 文档 > Web >QNMediaRelayState\n\n# QNMediaRelayState\n\n最近更新时间: 2022-03-25 10:52:47\n\n跨房媒体转发状态\n\nIndex\n\nEnumeration members\n\n * SUCCESS\n * STOPPED\n * INVALID_TOKEN\n * NO_ROOM\n * ROOM_CLOSED\n * PLAYER_EXISTED\n\nEnumeration members\n\nSUCCESS\n\n * SUCCESS = 0\n * 成功\n\nSTOPPED\n\n * STOPPED = 1\n * 主动退出\n\nINVALID_TOKEN\n\n * INVALID_TOKEN = 2\n * 无效 token\n\nNO_ROOM\n\n * NO_ROOM = 3\n * 目标房间不存在\n\nROOM_CLOSED\n\n * ROOM_CLOSED = 4\n * 目标房间已关闭\n\nPLAYER_EXISTED\n\n * PLAYER_EXISTED = 5\n * 目标房间存在相同用户名的用户\n\n以上内容是否对您有帮助?\n\n"
}
] | qiniu developer | 1 |
[
{
"from": "human",
"value": "如何处理七牛云的实时音视频QNMediaRelayResult"
},
{
"from": "gpt",
"value": "[实时音视频](https://developer.qiniu.com/rtc) > API 文档 > Web >QNMediaRelayResult\n\n# QNMediaRelayResult\n\n最近更新时间: 2022-07-14 15:13:15\n\n转发结果\n\nIndex\n\nProperties\n\n * roomName\n\nProperties\n\nroomName\n\n * [roomName: string]: [QNMediaRelayState](https://developer.qiniu.com/rtc/11356/QNMediaRelayResult-Web/11353/QNMediaRelayState-Web)\n * 转发房间 map 对象\n\n以上内容是否对您有帮助?\n\n"
}
] | qiniu developer | 1 |
[
{
"from": "human",
"value": "如何处理七牛云的实时音视频QNLiveStreamingState"
},
{
"from": "gpt",
"value": "[实时音视频](https://developer.qiniu.com/rtc) > API 文档 > Web >QNLiveStreamingState\n\n# QNLiveStreamingState\n\n最近更新时间: 2022-02-22 15:14:17\n\nCDN 转推状态\n\nIndex\n\nEnumeration members\n\n * STARTED\n * STOPPED\n\nEnumeration members\n\nSTARTED\n\n * STARTED = 'STARTED'\n * 开始\n\nSTOPPED\n\n * STOPPED = 'STOPPED'\n * 停止\n\n以上内容是否对您有帮助?\n\n"
}
] | qiniu developer | 1 |
[
{
"from": "human",
"value": "如何处理七牛云的实时音视频QNVideoEncoderConfig"
},
{
"from": "gpt",
"value": "[实时音视频](https://developer.qiniu.com/rtc) > API 文档 > Web >QNVideoEncoderConfig\n\n# QNVideoEncoderConfig\n\n最近更新时间: 2022-07-14 15:57:59\n\n视频编码配置\n\nIndex\n\nProperties\n\n * width\n * height\n * frameRate\n * bitrate\n * lowStreamConfig\n * isMultiProfileEnabled\n\nProperties\n\nwidth\n\n * width?: number | [QNConstrainLong](https://developer.qiniu.com/rtc/10564/WebQNVideoEncoderConfig/9052/WebQNConstrainLong)\n * 视频宽度\n\nheight\n\n * height?: number | [QNConstrainLong](https://developer.qiniu.com/rtc/10564/WebQNVideoEncoderConfig/9052/WebQNConstrainLong)\n * 视频高度\n\nframeRate\n\n * frameRate?: number\n * 视频帧率(fps)\n\nbitrate\n\n * bitrate?: number\n * 视频码率(Kbps)\n\n> 视频最低 100 kbps,最高 4000 kbps,若设置值超出最高码率或低于最低码率,则 sdk 会自动调整为最低或最高支持的码率。\n\nlowStreamConfig\n\n * lowStreamConfig?: [QNLowStreamConfig](https://developer.qiniu.com/rtc/10564/WebQNVideoEncoderConfig/10582/WebQNLowStreamConfig)\n * 小流配置\n\n> 设置该属性则表示根据该属性的值来开启大小流功能。\n\nisMultiProfileEnabled\n\n * isMultiProfileEnabled?: number\n * 是否开启大小流\n\n> 废弃,请使用\n> [lowStreamConfig](https://developer.qiniu.com/rtc/10564/WebQNVideoEncoderConfig/10564)\n> 替代。\n\n以上内容是否对您有帮助?\n\n"
}
] | qiniu developer | 1 |
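下面演示在创建摄像头 Track 时通过 encoderConfig 传入完整的 QNVideoEncoderConfig,并借助 lowStreamConfig 开启大小流(非官方示例,取值仅作演示)。

```typescript
import QNRTC from "qnweb-rtc";

async function createCameraWithSimulcast() {
  const cameraTrack = await QNRTC.createCameraVideoTrack({
    encoderConfig: {
      width: 1280,
      height: 720,
      frameRate: 25,
      bitrate: 1200,              // 大流码率,Kbps
      lowStreamConfig: {
        scaleResolutionDownBy: 2, // 小流宽高为大流的 1/2,即 640x360
        bitrate: 200,             // 小流码率,kbps
      },
    },
    tag: "camera-simulcast",
  });
  return cameraTrack;
}
```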
[
{
"from": "human",
"value": "如何处理七牛云的实时音视频QNVideoEncoderConfiguration"
},
{
"from": "gpt",
"value": "[实时音视频](https://developer.qiniu.com/rtc) > API 文档 > Web\n>QNVideoEncoderConfiguration\n\n# QNVideoEncoderConfiguration\n\n最近更新时间: 2022-02-22 15:14:13\n\n视频编码配置\n\nIndex\n\nProperties\n\n * width\n * height\n * frameRate\n * bitrate\n * isMultiProfileEnabled\n\nProperties\n\nwidth\n\n * width?: number | [QNConstrainLong](https://developer.qiniu.com/rtc/9046/WebQNVideoEncoderConfiguration/9052/WebQNConstrainLong)\n * 视频宽度\n\nheight\n\n * height?: number | [QNConstrainLong](https://developer.qiniu.com/rtc/9046/WebQNVideoEncoderConfiguration/9052/WebQNConstrainLong)\n * 视频高度\n\nframeRate\n\n * frameRate?: number\n * 视频帧率(fps)\n\nbitrate\n\n * bitrate?: number\n * 视频码率(Kbps)\n\n> 视频最低 100 kbps,最高 4000 kbps,若设置值超出最高码率或低于最低码率,则 sdk 会自动调整为最低或最高支持的码率。\n\nisMultiProfileEnabled\n\n * isMultiProfileEnabled?: number\n * 是否开启大小流\n\n以上内容是否对您有帮助?\n\n"
}
] | qiniu developer | 1 |
[
{
"from": "human",
"value": "如何处理七牛云的实时音视频QNMediaRecorderConfiguration"
},
{
"from": "gpt",
"value": "[实时音视频](https://developer.qiniu.com/rtc) > API 文档 > Web\n>QNMediaRecorderConfiguration\n\n# QNMediaRecorderConfiguration\n\n最近更新时间: 2024-01-08 11:14:24\n\n录制所需要的必要参数\n\nIndex\n\nProperties\n\n * width\n * height\n * frameRate\n * storagePath\n * maxDurationMs\n * recorderInfoUpdateInterval\n\nProperties\n\nwidth\n\n * width: number\n * 合流画布的宽\n\nheight\n\n * height: number\n * 合流画布的高\n\nframeRate\n\n * frameRate: number\n * 帧率\n\nstoragePath\n\n * storagePath: string\n * 录制文件在本地保存的绝对路径,需精确到文件名及格式。\n\n> 由于 Web 平台的特殊性,只需要指定文件名即可\n\nmaxDurationMs\n\n * maxDurationMs: number\n * 最大录制时长,单位为毫秒,默认值为 120000\n\nrecorderInfoUpdateInterval\n\n * recorderInfoUpdateInterval?: number\n * 录制信息更新间隔,单位为毫秒,取值范围为 [1000,10000]。SDK 会根据该值的设置触发 [onRecorderInfoUpdated](https://developer.qiniu.com/rtc/12629/WebQNMediaRecorderConfiguration/12628) 回调,报告更新后的录制信息\n\n以上内容是否对您有帮助?\n\n"
}
] | qiniu developer | 1 |
[
{
"from": "human",
"value": "如何处理七牛云的实时音视频QNRemoteVideoTrackStats"
},
{
"from": "gpt",
"value": "[实时音视频](https://developer.qiniu.com/rtc) > API 文档 > Web\n>QNRemoteVideoTrackStats\n\n# QNRemoteVideoTrackStats\n\n最近更新时间: 2022-07-14 15:14:07\n\n远端视频轨道的统计数据,可以通过\n[QNRemoteVideoTrack.getStats](https://developer.qiniu.com/rtc/9380/WebQNRemoteVideoTrackStats/9060)\n获取。\n\nIndex\n\nProperties\n\n * profile\n * downlinkFrameRate\n * downlinkBitrate\n * downlinkLostRate\n * uplinkRTT\n * uplinkLostRate\n\nProperties\n\nprofile\n\n * profile?: [QNTrackProfile](https://developer.qiniu.com/rtc/9380/WebQNRemoteVideoTrackStats/9050/WebQNTrackProfile)\n * 视频 profile(当前应用正在接收的视频轨道的 profile)\n\ndownlinkFrameRate\n\n * downlinkFrameRate: number\n * 下行视频帧率(当前应用正在下载的轨道的帧率)\n\ndownlinkBitrate\n\n * downlinkBitrate: number\n * 下行视频码率,单位 bps(当前应用下载对应轨道时的码率,和远端用户上行无关)\n\ndownlinkLostRate\n\n * downlinkLostRate: number\n * 下行网络丢包率,范围 [0, 100](当前应用下载对应轨道时的丢包率,和远端用户上行无关)\n\nuplinkRTT\n\n * uplinkRTT: number\n * 上行网络延时,单位毫秒(表示远端用户上传自身视频数据时的延时)\n\nuplinkLostRate\n\n * uplinkLostRate: number\n * 上行网络丢包率,范围 [0, 100](表示远端用户上传自身视频数据时的丢包率)\n\n以上内容是否对您有帮助?\n\n"
}
] | qiniu developer | 1 |
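下面是定时读取远端视频轨道统计数据的示意(非官方示例),仅使用上文列出的 getStats 字段;remoteVideoTrack 为订阅得到的 QNRemoteVideoTrack 实例。

```typescript
// 订阅成功后可定时读取远端视频轨道的统计数据,用于弱网监控与日志上报
function monitorRemoteVideo(remoteVideoTrack: any): number {
  return window.setInterval(() => {
    const stats = remoteVideoTrack.getStats();
    console.log(
      `profile=${stats.profile}`,
      `fps=${stats.downlinkFrameRate}`,
      `bitrate=${Math.round(stats.downlinkBitrate / 1000)}kbps`,
      `downLost=${stats.downlinkLostRate}%`,
      `rtt=${stats.uplinkRTT}ms`,
    );
  }, 2000);
}
```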
[
{
"from": "human",
"value": "如何处理七牛云的实时音视频QNElectronScreenSource"
},
{
"from": "gpt",
"value": "[实时音视频](https://developer.qiniu.com/rtc) > API 文档 > Web\n>QNElectronScreenSource\n\n# QNElectronScreenSource\n\n最近更新时间: 2022-02-15 17:48:32\n\nElectron 环境下屏幕共享采集源\n\nIndex\n\nProperties\n\n * ID\n * name\n * url\n\nProperties\n\nID\n\n * ID: string\n * The ID of the screen source\n\nname\n\n * name: string\n * The name of the screen source\n\nurl\n\n * url: [Data URLs](https://developer.mozilla.org/en-US/docs/Web/HTTP/Basics_of_HTTP/Data_URIs)\n * The thumbnail of the screen source\n\n以上内容是否对您有帮助?\n\n"
}
] | qiniu developer | 1 |
[
{
"from": "human",
"value": "如何处理七牛云的实时音视频QNLocalVideoTrackStats"
},
{
"from": "gpt",
"value": "[实时音视频](https://developer.qiniu.com/rtc) > API 文档 > Web\n>QNLocalVideoTrackStats\n\n# QNLocalVideoTrackStats\n\n最近更新时间: 2022-07-14 15:14:46\n\n近端视频轨道的统计数据,可以通过\n[QNLocalVideoTrack.getStats](https://developer.qiniu.com/rtc/9051/WebQNLocalVideoTrackStats/9061)\n获取。\n\nIndex\n\nProperties\n\n * profile\n * uplinkFrameRate\n * uplinkBitrate\n * uplinkRTT\n * uplinkLostRate\n\nProperties\n\nprofile\n\n * profile?: [QNTrackProfile](https://developer.qiniu.com/rtc/9051/WebQNLocalVideoTrackStats/9050/WebQNTrackProfile)\n * 该路 track 的 profile\n\nuplinkFrameRate\n\n * uplinkFrameRate: number\n * 上行视频帧率\n\nuplinkBitrate\n\n * uplinkBitrate: number\n * 上行视频码率,单位 bps\n\nuplinkRTT\n\n * uplinkRTT: number\n * 上行网络 rtt,单位 ms\n\nuplinkLostRate\n\n * uplinkLostRate: number\n * 上行网络丢包率,范围 [0, 100]\n\n以上内容是否对您有帮助?\n\n"
}
] | qiniu developer | 1 |
[
{
"from": "human",
"value": "如何处理七牛云的实时音视频QNConnectionState"
},
{
"from": "gpt",
"value": "[实时音视频](https://developer.qiniu.com/rtc) > API 文档 > Web >QNConnectionState\n\n# QNConnectionState\n\n最近更新时间: 2022-02-22 15:14:19\n\n房间连接状态\n\nIndex\n\nEnumeration members\n\n * DISCONNECTED\n * CONNECTING\n * CONNECTED\n * RECONNECTING\n * RECONNECTED\n\nEnumeration members\n\nDISCONNECTED\n\n * DISCONNECTED = 'DISCONNECTED'\n * 初始状态\n\nCONNECTING\n\n * CONNECTING = 'CONNECTING'\n * 正在连接\n\nCONNECTED\n\n * CONNECTED = 'CONNECTED'\n * 连接成功\n\nRECONNECTING\n\n * RECONNECTING = 'RECONNECTING'\n * 正在重连\n\nRECONNECTED\n\n * RECONNECTED = 'RECONNECTED'\n * 重连成功\n\n以上内容是否对您有帮助?\n\n"
}
] | qiniu developer | 1 |
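下面是监听房间连接状态变化的示意(非官方示例):事件名为假设(常见为 connection-state-changed),枚举取值与上文一致,请以 QNRTCClient 实际文档为准。

```typescript
import QNRTC from "qnweb-rtc";

const client = QNRTC.createClient();

// 事件名为假设,回调中的 state 即上文 QNConnectionState 的字符串取值
client.on("connection-state-changed", (state: string, info?: unknown) => {
  switch (state) {
    case "CONNECTED":
    case "RECONNECTED":
      console.log("已连接到房间");
      break;
    case "RECONNECTING":
      console.log("网络波动,正在重连……");
      break;
    case "DISCONNECTED":
      console.log("连接已断开", info);
      break;
  }
});
```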
[
{
"from": "human",
"value": "如何处理七牛云的实时音视频QNAudioSourceState"
},
{
"from": "gpt",
"value": "[实时音视频](https://developer.qiniu.com/rtc) > API 文档 > Web >QNAudioSourceState\n\n# QNAudioSourceState\n\n最近更新时间: 2022-02-22 15:14:07\n\n音频源处理状态\n\nIndex\n\nEnumeration members\n\n * PlAYING\n * PAUSED\n * STOPPED\n\nEnumeration members\n\nPlAYING\n\n * PlAYING = 'PlAYING'\n * 播放中\n\nPAUSED\n\n * PAUSED = 'PAUSED'\n * 暂停\n\nSTOPPED\n\n * STOPPED = 'STOPPED'\n * 停止\n\n以上内容是否对您有帮助?\n\n"
}
] | qiniu developer | 1 |
[
{
"from": "human",
"value": "如何处理七牛云的实时音视频QNLocalAudioTrackStats"
},
{
"from": "gpt",
"value": "[实时音视频](https://developer.qiniu.com/rtc) > API 文档 > Web\n>QNLocalAudioTrackStats\n\n# QNLocalAudioTrackStats\n\n最近更新时间: 2022-07-14 15:15:25\n\n近端音频轨道的统计数据,可以通过\n[QNLocalAudioTrack.getStats](https://developer.qiniu.com/rtc/9083/WebQNLocalAudioTrackStats/9089)\n获取。\n\nIndex\n\nProperties\n\n * uplinkBitrate\n * uplinkRTT\n * uplinkLostRate\n\nProperties\n\nuplinkBitrate\n\n * uplinkBitrate: number\n * 上行音频码率,单位 bps\n\nuplinkRTT\n\n * uplinkRTT: number\n * 上行网络 rtt,单位 ms\n\nuplinkLostRate\n\n * uplinkLostRate: number\n * 上行网络丢包率,范围 [0, 100]\n\n以上内容是否对您有帮助?\n\n"
}
] | qiniu developer | 1 |
[
{
"from": "human",
"value": "如何处理七牛云的实时音视频QNRemoteAudioTrackStats"
},
{
"from": "gpt",
"value": "[实时音视频](https://developer.qiniu.com/rtc) > API 文档 > Web\n>QNRemoteAudioTrackStats\n\n# QNRemoteAudioTrackStats\n\n最近更新时间: 2022-07-14 15:15:54\n\n远端音频轨道的统计数据,可以通过\n[QNRemoteAudioTrack.getStats](https://developer.qiniu.com/rtc/9093/WebQNRemoteAudioTrackStats)\n获取。\n\nIndex\n\nProperties\n\n * downlinkBitrate\n * uplinkRTT\n * uplinkLostRate\n * downlinkLostRate\n\nProperties\n\ndownlinkBitrate\n\n * downlinkBitrate: number\n * 下行音频码率,单位 bps(当前应用下载对应轨道时的码率,和远端用户上行无关)\n\nuplinkRTT\n\n * uplinkRTT: number\n * 上行网络 rtt,单位 ms(表示远端用户上传自身音频数据时的延时)\n\nuplinkLostRate\n\n * uplinkLostRate: number\n * 上行网络丢包率,范围 [0, 100](表示远端用户上传自身音频数据时的丢包率)\n\ndownlinkLostRate\n\n * downlinkLostRate: number\n * 下行网络丢包率,范围 [0, 100](当前应用下载对应轨道时的丢包率,和远端用户上行无关)\n\n以上内容是否对您有帮助?\n\n"
}
] | qiniu developer | 1 |
[
{
"from": "human",
"value": "如何处理七牛云的实时音视频QNDeviceState"
},
{
"from": "gpt",
"value": "[实时音视频](https://developer.qiniu.com/rtc) > API 文档 > Web >QNDeviceState\n\n# QNDeviceState\n\n最近更新时间: 2023-07-05 12:39:29\n\n设备状态\n\nIndex\n\nEnumeration members\n\n * ACTIVE\n * INACTIVE\n\nEnumeration members\n\nACTIVE\n\n * ACTIVE = 'ACTIVE'\n * 设备插入\n\nINACTIVE\n\n * INACTIVE = 'INACTIVE'\n * 设备拔出\n\n以上内容是否对您有帮助?\n\n"
}
] | qiniu developer | 1 |
[
{
"from": "human",
"value": "如何处理七牛云的实时音视频QNRecorderState"
},
{
"from": "gpt",
"value": "[实时音视频](https://developer.qiniu.com/rtc) > API 文档 > Web >QNRecorderState\n\n# QNRecorderState\n\n最近更新时间: 2024-01-08 10:36:38\n\n录制状态\n\nIndex\n\nEnumeration members\n\n * IDLE\n * RECORDING\n * STOPPED\n\nEnumeration members\n\nIDLE\n\n * IDLE = 'IDLE'\n * 初始状态\n\nRECORDING\n\n * RECORDING = 'RECORDING'\n * 录制中\n\nSTOPPED\n\n * STOPPED = 'STOPPED'\n * 停止录制\n\n以上内容是否对您有帮助?\n\n"
}
] | qiniu developer | 1 |
[
{
"from": "human",
"value": "如何处理七牛云的实时音视频QNLowStreamConfig"
},
{
"from": "gpt",
"value": "[实时音视频](https://developer.qiniu.com/rtc) > API 文档 > Web >QNLowStreamConfig\n\n# QNLowStreamConfig\n\n最近更新时间: 2022-07-14 14:29:14\n\n小流配置\n\nIndex\n\nProperties\n\n * scaleResolutionDownBy\n * bitrate\n\nProperties\n\nscaleResolutionDownBy\n\n * scaleResolutionDownBy: number\n * 小流相对大流的缩放比例。例如,如果值为 2,则表示小流的宽高为大流的 1/2。\n\nbitrate\n\n * bitrate: number\n * 小流码率(kbps)\n\n以上内容是否对您有帮助?\n\n"
}
] | qiniu developer | 1 |
[
{
"from": "human",
"value": "如何处理七牛云的实时音视频API 概览"
},
{
"from": "gpt",
"value": "[实时音视频](https://developer.qiniu.com/rtc) > API 文档 > Windows >API 概览\n\n# API 概览\n\n最近更新时间: 2022-04-01 17:15:42\n\nQNRTCWin 是七牛云推出的一款跨平台的实时音视频 C++ SDK,提供了灵活的接口,支持高度定制以及二次开发。\n\n> 本文档适用于 QNRTC 5.x 版本的 SDK,并不向下兼容。\n>\n> QNRTC 5.x 在 3.x 的基础上进行了优化,从而使使用者能够更加高效地接入,所以新接入的用户推荐使用最新版本的 SDK。\n>\n> 老用户也可以继续使用 QNRTC 3.x 版本,我们也会持续在此版本上进行问题修复和版本更迭,但是后续的一些新功能有可能并不会在旧版本上提供。\n\nQNRTC\n\nQNRTC 提供 SDK 的入口方法,可以通过它进行 SDK 的初始化,各个类型 Track 的创建等操作。其核心方法如下:\n\n创建 Track\n\n方法 | 描述 \n---|--- \n[CreateMicrophoneAudioTrack](https://developer.qiniu.com/rtc/9243/the-api-documentation/9445) | 创建麦克风音频轨道 \n[CreateCameraVideoTrack](https://developer.qiniu.com/rtc/9243/the-api-documentation/9445) | 创建摄像头视频轨道 \n[CreateScreenVideoTrack](https://developer.qiniu.com/rtc/9243/the-api-documentation/9445) | 创建屏幕采集的视频轨道 \n[CreateCustomAudioTrack](https://developer.qiniu.com/rtc/9243/the-api-documentation/9445) | 创建自定义导入音频数据轨道 \n[CreateCustomVideoTrack](https://developer.qiniu.com/rtc/9243/the-api-documentation/9445) | 创建自定义导入视频数据轨道 \n[DestroyLocalTrack](https://developer.qiniu.com/rtc/9243/the-api-documentation/9445/DestroyLocalTrack) | 销毁创建的音视频数据轨道 \n \nQNRTCClient\n\nQNRTCClient 提供和房间相关的一系列方法,通过它我们可以实现加入房间,在房间内发布或者订阅相应的音视频轨道等操作。主要的核心方法如下:\n\n核心方法\n\n方法 | 描述 \n---|--- \n[Join](https://developer.qiniu.com/rtc/9243/the-api-documentation/9446) | 加入房间 \n[Leave](https://developer.qiniu.com/rtc/9243/the-api-documentation/9446) | 离开房间 \n[Publish](https://developer.qiniu.com/rtc/9243/the-api-documentation/9446) | 发布本地的音视频轨道 \n[UnPublish](https://developer.qiniu.com/rtc/9243/the-api-documentation/9446) | 取消发布本地的音视频轨道 \n[Subscribe](https://developer.qiniu.com/rtc/9243/the-api-documentation/9446) | 订阅远端用户发布的音视频轨道 \n[UnSubscribe](https://developer.qiniu.com/rtc/9243/the-api-documentation/9446) | 取消订阅远端用户发布的音视频轨道 \n \n转推 CDN\n\n方法 | 描述 \n---|--- \n[StartLiveStreaming](https://developer.qiniu.com/rtc/9243/the-api-documentation/9446) | 开始单路转推或者合流转推 \n[StopLiveStreaming](https://developer.qiniu.com/rtc/9243/the-api-documentation/9446) | 停止单路转推或者合流转推 \n[SetTranscodingLiveStreamingTracks](https://developer.qiniu.com/rtc/9243/the-api-documentation/9446) | 更新合流转推的布局配置 \n[RemoveTranscodingLiveStreamingTracks](https://developer.qiniu.com/rtc/9243/the-api-documentation/9446) | 移除合流转推的布局配置 \n[SetLiveStreamingListener](https://developer.qiniu.com/rtc/9243/the-api-documentation/9446) | 设置转推相关事件的监听器 \n \nQNLocalTrack\n\nQNLocalTrack 是本地创建的一个音频或者视频轨道,可以通过 QNRTC\n来创建各种类型的音视频轨道,例如麦克风采集的音频轨道,摄像头采集的视频轨道,或者屏幕采集的视频轨道等等,我们真正用到的是它所衍生的子类。\n\n以 QNLocalTrack 为基类,向下衍生的子类如下:\n\nHierarchy\n\n * [QNLocalTrack](https://developer.qiniu.com/rtc/9243/the-api-documentation/9422/QNLocalTrack)\n * [QNLocalVideoTrack](https://developer.qiniu.com/rtc/9243/the-api-documentation/9424/QNLocalVideoTrack)\n * [QNCameraVideoTrack](https://developer.qiniu.com/rtc/9243/the-api-documentation/9404/QNCameraVideoTrack)\n * [QNScreenVideoTrack](https://developer.qiniu.com/rtc/9243/the-api-documentation/9448/QNScreenVideoTrack)\n * [QNCustomVideoTrack](https://developer.qiniu.com/rtc/9243/the-api-documentation/9412/QNCustomVideoTrack)\n * [QNLocalAudioTrack](https://developer.qiniu.com/rtc/9243/the-api-documentation/9420/QNLocalAudioTrack)\n * [QNMicrophoneAudioTrack](https://developer.qiniu.com/rtc/9243/the-api-documentation/9433/QNMicrophoneAudioTrack)\n * [QNCustomAudioTrack](https://developer.qiniu.com/rtc/9243/the-api-documentation/9409/QNCustomAudioTrack)\n\n本地音频轨道\n\n * 本地音频轨道 | 描述 \n---|--- 
\n[QNMicrophoneAudioTrack](https://developer.qiniu.com/rtc/9243/the-api-documentation/9433/QNMicrophoneAudioTrack) | 以麦克风采集到的音频数据为数据源创建的音频轨道,提供了音量控制,数据回调等接口,通过 CreateMicrophoneAudioTrack 创建。 \n[QNCustomAudioTrack](https://developer.qiniu.com/rtc/9243/the-api-documentation/9409/QNCustomAudioTrack) | 以外部导入的 PCM 音频数据为数据源创建的音频轨道,可以通过 PushAudioFrame 方法往房间内推送数据,通过 CreateCustomAudioTrack 创建。 \n本地视频轨道 | 描述 \n---|--- \n[QNCameraVideoTrack](https://developer.qiniu.com/rtc/9243/the-api-documentation/9404/QNCameraVideoTrack) | 以摄像头采集到的视频数据为数据源创建的视频轨道,提供了包括摄像头采集镜像,推送图片流等接口。通过 CreateCameraVideoTrack 创建。 \n[QNScreenVideoTrack](https://developer.qiniu.com/rtc/9243/the-api-documentation/9448/QNScreenVideoTrack) | 以屏幕采集到的视频数据为数据源创建的视频轨道,通过 CreateScreenVideoTrack 创建。 \n[QNCustomVideoTrack](https://developer.qiniu.com/rtc/9243/the-api-documentation/9412/QNCustomVideoTrack) | 以外部导入的 kI420 kYUY2 kRGB24 格式的视频数据为数据源创建的视频轨道,可以通过 PushVideoFrame 方法往房间内推送视频数据,通过 CreateCustomVideoTrack 创建。 \n\nQNRemoteTrack\n\nQNRemoteTrack 是远端用户创建的音视频轨道,本地用户可以通过房间中的事件回调获取。其衍生的子类如下:\n\nHierarchy\n\n * [QNRemoteTrack](https://developer.qiniu.com/rtc/9243/the-api-documentation/9440/QNRemoteTrack)\n * [QNRemoteVideoTrack](https://developer.qiniu.com/rtc/9243/the-api-documentation/9443/QNRemoteVideoTrack)\n * [QNRemoteAudioTrack](https://developer.qiniu.com/rtc/9243/the-api-documentation/9438/QNRemoteAudioTrack)\n\n * 远端音视频轨道 | 描述 \n---|--- \n[QNRemoteAudioTrack](https://developer.qiniu.com/rtc/9243/the-api-documentation/9438/QNRemoteAudioTrack) | 远端用户发布的音频轨道,可以通过该类来调节播放的音量大小,并且还提供了回调音频数据等方法。 \n[QNRemoteVideoTrack](https://developer.qiniu.com/rtc/9243/the-api-documentation/9443/QNRemoteVideoTrack) | 远端用户发布的视频轨道,可以通过该类来把远端视频画面渲染到视图中,并且还提供了回调视频数据等方法。 \n\nQNClientEventListener\n\n房间内的事件可以通过该接口类进行监听,通过调用 SetQNClientEventListener,把 QNClientEventListener\n作为参数传入。其提供的主要回调方法如下:\n\n * 方法 | 描述 \n---|--- \n[OnConnectionStateChanged](https://developer.qiniu.com/rtc/9243/the-api-documentation/9407) | 房间状态改变时会触发此回调方法,自己加入房间成功,state 变为 CONNECTED,当离开房间时,state 变为 DISCONNECTED。详情可参看 [QNConnectionState](https://developer.qiniu.com/rtc/9243/the-api-documentation/9464/QNConnectionState) \n[OnUserJoined](https://developer.qiniu.com/rtc/9243/the-api-documentation/9407) | 当远端用户加入房间时会触发此事件。 \n[OnUserPublished](https://developer.qiniu.com/rtc/9243/the-api-documentation/9407) | 当远端用户发布音视频轨道时会触发此事件。 \n[OnUserUnpublished](https://developer.qiniu.com/rtc/9243/the-api-documentation/9407) | 当远端用户取消发布音视频轨道时会触发此事件。 \n[OnUserLeft](https://developer.qiniu.com/rtc/9243/the-api-documentation/9407) | 当远端用户离开房间时会触发此事件。 \n[OnUserReconnecting](https://developer.qiniu.com/rtc/9243/the-api-documentation/9407) | 当远端用户正在重连时会触发此事件。 \n[OnUserReconnected](https://developer.qiniu.com/rtc/9243/the-api-documentation/9407) | 当远端用户重连成功时会触发此事件。 \n[OnSubscribed](https://developer.qiniu.com/rtc/9243/the-api-documentation/9407) | 当订阅远端用户的音视频轨道成功之后,会触发此事件。 \n[OnMessageReceived](https://developer.qiniu.com/rtc/9243/the-api-documentation/9407) | 当收到远端用户发送的自定义消息时会触发此事件。 \n[OnMediaRelayStateChanged](https://developer.qiniu.com/rtc/9243/the-api-documentation/9407) | 跨房媒体转发状态变更,非主动调用触发,由目标房间状态变化引起此通知 \n\n以上内容是否对您有帮助?\n\n"
}
] | qiniu developer | 1 |
[
{
"from": "human",
"value": "如何处理七牛云的实时音视频QNAudioDeviceInfo"
},
{
"from": "gpt",
"value": "[实时音视频](https://developer.qiniu.com/rtc) > API 文档 > Windows >QNAudioDeviceInfo\n\n# QNAudioDeviceInfo\n\n最近更新时间: 2021-09-27 11:43:42\n\n音频设备信息\n\nIndex\n\nProperties\n\n * device_type\n * device_index\n * is_default\n * device_name\n * device_id\n\nEnum\n\n * QNAudioDeviceType\n\nProperties\n\ndevice_type\n\n * device_type: QNAudioDeviceType\n * 设备类型,录制 or 播放\n\ncode | describe | message \n---|---|--- \nadt_invalid | 无效设备 | 无 \nadt_record | 录制设备 | 无 \nadt_playout | 播放设备 | 无 \n\ndevice_index\n\n * device_index: int\n * 设备编号\n\nis_default\n\n * is_default: bool\n * 是否是系统默认设备\n\ndevice_name\n\n * device_name: char[]\n * 设备名\n\ndevice_id\n\n * device_id: char[]\n * 设备 id\n\nEnum\n\nQNAudioDeviceType\n\n * Type declaration\n\n * adt_invalid\n\n * adt_record\n\n * adt_playout\n\n以上内容是否对您有帮助?\n\n"
}
] | qiniu developer | 1 |
[
{
"from": "human",
"value": "如何处理七牛云的实时音视频QNAudioDeviceState"
},
{
"from": "gpt",
"value": "[实时音视频](https://developer.qiniu.com/rtc) > API 文档 > Windows\n>QNAudioDeviceState\n\n# QNAudioDeviceState\n\n最近更新时间: 2022-03-23 21:13:24\n\n音频设备当前状态,用于设备插拔的检测和通知\n\nIndex\n\nEnum\n\n * QNAudioDeviceState\n\nEnum\n\nQNAudioDeviceState\n\n * code | describe | message \n---|---|--- \nkAudioDeviceActive | 新的可用设备 | 无 \nkAudioDeviceDisabled | 设备失效 | 无 \nkAudioDeviceNotPresent | 设备不存在 | 无 \nkAudioDeviceLost | 设备被拔出 | 无 \n \nType declaration\n\n * kAudioDeviceActive\n\n * kAudioDeviceDisabled\n\n * kAudioDeviceNotPresent\n\n * kAudioDeviceLost\n\n以上内容是否对您有帮助?\n\n"
}
] | qiniu developer | 1 |
[
{
"from": "human",
"value": "如何处理七牛云的实时音视频QNAudioFrameListener"
},
{
"from": "gpt",
"value": "[实时音视频](https://developer.qiniu.com/rtc) > API 文档 > Windows\n>QNAudioFrameListener\n\n# QNAudioFrameListener\n\n最近更新时间: 2022-03-23 21:15:05\n\n音频数据回调接口\n\nIndex\n\nCore Methods\n\n * OnAudioFrame\n\nCore Methods\n\nOnAudioFrame\n\n * virtual void OnAudioFrame(const string& user_id,const string& track_id,const uint8_t* data, uint32_t data_size,uint32_t bits_per_sample, uint32_t sample_rate,uint32_t channels) = 0\n * 音频数据回调\n\nParameters\n\n * user_id: string \n\n此音频数据所属的用户\n\n * track_id: string \n\n此音频 track id\n\n * data: uint8_t* \n\n音频数据内存指针\n\n * data_size: uint32_t \n\n数据长度\n\n * bits_per_sample: uint32_t \n\n位宽,即每个采样点占用位数\n\n * sample_rate: uint32_t \n\n采样率\n\n * channels: uint32_t \n\n声道数\n\n以上内容是否对您有帮助?\n\n"
}
] | qiniu developer | 1 |
[
{
"from": "human",
"value": "如何处理七牛云的实时音视频QNAudioQuality"
},
{
"from": "gpt",
"value": "[实时音视频](https://developer.qiniu.com/rtc) > API 文档 > Windows >QNAudioQuality\n\n# QNAudioQuality\n\n最近更新时间: 2022-03-23 21:24:15\n\n描述音频质量的配置属性\n\nIndex\n\nProperties\n\n * sample_rate\n * channel_count\n * bits_per_sample\n * bitrate\n\nProperties\n\nsample_rate\n\n * sample_rate: int32_t\n * 采样率\n\nchannel_count\n\n * channel_count: int32_t\n * 声道数\n\nbits_per_sample\n\n * bits_per_sample: int32_t\n * 位宽\n\nbitrate\n\n * bitrate: int32_t\n * 编码码率,单位 kbps\n\n以上内容是否对您有帮助?\n\n"
}
] | qiniu developer | 1 |
[
{
"from": "human",
"value": "如何处理七牛云的实时音视频QNCameraCapability"
},
{
"from": "gpt",
"value": "[实时音视频](https://developer.qiniu.com/rtc) > API 文档 > Windows\n>QNCameraCapability\n\n# QNCameraCapability\n\n最近更新时间: 2022-03-23 21:27:22\n\n摄像头支持的采集能力\n\nIndex\n\nProperties\n\n * width\n * height\n * max_fps\n * video_frame_type\n\nProperties\n\nwidth\n\n * width: int\n * 采集宽\n\nheight\n\n * height: int\n * 采集高\n\nmax_fps\n\n * max_fps: int\n * 采集帧率\n\nvideo_frame_type\n\n * video_frame_type: [QNVideoFrameType](https://developer.qiniu.com/rtc/9401/qncameracapability/11324/QNVideoFrameType)\n * 采集的视频数据格式\n\n以上内容是否对您有帮助?\n\n"
}
] | qiniu developer | 1 |
[
{
"from": "human",
"value": "如何处理七牛云的实时音视频QNCameraEventListener"
},
{
"from": "gpt",
"value": "[实时音视频](https://developer.qiniu.com/rtc) > API 文档 > Windows\n>QNCameraEventListener\n\n# QNCameraEventListener\n\n最近更新时间: 2022-03-24 15:43:13\n\n摄像头开启,采集异常通知\n\nIndex\n\nCore Methods\n\n * OnCameraError\n\nCore Methods\n\nOnCameraError\n\n * virtual void OnCameraError(int error_code, const std::string& error_message)\n * 相机失败回调\n\nParameters\n\n * error_code: int \n\n详见错误码\n[QNErrorCode](https://developer.qiniu.com/rtc/development_guidelines/11350/error-\ncode-windows)\n\n * error_message: string \n\n错误描述\n\n以上内容是否对您有帮助?\n\n"
}
] | qiniu developer | 1 |
[
{
"from": "human",
"value": "如何处理七牛云的实时音视频QNCameraInfo"
},
{
"from": "gpt",
"value": "[实时音视频](https://developer.qiniu.com/rtc) > API 文档 > Windows >QNCameraInfo\n\n# QNCameraInfo\n\n最近更新时间: 2022-03-24 00:15:19\n\n摄像头设备信息\n\nIndex\n\nProperties\n\n * id\n * name\n * capabilities\n\nProperties\n\nid\n\n * id: string\n * 设备 Id,用于系统内做标识\n\nname\n\n * name: string\n * 设备名\n\ncapabilities\n\n * capabilities: std::vector<[QNCameraCapability](https://developer.qiniu.com/rtc/9402/qncameradeviceinfo/9401/QNCameraCapability)>\n * 此摄像头所支持的采集能力列表\n\n以上内容是否对您有帮助?\n\n"
}
] | qiniu developer | 1 |
[
{
"from": "human",
"value": "如何处理七牛云的实时音视频QNCameraVideoTrack"
},
{
"from": "gpt",
"value": "[实时音视频](https://developer.qiniu.com/rtc) > API 文档 > Windows\n>QNCameraVideoTrack\n\n# QNCameraVideoTrack\n\n最近更新时间: 2022-03-23 21:54:41\n\n本地视频相机 Track 类\n\nHierarchy\n\n * [QNTrack](https://developer.qiniu.com/rtc/9404/qncameravideotrack/9452/QNTrack)\n * [QNLocalTrack](https://developer.qiniu.com/rtc/9404/qncameravideotrack/9422/QNLocalTrack)\n * [QNLocalVideoTrack](https://developer.qiniu.com/rtc/9404/qncameravideotrack/9424/QNLocalVideoTrack)\n * QNCameraVideoTrack \n * [QNScreenVideoTrack](https://developer.qiniu.com/rtc/9404/qncameravideotrack/9448/QNScreenVideoTrack)\n * [QNCustomVideoTrack](https://developer.qiniu.com/rtc/9404/qncameravideotrack/9412/QNCustomVideoTrack)\n * [QNLocalAudioTrack](https://developer.qiniu.com/rtc/9404/qncameravideotrack/9420/QNLocalAudioTrack)\n * [QNMicrophoneAudioTrack](https://developer.qiniu.com/rtc/9404/qncameravideotrack/9433/QNMicrophoneAudioTrack)\n * [QNCustomAudioTrack](https://developer.qiniu.com/rtc/9404/qncameravideotrack/9409/QNCustomAudioTrack)\n * [QNRemoteTrack](https://developer.qiniu.com/rtc/9404/qncameravideotrack/9440/QNRemoteTrack)\n * [QNRemoteVideoTrack](https://developer.qiniu.com/rtc/9404/qncameravideotrack/9443/QNRemoteVideoTrack)\n * [QNRemoteAudioTrack](https://developer.qiniu.com/rtc/9404/qncameravideotrack/9438/QNRemoteAudioTrack)\n\nIndex\n\nCore Methods\n\n * GetTrackID\n * GetUserID\n * GetTag\n * IsAudio\n * IsVideo\n * IsMuted\n * SetMuted\n * SetVideoFrameListener\n * Play\n * SendSEI\n * StartCrop\n * StopCrop\n * StartScale\n * StopScale\n * StartCapture\n * StopCapture\n * SetMirror\n * PushImage\n\nCore Methods\n\nGetTrackID\n\n * virtual const std::string& GetTrackID() = 0\n * 获取 TrackID\n\nReturns string\n\nGetUserID\n\n * virtual const std::string& GetUserID() = 0\n * 获取 UserID\n\nReturns string\n\nGetTag\n\n * virtual const std::string& GetTag() = 0\n * 获取 Tag\n\nReturns string\n\nIsAudio\n\n * virtual bool IsAudio() = 0\n * 是否为音频 track\n\nReturns bool\n\nIsVideo\n\n * virtual bool IsVideo() = 0\n * 是否为视频 track\n\nReturns bool\n\nIsMuted\n\n * virtual bool IsMuted() = 0\n * 是否 muted\n\nReturns bool\n\nSetMuted\n\n * virtual void SetMuted(bool muted) = 0\n * 设置是否静默该 track,静默后,SDK 将会发送静默帧数据\n\nParameters\n\n * muted: bool \n\n是否静默\n\nSetVideoFrameListener\n\n * virtual void SetVideoFrameListener([QNVideoFrameListener](https://developer.qiniu.com/rtc/9404/qncameravideotrack/11326/QNVideoFrameListener)* listener) = 0\n * 设置视频数据监听\n\nParameters\n\n * listener: [QNVideoFrameListener](https://developer.qiniu.com/rtc/9404/qncameravideotrack/11326/QNVideoFrameListener)\n\n视频数据监听接口\n\nPlay\n\n * virtual void Play(QNView& view) = 0\n * 设置本地渲染参数\n\nParameters\n\n * view: [QNView](https://developer.qiniu.com/rtc/9404/qncameravideotrack/11328/QNView)\n\n渲染参数\n\nSendSEI\n\n * virtual void SendSEI(std::string& message,std::string& uuid,int32_t repeat_count) = 0\n * 发送 SEI\n\nParameters\n\n * message: string \n\n消息内容\n\n * uuid: string \n\n唯一 ID\n\n * repeat_count: int \n\n重复次数, -1 为每一帧都发送\n\nStartCrop\n\n * virtual void StartCrop(uint32_t crop_x,uint32_t crop_y,int32_t crop_width,int32_t crop_height) = 0\n * 开启原始帧裁剪功能,设置参数要求如下,如果设置不正确,则输出原始图像\n\nParameters\n\n * crop_x: uint32_t \n\n开始裁减的 X 坐标点,原点为左上角,必须落在原图之内\n\n * crop_y: uint32_t \n\n开始裁减的 Y 坐标点,原点为左上角,必须落在原图之内\n\n * crop_width: int32_t \n\n目标图像宽度,必须为 4 的整数倍,如果是裁剪模式,crop_x和 crop_y,裁剪图像必须在原始图像之内\n\n * crop_height: int32_t \n\n目标图像高度,必须为 4 的整数倍,如果是裁剪模式,crop_x 和 crop_y,裁剪图像必须在原始图像之内\n\nStopCrop\n\n * virtual void StopCrop() = 0\n * 关闭裁剪功能\n\nStartScale\n\n * 
virtual void StartScale(int32_t dst_width, int32_t dst_height) = 0\n * 开启原始帧缩放功能,设置参数要求如下,如果设置不正确,则输出原始图像\n\nParameters\n\n * dst_width: int32_t \n\n目标图像宽度,必须为 4 的整数倍\n\n * dst_height: int32_t \n\n目标图像高度,必须为 4 的整数倍\n\nStopScale\n\n * virtual void StopScale() = 0\n * 关闭缩放功能\n\nStartCapture\n\n * virtual void StartCapture() = 0\n * 开启摄像头采集\n\nStopCapture\n\n * virtual void StopCapture() = 0\n * 关闭摄像头采集\n\nSetMirror\n\n * virtual void SetMirror(bool mirror) = 0\n * 设置本地采集画面是否镜像,开启镜像后,订阅端看到的是镜像画面\n\nParameters\n\n * mirror: bool \n\n是否镜像\n\nPushImage\n\n * virtual void PushImage(void* object, [QNImageType](https://developer.qiniu.com/rtc/9404/qncameravideotrack/11329/QNImageType) type) = 0\n * 将摄像头采集流替换为图片流,此接口需要在摄像头推流成功后调用\n\nParameters\n\n * object: void* \n\n推送图片的对象,可以是文件对象、文件路径、url,Windows 平台只支持文件路径,设置为 nullptr\n时,取消推送图片,恢复摄像头采集画面\n\n * type: [QNImageType](https://developer.qiniu.com/rtc/9404/qncameravideotrack/11329/QNImageType)\n\n图片格式,Windows 平台仅支持 kImagJpeg\n\n以上内容是否对您有帮助?\n\n"
}
] | qiniu developer | 1 |
[
{
"from": "human",
"value": "如何处理七牛云的实时音视频QNCameraVideoTrackConfig"
},
{
"from": "gpt",
"value": "[实时音视频](https://developer.qiniu.com/rtc) > API 文档 > Windows\n>QNCameraVideoTrackConfig\n\n# QNCameraVideoTrackConfig\n\n最近更新时间: 2022-03-23 21:57:54\n\n摄像头视频采集 Track 的配置\n\nIndex\n\nProperties\n\n * encoder_config\n * capture_config\n * id\n * tag\n * multi_profile_enabled\n * camera_facing\n\nProperties\n\nencoder_config\n\n * encoder_config: [QNVideoEncoderConfig](https://developer.qiniu.com/rtc/9406/qncameravideotrackconfig/11330/QNVideoEncoderConfig)\n * 设置摄像头采集数据编码参数\n\ncapture_config\n\n * capture_config: [QNVideoCaptureConfig](https://developer.qiniu.com/rtc/9406/qncameravideotrackconfig/undefined)\n * 设置摄像头采集参数\n\nid\n\n * id: string\n * 设备 id\n\ntag\n\n * tag: string\n * 自定义标签\n\nmulti_profile_enabled\n\n * multi_profile_enabled: bool\n * 是否开启多流\n\ncamera_facing\n\n * camera_facing: QNCameraFacing\n * 设置前后置摄像头(仅适用于 Android 和 iOS)\n\n以上内容是否对您有帮助?\n\n"
}
] | qiniu developer | 1 |
[
{
"from": "human",
"value": "如何处理七牛云的实时音视频QNClientEventListener"
},
{
"from": "gpt",
"value": "[实时音视频](https://developer.qiniu.com/rtc) > API 文档 > Windows\n>QNClientEventListener\n\n# QNClientEventListener\n\n最近更新时间: 2022-03-24 00:16:25\n\n[QNRTCClient](https://developer.qiniu.com/rtc/9407/qnclienteventlistener/9446/QNRTCClient)\n事件回调接口\n\nIndex\n\nCore Methods\n\n * OnConnectionStateChanged\n * OnUserJoined\n * OnUserLeft\n * OnUserReconnecting\n * OnUserReconnected\n * OnUserPublished\n * OnUserUnpublished\n * OnSubscribed\n * OnMessageReceived\n * OnMediaRelayStateChanged\n\nCore Methods\n\nOnConnectionStateChanged\n\n * virtual void OnConnectionStateChanged(QNConnectionState state, const QNConnectionDisconnectedInfo* info) = 0\n * 房间连接状态变化时通知\n\nParameters\n\n * state: [QNConnectionState](https://developer.qiniu.com/rtc/9407/qnclienteventlistener/9464/QNConnectionState)\n\n连接状态\n\n * info: [QNConnectionDisconnectedInfo](https://developer.qiniu.com/rtc/9407/qnclienteventlistener/9408/QNConnectionDisconnectedInfo)\n\n断开连接时状态信息\n\nOnUserJoined\n\n * virtual void OnUserJoined(const std::string& remote_user_id, const std::string& user_data) = 0\n * 远端用户加入房间时通知\n\nParameters\n\n * remote_user_id: string \n\n远端用户 id\n\n * user_data: string \n\n远端用户自定义消息\n\nOnUserLeft\n\n * virtual void OnUserLeft(const std::string& remote_user_id) = 0\n * 远端用户离开房间时通知\n\nParameters\n\n * remote_user_id: string \n\n远端用户 id\n\nOnUserReconnecting\n\n * virtual void OnUserReconnecting(const std::string& remote_user_id) = 0\n * 远端用户重连时通知\n\nParameters\n\n * remote_user_id: string \n\n远端用户 id\n\nOnUserReconnected\n\n * virtual void OnUserReconnected(const std::string& remote_user_id) = 0\n * 远端用户重连成功时通知\n\nParameters\n\n * remote_user_id: string \n\n远端用户 id\n\nOnUserPublished\n\n * virtual void OnUserPublished(const std::string& remote_user_id, const RemoteTrackList& track_list) = 0\n * 远端用户发布新 track 时通知\n\nParameters\n\n * remote_user_id: string \n\n远端用户 id\n\n * track_list: std::list<shared_ptr<[QNRemoteTrack](https://developer.qiniu.com/rtc/9407/qnclienteventlistener/9440/QNRemoteTrack)>>\n\n远端用户新发布的 tracks\n\nOnUserUnpublished\n\n * virtual void OnUserUnpublished(const std::string& remote_user_id, const RemoteTrackList& track_list) = 0\n * 远端用户取消发布 track 时通知\n\nParameters\n\n * remote_user_id: string \n\n远端用户 id\n\n * track_list: std::list<shared_ptr<[QNRemoteTrack](https://developer.qiniu.com/rtc/9407/qnclienteventlistener/9440/QNRemoteTrack)>>\n\n远端用户取消发布的 tracks\n\nOnSubscribed\n\n * virtual void OnSubscribed(const std::string& remote_user_id,const RemoteAudioTrackList& remote_audio_track_list,const RemoteVideoTrackList& remote_video_track_list) = 0\n * 订阅 track 成功时通知\n\nParameters\n\n * remote_user_id: string \n\n订阅的 track 所属的远端用户\n\n * remote_audio_track_list: std::list<shared_ptr<[QNRemoteAudioTrack](https://developer.qiniu.com/rtc/9407/qnclienteventlistener/9438/QNRemoteAudioTrack)>>\n\n订阅的音频 tracks\n\n * remote_video_track_list: std::list<shared_ptr<[QNRemoteVideoTrack](https://developer.qiniu.com/rtc/9407/qnclienteventlistener/9443/QNRemoteVideoTrack)>>\n\n订阅的视频 tracks\n\nOnMessageReceived\n\n * virtual void OnMessageReceived(const CustomMessageList& message) = 0\n * 接收收到新的自定义消息时通知\n\nParameters\n\n * message: std::list<[QNCustomMessage](https://developer.qiniu.com/rtc/9407/qnclienteventlistener/9411/QNCustomMessage)>\n\n自定义消息列表\n\nOnMediaRelayStateChanged\n\n * virtual void OnMediaRelayStateChanged(const std::string& relay_room, const QNMediaRelayState state) = 0\n * 跨发媒体转发状态变更,非主动调用触发,由目标房间状态变化引起此通知\n\n目前仅当目标房间关闭时,会触发此通知\n\nParameters\n\n * relay_room: string \n\n发生状态变化的房间名\n\n * 
state: [QNMediaRelayState](https://developer.qiniu.com/rtc/9407/qnclienteventlistener/undefined)\n\n新状态,当前仅可能为 QNMediaRelayState::kRoomClosed\n\n以上内容是否对您有帮助?\n\n"
}
] | qiniu developer | 1 |
[
{
"from": "human",
"value": "如何处理七牛云的实时音视频QNClientMode"
},
{
"from": "gpt",
"value": "[实时音视频](https://developer.qiniu.com/rtc) > API 文档 > Windows >QNClientMode\n\n# QNClientMode\n\n最近更新时间: 2022-03-24 00:17:22\n\n使用场景,当前分为通信场景和互动直播场景,在\n[QNRTCClientConfig](https://developer.qiniu.com/rtc/11331/qnclientmode/11344/QNRTCClientConfig)\n中设置,建议根据不同需求选择不同场景,优化使用体验\n\nIndex\n\nProperties\n\n * kRtc\n * kLive\n\nProperties\n\nkRtc\n\n * kRtc: QNClientMode\n * 通信场景,用于常见的一对一通话或群聊,该场景中的用户均可以发布和订阅音视频轨道\n\nkLive\n\n * kLive: QNClientMode\n * 直播场景,有主播和观众两种用户角色,可以通过 [QNRTCClient.SetClientRole](https://developer.qiniu.com/rtc/11331/qnclientmode/9446) 方法设置用户角色为主播或观众。主播可以发布和订阅音视频轨道,而观众只能订阅音视频轨道,无法发布\n\n> 注意:直播场景中的用户角色默认为观众。如需发布音视频,必须将角色修改为主播\n\n以上内容是否对您有帮助?\n\n"
}
] | qiniu developer | 1 |
[
{
"from": "human",
"value": "如何处理七牛云的实时音视频QNClientRole"
},
{
"from": "gpt",
"value": "[实时音视频](https://developer.qiniu.com/rtc) > API 文档 > Windows >QNClientRole\n\n# QNClientRole\n\n最近更新时间: 2022-03-23 22:04:35\n\n角色类型,当前仅适用于互动直播场景,通过\n[QNRTCClient.SetClientRole](https://developer.qiniu.com/rtc/11332/qnclientrole/9446)\n来进行设置\n\nIndex\n\nProperties\n\n * kBroadcaster\n * kAudience\n\nProperties\n\nkBroadcaster\n\n * kBroadcaster: QNClientRole\n * 主播角色,拥有发布和订阅媒体流的权限,仅适用于互动直播场景\n\nkAudience\n\n * kAudience: QNClientRole\n * 观众角色,仅有订阅媒体流的权限,仅适用于互动直播场景\n\n以上内容是否对您有帮助?\n\n"
}
] | qiniu developer | 1 |
[
{
"from": "human",
"value": "如何处理七牛云的实时音视频QNClientRoleListener"
},
{
"from": "gpt",
"value": "[实时音视频](https://developer.qiniu.com/rtc) > API 文档 > Windows\n>QNClientRoleListener\n\n# QNClientRoleListener\n\n最近更新时间: 2022-03-23 22:05:47\n\n角色设置回调接口,可通过调用\n[QNRTCClient.SetClientRole](https://developer.qiniu.com/rtc/11333/qnclientrolelistener/9446)\n传入\n\nIndex\n\nCore Methods\n\n * OnClientRoleResult\n * OnClientRoleError\n\nCore Methods\n\nOnClientRoleResult\n\n * virtual void OnClientRoleResult(QNClientRole new_role) = 0\n * 设置角色成功回调\n\nParameters\n\n * new_role: [QNClientRole](https://developer.qiniu.com/rtc/11333/qnclientrolelistener/11332/QNClientRole)\n\n新角色类型\n\nOnClientRoleError\n\n * virtual void OnClientRoleError(int error_code, const std::string& error_message) = 0\n * 设置角色失败回调\n\ncode | describe | message \n---|---|--- \nErr_Invalid_Client_Mode | 使用场景不符 | 请确认当前场景为互动直播,通信场景不支持角色设置 \n \nParameters\n\n * errorCode: int \n\n错误码\n\n * error_message: string \n\n错误描述\n\n以上内容是否对您有帮助?\n\n"
}
] | qiniu developer | 1 |
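A minimal sketch of a role-change listener is shown below, following the two callback signatures above. The commented-out SetClientRole call only indicates the intended usage; its exact signature is documented on the QNRTCClient page, not here.

```cpp
#include <string>

class MyRoleListener : public QNClientRoleListener {
public:
    void OnClientRoleResult(QNClientRole new_role) override {
        // Role switch succeeded; new_role is kBroadcaster or kAudience.
    }
    void OnClientRoleError(int error_code, const std::string& error_message) override {
        // e.g. Err_Invalid_Client_Mode when the client was created in the communication scene.
    }
};

// Assumed call shape (see the QNRTCClient documentation for the real signature):
// client->SetClientRole(QNClientRole::kBroadcaster, &role_listener);
```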
[
{
"from": "human",
"value": "如何处理七牛云的实时音视频QNConnectionDisconnectedInfo"
},
{
"from": "gpt",
"value": "[实时音视频](https://developer.qiniu.com/rtc) > API 文档 > Windows\n>QNConnectionDisconnectedInfo\n\n# QNConnectionDisconnectedInfo\n\n最近更新时间: 2022-03-23 22:06:36\n\n房间断开时的相关信息\n\nIndex\n\nProperties\n\n * reason\n * error_code\n * error_message\n\nEnum\n\n * Reason\n\nProperties\n\nreason\n\n * reason: Reason\n * 获取用户房间断开的原因\n\ncode | describe | message \n---|---|--- \nkLeave | 主动退出 | 无 \nkKickOut | 被服务端踢出房间 | 无 \nkRoomClosed | 房间被管理员关闭 | 请确认房间管理逻辑 \nkRoomFull | 房间人数已满,当房间已加入人数超过房间的人数限制时触发 | 请确认后台服务的设置 \nkRoomError | 异常断开 | 可通过 error code 查看异常断开的原因 \n\nerror_code\n\n * error_code: int\n * 错误码\n\ncode | describe | message \n---|---|--- \nQNRTC_OK | 主动退出房间 | 无 \nErr_Auth_Failed | 加入房间认证失败 | 服务验证时出错,可能为服务网络异常。建议重新尝试加入房间 \nErr_Token_Error | 房间 Token 错误 | 表示您提供的房间 token 不符合七牛 token 签算规则, 详情请参考[RoomToken 签发服务](https://doc.qnsdk.com/rtn/docs/server_overview#1) \nErr_Token_Expired | 房间 Token 过期 | 表示您的房间 token 过期, 需要重新生成 token 再加入 \nErr_User_Already_Exist | 房间内已经有同名用户 | 后台如果配置为开启“禁止自动踢人”,则同一用户重复加入/未正常退出再加入会触发此错误 \nErr_Media_Not_Support | 媒体协商失败 | 表示该设备不支持指定的音视频格式,无法进行连麦的操作 \nErr_Fatal | 非预期错误 | 表示 RTC 生命周期异常,需要释放资源并重新初始化 \nErr_Reconnect_Failed | 内部重连失效 | 内部重连后出错,一般出现在网络非常不稳定时出现,建议提示用户并尝试重新加入房间。当发生该错误时,之前创建的转推任务将被服务销毁,如有需要,应在下次成功加入房间后按业务需要恢复 \n\nerror_message\n\n * error_message: string\n * 获取错误信息的文字描述\n\nEnum\n\nReason\n\n * Type declaration\n\n * kLeave\n\n * kKickOut\n\n * kRoomClosed\n\n * kRoomFull\n\n * kRoomError\n\n以上内容是否对您有帮助?\n\n"
}
] | qiniu developer | 1 |
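The sketch below shows one way to react to the disconnect info inside QNClientEventListener::OnConnectionStateChanged, using only the fields and Reason values listed above. Whether Reason is a scoped or unscoped nested enum is not stated here, so the qualification may need adjusting to the actual header.

```cpp
#include <iostream>

void HandleDisconnected(const QNConnectionDisconnectedInfo* info) {
    if (info == nullptr) return;
    switch (info->reason) {
        case QNConnectionDisconnectedInfo::Reason::kLeave:
            break;  // local user left on purpose, nothing to do
        case QNConnectionDisconnectedInfo::Reason::kKickOut:
        case QNConnectionDisconnectedInfo::Reason::kRoomClosed:
        case QNConnectionDisconnectedInfo::Reason::kRoomFull:
            // Business-level reasons: surface them to the UI.
            break;
        case QNConnectionDisconnectedInfo::Reason::kRoomError:
            // Inspect error_code/error_message, e.g. Err_Token_Expired means the room
            // token must be re-signed before joining again.
            std::cerr << info->error_code << ": " << info->error_message << std::endl;
            break;
    }
}
```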
[
{
"from": "human",
"value": "如何处理七牛云的实时音视频QNConnectionState"
},
{
"from": "gpt",
"value": "[实时音视频](https://developer.qiniu.com/rtc) > API 文档 > Windows >QNConnectionState\n\n# QNConnectionState\n\n最近更新时间: 2022-03-23 22:07:40\n\n连麦房间连接状态\n\nIndex\n\nProperties\n\n * kDisconnected\n * kConnecting\n * kConnected\n * kReconnecting\n * kReconnected\n\nProperties\n\nkDisconnected\n\n * kDisconnected: QNConnectionState\n * 未链接状态\n\nSDK 出现无法恢复错误时也进入该状态\n\nkConnecting\n\n * kConnecting: QNConnectionState\n * 正在连接状态\n\n当调用\n[QNRTCClient.Join](https://developer.qiniu.com/rtc/9464/qnconnectionstate/9446)\n后, 首先进入该状态,表示开始连接房间服务\n\nkConnected\n\n * kConnected: QNConnectionState\n * 连接成功状态\n\n当调用\n[QNRTCClient.Join](https://developer.qiniu.com/rtc/9464/qnconnectionstate/9446),在完成和房间服务连接后进入该状态\n\nkReconnecting\n\n * kReconnecting: QNConnectionState\n * 正在重连状态\n\n当和房间的已有连接由于网络中断、网络异常等原因断开,SDK 进行周期性重连时进入该状态\n\nkReconnected\n\n * kReconnected: QNConnectionState\n * 重连成功状态\n\n当从 kReconnecting 状态,重新和房间服务建立连接后进入该状态\n\n以上内容是否对您有帮助?\n\n"
}
] | qiniu developer | 1 |
[
{
"from": "human",
"value": "如何处理七牛云的实时音视频QNCustomAudioTrack"
},
{
"from": "gpt",
"value": "[实时音视频](https://developer.qiniu.com/rtc) > API 文档 > Windows\n>QNCustomAudioTrack\n\n# QNCustomAudioTrack\n\n最近更新时间: 2022-03-23 22:09:47\n\n本地音频自定义导入 Track 类\n\nHierarchy\n\n * [QNTrack](https://developer.qiniu.com/rtc/9409/qncustomaudiotrack/9452/QNTrack)\n * [QNLocalTrack](https://developer.qiniu.com/rtc/9409/qncustomaudiotrack/9422/QNLocalTrack)\n * [QNLocalVideoTrack](https://developer.qiniu.com/rtc/9409/qncustomaudiotrack/9424/QNLocalVideoTrack)\n * [QNCameraVideoTrack](https://developer.qiniu.com/rtc/9409/qncustomaudiotrack/9404/QNCameraVideoTrack)\n * [QNScreenVideoTrack](https://developer.qiniu.com/rtc/9409/qncustomaudiotrack/9448/QNScreenVideoTrack)\n * [QNCustomVideoTrack](https://developer.qiniu.com/rtc/9409/qncustomaudiotrack/9412/QNCustomVideoTrack)\n * [QNLocalAudioTrack](https://developer.qiniu.com/rtc/9409/qncustomaudiotrack/9420/QNLocalAudioTrack)\n * [QNMicrophoneAudioTrack](https://developer.qiniu.com/rtc/9409/qncustomaudiotrack/9433/QNMicrophoneAudioTrack)\n * QNCustomAudioTrack \n * [QNRemoteTrack](https://developer.qiniu.com/rtc/9409/qncustomaudiotrack/9440/QNRemoteTrack)\n * [QNRemoteVideoTrack](https://developer.qiniu.com/rtc/9409/qncustomaudiotrack/9443/QNRemoteVideoTrack)\n * [QNRemoteAudioTrack](https://developer.qiniu.com/rtc/9409/qncustomaudiotrack/9438/QNRemoteAudioTrack)\n\nIndex\n\nCore Methods\n\n * GetTrackID\n * GetUserID\n * GetTag\n * IsAudio\n * IsVideo\n * IsMuted\n * SetMuted\n * SetAudioFrameListener\n * SetVolume\n * GetVolumeLevel\n * MixAudioWithSoundCard\n * PushAudioFrame\n\nCore Methods\n\nGetTrackID\n\n * virtual const std::string& GetTrackID() = 0\n * 获取 TrackID\n\nReturns string\n\nGetUserID\n\n * virtual const std::string& GetUserID() = 0\n * 获取 UserID\n\nReturns string\n\nGetTag\n\n * virtual const std::string& GetTag() = 0\n * 获取 Tag\n\nReturns string\n\nIsAudio\n\n * virtual bool IsAudio() = 0\n * 是否为音频 track\n\nReturns bool\n\nIsVideo\n\n * virtual bool IsVideo() = 0\n * 是否为视频 track\n\nReturns bool\n\nIsMuted\n\n * virtual bool IsMuted() = 0\n * 是否 muted\n\nReturns bool\n\nSetMuted\n\n * virtual void SetMuted(bool muted) = 0\n * 设置是否静默该 track,静默后,SDK 将会发送静默帧数据\n\nParameters\n\n * muted: bool \n\n是否静默\n\nSetAudioFrameListener\n\n * virtual void SetAudioFrameListener([QNAudioFrameListener](https://developer.qiniu.com/rtc/9409/qncustomaudiotrack/11322/QNAudioFrameListener)* listener) = 0\n * 设置是音频数据监听\n\nParameters\n\n * listener: [QNAudioFrameListener](https://developer.qiniu.com/rtc/9409/qncustomaudiotrack/11322/QNAudioFrameListener)\n\n音频数据监听接口\n\nSetVolume\n\n * virtual void SetVolume(double volume) = 0\n * 设置采集音频音量(不改变系统设备的音量)\n\nParameters\n\n * volume: double \n\n音量大小,[1.0-10.0], 默认为 1.0\n\nGetVolumeLevel\n\n * virtual float GetVolumeLevel() = 0\n * 获取音频采集的音量\n\nReturns float\n\nMixAudioWithSoundCard\n\n * virtual void MixAudioWithSoundCard(bool enable, float volume_scale_ratio = 1.0f) = 0\n * 激活监听功能:混合本地麦克风和系统声卡的声音(仅适用于 Windows 和 iOS)\n\nParameters\n\n * enable: bool \n\n是否开启监听\n\n * volume_scale_ratio: float \n\n系统声音缩放倍数,以方便控制系统音量大小\n\nPushAudioFrame\n\n * virtual int32_t PushAudioFrame(const uint8_t* data,uint32_t data_size,uint32_t bits_per_sample,uint32_t sample_rate,uint32_t channels) = 0\n * 推送自定义音频帧\n\nParameters\n\n * data: uint8_t* \n\n音频数据\n\n * data_size: uint32_t \n\n数据长度\n\n * bits_per_sample: uint32_t \n\n位宽,即每个采样点占用位数\n\n * sample_rate: uint32_t \n\n采样率\n\n * channels: uint32_t \n\n声道数\n\n以上内容是否对您有帮助?\n\n"
}
] | qiniu developer | 1 |
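The sketch below feeds one chunk of PCM into a custom audio track through PushAudioFrame, using the parameter order listed above. The track pointer is assumed to come from QNRTC::CreateCustomAudioTrack, and the 48 kHz / 16-bit / mono format is just an example.

```cpp
#include <cstdint>
#include <vector>

// Push one chunk of 16-bit, 48 kHz, mono PCM into a custom audio track.
void PushPcmChunk(QNCustomAudioTrack* custom_audio_track, const std::vector<uint8_t>& pcm) {
    const uint32_t bits_per_sample = 16;
    const uint32_t sample_rate = 48000;
    const uint32_t channels = 1;
    // For reference, 20 ms at 48 kHz mono s16 is 48000 * 0.02 * 2 = 1920 bytes.
    custom_audio_track->PushAudioFrame(pcm.data(),
                                       static_cast<uint32_t>(pcm.size()),
                                       bits_per_sample,
                                       sample_rate,
                                       channels);
}
```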
[
{
"from": "human",
"value": "如何处理七牛云的实时音视频QNCustomAudioTrackConfig"
},
{
"from": "gpt",
"value": "[实时音视频](https://developer.qiniu.com/rtc) > API 文档 > Windows\n>QNCustomAudioTrackConfig\n\n# QNCustomAudioTrackConfig\n\n最近更新时间: 2022-03-23 22:11:11\n\n自定义音频采集 Track 的配置\n\nIndex\n\nProperties\n\n * audio_quality\n * tag\n\nProperties\n\naudio_quality\n\n * audio_quality: [QNAudioQuality](https://developer.qiniu.com/rtc/9410/qncustomaudiotrackconfig/11323/QNAudioQuality)\n * 设置音频质量\n\ntag\n\n * tag: string\n * 自定义标签\n\n以上内容是否对您有帮助?\n\n"
}
] | qiniu developer | 1 |
[
{
"from": "human",
"value": "如何处理七牛云的实时音视频QNCustomMessage"
},
{
"from": "gpt",
"value": "[实时音视频](https://developer.qiniu.com/rtc) > API 文档 > Windows >QNCustomMessage\n\n# QNCustomMessage\n\n最近更新时间: 2022-03-23 22:12:12\n\n自定义消息\n\nIndex\n\nProperties\n\n * msg_id\n * msg_sendid\n * msg_text\n * msg_stamp\n\nProperties\n\nmsg_id\n\n * msg_id: string\n * 消息唯一 id\n\nmsg_sendid\n\n * msg_sendid: string\n * 消息发送者的 user id\n\nmsg_text\n\n * msg_text: string\n * 消息内容\n\nmsg_stamp\n\n * msg_stamp: int32_t\n * 消息时间戳\n\n以上内容是否对您有帮助?\n\n"
}
] | qiniu developer | 1 |
[
{
"from": "human",
"value": "如何处理七牛云的实时音视频QNCustomVideoTrack"
},
{
"from": "gpt",
"value": "[实时音视频](https://developer.qiniu.com/rtc) > API 文档 > Windows\n>QNCustomVideoTrack\n\n# QNCustomVideoTrack\n\n最近更新时间: 2022-03-23 22:16:19\n\n本地视频自定义导入 Track 类\n\nHierarchy\n\n * [QNTrack](https://developer.qiniu.com/rtc/9412/qncustomvideotrack/9452/QNTrack)\n * [QNLocalTrack](https://developer.qiniu.com/rtc/9412/qncustomvideotrack/9422/QNLocalTrack)\n * [QNLocalVideoTrack](https://developer.qiniu.com/rtc/9412/qncustomvideotrack/9424/QNLocalVideoTrack)\n * [QNCameraVideoTrack](https://developer.qiniu.com/rtc/9412/qncustomvideotrack/9404/QNCameraVideoTrack)\n * [QNScreenVideoTrack](https://developer.qiniu.com/rtc/9412/qncustomvideotrack/9448/QNScreenVideoTrack)\n * QNCustomVideoTrack \n * [QNLocalAudioTrack](https://developer.qiniu.com/rtc/9412/qncustomvideotrack/9420/QNLocalAudioTrack)\n * [QNMicrophoneAudioTrack](https://developer.qiniu.com/rtc/9412/qncustomvideotrack/9433/QNMicrophoneAudioTrack)\n * [QNCustomAudioTrack](https://developer.qiniu.com/rtc/9412/qncustomvideotrack/9409/QNCustomAudioTrack)\n * [QNRemoteTrack](https://developer.qiniu.com/rtc/9412/qncustomvideotrack/9440/QNRemoteTrack)\n * [QNRemoteVideoTrack](https://developer.qiniu.com/rtc/9412/qncustomvideotrack/9443/QNRemoteVideoTrack)\n * [QNRemoteAudioTrack](https://developer.qiniu.com/rtc/9412/qncustomvideotrack/9438/QNRemoteAudioTrack)\n\nIndex\n\nCore Methods\n\n * GetTrackID\n * GetUserID\n * GetTag\n * IsAudio\n * IsVideo\n * IsMuted\n * SetMuted\n * SetVideoFrameListener\n * Play\n * SendSEI\n * StartCrop\n * StopCrop\n * StartScale\n * StopScale\n * StartCapture\n * StopCapture\n * PushVideoFrame\n\nCore Methods\n\nGetTrackID\n\n * virtual const std::string& GetTrackID() = 0\n * 获取 TrackID\n\nReturns string\n\nGetUserID\n\n * virtual const std::string& GetUserID() = 0\n * 获取 UserID\n\nReturns string\n\nGetTag\n\n * virtual const std::string& GetTag() = 0\n * 获取 Tag\n\nReturns string\n\nIsAudio\n\n * virtual bool IsAudio() = 0\n * 是否为音频 track\n\nReturns bool\n\nIsVideo\n\n * virtual bool IsVideo() = 0\n * 是否为视频 track\n\nReturns bool\n\nIsMuted\n\n * virtual bool IsMuted() = 0\n * 是否 muted\n\nReturns bool\n\nSetMuted\n\n * virtual void SetMuted(bool muted) = 0\n * 设置是否静默该 track,静默后,SDK 将会发送静默帧数据\n\nParameters\n\n * muted: bool \n\n是否静默\n\nSetVideoFrameListener\n\n * virtual void SetVideoFrameListener([QNVideoFrameListener](https://developer.qiniu.com/rtc/9412/qncustomvideotrack/11326/QNVideoFrameListener)* listener) = 0\n * 设置视频数据监听\n\nParameters\n\n * listener: [QNVideoFrameListener](https://developer.qiniu.com/rtc/9412/qncustomvideotrack/11326/QNVideoFrameListener)\n\n视频数据监听接口\n\nPlay\n\n * virtual void Play(QNView& view) = 0\n * 设置本地渲染参数\n\nParameters\n\n * view: [QNView](https://developer.qiniu.com/rtc/9412/qncustomvideotrack/11328/QNView)\n\n渲染参数\n\nSendSEI\n\n * virtual void SendSEI(std::string& message,std::string& uuid,int32_t repeat_count) = 0\n * 发送 SEI\n\nParameters\n\n * message: string \n\n消息内容\n\n * uuid: string \n\n唯一 ID\n\n * repeat_count: int \n\n重复次数, -1 为每一帧都发送\n\nStartCrop\n\n * virtual void StartCrop(uint32_t crop_x,uint32_t crop_y,int32_t crop_width,int32_t crop_height) = 0\n * 开启原始帧裁剪功能,设置参数要求如下,如果设置不正确,则输出原始图像\n\nParameters\n\n * crop_x: uint32_t \n\n开始裁减的 X 坐标点,原点为左上角,必须落在原图之内\n\n * crop_y: uint32_t \n\n开始裁减的 Y 坐标点,原点为左上角,必须落在原图之内\n\n * crop_width: int32_t \n\n目标图像宽度,必须为 4 的整数倍,如果是裁剪模式,crop_x和 crop_y,裁剪图像必须在原始图像之内\n\n * crop_height: int32_t \n\n目标图像高度,必须为 4 的整数倍,如果是裁剪模式,crop_x 和 crop_y,裁剪图像必须在原始图像之内\n\nStopCrop\n\n * virtual void StopCrop() = 0\n * 关闭裁剪功能\n\nStartScale\n\n * virtual void 
StartScale(int32_t dst_width, int32_t dst_height) = 0\n * 开启原始帧缩放功能,设置参数要求如下,如果设置不正确,则输出原始图像\n\nParameters\n\n * dst_width: int32_t \n\n目标图像宽度,必须为 4 的整数倍\n\n * dst_height: int32_t \n\n目标图像高度,必须为 4 的整数倍\n\nStopScale\n\n * virtual void StopScale() = 0\n * 关闭缩放功能\n\nStartCapture\n\n * virtual void StartCapture() = 0\n * 开启摄像头采集\n\nStopCapture\n\n * virtual void StopCapture() = 0\n * 关闭摄像头采集\n\nPushVideoFrame\n\n * virtual int PushVideoFrame(const uint8_t* data,const uint32_t data_size,const int32_t width,const int32_t height,const int64_t timestamp_us,QNVideoFrameType video_frame_type,QNVideoRotation rotation,bool mirror = false) = 0\n * 推送自定义视频帧\n\nParameters\n\n * data: uint8_t* \n\n视频数据\n\n * data_size: uint32_t \n\n数据长度\n\n * width: int32_t \n\n视频宽\n\n * height: int32_t \n\n视频高\n\n * timestamp_us: int64_t \n\n时间戳,单位:微秒\n\n * video_frame_type: [QNVideoFrameType](https://developer.qiniu.com/rtc/9412/qncustomvideotrack/11324/QNVideoFrameType)\n\n视频原始格式,支持:kI420 kYUY2 kRGB24\n\n * rotation: [QNVideoRotation](https://developer.qiniu.com/rtc/9412/qncustomvideotrack/9462/QNVideoRotation)\n\n导入后旋转角度,如果不需要旋转则使用默认值 kVideoRotation0 即可\n\n * mirror: bool \n\n是否镜像\n\n以上内容是否对您有帮助?\n\n"
}
] | qiniu developer | 1 |
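The sketch below pushes a single I420 frame through PushVideoFrame with the parameter order listed above; the track pointer is assumed to come from QNRTC::CreateCustomVideoTrack, and the enum qualifications follow the value names mentioned on this page (kI420, kVideoRotation0).

```cpp
#include <chrono>
#include <cstdint>
#include <vector>

// Push one I420 frame (size = width * height * 3 / 2 bytes) into a custom video track.
void PushI420Frame(QNCustomVideoTrack* custom_video_track,
                   const std::vector<uint8_t>& i420, int32_t width, int32_t height) {
    const int64_t timestamp_us =
        std::chrono::duration_cast<std::chrono::microseconds>(
            std::chrono::steady_clock::now().time_since_epoch()).count();
    custom_video_track->PushVideoFrame(i420.data(),
                                       static_cast<uint32_t>(i420.size()),
                                       width, height,
                                       timestamp_us,
                                       QNVideoFrameType::kI420,
                                       QNVideoRotation::kVideoRotation0,
                                       /*mirror=*/false);
}
```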
[
{
"from": "human",
"value": "如何处理七牛云的实时音视频QNCustomVideoTrackConfig"
},
{
"from": "gpt",
"value": "[实时音视频](https://developer.qiniu.com/rtc) > API 文档 > Windows\n>QNCustomVideoTrackConfig\n\n# QNCustomVideoTrackConfig\n\n最近更新时间: 2022-03-23 22:17:09\n\n自定义视频采集 Track 的配置\n\nIndex\n\nProperties\n\n * encoder_config\n * tag\n * multi_profile_enabled\n\nProperties\n\nencoder_config\n\n * encoder_config: [QNVideoEncoderConfig](https://developer.qiniu.com/rtc/9413/qncustomvideotrackconfig/11330/QNVideoEncoderConfig)\n * 设置摄像头采集数据编码参数\n\ntag\n\n * tag: string\n * 自定义标签\n\nmulti_profile_enabled\n\n * multi_profile_enabled: bool\n * 是否开启多流\n\n以上内容是否对您有帮助?\n\n"
}
] | qiniu developer | 1 |
[
{
"from": "human",
"value": "如何处理七牛云的实时音视频QNDirectLiveStreamingConfig"
},
{
"from": "gpt",
"value": "[实时音视频](https://developer.qiniu.com/rtc) > API 文档 > Windows\n>QNDirectLiveStreamingConfig\n\n# QNDirectLiveStreamingConfig\n\n最近更新时间: 2022-03-23 22:19:35\n\n单路转推配置信息,通过 SDK 将参数发送到服务端,服务端按照指定的参数进行 CDN 转推\n\nIndex\n\nProperties\n\n * local_audio_track\n * local_video_track\n * stream_id\n * publish_url\n\nProperties\n\nlocal_audio_track\n\n * local_audio_track: [QNLocalAudioTrack](https://developer.qiniu.com/rtc/9414/qndirectlivestreamingconfig/9420/QNLocalAudioTrack)*\n * 单路转推的音频 track\n\nlocal_video_track\n\n * local_video_track: [QNLocalVideoTrack](https://developer.qiniu.com/rtc/9414/qndirectlivestreamingconfig/9424/QNLocalVideoTrack)*\n * 单路转推的视频 track\n\nstream_id\n\n * stream_id: string\n * 单路转推任务 id,由客户端设置,不可为空\n\npublish_url\n\n * publish_url: string\n * rtmp 转推地址\n\n以上内容是否对您有帮助?\n\n"
}
] | qiniu developer | 1 |
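A sketch of filling this config is shown below, using only the four documented fields. The stream_id and RTMP URL are placeholders, and the QNRTCClient method that actually starts forwarding with this config is documented on the QNRTCClient page, so it is not shown here.

```cpp
#include <string>

QNDirectLiveStreamingConfig MakeDirectConfig(QNLocalAudioTrack* audio_track,
                                             QNLocalVideoTrack* video_track) {
    QNDirectLiveStreamingConfig direct_config;
    direct_config.local_audio_track = audio_track;      // the published audio track
    direct_config.local_video_track = video_track;      // the published video track
    direct_config.stream_id = "direct-streaming-demo";  // client-chosen, must not be empty
    direct_config.publish_url = "rtmp://example.com/live/stream";  // placeholder RTMP URL
    return direct_config;
}
```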
[
{
"from": "human",
"value": "如何处理七牛云的实时音视频QNImageType"
},
{
"from": "gpt",
"value": "[实时音视频](https://developer.qiniu.com/rtc) > API 文档 > Windows >QNImageType\n\n# QNImageType\n\n最近更新时间: 2022-03-23 21:51:01\n\n图片类型\n\nIndex\n\nEnum\n\n * QNImageType\n\nEnum\n\nQNImageType\n\n * Type declaration\n\n * kImagJpeg\n\n * kImagPng\n\n以上内容是否对您有帮助?\n\n"
}
] | qiniu developer | 1 |
[
{
"from": "human",
"value": "如何处理七牛云的实时音视频QNLiveStreamingErrorInfo"
},
{
"from": "gpt",
"value": "[实时音视频](https://developer.qiniu.com/rtc) > API 文档 > Windows\n>QNLiveStreamingErrorInfo\n\n# QNLiveStreamingErrorInfo\n\n最近更新时间: 2022-03-23 22:22:30\n\nCDN 转推结果信息\n\nIndex\n\nProperties\n\n * type\n * code\n * message\n\nEnum\n\n * Type\n\nProperties\n\ntype\n\n * type: Type\n * 转推类型\n\ncode | describe | message \n---|---|--- \nkStart | 开始转推 | 无 \nkStop | 停止转推 | 无 \nkUpdate | 更新转推 track | 无 \n\ncode\n\n * code: int32_t\n * 错误码\n\nmessage\n\n * message: string\n * 错误信息\n\nEnum\n\nType\n\n * Type declaration\n\n * kStart\n\n * kStop\n\n * kUpdate\n\n以上内容是否对您有帮助?\n\n"
}
] | qiniu developer | 1 |
[
{
"from": "human",
"value": "如何处理七牛云的实时音视频QNLiveStreamingListener"
},
{
"from": "gpt",
"value": "[实时音视频](https://developer.qiniu.com/rtc) > API 文档 > Windows\n>QNLiveStreamingListener\n\n# QNLiveStreamingListener\n\n最近更新时间: 2022-03-23 22:23:30\n\nCDN 转推结果通知接口\n\nIndex\n\nCore Methods\n\n * OnStarted\n * OnStopped\n * OnTranscodingTracksUpdated\n * OnLiveStreamingError\n\nCore Methods\n\nOnStarted\n\n * virtual void OnStarted(const std::string& stream_id) = 0\n * 转推任务成功创建时触发此回调\n\nParameters\n\n * stream_id: string \n\n开启转推的 stream id\n\nOnStopped\n\n * virtual void OnStopped(const std::string& stream_id) = 0\n * 转推任务成功停止时触发此回调\n\nParameters\n\n * stream_id: string \n\n停止转推的 stream id\n\nOnTranscodingTracksUpdated\n\n * virtual void OnTranscodingTracksUpdated(const std::string& stream_id) = 0\n * 转推任务配置更新时触发此回调\n\nParameters\n\n * stream_id: string \n\n配置更新的 stream id\n\nOnLiveStreamingError\n\n * virtual void OnLiveStreamingError(const std::string& stream_id, const [QNLiveStreamingErrorInfo](https://developer.qiniu.com/rtc/9418/qnlivestreaminglistener/9417/QNLiveStreamingErrorInfo)& error_info) = 0\n * 转推任务出错时触发此回调\n\nParameters\n\n * stream_id: string \n\n出现错误的 stream id\n\n * error_info: [QNLiveStreamingErrorInfo](https://developer.qiniu.com/rtc/9418/qnlivestreaminglistener/9417/QNLiveStreamingErrorInfo)\n\n详细错误原因\n\n以上内容是否对您有帮助?\n\n"
}
] | qiniu developer | 1 |
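A minimal listener implementation following the four callbacks above might look like the sketch below; how it is registered with the client is documented on the QNRTCClient page.

```cpp
#include <string>

class MyLiveStreamingListener : public QNLiveStreamingListener {
public:
    void OnStarted(const std::string& stream_id) override {
        // CDN forwarding task created successfully.
    }
    void OnStopped(const std::string& stream_id) override {}
    void OnTranscodingTracksUpdated(const std::string& stream_id) override {}
    void OnLiveStreamingError(const std::string& stream_id,
                              const QNLiveStreamingErrorInfo& error_info) override {
        // error_info.type says whether start/stop/update failed;
        // error_info.code and error_info.message carry the details.
    }
};
```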
[
{
"from": "human",
"value": "如何处理七牛云的实时音视频QNLocalAudioTrack"
},
{
"from": "gpt",
"value": "[实时音视频](https://developer.qiniu.com/rtc) > API 文档 > Windows >QNLocalAudioTrack\n\n# QNLocalAudioTrack\n\n最近更新时间: 2022-03-23 22:25:43\n\n本地音频 Track 基类\n\nHierarchy\n\n * [QNTrack](https://developer.qiniu.com/rtc/9420/qnlocalaudiotrack/9452/QNTrack)\n * [QNLocalTrack](https://developer.qiniu.com/rtc/9420/qnlocalaudiotrack/9422/QNLocalTrack)\n * [QNLocalVideoTrack](https://developer.qiniu.com/rtc/9420/qnlocalaudiotrack/9424/QNLocalVideoTrack)\n * [QNCameraVideoTrack](https://developer.qiniu.com/rtc/9420/qnlocalaudiotrack/9404/QNCameraVideoTrack)\n * [QNScreenVideoTrack](https://developer.qiniu.com/rtc/9420/qnlocalaudiotrack/9448/QNScreenVideoTrack)\n * [QNCustomVideoTrack](https://developer.qiniu.com/rtc/9420/qnlocalaudiotrack/9412/QNCustomVideoTrack)\n * QNLocalAudioTrack \n * [QNMicrophoneAudioTrack](https://developer.qiniu.com/rtc/9420/qnlocalaudiotrack/9433/QNMicrophoneAudioTrack)\n * [QNCustomAudioTrack](https://developer.qiniu.com/rtc/9420/qnlocalaudiotrack/9409/QNCustomAudioTrack)\n * [QNRemoteTrack](https://developer.qiniu.com/rtc/9420/qnlocalaudiotrack/9440/QNRemoteTrack)\n * [QNRemoteVideoTrack](https://developer.qiniu.com/rtc/9420/qnlocalaudiotrack/9443/QNRemoteVideoTrack)\n * [QNRemoteAudioTrack](https://developer.qiniu.com/rtc/9420/qnlocalaudiotrack/9438/QNRemoteAudioTrack)\n\nIndex\n\nCore Methods\n\n * GetTrackID\n * GetUserID\n * GetTag\n * IsAudio\n * IsVideo\n * IsMuted\n * SetMuted\n * SetAudioFrameListener\n * SetVolume\n * GetVolumeLevel\n * MixAudioWithSoundCard\n\nCore Methods\n\nGetTrackID\n\n * virtual const std::string& GetTrackID() = 0\n * 获取 TrackID\n\nReturns string\n\nGetUserID\n\n * virtual const std::string& GetUserID() = 0\n * 获取 UserID\n\nReturns string\n\nGetTag\n\n * virtual const std::string& GetTag() = 0\n * 获取 Tag\n\nReturns string\n\nIsAudio\n\n * virtual bool IsAudio() = 0\n * 是否为音频 track\n\nReturns bool\n\nIsVideo\n\n * virtual bool IsVideo() = 0\n * 是否为视频 track\n\nReturns bool\n\nIsMuted\n\n * virtual bool IsMuted() = 0\n * 是否 muted\n\nReturns bool\n\nSetMuted\n\n * virtual void SetMuted(bool muted) = 0\n * 设置是否静默该 track,静默后,SDK 将会发送静默帧数据\n\nParameters\n\n * muted: bool \n\n是否静默\n\nSetAudioFrameListener\n\n * virtual void SetAudioFrameListener([QNAudioFrameListener](https://developer.qiniu.com/rtc/9420/qnlocalaudiotrack/11322/QNAudioFrameListener)* listener) = 0\n * 设置是音频数据监听\n\nParameters\n\n * listener: [QNAudioFrameListener](https://developer.qiniu.com/rtc/9420/qnlocalaudiotrack/11322/QNAudioFrameListener)\n\n音频数据监听接口\n\nSetVolume\n\n * virtual void SetVolume(double volume) = 0\n * 设置采集音频音量(不改变系统设备的音量)\n\nParameters\n\n * volume: double \n\n音量大小,[1.0-10.0], 默认为 1.0\n\nGetVolumeLevel\n\n * virtual float GetVolumeLevel() = 0\n * 获取音频采集的音量\n\nReturns float\n\nMixAudioWithSoundCard\n\n * virtual void MixAudioWithSoundCard(bool enable, float volume_scale_ratio = 1.0f) = 0\n * 激活监听功能:混合本地麦克风和系统声卡的声音(仅适用于 Windows 和 iOS)\n\nParameters\n\n * enable: bool \n\n是否开启监听\n\n * volume_scale_ratio: float \n\n系统声音缩放倍数,以方便控制系统音量大小\n\n以上内容是否对您有帮助?\n\n"
}
] | qiniu developer | 1 |
[
{
"from": "human",
"value": "如何处理七牛云的实时音视频QNLocalAudioTrackStats"
},
{
"from": "gpt",
"value": "[实时音视频](https://developer.qiniu.com/rtc) > API 文档 > Windows\n>QNLocalAudioTrackStats\n\n# QNLocalAudioTrackStats\n\n最近更新时间: 2022-03-23 22:26:34\n\n本地发布的音频统计信息\n\nIndex\n\nProperties\n\n * uplink_bitrate\n * uplink_rtt\n * uplink_lostrate\n\nProperties\n\nuplink_bitrate\n\n * uplink_bitrate: int32_t\n * 上行音频码率\n\nuplink_rtt\n\n * uplink_rtt: int32_t\n * 上行网络 rtt\n\nuplink_lostrate\n\n * uplink_lostrate: int32_t\n * 上行网络丢包率\n\n以上内容是否对您有帮助?\n\n"
}
] | qiniu developer | 1 |
[
{
"from": "human",
"value": "如何处理七牛云的实时音视频QNLocalTrack"
},
{
"from": "gpt",
"value": "[实时音视频](https://developer.qiniu.com/rtc) > API 文档 > Windows >QNLocalTrack\n\n# QNLocalTrack\n\n最近更新时间: 2022-03-23 22:31:43\n\n音视频本地 Track 基类\n\nHierarchy\n\n * [QNTrack](https://developer.qiniu.com/rtc/9422/qnlocaltrack/9452/QNTrack)\n * QNLocalTrack \n * [QNLocalVideoTrack](https://developer.qiniu.com/rtc/9422/qnlocaltrack/9424/QNLocalVideoTrack)\n * [QNCameraVideoTrack](https://developer.qiniu.com/rtc/9422/qnlocaltrack/9404/QNCameraVideoTrack)\n * [QNScreenVideoTrack](https://developer.qiniu.com/rtc/9422/qnlocaltrack/9448/QNScreenVideoTrack)\n * [QNCustomVideoTrack](https://developer.qiniu.com/rtc/9422/qnlocaltrack/9412/QNCustomVideoTrack)\n * [QNLocalAudioTrack](https://developer.qiniu.com/rtc/9422/qnlocaltrack/9420/QNLocalAudioTrack)\n * [QNMicrophoneAudioTrack](https://developer.qiniu.com/rtc/9422/qnlocaltrack/9433/QNMicrophoneAudioTrack)\n * [QNCustomAudioTrack](https://developer.qiniu.com/rtc/9422/qnlocaltrack/9409/QNCustomAudioTrack)\n * [QNRemoteTrack](https://developer.qiniu.com/rtc/9422/qnlocaltrack/9440/QNRemoteTrack)\n * [QNRemoteVideoTrack](https://developer.qiniu.com/rtc/9422/qnlocaltrack/9443/QNRemoteVideoTrack)\n * [QNRemoteAudioTrack](https://developer.qiniu.com/rtc/9422/qnlocaltrack/9438/QNRemoteAudioTrack)\n\nIndex\n\nCore Methods\n\n * GetTrackID\n * GetUserID\n * GetTag\n * IsAudio\n * IsVideo\n * IsMuted\n * SetMuted\n\nCore Methods\n\nGetTrackID\n\n * virtual const std::string& GetTrackID() = 0\n * 获取 TrackID\n\nReturns string\n\nGetUserID\n\n * virtual const std::string& GetUserID() = 0\n * 获取 UserID\n\nReturns string\n\nGetTag\n\n * virtual const std::string& GetTag() = 0\n * 获取 Tag\n\nReturns string\n\nIsAudio\n\n * virtual bool IsAudio() = 0\n * 是否为音频 track\n\nReturns bool\n\nIsVideo\n\n * virtual bool IsVideo() = 0\n * 是否为视频 track\n\nReturns bool\n\nIsMuted\n\n * virtual bool IsMuted() = 0\n * 是否 muted\n\nReturns bool\n\nSetMuted\n\n * virtual void SetMuted(bool muted) = 0\n * 设置是否静默该 track,静默后,SDK 将会发送静默帧数据\n\nParameters\n\n * muted: bool \n\n是否静默\n\n以上内容是否对您有帮助?\n\n"
}
] | qiniu developer | 1 |
[
{
"from": "human",
"value": "如何处理七牛云的实时音视频QNNetworkGrade"
},
{
"from": "gpt",
"value": "[实时音视频](https://developer.qiniu.com/rtc) > API 文档 > Windows >QNNetworkGrade\n\n# QNNetworkGrade\n\n最近更新时间: 2022-03-23 22:42:59\n\n网络状态\n\nIndex\n\nProperties\n\n * kINVALID\n * kEXCELLENT\n * kGOOD\n * kFAIR\n * kPOOR\n\nProperties\n\nkINVALID\n\n * kINVALID: QNNetworkGrade\n * 当前网络未知\n\nkEXCELLENT\n\n * kEXCELLENT: QNNetworkGrade\n * 当前网络优秀\n\nkGOOD\n\n * kGOOD: QNNetworkGrade\n * 当前网络良好\n\nkFAIR\n\n * kFAIR: QNNetworkGrade\n * 当前网络一般\n\nkPOOR\n\n * kPOOR: QNNetworkGrade\n * 当前网络较差\n\n以上内容是否对您有帮助?\n\n"
}
] | qiniu developer | 1 |
[
{
"from": "human",
"value": "如何处理七牛云的实时音视频QNNetworkQuality"
},
{
"from": "gpt",
"value": "[实时音视频](https://developer.qiniu.com/rtc) > API 文档 > Windows >QNNetworkQuality\n\n# QNNetworkQuality\n\n最近更新时间: 2022-03-23 22:44:14\n\n网络上下行质量\n\nIndex\n\nProperties\n\n * uplink_network_grade\n * downlink_network_grade\n\nProperties\n\nuplink_network_grade\n\n * uplink_network_grade: [QNNetworkGrade](https://developer.qiniu.com/rtc/11339/qnnetworkquality/11338/QNNetworkGrade)\n * 当前网络上行质量\n\ndownlink_network_grade\n\n * downlink_network_grade: [QNNetworkGrade](https://developer.qiniu.com/rtc/11339/qnnetworkquality/11338/QNNetworkGrade)\n * 当前网络下行质量\n\n以上内容是否对您有帮助?\n\n"
}
] | qiniu developer | 1 |
[
{
"from": "human",
"value": "如何处理七牛云的实时音视频QNNetworkQualityListener"
},
{
"from": "gpt",
"value": "[实时音视频](https://developer.qiniu.com/rtc) > API 文档 > Windows\n>QNNetworkQualityListener\n\n# QNNetworkQualityListener\n\n最近更新时间: 2022-03-23 22:45:02\n\n网络状态监听\n\nIndex\n\nCore Methods\n\n * OnNetworkQualityNotified\n\nCore Methods\n\nOnNetworkQualityNotified\n\n * virtual void OnNetworkQualityNotified(const [QNNetworkQuality](https://developer.qiniu.com/rtc/11340/qnnetworkqualitylistener/11339/QNNetworkQuality)& quality) = 0\n * 近端网络状态回调\n\nParameters\n\n * quality: [QNNetworkQuality](https://developer.qiniu.com/rtc/11340/qnnetworkqualitylistener/11339/QNNetworkQuality)\n\n网络质量\n\n以上内容是否对您有帮助?\n\n"
}
] | qiniu developer | 1 |
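A sketch of reacting to the near-end network grade is shown below; the enum qualification follows the QNNetworkGrade value names listed above and may need adjusting to the actual header.

```cpp
class MyNetworkQualityListener : public QNNetworkQualityListener {
public:
    void OnNetworkQualityNotified(const QNNetworkQuality& quality) override {
        if (quality.uplink_network_grade == QNNetworkGrade::kPOOR ||
            quality.downlink_network_grade == QNNetworkGrade::kPOOR) {
            // e.g. warn the user or switch remote video tracks to a lower profile.
        }
    }
};
```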
[
{
"from": "human",
"value": "如何处理七牛云的实时音视频QNPublishResultCallback"
},
{
"from": "gpt",
"value": "[实时音视频](https://developer.qiniu.com/rtc) > API 文档 > Windows\n>QNPublishResultCallback\n\n# QNPublishResultCallback\n\n最近更新时间: 2022-03-23 22:45:58\n\n发布结果回调接口\n\nIndex\n\nCore Methods\n\n * OnPublished\n * OnPublishError\n\nCore Methods\n\nOnPublished\n\n * virtual void OnPublished() = 0\n * 发布成功后触发此回调\n\nOnPublishError\n\n * virtual void OnPublishError(int error_code, const std::string& error_message) = 0\n * 发布成功后触发此回调\n\nParameters\n\n * error_code: int \n\n错误码\n\n * error_message: string \n\n错误消息\n\n以上内容是否对您有帮助?\n\n"
}
] | qiniu developer | 1 |
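A minimal callback implementation is sketched below. The commented-out Publish call only indicates where the callback is used; the actual publish signature is documented on the QNRTCClient page.

```cpp
#include <string>

class MyPublishResultCallback : public QNPublishResultCallback {
public:
    void OnPublished() override {
        // All tracks in the publish request were accepted.
    }
    void OnPublishError(int error_code, const std::string& error_message) override {
        // Publish failed: log error_code/error_message and decide whether to retry.
    }
};

// Assumed call shape (see the QNRTCClient documentation for the real signature):
// client->Publish(local_track_list, &publish_callback);
```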
[
{
"from": "human",
"value": "如何处理七牛云的实时音视频QNRemoteAudioMixedFrameListener"
},
{
"from": "gpt",
"value": "[实时音视频](https://developer.qiniu.com/rtc) > API 文档 > Windows\n>QNRemoteAudioMixedFrameListener\n\n# QNRemoteAudioMixedFrameListener\n\n最近更新时间: 2022-03-23 22:47:05\n\n远端各用户音频数据混音后的回调接口\n\nIndex\n\nCore Methods\n\n * OnRemoteAudioMixedFrame\n\nCore Methods\n\nOnRemoteAudioMixedFrame\n\n * virtual void OnRemoteAudioMixedFrame(const uint8_t* data, uint32_t data_size, uint32_t bits_per_sample, uint32_t sample_rate, uint32_t channels) = 0\n * 远端各用户音频数据混音后的回调\n\nParameters\n\n * data: const uint8_t* \n\n音频数据内存指针\n\n * bits_per_sample: uint32_t \n\n位宽,即每个采样点占用位数\n\n * sample_rate: uint32_t \n\n采样率\n\n * channels: uint32_t \n\n声道数\n\n * data_size: uint32_t \n\n音频数据大小\n\n以上内容是否对您有帮助?\n\n"
}
] | qiniu developer | 1 |
[
{
"from": "human",
"value": "如何处理七牛云的实时音视频QNRemoteAudioTrack"
},
{
"from": "gpt",
"value": "[实时音视频](https://developer.qiniu.com/rtc) > API 文档 > Windows\n>QNRemoteAudioTrack\n\n# QNRemoteAudioTrack\n\n最近更新时间: 2022-03-23 22:49:57\n\n音频远端 Track 类\n\nHierarchy\n\n * [QNTrack](https://developer.qiniu.com/rtc/9438/qnremoteaudiotrack/9452/QNTrack)\n * [QNLocalTrack](https://developer.qiniu.com/rtc/9438/qnremoteaudiotrack/9422/QNLocalTrack)\n * [QNLocalVideoTrack](https://developer.qiniu.com/rtc/9438/qnremoteaudiotrack/9424/QNLocalVideoTrack)\n * [QNCameraVideoTrack](https://developer.qiniu.com/rtc/9438/qnremoteaudiotrack/9404/QNCameraVideoTrack)\n * [QNScreenVideoTrack](https://developer.qiniu.com/rtc/9438/qnremoteaudiotrack/9448/QNScreenVideoTrack)\n * [QNCustomVideoTrack](https://developer.qiniu.com/rtc/9438/qnremoteaudiotrack/9412/QNCustomVideoTrack)\n * [QNLocalAudioTrack](https://developer.qiniu.com/rtc/9438/qnremoteaudiotrack/9420/QNLocalAudioTrack)\n * [QNMicrophoneAudioTrack](https://developer.qiniu.com/rtc/9438/qnremoteaudiotrack/9433/QNMicrophoneAudioTrack)\n * [QNCustomAudioTrack](https://developer.qiniu.com/rtc/9438/qnremoteaudiotrack/9409/QNCustomAudioTrack)\n * [QNRemoteTrack](https://developer.qiniu.com/rtc/9438/qnremoteaudiotrack/9440/QNRemoteTrack)\n * [QNRemoteVideoTrack](https://developer.qiniu.com/rtc/9438/qnremoteaudiotrack/9443/QNRemoteVideoTrack)\n * QNRemoteAudioTrack \n\nIndex\n\nCore Methods\n\n * GetTrackID\n * GetUserID\n * GetTag\n * IsAudio\n * IsVideo\n * IsMuted\n * SetTrackInfoChangedListener\n * IsSubscribed\n * SetAudioFrameListener\n * SetVolume\n * GetVolumeLevel\n\nCore Methods\n\nGetTrackID\n\n * virtual const std::string& GetTrackID() = 0\n * 获取 TrackID\n\nReturns string\n\nGetUserID\n\n * virtual const std::string& GetUserID() = 0\n * 获取 UserID\n\nReturns string\n\nGetTag\n\n * virtual const std::string& GetTag() = 0\n * 获取 Tag\n\nReturns string\n\nIsAudio\n\n * virtual bool IsAudio() = 0\n * 是否为音频 track\n\nReturns bool\n\nIsVideo\n\n * virtual bool IsVideo() = 0\n * 是否为视频 track\n\nReturns bool\n\nIsMuted\n\n * virtual bool IsMuted() = 0\n * 是否 muted\n\nReturns bool\n\nSetTrackInfoChangedListener\n\n * virtual void SetTrackInfoChangedListener([QNTrackInfoChangedListener](https://developer.qiniu.com/rtc/9438/qnremoteaudiotrack/9453/QNTrackInfoChangedListener)* listener) = 0\n * 设置当前轨道信息变化监听器\n\nParameters\n\n * listener: [QNTrackInfoChangedListener](https://developer.qiniu.com/rtc/9438/qnremoteaudiotrack/9453/QNTrackInfoChangedListener)\n\n轨道信息变化监听器\n\nIsSubscribed\n\n * virtual bool IsSubscribed() = 0;\n * 该 Track 是否被订阅\n\nReturns bool\n\nSetAudioFrameListener\n\n * virtual void SetAudioFrameListener([QNAudioFrameListener](https://developer.qiniu.com/rtc/9438/qnremoteaudiotrack/11322/QNAudioFrameListener)* listener) = 0\n * 设置是音频数据监听\n\nParameters\n\n * listener: [QNAudioFrameListener](https://developer.qiniu.com/rtc/9438/qnremoteaudiotrack/11322/QNAudioFrameListener)\n\n音频数据监听接口\n\nSetVolume\n\n * virtual void SetVolume(double volume) = 0\n * 设置远端音频音量(不改变系统设备的音量)\n\nParameters\n\n * volume: double \n\n音量大小,[1.0-10.0], 默认为 1.0\n\nGetVolumeLevel\n\n * virtual float GetVolumeLevel() = 0\n * 获取音频播放音量\n\nReturns float\n\n以上内容是否对您有帮助?\n\n"
}
] | qiniu developer | 1 |
[
{
"from": "human",
"value": "如何处理七牛云的实时音视频QNRemoteAudioTrackStats"
},
{
"from": "gpt",
"value": "[实时音视频](https://developer.qiniu.com/rtc) > API 文档 > Windows\n>QNRemoteAudioTrackStats\n\n# QNRemoteAudioTrackStats\n\n最近更新时间: 2022-03-23 22:50:53\n\n订阅远端的音频统计信息\n\nIndex\n\nProperties\n\n * downlink_bitrate\n * downlink_lostrate\n * uplink_rtt\n * uplink_lostrate\n\nProperties\n\ndownlink_bitrate\n\n * downlink_bitrate: int32_t\n * 下行音频码率\n\ndownlink_lostrate\n\n * downlink_lostrate: int32_t\n * 下行网络丢包率\n\nuplink_rtt\n\n * uplink_rtt: int32_t\n * 上行网络 rtt\n\nuplink_lostrate\n\n * uplink_lostrate: int32_t\n * 上行网络丢包率\n\n以上内容是否对您有帮助?\n\n"
}
] | qiniu developer | 1 |
[
{
"from": "human",
"value": "如何处理七牛云的实时音视频QNRemoteTrack"
},
{
"from": "gpt",
"value": "[实时音视频](https://developer.qiniu.com/rtc) > API 文档 > Windows >QNRemoteTrack\n\n# QNRemoteTrack\n\n最近更新时间: 2022-03-23 22:52:18\n\n音视频远端 Track 基类\n\nHierarchy\n\n * [QNTrack](https://developer.qiniu.com/rtc/9440/qnremotetrack/9452/QNTrack)\n * [QNLocalTrack](https://developer.qiniu.com/rtc/9440/qnremotetrack/9422/QNLocalTrack)\n * [QNLocalVideoTrack](https://developer.qiniu.com/rtc/9440/qnremotetrack/9424/QNLocalVideoTrack)\n * [QNCameraVideoTrack](https://developer.qiniu.com/rtc/9440/qnremotetrack/9404/QNCameraVideoTrack)\n * [QNScreenVideoTrack](https://developer.qiniu.com/rtc/9440/qnremotetrack/9448/QNScreenVideoTrack)\n * [QNCustomVideoTrack](https://developer.qiniu.com/rtc/9440/qnremotetrack/9412/QNCustomVideoTrack)\n * [QNLocalAudioTrack](https://developer.qiniu.com/rtc/9440/qnremotetrack/9420/QNLocalAudioTrack)\n * [QNMicrophoneAudioTrack](https://developer.qiniu.com/rtc/9440/qnremotetrack/9433/QNMicrophoneAudioTrack)\n * [QNCustomAudioTrack](https://developer.qiniu.com/rtc/9440/qnremotetrack/9409/QNCustomAudioTrack)\n * QNRemoteTrack \n * [QNRemoteVideoTrack](https://developer.qiniu.com/rtc/9440/qnremotetrack/9443/QNRemoteVideoTrack)\n * [QNRemoteAudioTrack](https://developer.qiniu.com/rtc/9440/qnremotetrack/9438/QNRemoteAudioTrack)\n\nIndex\n\nCore Methods\n\n * GetTrackID\n * GetUserID\n * GetTag\n * IsAudio\n * IsVideo\n * IsMuted\n * SetTrackInfoChangedListener\n * IsSubscribed\n\nCore Methods\n\nGetTrackID\n\n * virtual const std::string& GetTrackID() = 0\n * 获取 TrackID\n\nReturns string\n\nGetUserID\n\n * virtual const std::string& GetUserID() = 0\n * 获取 UserID\n\nReturns string\n\nGetTag\n\n * virtual const std::string& GetTag() = 0\n * 获取 Tag\n\nReturns string\n\nIsAudio\n\n * virtual bool IsAudio() = 0\n * 是否为音频 track\n\nReturns bool\n\nIsVideo\n\n * virtual bool IsVideo() = 0\n * 是否为视频 track\n\nReturns bool\n\nIsMuted\n\n * virtual bool IsMuted() = 0\n * 是否 muted\n\nReturns bool\n\nSetTrackInfoChangedListener\n\n * virtual void SetTrackInfoChangedListener([QNTrackInfoChangedListener](https://developer.qiniu.com/rtc/9440/qnremotetrack/9453/QNTrackInfoChangedListener)* listener) = 0\n * 设置当前轨道信息变化监听器\n\nParameters\n\n * listener: [QNTrackInfoChangedListener](https://developer.qiniu.com/rtc/9440/qnremotetrack/9453/QNTrackInfoChangedListener)\n\n轨道信息变化监听器\n\nIsSubscribed\n\n * virtual bool IsSubscribed() = 0;\n * 该 Track 是否被订阅\n\nReturns bool\n\n以上内容是否对您有帮助?\n\n"
}
] | qiniu developer | 1 |
[
{
"from": "human",
"value": "如何处理七牛云的实时音视频QNRemoteUser"
},
{
"from": "gpt",
"value": "[实时音视频](https://developer.qiniu.com/rtc) > API 文档 > Windows >QNRemoteUser\n\n# QNRemoteUser\n\n最近更新时间: 2022-03-23 22:53:09\n\n远端用户信息\n\nIndex\n\nProperties\n\n * user_id\n * user_data\n * remote_audio_track_list\n * remote_video_track_list\n\nProperties\n\nuser_id\n\n * user_id: string\n * 远端用户的 userID\n\nuser_data\n\n * user_data: string\n * 远端用户数据\n\nremote_audio_track_list\n\n * remote_audio_track_list: std::list<[QNRemoteAudioTrack*](https://developer.qiniu.com/rtc/9441/qnremoteuser/undefined)>\n * 远端用户发布的音频 track 列表\n\nremote_video_track_list\n\n * remote_video_track_list: std::list<[QNRemoteVideoTrack*](https://developer.qiniu.com/rtc/9441/qnremoteuser/undefined)>\n * 远端用户发布的视频 track 列表\n\n以上内容是否对您有帮助?\n\n"
}
] | qiniu developer | 1 |
[
{
"from": "human",
"value": "如何处理七牛云的实时音视频QNRemoteVideoTrack"
},
{
"from": "gpt",
"value": "[实时音视频](https://developer.qiniu.com/rtc) > API 文档 > Windows\n>QNRemoteVideoTrack\n\n# QNRemoteVideoTrack\n\n最近更新时间: 2022-03-23 22:56:10\n\n视频远端 Track 类\n\nHierarchy\n\n * [QNTrack](https://developer.qiniu.com/rtc/9443/qnremotevideotrack/9452/QNTrack)\n * [QNLocalTrack](https://developer.qiniu.com/rtc/9443/qnremotevideotrack/9422/QNLocalTrack)\n * [QNLocalVideoTrack](https://developer.qiniu.com/rtc/9443/qnremotevideotrack/9424/QNLocalVideoTrack)\n * [QNCameraVideoTrack](https://developer.qiniu.com/rtc/9443/qnremotevideotrack/9404/QNCameraVideoTrack)\n * [QNScreenVideoTrack](https://developer.qiniu.com/rtc/9443/qnremotevideotrack/9448/QNScreenVideoTrack)\n * [QNCustomVideoTrack](https://developer.qiniu.com/rtc/9443/qnremotevideotrack/9412/QNCustomVideoTrack)\n * [QNLocalAudioTrack](https://developer.qiniu.com/rtc/9443/qnremotevideotrack/9420/QNLocalAudioTrack)\n * [QNMicrophoneAudioTrack](https://developer.qiniu.com/rtc/9443/qnremotevideotrack/9433/QNMicrophoneAudioTrack)\n * [QNCustomAudioTrack](https://developer.qiniu.com/rtc/9443/qnremotevideotrack/9409/QNCustomAudioTrack)\n * [QNRemoteTrack](https://developer.qiniu.com/rtc/9443/qnremotevideotrack/9440/QNRemoteTrack)\n * QNRemoteVideoTrack \n * [QNRemoteAudioTrack](https://developer.qiniu.com/rtc/9443/qnremotevideotrack/9438/QNRemoteAudioTrack)\n\nIndex\n\nCore Methods\n\n * GetTrackID\n * GetUserID\n * GetTag\n * IsAudio\n * IsVideo\n * IsMuted\n * SetTrackInfoChangedListener\n * IsSubscribed\n * SetVideoFrameListener\n * Play\n * IsMultiProfileEnabled\n * SetProfile\n\nCore Methods\n\nGetTrackID\n\n * virtual const std::string& GetTrackID() = 0\n * 获取 TrackID\n\nReturns string\n\nGetUserID\n\n * virtual const std::string& GetUserID() = 0\n * 获取 UserID\n\nReturns string\n\nGetTag\n\n * virtual const std::string& GetTag() = 0\n * 获取 Tag\n\nReturns string\n\nIsAudio\n\n * virtual bool IsAudio() = 0\n * 是否为音频 track\n\nReturns bool\n\nIsVideo\n\n * virtual bool IsVideo() = 0\n * 是否为视频 track\n\nReturns bool\n\nIsMuted\n\n * virtual bool IsMuted() = 0\n * 是否 muted\n\nReturns bool\n\nSetTrackInfoChangedListener\n\n * virtual void SetTrackInfoChangedListener([QNTrackInfoChangedListener](https://developer.qiniu.com/rtc/9443/qnremotevideotrack/9453/QNTrackInfoChangedListener)* listener) = 0\n * 设置当前轨道信息变化监听器\n\nParameters\n\n * listener: [QNTrackInfoChangedListener](https://developer.qiniu.com/rtc/9443/qnremotevideotrack/9453/QNTrackInfoChangedListener)\n\n轨道信息变化监听器\n\nIsSubscribed\n\n * virtual bool IsSubscribed() = 0;\n * 该 Track 是否被订阅\n\nReturns bool\n\nSetVideoFrameListener\n\n * virtual void SetVideoFrameListener([QNVideoFrameListener](https://developer.qiniu.com/rtc/9443/qnremotevideotrack/11326/QNVideoFrameListener)* listener) = 0\n * 设置视频数据监听\n\nParameters\n\n * listener: [QNVideoFrameListener](https://developer.qiniu.com/rtc/9443/qnremotevideotrack/11326/QNVideoFrameListener)\n\n视频数据监听接口\n\nPlay\n\n * virtual void Play(QNView& view) = 0\n * 设置渲染参数\n\nParameters\n\n * view: [QNView](https://developer.qiniu.com/rtc/9443/qnremotevideotrack/11328/QNView)\n\n渲染参数\n\nIsMultiProfileEnabled\n\n * virtual bool IsMultiProfileEnabled() = 0\n * 获取当前 track 是否开启了大小流发布\n\nReturns bool\n\nSetProfile\n\n * virtual void SetProfile([QNTrackProfile](https://developer.qiniu.com/rtc/9443/qnremotevideotrack/9454/QNTrackProfile) profile) = 0\n * 设置当前视频轨道质量等级\n\nParameters\n\n * profile: [QNTrackProfile](https://developer.qiniu.com/rtc/9443/qnremotevideotrack/9454/QNTrackProfile)\n\n视频质量\n\n以上内容是否对您有帮助?\n\n"
}
] | qiniu developer | 1 |
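The sketch below shows how the video tracks delivered in OnSubscribed might be handled: render each one and, when the publisher enabled multi-profile, request a specific profile. QNView's members are not shown on this page, so the view is passed in already configured, and desired_profile is whatever QNTrackProfile value fits the layout.

```cpp
void HandleSubscribedVideo(const RemoteVideoTrackList& remote_video_track_list,
                           QNView& view, QNTrackProfile desired_profile) {
    for (auto& remote_video_track : remote_video_track_list) {
        remote_video_track->Play(view);  // start rendering this remote track
        if (remote_video_track->IsMultiProfileEnabled()) {
            remote_video_track->SetProfile(desired_profile);  // only effective with multi-profile
        }
    }
}
```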
[
{
"from": "human",
"value": "如何处理七牛云的实时音视频QNRemoteVideoTrackStats"
},
{
"from": "gpt",
"value": "[实时音视频](https://developer.qiniu.com/rtc) > API 文档 > Windows\n>QNRemoteVideoTrackStats\n\n# QNRemoteVideoTrackStats\n\n最近更新时间: 2022-03-23 22:56:51\n\n订阅远端的视频统计信息\n\nIndex\n\nProperties\n\n * profile\n * downlink_framerate\n * downlink_bitrate\n * downlink_lostrate\n * uplink_rtt\n * uplink_lostrate\n\nProperties\n\nprofile\n\n * profile: [QNTrackProfile](https://developer.qiniu.com/rtc/9444/qnremotevideotrackstats/9454/QNTrackProfile)\n * 该路 track 的 profile\n\ndownlink_framerate\n\n * downlink_framerate: int32_t\n * 下行视频帧率\n\ndownlink_bitrate\n\n * downlink_bitrate: int32_t\n * 下行视频码率\n\ndownlink_lostrate\n\n * downlink_lostrate: int32_t\n * 下行网络丢包率\n\nuplink_rtt\n\n * uplink_rtt: int32_t\n * 上行网络 rtt (即远端用户发布视频的网络链路的 rtt)\n\nuplink_lostrate\n\n * uplink_lostrate: int32_t\n * 上行网络丢包率\n\n以上内容是否对您有帮助?\n\n"
}
] | qiniu developer | 1 |
[
{
"from": "human",
"value": "如何处理七牛云的实时音视频QNRenderMode"
},
{
"from": "gpt",
"value": "[实时音视频](https://developer.qiniu.com/rtc) > API 文档 > Windows >QNRenderMode\n\n# QNRenderMode\n\n最近更新时间: 2022-03-23 21:47:17\n\n各平台渲染方式\n\nIndex\n\nEnum\n\n * QNRenderMode\n\nEnum\n\nQNRenderMode\n\n * code | describe | message \n---|---|--- \nkRenderD3D | Windows 平台 D3D 方式渲染 | 无 \nkRenderGDI | Windows 平台 GDI 方式渲染 | 无 \nkRenderIOS | iOS 平台渲染 | 无 \nkRenderDefault | 默认渲染方式 | 无 \n \nType declaration\n\n * kRenderD3D\n\n * kRenderGDI\n\n * kRenderIOS\n\n * kRenderDefault\n\n以上内容是否对您有帮助?\n\n"
}
] | qiniu developer | 1 |
[
{
"from": "human",
"value": "如何处理七牛云的实时音视频QNRTC"
},
{
"from": "gpt",
"value": "[实时音视频](https://developer.qiniu.com/rtc) > API 文档 > Windows >QNRTC\n\n# QNRTC\n\n最近更新时间: 2022-03-24 00:23:01\n\nQNRTC 作为 SDK 的入口,提供创建本地音视频轨道,创建\n[QNRTCClient](https://developer.qiniu.com/rtc/9445/qnrtc/9446/QNRTCClient)\n对象等一系列静态方法\n\nIndex\n\nCore Methods\n\n * GetVersion\n * SetLogFile\n * Init\n * DeInint\n * GetCameraCount\n * GetCameraInfo\n * GetScreenWindowCount\n * GetScreenWindowInfo\n * GetAudioRecordingDeviceCount\n * GetAudioRecordingDeviceInfo\n * GetAudioPlaybackDeviceCount\n * GetAudioPlaybackDeviceInfo\n * SetAudioRecordingDevice\n * SetAudioPlaybackDevice\n * CreateClient[1/2]\n * CreateClient[2/2]\n * DestroyRtcClient\n * SetClientEventListener\n * CreateMicrophoneAudioTrack\n * CreateCustomAudioTrack\n * CreateCameraVideoTrack\n * CreateScreenVideoTrack\n * CreateCustomVideoTrack\n * DestroyLocalTrack\n * SetAudioRouteToSpeakerphone\n * SetAudioPlayMute\n * IsAudioPlayMute\n\nCore Methods\n\nGetVersion\n\n * static void GetVersion(std::string& ver)\n * 获取 sdk 版本号\n\nParameters\n\n * ver: string \n\n版本号\n\nSetLogFile\n\n * static void SetLogFile([QNLogLevel](https://developer.qiniu.com/rtc/9445/qnrtc/9432/QNLogLevel) level, const std::string& dir_name, const std::string& file_name)\n * 设置是否开启本地日志保存功能,并设置保存文件的路径和前缀名,不调用此接口,默认不开启日志记录\n\nParameters\n\n * level: [QNLogLevel](https://developer.qiniu.com/rtc/9445/qnrtc/9432/QNLogLevel)\n\n日志开启级别\n\n * dir_name: string \n\n日志文件目录名,不存在则自动创建\n\n * file_name: string \n\n日志文件前缀名\n\nInit\n\n * static void Init(QNRTCSetting& setting, QNRTCEventListener* listener)\n * 初始化\n\nParameters\n\n * setting: [QNRTCSetting](https://developer.qiniu.com/rtc/9445/qnrtc/11343/QNRTCSetting)\n\n日志开启级别\n\n * listener: [QNRTCEventListener](https://developer.qiniu.com/rtc/9445/qnrtc/11345/QNRTCEventListener)\n\n日志文件目录名,不存在则自动创建\n\nDeInint\n\n * static void DeInint()\n * 反初始化\n\nGetCameraCount\n\n * static int32_t GetCameraCount()\n * 获取摄像头数量, 此方法仅适用于 Windows 和 Linux 平台\n\nReturns 设备数量\n\nGetCameraInfo\n\n * static const [QNCameraInfo](https://developer.qiniu.com/rtc/9445/qnrtc/9402/9402)& GetCameraInfo(int32_t index)\n * 获取指定序号的摄像头设备信息,首先通过 GetCameraCount 获取摄像头数量,此方法仅适用于 Windows 和 Linux 平台\n\nParameters\n\n * index: int32_t \n\n设备序号,<= GetCameraCount()\n\nReturns 指定index 的摄像头设备信息\n\nGetScreenWindowCount\n\n * static int32_t GetScreenWindowCount()\n * 获取可进行画面采集的屏幕、窗口数量;如需刷新则再次调用即可,此方法仅适用于 Windows 平台\n\nReturns 可以进行采集的屏幕、窗口数量\n\nGetScreenWindowInfo\n\n * sstatic const [QNScreenWindowInfo](https://developer.qiniu.com/rtc/9445/qnrtc/9450/QNScreenWindowInfo)& GetScreenWindowInfo(int32_t index)\n * 获取指定 index 的屏幕窗口信息,根据此信息可以进行对应的画面采集,此方法仅适用于 Windows\n\nParameters\n\n * index: int32_t \n\n窗口序号,<= GetScreenWindowCount()\n\nReturns 指定 index 的窗口信息\n\nGetAudioRecordingDeviceCount\n\n * static int32_t GetAudioRecordingDeviceCount()\n * 获取音频采集设备数量,此方法仅适用于 Windows 和 Linux 平台\n\nReturns 音频采集设备的数量\n\nGetAudioRecordingDeviceInfo\n\n * static [QNAudioDeviceInfo](https://developer.qiniu.com/rtc/9445/qnrtc/9396QNAudioDeviceInfo)& GetAudioRecordingDeviceInfo(int32_t index)\n * 获取指定 index 音频采集设备信息,此方法仅适用于 Windows 和 Linux 平台\n\nParameters\n\n * index: int32_t \n\n采集设备序号,<= GetAudioRecordingDeviceCount()\n\nReturns 指定 index 的音频采集设备信息\n\nGetAudioPlaybackDeviceCount\n\n * static int32_t GetAudioPlaybackDeviceCount()\n * 获取音频播放设备数量,此方法仅适用于 Windows 和 Linux 平台\n\nReturns 音频播放设备的数量\n\nGetAudioPlaybackDeviceInfo\n\n * static [QNAudioDeviceInfo](https://developer.qiniu.com/rtc/9445/qnrtc/9396QNAudioDeviceInfo)& GetAudioPlaybackDeviceInfo(int32_t index)\n * 获取指定 index 
音频播放设备信息,此方法仅适用于 Windows 和 Linux 平台\n\nParameters\n\n * index: int32_t \n\n采集设备序号,<= GetAudioRecordingDeviceCount()\n\nReturns 指定 index 的音频播放设备信息\n\nSetAudioRecordingDevice\n\n * static int32_t SetAudioRecordingDevice(int32_t index)\n * 设置连麦使用的音频采集设备,不调用则使用系统默认采集设备,连麦过程中设置无效,需在发布音频 Track 前调用,此方法仅适用于 Windows 和 Linux 平台\n\nParameters\n\n * index: int32_t \n\n采集设备序号,<= GetAudioRecordingDeviceCount()\n\nReturns 成功返回 0,其它请参考错误码列表\n\nSetAudioPlaybackDevice\n\n * static int32_t SetAudioPlaybackDevice(int32_t index)\n * 设置连麦使用的音频播放设备,不调用则使用系统默认采集设备,连麦过程中设置无效,需在发布音频 Track 前调用,此方法仅适用于 Windows 和 Linux 平台\n\nParameters\n\n * index: int32_t \n\n采集设备序号,<= GetAudioPlaybackDeviceCount()\n\nReturns 成功返回 0,其它请参考错误码列表\n\nCreateClient[1/2]\n\n * static [QNRTCClient](https://developer.qiniu.com/rtc/9445/qnrtc/9446/QNRTCClient)* CreateClient([QNClientEventListener](https://developer.qiniu.com/rtc/9445/qnrtc/9407/QNClientEventListener)* listener)\n * 创建一个连接对象,QNRTCClient 全局只可存在一个\n\nParameters\n\n * listener: [QNClientEventListener](https://developer.qiniu.com/rtc/9445/qnrtc/9407/QNClientEventListener)\n\n连接中的事件监听\n\nReturns 连接管理对象指针\n\nCreateClient[2/2]\n\n * static [QNRTCClient](https://developer.qiniu.com/rtc/9445/qnrtc/9446/QNRTCClient)* CreateClient([QNRTCClientConfig](https://developer.qiniu.com/rtc/9445/qnrtc/11344/QNRTCClientConfig)& config, [QNClientEventListener](https://developer.qiniu.com/rtc/9445/qnrtc/9407/QNClientEventListener)* listener)\n * 使用 [QNRTCClientConfig](https://developer.qiniu.com/rtc/9445/qnrtc/11344/QNRTCClientConfig) 设创建一个连接对象并为其设置事件监听,QNRTCClient 全局只可存在一个\n\nParameters\n\n * config: [QNRTCClientConfig](https://developer.qiniu.com/rtc/9445/qnrtc/11344/QNRTCClientConfig)\n\n连接管理对象初始化配置\n\n * listener: [QNClientEventListener](https://developer.qiniu.com/rtc/9445/qnrtc/9407/QNClientEventListener)\n\n连接中的事件监听\n\nReturns 连接管理对象指针\n\nDestroyRtcClient\n\n * static void DestroyRtcClient([QNRTCClient](https://developer.qiniu.com/rtc/9445/qnrtc/9446/QNRTCClient)* ptr)\n * 释放由 CreateClient 创建的连接管理对象\n\nParameters\n\n * ptr: [QNRTCClient](https://developer.qiniu.com/rtc/9445/qnrtc/9446/QNRTCClient)\n\n[QNRTCClient](https://developer.qiniu.com/rtc/9445/qnrtc/9446/QNRTCClient)\n对象指针\n\nSetClientEventListener\n\n * static void SetClientEventListener([QNClientEventListener](https://developer.qiniu.com/rtc/9445/qnrtc/9407/QNClientEventListener)* listener)\n * 设置连接中的事件监听\n\nParameters\n\n * listener: [QNClientEventListener](https://developer.qiniu.com/rtc/9445/qnrtc/9407/QNClientEventListener)\n\n[QNClientEventListener](https://developer.qiniu.com/rtc/9445/qnrtc/9407/QNClientEventListener)\n对象指针\n\nCreateMicrophoneAudioTrack\n\n * static [QNMicrophoneAudioTrack](https://developer.qiniu.com/rtc/9445/qnrtc/9433/QNMicrophoneAudioTrack)* CreateMicrophoneAudioTrack([QNMicrophoneAudioTrackConfig](https://developer.qiniu.com/rtc/9445/qnrtc/9434/QNMicrophoneAudioTrackConfig)& config)\n * 通过指定一个 [QNMicrophoneAudioTrackConfig](https://developer.qiniu.com/rtc/9445/qnrtc/9434/QNMicrophoneAudioTrackConfig) 来创建一个麦克风音频轨道对象\n\n> 本地音频 Track 只可存在一个,重复创建会返回 null\n\nParameters\n\n * config: [QNMicrophoneAudioTrackConfig](https://developer.qiniu.com/rtc/9445/qnrtc/9434/QNMicrophoneAudioTrackConfig)\n\n采集音频的配置\n\nReturns\n[QNMicrophoneAudioTrack](https://developer.qiniu.com/rtc/9445/qnrtc/9433/QNMicrophoneAudioTrack)\n对象指针\n\nCreateCustomAudioTrack\n\n * static [QNCustomAudioTrack](https://developer.qiniu.com/rtc/9445/qnrtc/9409/QNCustomAudioTrack)* 
CreateCustomAudioTrack([QNCustomAudioTrackConfig](https://developer.qiniu.com/rtc/9445/qnrtc/9410/QNCustomAudioTrackConfig)& config)\n * 通过指定一个 [QNCustomAudioTrackConfig](https://developer.qiniu.com/rtc/9445/qnrtc/9410/QNCustomAudioTrackConfig) 来创建一个自定义音频轨道对象\n\n> 本地音频 Track 只可存在一个,重复创建会返回 null\n\nParameters\n\n * config: [QNCustomAudioTrackConfig](https://developer.qiniu.com/rtc/9445/qnrtc/9410/QNCustomAudioTrackConfig)\n\n自定义采集音频的配置\n\nReturns\n[QNCustomAudioTrack](https://developer.qiniu.com/rtc/9445/qnrtc/9409/QNCustomAudioTrack)\n对象指针\n\nCreateCameraVideoTrack\n\n * static [QNCameraVideoTrack](https://developer.qiniu.com/rtc/9445/qnrtc/9404/QNCameraVideoTrack)* CreateCameraVideoTrack([QNCameraVideoTrackConfig](https://developer.qiniu.com/rtc/9445/qnrtc/9406/QNCameraVideoTrackConfig)& config)\n * 通过指定一个 [QNCameraVideoTrackConfig](https://developer.qiniu.com/rtc/9445/qnrtc/9406/QNCameraVideoTrackConfig) 来创建一个摄像头视频轨道对象\n\nParameters\n\n * config: [QNCameraVideoTrackConfig](https://developer.qiniu.com/rtc/9445/qnrtc/9406/QNCameraVideoTrackConfig)\n\n摄像头 Track 的配置,可设置视频的采集配置与开关大小流功能等\n\nReturns\n[QNCameraVideoTrack](https://developer.qiniu.com/rtc/9445/qnrtc/9404/QNCameraVideoTrack)\n对象指针\n\nCreateScreenVideoTrack\n\n * static [QNScreenVideoTrack](https://developer.qiniu.com/rtc/9445/qnrtc/9448/QNScreenVideoTrack)* CreateScreenVideoTrack([QNScreenVideoTrackConfig](https://developer.qiniu.com/rtc/9445/qnrtc/9449/QNScreenVideoTrackConfig)& config)\n * 通过指定一个 [QNScreenVideoTrackConfig](https://developer.qiniu.com/rtc/9445/qnrtc/9449/QNScreenVideoTrackConfig) 来创建一个屏幕共享轨道对象\n\n> Linux 平台暂不支持屏幕共享\n\nParameters\n\n * config: [QNScreenVideoTrackConfig](https://developer.qiniu.com/rtc/9445/qnrtc/9449/QNScreenVideoTrackConfig)\n\n屏幕共享 Track 的配置,可设置视频的编码配置\n\nReturns\n[QNScreenVideoTrack](https://developer.qiniu.com/rtc/9445/qnrtc/9448/QNScreenVideoTrack)\n对象指针\n\nCreateCustomVideoTrack\n\n * static [QNCustomVideoTrack](https://developer.qiniu.com/rtc/9445/qnrtc/9412/QNCustomVideoTrack)* CreateCustomVideoTrack([QNCustomVideoTrackConfig](https://developer.qiniu.com/rtc/9445/qnrtc/9413/QNCustomVideoTrackConfig)& config)\n * 通过指定一个 [QNCustomVideoTrackConfig](https://developer.qiniu.com/rtc/9445/qnrtc/9413/QNCustomVideoTrackConfig) 来创建一个自定义视频采集轨道对象\n\n> 自定义视频采集轨道可以创建多个,而其他采集轨道只能创建一个\n\nParameters\n\n * config: [QNCustomVideoTrackConfig](https://developer.qiniu.com/rtc/9445/qnrtc/9413/QNCustomVideoTrackConfig)\n\n自定义视频采集 Track 的配置,可设置视频的编码配置\n\nReturns\n[QNCustomVideoTrack](https://developer.qiniu.com/rtc/9445/qnrtc/9412/QNCustomVideoTrack)\n对象指针\n\nDestroyLocalTrack\n\n * static void DestroyLocalTrack([QNLocalTrack](https://developer.qiniu.com/rtc/9445/qnrtc/9422/QNLocalTrack)* ptr)\n * 释放前面创建的音视频轨道实例\n\nParameters\n\n * ptr: [QNLocalTrack](https://developer.qiniu.com/rtc/9445/qnrtc/9422/QNLocalTrack)\n\n音视频轨道实例\n\nSetAudioRouteToSpeakerphone\n\n * static void SetAudioRouteToSpeakerphone(bool audio_route_to_speakerphone)\n * 设置是否将音频路由切换到扬声器,设置为 false 后将会切换到之前的音频路由。\n\n设置生效后会触发\n[QNRTCEventListener.OnAudioRouteChanged](https://developer.qiniu.com/rtc/9445/qnrtc/11345)\n回调对应的路由状态,该方法仅仅适用于 Android 和 iOS 平台\n\nParameters\n\n * audio_route_to_speakerphone: bool \n\n是否将音频路由切换到扬声器\n\nSetAudioPlayMute\n\n * static void SetAudioPlayMute(bool muted)\n * 设置音频播放静音,该方法仅仅适用于 Android 和 iOS 平台\n\nParameters\n\n * muted: bool \n\n是否将音频播放设置为静音\n\nIsAudioPlayMute\n\n * static bool IsAudioPlayMute()\n * 当前音频播放是否为静音,该方法仅仅适用于 Android 和 iOS 平台\n\nReturns 音频静音状态,true 为静音,false 不为静音\n\n以上内容是否对您有帮助?\n\n"
}
] | qiniu developer | 1 |
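The sketch below strings the static entry points above into a minimal session lifecycle. MyRtcEventListener and MyClientEventListener stand for user-provided implementations of QNRTCEventListener and QNClientEventListener, and the config structs are left at their defaults because their members are documented on their own pages.

```cpp
void RunRtcSessionSketch() {
    QNRTCSetting setting;                   // members documented on the QNRTCSetting page
    MyRtcEventListener rtc_event_listener;  // user implementation of QNRTCEventListener
    QNRTC::Init(setting, &rtc_event_listener);

    MyClientEventListener client_listener;  // user implementation of QNClientEventListener
    QNRTCClient* client = QNRTC::CreateClient(&client_listener);

    QNMicrophoneAudioTrackConfig mic_config;
    QNCameraVideoTrackConfig camera_config;
    QNMicrophoneAudioTrack* mic_track = QNRTC::CreateMicrophoneAudioTrack(mic_config);
    QNCameraVideoTrack* camera_track = QNRTC::CreateCameraVideoTrack(camera_config);

    // ... join the room via client->Join(...), publish the tracks, run the session ...

    QNRTC::DestroyLocalTrack(mic_track);
    QNRTC::DestroyLocalTrack(camera_track);
    QNRTC::DestroyRtcClient(client);
    QNRTC::DeInint();  // spelling follows the method name listed above
}
```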