On the Ant Media Server side, it is simply an event producer for Apache Kafka. Ant Media Server sends instance and stream statistics to Apache Kafka every 15 seconds. You can consume the Apache Kafka events with any other tool or pipeline; in our case, we use the following pipeline to collect and visualize the statistics.
In this post, we show how to install that pipeline to provide a plug-and-play monitoring solution for Ant Media Server instances. Let's start by installing the following components.
Apache Kafka is useful for building real-time streaming data pipelines that move data between systems or applications.
apt-get update && apt-get install openjdk-8-jdk -y
wget -qO- https://archive.apache.org/dist/kafka/2.2.0/kafka_2.12-2.2.0.tgz | tar -zxvf- -C /opt/ && mv /opt/kafka* /opt/kafka
Edit the Kafka configuration and set the listener address to your server's IP:
vim /opt/kafka/config/server.properties
listeners=PLAINTEXT://your_server_ip:9092
/opt/kafka/bin/zookeeper-server-start.sh /opt/kafka/config/zookeeper.properties &
/opt/kafka/bin/kafka-server-start.sh /opt/kafka/config/server.properties &
We started ZooKeeper first, because Kafka depends on it, and then started Kafka.
netstat -tpln | egrep "9092|2181"
If you see both ports (9092 and 2181) in listening mode, everything is working and the first installation step is complete.
Running Apache Kafka as a systemd service lets us manage it with systemctl commands.
Follow the instructions below:
Create a systemd unit file for Apache Kafka:
vim /lib/systemd/system/kafka.service
Put the following content into kafka.service:
[Unit]
Description=Apache Kafka Server
Requires=network.target remote-fs.target
After=network.target remote-fs.target kafka-zookeeper.service
[Service]
Type=simple
Environment=JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-amd64
ExecStart=/opt/kafka/bin/kafka-server-start.sh /opt/kafka/config/server.properties
ExecStop=/opt/kafka/bin/kafka-server-stop.sh
[Install]
WantedBy=multi-user.target
Create a systemd unit file for ZooKeeper:
vim /lib/systemd/system/kafka-zookeeper.service
Put the following content into the kafka-zookeeper.service file you've created above:
[Unit]
Description=Apache Zookeeper Server
Requires=network.target remote-fs.target
After=network.target remote-fs.target
[Service]
Type=simple
Environment=JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-amd64
ExecStart=/opt/kafka/bin/zookeeper-server-start.sh /opt/kafka/config/zookeeper.properties
ExecStop=/opt/kafka/bin/zookeeper-server-stop.sh
[Install]
WantedBy=multi-user.target
systemctl enable kafka-zookeeper.service
systemctl enable kafka.service
systemctl start kafka-zookeeper.service
systemctl start kafka.service
If you want to monitor Ant Media Server, you need to set your Apache Kafka IP address in the file AMS_INSTALLATION_DIR/conf/red5.properties:
vim /usr/local/antmedia/conf/red5.properties
server.kafka_brokers=ip_address:port_number
Replace ip_address:port_number with your Apache Kafka IP address and port number. For example: server.kafka_brokers=192.168.1.230:9092
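If you prefer to script the change instead of editing the file by hand, a sed one-liner works. This is a minimal sketch: the /tmp/red5.properties path and the 192.168.1.230:9092 address are example values standing in for your real AMS_INSTALLATION_DIR/conf/red5.properties and broker address.

```shell
# create a sample properties file with an empty broker setting (illustration only)
printf 'server.kafka_brokers=\n' > /tmp/red5.properties
# rewrite the whole server.kafka_brokers line with the example broker address
sed -i 's|^server.kafka_brokers=.*|server.kafka_brokers=192.168.1.230:9092|' /tmp/red5.properties
# confirm the result
grep '^server.kafka_brokers=' /tmp/red5.properties   # → server.kafka_brokers=192.168.1.230:9092
```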
service antmedia restart
/opt/kafka/bin/kafka-console-consumer.sh --bootstrap-server 192.168.1.230:9092 --topic ams-instance-stats --from-beginning
The output should look like this:
{"instanceId":"a06e5437-40ee-49c1-8e38-273544964335","cpuUsage":
{"processCPUTime":596700000,"systemCPULoad":0,"processCPULoad":1},"jvmMemoryUsage":
{"maxMemory":260046848,"totalMemory":142606336,"freeMemory":21698648,"inUseMemory":120907688},"systemInfo":
{"osName":"Linux","osArch":"amd64","javaVersion":"1.8","processorCount":1},"systemMemoryInfo":
...
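As a quick sanity check on the figures above, the JVM memory utilisation can be derived from the jvmMemoryUsage fields with awk. The numbers below are copied from the sample output, not read from a live server:

```shell
# inUseMemory / maxMemory from the sample jvmMemoryUsage block, as a percentage
awk 'BEGIN { printf "%.1f%%\n", 120907688 / 260046848 * 100 }'   # → 46.5%
```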
/opt/kafka/bin/kafka-topics.sh --list --bootstrap-server your_kafka_server:9092
For example: /opt/kafka/bin/kafka-topics.sh --list --bootstrap-server 192.168.1.230:9092
ams-instance-stats
ams-webrtc-stats
kafka-webrtc-tester-stats
You can consume any of the listed topics in the same way:
/opt/kafka/bin/kafka-console-consumer.sh --bootstrap-server 192.168.1.230:9092 --topic ams-instance-stats --from-beginning
Next, install Elasticsearch:
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
sudo apt-get install apt-transport-https
echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-7.x.list
apt-get update && apt-get install elasticsearch
systemctl enable elasticsearch.service
systemctl start elasticsearch.service
Logstash is a server-side data processing pipeline that ingests data from multiple sources simultaneously, transforms it, and then sends it to a "stash" such as Elasticsearch.
Install Logstash:
apt-get update && apt-get install logstash
systemctl enable logstash.service
Create a Logstash pipeline configuration file (typically under /etc/logstash/conf.d/) with the content below. Replace kafka_server_ip with the IP address of your Apache Kafka server and make sure the elasticsearch_ip is correct.
#kafka
input {
kafka {
bootstrap_servers => "kafka_server_ip:9092"
client_id => "logstash"
group_id => "logstash"
consumer_threads => 3
topics => ["ams-instance-stats","ams-webrtc-stats","kafka-webrtc-tester-stats"]
codec => "json"
tags => ["log", "kafka_source"]
type => "log"
}
}
#elasticsearch
output {
elasticsearch {
hosts => ["127.0.0.1:9200"] #elasticsearch_ip
index => "logstash-%{[type]}-%{+YYYY.MM.dd}"
}
stdout { codec => rubydebug }
}
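The index => "logstash-%{[type]}-%{+YYYY.MM.dd}" line in the output section means Logstash writes one index per event type per day. As a sketch, for an event of type "log" on 23 March 2020, the index name expands as follows (GNU date is used here only to reproduce Logstash's YYYY.MM.dd date format):

```shell
# %{+YYYY.MM.dd} in Logstash corresponds to +%Y.%m.%d in GNU date
echo "logstash-log-$(date -u -d 2020-03-23 +%Y.%m.%d)"   # → logstash-log-2020.03.23
```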
Restart the Logstash service:
systemctl restart logstash
You can test whether Elasticsearch and Logstash are working with the following command: curl -XGET 'localhost:9200/_cat/indices?v&pretty'
Sample output:
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
yellow open logstash-log-2020.03.23 mf-ffIHBSNO4s7_YoUr_Rw 1 1 1300 0 527.5kb 527.5kb
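Since _cat/indices prints fixed, whitespace-separated columns, single values are easy to pull out with awk. The line below is the sample row from above, not live output:

```shell
# sample _cat/indices row; docs.count is the 7th whitespace-separated column
line="yellow open logstash-log-2020.03.23 mf-ffIHBSNO4s7_YoUr_Rw 1 1 1300 0 527.5kb 527.5kb"
echo "$line" | awk '{ print $7 }'   # → 1300
```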
Grafana is an open-source metrics analytics and visualization suite.
sudo apt-get install -y software-properties-common wget apt-transport-https
wget -q -O - https://packages.grafana.com/gpg.key | sudo apt-key add -
sudo add-apt-repository "deb https://packages.grafana.com/oss/deb stable main"
sudo apt-get update && sudo apt-get install grafana
systemctl enable grafana-server
systemctl start grafana-server
Log in to Grafana (http://your_ip_address:3000/login) through your web browser. The default username and password are admin/admin.
Click "Add data source" and fill in:
URL: http://127.0.0.1:9200
Index name: logstash-
Time field name: @timestamp
Version: 7.0+
Then create a New dashboard, click Add Query, and set Query: ElasticSearch.
Create a Telegram bot by sending the /newbot command to @BotFather; it replies with your bot's access token:
/newbot
Use this token to access the HTTP API:
1254341629:AAHYHhJK8TgsUXa7jqBK7wU1bJ8hzWhUFzs
Keep your token secure and store it safely, it can be used by anyone to control your bot.
Add the bot to your Telegram channel, post a message there, and then open the getUpdates endpoint (with your own access token) to find the channel's chat ID:
https://api.telegram.org/bot{USE_YOUR_ACCESS_TOKEN}/getUpdates
{"ok":true,"result":[{"update_id":222389875,
"channel_post":{"message_id":2,"chat":
{"id":-1001181377238,"title":"test","type":"channel"},"date":1587016720,"text":"test"}}]}
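To pull the chat ID out of that response without installing anything extra, grep and cut on the raw JSON are enough. The resp variable below holds the sample response shown above:

```shell
# sample getUpdates response (copied from the example output)
resp='{"ok":true,"result":[{"update_id":222389875,"channel_post":{"message_id":2,"chat":{"id":-1001181377238,"title":"test","type":"channel"},"date":1587016720,"text":"test"}}]}'
# isolate the "chat":{"id":... fragment, then take the value after the last colon
echo "$resp" | grep -o '"chat":{"id":[-0-9]*' | cut -d: -f3   # → -1001181377238
```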
The id field of the chat object is your channel's chat ID, in this example -1001181377238.
We have configured the chatbot above. Now let's configure a Grafana notification channel.
http://your_grafana_server:3000
Name: name_of_your_notification
Type: Telegram
Bot API Token: your_bot_token_id
Chat ID: your_channel_id
If you click Send Test and a message appears in Telegram, everything is working.
Now you have notifications set up the way you need.
That is the whole setup for monitoring Ant Media Server.
That's all for this post. See you next time.