Installing JMX Exporter as a Debian Package

Posted on Tuesday, April 25, 2017


I already have a JMX Exporter installed and working against a simple Kafka server; see http://www.whiteboardcoder.com/2017/04/prometheus-and-jmx.html [1].

Now I want to see if I can simplify it a little by installing the tool via a Debian package. 

The JMX Exporter build also produces a Debian installation package.




Download and build


Install maven


  > sudo apt-get install maven





Clone the JMX exporter repo.


  > git clone https://github.com/prometheus/jmx_exporter.git





Head into the repo and build it.


  > cd jmx_exporter/
  > mvn package










This should create a Debian package in jmx_prometheus_httpserver/target/

Check the Debian package info


  > cd jmx_prometheus_httpserver/target/
  > dpkg-deb --info \
jmx_prometheus_httpserver_0.10+SNAPSHOT_all.deb




Now install it!


  > sudo dpkg -i \
jmx_prometheus_httpserver_0.10+SNAPSHOT_all.deb




Now you should have this tool installed at /usr/bin/jmx_exporter


  > ls -alh /usr/bin/jmx_exporter




This tool looks for its YAML config file at


  > ls -alh /etc/jmx_exporter/jmx_exporter.yaml


 




Taking a look at it


  > vi /etc/jmx_exporter/jmx_exporter.yaml





The default is very basic and it listens on port 5555.
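
For reference, the shipped default config is roughly the following (a sketch; the exact contents may differ between builds). With no whitelist and no rules, everything found over JMX gets exported:

```yaml
---
# Pull metrics from a JMX remote endpoint on port 5555
# and export everything (no whitelist, no rewrite rules).
hostPort: localhost:5555
```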








Getting it connected to Kafka


Now that I have this installed, how do I get it working with a Java app? 

For my simple Kafka install I need to edit the kafka-server-start.sh script to enable JMX remote management.


  > sudo vi /opt/kafka/kafka_2.11-0.10.1.0/bin/kafka-server-start.sh


And add the following to the top (this basically turns on JMX remote management)


export KAFKA_OPTS="$KAFKA_OPTS -Dcom.sun.management.jmxremote"
#Should retrieve local IP address
IP_ADDR=`ip route get 8.8.8.8 | awk '{print $NF; exit}'`
export KAFKA_OPTS="$KAFKA_OPTS -Djava.rmi.server.hostname=$IP_ADDR"
export KAFKA_OPTS="$KAFKA_OPTS -Dcom.sun.management.jmxremote.port=5555"
export KAFKA_OPTS="$KAFKA_OPTS -Dcom.sun.management.jmxremote.authenticate=false"
export KAFKA_OPTS="$KAFKA_OPTS -Dcom.sun.management.jmxremote.ssl=false"
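
A note on the IP_ADDR one-liner: it prints the last field of the first line of `ip route get` output, which happens to be the source address on the iproute2 version used here. Newer releases append `uid <n>` to that line, so picking the field that follows `src` is sturdier. A sketch against sample output (the address, gateway, and interface are made up):

```shell
# Sample `ip route get 8.8.8.8` output from a newer iproute2 (hypothetical):
sample='8.8.8.8 via 192.168.0.1 dev eth0 src 192.168.0.140 uid 1000'
# Grab the field after "src" instead of the last field of the line:
echo "$sample" | awk '{for (i = 1; i < NF; i++) if ($i == "src") print $(i+1)}'
# prints 192.168.0.140
```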


After you do that, restart Kafka (in my particular case I run this command on the server).


  > sudo /opt/kafka/kafka_2.11-0.10.1.0/bin/kafka-server-start.sh /opt/kafka/kafka_2.11-0.10.1.0/config/server.properties




Once Kafka is back up, start the JMX Exporter web server.


  > jmx_exporter 192.168.0.140:9999 \
  /etc/jmx_exporter/jmx_exporter.yaml





Once it has started, it should be gathering data from the Kafka server via JMX remote and exposing it all at 192.168.0.140:9999/metrics.



As a test, I can run this curl.


  > curl -s 192.168.0.140:9999/metrics





As a further test, I want to see how many metrics I get.


  > curl -s 192.168.0.140:9999/metrics | grep -v ^# | wc -l
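
The `grep -v ^#` is there because Prometheus output interleaves `# HELP` and `# TYPE` comment lines with the samples; the filter drops those before counting. A tiny made-up payload illustrates:

```shell
# Hypothetical two-metric exposition payload:
sample='# HELP jvm_threads_current Current thread count
# TYPE jvm_threads_current gauge
jvm_threads_current 42.0
kafka_server_replicamanager_leadercount_value 1.0'
# Drop the comment lines, then count the remaining samples:
echo "$sample" | grep -v ^# | wc -l
# prints 2
```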



That is too many metrics for me, so let me pare the list down by updating the YAML file.


  > vi /etc/jmx_exporter/jmx_exporter.yaml




---
hostPort: 127.0.0.1:5555
lowercaseOutputName: true
whitelistObjectNames: [
   "kafka.controller:type=KafkaController,name=ActiveControllerCount,*",

   "kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec,*",
   "kafka.server:type=BrokerTopicMetrics,name=BytesInPerSec,*",
   "kafka.server:type=BrokerTopicMetrics,name=BytesOutPerSec,*",

   "kafka.server:type=ReplicaManager,name=IsrExpandsPerSec,*",
   "kafka.server:type=ReplicaManager,name=IsrShrinksPerSec,*",
   "kafka.server:type=ReplicaManager,name=LeaderCount,*",
   "kafka.server:type=ReplicaManager,name=PartitionCount,*",
   "kafka.server:type=ReplicaManager,name=UnderReplicatedPartitions,*",

   "kafka.server:type=ReplicaFetcherManager,name=MaxLag,*",

   "kafka.server:type=DelayedOperationPurgatory,name=PurgatorySize,delayedOperation=Fetch,*",
   "kafka.server:type=DelayedOperationPurgatory,name=PurgatorySize,delayedOperation=Produce,*",

   "kafka.network:type=RequestMetrics,name=RequestsPerSec,request=Produce,*",
   "kafka.network:type=RequestMetrics,name=RequestsPerSec,request=FetchConsumer,*",
   "kafka.network:type=RequestMetrics,name=RequestsPerSec,request=FetchFollower,*",

   "kafka.network:type=RequestMetrics,name=TotalTimeMs,request=Produce,*",
   "kafka.network:type=RequestMetrics,name=TotalTimeMs,request=FetchConsumer,*",
   "kafka.network:type=RequestMetrics,name=TotalTimeMs,request=FetchFollower,*",

   "kafka.network:type=RequestMetrics,name=RequestQueueTimeMs,request=Produce,*",
   "kafka.network:type=RequestMetrics,name=RequestQueueTimeMs,request=FetchConsumer,*",
   "kafka.network:type=RequestMetrics,name=RequestQueueTimeMs,request=FetchFollower,*",

   "kafka.network:type=RequestMetrics,name=LocalTimeMs,request=Produce,*",
   "kafka.network:type=RequestMetrics,name=LocalTimeMs,request=FetchConsumer,*",
   "kafka.network:type=RequestMetrics,name=LocalTimeMs,request=FetchFollower,*",

   "kafka.network:type=RequestMetrics,name=RemoteTimeMs,request=Produce,*",
   "kafka.network:type=RequestMetrics,name=RemoteTimeMs,request=FetchConsumer,*",
   "kafka.network:type=RequestMetrics,name=RemoteTimeMs,request=FetchFollower,*",

   "kafka.network:type=RequestMetrics,name=ResponseQueueTimeMs,request=Produce,*",
   "kafka.network:type=RequestMetrics,name=ResponseQueueTimeMs,request=FetchConsumer,*",
   "kafka.network:type=RequestMetrics,name=ResponseQueueTimeMs,request=FetchFollower,*",

   "kafka.network:type=RequestMetrics,name=ResponseSendTimeMs,request=Produce,*",
   "kafka.network:type=RequestMetrics,name=ResponseSendTimeMs,request=FetchConsumer,*",
   "kafka.network:type=RequestMetrics,name=ResponseSendTimeMs,request=FetchFollower,*",

   "kafka.network:type=SocketServer,name=NetworkProcessorAvgIdlePercent,*",
]

rules:
#kafka.controller rules
  - pattern: kafka.controller<type=(KafkaController*), name=(ActiveControllerCount.*)><>(Value)
    name: kafka_controller_$1_$2_$3
    type: GAUGE

#kafka.server rules
  - pattern: kafka.server<type=(.+), name=(.+)><>(Count)
    name: kafka_server_$1_$2_$3
    type: COUNTER
  - pattern: kafka.server<type=(.+), name=(.+)><>(FiveMinuteRate)
    name: kafka_server_$1_$2_$3
    type: GAUGE
  - pattern: kafka.server<type=(ReplicaManager.*), name=((Leader|Partition)Count.*|UnderReplicatedPartitions.*)><>(Value)
    name: kafka_server_$1_$2_$3
    type: GAUGE
  - pattern: kafka.server<type=(ReplicaFetcherManager.*), name=(MaxLag.*)><>(Value)
    name: kafka_server_$1_$2_$3
    type: GAUGE
  - pattern: kafka.server<type=(DelayedOperationPurgatory.*), name=(PurgatorySize.*)><>(Value)
    name: kafka_server_$1_$2_$3
    type: GAUGE



#kafka.network rules
 # name=TotalTimeMs -- ??Not sure why (TotalTimeMs) does not work??#
  - pattern: kafka.network<type=(.+), name=(TotalTimeMs.*|RequestQueueTimeMs.*|LocalTimeMs.*|RemoteTimeMs.*|ResponseQueueTimeMs.*|ResponseSendTimeMs.*)><>(Count)
    name: kafka_network_$1_$2_$3
    type: COUNTER
  - pattern: kafka.network<type=(.+), name=(TotalTimeMs.*|RequestQueueTimeMs.*|LocalTimeMs.*|RemoteTimeMs.*|ResponseQueueTimeMs.*|ResponseSendTimeMs.*)><>(Min|Max|Mean|50th*|99th*)
    name: kafka_network_$1_$2_$3
    type: GAUGE
  - pattern: kafka.network<type=(.+), name=(NetworkProcessorAvgIdlePercent.*)><>(Value)
    name: kafka_network_$1_$2_$3
    type: GAUGE
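
To sanity-check how one of these rules rewrites a name: the MBean kafka.server<type=ReplicaManager, name=LeaderCount><>Value matches the ReplicaManager rule with $1=ReplicaManager, $2=LeaderCount, $3=Value, and lowercaseOutputName: true then lowercases the result. A sketch of just the substitution (not the exporter itself):

```shell
# Capture groups from the matching rule (values assumed for illustration):
t="ReplicaManager"; n="LeaderCount"; v="Value"
# Apply the name template kafka_server_$1_$2_$3, then lowercase:
echo "kafka_server_${t}_${n}_${v}" | tr '[:upper:]' '[:lower:]'
# prints kafka_server_replicamanager_leadercount_value
```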




Start the JMX Exporter again


  > jmx_exporter 192.168.0.140:9999 \
  /etc/jmx_exporter/jmx_exporter.yaml




  > curl -s 192.168.0.140:9999/metrics | grep -v ^# | wc -l



With a single topic on the Kafka server I get 516 data points.

If I filter down to just the Kafka data points



  > curl -s 192.168.0.140:9999/metrics | grep -v ^# | grep kafka | wc -l






So that works… but I think I prefer the agent to the standalone web server.


But I do like having the Debian package installer, because not only does it install

·        /etc/jmx_exporter/jmx_exporter.yaml
·        /usr/bin/jmx_exporter

it also installs the jar file

·        /usr/share/jmx_exporter/jmx_prometheus_httpserver-0.10-SNAPSHOT-jar-with-dependencies.jar


This gives me a good default location to put the jmx java agent exporter jar file.





Setting up the agent


Let me download the latest Maven build of the jmx_prometheus_javaagent-0.9.jar file and place it in the /usr/share/jmx_exporter/ folder.



  > sudo wget -P /usr/share/jmx_exporter \
https://repo1.maven.org/maven2/io/prometheus/jmx/jmx_prometheus_javaagent/0.9/jmx_prometheus_javaagent-0.9.jar




Let me go back and edit my kafka start script


  > sudo vi /opt/kafka/kafka_2.11-0.10.1.0/bin/kafka-server-start.sh

I am going to leave remote JMX on, as I will still use it for other reasons.



Add this to the beginning


JMX_EXP_JAR="/usr/share/jmx_exporter/jmx_prometheus_javaagent-0.9.jar"
JMX_EXP_PORT="9999"
JMX_EXP_CONFIG="/etc/jmx_exporter/jmx_exporter.yaml"
export KAFKA_OPTS="$KAFKA_OPTS -javaagent:$JMX_EXP_JAR=$JMX_EXP_PORT:$JMX_EXP_CONFIG"
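
Echoing the assembled flag is a quick way to eyeball the javaagent syntax, which takes the form -javaagent:<jar>=<port>:<config-file>:

```shell
# Same variables as in the start script above:
JMX_EXP_JAR="/usr/share/jmx_exporter/jmx_prometheus_javaagent-0.9.jar"
JMX_EXP_PORT="9999"
JMX_EXP_CONFIG="/etc/jmx_exporter/jmx_exporter.yaml"
# Print the agent argument that gets appended to KAFKA_OPTS:
echo "-javaagent:$JMX_EXP_JAR=$JMX_EXP_PORT:$JMX_EXP_CONFIG"
```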





Now restart Kafka (in my case I run this)


  > sudo /opt/kafka/kafka_2.11-0.10.1.0/bin/kafka-server-start.sh /opt/kafka/kafka_2.11-0.10.1.0/config/server.properties


Now the JMX Exporter agent should be running and I should be able to get my data out at localhost:9999/metrics


  > curl -s localhost:9999/metrics | grep -v ^# | grep kafka | wc -l




Yep, it worked just fine.


References


[1]        Prometheus and JMX

