Apache Karaf with a Message Broker and Monitoring Dashboards via ELK
Expanding on the architecture design above: the message broker handles communication between servers, from one application to many. All inter-application messaging therefore goes through the broker, and the applications exchange standard message formats such as XML and SOAP.
While applications talk to each other through the message broker, we can collect events and logs on each server and keep the transaction data, then visualize it with the ELK stack. In this post I would like to share the components you need to offer a dashboard that tracks data moving between a stand-alone server and other servers, built on ELK.
Karaf is a lightweight, powerful, and enterprise-ready application runtime. It provides all the ecosystem and bootstrapping options you need for your applications, and it runs on-premises or in the cloud. Karaf is polymorphic, meaning it can host any kind of application: WAR, OSGi, Spring, and much more.
https://karaf.apache.org/documentation.html.
Message brokers
A message broker is software that enables applications, systems, and services to communicate with each other and exchange information. The message broker does this by translating messages between formal messaging protocols. This allows interdependent services to “talk” with one another directly, even if they were written in different languages or implemented on different platforms.
https://www.ibm.com/cloud/learn/message-brokers.
ELK is the acronym for three open source projects: Elasticsearch, Logstash, and Kibana. Elasticsearch is a search and analytics engine. Logstash is a server-side data processing pipeline that ingests data from multiple sources simultaneously, transforms it, and then sends it to a “stash” like Elasticsearch. Kibana lets users visualize data in Elasticsearch with charts and graphs.
https://www.elastic.co/what-is/elk-stack.
OK, hands-on.
Prerequisite
- OS (RHEL in this example)
- Apache Karaf
- WMQ (IBM WebSphere MQ) message broker client
- Tomcat (optional)
- ELK
- PostgreSQL DB (optional), around the broker end-point
Install Apache Karaf
Download the .tgz from the official download site.
Check the Java version:
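For example (Karaf 4.2.x runs on Java 8 or 11):

java -version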
Download Apache Karaf:
wget https://archive.apache.org/dist/karaf/4.2.9/apache-karaf-4.2.9.tar.gz
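Then unpack and enter the distribution (the version shown is only an example):

tar -xzf apache-karaf-4.2.9.tar.gz
cd apache-karaf-4.2.9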
Set up wrapper.conf
root@localhost:/bin/didania/node1/tnd/map0/app1map0/6.3/etc # vi App20-tnd6317-wrapper.conf
# ------------------------------------------------------------------------
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ------------------------------------------------------------------------

#********************************************************************
# Wrapper Properties
##********************************************************************
#set.default.JAVA_HOME=/bin/didania/node1/tnd/map0/app1map0/6.3/../../JDK8
set.default.JAVA_HOME=/bin/didania/node1/tnd/map0/JDK8
set.default.KARAF_HOME=/bin/didania/node1/tnd/map0/app1map0/6.3
set.default.KARAF_BASE=/bin/didania/node1/tnd/map0/app1map0/6.3
set.default.KARAF_DATA=/busdata/didania/node1/tnd/map0/app1map0/6.3/data.6317
set.default.KARAF_ETC=/bin/didania/node1/tnd/map0/app1map0/6.3/etc
set.TESB_ENV_PASSWORD=AwdssUs4msP7tzc5lBy3asC

# Java Application
wrapper.working.dir=%KARAF_BASE%
wrapper.java.command=%JAVA_HOME%/bin/java
wrapper.java.mainclass=org.apache.karaf.wrapper.internal.service.Main
wrapper.java.classpath.1=%KARAF_HOME%/lib/boot/*.jar
wrapper.java.classpath.2=%KARAF_HOME%/lib/wrapper/*.jar
wrapper.java.library.path.1=%KARAF_HOME%/lib/wrapper/

# Application Parameters. Add parameters as needed starting from 1
#wrapper.app.parameter.1=

# JVM Parameters
# note that n is the parameter number starting from 1.
wrapper.java.additional.1=-Dkaraf.home=%KARAF_HOME%
wrapper.java.additional.2=-Dkaraf.base=%KARAF_BASE%
wrapper.java.additional.3=-Dkaraf.data=%KARAF_DATA%
wrapper.java.additional.4=-Dkaraf.etc=%KARAF_ETC%
wrapper.java.additional.5=-Dcom.sun.management.jmxremote
wrapper.java.additional.6=-Dkaraf.startLocalConsole=false
wrapper.java.additional.7=-Dkaraf.startRemoteShell=true
wrapper.java.additional.8=-Djava.endorsed.dirs=%JAVA_HOME%/jre/lib/endorsed:%JAVA_HOME%/lib/endorsed:%KARAF_HOME%/lib/endorsed
wrapper.java.additional.9=-Djava.ext.dirs=%JAVA_HOME%/jre/lib/ext:%JAVA_HOME%/lib/ext:%KARAF_HOME%/lib/ext
wrapper.java.additional.10=-Dcom.ibm.msg.client.commonservices.log.outputName=%KARAF_DATA%/log
wrapper.java.additional.11=-Dcom.ibm.msg.client.commonservices.trace.outputName=%KARAF_DATA%/log/mq-traces/
# added to change the garbage collector to G1 with a target of max pause of 500 ms
wrapper.java.additional.12=-XX:+UseG1GC
wrapper.java.additional.13=-XX:MaxGCPauseMillis=500
wrapper.java.additional.14=-verbosegc
#ibm mq debug
#wrapper.java.additional.15=-Dcom.ibm.msg.client.commonservices.trace.status=ON

# Uncomment to enable jmx
#wrapper.java.additional.n=-Dcom.sun.management.jmxremote.port=1616
#wrapper.java.additional.n=-Dcom.sun.management.jmxremote.authenticate=false
#wrapper.java.additional.n=-Dcom.sun.management.jmxremote.ssl=false

# Uncomment to enable YourKit profiling
#wrapper.java.additional.n=-Xrunyjpagent

# Uncomment to enable remote debugging
#wrapper.java.additional.n=-Xdebug -Xnoagent -Djava.compiler=NONE
#wrapper.java.additional.n=-Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=5005

# Initial Java Heap Size (in MB)
#wrapper.java.initmemory=3

# Maximum Java Heap Size (in MB)
wrapper.java.maxmemory=512

#********************************************************************
# Wrapper Logging Properties
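For reference, this wrapper.conf is normally generated by Karaf's service wrapper feature rather than written from scratch. A minimal sketch from the Karaf console:

karaf@root()> feature:install wrapper
karaf@root()> wrapper:install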
Start Karaf
./bin/start
SSH to the Karaf console (the SSH port here is environment-specific):
ssh -p 18107 karaf@localhost
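To verify the instance is up before connecting, bin/status ships with the standard Karaf distribution (default console credentials are karaf/karaf unless changed in etc/users.properties):

./bin/status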
MQ client: install the IBM MQ client libraries in Karaf (see the IBM MQ client download link).
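Once the client jars are available in Karaf, a connection factory can be wired up. A minimal sketch in Blueprint XML, assuming the IBM MQ JMS client bundles are installed; the host, port, queue manager, and channel values are hypothetical placeholders:

<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">
  <!-- Client-mode connection to a remote queue manager (placeholder values) -->
  <bean id="mqConnectionFactory" class="com.ibm.mq.jms.MQConnectionFactory">
    <property name="hostName" value="mq.example.com"/>
    <property name="port" value="1414"/>
    <property name="queueManager" value="QM1"/>
    <property name="channel" value="SYSTEM.DEF.SVRCONN"/>
    <!-- 1 = client transport (WMQConstants.WMQ_CM_CLIENT) -->
    <property name="transportType" value="1"/>
  </bean>
</blueprint>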
Install ELK
Elasticsearch
sudo apt-get update
sudo apt-get install elasticsearch
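Note that the prerequisites list RHEL while the commands above are for Debian/Ubuntu; on RHEL use the equivalent yum repository. Either way, the Elastic package repository must be registered before apt-get install elasticsearch will work. A sketch for APT, assuming the 7.x package line (match it to your ELK version):

wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" | sudo tee /etc/apt/sources.list.d/elastic-7.x.list
sudo apt-get update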
Configure the node and cluster in elasticsearch.yml
##################### Elasticsearch Configuration Example #####################
# This file contains an overview of various configuration settings.

#give your cluster a name.
cluster.name: my-cluster
#give your nodes a name (change node number from node to node).
node.name: "es-node-1"
#define this node as master-eligible:
node.master: true
#define this node as a data node:
node.data: true
#enter the private IP and port of your node:
network.host: localhost
http.port: 9200
#detail the private IPs of your nodes (if you have more than one node in the cluster):
discovery.zen.ping.unicast.hosts: ["172.11.61.27", "172.31.22.131","172.31.32.221"]
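One caveat: discovery.zen.ping.unicast.hosts is the setting name for Elasticsearch 6.x and earlier; from 7.0 onward the equivalent is discovery.seed_hosts, e.g.:

discovery.seed_hosts: ["172.11.61.27", "172.31.22.131", "172.31.32.221"]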
Run Elasticsearch
sudo service elasticsearch start
http://localhost:9200/_cat/nodes?v
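A cluster-level health check is also worth running (standard Elasticsearch API):

curl http://localhost:9200/_cluster/health?pretty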
Logstash
sudo apt-get update && sudo apt-get install logstash
Set logstash-simple.conf (then run Logstash with: bin/logstash -f logstash-simple.conf):
input {
log4j {
port => 18050
type => bsm__0
}
}
## input from Log4j Plugin
filter {
if [type] == "syslog" {
grok {
match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
add_field => [ "received_at", "%{@timestamp}" ]
add_field => [ "received_from", "%{host}" ]
}
syslog_pri { }
date {
match => [ "syslog_timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
}
}
}
## Format events and send them to the Elasticsearch cluster
output {
elasticsearch { hosts => ["localhost:9200"] }
stdout { codec => rubydebug }
}
# File to Elasticsearch
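Before shipping real traffic, the pipeline file can be validated without starting it; the --config.test_and_exit flag has been available since Logstash 5:

bin/logstash -f logstash-simple.conf --config.test_and_exit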
Kibana: just follow the official install link.
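A minimal sketch of installing Kibana from the same Elastic package repository and pointing it at the local Elasticsearch; note that elasticsearch.hosts is the 7.x setting name (6.x used elasticsearch.url):

sudo apt-get install kibana

Then edit /etc/kibana/kibana.yml:

server.port: 5601
server.host: "localhost"
elasticsearch.hosts: ["http://localhost:9200"]

And start it:

sudo service kibana start

The dashboard is then available at http://localhost:5601.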