This document shows the setup and configuration required to run Logstash, Elasticsearch, Kibana, and ElastAlert for log analysis and alerting.
The setup described here is for macOS and assumes you have the Homebrew package manager installed.
Step 1: Run "brew install logstash"
Step 2: Run "brew install elasticsearch"
Step 3: Run "brew install kibana"
As an example, we will feed the Selenium server logs into Elasticsearch, display them in Kibana, and raise alerts by querying Elasticsearch with ElastAlert.
Run Selenium Server:
1. cd Downloads
2. nohup java -jar -Dselenium.LOGGER=output.log -Dselenium.LOGGER.level=FINEST selenium-server-standalone-3.4.0.jar &
An output.log file will now be generated in the Downloads directory.
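If the server started cleanly, the log file should exist and grow over time. A quick sketch to check this (run from the Downloads directory; it prints a hint if the file is missing):

```shell
# "output.log" is the file named by the -Dselenium.LOGGER flag above
if [ -f output.log ]; then
  tail -n 20 output.log            # spot-check the newest entries
else
  echo "output.log not found yet - check nohup.out for startup errors"
fi
```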
Configure Logstash:
1. Create a Logstash configuration file in the "Downloads" directory as follows:
input {
  file {
    path => "/Users/Yuvaraj/Downloads/output.log"
    start_position => "beginning"
  }
}
filter {
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
  geoip {
    source => "clientip"
  }
}
output {
  elasticsearch {
  }
}
and save it as selenium-log.conf.
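One caveat worth knowing: %{COMBINEDAPACHELOG} is a grok pattern for Apache access logs, so Selenium's java.util.logging-style lines will not match it; those events are tagged _grokparsefailure but are still indexed with the raw message field, which is all the ElastAlert filter later in this document needs. If you want structured fields instead, a sketch of a closer pattern is below (hypothetical; the exact line format depends on your Selenium version and logger settings):

```
filter {
  grok {
    # Hypothetical pattern for lines like "09:34:36.967 INFO - Selenium build info: ..."
    match => { "message" => "%{TIME:timestamp} %{LOGLEVEL:loglevel} - %{GREEDYDATA:logmessage}" }
  }
}
```

After editing, you can validate the file without starting the pipeline by running logstash -f selenium-log.conf --config.test_and_exit (Logstash 5.x).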
2. Now run Logstash with this configuration from the "Downloads" directory:
logstash -f selenium-log.conf
Logstash should then display logs like the following:
Sending Logstash's logs to /usr/local/Cellar/logstash/5.4.2/libexec/logs which is now configured via log4j2.properties
[2017-07-06T09:34:36,967][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://127.0.0.1:9200/]}}
[2017-07-06T09:34:36,972][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://127.0.0.1:9200/, :path=>"/"}
[2017-07-06T09:34:37,050][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>#<URI::HTTP:0xeda20da URL:http://127.0.0.1:9200/>, :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://127.0.0.1:9200/][Manticore::SocketException] Connection refused"}
[2017-07-06T09:34:37,051][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2017-07-06T09:34:37,056][WARN ][logstash.outputs.elasticsearch] Marking url as dead. Last error: [LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError] Elasticsearch Unreachable: [http://127.0.0.1:9200/][Manticore::SocketException] Connection refused {:url=>http://127.0.0.1:9200/, :error_message=>"Elasticsearch Unreachable: [http://127.0.0.1:9200/][Manticore::SocketException] Connection refused", :error_class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError"}
[2017-07-06T09:34:37,057][ERROR][logstash.outputs.elasticsearch] Failed to install template. {:message=>"Elasticsearch Unreachable: [http://127.0.0.1:9200/][Manticore::SocketException] Connection refused", :class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError", :backtrace=>["/usr/local/Cellar/logstash/5.4.2/libexec/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-7.3.2-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:271:in `perform_request_to_url'", "/usr/local/Cellar/logstash/5.4.2/libexec/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-7.3.2-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:257:in `perform_request'", "/usr/local/Cellar/logstash/5.4.2/libexec/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-7.3.2-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:347:in `with_connection'", "/usr/local/Cellar/logstash/5.4.2/libexec/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-7.3.2-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:256:in `perform_request'", "/usr/local/Cellar/logstash/5.4.2/libexec/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-7.3.2-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:264:in `get'", "/usr/local/Cellar/logstash/5.4.2/libexec/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-7.3.2-java/lib/logstash/outputs/elasticsearch/http_client.rb:86:in `get_version'", "/usr/local/Cellar/logstash/5.4.2/libexec/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-7.3.2-java/lib/logstash/outputs/elasticsearch/template_manager.rb:16:in `get_es_version'", "/usr/local/Cellar/logstash/5.4.2/libexec/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-7.3.2-java/lib/logstash/outputs/elasticsearch/template_manager.rb:20:in `get_es_major_version'", 
"/usr/local/Cellar/logstash/5.4.2/libexec/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-7.3.2-java/lib/logstash/outputs/elasticsearch/template_manager.rb:7:in `install_template'", "/usr/local/Cellar/logstash/5.4.2/libexec/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-7.3.2-java/lib/logstash/outputs/elasticsearch/common.rb:62:in `install_template'", "/usr/local/Cellar/logstash/5.4.2/libexec/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-7.3.2-java/lib/logstash/outputs/elasticsearch/common.rb:29:in `register'", "/usr/local/Cellar/logstash/5.4.2/libexec/logstash-core/lib/logstash/output_delegator_strategies/shared.rb:9:in `register'", "/usr/local/Cellar/logstash/5.4.2/libexec/logstash-core/lib/logstash/output_delegator.rb:41:in `register'", "/usr/local/Cellar/logstash/5.4.2/libexec/logstash-core/lib/logstash/pipeline.rb:268:in `register_plugin'", "/usr/local/Cellar/logstash/5.4.2/libexec/logstash-core/lib/logstash/pipeline.rb:279:in `register_plugins'", "org/jruby/RubyArray.java:1613:in `each'", "/usr/local/Cellar/logstash/5.4.2/libexec/logstash-core/lib/logstash/pipeline.rb:279:in `register_plugins'", "/usr/local/Cellar/logstash/5.4.2/libexec/logstash-core/lib/logstash/pipeline.rb:288:in `start_workers'", "/usr/local/Cellar/logstash/5.4.2/libexec/logstash-core/lib/logstash/pipeline.rb:214:in `run'", "/usr/local/Cellar/logstash/5.4.2/libexec/logstash-core/lib/logstash/agent.rb:398:in `start_pipeline'"]}
[2017-07-06T09:34:37,059][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>[#<URI::Generic:0x289e630a URL://127.0.0.1>]}
[2017-07-06T09:34:37,109][INFO ][logstash.filters.geoip ] Using geoip database {:path=>"/usr/local/Cellar/logstash/5.4.2/libexec/vendor/bundle/jruby/1.9/gems/logstash-filter-geoip-4.1.1-java/vendor/GeoLite2-City.mmdb"}
[2017-07-06T09:34:37,132][INFO ][logstash.pipeline ] Starting pipeline {"id"=>"main", "pipeline.workers"=>8, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>1000}
[2017-07-06T09:34:37,254][INFO ][logstash.pipeline ] Pipeline main started
[2017-07-06T09:34:37,295][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2017-07-06T09:34:42,071][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://127.0.0.1:9200/, :path=>"/"}
[2017-07-06T09:34:42,076][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>#<URI::HTTP:0xeda20da URL:http://127.0.0.1:9200/>, :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://127.0.0.1:9200/][Manticore::SocketException] Connection refused"}
(the same health-check INFO/WARN pair repeats every few seconds until Elasticsearch comes up)
[2017-07-06T09:35:07,162][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://127.0.0.1:9200/, :path=>"/"}
[2017-07-06T09:35:07,269][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>#<URI::HTTP:0xeda20da URL:http://127.0.0.1:9200/>}
Now, you can confirm that Logstash is running with:
curl -i http://localhost:9600/
Response: 200
{"host":"Z4437-E6B0-1CA8.uk.ad.ba.com","version":"5.4.2","http_address":"127.0.0.1:9600","id":"2b161c1d-ac74-4840-b716-67577b2c03d7","name":"Z4437-E6B0-1CA8.uk.ad.ba.com","build_date":"2017-06-15T03:03:41Z","build_sha":"76c4926f4da6fa4a5d1ec9f5e5770dd85a8b1958","build_snapshot":false}
Configure & Run Elasticsearch:
1. Run Elasticsearch with the following commands:
cd Downloads
elasticsearch
You should see logs similar to the following:
[2017-07-06T09:34:56,207][INFO ][o.e.n.Node ] [] initializing ...
[2017-07-06T09:34:56,287][INFO ][o.e.e.NodeEnvironment ] [4_D7gOx] using [1] data paths, mounts [[/ (/dev/disk1)]], net usable_space [59.7gb], net total_space [232.6gb], spins? [unknown], types [hfs]
[2017-07-06T09:34:56,288][INFO ][o.e.e.NodeEnvironment ] [4_D7gOx] heap size [1.9gb], compressed ordinary object pointers [true]
[2017-07-06T09:34:56,351][INFO ][o.e.n.Node ] node name [4_D7gOx] derived from node ID [4_D7gOxPQZCwvn8sll6G4A]; set [node.name] to override
[2017-07-06T09:34:56,352][INFO ][o.e.n.Node ] version[5.4.3], pid[2569], build[eed30a8/2017-06-22T00:34:03.743Z], OS[Mac OS X/10.11.6/x86_64], JVM[Oracle Corporation/Java HotSpot(TM) 64-Bit Server VM/1.8.0_51/25.51-b03]
[2017-07-06T09:34:56,352][INFO ][o.e.n.Node ] JVM arguments [-Xms2g, -Xmx2g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -XX:+DisableExplicitGC, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -Djdk.io.permissionsUseCanonicalPath=true, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Dlog4j.skipJansi=true, -XX:+HeapDumpOnOutOfMemoryError, -Des.path.home=/usr/local/Cellar/elasticsearch/5.4.3/libexec]
[2017-07-06T09:34:57,121][INFO ][o.e.p.PluginsService ] [4_D7gOx] loaded module [aggs-matrix-stats]
[2017-07-06T09:34:57,122][INFO ][o.e.p.PluginsService ] [4_D7gOx] loaded module [ingest-common]
[2017-07-06T09:34:57,122][INFO ][o.e.p.PluginsService ] [4_D7gOx] loaded module [lang-expression]
[2017-07-06T09:34:57,122][INFO ][o.e.p.PluginsService ] [4_D7gOx] loaded module [lang-groovy]
[2017-07-06T09:34:57,122][INFO ][o.e.p.PluginsService ] [4_D7gOx] loaded module [lang-mustache]
[2017-07-06T09:34:57,122][INFO ][o.e.p.PluginsService ] [4_D7gOx] loaded module [lang-painless]
[2017-07-06T09:34:57,122][INFO ][o.e.p.PluginsService ] [4_D7gOx] loaded module [percolator]
[2017-07-06T09:34:57,122][INFO ][o.e.p.PluginsService ] [4_D7gOx] loaded module [reindex]
[2017-07-06T09:34:57,122][INFO ][o.e.p.PluginsService ] [4_D7gOx] loaded module [transport-netty3]
[2017-07-06T09:34:57,122][INFO ][o.e.p.PluginsService ] [4_D7gOx] loaded module [transport-netty4]
[2017-07-06T09:34:57,122][INFO ][o.e.p.PluginsService ] [4_D7gOx] no plugins loaded
[2017-07-06T09:34:58,526][INFO ][o.e.d.DiscoveryModule ] [4_D7gOx] using discovery type [zen]
[2017-07-06T09:34:59,105][INFO ][o.e.n.Node ] initialized
[2017-07-06T09:34:59,106][INFO ][o.e.n.Node ] [4_D7gOx] starting ...
[2017-07-06T09:34:59,254][INFO ][o.e.t.TransportService ] [4_D7gOx] publish_address {127.0.0.1:9300}, bound_addresses {[fe80::1]:9300}, {[::1]:9300}, {127.0.0.1:9300}
[2017-07-06T09:35:02,301][INFO ][o.e.c.s.ClusterService ] [4_D7gOx] new_master {4_D7gOx}{4_D7gOxPQZCwvn8sll6G4A}{gGc_p74URDm6iT5_rczMOg}{127.0.0.1}{127.0.0.1:9300}, reason: zen-disco-elected-as-master ([0] nodes joined)
[2017-07-06T09:35:02,319][INFO ][o.e.h.n.Netty4HttpServerTransport] [4_D7gOx] publish_address {127.0.0.1:9200}, bound_addresses {[fe80::1]:9200}, {[::1]:9200}, {127.0.0.1:9200}
[2017-07-06T09:35:02,321][INFO ][o.e.n.Node ] [4_D7gOx] started
[2017-07-06T09:35:02,688][INFO ][o.e.g.GatewayService ] [4_D7gOx] recovered [14] indices into cluster_state
[2017-07-06T09:35:03,391][INFO ][o.e.c.r.a.AllocationService] [4_D7gOx] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[logstash-2017.06.30][4]] ...]).
[2017-07-06T09:36:22,753][INFO ][o.e.c.m.MetaDataCreateIndexService] [4_D7gOx] [logstash-2017.07.06] creating index, cause [auto(bulk api)], templates [logstash], shards [5]/[1], mappings [_default_]
[2017-07-06T09:36:22,934][INFO ][o.e.c.m.MetaDataMappingService] [4_D7gOx] [logstash-2017.07.06/9Y3_2-ImRP6F3P0DQH61Bg] create_mapping [logs]
[2017-07-06T12:07:48,059][INFO ][o.e.c.m.MetaDataMappingService] [4_D7gOx] [elastalert_status/x3wnpwI-QnSWrDFbBrPh4A] update_mapping [silence]
[2017-07-06T12:17:25,204][INFO ][o.e.c.m.MetaDataMappingService] [4_D7gOx] [elastalert_status/x3wnpwI-QnSWrDFbBrPh4A] update_mapping [elastalert_error]
[2017-07-06T12:17:25,219][INFO ][o.e.c.m.MetaDataMappingService] [4_D7gOx] [elastalert_status/x3wnpwI-QnSWrDFbBrPh4A] update_mapping [elastalert]
[2017-07-06T13:49:19,911][INFO ][o.e.c.m.MetaDataMappingService] [4_D7gOx] [elastalert_status/x3wnpwI-QnSWrDFbBrPh4A] update_mapping [elastalert]
[2017-07-07T09:36:51,669][INFO ][o.e.c.m.MetaDataCreateIndexService] [4_D7gOx] [logstash-2017.07.07] creating index, cause [auto(bulk api)], templates [logstash], shards [5]/[1], mappings [_default_]
[2017-07-07T09:36:51,744][INFO ][o.e.c.m.MetaDataMappingService] [4_D7gOx] [logstash-2017.07.07/GldhiF5YRbCLVY3FoVidpw] create_mapping [logs]
You can confirm that Elasticsearch is running by making the following request:
Z4437-E6B0-1CA8:Downloads Yuvaraj$ curl -i http://localhost:9200/
HTTP/1.1 200 OK
content-type: application/json; charset=UTF-8
content-length: 335
{
  "name" : "4_D7gOx",
  "cluster_name" : "elasticsearch_Yuvaraj",
  "cluster_uuid" : "5aVFyKihT9O61L1XdEopbA",
  "version" : {
    "number" : "5.4.3",
    "build_hash" : "eed30a8",
    "build_date" : "2017-06-22T00:34:03.743Z",
    "build_snapshot" : false,
    "lucene_version" : "6.5.1"
  },
  "tagline" : "You Know, for Search"
}
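Once Logstash has shipped some events, you can also verify that the daily logstash-* index was created. A quick sketch using Elasticsearch's _cat and _count APIs (assumes Elasticsearch is up on the default port):

```shell
# Lists indices; expect a logstash-YYYY.MM.DD entry once events arrive
curl 'http://localhost:9200/_cat/indices?v'
# Count the documents indexed so far across all logstash indices
curl 'http://localhost:9200/logstash-*/_count?pretty'
```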
Configure & Run Kibana:
1. cd Downloads
kibana
log [12:58:27.113] [info][status][plugin:kibana@5.4.2] Status changed from uninitialized to green - Ready
log [12:58:27.190] [info][status][plugin:elasticsearch@5.4.2] Status changed from uninitialized to yellow - Waiting for Elasticsearch
log [12:58:27.223] [info][status][plugin:console@5.4.2] Status changed from uninitialized to green - Ready
log [12:58:27.232] [warning] You're running Kibana 5.4.2 with some different versions of Elasticsearch. Update Kibana or Elasticsearch to the same version to prevent compatibility issues: v5.4.3 @ 127.0.0.1:9200 (127.0.0.1)
log [12:58:27.243] [info][status][plugin:metrics@5.4.2] Status changed from uninitialized to green - Ready
log [12:58:27.403] [info][status][plugin:elasticsearch@5.4.2] Status changed from yellow to green - Kibana index ready
log [12:58:27.404] [info][status][plugin:timelion@5.4.2] Status changed from uninitialized to green - Ready
log [12:58:27.408] [info][listening] Server running at http://localhost:5601
log [12:58:27.409] [info][status][ui settings] Status changed from uninitialized to green - Ready
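Besides the browser, you can poll Kibana's status endpoint from the shell (a quick sketch; assumes Kibana is up on its default port 5601):

```shell
# Returns a JSON status document; the overall state should be "green"
curl -s http://localhost:5601/api/status
```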
Now, launch "http://localhost:5601" in your favourite browser. You will see the Kibana web app.
With this, you have the log-analysis tools up and running. Once you configure an index pattern (e.g. logstash-*), you can browse the logs in the Discover tab.
Configure ElastAlert to monitor the logs and alert:
1. Install ElastAlert with Python as per the documentation available at
https://elastalert.readthedocs.io/en/latest/running_elastalert.html
2. Once the installation is complete, create the following configuration files and then run ElastAlert.
ElastAlert needs a global config.yaml that it always reads, so create a config.yaml as follows:
es_host: localhost
es_port: 9200
smtp_host: smtp.gmail.com
email: gyuvaraj16@gmail.com
smtp_port: 465
smtp_ssl: true
smtp_auth_file: '/Users/Yuvaraj/Downloads/smtp_auth_file.yaml'
from_addr: gyuvaraj10@gmail.com
rules_folder: elastrules
buffer_time:
  hours: 1000
run_every:
  minutes: 1
writeback_index: elastalert_status
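The config above points smtp_auth_file at /Users/Yuvaraj/Downloads/smtp_auth_file.yaml. Per the ElastAlert documentation, that file holds the SMTP credentials as user and password keys; a minimal sketch (the values are placeholders, not real credentials):

```
user: "gyuvaraj16@gmail.com"
password: "your-smtp-password"
```

For Gmail over SSL (port 465 as configured above), an app-specific password is typically required rather than the account password.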
Now, create a rule & alert configuration YAML file as follows:
mkdir elastrules
cd elastrules
vim selenium-rule.yaml
alert:
  - "command"
command: "echo {match[username]}"
email:
  - "yuvaraj.gunisetti@ba.com"
es_host: localhost
es_port: 9200
filter:
  - query:
      query_string:
        query: "message: com.opera.core.systems.OperaDriver"
index: logstash-*
name: Selenium_Alert
num_events: 1
timeframe:
  hours: 1000
type: frequency
realert:
  minutes: 0
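Before running ElastAlert for real, it is worth creating the writeback index and dry-running the rule. This sketch assumes the elastalert-create-index and elastalert-test-rule entry points installed by pip are on your PATH and that Elasticsearch is running:

```shell
# Run from the Downloads directory (where config.yaml lives)
elastalert-create-index                              # creates the elastalert_status writeback index
elastalert-test-rule elastrules/selenium-rule.yaml   # dry run: queries ES and shows would-be matches without alerting
```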
Now, go to the Downloads directory where config.yaml is located and run the following command:
python -m elastalert.elastalert --verbose --rule elastrules/selenium-rule.yaml --es_debug_trace file.log
Now, if there are any matches for the pattern specified in the selenium-rule.yaml "filter", you will see logs like the following:
INFO:elastalert:Queried rule Selenium_Alert from 2017-07-06 13:52 BST to 2017-07-07 09:37 BST: 60 / 60 hits
{match[username]}
{match[username]}
(30 more identical lines omitted, one per match)
INFO:elastalert:Ran Selenium_Alert from 2017-07-06 13:52 BST to 2017-07-07 09:37 BST: 60 query hits (28 already seen), 32 matches, 32 alerts sent
You are done.