Friday, July 7, 2017

Log Analysis and Alert system with ELK and Elastalert



This document shows the setup and configuration required to run Logstash, Elasticsearch, and Kibana for log analysis, with ElastAlert for alerting.

This document shows the setup on macOS, assuming you have the Homebrew package manager.
 Step 1: Run "brew install logstash"
 Step 2: Run "brew install elasticsearch"
 Step 3: Run "brew install kibana"

This example ingests the Selenium server logs, displays them in Kibana, and raises alerts by querying Elasticsearch with ElastAlert.

Run Selenium Server:
1. cd Downloads
2. nohup java -jar -Dselenium.LOGGER=output.log -Dselenium.LOGGER.level=FINEST selenium-server-standalone-3.4.0.jar &

Now, we will see an output.log file generated in the Downloads directory.

Configure Logstash:
  1. Create a Logstash configuration file in the "Downloads" directory as follows
input {

  file {
     path => "/Users/Yuvaraj/Downloads/output.log"
     start_position => "beginning"
  }
}
filter {
  grok {
     match => { "message" => "%{COMBINEDAPACHELOG}"}
  }
  geoip {
     source => "clientip"
  }
}
output {
  elasticsearch {

  }
}

and save it as selenium-log.conf.
2. Now run Logstash with this configuration from the "Downloads" directory (you can validate the file first with "logstash -f selenium-log.conf --config.test_and_exit"):
    logstash -f selenium-log.conf

Logstash should now print logs like the following. The "Connection refused" warnings and the "Failed to install template" error are expected at this point, because Elasticsearch is not running yet; they stop once Elasticsearch starts, as the restored-connection line at the end shows.
Sending Logstash's logs to /usr/local/Cellar/logstash/5.4.2/libexec/logs which is now configured via log4j2.properties
[2017-07-06T09:34:36,967][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://127.0.0.1:9200/]}}
[2017-07-06T09:34:36,972][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://127.0.0.1:9200/, :path=>"/"}
[2017-07-06T09:34:37,050][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>#<URI::HTTP:0xeda20da URL:http://127.0.0.1:9200/>, :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://127.0.0.1:9200/][Manticore::SocketException] Connection refused"}
[2017-07-06T09:34:37,051][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2017-07-06T09:34:37,056][WARN ][logstash.outputs.elasticsearch] Marking url as dead. Last error: [LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError] Elasticsearch Unreachable: [http://127.0.0.1:9200/][Manticore::SocketException] Connection refused {:url=>http://127.0.0.1:9200/, :error_message=>"Elasticsearch Unreachable: [http://127.0.0.1:9200/][Manticore::SocketException] Connection refused", :error_class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError"}
[2017-07-06T09:34:37,057][ERROR][logstash.outputs.elasticsearch] Failed to install template. {:message=>"Elasticsearch Unreachable: [http://127.0.0.1:9200/][Manticore::SocketException] Connection refused", :class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError", :backtrace=>["/usr/local/Cellar/logstash/5.4.2/libexec/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-7.3.2-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:271:in `perform_request_to_url'", "/usr/local/Cellar/logstash/5.4.2/libexec/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-7.3.2-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:257:in `perform_request'", "/usr/local/Cellar/logstash/5.4.2/libexec/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-7.3.2-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:347:in `with_connection'", "/usr/local/Cellar/logstash/5.4.2/libexec/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-7.3.2-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:256:in `perform_request'", "/usr/local/Cellar/logstash/5.4.2/libexec/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-7.3.2-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:264:in `get'", "/usr/local/Cellar/logstash/5.4.2/libexec/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-7.3.2-java/lib/logstash/outputs/elasticsearch/http_client.rb:86:in `get_version'", "/usr/local/Cellar/logstash/5.4.2/libexec/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-7.3.2-java/lib/logstash/outputs/elasticsearch/template_manager.rb:16:in `get_es_version'", "/usr/local/Cellar/logstash/5.4.2/libexec/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-7.3.2-java/lib/logstash/outputs/elasticsearch/template_manager.rb:20:in `get_es_major_version'", 
"/usr/local/Cellar/logstash/5.4.2/libexec/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-7.3.2-java/lib/logstash/outputs/elasticsearch/template_manager.rb:7:in `install_template'", "/usr/local/Cellar/logstash/5.4.2/libexec/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-7.3.2-java/lib/logstash/outputs/elasticsearch/common.rb:62:in `install_template'", "/usr/local/Cellar/logstash/5.4.2/libexec/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-7.3.2-java/lib/logstash/outputs/elasticsearch/common.rb:29:in `register'", "/usr/local/Cellar/logstash/5.4.2/libexec/logstash-core/lib/logstash/output_delegator_strategies/shared.rb:9:in `register'", "/usr/local/Cellar/logstash/5.4.2/libexec/logstash-core/lib/logstash/output_delegator.rb:41:in `register'", "/usr/local/Cellar/logstash/5.4.2/libexec/logstash-core/lib/logstash/pipeline.rb:268:in `register_plugin'", "/usr/local/Cellar/logstash/5.4.2/libexec/logstash-core/lib/logstash/pipeline.rb:279:in `register_plugins'", "org/jruby/RubyArray.java:1613:in `each'", "/usr/local/Cellar/logstash/5.4.2/libexec/logstash-core/lib/logstash/pipeline.rb:279:in `register_plugins'", "/usr/local/Cellar/logstash/5.4.2/libexec/logstash-core/lib/logstash/pipeline.rb:288:in `start_workers'", "/usr/local/Cellar/logstash/5.4.2/libexec/logstash-core/lib/logstash/pipeline.rb:214:in `run'", "/usr/local/Cellar/logstash/5.4.2/libexec/logstash-core/lib/logstash/agent.rb:398:in `start_pipeline'"]}
[2017-07-06T09:34:37,059][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>[#<URI::Generic:0x289e630a URL://127.0.0.1>]}
[2017-07-06T09:34:37,109][INFO ][logstash.filters.geoip   ] Using geoip database {:path=>"/usr/local/Cellar/logstash/5.4.2/libexec/vendor/bundle/jruby/1.9/gems/logstash-filter-geoip-4.1.1-java/vendor/GeoLite2-City.mmdb"}
[2017-07-06T09:34:37,132][INFO ][logstash.pipeline        ] Starting pipeline {"id"=>"main", "pipeline.workers"=>8, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>1000}
[2017-07-06T09:34:37,254][INFO ][logstash.pipeline        ] Pipeline main started
[2017-07-06T09:34:37,295][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
[2017-07-06T09:34:42,071][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://127.0.0.1:9200/, :path=>"/"}
[2017-07-06T09:34:42,076][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>#<URI::HTTP:0xeda20da URL:http://127.0.0.1:9200/>, :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://127.0.0.1:9200/][Manticore::SocketException] Connection refused"}
[2017-07-06T09:35:07,269][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>#<URI::HTTP:0xeda20da URL:http://127.0.0.1:9200/>}

Now, you can confirm that Logstash is running with:

curl -i http://localhost:9600/

Response: 200
{"host":"Z4437-E6B0-1CA8.uk.ad.ba.com","version":"5.4.2","http_address":"127.0.0.1:9600","id":"2b161c1d-ac74-4840-b716-67577b2c03d7","name":"Z4437-E6B0-1CA8.uk.ad.ba.com","build_date":"2017-06-15T03:03:41Z","build_sha":"76c4926f4da6fa4a5d1ec9f5e5770dd85a8b1958","build_snapshot":false}
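For a scripted health check you could also parse the JSON body programmatically; a small Python sketch, using the exact response shown above:

```python
import json

# Response body returned by the Logstash monitoring API at http://localhost:9600/
response = '{"host":"Z4437-E6B0-1CA8.uk.ad.ba.com","version":"5.4.2","http_address":"127.0.0.1:9600","id":"2b161c1d-ac74-4840-b716-67577b2c03d7","name":"Z4437-E6B0-1CA8.uk.ad.ba.com","build_date":"2017-06-15T03:03:41Z","build_sha":"76c4926f4da6fa4a5d1ec9f5e5770dd85a8b1958","build_snapshot":false}'

info = json.loads(response)
print(info["version"])       # the Logstash version, 5.4.2 here
print(info["http_address"])  # the API bind address, 127.0.0.1:9600 here
```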

Configure & Run Elasticsearch:
1. Run Elasticsearch with the following commands
    cd Downloads
    elasticsearch
You should see logs like the following
[2017-07-06T09:34:56,207][INFO ][o.e.n.Node               ] [] initializing ...
[2017-07-06T09:34:56,287][INFO ][o.e.e.NodeEnvironment    ] [4_D7gOx] using [1] data paths, mounts [[/ (/dev/disk1)]], net usable_space [59.7gb], net total_space [232.6gb], spins? [unknown], types [hfs]
[2017-07-06T09:34:56,288][INFO ][o.e.e.NodeEnvironment    ] [4_D7gOx] heap size [1.9gb], compressed ordinary object pointers [true]
[2017-07-06T09:34:56,351][INFO ][o.e.n.Node               ] node name [4_D7gOx] derived from node ID [4_D7gOxPQZCwvn8sll6G4A]; set [node.name] to override
[2017-07-06T09:34:56,352][INFO ][o.e.n.Node               ] version[5.4.3], pid[2569], build[eed30a8/2017-06-22T00:34:03.743Z], OS[Mac OS X/10.11.6/x86_64], JVM[Oracle Corporation/Java HotSpot(TM) 64-Bit Server VM/1.8.0_51/25.51-b03]
[2017-07-06T09:34:56,352][INFO ][o.e.n.Node               ] JVM arguments [-Xms2g, -Xmx2g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -XX:+DisableExplicitGC, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -Djdk.io.permissionsUseCanonicalPath=true, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Dlog4j.skipJansi=true, -XX:+HeapDumpOnOutOfMemoryError, -Des.path.home=/usr/local/Cellar/elasticsearch/5.4.3/libexec]
[2017-07-06T09:34:57,121][INFO ][o.e.p.PluginsService     ] [4_D7gOx] loaded module [aggs-matrix-stats]
[2017-07-06T09:34:57,122][INFO ][o.e.p.PluginsService     ] [4_D7gOx] loaded module [ingest-common]
[2017-07-06T09:34:57,122][INFO ][o.e.p.PluginsService     ] [4_D7gOx] loaded module [lang-expression]
[2017-07-06T09:34:57,122][INFO ][o.e.p.PluginsService     ] [4_D7gOx] loaded module [lang-groovy]
[2017-07-06T09:34:57,122][INFO ][o.e.p.PluginsService     ] [4_D7gOx] loaded module [lang-mustache]
[2017-07-06T09:34:57,122][INFO ][o.e.p.PluginsService     ] [4_D7gOx] loaded module [lang-painless]
[2017-07-06T09:34:57,122][INFO ][o.e.p.PluginsService     ] [4_D7gOx] loaded module [percolator]
[2017-07-06T09:34:57,122][INFO ][o.e.p.PluginsService     ] [4_D7gOx] loaded module [reindex]
[2017-07-06T09:34:57,122][INFO ][o.e.p.PluginsService     ] [4_D7gOx] loaded module [transport-netty3]
[2017-07-06T09:34:57,122][INFO ][o.e.p.PluginsService     ] [4_D7gOx] loaded module [transport-netty4]
[2017-07-06T09:34:57,122][INFO ][o.e.p.PluginsService     ] [4_D7gOx] no plugins loaded
[2017-07-06T09:34:58,526][INFO ][o.e.d.DiscoveryModule    ] [4_D7gOx] using discovery type [zen]
[2017-07-06T09:34:59,105][INFO ][o.e.n.Node               ] initialized
[2017-07-06T09:34:59,106][INFO ][o.e.n.Node               ] [4_D7gOx] starting ...
[2017-07-06T09:34:59,254][INFO ][o.e.t.TransportService   ] [4_D7gOx] publish_address {127.0.0.1:9300}, bound_addresses {[fe80::1]:9300}, {[::1]:9300}, {127.0.0.1:9300}
[2017-07-06T09:35:02,301][INFO ][o.e.c.s.ClusterService   ] [4_D7gOx] new_master {4_D7gOx}{4_D7gOxPQZCwvn8sll6G4A}{gGc_p74URDm6iT5_rczMOg}{127.0.0.1}{127.0.0.1:9300}, reason: zen-disco-elected-as-master ([0] nodes joined)
[2017-07-06T09:35:02,319][INFO ][o.e.h.n.Netty4HttpServerTransport] [4_D7gOx] publish_address {127.0.0.1:9200}, bound_addresses {[fe80::1]:9200}, {[::1]:9200}, {127.0.0.1:9200}
[2017-07-06T09:35:02,321][INFO ][o.e.n.Node               ] [4_D7gOx] started
[2017-07-06T09:35:02,688][INFO ][o.e.g.GatewayService     ] [4_D7gOx] recovered [14] indices into cluster_state
[2017-07-06T09:35:03,391][INFO ][o.e.c.r.a.AllocationService] [4_D7gOx] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[logstash-2017.06.30][4]] ...]).
[2017-07-06T09:36:22,753][INFO ][o.e.c.m.MetaDataCreateIndexService] [4_D7gOx] [logstash-2017.07.06] creating index, cause [auto(bulk api)], templates [logstash], shards [5]/[1], mappings [_default_]
[2017-07-06T09:36:22,934][INFO ][o.e.c.m.MetaDataMappingService] [4_D7gOx] [logstash-2017.07.06/9Y3_2-ImRP6F3P0DQH61Bg] create_mapping [logs]
[2017-07-06T12:07:48,059][INFO ][o.e.c.m.MetaDataMappingService] [4_D7gOx] [elastalert_status/x3wnpwI-QnSWrDFbBrPh4A] update_mapping [silence]
[2017-07-06T12:17:25,204][INFO ][o.e.c.m.MetaDataMappingService] [4_D7gOx] [elastalert_status/x3wnpwI-QnSWrDFbBrPh4A] update_mapping [elastalert_error]
[2017-07-06T12:17:25,219][INFO ][o.e.c.m.MetaDataMappingService] [4_D7gOx] [elastalert_status/x3wnpwI-QnSWrDFbBrPh4A] update_mapping [elastalert]
[2017-07-06T13:49:19,911][INFO ][o.e.c.m.MetaDataMappingService] [4_D7gOx] [elastalert_status/x3wnpwI-QnSWrDFbBrPh4A] update_mapping [elastalert]
[2017-07-07T09:36:51,669][INFO ][o.e.c.m.MetaDataCreateIndexService] [4_D7gOx] [logstash-2017.07.07] creating index, cause [auto(bulk api)], templates [logstash], shards [5]/[1], mappings [_default_]

[2017-07-07T09:36:51,744][INFO ][o.e.c.m.MetaDataMappingService] [4_D7gOx] [logstash-2017.07.07/GldhiF5YRbCLVY3FoVidpw] create_mapping [logs]

Confirm that Elasticsearch is running by making the following request
Z4437-E6B0-1CA8:Downloads Yuvaraj$ curl -i http://localhost:9200/
HTTP/1.1 200 OK
content-type: application/json; charset=UTF-8
content-length: 335

{
  "name" : "4_D7gOx",
  "cluster_name" : "elasticsearch_Yuvaraj",
  "cluster_uuid" : "5aVFyKihT9O61L1XdEopbA",
  "version" : {
    "number" : "5.4.3",
    "build_hash" : "eed30a8",
    "build_date" : "2017-06-22T00:34:03.743Z",
    "build_snapshot" : false,
    "lucene_version" : "6.5.1"
  },
  "tagline" : "You Know, for Search"

}

Configure & Run Kibana:
1. Run Kibana with the following commands
    cd Downloads
    kibana
  log   [12:58:27.113] [info][status][plugin:kibana@5.4.2] Status changed from uninitialized to green - Ready
  log   [12:58:27.190] [info][status][plugin:elasticsearch@5.4.2] Status changed from uninitialized to yellow - Waiting for Elasticsearch
  log   [12:58:27.223] [info][status][plugin:console@5.4.2] Status changed from uninitialized to green - Ready
  log   [12:58:27.232] [warning] You're running Kibana 5.4.2 with some different versions of Elasticsearch. Update Kibana or Elasticsearch to the same version to prevent compatibility issues: v5.4.3 @ 127.0.0.1:9200 (127.0.0.1)
  log   [12:58:27.243] [info][status][plugin:metrics@5.4.2] Status changed from uninitialized to green - Ready
  log   [12:58:27.403] [info][status][plugin:elasticsearch@5.4.2] Status changed from yellow to green - Kibana index ready
  log   [12:58:27.404] [info][status][plugin:timelion@5.4.2] Status changed from uninitialized to green - Ready
  log   [12:58:27.408] [info][listening] Server running at http://localhost:5601

  log   [12:58:27.409] [info][status][ui settings] Status changed from uninitialized to green - Ready

Now, launch "http://localhost:5601" in your favourite browser. You will see the Kibana web app.

With this, you have the log analysis tools running. Once you configure an index pattern (e.g. logstash-*), you can browse the logs in the Discover tab.

Configure ElastAlert to monitor the logs and alert:
1. Install ElastAlert with Python as per the documentation available at
https://elastalert.readthedocs.io/en/latest/running_elastalert.html
2. Once the installation is complete, create the following configurations and then run ElastAlert.
    ElastAlert always reads a global config.yaml, so create one as follows
es_host: localhost
es_port: 9200
smtp_host: smtp.gmail.com
email: gyuvaraj16@gmail.com
smtp_port: 465
smtp_ssl: true
smtp_auth_file: '/Users/Yuvaraj/Downloads/smtp_auth_file.yaml'
from_addr: gyuvaraj10@gmail.com
rules_folder: elastrules 
buffer_time: 
  hours: 1000
run_every: 
  minutes: 1

writeback_index: elastalert_status 
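The config above references smtp_auth_file, whose contents are not shown. Per the ElastAlert documentation it is a small YAML file holding the SMTP credentials; a sketch with placeholder values (for Gmail with two-factor auth you would use an app password):

```yaml
# /Users/Yuvaraj/Downloads/smtp_auth_file.yaml
user: gyuvaraj16@gmail.com
password: your-smtp-password-here
```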

Now, create a rule & alert configuration yaml file as follows
mkdir elastrules
cd elastrules
vim selenium-rule.yaml

alert: 
  - "command"
command: "echo {match[username]}"
email: 
  - "yuvaraj.gunisetti@ba.com"
es_host: localhost
es_port: 9200
filter: 
  - query: 
      query_string:
          query: "message: com.opera.core.systems.OperaDriver"
index: logstash-*
name: Selenium_Alert
num_events: 1
timeframe: 
  hours: 1000
type: frequency
realert:
  minutes: 0
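Conceptually, the frequency rule type counts query hits inside a sliding timeframe and fires once num_events matches have accumulated. A simplified Python sketch of that logic (this is an illustration, not ElastAlert's actual implementation):

```python
from datetime import datetime, timedelta

def frequency_alert(event_times, num_events, timeframe):
    """Return the timestamps at which a frequency rule would fire:
    whenever at least `num_events` events fall inside `timeframe`."""
    fired = []
    window = []  # events inside the current timeframe, oldest first
    for ts in sorted(event_times):
        window.append(ts)
        # drop events that have aged out of the timeframe
        window = [t for t in window if ts - t <= timeframe]
        if len(window) >= num_events:
            fired.append(ts)
            window = []  # start counting again after an alert
    return fired

# With num_events: 1, every single match fires an alert
t0 = datetime(2017, 7, 6, 13, 52)
events = [t0 + timedelta(minutes=i) for i in range(3)]
print(len(frequency_alert(events, num_events=1, timeframe=timedelta(hours=1000))))  # 3
```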

Now, go to Downloads directory where the config.yaml is available and run the following command

python -m elastalert.elastalert --verbose --rule elastrules/selenium-rule.yaml --es_debug_trace file.log

Now, if there are any matches for the query specified in the selenium-rule.yaml "filter", you will see logs like the following. (The command alert echoes the literal {match[username]} because the placeholder was not substituted for these matches.)
INFO:elastalert:Queried rule Selenium_Alert from 2017-07-06 13:52 BST to 2017-07-07 09:37 BST: 60 / 60 hits
{match[username]}
... (repeated for each of the 32 matches)

INFO:elastalert:Ran Selenium_Alert from 2017-07-06 13:52 BST to 2017-07-07 09:37 BST: 60 query hits (28 already seen), 32 matches, 32 alerts sent
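A side note on the output above: the substitution ElastAlert performs for command alerts is ordinary Python new-style string formatting against the match document (and, per the ElastAlert docs, new-style placeholders are only substituted when new_style_string_format is enabled in the config). Seen in isolation:

```python
# The rule's command string is formatted against the match document.
command = "echo {match[username]}"

match = {"username": "yuvaraj"}
print(command.format(match=match))  # echo yuvaraj

# When formatting never runs, the raw placeholder is echoed literally,
# exactly as in the log output above.
print(command)  # echo {match[username]}
```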


You are done



Tuesday, June 6, 2017

Link Godaddy DNS with AWS Elastic IP




First we need to set up AWS to provide an IP address for your DNS settings.
  1. On EC2 Management console you will have a vertical menu on the left hand side.
  2. Under “NETWORK & SECURITY” group click on “Elastic IPs”.
  3. On the top menu you will see a blue button “Allocate New Address” click on it.
  4. Just be sure “EIP used in” is set to “EC2” then click “Yes, Allocate”.
  5. A new IP address will be created on the table, select it by clicking on the empty square by the left of the name.
  6. Now click on “Associate Address” on the pop-up click on instance and select the instance you would like to associate to this IP.
  7. Finally click “Associate”, and that’s it. From now on, to access the instance via SSH, FTP, etc. you will need to use the new Elastic IP.
On the GoDaddy side, we will point the domain's A record at the new Elastic IP.
  1. Log in to your GoDaddy account.
  2. Under the upper menu click “Domains” and then click “Manage my Domains”.
  3. Select the domain you would like to change by clicking the link to the domain on the table under “Domain Name” column.
  4. In Domain Details there are three tabs, you should click on “DNS Zone File”.
  5. Under A(Host) , click on “Edit Record” at the end in “Actions” column.
  6. Now change the value on the field “Points to” with the elastic ip of your amazon ec2 instance.

Monday, May 15, 2017

Load balance multiple selenium Grid hubs for Distributed testing




Load balance your Selenium Grid hub servers to improve testing speed. There are multiple options available to load balance your applications; a few open source ones are nginx, Apache httpd, etc.

This page will help you set up your Selenium Grid hub servers in a load-balanced environment using nginx.

1. First, install nginx on your local machine. Mac users can install it with Homebrew.
  command: brew install nginx
If nginx is already installed and needs to be upgraded, run "brew upgrade nginx".

2. Run <ps -ef | grep "nginx"> to check whether any nginx processes are running in the background.
   If you find any, either stop the service or kill the process.

3. Now, download the selenium-server-standalone jar file from the seleniumhq.org. This example uses "selenium-server-standalone-2.53.1.jar".

4. From the server directory, run the hub as follows


java -jar selenium-server-standalone-2.53.1.jar -role hub   --- call it ---- HUB A

17:24:16.808 INFO - Launching Selenium Grid hub
2017-05-15 17:24:17.359:INFO::main: Logging initialized @691ms
17:24:17.367 INFO - Will listen on 4444
17:24:17.402 INFO - Will listen on 4444
2017-05-15 17:24:17.405:INFO:osjs.Server:main: jetty-9.2.z-SNAPSHOT
2017-05-15 17:24:17.425:INFO:osjsh.ContextHandler:main: Started o.s.j.s.ServletContextHandler@a74868d{/,null,AVAILABLE}
2017-05-15 17:24:17.445:INFO:osjs.ServerConnector:main: Started ServerConnector@5c072e3f{HTTP/1.1}{0.0.0.0:4444}
2017-05-15 17:24:17.446:INFO:osjs.Server:main: Started @779ms
17:24:17.446 INFO - Nodes should register to http://169.254.86.238:4444/grid/register/
17:24:17.446 INFO - Selenium Grid hub is up and running
17:40:18.394 INFO - Registered a node http://169.254.86.238:5555

5. From the same directory, create another hub as follows

java -jar selenium-server-standalone-2.53.1.jar -role hub -port 4445 ---- call it ---- HUB B

17:25:00.741 INFO - Launching Selenium Grid hub
2017-05-15 17:25:01.296:INFO::main: Logging initialized @696ms
17:25:01.304 INFO - Will listen on 4445
17:25:01.344 INFO - Will listen on 4445
2017-05-15 17:25:01.346:INFO:osjs.Server:main: jetty-9.2.z-SNAPSHOT
2017-05-15 17:25:01.371:INFO:osjsh.ContextHandler:main: Started o.s.j.s.ServletContextHandler@a74868d{/,null,AVAILABLE}
2017-05-15 17:25:01.401:INFO:osjs.ServerConnector:main: Started ServerConnector@5c072e3f{HTTP/1.1}{0.0.0.0:4445}
2017-05-15 17:25:01.401:INFO:osjs.Server:main: Started @801ms
17:25:01.402 INFO - Nodes should register to http://169.254.86.238:4445/grid/register/
17:25:01.402 INFO - Selenium Grid hub is up and running
17:42:49.934 INFO - Registered a node http://169.254.86.238:5556
 
6. Now register a node to hub A as follows,


java -jar selenium-server-standalone-2.53.1.jar -role wd -hub http://localhost:4444/grid/register

17:40:17.905 INFO - Launching a Selenium Grid node
17:40:18.258 INFO - Java: Oracle Corporation 25.51-b03
17:40:18.258 INFO - OS: Mac OS X 10.11.6 x86_64
17:40:18.265 INFO - v2.53.1, with Core v2.53.1. Built from revision a36b8b1
17:40:18.316 INFO - Driver provider org.openqa.selenium.ie.InternetExplorerDriver registration is skipped:
registration capabilities Capabilities [{ensureCleanSession=true, browserName=internet explorer, version=, platform=WINDOWS}] does not match the current platform MAC
17:40:18.316 INFO - Driver provider org.openqa.selenium.edge.EdgeDriver registration is skipped:
registration capabilities Capabilities [{browserName=MicrosoftEdge, version=, platform=WINDOWS}] does not match the current platform MAC
17:40:18.316 INFO - Driver class not found: com.opera.core.systems.OperaDriver
17:40:18.316 INFO - Driver provider com.opera.core.systems.OperaDriver is not registered
17:40:18.318 INFO - Driver class not found: org.openqa.selenium.htmlunit.HtmlUnitDriver
17:40:18.318 INFO - Driver provider org.openqa.selenium.htmlunit.HtmlUnitDriver is not registered
17:40:18.356 INFO - Selenium Grid node is up and ready to register to the hub
17:40:18.376 INFO - Starting auto registration thread. Will try to register every 5000 ms.
17:40:18.376 INFO - Registering the node to the hub: http://localhost:4444/grid/register
17:40:18.395 INFO - The node is registered to the hub and ready to use

7. Now register a node to hub B as follows,

java -jar selenium-server-standalone-2.53.1.jar -role wd -hub http://localhost:4445/grid/register -port 5556

17:42:49.494 INFO - Launching a Selenium Grid node
17:42:49.817 INFO - Java: Oracle Corporation 25.51-b03
17:42:49.817 INFO - OS: Mac OS X 10.11.6 x86_64
17:42:49.820 INFO - v2.53.1, with Core v2.53.1. Built from revision a36b8b1
17:42:49.862 INFO - Driver provider org.openqa.selenium.ie.InternetExplorerDriver registration is skipped:
registration capabilities Capabilities [{ensureCleanSession=true, browserName=internet explorer, version=, platform=WINDOWS}] does not match the current platform MAC
17:42:49.863 INFO - Driver provider org.openqa.selenium.edge.EdgeDriver registration is skipped:
registration capabilities Capabilities [{browserName=MicrosoftEdge, version=, platform=WINDOWS}] does not match the current platform MAC
17:42:49.863 INFO - Driver class not found: com.opera.core.systems.OperaDriver
17:42:49.863 INFO - Driver provider com.opera.core.systems.OperaDriver is not registered
17:42:49.864 INFO - Driver class not found: org.openqa.selenium.htmlunit.HtmlUnitDriver
17:42:49.864 INFO - Driver provider org.openqa.selenium.htmlunit.HtmlUnitDriver is not registered
17:42:49.896 INFO - Selenium Grid node is up and ready to register to the hub
17:42:49.916 INFO - Starting auto registration thread. Will try to register every 5000 ms.
17:42:49.917 INFO - Registering the node to the hub: http://localhost:4445/grid/register
17:42:49.934 INFO - The node is registered to the hub and ready to use

8. Now you have setup 2 grid servers and registered a node each.

9. Let us now configure nginx with a round-robin load-balanced configuration.

Add the following config to the http block in the nginx configuration file "/usr/local/etc/nginx/nginx.conf"

    upstream gridapp {
      server localhost:4444;
      server localhost:4445;
    }

In the above, gridapp is the name of the load-balanced upstream group.

Then update the server block that listens on port 80 as follows.
server {
         listen 80;
         location / {
           proxy_pass http://gridapp;
         }
    }

This means that when users access http://localhost:80, nginx proxies each request to one of the load-balanced servers (localhost:4444, localhost:4445) behind the gridapp upstream, alternating between them in round-robin order.
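Putting the two fragments together, the relevant part of /usr/local/etc/nginx/nginx.conf looks like this (a minimal sketch; your file will contain other directives such as worker_processes and mime types):

```nginx
http {
    # round-robin (the default) across the two grid hubs
    upstream gridapp {
        server localhost:4444;
        server localhost:4445;
    }

    server {
        listen 80;
        location / {
            proxy_pass http://gridapp;
        }
    }
}
```

Run "nginx -t" to validate the file, then "nginx -s reload" to apply it to a running nginx.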

To Test:
From your browser, hit http://localhost:80 and observe different nodes displayed.




Wednesday, December 7, 2016

Download Previous versions of Chrome Browsers

 Chrome Browser Versions:  Here are the links to download the previous versions of Chrome Browsers

https://commondatastorage.googleapis.com/chromium-browser-continuous/index.html
https://omahaproxy.appspot.com/

Saturday, November 19, 2016

Create Your First Angular application with npm, bower, angularjs

This post assumes you have already installed Node.js and npm on your machine. To start, let us use the terminal and the Sublime Text editor to create an app that shows our name on a web page.

Follow the steps to create your simple page.
1. Launch the terminal and create a directory named <ajs> anywhere in your drive. I chose "/Users/Yuvaraj" location to create the <ajs> directory.

    /Users/Yuvaraj> mkdir ajs

2. Navigate to the ajs directory.
    /Users/Yuvaraj> cd ajs

3. Now interactively create a package.json file in the ajs directory by running the following command,
    /Users/Yuvaraj/ajs>npm init
     We should see the following prompts when running this command.
 
     This utility will walk you through creating a package.json file.

It only covers the most common items, and tries to guess sensible defaults.

See `npm help json` for definitive documentation on these fields
and exactly what they do.

Use `npm install <pkg> --save` afterwards to install a package and
save it as a dependency in the package.json file.

Press ^C at any time to quit.
name: (ajs) 
version: (1.0.0) 
description: 
entry point: (index.js) 
test command: 
git repository: 
keywords: 
author: 
license: (ISC) 
About to write to /Users/Yuvaraj/ajs/package.json:

{
  "name": "ajs",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "author": "",
  "license": "ISC"
}

Is this ok? (yes)

4. After generating the package.json file, let us install the http-server and bower libraries using npm and save them as dependencies in package.json by running the following commands.
   /Users/Yuvaraj/ajs>npm install --save http-server
   /Users/Yuvaraj/ajs>npm install --save bower
  Observe that http-server, bower are automatically saved into package.json file in the dependencies section and a folder node_modules is created with the libraries installed.

5. Now create an app directory where we put all the application code.
     /Users/Yuvaraj/ajs>mkdir app

6. Initialise bower by running the following command in the app directory. Make sure a bower.json file appears afterwards.
     /Users/Yuvaraj/ajs/app>bower init

7. After initialising the bower, install the angular dependency and save to the bower.json.
    /Users/Yuvaraj/ajs/app>bower install --save angular
   
    We observe that a new bower_components folder gets created in the app folder, with the angular dependency added.

8. Now edit the package.json file by adding a start script into the script section.

      "scripts": {
        "start": "http-server ./app -a localhost -p 2222",
        "test": "echo \"Error: no test specified\" && exit 1"
      },

With this step we are done with the setup. Once the setup is completed we can start creating the angular modules and bind them with the html.

9. Now create an app.js file in the app folder and put the following content in it.
 
    angular.module('yuvi', [])
     .controller('TestController', function TestController(){
         this.name = "Yuvaraj";
    });
 
10. Create an index.html in the app folder and put the following html in it.
   
      <html ng-app='yuvi'>
       <head>
          <script type="text/javascript" src="/bower_components/angular/angular.js"></script>
<!-- this refers the angular.js file reference added from the bower dependencies installed -->
          <script type="text/javascript" src="app.js"></script>   <!-- this refers the app.js file that we have just created -->
        </head>

        <body ng-controller='TestController as testCtrl'>
            <h1>{{testCtrl.name}}</h1>
        </body>
     </html>

11. We are now done writing our simple app, which displays our name. Let us start the application by running the start script configured in package.json.
 In the terminal, run the npm start command from the directory where package.json is located.

   /Users/Yuvaraj/ajs>npm start
   We should see a message saying the server has started, with the URL to the application:
    http://localhost:2222

 Simply launch the url <http://localhost:2222> in the browser. We see "Yuvaraj" displayed as a header.
 

  

Friday, October 21, 2016

Launch Selenium Grid on Amazon EC2

Here are the steps to launch your selenium grid on ec2 instance.

1. To launch the selenium grid on ec2, you must have an aws console account.
2. Create a t2.micro instance through ec2 service in the console.
3. Click on Security groups link in the ec2 management console.
4. Click on the default/security group currently assigned to your instance.
5. Add the following protocol rules to the inbound and outbound traffic.
   Type: All TCP
   Protocol: TCP
   Port Range: 4444 / 0 - 65535 (default value)
   Source: 0.0.0.0/0
6. Using the keypair generated while creating the instance, connect to the box.
7. Create a folder <selenium>.
8. run the following command to download the selenium server jar file.
   wget http://selenium-release.storage.googleapis.com/2.53/selenium-server-standalone-2.53.1.jar
9. Launch the grid using the following command
     java -jar selenium-server-standalone-2.53.1.jar -role hub
10. Now, open your browser on your local machine and launch the grid url (replace xxxxxxxxx with your instance's public DNS name or Elastic IP)
      http://xxxxxxxxx:4444/grid/console

Next post will describe the steps to register the nodes to the grid....

Friday, September 16, 2016

How to Install Groovy Plugin in the Jenkins and write the groovy code in the console



Install the groovy plugin:

In order to install the groovy plugin in your Jenkins server, first download the required (groovy) plugin from the plugin repository https://updates.jenkins-ci.org/download/plugins/groovy/.

Plugin files have the .hpi extension. Once the plugin is downloaded from the plugin repository, go to the Manage Jenkins page and then the Manage Plugins page. Now go to the Advanced tab, choose the plugin file you downloaded, and click Upload. This will install the groovy plugin on your Jenkins server.

Write groovy code in the groovy script console.

 Refer to https://www.cloudbees.com/jenkins/juc-2015/presentations/JUC-2015-USEast-Groovy-With-Jenkins-McCollum.pdf

groovy Script vs System Groovy Script

The plain "Groovy Script" is run in a forked JVM, on the slave where the build is run. It's basically the same as running the "groovy" command and passing in the script.

The system groovy script, on the other hand, runs inside the Jenkins master's JVM. Thus it has access to all the internal objects of Jenkins, so you can use it to alter the state of Jenkins.

Where to store the script?

● Can put it in source and pull it from source
● Or you can put the script directly into the command box that is provided