About merlos

This is the personal blog of Giovanni “merlos” Mellini.
Here I write about open-source software, security, and other topics.

You can have a look at my LinkedIn profile here

You can send me a secure email to giovanni [dot] mellini [at] protonmail [dot] com using my public PGP key, available here

 

7 thoughts on “About merlos”

  1. Hi Giovanni,
    I have been working through your document on integrating MineMeld with Splunk. We would like to do the exact same thing here. So far, I have not run into any issues, but I do have a question. On the 4th post, you have this:

    From the prototypes page, clone the stdlib.localLogStash prototype to a new one, minemeldlocal.LOG-TO-SPLUNK. While cloning, change the 2 prototype parameters as follows:

    logstash_host: ;
    logstash_port: 1534 (or any port where Splunk will listen for MineMeld data).

    When you say , do you mean the IP address of an indexer or a search head? I am assuming that it is a search head (we have 3 here), is that correct?

    Sincerely,
    Jon


    1. Hi Jon
      good to hear you did the integration 🙂
      > When you say , do you mean the IP address of an indexer or a search head? I am assuming that it is a search head (we have 3 here), is that correct?
      It depends on your architecture.
      In my case I send the data from the output node to an HA heavy forwarder cluster, which I use to forward logs to the indexer cluster whenever I cannot install Splunk agents (e.g. firewalls or the MineMeld output node).
      I need to send MineMeld logs in a reliable way, like the Splunk agent does on remote clients/servers, while load balancing across the indexers.
      So the logstash_host IP is the heavy forwarder VIP, built on top of a DRBD cluster of 2 heavy forwarders.
      To the heavy forwarders I push a small application (from my deployment server) that just forwards the data received on the TCP port to the indexer cluster:
      etc/apps/forw_portsinput/default/inputs.conf
      [tcp://:1534]
      sourcetype=minemeld_ioc
      index=minemeld_ioc

      etc/apps/Hforw-conf/default/outputs.conf
      [tcpout]
      defaultGroup = default-autolb-group
      #indexers
      [tcpout:default-autolb-group]
      server=INDEXER1:9997,INDEXER2:9997
      autoLB = true
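      The data path above — the MineMeld output node writing newline-delimited JSON over TCP to a port the heavy forwarders listen on — can be sketched with a local stand-in. This is just an illustration: the host/port and the event fields are assumptions, not MineMeld’s exact schema or Splunk’s actual listener.

```python
import json
import socket
import threading

# 127.0.0.1:1534 stands in for the heavy forwarder VIP (logstash_host)
# and the logstash_port configured on the MineMeld prototype.
HOST, PORT = "127.0.0.1", 1534

# Minimal stand-in for Splunk's [tcp://:1534] input stanza: listen,
# accept one connection, and read one newline-delimited event.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind((HOST, PORT))
server.listen(1)

received = []

def tcp_input():
    conn, _ = server.accept()
    with conn, conn.makefile() as f:
        received.append(json.loads(f.readline()))

listener = threading.Thread(target=tcp_input)
listener.start()

# The MineMeld logstash output sends indicators as JSON lines over TCP;
# this event is illustrative, not MineMeld's exact schema.
event = {"indicator": "198.51.100.7", "type": "IPv4", "confidence": 80}
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as client:
    client.connect((HOST, PORT))
    client.sendall((json.dumps(event) + "\n").encode())

listener.join()
server.close()
print(received[0]["indicator"])  # → 198.51.100.7
```

      In the real deployment the listener side is Splunk itself (the inputs.conf stanza above), which then forwards to the indexer cluster via outputs.conf.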

      So you have a reliable and HA architecture.
      I don’t send any logs directly to the search heads, because the data needs to be indexed in the indexer cluster so that any SH can access it with the right authorization.
      Hope this is clear,
      Giovanni


      1. Hi Giovanni,
        I see… we have a slightly different environment. We have 3 non-clustered search heads, 3 clustered indexers, and 1 cluster master. We have all of our log sources sending syslog to a RHEL syslog-ng system running a light forwarder, not a heavy forwarder. Syslog-ng is set up to filter the logs, so we don’t need to use the heavy forwarder. Thus, I am assuming that I need to set “logstash_host: ;” to the IP address of our syslog-ng server? Would I also need to install the TA on the syslog-ng server as well?

        Thanks Again!
        Jon


      2. > Thus, I am assuming that I need to set “logstash_host: ;” to the IP address of our syslog-ng server? Would I also need to install the TA on the syslog-ng server as well?
        Yes, in this case you need to send the logs to the syslog server, but I’m not sure whether the TA works there; you’ll need to try 🙂
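        As a purely hypothetical sketch of Jon’s setup (the file paths, names, and port are assumptions, not from the discussion above), syslog-ng could receive the MineMeld TCP output with something like:

```
# /etc/syslog-ng/conf.d/minemeld.conf (hypothetical)
source s_minemeld { tcp(ip(0.0.0.0) port(1534)); };
destination d_minemeld { file("/var/log/minemeld/ioc.log"); };
log { source(s_minemeld); destination(d_minemeld); };
```

        The light forwarder would then monitor /var/log/minemeld/ioc.log; whether the TA’s sourcetype parsing still applies at that point is exactly what would need testing.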

