Reporting Engine: Configure the Data Sources


You must configure data sources such as NWDB, Warehouse, or RESPOND so that the Reporting Engine can generate Reports, Charts, and Alerts. Optionally, you can also configure Archiver, Collection, and Workbench data sources.

Configure a NWDB Data Source

To add a NWDB data source:

  1. Go to ADMIN > Services.
  2. In the Services list, select the Reporting Engine service.
  3. Click the actions icon and select View > Config.

    The Services Config View of Reporting Engine is displayed.

  4. On the Sources tab, click the add icon and select Available Services.

    The Available Services dialog is displayed.

    List of available services for NWDB data source

  5. Select the NWDB service you want to add and click OK.
  6. In the Service Information for Broker dialog, enter the service information and click OK. (This example adds a Broker service.)
    Enter credentials to add a Broker service
  7. When the service is successfully added, it is displayed in the Sources tab.
    List of added services displayed in the Sources tab

Note: The services with the Trust Model enabled must be added individually. You are prompted to provide a username and password for the selected service.

Configure a Warehouse Data Source

You can add a Warehouse data source to the Reporting Engine to extract data from the required services, store it in MapR or Hortonworks deployments, and generate Reports and Alerts. The configuration procedure differs depending on the deployment type. To extract data from a Warehouse data source, configure it using the following procedure.

Note: Warehouse Analytics is not supported in the NetWitness Suite 11.0 release.

Prerequisite

Make sure that you:

  • Add a Warehouse data source to the Reporting Engine.
  • Set the Warehouse data source as the default source.
  • Verify that the HIVE server is running on all the Warehouse nodes. Use the following command to check the status of the HIVE server:
    status hive2 (MapR deployments)
    service hive-server2 status (Hortonworks deployments)
  • Configure the Warehouse Connector to write data to the warehouse deployments.
  • If Kerberos authentication is enabled for HiveServer2, copy the keytab file to the /var/netwitness/re-server/rsa/soc/reporting-engine/conf/ directory on the Reporting Engine host.

    Note: The rsasoc user should have read permissions for the keytab file. For more information, see Configure Data Source Permissions.

    Also, make sure that you update the keytab file location in the Kerberos Keytab File parameter in the Reporting Engine Service Config View. For more information, see General Tab.
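
A quick way to check these prerequisites from the command line is sketched below, using the status commands from the list above. The keytab file name is a placeholder; substitute the file you copied to the Reporting Engine host.

    # On each Warehouse node: confirm HiveServer2 is running
    status hive2                    # MapR deployments
    service hive-server2 status     # Hortonworks deployments

    # On the Reporting Engine host: confirm the rsasoc user can read the keytab
    ls -l /var/netwitness/re-server/rsa/soc/reporting-engine/conf/hive.keytab
    sudo -u rsasoc head -c1 /var/netwitness/re-server/rsa/soc/reporting-engine/conf/hive.keytab \
      > /dev/null && echo "rsasoc can read the keytab"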

To add a Warehouse data source for MapR:

  1. Go to ADMIN > Services.
  2. In the Services list, select the Reporting Engine service.
  3. Click the actions icon and select View > Config.
  4. Click the Sources tab.

    The Service Config view is displayed with the Reporting Engine Sources tab open.

  5. Click the add icon and select New Service.

    The New Service dialog is displayed.

    New service dialog

  6. In the Source Type drop-down menu, select WAREHOUSE.
  7. In the Warehouse Source drop-down menu, select the warehouse data source. 
  8. In the Name field, enter the host name of the Warehouse data source.
  9. In the HDFS Path field, enter the HDFS root path to which the Warehouse Connector writes the data.

    For example:
    Suppose /saw is the local mount point for HDFS that you configured while mounting NFS on the device, and you have installed the Warehouse Connector service to write to SAW. For more information, see the "Mount the Warehouse on the Warehouse Connector" topic in the RSA NetWitness Warehouse (MapR) Configuration Guide.

    If you have created a directory named Ionsaw01 under /saw and provided the corresponding Local Mount Path as /saw/Ionsaw01, the corresponding HDFS root path is /Ionsaw01.

    The /saw mount point implies / as the root path for HDFS, so the Warehouse Connector writes the data to /Ionsaw01 in HDFS. If no data is available in this path, the following error is displayed:

    "No data available. Check HDFS path"

    Make sure that /Ionsaw01/rsasoc/v1/sessions/meta contains Avro files of the metadata before performing the test connection.

  10. Select the Advanced checkbox to use the advanced settings, and fill in the Database URL field with the complete JDBC URL used to connect to HiveServer2. (You can verify the URL with the beeline sketch after this procedure.)

    For example:
    If Kerberos is enabled in HIVE, the JDBC URL is:

    jdbc:hive2://<host>:<port>/<db>;principal=<Kerberos server principal>

    If SSL is enabled in HIVE, the JDBC URL is:

    jdbc:hive2://<host>:<port>/<db>;ssl=true;sslTrustStore=<trust_store_path>;trustStorePassword=<trust_store_password>

    For more information on HIVE server clients, see https://cwiki.apache.org/confluence/display/Hive/HiveServer2+Clients.

  11. If you are not using the advanced settings, enter values for the Host and Port fields.

    • In the Host field, enter the IP address of the host on which HiveServer2 is hosted.

      Note: You can use the virtual IP address of MapR only if HiveServer2 is running on all the nodes in the cluster.

    • In the Port field, enter the HiveServer2 port of the Warehouse data source. By default, the port number is 10000.
  12. In the Username and Password fields, enter the JDBC credentials used to access HiveServer2.

    Note: You can also use LDAP mode of authentication using Active Directory. For instructions to enable LDAP authentication mode, see Enable LDAP Authentication.

  13. To run Warehouse Analytics reports, see Enable Jobs below.
  14. To enable Kerberos authentication, see Enable Kerberos Authentication below.
  15. If you want to set the added Warehouse data source as the default source for the Reporting Engine, select it and click the Set Default checkbox.
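
Before relying on the Test Connection button, you can sanity-check the JDBC URL and the HDFS path from the command line with the standard HIVE and Hadoop clients, as sketched below. The host, port, database, credentials, and principals are placeholders.

    # Plain HiveServer2 connection (steps 10-12)
    beeline -u "jdbc:hive2://<host>:10000/default" -n <username> -p <password>

    # Kerberos-enabled HIVE: obtain a ticket for the user principal first
    kinit gpadmin@EXAMPLE.COM
    beeline -u "jdbc:hive2://<host>:10000/default;principal=<Kerberos server principal>"

    # Confirm the meta directory contains Avro files (step 9)
    hadoop fs -ls /Ionsaw01/rsasoc/v1/sessions/meta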

To add a Warehouse data source for Hortonworks (HDP):

Note: Make sure you download hive-jdbc-1.2.1-with-full-dependencies.jar from RSA Link (https://community.rsa.com/docs/DOC-67251). This jar contains the HIVE 1.2.1 driver that Reporting Engine uses to connect to a Hive 1.2.1 HiveServer2.

  1. SSH to the NetWitness Suite server.
  2. In the /opt/rsa/soc/reporting-engine/plugins/ folder, take a backup of the following jar:
    hive-jdbc-0.12.0-with-full-dependencies.jar or hive-jdbc-1.0.0-mapr-1508-standalone.jar
  3. Remove the following jar:
    hive-jdbc-0.12.0-with-full-dependencies.jar or hive-jdbc-1.0.0-mapr-1508-standalone.jar
  4. In the /opt/rsa/soc/reporting-engine/plugins folder, copy the following jar using WinSCP:
    hive-jdbc-1.2.1-with-full-dependencies.jar
  5. Restart the Reporting Engine service.
  6. Log in to the NetWitness Suite UI.
  7. Select the Reporting Engine service, click the actions icon, and select View > Explore.
  8. In hiveConfig, set the EnableSmallSplitBasedSchemaLiteralCreation parameter to true.
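
Steps 2 through 5 can also be performed directly in the SSH session, roughly as follows. The backup location and the Reporting Engine service name are assumptions; verify them on your deployment.

    cd /opt/rsa/soc/reporting-engine/plugins
    # Back up and remove whichever old driver is present
    mkdir -p /root/re-plugin-backup
    mv hive-jdbc-0.12.0-with-full-dependencies.jar /root/re-plugin-backup/ 2>/dev/null
    mv hive-jdbc-1.0.0-mapr-1508-standalone.jar /root/re-plugin-backup/ 2>/dev/null
    # Copy in the new driver downloaded from RSA Link
    cp /path/to/hive-jdbc-1.2.1-with-full-dependencies.jar .
    # Restart Reporting Engine (service name is an assumption; verify on your host)
    systemctl restart rsa-nw-reporting-engine-server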

Enable Jobs

Note: Warehouse Analytics is not supported in the NetWitness Suite 11.0 release.

To run Warehouse Analytics reports, perform the following procedure.

  1. Select the Enable Jobs checkbox.

    Warehouse configuration

    Note: Do not select Pivotal in the HDFS field, as it is not supported in this release.

  2. Enter the following details:

    1. Select the type of HDFS from the HDFS Type drop-down menu.

      • If you select the Hortonworks HDFS type, enter the following information:

        • HDFS Username: Enter the username that Reporting Engine should claim when connecting to Hortonworks. For standard Hortonworks DCA clusters, this is 'gpadmin'.
        • HDFS Name: Enter the URL to access HDFS. For example, hdfs://hdm1.gphd.local:8020.
        • HBase Zookeeper Quorum: Enter a comma-separated list of the host names on which the ZooKeeper servers are running.
        • HBase Zookeeper Port: Enter the port number for the ZooKeeper servers. The default port is 2181.
        • Input Path Prefix: Enter the output path of the Warehouse Connector (/sftp/rsasoc/v1/sessions/data/<year>/<month>/<date>/<hour>) up to the year directory. For example, /sftp/rsasoc/v1/sessions/data/.
        • Output Path Prefix: Enter the location in HDFS where the data science job results are stored.
        • Yarn Host Name: Enter the Hadoop YARN resource manager host name in the DCA cluster. For example, hdm3.gphd.local.
        • Job History Server: Enter the Hadoop job history server address in the DCA cluster. For example, hdm3.gphd.local:10020.
        • Yarn Staging Directory: Enter the staging directory for YARN in the DCA cluster. For example, /user.
        • Socks Proxy: In a standard DCA cluster, most of the Hadoop services run in a local private network that is not reachable from Reporting Engine. In that case, run a SOCKS proxy in the DCA cluster and allow access to the cluster from outside. For example, mdw.netwitness.local:1080.

      • If you select the MapR HDFS type, enter the following information:

        • MapR Host Name: Enter the public IP address of any one of the MapR warehouse hosts.
        • MapR Host User: Enter a UNIX username on the given host that has access to execute map-reduce jobs on the cluster. The default value is 'mapr'.
        • MapR Host Password: (Optional) To set up password-less authentication, copy the public key of the "rsasoc" user from /home/rsasoc/.ssh/id_rsa.pub to the "authorized_keys" file of the warehouse host, located at /home/mapr/.ssh/authorized_keys, assuming that "mapr" is the remote UNIX user. (See the sketch after this procedure.)
        • MapR Host Work Dir: Enter a path to which the given UNIX user (for example, "mapr") has write access.
          Note: Reporting Engine uses the work directory to remotely copy the Warehouse Analytics jar files and start the jobs from the given host name. Do not use "/tmp", to avoid filling up the system temporary space. The given work directory is remotely managed by Reporting Engine.
        • HDFS Name: Enter the URL to access HDFS. For example, to access a specific cluster, maprfs:/mapr/<cluster-name>.
        • HBase Zookeeper Port: Enter the port number for the ZooKeeper servers. The default port is 5181.
        • Input Path Prefix: Enter the output path (/rsasoc/v1/sessions/data/<year>/<month>/<date>/<hour>) up to the year directory. For example, /rsasoc/v1/sessions/data/.
        • Input Filename: Enter the file name filter for Avro files. For example, sessions-warehouseconnector.
        • Output Path Prefix: Enter the location in HDFS where the data science job results are stored.
    2. Select the MapReduce Framework according to the HDFS type.

      Note: For the MapR HDFS type, select the Classic MapReduce framework. For the Hortonworks HDFS type, select the Yarn MapReduce framework.
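
For the MapR Host Password field above, password-less authentication can be set up from the Reporting Engine host roughly as shown below; the warehouse host address is a placeholder.

    # Run as the rsasoc user on the Reporting Engine host: copies
    # /home/rsasoc/.ssh/id_rsa.pub into /home/mapr/.ssh/authorized_keys
    ssh-copy-id -i /home/rsasoc/.ssh/id_rsa.pub mapr@<warehouse-host>

    # Verify that the Input Path Prefix resolves to Warehouse Connector output
    hadoop fs -ls /rsasoc/v1/sessions/data/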

Next, enable Kerberos authentication.

Enable Kerberos Authentication

  1. Select the Kerberos Authentication checkbox if the Warehouse has a Kerberos-enabled HIVE server.

    Kerberos Authentication screen

  2. Fill in the fields as follows:

    • Server Principal: Enter the principal used by the HIVE server to authenticate with the Kerberos Key Distribution Center (KDC) server.
    • User Principal: Enter the principal that the HIVE JDBC client uses to authenticate with the KDC server when connecting to the HIVE server. For example, gpadmin@EXAMPLE.COM.
    • Kerberos Keytab File: View the Kerberos keytab file location configured in the HIVE Configuration panel on the Reporting Engine General tab. (You can verify the keytab contents with the klist sketch after this procedure.)

    Note: Reporting Engine supports only data sources configured with the same Kerberos credentials, that is, the same User Principal and keytab file.

  3. Click Test Connection to test the connection with the values entered.
  4. Click Save.

    The added Warehouse data source is displayed in the Reporting Engine Sources tab.

  5. Click the add icon and select Available Services.

    The Available Services dialog box is displayed.

    Available services screen

  6. In the Available Services dialog box, select the service that you want to add as data source to the Reporting Engine and click OK.

    NetWitness Suite adds this as a data source available to reports and alerts against this Reporting Engine.

    Warehouse Configuration in Reporting Engine's Source tab.

    Note: This step is relevant only for an Untrusted model.
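
To confirm that the keytab file actually contains the User Principal you entered, you can list its entries on the Reporting Engine host; the file name below is a placeholder.

    # List the principals stored in the keytab (MIT Kerberos)
    klist -kt /var/netwitness/re-server/rsa/soc/reporting-engine/conf/hive.keytab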

Set a Data Source as the Default Source

To set a data source to be the default source when you create reports and alerts:

  1. Go to ADMIN > Services.
  2. In the Services list, select a Reporting Engine service.
  3. Click the actions icon and select View > Config.

    The Services Config View of Reporting Engine is displayed.

  4. Select the Sources tab.

    The Services Config View is displayed with the Reporting Engine Sources tab open.

  5. Select the source that you want to be the default source (for example, Broker).
  6. Click the Set Default checkbox.

    NetWitness Suite defaults to this data source when you create reports and alerts against this Reporting Engine.
