Warehouse Analytics: Define a Job

Document created by RSA Information Design and Development on Jul 31, 2016
Version 1

To define a Warehouse Analytics Job, you must first import the Warehouse Analytics model from the Live server and then schedule the job.


Before you begin, make sure that you understand the following:

Note: It is recommended that you always deploy Warehouse Analytics models from Live.

Perform the following steps to add and schedule a job:

  1. In the Security Analytics menu, click Reports.
    The Manage tab is displayed.
  2. Click Warehouse Analytics.
    The Warehouse Analytics view is displayed.
  3. In the Warehouse Analytics toolbar, click the Add icon.
    The Job definition tab is displayed.
  4. To run the job according to its schedule, select the Enable checkbox.
  5. In the Name field, enter a name for the job configuration.
  6. In the Model field, click Browse to select the jar file to import.
    Security Analytics provides a file system view.
  7. Locate the jar file and click Open.
    The file is added to the job definition view.
  8. In the Warehouse field, select the data source configured on the Reporting Engine configuration page (for example, Pivotal or MapR).
  9. Do one of the following:
    - To run the query over a specific number of past days, select the number of days in the Past field.
    - To run the query over a specific time frame, select the From and To dates from the calendar.

Note: When you upgrade from 10.5, the jobs for the Suspicious Domains, Suspicious DNS Activity, and Host Profile models are deprecated and disabled. They appear under the Manage > Warehouse Analytics tab as "DEPRECATED" jobs and can be used as a reference to create new jobs.

  10. In the Advanced Options field, do the following:
    • In the Model Params field, enter the DS model or job parameters from the List Selection window. For more information on using a whitelist, see Use a Whitelist in a Warehouse Analytics Job.
    • In the HDFS Params field, enter the HDFS configuration parameters.
    • In the MapReduce Params field, enter the Hadoop or MapR configuration parameters.
    • In the SandBox JVM Params field, enter the JVM or "-D" system parameters for the JVM executing the DS model.
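As an illustration only, these advanced fields typically take key=value or "-D"-style entries. The parameter values below are hypothetical examples of the format, not defaults shipped with the product; actual names and values depend on your deployment:

```text
# Hypothetical examples only -- adjust to your environment
Model Params:        whitelist=domain_whitelist.csv
HDFS Params:         dfs.replication=2
MapReduce Params:    mapreduce.job.reduces=4
SandBox JVM Params:  -Xmx2048m -Dlog.level=INFO
```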

Note: When you upload the job, several important parameters are populated automatically. If you do not specify parameters, the job runs with the default values.

  11. Click Save.
    The job runs as scheduled and produces the configured outputs.

Next Steps

You can view the scheduled job on the Warehouse Analytics view.
