
AWS Deploy 11.5: Configure Packet Capture

Document created by RSA Information Design and Development on Sep 9, 2020. Last modified Sep 9, 2020. Version 2.


You can integrate any of the following third-party solutions with the Network Decoder to capture packets in the AWS cloud:

Integrate Gigamon GigaVUE with the Network Decoder

There are two main tasks to configure the Gigamon third-party tap vendor packet capture solution:

Task 1. Integrate the Gigamon Solution

Task 2. Configure a Tunnel on the Network Decoder

Task 1. Integrate the Gigamon Solution

Gigamon Visibility Platform on AWS is available through the AWS Marketplace and is activated by a BYOL license. A thirty-day free trial is also available.

For more information on the Gigamon solution, see the "Gigamon Visibility Platform for AWS Data Sheet" (https://www.gigamon.com/sites/default/files/resources/datasheet/ds-gigamon-visibility-platform-for-aws-4095.pdf).

For more information on the deployment details, see the "Gigamon Visibility Platform for AWS Getting Started Guide" (https://www.gigamon.com/sites/default/files/resources/deployment-guide/dg-visibility-platform-for-aws-getting-started-guide-4111.pdf).

After the Monitoring Session is deployed within Gigamon GigaVUE-FM, you can configure the tunnel on the Network Decoder.

Task 2. Configure Tunnel on the Network Decoder

  1. SSH to the Decoder.
  2. Enter the following commands.

    $ sudo ip link add tun0 type gretap local any remote <ip_address_of_VSERIES_NODE_TUNNEL_INTERFACE> ttl 255 key 0

    $ sudo ip link set tun0 up mtu <MTU-SIZE>

    $ sudo ifconfig

    Verify that the tunnel tun0 appears in the list of interfaces.

    $ sudo lsmod | grep gre

    Verify that the following kernel modules are loaded:

    ip_gre 18245 0

    ip_tunnel 25216 1

    If they are not loaded, execute the following commands to enable the modules.

    $ sudo modprobe act_mirred

    $ sudo modprobe ip_gre

  3. Create a firewall rule in the Network Decoder to allow traffic through the tunnel.
    1. Open the iptables file.
      vi /etc/sysconfig/iptables
    2. Append the line -A INPUT -p gre -j ACCEPT before the COMMIT statement.
    3. Restart iptables by executing the following command.
      service iptables restart
  4. Set the interface in the Network Decoder.
    1. Log in to NetWitness Platform, and select the decoder/config node in the Explore view for the Network Decoder service.
    2. Set capture.selected = packet_mmap_,tun0.

  5. (Conditional) If you have multiple tunnels on the Network Decoder:
    1. Restart the Decoder service after you create the tunnels on the Network Decoder.
    2. Log in to NetWitness Platform, select the decoder/config node in the Explore view for the Network Decoder service, and set the following parameters.

      capture.device.params = interfaces=tun0,tun1,tun2

      capture.selected = packet_mmap_,All


  6. Restart the Decoder service.
    $ sudo restart nwdecoder
    The Decoder is now ready to capture network traffic.
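As a quick sanity check (not part of the official steps, but consistent with the validation used for the other integrations in this guide), you can confirm on the Decoder that mirrored packets are actually arriving on the tunnel interface before relying on the capture configuration:

```shell
# Capture ten packets from the GRE tunnel interface created above.
# If the Gigamon monitoring session is forwarding traffic, packet
# summaries are printed; if the command sits idle, revisit the
# tunnel and firewall configuration.
sudo tcpdump -i tun0 -c 10
```

This requires a live tunnel interface, so it only works after the Gigamon monitoring session is up and forwarding.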

Integrate Ixia with the Network Decoder

You must complete the following tasks to integrate the Network Decoder with Ixia CloudLens.

Task 1. Deploy Client Machines

Task 2. Create CloudLens Project

Task 3. Install Docker Container on Decoder

Task 4. Install Docker Container on Clients

Task 5. Map Network Decoder to Ixia Clients

Task 6. Validate CloudLens Packets Arriving at Decoder

Task 7. Set Interface in Network Decoder

Task 1. Deploy Client Machines

  • Deploy client machines onto which you want to route the traffic to the Network Decoder. See the Ixia CloudLens documentation (https://www.ixia.cloud/help/Default.htm) for specifications needed for supported client machines.
  • For the client machines (as well as the Decoder machine), the following ports must be open in the AWS Security Group inbound rules: UDP 19993 from all sources, and TCP 22 from the Admin IP.
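If you manage the Security Group from the AWS CLI rather than the console, the inbound rules above can be added as follows (a sketch; the group ID and the Admin IP are placeholders):

```shell
# Allow the CloudLens agent traffic (UDP 19993) from all sources.
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol udp --port 19993 --cidr 0.0.0.0/0

# Allow SSH (TCP 22) from the Admin IP only.
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 22 --cidr 203.0.113.10/32
```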

Task 2. Create CloudLens Project

Complete the following steps to create a new project and get your project key.

  1. Get CloudLens login credentials and access to a free trial by creating an Ixia login account at https://www.ixiacom.com/products/cloudlens-trial-a.
  2. Go to the Cloudlens public site (https://www.ixia.cloud).
  3. Click + (add) to create a new project with a name of your choosing (for example, NetWitness-IxiaIntegration).

  4. Click on your newly created project and make note of your Project Key.
    You need the key later for the API key configured on the Host & Tool agents.

Task 3. Install Docker Container on Decoder

Complete the following steps to install the Docker container onto the Network Decoder.

  1. SSH to the Network Decoder.
  2. Enter the following commands to install the Docker service on the Decoder.
    # yum clean all
    # yum -y install docker
  3. Enter the following command string to start the Docker service.
    # service docker start
  4. Enter the following commands to:
    • Access the Ixia repository and obtain the cloudlens-agent container.
    • Replace the ProjectKeyFromIxiaProjectPortal variable, which identifies your project key in the Ixia portal, with the Project Key you created in Task 2. Create CloudLens Project.

    sudo docker run \
    --name cloudlens \
    -v /:/host \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -d --restart=always \
    --net=host \
    --privileged \
    ixiacom/cloudlens-agent:latest \
    --server agent.ixia.cloud \
    --accept_eula y \
    --apikey ProjectKeyFromIxiaProjectPortal

Task 4. Install Docker Container on Clients

Complete the following steps to install the Docker container onto the client machines from which you want to route traffic to the Network Decoder.

  1. SSH to the AWS Client instance.
  2. Enable root access to OS CLI (for example sudo su -).
  3. Enter the following commands to install Docker.
    # yum -y install docker

    Caution: The above example installs the Docker engine on CentOS 7. The instructions may vary slightly for other Linux distributions. For more information, see the Docker documentation at https://docs.docker.com/install/.

  4. Enter the following commands to start the Docker service.
    # service docker start
  5. Enter the following commands to:
    • Access the Ixia repository and obtain the cloudlens-agent container.
    • Replace the variable ProjectKeyFromIxiaProjectPortal, which identifies your project key in Ixia portal, with the Project Key you created in the previous section.
      sudo docker run \
      --name cloudlens \
      -v /:/host \
      -v /var/run/docker.sock:/var/run/docker.sock \
      -d --restart=always \
      --net=host \
      --privileged \
      ixiacom/cloudlens-agent:latest \
      --server agent.ixia.cloud \
      --accept_eula y \
      --apikey ProjectKeyFromIxiaProjectPortal

Warning: If you cut and paste commands from a PDF, first paste them into a text editor such as Notepad to confirm the syntax before pasting into the OS CLI. Direct cut and paste between a PDF and the CLI can introduce dashes or other special characters that should not be part of the commands.

Task 5. Map the Network Decoder to Ixia Clients

Complete the following steps to map the Network Decoder to the client machines to route the traffic to the Network Decoder.

  1. Go to the Cloudlens public site (https://www.ixia.cloud).
  2. Double-click on your project to open it.
  3. Click the Define Group button or the Instances count.
    You should see two instances listed, one for your decoder and the other for the client machines.
  4. Filter for the decoder instance and click Save Search.
  5. Choose Save as a tool.
  6. Specify a name for the tool, and the Aggregation Interface.
    Use a meaningful name for the Aggregation Interface (for example, cloudlens0). This is a virtual interface that appears in the OS where your tool is installed. You need to instruct your tool to listen on that interface in a subsequent step.
  7. Filter the client host instance from the list, and click Save Search.
  8. Navigate back to the top-level view of the project.
    Your client machine instance and Decoder instance are now displayed.
  9. Drag a connection between your client machine instance and Decoder instance to allow the flow of packets.

Task 6. Validate CloudLens Packets Arriving at Decoder

Complete the following steps to validate that packets are actually arriving at the Network Decoder.

  1. SSH to the Network Decoder.
  2. Enter the following command.
    ifconfig
    The new aggregation interface you created is displayed.
  3. Generate traffic from the client OS instance CLI (for example, wget http://www.google.com/).
  4. SSH to Network Decoder to go to your Network Decoder instance CLI.
  5. Enter the following command to look for suitable results in the tcpdump.
    tcpdump -i cloudlens0

Task 7. Set the Interface in the Network Decoder

Complete the following steps in the Network Decoder to set the interface to use for the Ixia integration.

  1. SSH to the Network Decoder.
  2. Enter the following command to restart the Decoder service.
    $ sudo restart nwdecoder
    The Network Decoder is now set to capture network traffic.
  3. Log in to NetWitness Platform and click (Admin) > Services.
  4. Select a Decoder service and click > View > Explore.
  5. Expand the decoder node and click config to view the configuration settings.
  6. Set the capture.selected parameter to the following value.
    packet_mmap_,cloudlens0(bpf)
  7. (Conditional) - If you have multiple capture interfaces on the Network Decoder, set the parameters with the following values.
    capture.device.params --> interfaces=cloudlens0,cloudlens1
    capture.selected --> packet_mmap_,All
  8. Restart the Decoder service after you set the capture.selected parameter.

Integrate f5 BIG-IP with the Network Decoder

BIG-IP Virtual Edition (VE) is an inline virtual server and load balancer. A common use case is for the f5 box to be a virtual web server that presents a single IP address / host name and manages requests to a pool of web servers in the cloud.

All traffic to RSA NetWitness Platform flows through the f5 BIG-IP VE virtual server.

The virtual server functions of the BIG-IP clone all traffic to a designated computer by rewriting MAC addresses and sending the packets onto a subnet shared with the destination sniffer. This guide describes how to set up the Decoder as the sniffer.

f5 BIG-IP VE Deployment Information

f5 BIG-IP VE on AWS is available through the AWS Marketplace and is activated by a BYOL license. A thirty-day free trial is also available.

For more information on this solution, see the f5 BIG-IP DNS Data Sheet (https://www.f5.com/pdf/products/big-ip-dns-datasheet.pdf).

Task 1: Set Up a BIG-IP VE Virtual Server Instance

Set up a BIG-IP VE Virtual Server Instance according to the instructions in the "BIG-IP Virtual Edition 12.1.0 and Amazon Web Services: Multi-NIC Manual" (https://support.f5.com/kb/en-us/products/big-ip_ltm/manuals/product/bigip-ve-multi-nic-setup-amazon-ec2-12-1-0.html). Complete all the steps through the last step, "Creating a virtual server."

This virtual server performs packet capture. You may need to create multiple virtual servers depending on your traffic volume.

As part of creating the virtual server, you must have at least one server in your NetWitness Platform domain to handle the traffic routed by the virtual server (for example, you can create another instance in AWS to host the internal server).

Task 2: Create a Clone Pool

  1. Make sure that your Decoder has a network interface on the same subnet as one of the network interfaces on the BIG-IP VE instance.
    The clone pool sends packets to the Decoder by rewriting MAC addresses and sending them out a network interface. MAC address rewriting can be used to route packets to another subnet.
  2. Set up the clone pool within the BIG-IP VE virtual server according to the instructions in "K13392: Configuring the BIG-IP system to send traffic to an intrusion detection system (11.x - 13.x)" article (https://support.f5.com/kb/en-us/solutions/public/13000/300/sol13392.html).
    This document explains how to create the clone pool, and how to make an existing virtual server copy traffic to the clone pool. In this case, we will place the Decoder instance in the clone pool.
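As a sketch of what the K13392 procedure produces, the clone pool and its attachment to the virtual server can also be created from the BIG-IP tmsh shell (the pool name, virtual server name, and Decoder IP below are placeholders; the port value 0 is arbitrary, since the port number does not matter for cloned traffic):

```shell
# Create a clone pool containing the Decoder's capture-interface IP.
tmsh create ltm pool decoder_clone_pool members add { 10.0.1.50:0 }

# Attach the clone pool to the existing virtual server so that
# server-side traffic is copied to the Decoder.
tmsh modify ltm virtual my_virtual_server clone-pools add { decoder_clone_pool { context serverside } }
```

Use context clientside instead if you want the traffic copied as seen on the client side of the virtual server; K13392 describes both options.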

Guidelines

The following guidelines will help you to configure packet capture correctly using BIG-IP VE.

  • The Decoder instance must have its own IP address on one of the same subnets as BIG-IP VE. BIG-IP uses that IP address to identify the Decoder as being part of the clone pool.
  • When adding the Decoder instance to the clone pool, BIG-IP asks for a port number in addition to the IP address. This port number does not matter for the cloned traffic. The Decoder will receive all the cloned traffic, regardless of what port number was used here.
  • By default, the AWS subnet shared by the Decoder and BIG-IP VE will not allow the cloned traffic to travel from the BIG-IP VE interface to the Decoder interface. You must disable the source/dest. check on both the Decoder and BIG-IP VE network interfaces in AWS.
  • The Decoder instance must have a single network interface, eth0, by default. The Decoder captures traffic on this interface, but it may also receive administrative traffic on this interface. RSA recommends using network rules to filter out ssh and nwdecoder traffic from the capture stream. These are ports 22 (ssh) and 50004/56004 (nwdecoder).
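The source/destination check mentioned above can be disabled from the AWS console, or, as a sketch, with the AWS CLI (both ENI IDs are placeholders for the Decoder and BIG-IP VE network interfaces):

```shell
# Disable the source/destination check on the Decoder's interface.
aws ec2 modify-network-interface-attribute \
    --network-interface-id eni-0aaaaaaaaaaaaaaaa \
    --no-source-dest-check

# Disable the source/destination check on the BIG-IP VE's interface.
aws ec2 modify-network-interface-attribute \
    --network-interface-id eni-0bbbbbbbbbbbbbbbb \
    --no-source-dest-check
```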

Troubleshooting Tips

There are several areas to troubleshoot if packets are not being received by the Decoder.

  • Make sure that the BIG-IP VE is sending the packets out of the correct interface.
    The BIG-IP VE instance contains tcpdump. Use it to verify the cloned packets are being sent out the expected interface. If they are not, there is a problem in the setup of the clone pool or the virtual server.
  • Make sure that the Decoder is receiving packets.
    The Decoder has tcpdump installed on it. Use it to verify that the Decoder is receiving packets. If the Decoder is not capturing packets, make sure that:
    • The AWS source/dest. check is turned off.
    • The Decoder is on the same subnet as the interface the BIG-IP VE is using to clone packets.

Integrate VPC Traffic Mirroring with the Network Decoder

VPC Traffic Mirroring allows users to capture and inspect network traffic and analyze packets without using any third-party packet forwarding agents. The solution provides insight into and access to network traffic across the VPC infrastructure. Users can copy network traffic at any ENI (Elastic Network Interface) in the VPC and send it to NetWitness Platform to analyze, monitor, and troubleshoot performance issues.

You must complete the following tasks to integrate the Network Decoder with VPC Traffic Mirroring:

Task 1. Configure the Network Decoder as a VPC Traffic Mirroring Destination

Task 2. Configure a VPC Traffic Mirroring Filter

Task 3. Configure a VPC Traffic Mirroring Session

Task 4. Set Up a New VXLAN Interface on the Network Decoder

Task 5. Validate VPC Traffic Mirroring Packets Arriving at the Network Decoder

Task 1. Configure the Network Decoder as a VPC Traffic Mirroring Destination

  1. Open the VPC service console view at https://console.aws.amazon.com/vpc/home.
  2. In the navigation panel, select Traffic Mirroring.
  3. Select Mirror Targets.
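As an alternative to the console steps above, the mirror target can also be created with the AWS CLI (a sketch; the ENI ID is a placeholder for the Network Decoder's capture interface):

```shell
# Create a traffic mirror target that points at the Network
# Decoder's elastic network interface.
aws ec2 create-traffic-mirror-target \
    --network-interface-id eni-0123456789abcdef0 \
    --description "NetWitness Network Decoder mirror target"
```

The command returns a TrafficMirrorTargetId (tmt-...), which is needed later when you create the mirror session.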

Task 2. Configure a VPC Traffic Mirroring Filter

You must configure a VPC Traffic Mirroring Filter to send only the required packets to the Network Decoder. You can determine whether inbound traffic, outbound traffic, or both need to be captured.

Note: Make sure that UDP port 4789 is open on the AWS instance of the Network Decoder.
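As a sketch, a filter and a rule that mirrors all inbound TCP traffic can be created with the AWS CLI (the filter ID below is a placeholder for the ID returned by the first command):

```shell
# Create an empty traffic mirror filter.
aws ec2 create-traffic-mirror-filter \
    --description "NetWitness capture filter"

# Add a rule to the filter: accept all inbound TCP traffic
# (protocol 6). Add further rules, or an egress rule, as needed.
aws ec2 create-traffic-mirror-filter-rule \
    --traffic-mirror-filter-id tmf-0123456789abcdef0 \
    --traffic-direction ingress \
    --rule-number 100 \
    --rule-action accept \
    --protocol 6 \
    --source-cidr-block 0.0.0.0/0 \
    --destination-cidr-block 0.0.0.0/0
```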

Task 3. Configure a VPC Traffic Mirroring Session

You must configure a VPC Traffic Mirroring Session to mirror the traffic over a communication channel between the source ENI and the destination ENI.
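The session ties the pieces together: the source ENI to mirror, the mirror target, and the filter. As a sketch with the AWS CLI (all IDs are placeholders; the VNI value must match the VXLAN ID you use on the Decoder in the next task):

```shell
# Create the mirror session linking the source ENI to the
# target and filter created in the previous tasks.
aws ec2 create-traffic-mirror-session \
    --network-interface-id eni-0aaaaaaaaaaaaaaaa \
    --traffic-mirror-target-id tmt-0123456789abcdef0 \
    --traffic-mirror-filter-id tmf-0123456789abcdef0 \
    --session-number 1 \
    --virtual-network-id 123
```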

Task 4. Set Up a New VXLAN Interface on the Network Decoder

To capture the VXLAN-encapsulated UDP traffic, you must create an interface and tunnel on the Network Decoder by performing the following steps.

  1. SSH to the Decoder.
  2. Enter the following commands.

    sudo ip link add tun0 type vxlan id <VXLAN ID> local any dev <primary interface, for example eth0> dstport 4789

    sudo ip link set tun0 up

    ifconfig

  3. Create a firewall rule in the Network Decoder to allow traffic through the tunnel.
    • Open the iptables file using the command vi /etc/sysconfig/iptables.
    • Append the line -I INPUT -p udp -m udp --dport 4789 -j ACCEPT.
    • Restart iptables by using the following commands.
      service iptables restart
      service iptables status
  4. Set the interface in the Network Decoder.
    1. Log in to NetWitness Platform, and select the decoder/config node in the Explore view of the Network Decoder service.
    2. Set capture.selected = packet_mmap_,tun0.
  5. (Conditional) If you have multiple tunnels on the Network Decoder:
    1. Restart the Decoder service after you create the tunnels on the Network Decoder.
    2. Log in to NetWitness Platform, select the decoder/config node in the Explore view of the Network Decoder service, and set the following parameters.

      capture.device.params = interfaces=tun0,tun1,tun2

      capture.selected = packet_mmap_,All

  6. Restart the Decoder service.
    $ sudo restart nwdecoder
    The Network Decoder is now ready to capture network traffic.

Task 5. Validate VPC Traffic Mirroring Packets Arriving at the Network Decoder

Perform the following steps to validate that the Network Decoder is receiving the network data (packets) successfully.

  1. Generate traffic from the client OS instance CLI (for example, wget http://www.google.com/).
  2. Enter the tcpdump -i tun0 command to look for suitable results in the tcpdump.
  3. Verify that NetWitness Platform displays the corresponding meta values.

Note: You can mirror traffic from an EC2 instance that is supported by the AWS Nitro system (A1, C5, C5d, C5n, I3en, M5, M5a, M5ad, M5d, p3dn.24xlarge, R5, R5a, R5ad, R5d, T3, T3a, and z1d).

Note: For more information, see the "New – VPC Traffic Mirroring" documentation at https://aws.amazon.com/blogs/aws/new-vpc-traffic-mirroring/.
