Splunk App 1.0.9 & Technology Add-On 1.0.9
- WORKFLOW DIAGRAM
- DOWNLOAD FILES
- INSTALLATION LOCATIONS
- INSTALLATION INSTRUCTIONS
- Recommended Best Practices
- Installation through web user-interface for both Splunk Enterprise and Splunk Cloud Instances
- Installation through Command Prompt for Splunk Enterprise Instances only
- Configure the TruSTAR Technology Add-Ons (TA)
- Best Practices for Configuring Rest Input
- Change Macro Definition
- Usage & App Commands
- Splunk ES Setup & Configuration
This article provides a description of the Splunk App built for TruSTAR and a step-by-step guide to install, set up, and troubleshoot that app.
The Splunk App allows users to use context from TruSTAR’s IOCs and incident reports within their Splunk analysis workflow. TruSTAR arms security teams with high-signal intelligence from sources such as internal historical data, open and closed intelligence feeds, and anonymized incident reports from TruSTAR’s vetted community of enterprise members.
- Dashboard displaying IOCs and reports from TruSTAR that match log and event data stored in your Splunk indexes.
- View TruSTAR reports in the Splunk app and launch IOC search and investigations against Splunk data.
- Splunk ES capability to generate notable events from matched data.
Splunk Engineer - Knows a lot about Splunk, how to install apps/add-ons, craft searches, optimize data ingestion/storage, etc., the Splunk application's file/directory structure, and how to manipulate the application's behavior by editing config file stanzas at the terminal.
Sysadmin - Responsible for the upkeep, configuration, and reliable operation of the systems that will house the TruSTAR Splunk app.
TruSTAR integration user (CTI Analyst / Splunk user) - This person will be using the TruSTAR app on a daily basis as part of their workflow. The user has understanding of the use cases that are supported by the app and how it can help with incident detection and triage.
New Splunk Updates
TruSTAR's Splunk app has received a refresh. The most current version of the TruSTAR App is v1.0.9 and the Technology Add-on version is v1.0.9. The updates extend the capabilities of our Splunk app so users can maintain parity in experience between the TruSTAR platform and the Splunk app.
What was updated?
- Splunk App to version 1.0.9
- Splunk TA to version 1.0.9
What was improved?
- Improved Ingestion Options - The update gives users the ability to ingest not only TruSTAR reports but also the IOC lists they submitted to TruSTAR using IOC Management. IOC Management is a capability that allows users to submit large collections of IOCs to TruSTAR. Users can now ingest their IOC lists into Splunk to be correlated against.
- Splunk App Dashboard Update - The new App dashboard is more streamlined, making relevant information more visible to the user. Users can now see the sources/enclaves from which indicators were ingested into Splunk.
- Optimized Queries - The TruSTAR app ingests data more efficiently and has optimized Splunk queries.
How can I update to the newest version?
- Refer to the FAQ section "What to do when the TruSTAR Splunk App is updated?"
- Users who already have TruSTAR App version 1.0.3 / TA version 1.0.4 or above installed can update to the newest version through the UI. Versions prior to these require a clean install to update to the newest app version.
- New users should follow the instructions below to download the most up-to-date versions of the App and TA.
The details below summarize the prerequisites and requirements for the TruSTAR Splunk app to work. Please make sure the components below are downloaded/available.
Splunk Enterprise 6.6.0 or above.
Splunk Enterprise can be downloaded from here: https://www.splunk.com/en_us/download/splunk-enterprise.html
To install Splunk Enterprise, follow the guidelines at the link below: http://docs.splunk.com/Documentation/Splunk/latest/SearchTutorial/InstallSplunk
Set environment variable for Splunk Home
export SPLUNK_HOME=__________________________[insert path to Splunk folder]_____________________
In OS X, the Splunk folder path is usually: /Applications/Splunk/
In Ubuntu Linux, the Splunk folder path will likely be: /opt/splunk/
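A quick sketch of setting and verifying the variable, using the typical default paths mentioned above (adjust to your own installation):

```shell
# Set SPLUNK_HOME to your Splunk installation directory.
# /opt/splunk is a typical Linux default; on macOS it is
# usually /Applications/Splunk (assumption: default install location).
export SPLUNK_HOME=/opt/splunk

# The Splunk CLI lives under $SPLUNK_HOME/bin.
echo "$SPLUNK_HOME/bin"
```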
For Splunk Enterprise instances:
These bundles are required to successfully install the TruSTAR app on Splunk Enterprise instances. Note: These bundles can only be installed on Splunk Enterprise instances. The TruSTAR app and TA for a Splunk Cloud instance must be downloaded from the Splunkbase website (see the Splunk Cloud section below).
This bundle fetches reports and IOC data from TruSTAR using a modular input and indexes them, after which users can search the data using the Splunk search tool. This bundle needs to be installed before the next bundle. Current version: 1.0.9
This bundle contains the dashboards that display data received from TruSTAR Station. Current version 1.0.9
For Splunk Cloud instances:
Installation bundle files to be used with a Splunk Cloud instance must be downloaded from the Splunkbase website here:
TruSTAR App: https://splunkbase.splunk.com/app/3678/
TruSTAR Technology Add-On: https://splunkbase.splunk.com/app/3679/
Single-Instance Splunk Enterprise Deployment
In a single-server Splunk deployment, a single instance of Splunk Enterprise serves as the data-collection node, indexer, and search head all in one. In such scenarios, install both the TruSTAR Application and Technology Add-On on this instance.
Multi-Instance Distributed Splunk Enterprise Deployment
In a distributed deployment, Splunk Enterprise is installed on at least two instances: one node functions as the search head, while the remaining nodes serve as indexers and data-collection nodes. The TruSTAR Application should only be installed on the search head node. The TruSTAR Technology Add-on needs to be installed on all indexer and data-collection nodes.
If you have separate data-collection nodes, please ensure they are running the full Splunk Enterprise version.
Managed Splunk Cloud Deployment
In a managed Splunk Cloud deployment, data indexing takes place on the cloud instance. Data collection, however, can take place on an on-premises Splunk Enterprise instance used as a heavy forwarder.
- Customer’s Heavy Forwarder on which the TA will reside must be able to reach https://station.trustar.co. (firewall / proxy rules)
- Customer’s API limits need to be checked.
- If the customer uses ES, set up an enclave for their notable events and, during setup, have the customer add that enclave to the list of enclaves they import to Splunk.
- Customer’s Splunk Station user account needs to be given access only to the enclaves they want to import from and submit notables to.
- Customer identifies which of their Splunk indexes contain which types of IOCs.
- User setting up integration should know which indexes we want the App to monitor and for which types of IOCs.
- User setting up integration should know which Station enclaves we want to corroborate against the indexes.
- User setting up integration should have an account on Station.
- User setting up integration should have access to the customer’s Splunk user account.
Recommended Best Practices
- Customer / prospect creates an email account for their Splunk integration (ex: firstname.lastname@example.org) on their company's email server.
- Customer / prospect creates a new index in Splunk environment to which the TA will copy their Station data.
- Prior to setup, the user identifies which enclaves/data in TruSTAR they want to copy into their Splunk index.
- Customer / prospect should create TruSTAR user account tied to the Splunk integration email address (above). The customer's Splunk instance will use that Station user account's API credentials. Customer / prospect gives this Station user account read access only to the enclaves whose data the customer wants to copy into their Splunk index that houses their Station data.
- Customer / prospect should install the TruSTAR App for Splunk on all search heads they want to be able to use it on. SplunkCloud customers have email@example.com install the TruSTAR app on their SplunkCloud instance.
- Customer / prospect should install the TruSTAR Technology Add-On on one Heavy Forwarder.
- Users should send all notable events to a newly created enclave in TruSTAR. Ask your account executive about setting up a new enclave for your notable events in TruSTAR.
Installation through web user-interface for both Splunk Enterprise and Splunk Cloud Instances
For Manual Downloads:
Download the TruSTAR Technology add-on and TruSTAR app bundles. Download the bundle from the "manual installation section" above.
After successfully downloading follow these steps:
- Select Apps -> Manage Apps from the main menu bar.
- First upload the Technology Add-on for TruSTAR file.
- Next upload the TruSTAR App for Splunk file.
- After successfully uploading the two files go to the App Configuration section.
Installation through Command Prompt for Splunk Enterprise Instances only
To install from the command window, go to the $SPLUNK_HOME/bin folder and execute the following commands:
./splunk install app TA-trustar.spl
./splunk install app Trustar.spl
Configure the TruSTAR Technology Add-Ons (TA)
Follow these instructions for all Technology Add-Ons in your architecture.
- Login to your Splunk node.
- If desired, create a new index in which to store the data the TA will import from TruSTAR Station (TruSTAR recommends doing this, but it's not required). By default, the TA will store data in the "main" index.
- Best Practice: CREATE 2 INDEXES. The first index should be named "trustar", and that's where we'll index the indicators and reports that your TA will import from Station. The second index should be named "trustar_app_ta_logs", and that's where we'll index the logs generated by the TruSTAR App and TA, in case we need to troubleshoot / debug.
- Go to Settings->Data inputs.
- Select "TruSTAR Configuration" on the next page.
- Fill in the configuration details (see Table below for more details).
- Select "Enable Data Collection" to begin ingesting data from the TruSTAR enclaves specified in the "Enclave IDs" box. No indicators or reports will be ingested into Splunk if "Enable Data Collection" is not selected.
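The two-index best practice in step 3 above can also be expressed directly in indexes.conf on the indexers. This is a sketch only, assuming the default $SPLUNK_DB volume paths:

```ini
# indexes.conf sketch — paths assume the default $SPLUNK_DB volume
[trustar]
homePath   = $SPLUNK_DB/trustar/db
coldPath   = $SPLUNK_DB/trustar/colddb
thawedPath = $SPLUNK_DB/trustar/thaweddb

[trustar_app_ta_logs]
homePath   = $SPLUNK_DB/trustar_app_ta_logs/db
coldPath   = $SPLUNK_DB/trustar_app_ta_logs/colddb
thawedPath = $SPLUNK_DB/trustar_app_ta_logs/thaweddb
```

Creating the indexes through the UI (Settings -> Indexes) is equivalent and avoids hand-editing configuration files.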
Rest Input Name
The name of rest modular input. This can be any name you like, but each Modular Input's name must be unique.
Name it "trustar001" - no caps, no special characters, alphanumeric characters only. Special characters cause problems. If you need to delete this REST input and create a new REST input in the future, you'll likely need to give the new REST input a new name (Ex: "trustar002").
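The naming constraint above (lowercase alphanumeric characters only) can be sanity-checked with a small shell snippet before saving the input — a sketch, where "trustar001" is the recommended name from the text:

```shell
# Validate a candidate REST input name: lowercase letters and digits only.
name="trustar001"
if printf '%s' "$name" | grep -Eq '^[a-z0-9]+$'; then
  echo "valid"
else
  echo "invalid"
fi
```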
URL to Connect
Use https://station.trustar.co. This is the TruSTAR Station URL from which data is collected by executing API calls.
API Authentication Key
Your TruSTAR API Key. The App and TA use this for making API calls. Find this key in the TruSTAR Station web interface under Settings-> API. How to find your API Key
The key is shown in clear text when a new modular input is created.
When the modular input is saved, the authentication key is stored encrypted under Splunk's /storage/passwords endpoint.
When the modular input is edited, this field will be blank.
Your TruSTAR API Secret. It is used for making API calls. Available under Settings-> API on TruSTAR Station. How to find your API Secret
The secret is shown in clear text when a new modular input is created.
When the modular input is saved, the secret key is stored encrypted under Splunk's /storage/passwords endpoint.
When the modular input is edited, this field will be blank.
Date (UTC in "YYYY-MM-DD hh:mm:ss" format)
Submission date/timestamp of the oldest report you want to import into Splunk. Leaving this blank imports data from the 90 days prior to the moment you successfully save these configurations.
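For example, the 90-day default cutoff in the required format can be generated with GNU date — a sketch; BSD/macOS date uses different flags:

```shell
# UTC timestamp 90 days in the past, in "YYYY-MM-DD hh:mm:ss" format (GNU date).
date -u -d '90 days ago' '+%Y-%m-%d %H:%M:%S'
```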
SSL Certificate Path
Path to the SSL certificate to use when executing API requests to TruSTAR Station. No path is needed in the case of a CA-signed certificate.
Enable Data Collection
Enabling data collection will cause the TA to begin importing data from the TruSTAR enclaves specified in the Enclave IDs field.
Enter Enclave IDs (the alphanumeric ID next to the enclave name in TruSTAR Station) from which to import data. To import data from multiple enclaves, separate each enclave ID with a comma. Retrieving your Enclave IDs
HTTPS Proxy Address
Proxy address to use for communication with the TruSTAR station, e.g. http://10.10.1.10
HTTPS Proxy Port
Proxy port to use for communication with the TruSTAR station e.g. 3128
HTTPS Proxy Username
Proxy username. Your system administrators / helpdesk should be able to give you this.
HTTPS Proxy Password
Proxy password. Your system administrators / helpdesk should be able to give you this.
Enclave types for fetching Priority Score
Recommended value: leave this field empty, if possible.
Polling interval in seconds. This is the amount of time the TA waits before again polling the TruSTAR enclaves whose IDs you entered in the "Enclave IDs" box for new indicators / reports to import into Splunk.
Recommended value: "86400" (once / day)
Standard Splunk field with options: automatic, manual. The default value in our case is automatic. Validation prevents users from changing this from the UI, because the sourcetype is set in code.
This parameter lets the user decide which index to use for TruSTAR data. The user needs to ensure the index already exists in the Splunk environment (it should have been created in Step 1 of "Configure the TruSTAR Technology Add-Ons"). If no value is provided, the TA imports data from TruSTAR Station into the "main" index by default. If the user changes this setting from the default, the user MUST follow the "Change Macro Definition" instructions below.
Click the Next button at the top after adding each value for the modular input in the form.
Best Practices for Configuring Rest Input
Rest Input Name: Name it "trustar001" - no caps, no special characters, alphanumeric characters only. Special characters cause problems. If you need to delete this REST input and create a new REST input in the future, you'll likely need to give the new REST input a new name (Ex: "trustar002").
API Key & Secret: Before making any changes to the REST input settings, disable the REST input you currently have configured (this is done in the Settings -> Data Inputs -> TruSTAR Configuration screen). Attempting to change the REST input settings while it is running in the background often fails with various error messages.
If you're going to change the enclaves from which you download, or the time window over which you download, delete your current TruSTAR REST input and create a new one with a name different from the one you just deleted. The single time window applies to all enclaves: adding an enclave to the list after the REST input has run for a while causes it to bring in that enclave's data only from its most recent bookmarked time. It will not reach back to the initial time to catch that enclave up with the data already imported from the enclaves you originally specified; it simply keeps going from whatever time it's presently at. If that's acceptable to you, you don't need to delete the REST input and create a new one. If not, delete it and create a new one with a different name.
Priority Scores: Don't have the TA fetch any priority scores. Our priority scoring engine is currently undergoing a revamp, and enabling it drastically increases the time it takes to get indicators into your index. You control this by leaving the "Enclave Types for Fetching Priority Score" field in the REST input settings blank. If you're an on-prem Splunk user (you manage your own Splunk architecture, whether it resides in a cloud service or your own data center), you should be able to leave this field blank and skip the next paragraph.
If you're a SplunkCloud customer, the UI might not allow you to leave this field blank; if that's the case, put "OPEN" in this field and then don't include any open-source enclaves in the "Enclave IDs" field. Unfortunately, the page in the Station web UI from which you obtain enclave IDs doesn't specify each enclave's type, but on the page where you view reports, the enclaves are grouped by type into tiles named "Closed Source", "Open Sources", and "Intel Researchers". Try to avoid having your TA import data from any of the "Open Sources" enclaves into your Splunk index. To complete this optimization, ensure that the Station user account whose API credentials you use in your Splunk REST input has read permissions only to the enclaves you want to import to your Splunk index.
Polling Interval: avoid importing data from TruSTAR far back in time, if you can. The further back in time you tell the TA to reach, the longer the import process takes (more reports and indicators have to be downloaded, and more priority scores obtained if you didn't successfully shut that functionality off, before the indicators actually post to your index). Large time windows also become large queries against Station's databases, which take a long time to process, and Station times out database requests that run too long. You tweak this in the "Date" field in the REST input settings. If you need to reach further back in time for certain enclaves than you do for others, you can set up additional copies of the TA on other heavy forwarders in your Splunk architecture, each TA configured to import from a unique set of enclaves and time windows, but all of them writing to the same index so the TruSTAR App can see and search it all.
Enclave Selection: avoid importing from more enclaves than you need to. Identify the enclaves you believe to be most pertinent to your needs, and try to import only those. If you really want to import data from a set of enclaves whose data quality/reliability is unknown to you, consider setting up the TruSTAR TA on a separate Splunk Heavy Forwarder and having that secondary TA import data from those enclaves into the same index into which your primary heavy forwarder/TA pushes the important/known/vetted data.
User Account: Make a new user account on Station specifically for use by your Splunk TA. This should be a service account of sorts, tied to a team email address. Have an email address created (e.g. firstname.lastname@example.org) with its traffic redirected by the mail server to your team address, which is probably something like email@example.com. Then go to Station and make a new user account tied to that email address.
Enclave Access / Permissions: Make sure that the new user has view access only to the enclaves you want it to import to Splunk. Use its API credentials in your Splunk REST input settings.
WARNING: If you decide later that you want to import data from additional enclaves into Splunk, make sure to edit the Station user account's enclave permissions accordingly, or you'll bump up against the time-window catch-up problem described above.
Polling Interval: Set it to 86400 (once/day) initially, and when it has caught up / downloaded everything it needs, you can drop that time to 3600 (once/hr), 1800 (twice/hr), or as tight as you want it to go. The default makes it run every 30 minutes; however, the TA usually doesn't finish downloading and posting everything to the index within those 30 minutes, and every time the interval expires it restarts itself, re-downloading things it already downloaded but didn't post to the index before the 30-minute mark. You have to let it run its course uninterrupted for a few days. The polling interval you can eventually shrink to depends on how many enclaves you import data from. If you elect to import from so many enclaves that it can't finish a complete pull in 10 minutes, bump this interval up to 30 minutes. If it still lags too far behind real time, bump it up to 60 minutes, and keep bumping it up until it catches up and stays as caught up as possible. It will take some experimentation.
Change Macro Definition
If you customized the destination index, you will need to follow these steps.
- Open the Splunk user-interface on the Splunk Search Head.
- Go to Settings-> Advanced search-> Search macros.
- Select "TruStar App for Splunk" in App Context dropdown.
- Modify the `trustar_get_index` macro definition with index="<new index name>".
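Equivalently, the macro can be edited on disk in the app's local macros.conf. A sketch, assuming the app directory is named Trustar (as in the delete instructions later in this article) and the new index is "trustar":

```ini
# $SPLUNK_HOME/etc/apps/Trustar/local/macros.conf (sketch)
[trustar_get_index]
definition = index="trustar"
```

Editing via the UI as described above achieves the same result and is safer in search-head-cluster environments.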
Usage & App Commands
The TruSTAR app dashboard shows counts of reports and indicators imported and matched, for all time and for the last 4 hours.
Below are details of the panels in this dashboard:
- Matched Data: This panel displays 4 single values of matched data.
- Count of Matched Reports in the last 4 hours, with a trend arrow comparing against the previous 4 hours.
- Count of Matched Indicators in the last 4 hours, with a trend arrow comparing against the previous 4 hours.
- Count of Matched Reports for all time.
- Imported Data: This panel displays 4 single values of imported data.
- Count of Imported Reports in the last 4 hours, with a trend arrow comparing against the previous 4 hours.
- Count of Imported Indicators in the last 4 hours, with a trend arrow comparing against the previous 4 hours.
- Count of Imported Reports for all time.
This screen displays report details such as name, creation time, distribution, last scan, and the count of matched results for a specific report.
- Report Details: The user reaches this dashboard by drilling down on a Report Name from the TruSTAR Reports dashboard. It displays all details of the specific report, such as name, total indicator count, report body, and a table of all related indicators. The user can investigate an indicator in raw events and also perform actions such as marking an IOC as a false positive so that it is not considered an IOC in future matches.
This screen displays basic details of indicators, such as time of download, value, count of correlated reports, status, and count of matched reports. The user can also investigate an IOC in raw events and mark an IOC as a false positive so that it is not considered in future matches.
- Match Configuration: Configure the attributes for matching events:
- Index: Index to consider for matching TruSTAR events.
- Timerange (in days): Timerange for the data to be matched. (e.g. to consider the last 2 days of events for matching, set this property to 2)
- Enclaves: Enclaves to consider for matching against TruSTAR events.
- Submit Enclaves Configuration: Enclaves to which TruSTAR submissions should go when using the AR and workflow actions.
Splunk ES Setup & Configuration
The trustar_get_match_reports correlation search is part of TA_trustar. By default it is disabled; the user has to enable it to generate notable events from matched events.
Adaptive Response Actions
TA_trustar implements a Submit Report adaptive response action. Once the AR action is executed, it submits a report to TruSTAR and indexes the response in Splunk. The AR action response is indexed in the default "main" index only.
TruSTAR Match Report
The TruSTAR App for Splunk allows users to utilize the context of the TruSTAR platform's IOCs and incidents within their Splunk workflow. TruSTAR arms security teams with high-signal intelligence from sources such as internal historical data, open and closed intelligence feeds, and anonymized incident reports from TruSTAR's vetted community of enterprise members.
Below is the topology of data collection from TruSTAR Station to Splunk in distributed and standalone environments.
Stand-alone Splunk Deployment
When deploying this App on a stand-alone Splunk deployment, install both TA-trustar and the TruSTAR App for Splunk on the Splunk instance, then configure the TA to start fetching data from TruSTAR Station.
Distributed Splunk Deployment
When deploying the TruSTAR App for Splunk on a distributed setup, the following changes are needed on each type of node.
Splunk Heavy Forwarder: On the Splunk Heavy Forwarder, install TA-trustar and configure it using TruSTAR credentials.
Splunk Indexer Cluster: On the Splunk indexer cluster, define a specific index if you do not want to use the default index (main), or if you have already defined an index on the Splunk Heavy Forwarder.
Splunk Search-Head Cluster: On the Splunk Search-Head Cluster, install the App and TA of the TruSTAR App for Splunk.
This section describes the overall App architecture.
Access Path: Settings → Indexes
The TruSTAR App for Splunk populates its panels based on the index defined when indexing data into Splunk. By default, data is populated in the "main" index unless this is changed while configuring the data input.
Splunk recommends using Splunk's default index (that is, the "main" index) for simplicity and reusability.
Refer to the URL below to create a custom index.
Reference URL: http://docs.splunk.com/Documentation/Splunk/6.5.0/Indexer/Setupmultipleindexes
Note: If the index name is changed, please follow the steps mentioned under the macro section.
Access Path: Settings → Source Types
Source types are default Splunk fields used to categorize and filter indexed data to narrow down search results. Since the TruSTAR app collects two different types of data from TruSTAR Station, the data is indexed under the source types below.
The two data types are separated as follows:
- Reports: contains all the reports sent from TruSTAR Station to Splunk using REST API calls.
- Indicators: contains all the indicators sent from TruSTAR Station to Splunk using REST API calls.
Access Path: Settings → Advanced Search → Search Macros
All the visualizations in the TruSTAR App for Splunk reference a "trustar_get_index" macro, which tells the App which index the data is being indexed into.
By default it refers to the "main" index; if the user changes the index value, the same change has to be made in the macro.
The TruSTAR App for Splunk has another macro called "trustar_get_index_and_sourcetype", which tells the App which index and sourcetype TruSTAR indicators should be matched against.
By default it refers to index=*; if the user has a specific index and source type to consider for finding matches, the macro should be updated accordingly.
There is a known limitation in Splunk where the App icon does not become visible until Splunk is restarted. Hence, it is recommended to restart Splunk after installing the App to load the App icon.
On Windows machines, when a Splunk modular input fails, the UI shows a generic failure message instead of the actual raised error.
On Splunk v7.1.x, the Whitelisted Input dropdown of the Indicators dashboard does not work for the 'All' option. The workaround is to select either 'Yes' or 'No' and filter specific data.
On Splunk v7.0.3, the matching count for SHA256-type indicators is not considered.
TROUBLESHOOTING / FAQs
Q: What is the function of the TruSTAR App for Splunk?
A: The App is the user-interface component of TruSTAR's Splunk integration. It is composed of an overview dashboard, a few tables that display IOCs and reports from Station, and some 30 saved searches that monitor the indexes you specify for the presence of IOCs that your TA has copied from Station into the Splunk index you've chosen to house your TruSTAR data in.
Q: What is the function of the TruSTAR Technology Add-on for Splunk ("TA")?
A: The TA copies data from enclaves in Station that the user specifies into a Splunk index that the user specifies.
Q: How long does it take to set up the integration?
A: Splunk integration setup can take anywhere from 20-60 minutes, depending on the Splunk environment and whether it is a standalone or distributed environment.
Q: How do you delete/reinstall/upgrade the TruSTAR instance?
A: Users can upgrade the TruSTAR App and TA through the CLI or UI.
Upgrade through CLI:
- Download the tar of the App or TA from Splunkbase
- Stop Splunk server
- ./splunk install app APP_NAME.tgz -update 1 -auth username:password
- Start Splunk Server
Upgrade through UI:
- Click on Manage Apps
- Find the TruSTAR App and TA entries in the list
- Click the link for the newer version under the Version column on the related entry
- Install the TA first followed by the App
- Click on Manage Apps
- Click on Install App from file
- Locate the TruSTAR TA file on your local drive
- Select the option to upgrade the app
- Click Upload
- Repeat for TruSTAR app also
Delete old app and add-on from backend:
- Go to $SPLUNK_HOME/etc/apps/ and remove the TA-trustar and Trustar directories
- Restart Splunk
Q: After completing installation of the application, the dashboards did not start populating data. What do I do next?
A: Confirm that you have modified the `trustar_get_index` macro with the indexes selected while creating the modular input. For example, if all modular input entries have index=default, update the macro definition with index=main and save. If a specific index has been set in the modular input, add it to the macro definition.
- Run the following query to verify data is being indexed into Splunk:
search `trustar_get_index` | stats count by sourcetype
- Verify that SPLUNK_HOME points to the correct Splunk directory.
- Look for errors in the trustar_modinput.log file, available under the $SPLUNK_HOME/var/log/trustar folder.
- If the TruSTAR API Key or Secret Key is modified after setup, update the modular input from the UI:
- Go to Settings-> Data Inputs -> TruSTAR Configuration
- Open the specific TruSTAR Station entry and enter the new authentication key and secret key in both fields.
- On save, the modified keys are updated for that specific TruSTAR Station.
Q: Can I build the App from source code?
A: Any Splunk application can be packaged as a tar file with a .spl extension.
Follow the steps below to build both the TA and App on any Linux distribution.
tar czf <app_name>.tar.gz <app_name>
mv <app_name>.tar.gz <app_name>.spl
(Replace <app_name> with the name of the app, e.g. TA-trustar or Trustar)
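A minimal end-to-end sketch of the packaging steps, using a throwaway directory — the directory name "myapp" and the app.conf contents are placeholders, not real app files:

```shell
# Build a dummy app directory (stand-in for TA-trustar or Trustar).
mkdir -p myapp/default
printf '[install]\nis_configured = 0\n' > myapp/default/app.conf

# Package it: an .spl is just a gzipped tarball with a different extension.
tar czf myapp.tar.gz myapp
mv myapp.tar.gz myapp.spl

# Verify the bundle contents.
tar tzf myapp.spl
```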
Q: Why is my dashboard not being populated?
A: On initial setup of the TruSTAR app, it takes about 24 hours (depending on the amount of data being ingested into TruSTAR) for all data to be downloaded into Splunk and the dashboard to be fully populated. To confirm that data is being ingested, select the Imported Data tab and check whether new reports are being downloaded. Reach out to TruSTAR support if the dashboard isn't fully populated after 48 hours.
Q: What to do when the TruSTAR Splunk App is updated?
A: When a new version of the TruSTAR app is available on Splunkbase, users will see an Update button on their TruSTAR App and Technology Add-on in the Splunk dashboard. Follow the instructions above under "How do you delete/reinstall/upgrade the TruSTAR instance?" to update to the latest version.
Q: What resources do I need to run the TruSTAR App?
A: I am currently running the TruSTAR App and TA on a single Splunk Enterprise instance that functions as indexer and search head in one. It sits on a c5n.2xlarge EC2 instance in AWS with a 64 GB hard drive, of which 18 GB are used. My TruSTAR index is 2.3 GB on disk, and my MongoDB files consume 6 GB as a result of the TruSTAR App's search activity. I have put 200 MB of dummy log data into another index for the TruSTAR App to search against.
Q: What are some of the saved searches that the app runs?
A:
- get_enclaves: Runs on-demand. Gets all unique enclaves from the last 2 days. Used in: _____?__
- Trustar_All_Indicators_Cumulative: Runs at :00 and :30 every hour. Gets all unique indicators from your "trustar" index that were downloaded in the last 24 hours. Appends results to key="indicator" in the "trustar_all_indicators_cumulative_lookup".
- Trustar_Mark_False_Positive: Runs on-demand.
- Trustar_All_Matching_Indicators_Cumulative_For_Type_MD5: Runs every 15 minutes.
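As a rough sketch of how those schedules map to configuration, here is an illustrative savedsearches.conf fragment. The stanza names come from the list above; the cron expressions are assumptions based on the stated schedules, and the actual search strings ship with the app:

```
# savedsearches.conf (illustrative fragment only; search strings omitted)
[Trustar_All_Indicators_Cumulative]
enableSched = 1
cron_schedule = 0,30 * * * *

[Trustar_All_Matching_Indicators_Cumulative_For_Type_MD5]
enableSched = 1
cron_schedule = */15 * * * *
```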
Q: How can I optimize my saved searches?
A: Customers or prospects electing to copy data from several Station enclaves into the Splunk index that houses their Station data can install the TA on multiple heavy forwarders, give each heavy forwarder a different set of API credentials, and have each focus on a subset of the enclaves. Point all of them at the same index so the data ends up in the same place, just faster and closer to real time.
- Create a new search macro belonging to the TruSTAR app. Give it a definition in the spirit of: ((index=md5_index_1 OR index=md5_index_2) AND earliest=-360d), where md5_index_1 and md5_index_2 are the names of indexes of your company's data that might contain the MD5 hashes you want the TruSTAR app to monitor, and -360d is how many days back in those indexes you'd like to scan for the presence of malicious MD5s that TruSTAR knows about.
- Edit the saved search that looks for that type of IOC. The very beginning of the search string references the `trustar_get_index_and_sourcetype` macro; replace that with the name of the macro you just created. For example, the first search below is the original, and the second is the same search with the macro replaced by a custom macro named indexes_containing_md5s:
trustar_get_index_and_sourcetype` sourcetype!="trustar:*" [| inputlookup trustar_matching_indicators_cumulative_lookup where type="MD5" | rename value as search | eval value = replace(search, "\x5C\x5C","\\") | eval search=if(search!=value,search+"|"+value,search) | eval search=split(search,"|") | stats count by search | table search | format] | eval value= [| inputlookup trustar_matching_indicators_cumulative_lookup where type="MD5" | rename value as search | eval value = replace(search, "\x5C\x5C","\\") | eval search=if(search!=value,search+"|"+value,search) | eval search=split(search,"|") | stats count by search | stats values(search) as query count | eval query=mvjoin(query,",") | eval query=if(isnull(query),"''",query) | table query ] | eval value=split(value,",") | stats latest(_time) as _time count by _raw value index | eval valueSearch=lower(value) | eval rawText=lower(_raw) | where like(rawText,"%"+valueSearch+"%") | eval replaceValue = replace(value, "\x5C","\\\\\\") | lookup trustar_matching_indicators_cumulative_lookup value OUTPUT type as Match, Enclaves as Enclaves, _time as _time | eval value=if(isnull(Match),replaceValue,value) | dedup value,rawText,_time,index,Enclaves | stats count(rawText) as result, values(_time) as _time by value index Enclaves | inputlookup trustar_indicators_match_result_lookup append=true | stats latest(_time) as _time, latest(result) as result, values(Enclaves) as Enclaves by value index | eval Enclaves = mvjoin(Enclaves,",") | eval indicator=value+"_"+index | table _time,value,result,indicator,index,Enclaves | outputlookup key_field=indicator trustar_indicators_match_result_lookup
indexes_containing_md5s` sourcetype!="trustar:*" [| inputlookup trustar_matching_indicators_cumulative_lookup where type="MD5" | rename value as search | eval value = replace(search, "\x5C\x5C","\\") | eval search=if(search!=value,search+"|"+value,search) | eval search=split(search,"|") | stats count by search | table search | format] | eval value= [| inputlookup trustar_matching_indicators_cumulative_lookup where type="MD5" | rename value as search | eval value = replace(search, "\x5C\x5C","\\") | eval search=if(search!=value,search+"|"+value,search) | eval search=split(search,"|") | stats count by search | stats values(search) as query count | eval query=mvjoin(query,",") | eval query=if(isnull(query),"''",query) | table query ] | eval value=split(value,",") | stats latest(_time) as _time count by _raw value index | eval valueSearch=lower(value) | eval rawText=lower(_raw) | where like(rawText,"%"+valueSearch+"%") | eval replaceValue = replace(value, "\x5C","\\\\\\") | lookup trustar_matching_indicators_cumulative_lookup value OUTPUT type as Match, Enclaves as Enclaves, _time as _time | eval value=if(isnull(Match),replaceValue,value) | dedup value,rawText,_time,index,Enclaves | stats count(rawText) as result, values(_time) as _time by value index Enclaves | inputlookup trustar_indicators_match_result_lookup append=true | stats latest(_time) as _time, latest(result) as result, values(Enclaves) as Enclaves by value index | eval Enclaves = mvjoin(Enclaves,",") | eval indicator=value+"_"+index | table _time,value,result,indicator,index,Enclaves | outputlookup key_field=indicator trustar_indicators_match_result_lookup
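The custom macro used above can be created in the UI (Settings > Advanced Search > Search macros) or sketched as a local macros.conf stanza. The macro name indexes_containing_md5s and the index names are hypothetical examples; substitute your own:

```
# $SPLUNK_HOME/etc/apps/Trustar/local/macros.conf (illustrative)
[indexes_containing_md5s]
definition = ((index=md5_index_1 OR index=md5_index_2) AND earliest=-360d)
iseval = 0
```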
Q: How can I make Splunk index my TruSTAR App and TA logs to an index of my choice?
A: This is handy because it allows you to use the Splunk UI to look at the logs generated by your TruSTAR integration components.
- Create a file/directory monitor input and have it monitor this directory: $SPLUNK_HOME/var/log/trustar/ (replace $SPLUNK_HOME with your Splunk directory). That directory contains two log files: "trustar_match.log" and "trustar_modinput.log".
- "trustar_modinput.log": the log file for the TA.
- "trustar_match.log": the log file for the App.
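The monitor input described above can also be sketched as an inputs.conf stanza. The index and sourcetype names here are assumptions; pick whatever fits your environment (and create the index first if it doesn't exist):

```
# local/inputs.conf (illustrative)
[monitor://$SPLUNK_HOME/var/log/trustar]
index = trustar_logs
sourcetype = trustar:log
disabled = 0
```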
Q: Where can I find my checkpoint file?
A: A checkpoint file for each rest input can be found in this directory: $SPLUNK_HOME/var/lib/splunk/modinputs/trustar/. The filename for a given rest input's checkpoint is the name of that rest input.
Q: Can I edit my configuration files?
Q: Where can I find my rest input configs?
A: Usually the TA saves its config stanzas to these locations:
TruSTAR: Company name.
Station: our primary product, a threat intelligence management SaaS platform.
Station Member Company: A customer (or prospect) company that has an account on the Station platform. Sharing groups are Station Member Companies.
Station Member Company Account: an account on the Station platform for the Station Member Company.
Station User: An individual human that uses the Station platform; all users are members of a Station Member Company.
Station User Account: an account on the Station platform used by Station Users. Station User Accounts are all subordinate to a Station Member Company account. TruSTAR personnel can restrict the number of Station User Accounts that a Station Member Company Account can create.
Enclaves: Data repositories in the Station platform. The Station platform can restrict what level of access (none, view, submit-to, full) a member company can have to a given enclave, and it allows member companies to control the level of access each of their individual users has to any of the enclaves it has access to. No individual user can have a level of access to an enclave greater than the user's member company's level of access to that enclave.
Open-sources: cyber threat intelligence websites / blogs / feeds that are open to the world, to which all users have free access.
Closed-sources: cyber threat intelligence websites that require users to pay for access.
TruSTAR keeps the following primary attributes on a Report object:
-"submitted": the timestamp at which the report was first submitted to Station. Users can't modify this; it is set automatically by Station.
-"updated": the timestamp at which the report was most recently updated in Station. Users can't modify this; it is set automatically by Station.
-"timeBegan": the only report timestamp that the user can modify; this is the timestamp you want to see on the report. For reports in open-source/closed-source enclaves, Station sets this field to the report's own timestamp (which might differ from the timestamp at which Station imported the report).
-"title": the report's title.
-"reportBody": the body of the report. This is the text that Station extracts IOCs from.
-"id": the GUID that identifies this report within the Station platform. This cannot be specified/modified by the user.
-"externalId": a GUID field whose value the user can specify/modify to make it easy for the user to refer to this report in the future. (also sometimes goes by the name "externalTrackingId")
-"enclaveIds": a list of GUIDs for the enclaves in which this report resides.
-"tags": a list of tags that users in your Station Member Company have applied to this report.
-"notes": a list of notes that users in your Station Member Company have added to this report.
If you look through the documentation on our Public API, you might see mention of a few other report object attributes ( "distributionType", "sector", "submissionStatus" ), but they are not relevant to the TruSTAR-Splunk integration.
See here for the current Report Object attributes: https://docs.trustar.co/api/v13/reports/index.html
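To make the attribute list concrete, here is an illustrative sketch of a report represented as a Python dict. All values are made up, and the epoch-millisecond timestamp format is an assumption about the wire format; consult the API docs linked above for authoritative details:

```python
# Illustrative sketch (not actual API output) of a TruSTAR Report,
# using the attributes described above. All values are made up.
report = {
    "id": "11111111-2222-3333-4444-555555555555",   # set by Station, read-only
    "externalId": "ticket-4242",                    # user-settable tracking ID
    "title": "Suspicious login activity",
    "reportBody": "Observed traffic to 203.0.113.7",  # IOCs are extracted from this text
    "submitted": 1609459200000,                     # set by Station, read-only
    "updated": 1609545600000,                       # set by Station, read-only
    "timeBegan": 1609372800000,                     # the only user-editable timestamp
    "enclaveIds": ["aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"],
    "tags": ["phishing"],
    "notes": [],
}

# The read-only timestamps should never precede the report's first submission.
assert report["updated"] >= report["submitted"]
```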
TruSTAR keeps the following primary attributes on an Indicator object:
-"value": the indicator itself (the IP address, domain, URL, email address, etc.)
-"source": only IOCs submitted individually through the "Submit Indicators" endpoint will contain this attribute.
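An equally illustrative sketch of an indicator, with made-up values:

```python
# Illustrative sketch (made-up values) of a TruSTAR Indicator,
# using the attributes described above.
indicator = {
    "value": "198.51.100.10",   # the IOC itself (IP, domain, URL, email, etc.)
    # "source" appears only on IOCs submitted individually through the
    # "Submit Indicators" endpoint:
    "source": "api_submission",
}
```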
API limits: The TruSTAR Station platform limits all user accounts to 20 API calls / min, and every company account has a maximum number of aggregate API calls that its users can make each day. We want the TruSTAR
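A script calling the Station API directly can respect the 20 calls/min account limit by spacing requests at least 3 seconds apart. This is a hypothetical client-side helper, not part of the TA:

```python
import time

# Space out API calls so no more than 20 are made per minute,
# i.e., at least 3 seconds apart. Hypothetical helper for illustration.
MIN_INTERVAL = 60.0 / 20  # seconds between calls

_last_call = 0.0

def throttled(fn, *args, **kwargs):
    """Call fn, sleeping first if needed to honor MIN_INTERVAL."""
    global _last_call
    wait = MIN_INTERVAL - (time.monotonic() - _last_call)
    if wait > 0:
        time.sleep(wait)
    _last_call = time.monotonic()
    return fn(*args, **kwargs)
```

Wrap each API request in throttled(...) so bursts are automatically paced; daily aggregate limits still apply per company account.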
For any other questions, reach out to firstname.lastname@example.org.